The World Economic Forum recently interviewed 130 Chief People Officers and found that advancing workforce AI is a top priority. The focus is not solely on implementing technology, but on upskilling, reskilling, and integrating AI in a way that supports both people and performance. Doing so requires a human-centered approach to AI deployment.
But what does human-centered really mean?
First, not all workers are thinking about AI in the same way. Millennials are out front, integrating advanced AI tools into their everyday work more than any other group. They see AI as a natural extension of how they operate and are normalizing its role in the workplace. Gen Z, despite being the first generation raised entirely in a digital world, is approaching AI more cautiously. They’re quick to experiment with new technology, but they’re also the most likely to voice concerns: 70% report ethical reservations about AI, a higher share than any other generation. Meanwhile, Gen X and Boomers are moving more slowly. Both groups want hard evidence that a tool works before they’ll embrace it, and Boomers in particular remain wary of technology’s impact on human connection. For them, AI and other digital tools can feel like a threat to the relationship-driven ways they built their careers. These generational differences underscore the need for thoughtful, targeted strategies (Lattice, 2026 State of People Strategy Report).
A recent article in The Atlantic also pointed out a key tension: AI struggles to collaborate and automate simultaneously. Yet organizations often demand both. As we think about integration, we need to start by getting clear on a few fundamentals:
• What are we designing or leveraging AI to do?
• How, where, when, and why will it add value?
• What are the benefits/risks of removing humans from the loop? What are the benefits/risks of keeping humans in the loop?
• And how do we prepare our people to engage with AI effectively?
The same thinking applies to training. Most employees don’t have 10 extra hours to watch explainer videos, and even if they did, information alone doesn’t drive behavior change. Bite-sized, experiential learning tied to real work is often more effective. The goal is to fit the training to the people, not force people to fit the training. We know that training can’t be a one-and-done event.
In my work at Deutser, I’ve seen the power of operationalizing AI integration in a much more systematic way. For example, we’ve designed a multi-pronged approach in our work with professional sports teams. We start with cross-functional teams and frame targeted sessions with simple questions like: What can AI do for us? From there, we use surveys, focus groups, 1:1 interviews, and ongoing working sessions and workshops (ranging from 90 minutes to several hours) to uncover friction points and opportunities for AI to drive meaningful improvements. We engage these teams to build the strategy for the organization from the bottom up, with leader buy-in and support from the top down. It’s not just about managing fear. We design these sessions to build excitement and expand agency across the enterprise. We engage people through play, experimentation, ideation, and by simply talking through fears. People want to feel part of the process, and the people closest to the work should be involved in figuring out how AI can serve as a partner, collaborator, teacher, mentor, intern, etc. in their day-to-day. They can help define the role and the interaction. Without their engagement, the likelihood of long-term success drops significantly.
Human-centered design means acknowledging a non-dual reality: change is hard, and that’s okay. In his book Master of Change, Brad Stulberg makes a compelling comparison between homeostasis and allostasis. While homeostasis is about returning to the familiar (X → Y → X), allostasis embraces change (X → Y → Z). We’re not going back to “how things were.” We’re creating something new, and that can feel both disruptive and empowering. In that sense, allostasis is about building capacity: empowering our people to respond with greater adaptability, to find re-order, and to move through change with resilience and stability, even knowing a different future awaits.
To be truly human-centered, we need to listen deeply and understand people’s concerns, goals, and motivations. Then we can design and test solutions like scientists and entrepreneurs, iterating frequently with a systems-thinking mindset.
There is much more to consider when it comes to human-centered deployment, but I hope this offers you a good starting place. What’s on your mind as you navigate AI integration? How are you thinking about people, performance, tools, and designing systems for the future?