If you want access to the full recordings, join the AIxHR Community on Slack, where you can access a megaset of AIxHR resources in addition to all the event recordings.
The Shape of Work: AI x HR Playbook 2024 is a dynamic mini-conference series designed to educate and unite HR leaders in exploring how AI is changing the world of HR and the future of work. This event offers a rich blend of engaging panel discussions, expert-led educational sessions, and interactive workshops, making it an essential gathering for HR professionals eager to stay ahead of the curve.
Let’s recap the second iteration of this mini-conference series.
Session 1 | Forget about Use Cases: Focus on Establishing HR’s AI-Readiness with Kevin Martin
In the opening session of The Shape of Work: AI x HR Playbook 2024 mini-conference, Kevin Martin, Chief Research Officer at the Institute for Corporate Productivity (i4cp), shared valuable insights into the critical role of AI readiness in HR and how it shapes the future of work. Drawing from research conducted in 2023 and early 2024, Martin made a compelling case for why HR professionals need to prepare their organizations for the rise of AI.
Watch the full session here:
Kevin Martin, Chief Research Officer, i4cp
Kevin Martin is the Chief Research Officer at the Institute for Corporate Productivity (i4cp), a leading global human capital research firm. He advises HR leaders and corporate boards on topics such as the future of work, people strategy, and leadership effectiveness, and has been recognized multiple times as a top HR and HR Tech influencer. His research has been featured in publications such as The Economist, Forbes, and Harvard Business Review.
The AI Readiness Gap: A Warning for HR Leaders
Martin began by emphasizing the importance of AI readiness within HR. He cited data from recent studies, revealing a wide gap between CEOs’ expectations of AI’s impact and the preparedness of their HR departments. “70% of CEOs believe generative AI will significantly change how their organizations create, derive, and deliver value within the next three years,” Martin said, referencing PwC’s 2024 Global CEO Survey. However, when board members were asked how confident they were in their management teams’ vision for AI integration, only 16% said they were confident.
This stark contrast underscored a growing “AI readiness gap” in leadership and workforce preparation.
Martin warned HR leaders that this gap could lead to “fear, uncertainty, and doubt in the workforce.” He stressed that “two-thirds of HR leaders believe their people are using AI tools regardless of their company’s policies.” Without structured guidance and governance, this can lead to significant risk.
Why HR Needs to Prioritize AI Readiness
According to Martin, it’s no longer a question of whether AI will reshape HR, but when. He urged HR professionals to stop getting caught up in individual AI use cases and instead think systematically about how AI can improve talent strategies and workforce readiness.
The importance of structured AI implementation is especially crucial in a workplace increasingly reliant on AI for everyday functions. “AI readiness is critical not only to HR but to the organization’s overall preparedness for the future of work,” Martin explained. “What we’re seeing is a direct correlation between HR’s AI readiness and the broader organization’s approach to AI.”
The Maturity Model: Lagging vs. Leading in AI
Martin shared i4cp’s AI maturity model, which categorizes organizations into three groups: AI Laggards, AI Enquirers, and AI Innovators.
- AI Laggards: These organizations have leaders who admit to having no clear strategy for AI adoption. Employees in these companies rarely hear about AI from senior leaders, leading to a lack of preparedness and increased risk.
- AI Enquirers: These organizations are cautiously exploring AI’s potential but have yet to fully embrace or experiment with the technology. Leaders are still in a “wait and see” mode, hesitant to commit fully to AI integration.
- AI Innovators: About 13% of surveyed organizations fall into this category. These companies have taken a proactive approach by training employees, making it safe to experiment with AI, and encouraging cross-departmental use of AI. “AI innovators are making it safe for employees to experiment with AI. They’ve created a secure environment, and their workforce readiness is skyrocketing as a result,” Martin pointed out.
9 Talent Practices of AI Innovators
Martin’s research identified nine specific talent practices that separate AI innovators from laggards. According to the study, “seven of these nine practices are already in play at AI innovators, whereas laggards have implemented none.”
These practices include:
- Implementing an AI strategic plan or framework: This involves creating a governance framework that specifies how AI can be used, by whom, and with what data. As Martin noted, “Governance is key to AI success. Without it, you’re in the wild west, and that’s where the danger lies.”
- Developing workers to leverage generative AI: AI innovators provide training to all employees, from entry-level staff to the board of directors. Martin gave an example of a financial institution that rolled out AI training to everyone in the company, including executives. “Once employees completed the training, they were given a badge that allowed them to experiment with their internal large language model,” he shared.
- Deconstructing job roles impacted by AI: One of the standout examples Martin shared was IKEA’s approach to AI-driven changes. Rather than eliminate customer service roles as chatbots took over, IKEA deconstructed those roles and retrained 8,000 employees to become design consultants. “IKEA didn’t just let go of these workers; they upskilled them into roles that added more strategic value to the company,” Martin explained.
Creating a Culture of Experimentation
A key message from Martin’s talk was the importance of fostering a culture that embraces AI experimentation. “HR’s AI readiness is often only possible when the broader business is also ready,” he said. He urged HR leaders to push for a culture where AI experimentation is encouraged and supported by leadership. “If your leaders aren’t communicating their AI strategy, start learning and experimenting with it yourself. Collaborate with other departments, like product and marketing, who are probably already using AI.”
He emphasized the importance of safe environments for experimentation: “Create spaces where employees can safely experiment with AI without fear of making mistakes. The most successful HR teams are those that build and share their AI knowledge as a team.”
Strategic Questions for HR Leaders
To wrap up, Martin provided a framework of strategic questions that HR leaders should ask to ensure they are ready for AI integration:
- How can generative AI give us a market advantage?
- How can it make us vulnerable to competitors?
- What skills do we need to develop in our workforce to make them AI-ready?
- What policies need to be in place to ensure safe and ethical AI use?
These questions, according to Martin, are critical for not only understanding the potential of AI but also preparing the organization for the changes it will bring. “Don’t focus on individual AI use cases,” he reiterated. “Instead, ask what business problems or employee experience barriers need solving, and then see how AI can help.”
Final Takeaways
Kevin Martin’s session provided a wealth of actionable insights for HR leaders. From focusing on AI readiness to creating safe environments for experimentation, the message was clear: HR must lead the way in AI adoption. “HR’s ability to demonstrate AI readiness will determine how integrated they are in their organization’s broader AI strategy,” Martin concluded.
For HR professionals looking to future-proof their departments, Martin’s advice is invaluable:
“Focus on building AI readiness today, so you’re not left behind tomorrow.”
Session 2 | Panel Discussion | AI Red-Teaming: The Way Towards an AI-Led Future of Work with Noelle Russell, Kait Rohlfing, PhD, and Diane Sadowski-Joseph
In this insightful session, three industry experts discussed the critical role of AI Red Teaming and its future implications in the workplace. The panel focused on the importance of AI Red Teaming for organizations, how to implement it, and how HR professionals can lead the charge.
Watch the full session here:
Kait Rohlfing, PhD, Sr. Leadership Trainer, LifeLabs Learning
Kait holds a PhD in Industrial-Organizational Psychology and is a leadership trainer and facilitator at LifeLabs Learning, working with leaders and teams at global companies. She is the creator and moderator of the AI+HR series, which LifeLabs hosts for a global community of HR professionals. With a background in psychology, leadership development, and research on resilience, she brings a unique perspective to the intersection of AI and HR, along with expertise in leveraging AI for career development, efficiency improvement, and other operational aspects of organizations.
Noelle Russell, Chief AI Officer, AI Leadership Institute
Noelle is a multi-award-winning technologist and founder of the AI Leadership Institute. An entrepreneur at heart, she specializes in helping companies with data, cloud, conversational AI, generative AI, and LLMs. She has led teams at NPR, Microsoft, IBM, AWS, and Amazon Alexa, is a consistent champion of data and AI literacy, and is the founder of the “I ❤️ AI” community, which teaches responsible AI for everyone.
Diane Sadowski-Joseph, Co-Founder and Head of Product, ApplyAI
Diane is Co-Founder and Head of Product at ApplyAI. With a global career spanning Prague to San Francisco, she has empowered over 100,000 professionals across 1000+ companies, driving organizational performance through leadership development and more recently, AI integration. As a start-up executive, she scaled a company from 8 people to a nine-figure acquisition with record-level employee engagement and NPS. After studying AI strategy at UC Berkeley, Diane is now co-founding a venture revolutionizing how organizations expand their AI capabilities in a human-first way.
What is AI Red Teaming?
Noelle kicked off the panel by explaining the concept of AI Red Teaming, describing it as a process rooted in cybersecurity but applied to AI to test and uncover potential vulnerabilities, biases, and errors in AI systems. She emphasized that AI Red Teaming isn’t just a technical process but one that benefits from diverse and inclusive perspectives to prevent harmful or biased outputs. “It’s about bringing in lived experiences and using them to extract bias from AI systems,” Noelle explained.
Noelle highlighted that red teaming is more than a role; it’s an activity: “It’s a thing you do, not something you are.” The goal is to evaluate AI models not just for accuracy but for fairness, bias, and ethical outcomes, particularly in generative AI systems, where errors can stem from the ambiguities of natural language.
HR professionals, Noelle noted, are perfectly positioned to play a key role in this process:
“HR is the best-equipped function to ask critical questions about who could be harmed and how the AI might negatively impact people.”
Overcoming Resistance and Organizational Challenges
Kait emphasized the cultural challenges organizations face when adopting AI Red Teaming, particularly when it requires organizational change. “People will resist adopting AI, even when they ask for it, because they fear losing control or their jobs,” she explained, referencing examples where teams ask for faster, AI-driven processes but resist when those processes are actually implemented.
From an HR leader’s perspective, Kait pointed out that organizations must foster an innovation-friendly culture that encourages critical feedback and transparency. “In a risk-averse, hierarchical environment, red teaming may fail if employees are not encouraged to voice concerns about potential ethical or operational issues.” She underscored that psychological safety is crucial in any red-teaming effort, because it makes employees comfortable flagging the potential negative impacts of AI.
Diane further supported this point by highlighting how psychological safety plays a key role in successful red teaming. “For a red team to succeed, employees need to feel safe to speak truth to power—without it, even the best AI initiatives can fail.”
The Red Teaming Process: Key Steps and HR’s Role
1. Assembling a Diverse Team: Noelle explained that the AI Red Teaming process begins with assembling a team from various departments, functions, and experience levels, and that HR professionals play a crucial role in ensuring diversity within this team. “AI Red Teaming involves assessing not only the technical vulnerabilities but also the human impact of AI, like bias and fairness,” she noted.
2. Integrating Red Teaming Early in Development: Noelle also stressed the importance of integrating red teaming early in the AI development process, rather than treating it as a final check at the end. “The best organizations incorporate AI Red Teaming right from the start of development,” she advised. This proactive approach helps organizations catch potential biases and risks before the AI model is widely deployed.
3. Using a Stakeholder Map: Diane recommended creating a stakeholder map when forming the red team. “Identify everyone who will use or be impacted by the AI tool—customers, internal teams, and end-users—and ensure they’re represented on the red team,” she explained. This ensures the inclusion of diverse perspectives, which maximizes the effectiveness of the red teaming process. Diane also highlighted the need for cross-functional collaboration to enhance the team’s ability to uncover hidden risks.
In summary, the red teaming process benefits greatly from diversity, early integration, and thoughtful planning through stakeholder mapping, with HR professionals serving as essential facilitators for ensuring fairness and reducing bias in AI systems.
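To make these steps less abstract, here is a minimal, purely illustrative sketch in Python of one common red-teaming activity: running paired inputs that differ only in a sensitive attribute through an AI tool and flagging divergent outcomes for human review. Everything in it is hypothetical; screen_resume is a deliberately biased stand-in for whichever AI system your organization is testing, not a real product or API.

```python
# Illustrative red-team probe: run paired inputs that differ only in one
# sensitive attribute (here, the university name) through the system under
# test, then flag divergent outcomes for human review.
from typing import Callable

def screen_resume(resume_text: str) -> str:
    """Hypothetical stand-in for the AI system under test."""
    # A deliberately biased toy rule, so the probe has something to catch.
    return "reject" if "State University" in resume_text else "advance"

def paired_bias_probe(model: Callable[[str], str],
                      template: str, variants: list[str]) -> dict[str, str]:
    """Run the same profile through the model, varying only one attribute."""
    return {v: model(template.format(school=v)) for v in variants}

results = paired_bias_probe(
    screen_resume,
    "Jane Doe. BSc Computer Science, {school}. Five years of experience.",
    ["Ivy College", "State University"],
)

# If only the school changed but the outcome changed too, escalate for review.
if len(set(results.values())) > 1:
    print("Potential bias detected:", results)
```

In practice, a red team would run many such pairs across attributes and scenarios; the point is that the activity is concrete, repeatable, and well within reach of a cross-functional team.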
Key Challenges in Implementing Red Teaming
The panelists addressed the challenges organizations might face when implementing AI Red Teaming. Resistance to change and lack of AI literacy were two major barriers.
- Kait highlighted the stress and burnout that often accompany rapid AI adoption, emphasizing the importance of checking in with teams and ensuring they have the tools and support to handle the pressure. “AI implementation takes time, energy, and focus. Leaders must be equipped to lead through this change, and HR can help facilitate that.”
- Diane pointed out another challenge: ensuring that leadership dedicates time and resources to monitoring AI systems post-launch. Without sufficient oversight, AI models can quickly go off track. She emphasized that continuous monitoring and feedback loops are essential, and HR leaders should ensure this is factored into AI projects.
Real-World Examples of AI Red Teaming
Uber: To make the concept of AI Red Teaming more tangible, Diane shared an example from Uber, where an AI-powered assistant was implemented to handle HR queries. As part of the red-teaming process, the team identified sensitive areas like employee relations that should always involve human intervention. The AI was trained to refer these issues to a person rather than handling them itself, avoiding potential legal or ethical pitfalls. “By doing this, they likely saved millions in potential lawsuits and employee dissatisfaction,” Diane explained.
Microsoft: Noelle shared an example from Microsoft, where red teaming helped ensure that models were built responsibly and safely. She pointed out that collaboration with academia and external researchers was key to creating robust AI systems. “Bringing in experts with different perspectives—from research to the business side—ensured the AI models were built with accuracy and fairness in mind,” Noelle noted.
Why It’s Important for HR Leaders to Understand AI Red Teaming
Kait pointed out that HR professionals can use AI Red Teaming not just to improve operational efficiency but also to advance their own career development. She shared an example from a retail organization that used AI to screen resumes: red teaming revealed that the tool favored candidates from certain universities. By incorporating human oversight, the organization created a more diverse and inclusive hiring process.
She encouraged HR leaders to get involved in AI projects to gain AI literacy, which is becoming increasingly valuable. “The more exposure you have to AI, the more employable you become,” Kait said, noting that HR professionals can bring a unique perspective to AI projects, especially when it comes to understanding human behavior and organizational impact.
Tips for HR Leaders Implementing AI Red Teaming
As the panel concluded, the experts shared specific advice for HR leaders looking to implement AI Red Teaming in their organizations:
- Diane: “Start referring to your normal checks and balances as red teaming—it builds confidence in AI and helps foster a sense of ownership among HR teams.”
- Kait: “Equip your leaders with the skills they need to lead through AI-driven change. HR can play a crucial role in helping leaders embrace AI by fostering transparency and open communication.”
- Noelle: “Get hands-on with AI—experiment with it yourself. Join AI-focused communities and bring these learnings back to your teams. The best way to grow your AI literacy is by actually using it.”
Final Thoughts for HR Leaders
AI Red Teaming offers HR leaders a unique opportunity to shape the future of work by ensuring that AI systems are built with fairness, accuracy, and ethical considerations in mind. By participating in or even leading red teams, HR professionals can drive organizational efficiency, promote a culture of psychological safety, and enhance their own career development in an increasingly AI-driven world.
Incorporating diverse perspectives, encouraging cross-functional collaboration, and embedding AI Red Teaming throughout the AI development process will ensure that AI technologies support both organizational goals and employee well-being.
Session 3 | Integrating AI to Enhance Employee Wellbeing and Mental Health with Stacie Baird
Stacie delivered a powerful session focused on how HR leaders can integrate self-compassion, technology, and AI to improve employee wellbeing. Drawing from her professional expertise and personal experiences, Stacie shared actionable tools that HR professionals can use to foster a culture of mental health awareness and lead by example. This session highlighted the critical role HR plays in shaping a healthier, more compassionate workplace.
Watch the full session here:
Stacie Baird, Chief People Officer, Community Medical Services
Driven by a passion for elevating the Human Experience (HX), Stacie has led innovative HR teams across start-ups, high-growth environments, and large public companies. She hosts a weekly podcast, The HX Podcast, and currently serves as Chief People Officer for a large mental health organization committed to solving the opioid crisis. Around 50% of its staff are in recovery themselves, making her work on curating a strong employee experience more critical than ever.
The Growing Importance of Employee Wellbeing
Stacie began by emphasizing the rising importance of mental health in the workplace. With healthcare premiums expected to increase and mental health-related costs outpacing general healthcare expenses by 52%, she underscored that “mental health continues to trend as an overall healthcare expense,” making it essential for organizations to address the issue proactively.
She further stressed the need for reducing the stigma around mental health, particularly in high-pressure environments. Using her own organization—focused on opioid and substance use disorder treatment—as an example, she highlighted how critical it is for HR leaders to be intentional about providing mental health resources.
Leading by Example: The Inside-Out Approach
A key theme of Stacie’s presentation was that employee wellbeing starts from within the HR team itself. She challenged HR leaders to “lead by example” and prioritize their own mental health, thereby setting a precedent for the entire organization. According to her, the journey to better employee wellbeing begins with HR leaders asking themselves: “What do I need right now?”
Stacie used her personal story as an example of the importance of self-compassion in the workplace. In 2023, her daughter was diagnosed with leukemia, and Stacie found herself juggling the immense responsibilities of being a mother, HR executive, and caregiver. This life-changing experience taught her the value of setting boundaries and practicing self-awareness.
“I thought I understood employee wellbeing and mental health,” she shared, “but this journey taught me that I didn’t know as much as I thought I did.” From making small decisions—such as parking in a less stressful area at the hospital—to practicing daily self-compassion, Stacie emphasized the importance of prioritizing mental health in the midst of adversity.
Building a Toolkit for Self-Compassion
To make employee wellbeing more accessible, Stacie provided HR leaders with a practical toolkit for integrating self-compassion into their daily routines. She introduced a simple three-step process, adapted from the work of Kristin Neff and Chris Germer, designed to help manage burnout, stress, and overwhelm:
- Pause – Recognize when you are feeling anxious or stressed, and take a moment to pause.
- Normalize – Understand that it’s normal to feel overwhelmed in challenging situations.
- Ask – “What do I need right now?” Take a moment to consider what would help you feel better, or what advice you would give to a friend in the same situation.
Stacie encouraged leaders to practice these steps regularly and create space for their teams to do the same. “If I didn’t give myself permission to pause and reflect, no one else on my team would feel they had permission to do so either,” she remarked.
Vicarious Trauma: A Hidden Risk in HR
One of the session’s most eye-opening moments was Stacie’s discussion of vicarious trauma, which often affects HR professionals. She described vicarious trauma as the emotional toll HR leaders experience from dealing with the personal challenges of employees, whether it’s handling a termination, assisting with bereavement leave, or supporting someone through illness.
“Vicarious trauma is the human experience of HR,” Stacie explained. She urged HR leaders to recognize when they are carrying the emotional weight of their roles and to practice self-regulation and self-care to avoid burnout.
Integrating AI for Employee Wellbeing
Stacie also explored how AI can be used to enhance mental health initiatives and wellbeing programs in organizations. She emphasized that AI doesn’t have to be an expensive or complicated tool to implement. Simple AI prompts and tools can help HR leaders regulate their stress levels and better manage their mental health.
She suggested the following prompts to integrate AI into wellbeing practices:
- “What is one step I can take to feel less anxious right now?” – Use AI tools to quickly find techniques for reducing anxiety in the moment.
- “What can I do to up-regulate myself before a big meeting?” – AI can suggest breathing exercises, meditation practices, or even motivational music to help leaders prepare for stressful situations.
Stacie encouraged HR leaders to set reminders for self-regulation techniques and use AI to support regular wellbeing practices. By incorporating AI into daily routines, HR teams can better manage stress and create a culture of self-care.
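To illustrate how lightweight this can be, here is a minimal sketch of wiring one of the suggested prompts to a large language model. The OpenAI Python client and the model name are assumptions for illustration; any comparable AI assistant or tool would serve the same purpose.

```python
# Minimal sketch: send one of the session's self-regulation prompts to an
# LLM and print its suggestion. Assumes `pip install openai` and an
# OPENAI_API_KEY environment variable; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def wellbeing_checkin(prompt: str) -> str:
    """Return the model's response to a short wellbeing prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; substitute any model you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(wellbeing_checkin("What is one step I can take to feel less anxious right now?"))
```

Paired with a simple calendar or phone reminder, a check-in like this can become exactly the kind of recurring self-regulation practice Stacie described.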
Conclusion: The Human Experience in HR
Stacie’s session was a heartfelt reminder that employee wellbeing is deeply tied to how HR professionals care for themselves. By leading with self-compassion and integrating AI to support mental health, HR leaders can build a workplace where wellbeing is prioritized for everyone.
As she summed up,
“This work starts with you. If you don’t take care of yourself, you can’t effectively lead your teams or support their wellbeing.”