When considering AI’s impact on HR, recruitment and talent acquisition stand out as the areas with the highest adoption rates. In this age of AI-driven talent acquisition, the promise of efficiency and precision often comes with unanswered questions about fairness, compliance, and transparency.
With AI laws and regulations slowly coming into effect, and lawsuits becoming more prominent, how can HR leaders ensure the tools they adopt align with their organizational values and meet legal standards?
In this thought-provoking conversation with Dr. Cari Miller, we dive into the critical process of auditing AI tools. From identifying red flags to navigating compliance challenges and adopting best practices, this discussion offers actionable insights to help HR professionals make informed decisions and safeguard their hiring processes.
Navigating the AI governance landscape can be challenging and complex. This guide can help you get started by first showing how to evaluate AI hiring tools effectively.
Q: Without complicating things too much – what do HR professionals need to know to understand the concept of AI Governance?
Dr. Cari: Well, I can tell you about this. There’s a series of books called the Oxford Handbooks, published by Oxford University Press. They’ve defined organizational governance, and when we talk about AI governance, it tends to follow a similar framework.
In a nutshell, governance is built on three pillars: people, policy, and process. And then there’s the overarching culture that shapes how these pillars function.
- People: Having the right individuals in the right roles.
- Policies: Ensuring there are clear guidelines in place, rather than operating without them.
- Processes: Operationalizing those policies—where the rubber meets the road.
For example:
You might have a policy that states, “We want to ensure all AI tools we use are fair and equitable, without discrimination. If an issue arises, there will be a process for reporting, investigating, and addressing it.”
The process, in this case, would be how you make that policy actionable. In procurement, for instance, you’d include steps like:
- Asking the right questions.
- Ensuring specific gates are checked off before moving forward.
It’s about aligning people, policies, and processes to ensure that governance isn’t just theoretical but effectively implemented.
Q: Moving on to AI vendors – why is it important for HR leaders to understand the legal aspects of verifying AI claims for talent acquisition tools?
Dr. Cari: We just had an election, and a lot of people are asking, “What does this mean for AI governance and responsible AI?”
That is a big question on everyone’s mind right now. As best I can tell, here’s what’s going to continue to happen:
- States will continue to make rules, pass laws, and set regulations that impact all these things.
- Existing laws—such as the right to privacy and civil rights—are unlikely to change.
For example, you cannot be discriminated against. These protections have existed since the 1960s. You can’t build a system that systematically declines jobs for people in protected classes—that’s not going to be okay.
With these two points in mind:
- The legal landscape will evolve at the state level.
- Your civil rights must continue to be protected.
It’s critical to stay vigilant and ensure these systems perform in fair and equitable ways. No matter who won the election, this is where we’re headed.
Q: What are some practical steps HR leaders can take to establish clear standards and expectations with vendors right from the start?
Dr. Cari: There are a couple of things happening here, and it’s just human nature—we see this all the time.
You’re almost always approached by a vendor with a shiny new toy. It looks really cool, and they say, “We’ll let you try it for a year, six months, or even 30 days,” or whatever the dangling carrot might be. And you think, “Okay, it’s free—what’s the harm?”
Next thing you know, you’ve adopted something without:
- Truly knowing if you needed it.
- Fully investigating it.
That’s red flag number one. We always want to make sure there’s a legitimate business need before moving forward.
Before implementing a new AI tool, it’s important to:
- Pause.
- Gather the team.
- Make sure there’s an actual issue that needs to be addressed.
Remember: AI isn’t going to magically fix a messy process inside your organization—that’s not how technology works. So, proceed with caution. You are in control of what you sign up for. If you manage this step wisely, you’ve already won half the battle.
You also need to ask the right questions and establish responsible expectations and guardrails. For example:
- Say you don’t want the system to discriminate.
- Specify that the system needs to be fair, transparent, and capable of explaining to your people—and to candidates—how it works.
These are the basics, but they’re crucial gates to ensure you’re making informed decisions.
Q: You mentioned that free trials can be a great incentive to explore a tool. What key factors should HR professionals evaluate before signing up for one?
Dr. Cari: A free trial is excellent—don’t look that gift horse in the mouth. However, it’s the cart before the horse. What you want to do first is make sure you have a legitimate business problem. Investigate the solution thoroughly.
- Does it have explainability?
- Is it transparent?
- Do they handle fairness and bias in a way that aligns with your values?
- How do they deal with incidents and end-user appeals?
If all of that checks out, then absolutely, go for the free trial. But don’t start with the free trial and then wonder afterward, “Maybe I should check them out?”
That approach just doesn’t make sense. So, put the cart behind the horse, and you’ll make smarter, more informed decisions.
Q: When an AI vendor tells you their tool is “bias-free,” what can you ask the vendor to validate that claim?
Dr. Cari: It’s really important to investigate what you’re about to buy—or even what you already have in-house. Instead of reacting on the backside—saying, “Oh, something happened; now I need to hold you accountable,”—focus on the front end.
Ask questions like:
- Can you tell me more about your training data?
- How did you verify that it’s robust and representative?
- Where did the data come from?
- How old is it?
Remember, data tends to decay. If they’re pulling resume data from 20 years ago, that may not reflect today’s reality. I need to know what’s in the system, where it came from, and whether that data was appropriate for its intended purpose.
For example:
- Did you scrape it from social media? Because those sources can introduce biases.
- Did you test it? How did you test it? How often do you test it?
- Can I see those test results?
It’s important to scrutinize these aspects upfront. If the vendor responds with “We don’t know,” “We’re not sure,” or “It’s private,” those are potential red flags.
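If you want a feel for what vendor “test results” can look like, one widely used screen in US hiring analysis is the adverse impact ratio from the EEOC’s four-fifths guideline. Here’s a minimal sketch in Python; the applicant and hire counts are invented for illustration, not drawn from any real system.

```python
# Adverse impact ratio ("four-fifths rule") on illustrative counts.
def selection_rate(hired: int, applicants: int) -> float:
    return hired / applicants

group_a = selection_rate(hired=48, applicants=200)  # 0.24
group_b = selection_rate(hired=22, applicants=200)  # 0.11

# Ratio of the lower selection rate to the higher one.
impact_ratio = min(group_a, group_b) / max(group_a, group_b)
print(f"Impact ratio: {impact_ratio:.2f}")  # ~0.46
```

Under the four-fifths guideline, a ratio below 0.80 signals potential adverse impact worth investigating. It’s evidence to dig into, not proof of discrimination on its own.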
Q: Is it better to hire a third-party compliance professional who truly understands what a tool is claiming to do? Is that a good way to approach it—if, of course, it’s feasible?
Dr. Cari: Well, that’s a tough question. If I put on my ethics hat, I’d say absolutely—you should do that. You need to do that.
But if I put on my practical business hat, I’d acknowledge there’s an expense involved, and there’s no definitive answer. Ethics are subjective—your ethics, my ethics—and they don’t always align.
Here’s the balance:
- If you’re a large organization with high hiring volumes, and therefore higher risk, then yes, it becomes obvious that you should invest in someone who can properly evaluate these tools.
- If you’re a smaller organization with a lower hiring volume, you likely won’t be in a position to make that investment.
In that case, you’ll need to rely on the rules and regulations as they develop to guide you on what’s working and what isn’t.
Q: What is the current state of companies hiring teams for AI red-teaming, and how are they addressing or overlooking AI governance to ensure responsible use of AI tools?
Dr. Cari: It’s a spectrum, as you’d imagine—a bell curve.
On one end, you have very large organizations, particularly multinational companies. They often have entire departments dedicated to this because:
- Laws in some other countries tend to be more advanced.
- They’re more exposed to lawsuits due to higher hiring volumes.
On the other end, smaller organizations often don’t even fully understand the impact of AI yet. They might not have the resources or knowledge to address these complexities.
That said, I’m definitely seeing a movement toward greater AI literacy. People are starting to realize, “Wait a minute, these tools aren’t perfect. Tell me more about that.” That’s a positive shift, but overall, I’d say it’s still a bit slow going.
Q: What tools have you seen raising red flags versus those that seem to be taking this seriously?
Dr. Cari:
- Any tool claiming to measure someone’s personality traits—like bravery, boldness, or confidence—from video interviews is a major red flag. These tools often analyze voice inflections, eye movements, or facial expressions to make judgments, but:
  - You cannot read someone’s mind.
  - There’s extensive research showing how such systems fail, particularly for people with disabilities and neurodivergent individuals.
For example, a candidate could have a neurodivergent condition or even something as simple as an eye infection—factors that these tools often fail to account for. It’s a broad spectrum, but anything involving voice or video assessments requires extra caution.
- This brings us to tools with problematic components. A great example is LinkedIn. LinkedIn is a trusted platform—they promote themselves as being safe and responsible, and I genuinely see them working hard in this space. But there are elements that raise eyebrows.
For instance, LinkedIn has a Skills Cloud feature. Have you ever downloaded your skill data? It’s fascinating—and sometimes concerning.
- I manually select my skills on LinkedIn.
- But LinkedIn also tracks hundreds of other skills I didn’t input, and many are outdated or irrelevant.
For example, they label me as an “expert digital marketer,” which isn’t accurate anymore because those skills have decayed. It’s a reminder that even trusted tools have areas requiring scrutiny, especially when it comes to the data they rely on.
As I’ve read through some of their work, the engineers often start with this noble intention—saying things like, “We’re doing this to match people so they’re well-suited for opportunities.” But then, in a small paragraph at the bottom, you’ll find something like, “Also, this is a revenue-generating tool.” Wait a minute—those two things shouldn’t be true at the same time. That’s not okay.
I’ve noticed that HR professionals are struggling to sift through the muck and get to the bottom of these issues. And honestly, this is where you really need an expert to help guide you.
Over time, I think this will get worked out. But for now, it’s a really murky area.
Q: Going back to the video interview tools we were just discussing, would you say it’s too soon to rely on those tools at all?
Dr. Cari: Yes. Let me give you an example from one organization. I won’t name names, but here’s what they did:
I spoke with their engineers, and they designed their video interview tool in a very specific way. They asked job-specific questions based on O*NET standards—questions tailored directly to the job role.
Here’s the key part:
- They used nothing from the video itself—no analysis of facial expressions, voice intonation, or any visual cues.
- They simply transcribed the interview into text and provided the transcript to the hiring manager.
That’s it. They included some summaries, but the process only involved the words coming out of the candidate’s mouth. This approach removed potential biases, like skin color, lighting conditions, or any other visual factor. I can get on board with that because it focuses solely on what the candidate is saying.
But here’s the problem—most video interview tools aren’t even doing it that way.
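To make that transcript-only design concrete, here’s a minimal sketch; the field names and data are hypothetical, not taken from the vendor Dr. Cari describes.

```python
# Sketch of a transcript-only review step: everything derived from
# the video signal is deliberately dropped before a human sees it.
raw_interview = {
    "transcript": "I led a team of four engineers on a data migration...",
    "facial_expression_scores": {"smile": 0.8},  # discarded
    "voice_intonation_features": [0.1, 0.4],     # discarded
    "video_frames": "<binary>",                  # discarded
}

def for_hiring_manager(interview: dict) -> dict:
    """Forward only the candidate's words to the reviewer."""
    return {"transcript": interview["transcript"]}

print(for_hiring_manager(raw_interview))
```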
Q: What are best practices for companies when it comes to adopting AI tools?
Dr. Cari:
1. Data Minimization (or Lack Thereof)
What companies should be doing is practicing data minimization—collecting only the data that’s relevant to the job. But instead, many tools are collecting as much data as possible. For example, they might ask:
- What sports did you play in high school?
- Did you earn a karate badge in middle school?
And you’re left wondering, How is this relevant to being a data engineer?
Data brokers are often supplying this irrelevant, superfluous information. Companies should focus on minimizing data collection and proudly talking about it. But unfortunately, most don’t. Engineers love data—it’s almost like kryptonite to suggest using less of it.
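As a rough sketch of what data minimization can look like at intake, here’s an illustrative allowlist filter; the field names are hypothetical, and a real implementation would tie the allowlist to documented job requirements.

```python
# Keep only fields with a documented, job-related purpose.
JOB_RELEVANT_FIELDS = {"name", "work_history", "certifications", "skills"}

def minimize(candidate_record: dict) -> dict:
    return {k: v for k, v in candidate_record.items()
            if k in JOB_RELEVANT_FIELDS}

record = {
    "name": "A. Candidate",
    "work_history": ["Data Engineer, 2019-2024"],
    "high_school_sports": "lacrosse",        # irrelevant, dropped
    "middle_school_awards": "karate badge",  # irrelevant, dropped
}
print(minimize(record))  # only the job-relevant fields survive
```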
2. Purpose Limitation
Another issue is purpose limitation—data should only be used for the purpose it was originally collected for.
Often, that extra data was provided in a completely different context. For instance, someone might embellish their past achievements (like claiming they earned a karate badge) for fun or social reasons, but that data shouldn’t be reused to make job-related decisions.
If data isn’t collected specifically for hiring purposes, it can:
- Be inaccurate.
- Lead to misunderstandings or bias.
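One way to operationalize purpose limitation is to record, alongside each field, the purpose it was collected for, and refuse any other use. The sketch below is illustrative; the schema is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class CollectedField:
    value: str
    collected_for: str  # e.g. "hiring", "social", "marketing"

def use_for(field: CollectedField, purpose: str) -> str:
    """Release the value only for the purpose it was collected for."""
    if field.collected_for != purpose:
        raise PermissionError(
            f"Collected for '{field.collected_for}', not '{purpose}'."
        )
    return field.value

karate = CollectedField("karate badge", collected_for="social")
try:
    use_for(karate, "hiring")
except PermissionError as err:
    print(err)  # Collected for 'social', not 'hiring'.
```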
3. Inference Problems
This one is huge. When systems don’t have certain information (like race or ethnicity), they’re often programmed to infer it. This creates significant issues:
- Inference can be wildly inaccurate and reinforce stereotypes.
- The systems essentially “guess” at characteristics or even skills based on incomplete or irrelevant data.
Going back to LinkedIn – they are notorious for this. I input 50 skills into my profile and uploaded my resume. LinkedIn inferred 300 additional skills about me, one of which is running Google AdWords campaigns.
Thanks, LinkedIn, but here’s the problem: I cannot run a campaign on Google right now. That’s simply not a skill I have anymore. But LinkedIn thinks I do—and then shares that misinformation with others.
These issues—irrelevant data, a lack of purpose limitation, and problematic inference—highlight why we need to ask tough questions about how these systems work.
Q: How often should HR leaders reevaluate an AI tool’s accuracy?
Dr. Cari: Ideally, vendors should handle this themselves by working with an independent third party to audit their systems on a routine basis.
The best vendors do this because they want to be seen as upstanding and competitive in the market. They’ll go out of their way to hire third-party auditors and publish the results. That’s very honorable, but it’s still rare. When vendors don’t do this, HR leaders should step in.
Auditing timelines typically look like this:
- At a minimum, audits should happen annually.
- Every two years is also acceptable in some cases.
And these audits must be conducted by an independent party to ensure transparency and trustworthiness.
Let’s compare it to financial auditing. In financial audits, there’s a clear checklist:
- Review the profit and loss statement.
- Verify the balance of accounts.
- Ensure incoming funds equal outgoing funds.
It’s very black and white—pass or fail. But that clarity doesn’t exist yet in AI auditing. In the absence of standardized methods, we’ve created a variety of approaches to evaluate whether a system is good or not.
Q: When assessing AI tools, does the evaluation process also involve certifications, and is there an overall assessment framework?
Dr. Cari: Yes, that’s one approach. Let me break it down into different types of audits or assessments:
1. Governance Assessment
- Does the organization have proper policies in place?
- Are their ethical decisions documented?
- Do they resolve issues in a timely manner and without excessive complaints?
This is more of a basic assessment that ensures they’re taking care of what they should.
2. Contract Compliance Audit
- Did the vendor meet the agreed-upon metrics?
- Did they deliver what they promised, when they promised it?
- Were incidents kept to a minimum, as outlined in the contract?
This type of audit focuses on ensuring the vendor fulfilled their contractual obligations.
3. Procedural and Process Audit
- Did they train their staff as promised, and is that training documented?
- Where did their data come from, and is that source verified?
This goes a bit deeper, examining whether they’re following the processes they claim to follow.
4. Statistical Audit
- This is the most rigorous type.
- It involves analyzing test results, running the data independently, and attempting to replicate outcomes.
- The goal is to ensure the system’s statistical accuracy and reliability.
This level of audit is far more intensive and typically requires specialized expertise, making it more expensive.
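To give a flavor of what “running the data independently” can mean, here’s a minimal sketch of one such check: a chi-squared test of whether selection outcomes are independent of group membership, using the same illustrative counts as the four-fifths sketch earlier. A real audit would use the vendor’s actual outcome data and considerably more care.

```python
# Chi-squared test of independence between group and outcome.
from scipy.stats import chi2_contingency

#            hired  rejected
outcomes = [[48,    152],   # group A applicants
            [22,    178]]   # group B applicants

chi2, p_value, dof, expected = chi2_contingency(outcomes)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")
# A small p-value says the selection-rate gap is unlikely to be
# chance, which justifies deeper investigation; it is not, by
# itself, proof of discrimination.
```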
Q: This is a lot of information to absorb. It feels overwhelming—how can one person or one team be responsible for understanding all of this? Especially in smaller companies, this often falls on HR and people functions. Do you think it’s necessary for HR leaders to understand this end to end?
Dr. Cari: While it’s always better to understand the process end to end, it’s not always realistic.
What I’m seeing mirrors exactly what you’re struggling with. Most HR leaders are saying:
- “This sounds like too much.”
- “I don’t have time for this.”
- “Do I really have to do all of that?”
The same challenges are happening in procurement. When purchasing these tools, think of all the questions you need to ask—it’s overwhelming. Procurement teams are saying:
- “Are you kidding me? I already have so much to do. I can’t learn all this!”
This sense of overwhelm is unfortunately the norm right now.
Q: So, I’m sure there’s no immediate solution to this overwhelm – but what can HR professionals take away from all this information?
Dr. Cari: While I don’t know the final answer, here’s what I do know:
- Recognize the Complexity: It’s okay to admit, “This is too much for me to handle alone.”
- Don’t Rush Decisions: Avoid jumping into pilots or contracts without reviewing them properly.
- Bring in Expertise: Even if you don’t fully understand every detail, someone else can. Bring in experts who can look at these systems and help you navigate the complexities.
Sometimes, the first step is just acknowledging, “Okay, there’s more to this than I initially thought, and I might need some outside help.”
Q: Looking ahead to 2025, what emerging trends in AI governance should HR leaders be paying attention to as they develop procurement standards?
Dr. Cari: Watch the state legislatures. I think some states will start addressing these concerns, though not all. Here’s the key: extraterritoriality.
- When a state like California makes a rule, it often impacts companies beyond its borders.
- So, even if only one state enacts a law, people in other states often benefit from it indirectly.
Now, unfortunately, this creates a patchwork of regulations, which can make doing business harder for companies. But that’s the landscape I foresee for 2025.
Organizations like SHRM will likely track these developments and keep everyone informed. I don’t think you’ll miss any major movements—it’ll all be well publicized.