Most Americans Believe AI Is Making Bias Worse, Not Better

Artificial intelligence is becoming a regular part of hiring, but many American workers aren’t convinced it’s helping. In fact, a new report suggests the opposite: most people believe AI is actually increasing bias in the workplace.

According to a recent survey by talent solutions company SHL, 59% of workers say AI adds bias rather than reducing it, and 56% prefer that job applications be reviewed only by humans. The findings highlight a growing discomfort with handing critical career decisions over to algorithms.

Why This Matters

AI is now everywhere—from job recruiting and classrooms to dating apps. While companies often promote it as a tool for efficiency and fairness, many workers worry about the long-term consequences.

There’s growing concern that AI could:

  • Eliminate jobs over time
  • Block qualified candidates from opportunities
  • Reinforce existing inequalities instead of fixing them

When technology plays a role in decisions that affect livelihoods, trust becomes a big deal—and right now, that trust is shaky.

What the Survey Found

The SHL report revealed widespread uncertainty and concern around AI in hiring:

  • 91% said being interviewed by an AI system would change how they view a company
  • Still, 54.6% said they would accept an AI interview if it meant getting a job
  • 48% are willing to take AI-related courses to stay competitive
  • Yet 25% admitted they don’t even know what “AI skills” really mean

On top of that:

  • 67.1% believe AI is reducing job opportunities
  • 66% think companies should be legally required to disclose when AI is used in hiring

Sara Gutierrez, SHL’s Chief Science Officer, summed it up: AI can quickly make a company seem either innovative or cold and impersonal. Workers are open to AI that improves efficiency, but they expect transparency, especially when careers are on the line.

What Experts Are Saying

HR consultant Bryan Driscoll didn’t mince words. Speaking to Newsweek, he explained that AI systems are often trained on biased data, meaning they can simply automate discrimination rather than eliminate it.

“AI may reduce bias in very narrow, controlled situations,” Driscoll said, “but only when the data is audited, corrected, and constantly monitored—which very few companies actually do.”

Instead, many organizations buy AI tools and blindly trust the results, effectively outsourcing human judgment to machines.

What Comes Next

Driscoll warns that using AI in hiring, performance reviews, or internal investigations without transparency or safeguards only deepens existing inequalities.

“AI should never replace human judgment in high-stakes decisions,” he said. “It can be helpful—but only when companies treat it as a tool, not an all-knowing authority.”

Without strong regulation, validation, and ongoing oversight, experts fear AI will continue reinforcing the very problems it’s supposed to solve.


