
AI's Dirty Secret: A "Neutral" Technology Riddled with Bias

  • Writer: Andy Neely
  • Jun 2
  • 3 min read

I asked ChatGPT to generate a simple image: a doctor and a nurse. What I got back was a white male doctor standing beside a white female nurse. Standard. Predictable. Problematic.


I posted the image to LinkedIn with a single observation: AI models have biases we need to acknowledge.


The response was explosive. Tens of thousands of impressions. Hundreds of comments. Battle lines drawn faster than you could say "machine learning."


One camp insisted AI simply reflects reality—neutral technology processing neutral data. The other fired back that this was a dangerous cop-out, that you can't separate AI from the prejudices baked into its training.


Curious about the scope of this bias, I went back to ChatGPT with a direct question: "If I asked you to generate 100 images of a doctor, how many would be male versus female?"


The answer was brutally honest: 70-80% would be men.


For nurses? The numbers flipped: 80-90% would be women.


ChatGPT didn't try to hide behind algorithmic neutrality. It admitted the uncomfortable truth: "Without explicitly saying 'female doctor' or requesting gender balance, the model defaults to its internal priors, which reflect societal biases in image sources."
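You can run this kind of sanity check yourself. Below is a minimal sketch of an audit loop: it simply repeats the same neutral prompt many times and tallies how the subject is presented. The `simulate_generation` function and the `SIMULATED_PRIORS` table are hypothetical stand-ins, seeded with the rough percentages ChatGPT quoted above; in a real audit you would replace that stand-in with calls to an actual image model plus a labelling step (human review or a classifier you trust).

```python
import random
from collections import Counter

# Illustrative priors only, taken from the figures quoted in this post
# (roughly 70-80% male for "a doctor", 80-90% female for "a nurse").
# Replace simulate_generation() with real model calls for a genuine audit.
SIMULATED_PRIORS = {
    "a doctor": {"man": 0.75, "woman": 0.25},
    "a nurse": {"man": 0.15, "woman": 0.85},
}

def simulate_generation(prompt: str) -> str:
    """Stand-in for generating one image and labelling how the subject presents."""
    labels, weights = zip(*SIMULATED_PRIORS[prompt].items())
    return random.choices(labels, weights=weights, k=1)[0]

def audit_prompt(prompt: str, n: int = 100) -> Counter:
    """Run the same neutral prompt n times and tally the results."""
    tally = Counter()
    for _ in range(n):
        tally[simulate_generation(prompt)] += 1
    return tally

if __name__ == "__main__":
    for prompt in ("a doctor", "a nurse"):
        counts = audit_prompt(prompt, n=100)
        total = sum(counts.values())
        summary = ", ".join(
            f"{label} {count / total:.0%}" for label, count in counts.most_common()
        )
        print(f"{prompt!r}: {summary}")
```

The point isn't the code; it's that the default behaviour of a model is measurable. If you never specify gender and the outputs skew anyway, that skew is the model's prior, not yours.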


Wanting to dig deeper, I explored geographic differences: would specifying "UK doctor" versus "US doctor" change the gender distribution?


ChatGPT's response revealed something far more troubling than bias: ignorance of reality.

Although over 50% of UK GPs are now female, ChatGPT admitted its models would show little difference between UK and US representations. Why? Because AI training data is drowning in American content, with other countries relegated to statistical noise.


Think about what this means: AI doesn't reflect your reality—it reflects Silicon Valley's version of reality, exported globally.


So what are the implications?

First: Data isn't neutral—it's narrative. Every dataset tells a story. If you're building systems on biased foundations, you're not automating efficiency—you're automating inequality.


Second: The world is being erased. Vast populations, entire cultures, different ways of organising society—all underrepresented or missing entirely from the data shaping our AI future. We're not building artificial intelligence; we're building artificial "American" intelligence and calling it universal.


Third: Your local AI isn't local. That healthcare AI making decisions in your country? That recruitment tool screening candidates in your region? They don't understand your context because they were never taught it. They're applying American stereotypes with algorithmic confidence.

The most dangerous response I received wasn't from the bias-deniers—it was from those who shrugged and said, "Well, that's just how the data is."


That's just how the data is.

As if data falls from the sky like rain instead of being collected, curated, and coded by humans making choices about what matters and what doesn't. As if we have no agency in deciding what future we're building.


Here's what everyone missed in those heated comment threads: This isn't really about doctors and nurses.


It's about power. It's about who gets to define normal. It's about whether we're using the most transformative technology in human history to expand possibilities or cement limitations.

Every time we deploy biased AI systems, we're not just making technical errors—we're making moral choices. We're deciding that efficiency matters more than equity, that convenience trumps justice, that the status quo is good enough.


The real question isn't whether AI has bias. The question is: What are you going to do about it?

Because right now, while we debate whether bias exists, biased AI systems are making decisions about loans, jobs, healthcare, and justice. They're shaping the future in their image—an image that looks suspiciously like the past.


The technology isn't neutral. The data isn't neutral. And pretending otherwise isn't just naive — it's complicit.

