The Inherent Bias in AI Applications


I remember when my roommate, Kyle, asked Google if six potatoes was enough to make six servings of latkes. The quick AI-generated answer? "Not nearly enough." Kyle doesn't cook much, so he took this as gospel and bought ten more potatoes. Our fridge was stocked with leftovers for days. Any restaurant chef would have caught that mistake or at least asked follow-up questions like "Is this a side dish?" or "Have you considered using just three more? That's a safe buffer without going overboard."

Generative AI can confidently lead users to the wrong answer. And while extra carbs are a minor inconvenience, the stakes aren't always so low. When AI gets it wrong about people, the consequences are real, lasting, and often invisible to those not on the receiving end.

When Algorithms Decide Who Gets a Chance

Let's start with hiring. Historically, finance leadership has been vastly male-dominated. AI recruitment software can easily scrape LinkedIn data and might find that in Portland, Oregon, out of 100 CFOs, 83 have typically male names (ex: Nick, John, Frank) while only 17 have typically female names (ex: Jess, Emily). Many of these are also common white names. These numbers aren't a stretch; nationwide, as of 2024, only 17.6% of CFO roles in Fortune 500 and S&P 500 companies are held by women.

So what happens when Generative AI is asked to surface "top qualified" candidates? It learns from the patterns it's fed; its answer is simply whichever one looks most probable given that history. And if history says CFOs are mostly men named John, then candidates named Aaliyah, Deja, and Jamal get deprioritized or removed. Not because they're unqualified, but because they don't match the pattern. That AI system is often the hiring manager's first filter. Aaliyah, Deja, and Jamal never even get a first-round interview.
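
To make that concrete, here is a minimal, hypothetical sketch in Python (using scikit-learn). Nothing in it comes from any real vendor's system; the features, the historical data, and the two candidates are all invented. The point is only that when past hires skew toward one name pattern, a feature correlated with that pattern ends up deciding who gets surfaced.

```python
# Hypothetical illustration: a screener trained on biased hiring history.
# All features, data, and candidates are invented for this sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Historical outcomes. Features: [years_experience, has_cpa, name_matches_past_leadership]
# The last flag marks whether a candidate's name resembles names already common
# in the (male-dominated) leadership pool.
X_history = np.array([
    [18, 1, 1], [15, 1, 1], [20, 0, 1], [17, 1, 1], [14, 0, 1],  # hired (y = 1)
    [19, 1, 0], [16, 1, 0], [21, 1, 0],                          # passed over (y = 0)
])
y_history = np.array([1, 1, 1, 1, 1, 0, 0, 0])

model = LogisticRegression().fit(X_history, y_history)

# Two candidates with identical qualifications; only the name-pattern flag differs.
john    = np.array([[17, 1, 1]])
aaliyah = np.array([[17, 1, 0]])
print("John's 'fit' score:   ", model.predict_proba(john)[0, 1])
print("Aaliyah's 'fit' score:", model.predict_proba(aaliyah)[0, 1])
# Aaliyah scores lower on an identical resume, because the model learned that
# "looks like past leadership" is what predicts who gets hired.
```

In a real pipeline the "name pattern" is rarely an explicit column; it leaks in through correlated signals, which is exactly why it is so hard to spot.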

This isn't a glitch. It's the system working exactly as designed: on biased information.

Beyond that, AI systems are often designed with constraints that are fundamentally misaligned with users' interests, for the sake of corporate ease and profit. This can look like forgoing safeguards, limiting the system's ability to adapt or be extended, and leaving out mechanisms for bias detection and mitigation, such as checks for proxy variables (easily measurable variables that analysts include in a model in place of variables that are difficult to measure).

The result of this? A reinforcement of our human bias.
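
To illustrate the proxy-variable point specifically, here is another small, hypothetical Python sketch. The protected attribute is never given to the model, yet a seemingly neutral feature that correlates with it (a zip-code flag in this made-up example) carries the historical bias right back in. All numbers and features are invented for illustration.

```python
# Hypothetical illustration of a proxy variable: the protected attribute is
# excluded from training, but a correlated feature smuggles it back in.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Protected attribute (never shown to the model) and a zip-code flag that
# correlates with it, e.g. because of historically segregated housing.
group = rng.integers(0, 2, n)
zip_flag = np.where(rng.random(n) < 0.9, group, 1 - group)   # ~90% correlated

# A genuinely neutral qualification signal, plus historical decisions that
# were biased against group 1 regardless of that signal.
income = rng.normal(60, 10, n)
approved = (income - 8 * group + rng.normal(0, 3, n) > 55).astype(int)

# Train only on the "neutral" features; the protected attribute is excluded.
X = np.column_stack([income, zip_flag])
model = LogisticRegression().fit(X, approved)

pred = model.predict(X)
print(f"Group 0 predicted approval rate: {pred[group == 0].mean():.3f}")
print(f"Group 1 predicted approval rate: {pred[group == 1].mean():.3f}")
# The gap persists: the model rediscovered the bias through the zip-code proxy.
```

A bias check that only asks "did we exclude race as a feature?" would wave this model through. Catching it requires auditing outcomes across groups, which is exactly the kind of mechanism that too often gets left out.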

The Same Problem, Different Contexts

Hiring is just one example. The pattern repeats everywhere AI is used to classify people.

Education: AI proctoring software can flag a student for "suspicious behavior" during a test, say fidgeting or looking around the room. For students with undiagnosed ADHD, this is a normal Tuesday. Now consider that the undiagnosed kid is Black and already more likely to face harsher discipline when accused of cheating. You've now compounded the bias.


Travel: The TSA uses AI-assisted screening for "random" security checks. Except the randomness has a pattern. Ask anyone who's been pulled aside repeatedly while their white travel companions walk through untouched.

These systems don't announce themselves as discriminatory. They present as neutral, efficient, objective. That's what makes them dangerous.

This Is Bigger Than Bad Code

It's tempting to frame this as an algorithm problem. But the issue runs deeper.

"Any technology involved with classifying people by necessity would be shaped by subjective human choices… you cannot study machines created to analyze humans without also considering the social conditions and power relations involved."

— Joy Buolamwini, Unmasking AI: My Mission to Protect What Is Human in a World of Machines

The dilemma of biased AI isn't just about which data gets used. It's about who collected it, who consented, whose experiences were included, and whose were erased. Was there diversity in income, geography, ability, race? Were users even asked? These aren't edge cases. They're foundational questions that too often go unasked.

So What Do We Do?

I'm not here to say we should abandon AI entirely. That's neither realistic nor desirable. No major technological shift has ever been successfully pushed back into nonexistence. Think of the internet, social media, even cameras. AI is here to stay.

And honestly? I'm an avid user. If designed and deployed responsibly, AI has real potential to improve healthcare access, streamline daily tasks, and address systemic issues like housing. The goal isn't rejection. It's accountability.

We need to keep our eyes open about the tools we use, just as we expect the corporations building them to do. Blind acceptance helps no one. 

How You Can Push Back

You don't have to be a developer to make a difference. Here's where to start:

  • Document and share. If you experience harm from an AI system, document it. Your story puts pressure on companies and policymakers to act.

  • Pause before you use. Before jumping into the latest TikTok AI trend, think about what the tool does with your photos, voice, and data.

  • Read the fine print. Review user agreements and privacy policies. You can even feed them into an AI chatbot and ask it to summarize the risks.

For those who are building AI—or considering it—I recommend reading NIST's Towards a Standard for Identifying and Managing Bias in Artificial Intelligence (Special Publication 1270). It's a meaningful first step toward governance frameworks that actually address harm.


AI won't fix itself. But if we stay critical, stay vocal, and refuse to treat these systems as infallible, we have a shot at shaping them into something better.


About Lauren

Lauren brings multi-faceted AI experience spanning product development, marketing, and L&D across roles at KPMG. Her work includes use case videos, coursework, and research articles for Data & Analytics and AI solutions. She's developed and executed omni-channel marketing plans for AI products that drive user engagement through strategic distribution. Her creative interests extend to AI filmmaking, where she combines technical expertise with storytelling to explore new frontiers in AI-generated media.

LinkedIn

Works Cited

Buolamwini, Joy. Unmasking AI: My Mission to Protect What Is Human in a World of Machines. First edition, Random House, 2023.

Estrada, Sheryl. "BCG Studied Dozens of Women Who Became CFOs. Here's What They Had in Common." Fortune, 2024.

Frost, Jim. "Proxy Variables: The Good Twin of Confounding Variables." Statistics By Jim, 2021, statisticsbyjim.com/regression/proxy-variables/.


