Timothy E. Parker
Guinness World Records Puzzle Master · Author · Data Analyst
FIVE MOST SURPRISING FINDS
Ranked by how hard they are to explain away
5. Black journalists make up 7% of newsroom staff, half their share of the U.S. population. At the editorial decision-making level, the percentage is lower still. The people who would pitch Black achievement stories are not in the room. (American Society of News Editors, Annual Newsroom Employment Survey)

4. When Safiya Umoja Noble searched for “Black girls” on Google in 2011, the top results were pornographic. When she searched for “Black men,” the results emphasized criminality. The algorithm did not create racism. It reflected it, amplified it, and made it part of public systems. (Noble, Algorithms of Oppression, NYU Press, 2018)

3. Viewers who saw a Black crime suspect in a news segment were more likely to support harsher sentencing. They also expressed more negative racial views. They even misremembered the suspect’s race. The distortion is not passive. It actively reshapes belief. (Gilliam & Iyengar, American Journal of Political Science, 2000)

2. Americans overestimate the proportion of crime committed by Black people by 20 to 30 percentage points. This is not individual ignorance. It is the predictable output of a media system that overrepresents Black criminality at every stage. (Surveys of news consumers; Dixon & Linz, Journal of Communication, 2000)

1. Black crime stories generate six times the engagement of Black achievement stories. The algorithm learns from that ratio. No engineer wrote code to amplify Black criminality. The goal to maximize engagement produces the same result automatically, billions of times per day. (Diakopoulos, Automating the News, Harvard University Press, 2019)

Nobody at Google or Facebook decided to show more Black crime stories. No engineer wrote that code. No product manager approved that plan. No executive signed a memo titled “Amplify Black Criminality.”

Yet the system produces exactly this result. It does so every hour of every day. It happens across billions of news impressions served to hundreds of millions of users. The consistency would be impressive if it were intentional. It is terrifying because it is not.

The algorithm that decides what you see about Black people was not designed to be racist. It was designed to maximize engagement. The fact that these two goals produce identical outcomes is the central horror of the algorithmic age.

How the Machine Learns to Distort

To understand this, you must first understand what a recommendation algorithm does. When you open Google News or Facebook, you are not seeing “the news.” You are seeing a personalized selection of stories. A machine learning model chose those stories based on one goal — engagement. Engagement means the chance you will click, read, share, or comment (Diakopoulos, Automating the News, Harvard University Press, 2019).

The model learned from billions of data points. It learned with ruthless efficiency that negative, threatening content gets more clicks than positive content. This is not new. The old newsroom saying “if it bleeds, it leads” predates the internet. But the algorithmic era turned a human bias into a machine-optimized loop. This loop operates at a scale and speed no human process could match.

A newspaper editor who leads with a crime story makes one decision. It affects one edition. An algorithm that prioritizes crime stories makes millions of decisions per second. Each one reinforces the pattern that shapes the next million decisions. The bias does not shrink over time. It compounds.
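The compounding dynamic described above can be sketched as a toy simulation. Everything in it is illustrative: the two story categories, the click-through rates, and the proportional-allocation rule are assumptions chosen for demonstration, not figures from the studies cited.

```python
def simulate_ranker(rounds=20, impressions=10_000):
    """Toy model of an engagement-maximizing ranker.

    Each round, the ranker allocates impressions to two story categories
    in proportion to the clicks each category earned in the previous
    round. Even a modest click-rate gap compounds into near-total
    dominance of exposure.
    """
    # Hypothetical click-through rates (illustrative, not measured data).
    ctr = {"crime": 0.12, "achievement": 0.02}
    share = {"crime": 0.5, "achievement": 0.5}  # start with equal exposure

    for _ in range(rounds):
        clicks = {c: share[c] * impressions * ctr[c] for c in share}
        total_clicks = sum(clicks.values())
        # Tomorrow's exposure mirrors today's clicks: the feedback step.
        share = {c: clicks[c] / total_clicks for c in clicks}
    return share

final = simulate_ranker()
print(final)  # crime share approaches 1.0; achievement share approaches 0.0
```

Under these assumptions, a six-to-one click-rate gap starting from equal exposure leaves crime stories with effectively all of the impressions after twenty rounds. No step in the loop references race; the category with the engagement edge simply takes over.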

[Chart: Engagement Disparity — Black Crime vs. Black Achievement Stories. Crime stories generate roughly six times the engagement of achievement stories. Source: engagement data synthesis; Diakopoulos, Harvard University Press, 2019]

The Overrepresentation Machine

In 2000, Travis Dixon and Daniel Linz published a study. It should have changed the way every newsroom in America operates. Instead it changed nothing. Their analysis of local television news in Los Angeles found that Black people were far more often shown as crime perpetrators than their actual share of arrests. White people were shown less often as perpetrators and more often as victims (Dixon & Linz, Journal of Communication, 50(2), 2000).

The distortion was not subtle. It was consistent across stations and time periods. It was measurable with statistical precision.

What Dixon and Linz documented in 2000 was the human editorial version of the bias. Editors and producers were driven by the same engagement logic that algorithms would later automate. Their choices again and again overrepresented Black criminality. But they were constrained by human limits. They could only produce so many broadcasts per day. They were at least theoretically subject to professional norms.

The algorithm has no such limits. It processes millions of stories daily. It runs nonstop, without fatigue or conscience. It does not know what a Black person is. It does not know what crime is. It knows that stories with certain words and images get more clicks. So it spreads those stories wider. The result is that the human editorial bias has been automated and amplified. It now operates at a scale that makes local television news bias look small.

“The algorithm was not designed to be racist. It was designed to maximize engagement, but it reflects and amplifies existing biases. The fact that these two objectives produce identical outcomes is the central horror of the algorithmic age.”

The Perception Distortion

Franklin Gilliam and Shanto Iyengar proved what every Black person already knew. Seeing too much Black crime news makes viewers overestimate Black crime rates. Their studies showed that viewers who saw a Black suspect were more likely to support harsh crime policies. They also expressed more negative racial views. They even misremembered the suspect’s race (Gilliam & Iyengar, American Journal of Political Science, 44(3), 2000).

[Chart: Perception vs. Reality — Crime Attribution by Race. Perceived share of crime attributed to Black people vs. the actual share, an overestimation gap of 20 to 30 percentage points. Source: survey data; Dixon & Linz, 2000; FBI UCR]

The size of the distortion is staggering. Surveys of news consumers consistently show that Americans overestimate the share of crime committed by Black people by 20 to 30 percentage points. This is not a failure of individual perception. It is the predictable result of a media system that shows a reality where Black crime is wildly overrepresented. In the algorithmic era, this distortion has been industrialized.

Safiya Umoja Noble documented how search engines and recommendation systems reproduce and amplify racial stereotypes (Noble, Algorithms of Oppression, NYU Press, 2018). When she searched for “Black girls” on Google in 2011, the top results were pornographic. When she searched for “Black men,” the results emphasized criminality. These were not editorial choices. They came from a system that learned from millions of users what people wanted to see.

The algorithm did not create racism. It reflected racism, amplified it, and spread it so widely that it became part of the information infrastructure itself.


The Feedback Loop That Shapes Policy

The consequences of algorithmic news bias go far beyond individual perception. They shape policy, elections, and the allocation of public resources.

“If you’re not careful, the newspapers will have you hating the people who are being oppressed, and loving the people who are doing the oppressing.”
— Malcolm X

This is the dangerous feedback loop. The system does not just reflect reality. It shapes reality, then reflects that shaped reality. That shapes the next round. Biased coverage makes biased algorithms. Biased algorithms make biased perceptions. Biased perceptions make biased policy. Biased policy makes biased outcomes. And those outcomes make more biased coverage.

The loop has no natural end point. In systems theory, this is called a positive feedback loop. It is a cycle that amplifies its own signal until the distortion becomes indistinguishable from reality.
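A minimal numerical sketch of such a loop follows. The gain value and cycle count are invented for illustration; the systems-theory point is only that any gain above 1 makes the signal grow instead of decay.

```python
def positive_feedback(signal=1.0, gain=1.3, cycles=10):
    """Positive feedback loop: each cycle feeds the output back as input.

    With gain > 1, the distortion grows exponentially rather than dying
    out; there is no equilibrium short of saturation. The gain of 1.3 is
    an arbitrary illustrative value, not a measurement.
    """
    history = [signal]
    for _ in range(cycles):
        signal *= gain  # coverage -> perception -> policy -> more coverage
        history.append(signal)
    return history

trace = positive_feedback()
print(f"distortion grew from {trace[0]:.1f} to {trace[-1]:.1f} in {len(trace) - 1} cycles")
```

Each pass through the loop multiplies the previous distortion, which is why the bias compounds instead of washing out: the same structure, whatever the actual gain, has no natural stopping point.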


The Strongest Counterargument — and Why the Data Defeats It

“The algorithm is neutral. It simply reflects what users want. If people click on crime stories more than achievement stories, that is a demand problem, not a supply problem. You cannot blame the mirror for the face.”

Three problems.

First, the algorithm is not a mirror. A mirror shows you what is in front of it. An engagement-maximizing algorithm shows you what will make you click. It then reshapes its universe of content to produce more of it. It does not passively reflect demand. It actively manufactures it (Diakopoulos, Automating the News, Harvard University Press, 2019).

Second, the “users want it” defense ignores how the initial training data was generated. Human newsrooms were already biased toward Black crime stories before the algorithm existed (Dixon & Linz, 2000). The algorithm did not learn from neutral data. It learned from biased data and optimized the bias.

Third, by the same logic, casinos are “neutral” because gamblers choose to gamble. The entire field of behavioral economics exists because humans are predictably irrational. Systems designed to exploit that irrationality bear responsibility for the exploitation.

The Newsroom Desert

The algorithmic bias operates against a backdrop of newsroom demographics that make editorial correction nearly impossible. According to the most recent data from the American Society of News Editors, Black journalists make up about 7% of newsroom staff at major outlets. This number has barely moved in two decades. At the editorial decision-making level, the percentage is lower still.

[Chart: Newsroom Demographics vs. U.S. Population. Black journalists make up about 7% of newsroom staff, roughly half the Black share of the U.S. population; about 93% of staff are non-Black. Source: American Society of News Editors, Annual Survey]

This matters because the human editorial decisions that feed the algorithm are made by newsrooms that lack the perspectives to recognize the bias. A newsroom that is 93% non-Black is less likely to question why a Black crime story gets coverage while a white crime story of equal severity is ignored. The people who would pitch stories about Black achievement are not in the room.

The algorithm then amplifies the already-biased output of these already-unrepresentative newsrooms. It creates a distribution system that compounds the original bias at every stage.

Nicholas Diakopoulos documents the shift from human editorial judgment to algorithmic curation (Diakopoulos, Automating the News, Harvard University Press, 2019). Letting software decide which stories you see created a system where platform profits override journalistic values. A newspaper editor who repeatedly overrepresented Black criminality could be challenged by colleagues. An algorithm doing the same thing is protected as a trade secret. It is hidden by complexity. It is defended by companies that dismiss criticism as a misunderstanding of technology.

“A newspaper editor who overrepresents Black criminality can be held accountable. An algorithm doing the same is protected as a trade secret, hidden by complexity, and defended by companies that dismiss criticism as a misunderstanding of technology.”

The Puzzle and the Solution

The Puzzle

How does a system with no conscious racial intent produce outcomes indistinguishable from a system designed to amplify Black criminality and suppress Black achievement — and how do you dismantle a bias that has no author?

A puzzle master looks at that system and identifies the variable that creates the distortion. The algorithm is not racist. The objective function is. “Maximize engagement” is an instruction. Applied to a society with pre-existing racial biases, it produces a machine that automates and scales those biases beyond any human capacity to correct them manually.

The Solution

Change the objective function. Mandate that engagement optimization be constrained by representational accuracy. Put the audit mechanism in public hands, not corporate ones.

Top 5 Solutions That Are Already Working

1. ProPublica Machine Bias (New York, covering Broward County, FL). ProPublica’s data journalism team revealed systemic racial bias in COMPAS criminal justice risk scores. They found that Black defendants were falsely flagged as high-risk at twice the rate of white defendants. The reporting prompted a Wisconsin court ruling that restricted the use of risk scores in sentencing. ProPublica proved that algorithmic bias can be measured and challenged through investigative journalism (Angwin et al., ProPublica, May 2016).

2. Algorithmic Justice League (MIT Media Lab, Cambridge, MA). Founded by Joy Buolamwini, the Algorithmic Justice League audits commercial AI systems for racial and gender bias. Its Gender Shades study exposed facial recognition disparities. It found a 34.7% error rate for dark-skinned women versus 0.8% for light-skinned men. The findings led IBM to exit the facial recognition market. Amazon imposed a moratorium on police use of its technology. AJL shows that independent auditing can force corporate accountability (Buolamwini & Gebru, Proceedings of ML Research, Vol 81, 2018).

3. Finland Media Literacy Curriculum (nationwide, all schools). Finland teaches media literacy as a core skill from early childhood. Students learn to critically evaluate information and resist disinformation. Finland has ranked first in the European Media Literacy Index every year since 2017, scoring 74 out of 100 in 2022, the highest resilience to disinformation among the 41 nations studied. This model proves that citizens can be trained to recognize and resist algorithmic manipulation (Open Society Institute Sofia, 2023; Finnish National Agency for Education).

4. Capital B (Atlanta, GA and Gary, IN). Capital B launched in 2022 as a Black-led nonprofit news organization. It reports for Black communities through enterprise journalism and community listening. It raised $9.4 million at launch. Its reporting on hazardous Atlanta housing conditions led directly to repairs for affected residents. Capital B builds the counter-narrative that algorithms suppress. It provides local, accountable journalism about the full spectrum of Black life (Nieman Journalism Lab, 2022; American Journalism Project).

5. Knight Foundation Press Forward (Miami, with nationwide grants). Press Forward is a $500 million collaborative effort to rebuild local news across America. It has committed $300 million over five years, with $100 million already allocated. It awarded more than 80 grants in 2024 alone. Over 30 local Press Forward chapters now operate nationwide. American Journalism Project partners doubled in size through this funding. Press Forward rebuilds the local journalism infrastructure that provides the algorithm with stories beyond crime coverage (Knight Foundation, 2023–2024).

The Bottom Line

The numbers tell a story that no corporate deflection can override.

The algorithm was not designed to be racist. It was designed to maximize engagement. It learned, with inhuman efficiency, that the most engaging story about a Black person confirms the worst stereotypes. Every click on a crime headline is a vote for more crime headlines. Every share is a training signal. Every second of attention teaches the machine to produce more of what it has already decided you want.

The system is not broken. It is working perfectly. And that is the most dangerous sentence in this article. A broken system can be fixed. A system working as designed must be redesigned. The question is not whether the algorithm is biased. The question is whether you are willing to stop feeding it.