Year | Case | Issue | Link
2015 | Amazon – Gender Bias in AI Recruitment | Amazon’s AI recruitment system (2014–2018) showed gender bias by favoring male candidates and penalizing resumes containing terms like “women’s.” It learned this bias from past hiring patterns and was shut down after it systematically disadvantaged female applicants. | https://www.reuters.com/article/world/insight-amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK0AG/
https://www.aclu.org/news/womens-rights/why-amazons-automated-hiring-tool-discriminated-against
2020 | HireVue – Disability Bias in AI Hiring | HireVue’s AI interview tool failed to interpret spoken responses from candidates with disabilities. For example, a Deaf Indigenous woman applying to Intuit was misunderstood by the speech-recognition system, which could not process her responses, highlighting accessibility gaps in AI-based hiring. | https://www.shrm.org/topics-tools/news/talent-acquisition/hirevue-discontinues-facial-analysis-screening
https://www.aclu.org/press-releases/complaint-filed-against-intuit-and-hirevue-over-biased-ai-hiring-technology-that-works-worse-for-deaf-and-non-white-applicants
2022 | iTutorGroup – Age Discrimination in AI Hiring | iTutorGroup, a provider of English-language tutoring services, used hiring software that automatically rejected more than 200 applicants based on age: female applicants aged 55 or older and male applicants aged 60 or older. | https://www.eeoc.gov/newsroom/itutorgroup-pay-365000-settle-eeoc-discriminatory-hiring-suit
https://www.berkshireassociates.com/blog/eeoc-settles-first-ai-bias-lawsuit
2022 | LinkedIn – Gender Bias in AI Job Recommendations | LinkedIn’s AI job-recommendation system has faced allegations of gender bias. A 2022 study found it favored male candidates over equally qualified female candidates, and a 2016 investigation showed the search algorithm suggested male name alternatives for female names, highlighting ongoing concerns about bias in AI-driven professional platforms. | https://arxiv.org/abs/2202.07300
https://incidentdatabase.ai/cite/47/
https://nypost.com/2025/06/24/business/ai-hiring-tools-favor-black-female-candidates-over-whites-males/
2016 | Northpointe – COMPAS Racial Bias | The COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) algorithm, developed by Northpointe, was found to exhibit racial bias in its risk assessments. A ProPublica investigation showed that COMPAS was more likely to falsely predict that Black defendants would re-offend than white defendants. | https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
2018 | Buolamwini & Gebru – “Gender Shades” Facial Recognition Bias | A 2018 MIT study, titled “Gender Shades,” found significant disparities in the accuracy of commercial facial recognition systems based on skin tone and gender. | https://news.mit.edu/2018/study-finds-gender-skin-type-bias-artificial-intelligence-systems-0212
2020 | Detroit Wrongful Arrest – Robert Williams | Robert Williams, 42, was wrongfully arrested in Detroit in 2020 after a flawed facial recognition match. | https://www.aclu.org/cases/williams-v-city-of-detroit-face-recognition-false-arrest
https://www.youtube.com/watch?v=WFGOXyMnyRo
https://www.npr.org/2020/06/24/882683463/the-computer-got-it-wrong-how-facial-recognition-led-to-a-false-arrest-in-michig
2021 | PredPol – Predictive Policing Algorithms | Predictive policing reinforced racial bias by over-policing predominantly Black and Latinx communities in several major U.S. cities, based on biased historical crime data. | https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithms-racist-dismantled-machine-learning-bias-criminal-justice/
https://centerforhealthjournalism.org/our-work/insights/reporting-long-shadow-lapds-data-driven-policing-programs
https://www.theguardian.com/us-news/2021/nov/07/lapd-predictive-policing-surveillance-reform
2019 | Obermeyer et al. – Racial Bias in a Health Care Algorithm | Obermeyer et al. (2019) found evidence of racial bias in a widely used algorithm in the U.S. health care system that relies on health costs to predict patient risk. The algorithm systematically undervalued the health needs of Black patients compared to white patients. | https://www.science.org/doi/10.1126/science.aax2342
2021 | Skin Cancer Diagnosis – Skin-Tone Data Bias | AI systems for skin cancer diagnosis may be less accurate for people with dark skin because of non-diverse training data: of 21 open-access datasets, few record ethnicity or skin type, and very few images show darker skin. | https://www.theguardian.com/society/2021/nov/09/ai-skin-cancer-diagnoses-risk-being-less-accurate-for-dark-skin-study
2019 | Optum Health Management Algorithm – Racial Bias in Health Care | Similar to the Obermeyer et al. findings, this system prioritized high-cost patients for extra care management, a proxy that disadvantaged Black patients, who typically incur lower costs because of systemic under-treatment. | https://www.washingtonpost.com/health/2019/10/24/racial-bias-medical-algorithm-favors-white-patients-over-sicker-black-patients/
https://www.theguardian.com/society/2019/oct/25/healthcare-algorithm-racial-biases-optum
https://pubmed.ncbi.nlm.nih.gov/31649194/
2018 | Facebook – Racial and Gender Bias in Ad Targeting | Facebook’s ad system allowed job ads to be targeted by gender and race, with lower-paying job ads disproportionately shown to women and minority users.
2020–2022 | TikTok – Content Moderation Suppression of Marginalized Voices | Leaked moderation guidelines showed TikTok’s AI and human moderators were instructed to suppress content from disabled, queer, fat, and BIPOC creators to reduce “bullying” risk. | https://www.bbc.com/news/technology-50645345
https://www.theguardian.com/technology/2020/mar/17/tiktok-tried-to-filter-out-videos-from-ugly-poor-or-disabled-users
2021 | TikTok – Suggested Accounts Bias | TikTok’s “Suggested Accounts” algorithm allegedly reinforced racial bias through feedback loops, disproportionately recommending white creators and marginalizing creators of color. | https://incidentdatabase.ai/cite/117/
2015 | Google – Gender Bias in Job Ads | Google’s ad-serving algorithm showed high-paying job ads to men more often than to women: ads for executive positions paying over $200,000 were displayed to male users far more often than to female users. | https://csd.cmu.edu/news/fewer-women-than-men-are-shown-online-ads-related-to-highpaying-jobs
https://www.washingtonpost.com/news/the-intersect/wp/2015/07/06/googles-algorithm-shows-prestigious-job-ads-to-men-but-not-to-women-heres-why-that-should-worry-you/
2023 | Uber – Racial Bias Claims in AI Facial Verification | Drivers claimed Uber’s AI-based identity verification system misidentified them, especially darker-skinned drivers, leading to unfair account suspensions and raising concerns about bias and privacy. | https://www.bbc.com/news/technology-68655429
https://www.peoplemanagement.co.uk/article/1866835/uber-eats-worker-wins-payout-racist-ai-facial-recognition-–-hr-learn
2018–present | Biased Maternal-Health AI Puts Black Mothers at Greater Risk | Black women in the U.S. face disproportionately high maternal mortality and morbidity rates, three times those of white women, even when controlling for education or class. Emerging maternal-health AI tools risk worsening outcomes if their training data exclude Black women, reinforcing bias in risk prediction and care guidance. | https://www.vice.com/en/article/bias-in-maternal-ai-could-hurt-expecting-black-mothers/
https://www.mozillafoundation.org/en/blog/addressing-ai-bias-in-maternal-healthcare-in-southern-africa/
2023 | Racial Bias in AI Diagnosing Women’s Health Conditions | University of Florida researchers found that AI tools for diagnosing bacterial vaginosis (BV) show significant bias across ethnic groups. In a study of 400 women (White, Black, Asian, and Hispanic), Hispanic women experienced the most false positives and Asian women the most false negatives; the tools performed best for White women and worst for Asian women, underscoring the need for fairness-focused AI development. | https://news.ufl.edu/2023/11/bias-in-ai-womens-health/
https://www.science.org/content/article/ai-models-miss-disease-black-female-patients
2025 | Demographic Bias in Expert-Level Vision-Language Foundation Models in Medical Imaging | The study evaluated advanced vision-language AI models used for chest X-ray diagnosis across five international datasets. While these systems performed comparably to expert radiologists overall, they consistently underdiagnosed conditions in marginalized groups, especially Black women and other intersectional subgroups, highlighting urgent fairness concerns in clinical AI deployment. | https://www.science.org/doi/10.1126/sciadv.adq0305
2021 | Proctorio – Racial Bias in Face Detection | Researchers analyzed Proctorio’s Chrome extension and found it relied on OpenCV’s face-detection models. Tested against nearly 11,000 images from the FairFace dataset, the tool failed to detect Black faces 57% of the time, making it unreliable and discriminatory. | https://www.vice.com/en/article/proctorio-is-using-racist-algorithms-to-detect-faces/
https://www.aiaaic.org/aiaaic-repository/ai-algorithmic-and-automation-incidents/proctorio-racist-facial-detection
2016 | Microsoft – Tay Chatbot | Microsoft’s Tay chatbot was shut down within a day of launch after Twitter users taught it to post racist and offensive messages. | https://www.bbc.com/news/technology-35902104
https://wou.edu/westernhowl/microsofts-ai-chatbot-tay-turned-pr-disaster/
https://www.cbsnews.com/news/microsoft-shuts-down-ai-chatbot-after-it-turned-into-racist-nazi/
2015 | Google Photos – Image Tagging Bias | Google Photos’ AI labeled Black people as “gorillas,” revealing racial bias in its image-classification models. | https://www.theguardian.com/technology/2018/jan/12/google-racism-ban-gorilla-black-people
https://algorithmwatch.org/en/google-vision-racism/
https://incidentdatabase.ai/cite/16/
2013 | Google Image Search – Gender Bias | Studies found that Google Image Search reinforced gender stereotypes, for example returning mostly male images for terms like “CEO” and mostly female images for “receptionist.” | https://incidentdatabase.ai/cite/18/
https://www.washington.edu/news/2022/02/16/googles-ceo-image-search-gender-bias-hasnt-really-been-fixed/
2021 | Bias in AI Systems Developed for COVID-19 | AI tools used during the COVID-19 pandemic, particularly those relying on data from internet-of-things (IoT) devices, can exhibit biases that discriminate against certain populations. These biases stem from inadequate data collection, skewed media coverage, and false negatives, which can hinder accurate identification of COVID-19 hotspots and undermine equitable access to health care. | https://pmc.ncbi.nlm.nih.gov/articles/PMC9463236/
https://incidentdatabase.ai/cite/173/
2017 | FaceApp – Racial Filters | FaceApp introduced a “race transformation” feature that let users change their appearance to different ethnicities; it was widely criticized as digital blackface and for reinforcing racial stereotypes. | https://www.theguardian.com/technology/2017/aug/10/faceapp-forced-to-pull-racist-filters-digital-blackface
https://incidentdatabase.ai/cite/60/
2020 | Genderify – AI Gender Bias | An AI tool for guessing gender from names reproduced harmful stereotypes and made inaccurate assumptions, and was shut down after backlash. | https://medium.com/syncedreview/ai-powered-genderify-platform-shut-down-after-bias-based-backlash-9fc3e504a9c5
https://www.aiaaic.org/aiaaic-repository/ai-algorithmic-and-automation-incidents/genderify-accused-of-gender-bias
2016 | Beauty Contest AI | AI judges of a beauty contest rated lighter skin tones more favorably because of biased training data. | https://www.theguardian.com/technology/2016/sep/08/artificial-intelligence-beauty-contest-doesnt-like-black-people
https://incidentdatabase.ai/cite/49/
2021 | Korean Chatbot Luda | The Korean AI chatbot Luda made offensive and discriminatory remarks about LGBTQ+ people and minority groups, reflecting bias in training data scraped from online conversations. | https://incidentdatabase.ai/cite/106/
https://www.theguardian.com/world/2021/jan/14/time-to-properly-socialise-hate-speech-ai-chatbot-pulled-from-facebook
https://scholarspace.manoa.hawaii.edu/bitstreams/32282f53-3158-49bb-aeb2-dfee9e70541f/download
2022 | Stable Diffusion – Profession Bias | The image-generation model Stable Diffusion showed gender and racial bias, for example generating images of doctors as mostly white men and of cleaners as women of color, reflecting stereotypes embedded in its training data. | https://incidentdatabase.ai/cite/529/
https://www.bloomberg.com/graphics/2023-generative-ai-bias/
https://www.washington.edu/news/2024/10/31/ai-bias-resume-screening-race-gender/
2022 | DALL·E 2 – Gender and Racial Bias | Users reported that DALL·E 2 generated racially and gender-stereotyped images for prompts like “CEO” (mostly white men) and “nurse” (mostly women), reflecting bias in the model’s training data. | https://incidentdatabase.ai/cite/179/
https://www.nbcnews.com/tech/tech-news/no-quick-fix-openais-dalle-2-illustrated-challenges-bias-ai-rcna39918
2019 | Apple Card – Gender Bias | Apple Card’s credit assessment algorithm allegedly offered significantly lower credit limits to women than to men with similar financial profiles, prompting a regulatory investigation. | https://incidentdatabase.ai/cite/92/
https://www.nytimes.com/2019/11/10/business/Apple-credit-card-investigation.html
https://www.wired.com/story/the-apple-card-didnt-see-genderand-thats-the-problem/
2022 | Spain – Gender Violence Risk Algorithm (VioGén) | An algorithm used to assess the risk faced by victims of gender violence reportedly misclassified high-risk cases as low-risk, allegedly contributing to femicides and child homicides. | https://www.nytimes.com/interactive/2024/07/18/technology/spain-domestic-violence-viogen-algorithm.html
https://incidentdatabase.ai/cite/186/
2009 | Amazon – De-ranking of LGBTQ+ Books | Amazon’s search and ranking algorithm reportedly de-ranked or removed LGBTQ+ books from search results and best-seller lists, allegedly labeling them as “adult” content, raising concerns about algorithmic bias and discrimination. | https://www.theguardian.com/world/2021/mar/12/amazon-stop-selling-books-lgbtq-mental-illness
https://incidentdatabase.ai/cite/15/
2023 | UnitedHealth – AI Coverage Denial | UnitedHealth was accused of using a flawed AI model to deny rehabilitation care to elderly and disabled patients. Lawsuits allege the algorithm was biased and overruled medical judgment, leading to premature coverage denials. | https://incidentdatabase.ai/cite/608/
2020 | Twitter – Image Cropping Bias | Twitter’s image-cropping algorithm was found to prioritize white and male faces in preview thumbnails, reflecting racial and gender bias in how images were auto-cropped. | https://incidentdatabase.ai/cite/103/
https://www.bbc.com/news/technology-57192898
https://www.theguardian.com/technology/2020/sep/21/twitter-apologises-for-racist-image-cropping-algorithm