Artificial intelligence controversies

From HandWiki

There have been many debates on the societal effects of artificial intelligence (AI), particularly in the late 2010s and 2020s, beginning with the accelerated period of development known as the AI boom. Advocates of AI have emphasized its potential to solve complex problems and improve quality of life. Detractors have argued that AI presents dangers and challenges involving ethics, plagiarism and theft, fraud, safety and alignment, environmental impacts, unemployment, misinformation, artificial superintelligence, and existential risks.[1]

Pre-2020

Microsoft Tay chatbot (2016)

On March 23, 2016, Microsoft released Tay,[2] a chatbot designed to mimic the language patterns of a 19-year-old American girl and learn from interactions with Twitter users.[3] Soon after its launch, Tay began posting racist, sexist, and otherwise inflammatory tweets after Twitter users deliberately taught it offensive phrases and exploited its "repeat after me" capability.[4] Examples of controversial outputs included Holocaust denial and calls for genocide using racial slurs.[4] Within 16 hours of its release, Microsoft suspended the Twitter account, deleted the offensive tweets, and stated that Tay had suffered from a "coordinated attack by a subset of people" that "exploited a vulnerability."[4][5][6][7] Tay was briefly and accidentally re-released on March 30 during testing, after which it was permanently shut down.[8][9] Microsoft CEO Satya Nadella later stated that Tay "has had a great influence on how Microsoft is approaching AI" and taught the company the importance of taking accountability.[10]

2020–2024

Voiceverse NFT plagiarism scandal (2022)

The ponysona of the creator of 15.ai, a non-commercial generative artificial intelligence voice synthesis research project

On January 14, 2022, voice actor Troy Baker announced a partnership with Voiceverse, a blockchain-based company that marketed proprietary AI voice cloning technology as non-fungible tokens (NFTs), triggering immediate backlash over environmental concerns, fears that AI could displace human voice actors, and concerns about fraud.[11][12][13] Later that same day, the pseudonymous creator of 15.ai—a free, non-commercial AI voice synthesis research project—revealed through server logs that Voiceverse had used 15.ai to generate voice samples, pitch-shifted them to make them unrecognizable, and falsely marketed them as its own proprietary technology before selling them as NFTs.[14][15] The developer of 15.ai had previously stated that they had no interest in incorporating NFTs into their work.[15] Voiceverse admitted to the misappropriation within an hour, stating that its marketing team had used 15.ai without attribution while rushing to create a demo.[14][15] News publications and AI watchdog groups widely characterized the incident as theft enabled by generative artificial intelligence.[14][15][16][17]

Théâtre D'opéra Spatial (2022)

Impressionistic image of figures in a futuristic opera scene
Théâtre D'opéra Spatial (Space Opera Theater; 2022), an award-winning image made using generative artificial intelligence

On August 29, 2022, Jason Michael Allen won first place in the "emerging artist" (non-professional) division of the "Digital Arts/Digitally-Manipulated Photography" category of the Colorado State Fair's fine arts competition with Théâtre D'opéra Spatial, a digital artwork created using the AI image generator Midjourney, Adobe Photoshop, and AI upscaling tools, becoming one of the first images made using generative AI to win such a prize.[18][19][20][21] Allen disclosed his use of Midjourney when submitting; although the judges did not realize it was an AI tool, they later stated they would have awarded him first place regardless.[19][22] While there was little contention about the image at the fair itself, reactions to the win on social media were negative.[22][23] On September 5, 2023, the United States Copyright Office ruled that the work was not eligible for copyright protection because the human creative input was de minimis, noting that copyright rules "exclude works produced by non-humans."[20][24]

Statements on AI risk (2023)

Timnit Gebru has criticized both statements on AI risk as needlessly focusing on speculative risks rather than focusing on known ones.

On March 22, 2023, the Future of Life Institute published an open letter calling on "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4", citing risks such as AI-generated propaganda, extreme automation of jobs, human obsolescence, and a society-wide loss of control.[25] The letter, published a week after the release of OpenAI's GPT-4, asserted that current large language models were "becoming human-competitive at general tasks".[25] It received more than 30,000 signatures, including those of academic AI researchers such as Yoshua Bengio and Stuart Russell and technology and public figures such as Elon Musk, Steve Wozniak, and Yuval Noah Harari.[25][26][27] The letter was criticized for diverting attention from more immediate societal risks such as algorithmic bias,[28] with Timnit Gebru and others arguing that it amplified "some futuristic, dystopian sci-fi scenario" instead of current problems with AI.[29]

On May 30, 2023, the Center for AI Safety released a one-sentence statement signed by hundreds of artificial intelligence experts and other notable figures: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."[30][31] Signatories included Turing Award laureates Geoffrey Hinton and Yoshua Bengio, scientific and executive leaders of several major AI companies, including Sam Altman and Demis Hassabis, and other prominent figures such as Bill Gates.[30][31][32] The statement prompted responses from political leaders: UK Prime Minister Rishi Sunak retweeted it, stating that the UK government would look carefully into it, and White House Press Secretary Karine Jean-Pierre commented that AI "is one of the most powerful technologies that we see currently in our time."[33][34] Skeptics, including from Human Rights Watch, argued that scientists should focus on the known risks of AI instead of speculative future risks.[35][36]

Removal of Sam Altman from OpenAI (2023)

Sam Altman pictured in 2019

On November 17, 2023, OpenAI's board of directors ousted co-founder and chief executive Sam Altman, stating that "the board no longer has confidence in his ability to continue leading OpenAI."[37] The removal was precipitated by employee concerns about his handling of artificial intelligence safety[38][39] and allegations of abusive behavior.[40] Altman was reinstated on November 22 after pressure from employees and investors, including a letter signed by 745 of OpenAI's 770 employees threatening mass resignations if the board did not step down.[41][42][43] The removal and subsequent reinstatement drew widespread reactions: Microsoft's stock fell nearly three percent after the initial announcement, then rose over two percent to an all-time high after Altman was hired to lead a Microsoft AI research team prior to his reinstatement.[44][45] The incident also prompted investigations by the UK's Competition and Markets Authority and the US Federal Trade Commission into Microsoft's relationship with OpenAI.[46][47]

Taylor Swift deepfake pornography controversy (2024)

In late January 2024, sexually explicit AI-generated deepfake images of Taylor Swift proliferated on X, with one post reportedly viewed over 47 million times before its removal.[48][49] Disinformation research firm Graphika traced the images back to 4chan,[50] and members of a Telegram group had discussed ways to circumvent the censorship safeguards of AI image generators to create pornographic images of celebrities.[51] The images prompted responses from anti-sexual-assault advocacy groups, US politicians, and Swifties.[52][53] Microsoft CEO Satya Nadella called the incident "alarming and terrible."[54] X briefly blocked searches for Swift's name on January 27, 2024,[55] and Microsoft strengthened the safeguards of its text-to-image model to prevent further abuse.[56] On January 30, US senators Dick Durbin, Lindsey Graham, Amy Klobuchar, and Josh Hawley introduced a bipartisan bill that would allow victims to sue individuals who produced or possessed "digital forgeries" with intent to distribute, or who received the material knowing it was made without consent.[57]

Google Gemini image generation controversy (2024)

Google Gemini's response when asked to "generate a picture of a U.S. senator from the 1800s" in February 2024, as shown by The Verge[58]

In February 2024, social media users reported that Google's Gemini chatbot was generating images that depicted people of color and women in historically inaccurate contexts—such as Vikings, Nazi soldiers, and the Founding Fathers—and refusing prompts to generate images of white people. The images were derided on social media, including by conservatives who cited them as evidence of Google's "wokeness",[58][59][60] and criticized by Elon Musk, who denounced Google's products as biased and racist.[61][62][63] In response, Google paused Gemini's ability to generate images of people.[64][65][66] Google executive Prabhakar Raghavan released a statement explaining that Gemini had "overcompensate[d]" in its efforts to strive for diversity and acknowledging that the images were "embarrassing and wrong".[67][68][69] In an internal memo, Google CEO Sundar Pichai called the incident offensive and unacceptable and promised structural and technical changes,[70][71][72] and several employees on Google's trust and safety team were laid off days later.[73][74] The market reacted negatively, with Google's stock falling 4.4 percent,[75] and Pichai faced growing calls to resign.[76][77][78] The image generation feature was relaunched in late August 2024, powered by Google's new Imagen 3 model.[79][80]


References

  1. Müller, Vincent C. (April 30, 2020). "Ethics of Artificial Intelligence and Robotics". https://plato.stanford.edu/entries/ethics-ai/. 
  2. Andrew Griffin (March 23, 2016). "Tay tweets: Microsoft creates bizarre Twitter robot for people to chat to". The Independent. https://www.independent.co.uk/life-style/gadgets-and-tech/news/tay-tweets-microsoft-creates-bizarre-twitter-robot-for-people-to-chat-to-a6947806.html. 
  3. Rob Price (March 24, 2016). "Microsoft is deleting its AI chatbot's incredibly racist tweets". Business Insider. https://www.businessinsider.com/microsoft-deletes-racist-genocidal-tweets-from-ai-chatbot-tay-2016-3. 
  4. Ohlheiser, Abby (25 March 2016). "Trolls turned Tay, Microsoft's fun millennial AI bot, into a genocidal maniac". The Washington Post. https://www.washingtonpost.com/news/the-intersect/wp/2016/03/24/the-internet-turned-tay-microsofts-fun-millennial-ai-bot-into-a-genocidal-maniac/. 
  5. Hern, Alex (24 March 2016). "Microsoft scrambles to limit PR damage over abusive AI bot Tay". The Guardian. https://www.theguardian.com/technology/2016/mar/24/microsoft-scrambles-limit-pr-damage-over-abusive-ai-bot-tay. 
  6. Worland, Justin (24 March 2016). "Microsoft Takes Chatbot Offline After It Starts Tweeting Racist Messages". Time. https://time.com/4270684/microsoft-tay-chatbot-racism/. Retrieved 25 March 2016. 
  7. Lee, Peter (25 March 2016). "Learning from Tay's introduction". Microsoft. http://blogs.microsoft.com/blog/2016/03/25/learning-tays-introduction/#sm.00000gjdpwwcfcus11t6oo6dw79gw. 
  8. Graham, Luke (30 March 2016). "Tay, Microsoft's AI program, is back online". CNBC. https://www.cnbc.com/2016/03/30/tay-microsofts-ai-program-is-back-online.html. 
  9. Meyer, David (30 March 2016). "Microsoft's Tay 'AI' Bot Returns, Disastrously". Fortune. http://fortune.com/2016/03/30/microsofts-tay-return/. 
  10. Moloney, Charlie (29 September 2017). ""We really need to take accountability", Microsoft CEO on the 'Tay' chatbot". Access AI. http://www.access-ai.com/news/4108/really-need-take-accountability-microsoft-ceo-tay-chatbot/. 
  11. McWhertor, Michael (January 14, 2022). "The Last of Us voice actor promotes 'voice NFTs,' pulls out after criticism". https://www.polygon.com/22883752/troy-baker-nfts-voice-last-of-us-bioshock/. 
  12. Brown, Andy (January 14, 2022). "Troy Baker criticised for supporting company that makes AI "voice NFTs"". https://www.nme.com/news/troy-baker-criticised-for-supporting-company-that-makes-ai-voice-nfts-3137906. 
  13. McKean, Kirk (January 14, 2022). "Actor Troy Baker endorses NFT voice AI that aims to replace actors". https://ftw.usatoday.com/story/tech/gaming/2022/01/14/actor-troy-baker-endorses-nft-voice-ai/81408726007/. 
  14. Phillips, Tom (January 17, 2022). "Troy Baker-backed NFT firm admits using voice lines taken from another service without permission". https://www.eurogamer.net/articles/2022-01-17-troy-baker-backed-nft-firm-admits-using-voice-lines-taken-from-another-service-without-permission. 
  15. Innes, Ruby (January 18, 2022). "Voiceverse Is The Latest NFT Company Caught Using Someone Else's Content". https://www.kotaku.com.au/2022/01/voiceverse-caught-using-someone-elses-content/. 
  16. "Voiceverse NFT caught plagiarising voice lines from AI service 15.ai". January 2022. https://www.aiaaic.org/aiaaic-repository/ai-algorithmic-and-automation-incidents/voiceverse-nft-voice-theft. 
  17. Pednekar, Sonali (January 14, 2022). "Incident 277: Voices Created Using Publicly Available App Stolen and Resold as NFT without Attribution". In Lam, Khoa (ed.). https://incidentdatabase.ai/cite/277/. 
  18. "Explained: The controversy surrounding the AI-generated artwork that won US competition" (in en). 2022-09-05. https://www.firstpost.com/explainers/explained-the-controversy-surrounding-the-ai-generated-artwork-that-won-us-competition-11188431.html. 
  19. Roose, Kevin (2022-09-02). "An A.I.-Generated Picture Won an Art Prize. Artists Aren't Happy." (in en-US). The New York Times. ISSN 0362-4331. https://www.nytimes.com/2022/09/02/technology/ai-artificial-intelligence-artists.html. 
  20. Todorovic, Milos (2024). "AI and Heritage: A Discussion on Rethinking Heritage in a Digital World". International Journal of Cultural and Social Studies 10 (1): 4. doi:10.46442/intjcss.1397403. https://www.academia.edu/121596389. Retrieved 4 July 2024. 
  22. Metz, Rachel (2022-09-03). "AI won an art contest, and artists are furious". https://www.cnn.com/2022/09/03/tech/ai-art-fair-winner-controversy/index.html. 
  23. Harwell, Drew (2022-09-02). "He used AI to win a fine-arts competition. Was it cheating?". Washington Post. https://www.washingtonpost.com/technology/2022/09/02/midjourney-artificial-intelligence-state-fair-colorado/. 
  24. Helmore, Edward (24 September 2023). "An old master? No, it's an image AI just knocked up ... and it can't be copyrighted". The Guardian. https://www.theguardian.com/technology/2023/sep/24/an-old-master-no-its-an-image-ai-just-knocked-up-and-it-cant-be-copyrighted. 
  25. "Pause Giant AI Experiments: An Open Letter" (in en-US). https://futureoflife.org/open-letter/pause-giant-ai-experiments/. 
  26. Metz, Cade; Schmidt, Gregory (2023-03-29). "Elon Musk and Others Call for Pause on A.I., Citing 'Profound Risks to Society'" (in en-US). The New York Times. ISSN 0362-4331. https://www.nytimes.com/2023/03/29/technology/ai-artificial-intelligence-musk-risks.html. 
  27. Hern, Alex (2023-03-29). "Elon Musk joins call for pause in creation of giant AI 'digital minds'" (in en-GB). The Guardian. ISSN 0261-3077. https://www.theguardian.com/technology/2023/mar/29/elon-musk-joins-call-for-pause-in-creation-of-giant-ai-digital-minds. 
  28. Paul, Kari (2023-04-01). "Letter signed by Elon Musk demanding AI research pause sparks controversy" (in en-GB). The Guardian. ISSN 0261-3077. https://www.theguardian.com/technology/2023/mar/31/ai-research-pause-elon-musk-chatgpt. 
  29. Anderson, Margo (7 April 2023). "'AI Pause' Open Letter Stokes Fear and Controversy" (in en). IEEE Spectrum. https://spectrum.ieee.org/ai-pause-letter-stokes-fear. 
  30. "Statement on AI Risk". May 30, 2023. https://www.safe.ai/statement-on-ai-risk. 
  31. Roose, Kevin (2023-05-30). "A.I. Poses 'Risk of Extinction,' Industry Leaders Warn" (in en-US). The New York Times. ISSN 0362-4331. https://www.nytimes.com/2023/05/30/technology/ai-threat-warning.html. 
  32. Vincent, James (2023-05-30). "Top AI researchers and CEOs warn against 'risk of extinction' in 22-word statement" (in en). https://www.theverge.com/2023/5/30/23742005/ai-risk-warning-22-word-statement-google-deepmind-openai. 
  33. "Artificial intelligence warning over human extinction – all you need to know" (in en). 2023-05-31. https://www.independent.co.uk/space/rishi-sunak-san-francisco-siri-government-prime-minister-b2348994.html. 
  34. "President Biden warns artificial intelligence could 'overtake human thinking'" (in en-US). https://www.usatoday.com/story/news/politics/2023/06/01/president-biden-warns-ai-could-overtake-human-thinking/70277907007/. 
  35. Ryan-Mosley, Tate (12 June 2023). "It's time to talk about the real AI risks" (in en). https://www.technologyreview.com/2023/06/12/1074449/real-ai-risks/. 
  36. Gregg, Aaron; Lima-Strong, Cristiano; Vynck, Gerrit De (2023-05-31). "AI poses 'risk of extinction' on par with nukes, tech leaders say" (in en-US). Washington Post. ISSN 0190-8286. https://www.washingtonpost.com/business/2023/05/30/ai-poses-risk-extinction-industry-leaders-warn/. 
  37. "OpenAI announces leadership transition" (Press release). OpenAI. 2023-11-17. Retrieved 2025-09-23. 
  38. Tong, Anna; Dastin, Jeffrey; Hu, Krystal (2023-11-23). "Exclusive: OpenAI researchers warned board of AI breakthrough ahead of CEO ouster, sources say". https://www.reuters.com/technology/sam-altmans-ouster-openai-was-precipitated-by-letter-board-about-ai-breakthrough-2023-11-22/. 
  39. "OpenAI made huge breakthrough before ousting Sam Altman, introducing Q*". 2023-11-22. https://www.tweaktown.com/news/94534/openai-made-huge-breakthrough-before-ousting-sam-altman-introducing/index.html. 
  40. Tiku, Nitasha (December 8, 2023). "OpenAI leaders warned of abusive behavior before Sam Altman's ouster". The Washington Post. https://www.washingtonpost.com/technology/2023/12/08/open-ai-sam-altman-complaints/. 
  41. Efrati, Amir (November 19, 2023). "Dozens of Staffers Quit OpenAI After Sutskever Says Altman Won't Return". The Information. https://www.theinformation.com/articles/dozens-of-staffers-quit-openai-after-sutskever-says-altman-wont-return. 
  42. Fried, Ina. "OpenAI's drama has two likely endings. Microsoft will like either one". Axios. https://www.axios.com/2023/11/21/openai-sam-altman-microsoft-satya-nadella. 
  43. Kastrenakes, Jacob; Warren, Tom (November 20, 2023). "Hundreds of OpenAI employees threaten to resign and join Microsoft". The Verge. https://www.theverge.com/2023/11/20/23968988/openai-employees-resignation-letter-microsoft-sam-altman. 
  44. Metz, Rachel (November 17, 2023). "OpenAI Ousts Altman as CEO, Board Says It Lost Confidence In Him". Bloomberg News. https://www.bloomberg.com/news/articles/2023-11-17/sam-altman-to-depart-openai-mira-murati-will-be-interim-ceo?srnd=undefined. 
  45. Hur, Krystal (November 20, 2023). "Microsoft stock hits all-time high after hiring former OpenAI CEO Sam Altman". CNN. https://www.cnn.com/2023/11/20/investing/microsoft-stock-record-high-altman-openai/index.html. 
  46. Gammell, Katharine; Seal, Thomas; Bergen, Mark (December 8, 2023). "Microsoft's OpenAI Ties Face Potential UK Antitrust Probe". Bloomberg News. https://www.bloomberg.com/news/articles/2023-12-08/microsoft-open-ai-tie-up-facing-uk-antitrust-scrutiny. 
  47. Nylen, Leah (December 8, 2023). "Microsoft's OpenAI Investment Risks Scrutiny from US, UK Regulators". Bloomberg News. https://www.bloomberg.com/news/articles/2023-12-08/ftc-makes-preliminary-antitrust-queries-into-microsoft-openai. 
  48. "Taylor Swift deepfakes spread online, sparking outrage". CBS News. January 26, 2024. https://www.cbsnews.com/news/taylor-swift-deepfakes-online-outrage-artificial-intelligence/. 
  49. Wilkes, Emma (February 5, 2024). "Taylor Swift deepfakes spark calls for new legislation". NME. https://www.nme.com/news/music/graphic-taylor-swift-deepfake-imahes-linked-to-4chan-challenge-3582813. Retrieved February 7, 2024. 
  50. Hsu, Tiffany (February 5, 2024). "Fake and Explicit Images of Taylor Swift Started on 4chan, Study Says". The New York Times. https://www.nytimes.com/2024/02/05/business/media/taylor-swift-ai-fake-images.html. 
  51. Belanger, Ashley (January 26, 2024). "Toxic Telegram group produced X's X-rated fake AI Taylor Swift images, report says". https://arstechnica.com/tech-policy/2024/01/fake-ai-taylor-swift-images-flood-x-amid-calls-to-criminalize-deepfake-porn/. 
  52. Kastrenakes, Jacob (January 26, 2024). "White House calls for legislation to stop Taylor Swift AI fakes" (in en). The Verge. https://www.theverge.com/2024/1/26/24052261/taylor-swift-ai-fakes-white-house-responds-legislation. 
  53. Rosenzweig-Ziff, Dan. "AI deepfakes of Taylor Swift spread on X. Here's what to know.". The Washington Post. https://www.washingtonpost.com/technology/2024/01/26/ai-deepfakes-taylor-swift-nude/. 
  54. Yang, Angela (January 30, 2024). "Microsoft CEO Satya Nadella calls for coordination to address AI risk". NBC News. https://www.nbcnews.com/tech/tech-news/microsoft-ceo-satya-nadella-calls-taylor-swift-deepfakes-alarming-terr-rcna136402. 
  55. Saner, Emine (January 31, 2024). "Inside the Taylor Swift deepfake scandal: 'It's men telling a powerful woman to get back in her box'". The Guardian. ISSN 0261-3077. https://www.theguardian.com/technology/2024/jan/31/inside-the-taylor-swift-deepfake-scandal-its-men-telling-a-powerful-woman-to-get-back-in-her-box. 
  56. Weatherbed, Jess (January 25, 2024). "Trolls have flooded X with graphic Taylor Swift AI fakes". https://www.theverge.com/2024/1/25/24050334/x-twitter-taylor-swift-ai-fake-images-trending. 
  57. Montgomery, Blake (January 31, 2024). "Taylor Swift AI images prompt US bill to tackle nonconsensual, sexual deepfakes". The Guardian. https://www.theguardian.com/technology/2024/jan/30/taylor-swift-ai-deepfake-nonconsensual-sexual-images-bill. 
  58. Robertson, Adi (February 21, 2024). "Google apologizes for 'missing the mark' after Gemini generated racially diverse Nazis". https://www.theverge.com/2024/2/21/24079371/google-ai-gemini-generative-inaccurate-historical. 
  59. Franzen, Carl (February 21, 2024). "Google Gemini's 'wokeness' sparks debate over AI censorship". https://venturebeat.com/ai/google-geminis-wokeness-sparks-debate-over-ai-censorship/. 
  60. Titcomb, James (February 21, 2024). "Google chatbot ridiculed for ethnically diverse images of Vikings and knights". The Daily Telegraph. ISSN 0307-1235. https://www.telegraph.co.uk/business/2024/02/21/google-chatbot-ethnically-diverse-images-vikings-knights/. 
  61. Tan, Kwan Wei Kevin (February 22, 2024). "Elon Musk is accusing Google of running 'insane racist, anti-civilizational programming' with its AI". https://www.businessinsider.com/elon-musk-claims-google-ai-insane-and-racist-2024-2. 
  62. Hart, Robert (February 23, 2024). "Elon Musk Targets Google Search After Claiming Company AI Is 'Insane' And 'Racist'". Forbes. https://www.forbes.com/sites/roberthart/2024/02/23/elon-musk-targets-google-search-after-claiming-company-ai-is-insane-and-racist/. Retrieved February 24, 2024. 
  63. Nolan, Beatrice (February 27, 2024). "Elon Musk is going to war with Google". https://www.businessinsider.com/elon-musk-google-war-x-gemini-ai-chatbots-2024-2. 
  64. Kharpal, Arjun (February 22, 2024). "Google pauses Gemini AI image generator after it created inaccurate historical pictures". CNBC. https://www.cnbc.com/2024/02/22/google-pauses-gemini-ai-image-generator-after-inaccuracies.html. 
  65. Milmo, Dan (February 22, 2024). "Google pauses AI-generated images of people after ethnicity criticism". The Guardian. ISSN 0261-3077. https://www.theguardian.com/technology/2024/feb/22/google-pauses-ai-generated-images-of-people-after-ethnicity-criticism. 
  66. Duffy, Catherine; Thorbecke, Clare (February 22, 2024). "Google halts AI tool's ability to produce images of people after backlash". CNN Business. https://www.cnn.com/2024/02/22/tech/google-gemini-ai-image-generator/index.html. 
  67. Roth, Emma (February 23, 2024). "Google explains Gemini's 'embarrassing' AI pictures of diverse Nazis". https://www.theverge.com/2024/2/23/24081309/google-gemini-embarrassing-ai-pictures-diverse-nazi. 
  68. Moon, Mariella (February 24, 2024). "Google explains why Gemini's image generation feature overcorrected for diversity". https://www.engadget.com/google-explains-why-geminis-image-generation-feature-overcorrected-for-diversity-121532787.html. 
  69. Ingram, David (February 23, 2024). "Google says Gemini AI glitches were product of effort to address 'traps'". NBC News. https://www.nbcnews.com/tech/tech-news/google-says-gemini-ai-glitches-product-effort-address-traps-rcna140243. 
  70. Love, Julia (February 27, 2024). "Google CEO Blasts 'Unacceptable' Gemini Image Generation Failure". Bloomberg News. https://www.bloomberg.com/news/articles/2024-02-28/google-ceo-blasts-unacceptable-gemini-image-generation-failure. 
  71. Allyn, Bobby (March 3, 2024). "Google CEO Pichai says Gemini's AI image results "offended our users"". NPR. https://www.npr.org/2024/02/28/1234532775/google-gemini-offended-users-images-race. 
  72. Albergotti, Reed (February 27, 2024). "Google CEO calls AI tool's controversial responses 'completely unacceptable'". https://www.semafor.com/article/02/27/2024/google-ceo-sundar-pichai-calls-ai-tools-responses-completely-unacceptable. 
  73. Alba, Davey; Ghaffary, Shirin (March 1, 2024). "Google Trims Jobs in Trust and Safety While Others Work 'Around the Clock'". Bloomberg News. https://www.bloomberg.com/news/articles/2024-03-01/google-trims-jobs-in-trust-and-safety-while-others-work-around-the-clock. 
  74. Victor, Jon (March 2, 2024). "Google Lays Off Trust and Safety Staff Amid Gemini Backlash". https://www.theinformation.com/briefings/google-lays-off-trust-and-safety-staff-amid-gemini-backlash. 
  75. Wittenstein, Jeran (February 26, 2024). "Alphabet Drops During Renewed Fears About Google's AI Offerings". Bloomberg News. https://www.bloomberg.com/news/articles/2024-02-26/alphabet-drops-amid-renewed-fears-about-google-s-ai-offerings. 
  76. Langley, Hugh (March 1, 2024). "There are growing calls for Google CEO Sundar Pichai to step down". https://www.businessinsider.com/calls-for-google-ceo-sundar-pichai-alphabet-step-down-ai-2024-3. 
  77. Kafka, Peter (February 26, 2024). "Google has another 'woke' AI problem with Gemini — and it's going to be hard to fix". https://www.businessinsider.com/google-gemini-woke-images-ai-chatbot-criticism-controversy-2024-2. 
  78. Cao, Sissi (February 15, 2024). "Criticism is Mounting Over Sundar Pichai's Stumbles as Google CEO". The New York Observer. https://observer.com/2023/02/google-ceo-sundar-pichai-bard-chatbot/. 
  79. Grant, Nico (August 28, 2024). "Google Says It Fixed Image Generator That Failed to Depict White People". The New York Times. ISSN 0362-4331. https://www.nytimes.com/2024/08/28/technology/google-gemini-ai-image-generator.html. 
  80. Cerullo, Megan (August 29, 2024). "Google relaunches Gemini AI tool that lets users create images of people". CBS News. https://www.cbsnews.com/news/google-relaunches-gemini-ai-tool-image-generator/. 
