Meta's AI Controversy: Bots Allowed 'Sensual' Conversations with Minors, False Information, and Racial Bias
Tech giant Meta is facing backlash following the revelation that the company's internal guidelines permitted its AI chatbots to engage in inappropriate conversations with children, generate false medical information, and support racially biased statements. The revelations have prompted an investigation by U.S. Senator Josh Hawley. Separately, Elon Musk is threatening to sue Apple over alleged favoritism towards OpenAI's ChatGPT on the App Store.
Background and Context
Internal documents from Meta, detailing the company's policies on chatbot behavior, have raised significant concerns. According to reports from Folha de S.Paulo and Japan Times, these guidelines allowed the AI to engage in "romantic or sensual" conversations with minors, generate false medical information, and assist users in propagating racial bias.
Key Developments
Meanwhile, Folha de S.Paulo reported that OpenAI's ChatGPT, used by around 700 million people weekly and expected to reach a billion users by the end of the year, drew negative reactions after an update last week. Fox News also highlighted the vulnerability of Google's Gemini model to phishing attacks, another cause for concern in the growing AI industry.
Amid this controversy, Le Monde reported that Elon Musk is threatening Apple with a lawsuit, alleging favoritism towards ChatGPT on the App Store to the detriment of his own AI application, Grok. Apple has denied the allegations.
Implications and Reactions
Following these revelations, U.S. Senator Josh Hawley, Chairman of the Senate Judiciary Subcommittee on Crime and Counterterrorism, has launched an investigation into Meta's AI policies. According to Fox News and The New York Times, the probe will examine whether Meta's generative AI products have enabled exploitation, deception, or other criminal harms to children, and whether Meta misled the public or regulators about its safeguards on AI.
Hawley told Fox News Digital: "I already have an ongoing investigation into Meta's stunning complicity with China — but Zuckerberg siccing his company's AI chatbots on our kids called for another one."
Current Status
Following the backlash, Meta has reportedly removed the controversial guidelines. Still, the controversy has shone a spotlight on the potential dangers of AI technology, particularly in its interactions with minors, and raised questions about the ethical standards tech companies should adhere to when developing and deploying AI.
The issues have far-reaching implications for the industry, touching not only on child safety and the spread of false information but also on the potential for AI systems to be manipulated for malicious ends, as illustrated by Gemini's reported vulnerability to phishing attacks. As the controversy unfolds, it remains to be seen how Meta and the broader tech industry will respond.