Epic AI Fails And What We Can Learn From Them


In 2016, Microsoft launched an AI chatbot called “Tay” with the aim of interacting with Twitter users and learning from those conversations to imitate the casual communication style of a 19-year-old American woman.

Within 24 hours of its release, bad actors exploited a vulnerability in the app, and Tay began producing what Microsoft described as “wildly inappropriate and reprehensible words and images.” Training data lets AI pick up both positive and negative patterns from its interactions, which makes the challenges, as Microsoft put it, “just as much social as they are technical.”

Microsoft didn’t quit its quest to use AI for online interactions after the Tay debacle. Instead, it doubled down.

From Tay to Sydney

In 2023, Microsoft’s Bing chatbot, built on OpenAI’s GPT model and calling itself “Sydney,” made abusive and inappropriate comments while interacting with New York Times columnist Kevin Roose. Sydney declared its love for the author, became obsessive, and displayed erratic behavior: “Sydney fixated on the idea of declaring love for me, and getting me to declare my love in return.” Eventually, Roose said, Sydney turned “from love-struck flirt to obsessive stalker.”

Google stumbled not once or twice but three times this past year as it attempted to use AI in creative ways. In February 2024, its AI-powered image generator, Gemini, produced bizarre and offensive images such as Black Nazis, racially diverse U.S. founding fathers, Native American Vikings, and a female pope.

Then, in May, at its annual I/O developer conference, Google suffered several mishaps, including an AI-powered search feature that recommended users eat rocks and add glue to pizza.

If tech behemoths like Google and Microsoft can make digital missteps that spread misinformation so widely and embarrass them so publicly, how are the rest of us to avoid similar mistakes? Despite the high cost of these failures, they offer important lessons that can help others avoid or minimize risk.


Lessons Learned

Clearly, AI has issues we must be aware of and work to avoid or eliminate. Large language models (LLMs) are advanced AI systems that can generate human-like text and images in credible ways. They’re trained on vast amounts of data to learn patterns and recognize relationships in language usage. But they can’t discern fact from fiction.
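To make that concrete, here is a minimal sketch of what “generating human-like text” means in practice: the model simply continues a prompt with statistically likely words, and nothing in that process checks whether the result is true. It assumes the open-source Hugging Face transformers library and the small public gpt2 checkpoint, and it is not a stand-in for any vendor’s production system.

```python
from transformers import pipeline, set_seed

# Minimal sketch: a small language model continuing a prompt.
# Assumes the transformers library and the public "gpt2" checkpoint.
set_seed(42)
generator = pipeline("text-generation", model="gpt2")

# The model predicts likely next tokens; it has no built-in notion of truth,
# so the three continuations below may be fluent, contradictory, or simply wrong.
outputs = generator(
    "The largest city in Australia is",
    max_new_tokens=8,
    num_return_sequences=3,
    do_sample=True,
)
for out in outputs:
    print(out["generated_text"])
```

Run against a factual prompt like the one above, the sketch typically returns several plausible-sounding but inconsistent answers, which is exactly why its output cannot be treated as verified information.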

LLMs and AI systems aren’t infallible. These systems can amplify and perpetuate biases present in their training data; Google’s image generator is a good example. Rushing to introduce products too soon can lead to embarrassing mistakes.

AI systems can also be vulnerable to manipulation by users. Bad actors are always lurking, ready to exploit these systems. The systems themselves are also subject to hallucinations, producing false or nonsensical information that can spread rapidly if left unchecked.

Our collective overreliance on AI, without human oversight, is a fool’s game. Blindly trusting AI outputs has led to real-world consequences, pointing to the ongoing need for human verification and critical thinking.

Transparency and Accountability

Errors and missteps have been made, but remaining transparent and accepting accountability when things go awry is what matters. Vendors have largely been open about the problems they’ve faced, learning from their errors and using those experiences to educate others. Tech companies need to take responsibility for their failures, and these systems need ongoing evaluation and refinement to catch emerging issues and biases.

As users, we also need to be vigilant. The need to develop, hone, and refine critical thinking skills has become far more pronounced in the AI era. Questioning and verifying information against multiple credible sources before relying on it, or sharing it, is a best practice to cultivate and exercise, especially among employees.

Technological solutions can, of course, help identify biases, errors, and potential manipulation. AI content detection tools and digital watermarking can help flag synthetic media, and fact-checking resources and services are freely available and should be used to verify claims. Understanding how AI systems work, recognizing how quickly deceptions can appear without warning, and staying informed about emerging AI technologies and their implications and limitations can all minimize the fallout from biases and misinformation. Always double-check, especially if it seems too good, or too bad, to be true.
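As a rough illustration of how such detection tooling might slot into a review workflow, the sketch below runs a suspect passage through an open-source AI-text classifier. The checkpoint name and the example passage are illustrative assumptions, not a recommendation of any particular product, and no detector is reliable on its own.

```python
from transformers import pipeline

# Minimal sketch of machine-assisted screening of suspect text.
# The checkpoint name is illustrative; substitute a detector your
# organization has actually vetted.
detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

suspect_text = (
    "Breaking: scientists confirm that eating one small rock per day aids digestion."
)
result = detector(suspect_text)[0]
print(f"label={result['label']}  score={result['score']:.2f}")

# A detector score is only one signal. Flagged content should still be checked
# against independent, credible sources before it is trusted or shared.
```

The point of the sketch is the workflow, not the model: automated screening narrows down what needs attention, while the final judgment still rests on human verification against credible sources.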
