In today’s column, I examine the brouhaha over Anthropic’s latest AI, known as Claude Mythos Preview, which has attracted tremendous controversy even though it hasn’t yet been released for public use.
You might have seen major news headlines or vociferous postings on social media about Mythos. The deal is that Anthropic discovered during lab testing that its latest unreleased AI can do bad things and reveal dire secrets that would be harmful to humankind. A primary area of concern is that Mythos uncovered a plethora of cybersecurity holes that evildoers could use to undermine a large swath of computing throughout society. I’ll explain momentarily how modern-era generative AI and large language models (LLMs) can veer into such untoward territory.
The AI maker has opted to convene AI specialists and cybersecurity professionals to assess Mythos amid the myriad unsavory system exploits it seems to have in hand. The effort is known as Project Glasswing, and per the official website: “Today we’re announcing Project Glasswing, a new initiative that brings together Amazon Web Services, Anthropic, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks in an effort to secure the world’s most critical software. We formed Project Glasswing because of the capabilities we’ve observed in a new frontier model trained by Anthropic that we believe could reshape cybersecurity.”
Let’s talk about the whole conundrum.