As of today, only a few perpetrators have allegedly used AI to prepare for violent attacks.
But out of public view, high-risk threat cases involving ChatGPT and other chatbots are on the rise, a trend that should concern all of us. In a new investigation out today, my colleague Mark Follman spoke to several mental health and law enforcement leaders who work in the field of behavioral threat assessment and are sounding the alarm. Mark also tested ChatGPT's guardrails himself and got sobering results, which echoed shocking new details from a recent mass shooting at Florida State University.
Here's a sampling of what Mark learned from the experts:
- “I’ve seen several cases where the chatbot component is pretty incredible,” one top threat assessment source with psychiatric expertise told Mark, describing evidence from confidential investigations. “We’re finding that more people may be more vulnerable to this than we anticipated.”
- “Getting technical information from the chatbot for their plans also gives them a feeling of power.”
- “You have vulnerable individuals who are steeping in unhealthy places, who are trying to find credibility and validation for how they’re feeling. Now they have free and ready access to these generative platforms where they can research things like circumventing surveillance systems or how to use weapons.”
Those insights, together with Mark's reporting, raise urgent questions about the role chatbots could play in future acts of violence. I urge you to read Mark's latest here.
—Inae Oh