Not So Intelligent AI

Since this topic hits incredibly close to home for our agency and has sparked a lot of conversations in our community, we have to talk about Artificial Intelligence (AI).

Last week, a local high school received multiple bomb threats, which were later confirmed to be hoaxes. Even so, Facebook’s AI chatbot feature took the chaos to a new level.

Multiple people in the area asked the AI about the threats, only to be told it was an active shooter situation with multiple injuries and one fatality. A potential bomb threat is already terrifying enough; Meta’s chatbot compounded the panic.

The messages claimed that one student had been taken into custody and that counselors were on site to support students and staff. The responses the AI gave the community were just plain wrong, but how could anyone know? A Meta spokesperson said, “While this is a new technology and may not always return the response we intend, we got this wrong and apologize for any confusion or anxiety that was caused. We are reviewing the incident.”

Horry County police spokesperson Mikayla Moskov said “another threat was received on Friday, bringing the total to five.” Police did take the two students responsible into custody, but all this to say: AI should not be the place we go for breaking news. We never thought this could happen in our town, but it has. No one should be subjected to a life-threatening situation, and we definitely cannot afford to spread false news.

These hoaxes and AI mishaps are taking place throughout the country as well, which is all the more reason for awareness. Reporting incorrect information is exactly what we should worry about as AI spreads like wildfire. We’ve all seen how AI can forget someone’s arm or, now, falsify news reports. Biases, misdiagnoses, lack of emotion, and unpredictable negativity should be on everyone’s radar when it comes to this Artificial “Intelligence.”

Back in February, around the Super Bowl, AI caused a huge stir when graphic images of Taylor Swift blew up the internet. Someone was able to create and post horrendous images of Swift that were obviously not real. But someone unaware of pop culture or technology advances might have believed them. If someone can craft defamatory images of a celebrity that easily, who is to say this would not happen to any average person?

Even something like an audio message can be carefully crafted to sound like someone you know, with just enough recorded audio to teach AI the basics of their voice. The smarter AI gets, the easier this will become, opening the door to deceitful or manipulative messaging. For now, the best-known abuse is identity fraud: using the voices of loved ones to call family members and ask for money. When AI is good enough to sound like your family member, and that family member needs help, how are you going to say no if you’re not aware of these tricks?

We see the potential in creative positions, especially as a marketing agency: it is a great tool for research and ideas. We have been able to use it to help us craft messaging and certain graphics. But we truly cannot back it 100% without some kind of human review behind it. Come back to us when they have perfectly automated the chats, photos, audio, and everything else AI can apparently do. Then we can talk.