
In an era of super-accelerated technological advancement, the specter of malevolent artificial intelligence (AI) looms large. While AI holds promise for transforming industries and enhancing human life, the potential for abuse poses significant societal risks. Threats include avalanches of misinformation, deepfake videos, voice mimicry, sophisticated phishing scams, inflammatory ethnic and religious rhetoric, and autonomous weapons that make life-and-death decisions without human intervention.

During this election year in the United States, some worry that bad-actor AI will sway the outcomes of hotly contested races. We spoke with Neil Johnson, a professor of physics at George Washington University, about his research that maps out where AI threats originate and how we can keep ourselves safe.

“Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.” - Stephen Hawking

AI: A tale of love and fear
Humanity’s relationship status with AI would read: It’s Complicated. ChatGPT, the chatbot from OpenAI, is estimated to have reached 100 million monthly active users just two months after launch, making it the fastest-growing app ever. Indeed, the AI market size is expected to reach a whopping $407 billion by 2027. However, a survey showed that over 75% of consumers are concerned about misinformation from AI. Movies and shows about AI gone wrong illuminate our collective uneasiness with a technology that could disrupt, even destroy, our very existence.

“If humans are in a battle with AI there needs to be a deeper understanding of the battlefield.” - Professor Neil Johnson

Mapping and tracking AI threats 
When Johnson and his team started their research, he saw that lawmakers and thought leaders around the world were talking about the many online threats to our mental and physical health. “But without any kind of map that I know about,” says Johnson. His new article “Controlling bad-actor-artificial intelligence activity at scale across online battlefields” aims to fill that gap.

We are used to picturing maps of the world. “Now imagine doing that for the online world—lots of platforms. What does that look like in terms of which is next to what and how close are you to where there might be some bad activity going on?” says Johnson.

“So that’s what we set out to do. Starting a few years ago, we basically mapped out all of that machinery. In other words, the roadmap of the communities that are trying to undermine established and expert beliefs.”
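To make that idea concrete, here is a minimal sketch of what such a map can look like in code: communities become nodes in a graph, observed links between them become edges, and “how close are you to bad activity” becomes a shortest-path distance. Every community name, platform, and label below is invented for illustration; this is not Johnson’s dataset or method.

```python
# A minimal sketch of the "online map" described above: communities (on any
# platform) become nodes, and observed links between them become edges.
# All data here is made up for illustration.
import networkx as nx

G = nx.Graph()

# (community_id, platform, flagged_as_bad_actor) -- hypothetical labels
communities = [
    ("dads_group", "facebook", False),
    ("local_news", "facebook", False),
    ("health_chat", "telegram", False),
    ("fringe_forum", "small_platform_a", True),
    ("hate_channel", "small_platform_b", True),
]
for name, platform, bad in communities:
    G.add_node(name, platform=platform, bad_actor=bad)

# Edges: a link, shared membership, or cross-posting observed between two
# communities (again, invented for this example).
G.add_edges_from([
    ("fringe_forum", "hate_channel"),
    ("fringe_forum", "health_chat"),
    ("health_chat", "local_news"),
    ("local_news", "dads_group"),
])

# "How close are you to where there might be some bad activity going on?"
# -- shortest-path distance from each community to the nearest flagged one.
bad_nodes = [n for n, d in G.nodes(data=True) if d["bad_actor"]]
for node in G.nodes:
    dists = []
    for b in bad_nodes:
        try:
            dists.append(nx.shortest_path_length(G, node, b))
        except nx.NetworkXNoPath:
            pass
    print(node, "-> hops to nearest flagged community:", min(dists) if dists else None)
```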

Here are four key takeaways from Johnson’s research:

  • Bad actors need only basic GPT-level AI systems to manipulate and bias information on platforms.
  • A network spanning a multitude of social media platforms connects bad-actor communities to billions of users worldwide, without those users’ knowledge.
  • Bad-actor activity driven by AI will become a daily occurrence by the summer of 2024.
  • Social media companies should deploy containment tactics against the disinformation, removing the bigger pockets of coordinated activity (a toy illustration follows below).
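The last takeaway lends itself to a toy demonstration. In the hedged sketch below, two coordinated bad-actor clusters of different sizes each reach some mainstream users; removing the largest cluster first sharply cuts how many users the remaining bad actors can touch. The graph is invented and far simpler than anything in the research.

```python
# A toy illustration of the containment tactic: remove the largest pocket of
# coordinated bad-actor activity and measure how much of the mainstream the
# remaining bad actors can still reach. Everything here is invented.
import networkx as nx

G = nx.Graph()
bad_big = [f"bad_big_{i}" for i in range(6)]
bad_small = [f"bad_small_{i}" for i in range(2)]
mainstream = [f"user_{i}" for i in range(20)]

G.add_nodes_from(bad_big + bad_small, bad_actor=True)
G.add_nodes_from(mainstream, bad_actor=False)

# Coordinated clusters are densely connected internally...
G.add_edges_from((a, b) for i, a in enumerate(bad_big) for b in bad_big[i + 1:])
G.add_edges_from((a, b) for i, a in enumerate(bad_small) for b in bad_small[i + 1:])
# ...and each cluster touches some mainstream users.
G.add_edges_from(("bad_big_0", u) for u in mainstream[:15])
G.add_edges_from(("bad_small_0", u) for u in mainstream[15:])

def reachable_from_bad(graph):
    """Count mainstream users reachable from any remaining bad-actor node."""
    bad = {n for n, d in graph.nodes(data=True) if d["bad_actor"]}
    reached = set()
    for b in bad:
        reached |= nx.node_connected_component(graph, b)
    return len(reached - bad)

print("users reachable before removal:", reachable_from_bad(G))   # 20
G.remove_nodes_from(bad_big)  # containment: take out the biggest cluster first
print("users reachable after removal:", reachable_from_bad(G))    # 5
```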

“Those who win every battle are not really skilled — those who render others’ armies helpless without fighting are best of all.” - Sun Tzu

The daily battle of ideas 
“It may not be a physical battle, but it’s certainly a battle of hearts and minds. You are fighting for people’s trust and beliefs. It’s like an arms race,” says Johnson.

His research shows where bad-actor-AI activity will likely happen across a vulnerable-mainstream ecosystem. This research is funded, in part, by the U.S. Department of Defense. “The Department of Defense is very much interested in understanding social systems…If you can see a system start to have instability, you know that you should be a bit more wary of what’s going on in that system,” says Johnson, adding that online activity can quickly scale up to millions of people.
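Johnson’s point about spotting instability can be illustrated with a very simple early-warning signal: watch an activity time series and flag days that deviate sharply from the recent baseline. The counts and the threshold below are invented, and the team’s actual methods are far more sophisticated; this is only a sketch of the idea.

```python
# A hedged sketch of flagging a system that "starts to have instability":
# raise a warning when activity jumps far above its recent baseline.
# The data and the threshold are invented for illustration.
import statistics

# Hypothetical daily post counts for one online community.
activity = [102, 98, 110, 105, 99, 103, 97, 250, 480, 900]

WINDOW = 5       # days of history used as the baseline
THRESHOLD = 3.0  # flag anything more than 3 standard deviations above it

for day in range(WINDOW, len(activity)):
    baseline = activity[day - WINDOW:day]
    mean = statistics.mean(baseline)
    sd = statistics.stdev(baseline)
    z = (activity[day] - mean) / sd if sd > 0 else 0.0
    if z > THRESHOLD:
        print(f"day {day}: count={activity[day]}, z={z:.1f} -> possible instability")
```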

When your Facebook Group gets weird
Online AI manipulations lurk on platforms people use daily. “Facebook, and everything like it, struggle to keep it off,” says Johnson. That online map doesn’t look the way we might imagine, with the good stuff in the middle and the fringe stuff a long way off.

“Bottom line is that the fringe is actually next door to us on smaller platforms, which sounds like there’s therefore small impact. But if I take a hundred small things, they can be bigger and more important than one large thing,” says Johnson.

“Millions of communities exist online,” says Johnson, who gives as an example a Stay-At-Home Dads Facebook Group. “You give it a name, it has a picture, it has members, you invite members. Each of those communities, particularly during Covid when our research started, tends to become a safe space for people to go. By definition, it’s self-reinforcing. You go to communities where you feel safe, and that’s even true of bad actors, people with racist thoughts or ideas against certain religions, for instance.”

“They’ll find a community online where people sound like they have similar ideas, and they’ll hang out there for a while and exchange comments. Gradually, if it’s a good fit, they’ll stay. If it’s not, they’ll just head off and find another one,” says Johnson. “But these communities are not echo chambers. They link to each other, and some people move into other communities.”

Then, just when you think you’re talking to a seemingly reasonable person, or even someone you feel you know well, they may say something jarringly racist, sexist, or otherwise extremist. That “person” in your group may be real, or it could be a sophisticated online bot.

Not your grandmother’s bot
“Back in the 2016 election, the bots were relatively simple, like little pieces of program that can promote a certain message…It was like having one of those toys that just keeps repeating,” says Johnson.

“Fast forward to 2024, we’ve now got GPT, which doesn’t sound like an annoying repetitive robot. It can actually sound quite human. So now imagine, while all the humans are sleeping and eating, I can have little programs that run very basic GPT create malicious content 24/7, share it, and reply repeatedly on social media,” says Johnson, adding that versions of AI are now actually trained on hate speech, perfectly tuned to the desired rhetoric and narrative.
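The contrast Johnson draws has a practical side: the 2016-era repeating bot leaves an easy fingerprint, the same message posted over and over, which even a naive check can catch. A minimal sketch of that naive check, with invented accounts and messages:

```python
# The "annoying repetitive robot" of 2016 leaves an easy fingerprint:
# near-identical messages posted over and over. A minimal sketch of flagging
# that signature. Accounts and messages are made up for illustration.
from collections import Counter

posts = [
    ("acct_a", "Candidate X is a disaster. Share this!"),
    ("acct_a", "Candidate X is a disaster. Share this!"),
    ("acct_a", "Candidate X is a disaster. Share this!"),
    ("acct_b", "Anyone watching the game tonight?"),
    ("acct_b", "That last play was unbelievable."),
]

REPEAT_LIMIT = 2  # more than this many identical posts looks automated

counts = Counter(posts)  # counts each (account, message) pair
for (account, message), n in counts.items():
    if n > REPEAT_LIMIT:
        print(f"{account}: posted the same message {n} times -> possible bot")
```

A GPT-driven bot that paraphrases every post would sail past a check like this, which is exactly the shift Johnson is warning about.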

AI will take any prompt and learn from it. So it can be targeted to the real topics of your community at a scale and speed that a human can’t replicate. Within a few sentences, “It jumps to another topic,” says Johnson. “People do the same thing, and that’s when people think AI is human-like. Because, let’s face it, you sit around with some friends, and a conversation at eight o’clock starts off with one thing, and by 8:03, suddenly, you’re onto something else.”

Regulatory and policy frameworks 
“Here in the US, there’s always these congressional hearings of one sort or another. They get the CEOs of the major platforms…in front of [Congress] and then say they’ve got to do more. And then the platforms say, well, we are doing more,” says Johnson. 

Johnson has been invited to speak to government departments. “They tend to have the view, particularly if they’re from law enforcement, that there’s somebody who’s broken the law…that it’s controlled by one, two, or three people, maybe 10 people, maybe 20, but not 2 million. Law enforcement can’t deal with two million people doing something.”

“I think what is needed is the [EU Artificial Intelligence Act] version done better,” says Johnson, highlighting the need to also regulate the smaller platforms his research mapped out as the glue that holds together the bad-actor online universe.

“I would love the next congressional hearing to have our map in the background, and every question can be referred to that map,” says Johnson, adding that his team could show where and on what platform most of the bad actors associated with child safety issues are, how many of them there are, and who they are connected to. “It’s this kind of very practical discussion. Then you can come up with a policy.”

Johnson and his team have even approached agencies about making these AI maps a national resource, showing which online communities and platforms are most active in real time. Like plate tectonics, the maps move around and always need updating. “We’ve got an educational role and try to share what we are doing,” says Johnson. “And so, we are making our maps available to journalists, policymakers, and others who want to use them as part of their discussions.”

The better a person, group, or nation understands these influences, the better positioned they are to avoid instability, trauma, loss of money, and even loss of life. There is an old-timey phrase: “Believe nothing you hear, and only one half that you see.” But what about a world in which you can’t believe anything?

“If we let bad-actor AI take over, then people will naturally have to distrust pretty much everything they see,” says Johnson. “It just wears away at the fabric of society. Slowly, it erodes it, and that can be hard to get back. And so, in the end, we might end up more tribal. I worry about that.”

Reasons for hope
Although it’s not the focus of Johnson’s research, there are reasons to be optimistic about AI. By aggregating research and connecting dots worldwide at a pace and depth impossible for humans, it could help us cure diseases and solve scientific puzzles that are currently unimaginable. “It’s the new thing that will bring together all of these existing pieces. It’s like anything though. It can be used for good and bad,” says Johnson.

Efforts to mitigate the risks associated with bad-actor AI require a multifaceted approach: technological innovation; regulatory reform; ethical AI governance frameworks and policies that prioritize transparency, accountability, and human rights; cooperation among tech companies, stakeholders, and nations; and investment in AI education, including public awareness campaigns and research.

As we navigate the complexities of the digital age, we can strive to build a future in which AI serves primarily as a force for good, enriching our lives and advancing the collective welfare of humanity, while also remaining ever vigilant to predict, identify, and root out bad actors.


Alene Dawson is a Southern California-based writer known for her intelligent and popular features, cover stories, interviews, and award-winning publications. She’s a regular contributor to the LA Times.