PENNSYLVANIA, USA — With the growing popularity of artificially intelligent chatbots that can hold conversations like a human, some are concerned about the possibility of an AI takeover.
While science fiction writers have long forecast such a doomsday, Jamison Rotz, CEO of Nearly Human AI in Harrisburg, says the technology is nowhere near that point.
"People talk about the singularity, but the near-term risk that we face is going to look a lot more like idiocrasy than it does terminator," Rotz said.
What’s far more of a threat, he said, is the wave of AI-generated content online, specifically deepfakes.
The term refers to images or video meant to resemble real people, often coupled with audio that mimics their voices.
Deepfake videos have been popping up more frequently on sites like YouTube. Viewers can watch the last three U.S. presidents appearing to play a military video game or see a political candidate seemingly star in a scene from a popular television sitcom.
This type of material has existed for years, but AI makes it easier to generate and more convincing to the eye, increasing the chance that some viewers believe it’s real.
"I think it’s actually the very most immediate risk that we have. I think that it’s going to be pushed forward by this next election cycle," Rotz said. "We’re all sort of susceptible to manipulation under those circumstances. It’s disorienting for us as humans to not know exactly what it is we’re contending with and that’s going to get harder, not easier, as the tech progresses."
Not every video is as lighthearted as the ones we just highlighted.
A deepfake video released earlier this year falsely showed Ukrainian President Volodymyr Zelenskyy telling his citizens to surrender to Russia.
"It can be used to generate propaganda. It can be used to generate large numbers of opinions that don’t actually represent people. Posting those online has the potential to influence public discussion," Wilson said.
How do we combat harmful or misleading AI content?
Nearly Human’s CTO Karl Haviland said part of the responsibility falls on AI companies.
"We are strong proponents of responsible AI techniques," Haviland said.
Many federal and state government officials are discussing new regulations on the technology.
"On the one hand, there are definitely concerns about what this technology can do," Wilson said. "On the other hand, finding a way to limit it in a way that still preserves all of the benefits we might get is a topic of ongoing research and public policy in technology."
"Different countries are coming out with various levels of oversight and they’re trying to figure this out," Haviland said.
Rotz suggests state and federal laws that apply to humans might also need to apply to AI.
"I think it’s useful to look at this, as software starts to act more human, that we regulate it in a more human fashion," Rotz said.
With an internet that spans the globe, some feel regulation would be hard to enforce.
"This is a place where AI is going to confront AI," Rotz said. "You’re going to have certain algorithms that have one motivation and then you’re going to have to have other algorithms that keep them in check."
In the meantime, there are ways to protect yourself.
"There are many different sources of information and people are figuring it out right now," Haviland said. "Sometimes it’s helpful to take a beat and try to get a few perspectives."
"How was this written? How was this produced?" Wilson added. "Is there something about it that seems to be more about getting me to react than getting me to actually learn?"
AI has been part of our lives for decades, and we're bound to see a lot more of it.
Students at Cumberland Valley School District are learning some of the techniques our experts just described in their digital literacy courses. They're taught to consider the source of the information they're seeing, weigh the evidence presented, and balance it against other sources.
Administrators said those skills will be even more critical as AI becomes more lifelike.