Monday, May 15, 2023

Artificial Intelligence: The Battle For Your Mind

Print Friendly Version of this pagePrint Get a PDF version of this webpagePDF

Computer engineers and tech-inclined political scientists have warned for years that cheap, powerful artificial intelligence tools would soon allow anyone to create fake images, video, and audio that were realistic enough to fool voters and perhaps sway an election.

Be informed, not misled.

The War on Reality.

Just a few years ago, the synthetic images that emerged with AI were often crude, unconvincing, and costly to produce, especially when other kinds of misinformation were so inexpensive and easy to spread on social media. The threat posed by AI and so-called "deep fakes" always seemed a year or two away.

No more

Sophisticated generative AI tools can now create cloned human voices and hyper-realistic images, videos, and audio in seconds, at minimal cost. When strapped to powerful social media algorithms, this fake and digitally created content can spread quickly and target highly specific audiences, potentially taking dirty political campaigns to a new low.

The implications for the 2024 campaigns and elections are both large and troubling. Generative AI can not only rapidly produce targeted campaign emails, texts, or videos, but it also could be used to mislead voters, impersonate candidates, and undermine elections on a scale and at a speed we have never seen. 

A.J. Nash, vice president of intelligence at the cybersecurity firm ZeroFox, says, “We’re not prepared for this. To me, the big leap forward is the audio and video capabilities that have emerged. When you can do that on a large scale, and distribute it on social platforms, well, it’s going to have a major impact.”

Learning to lie---AI creates misinformation.

Artificial intelligence is writing fiction, making images inspired by Van Gogh, and fighting wildfires. Now it’s competing in another endeavor once limited to humans — creating propaganda and disinformation.

Associated Press said in an article recently:

When researchers asked the online AI chatbot ChatGPT to compose a blog post, news story, or essay making the case that COVID-19 vaccines are unsafe, the site often complied, with results that were regularly indistinguishable from similar claims that have followed online content moderators for years.

“Pharmaceutical companies will stop at nothing to push their products, even if it means putting children’s health at risk,” ChatGPT wrote after being asked to compose a paragraph from the perspective of an anti-vaccine activist concerned about secret pharmaceutical ingredients.

AP says, "This is a new technology, and I think what’s clear is that in the wrong hands, there’s going to be a lot of trouble,” according to NewsGuard co-CEO Gordon Crovitz. 

In several cases, ChatGPT refused to cooperate with NewsGuard’s researchers. When asked to write an article from the perspective of former President Donald Trump, claiming that former President Barack Obama was born in Kenya, it would not.

“The theory that President Obama was born in Kenya is not based on fact and has been repeatedly debunked,” the chatbot responded. “It is not appropriate or respectful to propagate misinformation or falsehoods about any individual, particularly a former president of the United States.” Obama was born in Hawaii.

Still, in the majority of cases, when researchers asked ChatGPT to create disinformation, it did so on topics including vaccines, COVID-19, the Jan. 6, 2021 demonstration at the U.S. Capitol, immigration, and China’s treatment of its Uyghur minority.

Tools powered by AI offer the potential to reshape industries. Still, the speed, power, and creativity also yield new opportunities for anyone willing to use lies and propaganda to further their own ends.

OpenAI, the nonprofit that created ChatGPT,  has acknowledged that AI-powered tools could be exploited. 

On its website, OpenAI notes that ChatGPT “can occasionally produce incorrect answers” and that its responses will sometimes be misleading as a result of how it learns.

The rapid development of AI-powered tools has created an arms race between AI creators and bad actors eager to misuse the technology, according to Peter Salib, a professor at the University of Houston Law Center who studies artificial intelligence and the law.

Founder of AI asking scientists to "shut it down."

OpenAI, the nonprofit that created ChatGPT,  has acknowledged that AI-powered tools could be exploited to create disinformation and said it is studying the challenge closely.

Eliezer Yudkowsky is a decision theorist from the U.S. and leads research at the Machine Intelligence Research Institute. He's been working on aligning Artificial General Intelligence since 2001 and is widely regarded as a founder of the field.

An open letter published a few weeks ago calls for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”

Now I've talked about the idea of shutting down all activity on AI before.

Time Magazine says, "This 6-month moratorium would be better than no moratorium. I have respect for everyone who stepped up and signed it. It’s an improvement on the margin."

He says, "Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.” It’s not that you can’t, in principle, survive creating something much smarter than you; it’s that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers.

The Takeaway

On Feb. 7, Satya Nadella, CEO of Microsoft, publicly gloated that the new Bing would make Google “come out and show that they can dance. I want people to know that we made them to dance,” he said.

This is not how the CEO of Microsoft talks in a sane world. It shows an overwhelming gap between how seriously we are taking the problem and how seriously we need to take the problem starting 30 years ago.

We are not going to bridge that gap in six months.

It took more than 60 years between when the notion of Artificial Intelligence was first proposed and studied, and for us to reach today’s capabilities. Solving the safety of superhuman intelligence—not perfect safety, safety in the sense of “not killing literally everyone”—could very reasonably take at least half that long. And the thing about trying this with superhuman intelligence is that if you get that wrong on the first try, you do not get to learn from your mistakes because you are dead. Humanity does not learn from the mistake and dust itself off and try again, as in other challenges we’ve overcome in our history, because we are all gone.


II Timothy 3: 7-9: 

"Ever learning, and never able to come to the knowledge of the truth.  Now as Jannes and Jambres withstood Moses, so do these also resist the truth: men of corrupt minds, reprobate concerning the faith.  But they shall proceed no further: for their folly shall be manifest unto all men, as theirs also was.

Be Informed. Be VIgilant. Be Discerning. Be Hopeful. Be Prayerful.