Deepfake Kari Lake video shows coming chaos of AI in elections

Hank Stephenson has a finely tuned B.S. detector. The longtime journalist has made a living sussing out lies and political spin.

But even he was fooled at first when he watched the video of one of his home state’s most prominent congressional candidates.

There was Kari Lake, the Republican Senate hopeful from Arizona, on his phone screen, speaking words written by a software engineer. Stephenson was watching a deepfake — an artificial-intelligence-generated video produced by his news organization, Arizona Agenda, to underscore the dangers of AI misinformation in a pivotal election year.

“When we started doing this, I thought it was going to be so bad it wouldn’t trick anyone, but I was blown away,” Stephenson, who co-founded the site in 2021, said in an interview. “And we are unsophisticated. If we can do this, then anyone with a real budget can do a good enough job that it’ll trick you, it’ll trick me, and that is scary.”

As a tight 2024 presidential election draws ever nearer, experts and officials are increasingly sounding the alarm about the potentially devastating power of AI deepfakes, which they fear could further corrode the country’s sense of truth and destabilize the electorate.

There are signs that AI — and the fear surrounding it — is already having an impact on the race. Late last year, former president Donald Trump falsely accused the producers of an ad, which showed his well-documented public gaffes, of trafficking in AI-generated content. Meanwhile, actual fake images of Trump and other political figures, designed both to boost and to bruise, have gone viral again and again, sowing chaos at a crucial point in the election cycle.

Now some officials are rushing to respond. In recent months, the New Hampshire Justice Department announced it was investigating a spoofed robocall featuring an AI-generated voice of President Biden; Washington state warned its voters to be on the lookout for deepfakes; and lawmakers from Oregon to Florida passed bills restricting the use of such technology in campaign communications.

And in Arizona, a key swing state in the 2024 contest, the top elections official used deepfakes of himself in a training exercise to prepare staff for the onslaught of falsehoods to come. The exercise inspired Stephenson and his colleagues at the Arizona Agenda, whose daily newsletter seeks to explain complex political stories to an audience of some 10,000 subscribers.

They brainstormed ideas for about a week and enlisted the help of a tech-savvy friend. On Friday, Stephenson published the piece, which included three deepfake clips of Lake.

It begins with a ploy, telling readers that Lake — a hard-right candidate whom the Arizona Agenda has pilloried in the past — had decided to record a testimonial about how much she enjoys the outlet. But the video quickly pivots to the giveaway punchline.

“Subscribe to the Arizona Agenda for hard-hitting real news,” the fake Lake says to the camera, before adding: “And a preview of the terrifying artificial intelligence coming your way in the next election, like this video, which is an AI deepfake the Arizona Agenda made to show you just how good this technology is getting.”

By Saturday, the videos had generated tens of thousands of views — and one very unhappy response from the real Lake, whose campaign attorneys sent the Arizona Agenda a cease-and-desist letter. The letter demanded “the immediate removal of the aforementioned deep fake videos from all platforms where they have been shared or disseminated.” If the outlet refuses to comply, the letter said, Lake’s campaign would “pursue all available legal remedies.”

A spokesperson for the campaign declined to comment when contacted on Saturday.

Stephenson said he was consulting with lawyers about how to respond, but as of Saturday afternoon, he was not planning to remove the videos. The deepfakes, he said, are good learning devices, and he wants to arm readers with the tools to detect such forgeries before they’re bombarded with them as the election season heats up.

“Fighting this new wave of technological disinformation this election cycle is on all of us,” Stephenson wrote in the article accompanying the clips. “Your best defense is knowing what’s out there — and using your critical thinking.”

Hany Farid, a professor at the University of California at Berkeley who studies digital propaganda and misinformation, said the Arizona Agenda videos were useful public service announcements that appeared carefully crafted to limit unintended consequences. Even so, he said, outlets should be wary of how they frame their deepfake reportage.

“I’m supportive of the PSAs, but there’s a balance,” Farid said. “You don’t want your readers and viewers to look at everything that doesn’t conform to their worldview as fake.”

Deepfakes present two distinct “threat vectors,” Farid said. First, bad actors can generate false videos of people saying things they never actually said; and second, people can more credibly dismiss any real embarrassing or incriminating footage as fake.

This dynamic, Farid said, has been especially apparent during Russia’s invasion of Ukraine, a conflict rife with misinformation. Early in the war, Ukraine promoted a deepfake showing Paris under attack, urging world leaders to react to the Kremlin’s aggression with as much urgency as they might show if the Eiffel Tower had been targeted.

It was a potent message, Farid said, but it opened the door for Russia's baseless claims that subsequent videos from Ukraine, which showed evidence of Kremlin war crimes, were similarly fabricated.

“I am worried that everything is becoming suspect,” he said.

Stephenson, whose home state has lately become a political battleground rife with conspiracy theories and false claims, has a similar fear.

“For many years now we’ve been battling over what’s real,” he said. “Objective facts can be written off as fake news, and now objective videos will be written off as deep fakes, and deep fakes will be treated as reality.”

Researchers like Farid are feverishly working on software that would allow journalists and others to more easily detect deepfakes. Farid said the suite of tools he currently uses easily classified the Arizona Agenda video as bogus, a hopeful sign for the coming flood of fakes. However, deepfake technology is improving at a rapid rate, and future phonies could be much harder to spot.

And even Stephenson’s admittedly sub-par deepfake managed to dupe a few people: After blasting out Friday’s newsletter with the headline “Kari Lake does us a solid,” a handful of paying readers unsubscribed. Most likely, Stephenson suspects, they thought Lake’s endorsement was real.

Maegan Vazquez contributed to this report.

This post appeared first on The Washington Post
