AI-made images mean seeing is no longer believing – and that’s bad news for democracy

A strange thing happened last week when you searched for “tank man” on Google.

Tap on image results and, instead of the usual photos of Tiananmen Square in Beijing – including the iconic 1989 image of a brave protester staring down a convoy of tanks – the first result was the same historic moment, but from a different point of view.

For a time last week, the first result on Google Images for “tank man” was instead an AI-generated image of the same protester, taking a selfie in front of the tank. The image was created with Midjourney, and was at least six months old. First reported by 404 Media, a new tech journalism startup set up by former Vice News staff, the emergence of the tank man selfie – which Google subsequently removed from search results for “tank man” – highlighted one of the main fears that Eddie Perez, Twitter’s former head of election integrity, raised with me in a recent podcast interview: it is now possible, with AI imagery, to create alternative history. And that has huge ramifications not only for our lives, but also for our elections.

When he spoke to me, Perez was concerned about the deliberate use of AI imagery to hoodwink voters into believing alternative facts: disinformation. But the tank man incident was an example of AI misinformation – content posted innocuously, within a context that made it clear how it was made, but which was then shorn of that context and presented as something else.

Images are such a powerful, scary tool for AI to grapple with because of a pair of old maxims. One: a picture tells a thousand words. The other? Seeing is believing.

It’s easy to discount a story if you’re only reading about it. If you see it with your own eyes, or see the images included in it, it immediately becomes more credible. This is an issue I highlighted six months ago, when AI-generated images of Donald Trump being arrested went viral on social media. In a pre-ChatGPT era, when the tools weren’t available to all and sundry, we’d have called them “deepfakes”.

Back then, I asked the creator of the series of images, the Bellingcat journalist Eliot Higgins, whose job is to poke holes in Russian disinformation, whether he worried he was contributing to the problem of fake news. At the time he wasn’t, reckoning there were always giveaways that would reveal an image as the product of AI.

Today, he’s still not worried about people playing about with the tool on social media, but is concerned about politicians using AI tools to create photos that damage their opponents. (Ron DeSantis’s campaign has already done this.) “I guess we’ll see as the US election progresses how bad it gets,” he says, “but I don’t think Trump supporters would be shy about using AI generated imagery.”

By the way, there’s a certain irony in the AI tank man tale.

For months, Igor Szpotakowski, who researches Chinese law at Newcastle University, has spoken about the way China’s generative AI tools are responding to exactly the same threat of rewritten history – in this instance, in ways the ruling Communist party might not like. Szpotakowski has screenshots showing that an image generation model developed by Baidu, the Chinese tech giant, will create images in response to a prompt asking it to depict “dictatorship”, but won’t when asked to show “democracy and freedom”. “That tells us a lot about their training data,” Szpotakowski says.

On your marks, set, fake

What might generative AI mean for the 2024 US election? Photograph: Alex Brandon/AP

The backdrop to the tank man debacle is the increasing pace of AI image development, which means this kind of misrepresentation (perhaps better described as misinterpretation) is likely to become more common as these tools put artistic skills in the hands of the least skilled.

I’m no artist, and never have been. But give me Midjourney, DALL-E or any other AI image generator, and a few minutes to fine tune my prompt (the bit of text that sets an image generator going) and I can produce work that would never be possible in my wildest dreams otherwise.

Just as generative AI text tools are improving every day, so are the capabilities of AI image generators. One of the biggest, OpenAI’s DALL-E 3, will be rolled out to paying subscribers to ChatGPT Plus in the coming weeks. I’m one of those subscribers, and I’m excited to see what it offers. Twitter seems to have already made up its mind that DALL-E 3 is a match for – or better than – Midjourney, which has previously had supremacy in image generation; so much so that Midjourney even releases a monthly magazine of its best bits.

Yet there are rumours within the AI community that in response to DALL-E 3, Midjourney will also release a massive update that advances its capabilities even further. Could DALL-E 3 be a ChatGPT moment for generative imagery? Whatever happens, it seems likely that many more people will have access to such tools shortly.

One thing that we haven’t yet touched on is the impact that has on artists, many of whom allege that such AI image generation models are trained on their data without permission. Last week, for a future episode of the Article 19 podcast Techtonic, I spoke to Karla Ortiz, an artist who has sued a trifecta of companies touting AI image generators. You’ll have to wait for the episode to learn what she said, but in the interim, her July 2023 testimony to a US senate subcommittee about her fears for copyright in the age of AI is worth reading.

The week in AI

Britain’s Prime Minister Rishi Sunak, whose government is increasingly concerned about the use of AI in the wrong hands. Photograph: Reuters
  • A UK-led AI summit in November will tackle Downing Street’s increasing concern that “criminals or terrorists could use artificial intelligence to cause mass destruction,” the Guardian reported yesterday.

  • Amazon is investing up to $4bn (£3.2bn) in Anthropic, a startup building a ChatGPT rival named … Claude?

  • And speaking of art and copyright, a recent US ruling suggests AI works can’t be copyrighted, while Bollywood actor Anil Kapoor won a fascinating legal victory in India preventing the unauthorised use of his likeness, and the Atlantic revealed this week that almost 200,000 books have been used, without permission or compensation, by companies including Meta and Bloomberg to train their generative AI.

The wider TechScape

Elon Musk in Israel last week. Photograph: Mandel Ngan/AFP/Getty Images
  • For the Guardian, David Runciman spent months following the same Twitter accounts as Elon Musk, to try to answer: what goes on inside the head of the world’s richest man?

  • Zoe Williams, meanwhile, is fantastic on the rise and fall of FTX founder Sam Bankman-Fried.

  • An app promoted by Andrew Tate has been removed from Apple’s App Store after accusations it promoted what could be an illegal pyramid scheme. Unsurprisingly, given who’s involved, it was also accused of spreading misogyny.

  • There’s a shockingly lucrative opportunity in reading out newspaper obituaries on YouTube.

  • What happens if your iPhone breaks in Cuba, where trade embargoes mean Apple doesn’t operate? Unofficial Apple technicians will step in.

  • Ads are coming to Prime Video, the latest streaming service to charge money and interrupt whatever you’re watching.
