A fake Bloomberg Twitter account bearing a verified blue check-mark posted an apparently AI-generated photo of an explosion at the Pentagon this morning, and the stock market reacted.

The reportedly AI-generated image, which showed a fake explosion at the Pentagon, spread like wildfire across social media platforms this morning and triggered a brief selloff in the US stock market. According to the Kobeissi Letter, the fake image caused a roughly $500 billion market cap swing and a brief dip in the S&P 500 over a 30-minute span.

The market figures came from a tweet by the Kobeissi Letter, a self-described “industry-leading commentary on the global capital markets.”

According to the New York Post, the fake photo, which showed smoke billowing outside the Pentagon, was shared by Russian state media outlet RT and other accounts alongside claims that an explosion had occurred at the complex. RT later deleted the image.

In a tweet, Nick Waters explained why this image of an “explosion near the Pentagon” appears to be AI-generated:

Confident that this picture claiming to show an “explosion near the pentagon” is AI generated.

Check out the frontage of the building, and the way the fence melds into the crowd barriers. There’s also no other images, videos or people posting as first hand witnesses.

The Arlington County Fire Department quickly tweeted a message debunking the hoax photo.

Elon Musk has warned about the dangers of AI-driven misinformation in several recent interviews.

Dr. Geoffrey Hinton, nicknamed the “Godfather of AI,” was so concerned by the dangers posed by AI technology that he quit his job at Google last month so that he could speak out without hurting his former employer.

After years of laying the foundation for AI technology, Geoffrey Hinton, the groundbreaking British computer scientist known as the “Godfather of AI,” is leaving his position at Google to join other specialists warning about the danger AI now presents. The 75-year-old Hinton worked as a vice president and engineering fellow at Google in the field of artificial intelligence.

In an interview with The New York Times, Hinton said of current AI technology: “It is hard to see how you can prevent the bad actors from using it for bad things.”

The March launch of GPT-4, the latest version of OpenAI’s chatbot technology, has stirred deep concern in the AI world. AI professionals signed an open letter published by the nonprofit Future of Life Institute (FLI), warning that the technology poses “profound risks to society and humanity.”

Speaking about the response to the open letter, FLI, a nonprofit group that seeks to mitigate large-scale technology risks, wrote on its website: “The reaction has been intense.”

“We feel that it has given voice to a huge undercurrent of concern about the risks of high-powered AI systems not just at the public level, but top researchers in AI and other topics, business leaders, and policymakers.”

Those who have driven AI technology forward in recent years are now saying they are terrified by the implications of their work and what it could mean for the future. Hinton agrees, calling recent advancements in AI “scary.”
