Trump Blames AI for Video His Staff Says Is Real

A recent video of a person throwing two items out of an upstairs window of the White House sparked discussion about photorealistic AI video.

With companies selling AI tools that can create convincing replicas of real recordings, the incident highlights concerns about AI attribution and how generative AI could be used to spread misinformation. 

Viral video showed items thrown out of White House window 

The video appeared on Instagram on Sept. 1, posted by the account “washingtonianprobs” with a caption saying, “One of our community members noticed some things being thrown out of one of The White House’s windows today.”

White House representatives initially told reporters the footage was legitimate and showed a contractor performing maintenance on the building while the president was not in residence.  

President Donald Trump had a different take, telling reporters the video “has got to be fake” and “If something happens that’s really bad, maybe I’ll have to just blame AI.”

Trump told Fox News Channel reporter Peter Doocy: “It’s the kind of thing they do. And one of the problems we have with AI, it’s both good and bad. If something happens really bad, just blame AI. But also they create things, you know?”

He added that White House windows are difficult or impossible to open, saying “number one, they’re sealed, and number two, each window weighs about 600 pounds.”

A Reader’s Digest article from January 2025 noted that the president is not permitted to open the White House windows for security reasons. 

Generative AI blurs the line between real and photorealistic fabricated content

Generative AI has advanced to a point where it is increasingly difficult to tell real images, audio, or videos from fabricated ones. For example, hiring managers may encounter AI-generated avatars posing as applicants. Threat actors could also impersonate company leaders using deepfakes. 

At the same time, companies are introducing safeguards such as source citations and digital watermarks to indicate when content has been AI-generated. 

The recent White House incident underscored a different risk, highlighted by Trump’s remarks: real footage or audio being dismissed as AI-generated. 

Ultimately, generative AI in the wrong hands remains a powerful tool for spreading misinformation at scale. 

Separately, The New York Times found that xAI’s generative AI chatbot Grok tends to repeat political messaging aligned with Elon Musk’s personal interventions.
