AI Manipulation: Minneapolis Events Blur the Truth
AI-altered photos and videos of the armed incidents in Minneapolis are spreading across social media, blurring the line between reality and fiction.

Manipulated Images Viewed Millions of Times
AI-altered photos and videos depicting the final moments of Alex Pretti, who was shot and killed by federal agents in Minneapolis, have spread rapidly on social media since the incident. The content, circulating on platforms including Facebook, TikTok, Instagram, and X, has obscured critical details of the event.
Unlike easily detectable AI-generated 'deepfakes' that depict entirely fictitious scenes, many of the AI manipulations of Pretti's shooting are based on verified imagery and look convincing enough to pass for reality, making them capable of misleading large audiences.
Senator Accidentally Used Manipulated Image
An image, understood to be altered with AI, showing an ICE agent pointing a gun at Pretti's back was viewed over 9 million times on X. Although the image carried a platform note stating it was 'AI-enhanced,' Democratic Senator Dick Durbin used it during a speech on the Senate floor without realizing it was not authentic.
Durbin's spokesperson said in a statement: 'Our office used a photo widely circulating online on the Senate floor. Our staff later noticed the image was slightly edited, and we regret that this mistake occurred.'
Real Videos Are Declared 'Fake'
The spread of AI-manipulated media has also led many people to dismiss real videos of Pretti as inauthentic. Experts worry this could fuel a phenomenon known as the 'liar's dividend,' in which malicious actors sow distrust and evade accountability by claiming that genuine media is AI-generated.
Three videos independently verified by NBC News show an argument Pretti had with federal agents less than a week before his death. Nevertheless, some social media users have labeled one of those videos AI-generated.
Ben Colman, co-founder and CEO of the deepfake detection company Reality Defender, said the unchecked spread of AI-manipulated media about the incident is concerning but not surprising. 'For the past few months, we have seen a significant increase in AI-'enhanced' versions of blurry, low-resolution photos on social media,' Colman said.
Lack of Verification Tools Deepens the Problem
News consumers have very few tools to reliably determine whether content was created or manipulated by AI. On X, the platform's AI assistant Grok, when asked about the authenticity of the footage, claimed that a real video 'appeared to be generated or altered by AI.'
As AI systems grow more capable of producing high-quality images and videos that blur the line between reality and fiction, waves of AI-driven misinformation and disinformation surrounding breaking news have become more common over the past year. These developments renew concerns about the safety and ethical dimensions of AI systems.


