AI and the Reality Crisis: Verification Tools Are Falling Short
The U.S. Department of Homeland Security has confirmed that it uses AI video generators to edit content shared with the public. Experts warn that we are entering an era in which content verification systems are inadequate and the impact of manipulated information persists even after it is disclosed as fake.
Government Agencies Using AI Content Generators
For the first time, the U.S. Department of Homeland Security (DHS) has been confirmed to have used AI video generators from Google and Adobe to edit content shared with the public. The development comes at a time when immigration agencies are leaning heavily on social media to support a mass deportation agenda, and some of the content they have shared is believed to have been generated by AI.
Manipulated Content and Public Reaction
Two distinct public reactions to the issue are noteworthy. The first group said they were not surprised by the news, which came after the White House shared, on January 22, a digitally altered photo of a woman detained at an ICE protest that made her appear hysterical and crying. The White House deputy director of communications did not respond to questions about whether the photo had been altered, saying instead, "The memes will continue."
The second group argued that reporting on DHS's AI use was pointless, since news organizations engage in similar practices. Those holding this view pointed to MS Now (formerly MSNBC), which aired an image of Alex Pretti that had been edited with AI to make him appear more handsome. An MS Now spokesperson said the image was aired without the network knowing it had been edited.
Content Verification Systems Falling Short
Systems like the Content Authenticity Initiative, pioneered by Adobe and adopted by major tech firms, aim to attach labels that explain when a piece of content was produced, by whom, and whether AI was used. However, Adobe applies these labels automatically only when content is entirely AI-generated; in all other cases, labeling is left to the discretion of the content creator.
Furthermore, platforms like X (formerly Twitter) can strip these labels or simply not display them. And although it was announced that DVIDS, the website the Pentagon uses to share official imagery, would display these labels, an examination of the site found none.
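These provenance labels, known as Content Credentials, are embedded in the file itself as C2PA metadata, which is why stripping metadata or re-encoding an image can make them vanish. As a rough illustration only (not the official verification workflow), the sketch below scans a file's raw bytes for the JUMBF/C2PA markers that signal an embedded manifest; actually validating a manifest's cryptographic signature requires dedicated C2PA tooling such as Adobe's open-source c2patool.

```python
# Rough heuristic check for embedded Content Credentials (C2PA) metadata.
# It only looks for the byte signatures of a JUMBF box labelled "c2pa";
# it does NOT parse the manifest or verify its cryptographic signature.

import sys
from pathlib import Path

def has_c2pa_manifest(path: str) -> bool:
    """Return True if the file appears to contain a C2PA manifest store."""
    data = Path(path).read_bytes()
    # A manifest store is a JUMBF superbox ("jumb") whose label is "c2pa".
    return b"jumb" in data and b"c2pa" in data

if __name__ == "__main__":
    for image_path in sys.argv[1:]:
        status = ("Content Credentials present"
                  if has_c2pa_manifest(image_path)
                  else "no Content Credentials found")
        print(f"{image_path}: {status}")
```

A file that passes this check merely claims provenance; whether the claim is trustworthy still depends on verifying the signatures inside the manifest, which is exactly the step that platforms and end users rarely perform.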
The Impact of Fake Content Can Be Lasting
A new study published in the journal Communications Psychology highlights how serious the situation is. Participants were shown a deepfake video of a person confessing to a crime. Researchers found that even when participants were explicitly told the evidence was fake, they still relied on it when assessing the person's guilt. In other words, even after people learn that what they have seen is completely fabricated, it continues to affect them.
Disinformation expert Christopher Nehring commented on the research findings, stating, "Transparency helps, but it is not enough on its own. We need to develop a new master plan for what to do about deepfakes."
New Battlefields in the Post-Truth Era
AI tools for creating and editing content are becoming more capable, easier to use, and cheaper, which is why the U.S. government is increasingly paying to use them. Earlier warnings focused on preparing for a world in which confusion would be the main danger. The world we are entering, however, is described as one in which manipulation retains its impact even after being exposed, doubt can easily be weaponized, and revealing the truth does not function as a reset button. By this account, the defenders of truth have already fallen behind.
These developments also feed debates about how AI is being integrated into areas such as visual collaboration and coding. For instance, the MCP server announced by Miro, which connects AI coding tools with its visual collaboration platform, shows how quickly the technology is being adopted as a productivity tool.


