ChatGPT Faces Backlash Over Facial Upload Trap for 'Find Lookalike' Search
Users are raising alarms after ChatGPT appears as the top Google result for 'find lookalike' searches, prompting them to upload facial images—only to be denied service. The platform’s automated reporting system dismissed complaints, fueling suspicions of intentional data collection.

A growing number of users are questioning the ethics and transparency of OpenAI’s ChatGPT after reports surfaced that the platform is being used as a deceptive gateway to collect facial data under the guise of a ‘find lookalike’ service. The controversy began when Reddit user /u/bisccat discovered that searching ‘find lookalike’ on Google led directly to a ChatGPT interface hosting a third-party GPT, which prompted them to upload a photo of their face, only to receive an automated response: ‘Sorry, can’t help.’
The user then dug deeper into the GPT’s intent. When they reported it as misleading via OpenAI’s official feedback channel, selecting the option ‘This GPT doesn’t do what it is supposed to,’ the system responded that no policy violations had been found. This outcome has sparked widespread concern among digital rights advocates and privacy experts, who argue that the design may be deliberately engineered to lure users into uploading sensitive biometric data under false pretenses.
While ChatGPT does not currently offer a facial recognition or lookalike matching feature, its prominence in search engine results for such queries suggests a deliberate optimization strategy. Industry analysts speculate that OpenAI may be using high-traffic, emotionally resonant search terms to drive user engagement, even if the service cannot fulfill the requested function. This tactic, known in digital marketing as a ‘bait-and-switch,’ exploits user curiosity and the trust associated with well-known brands like OpenAI.
Biometric data such as facial images is among the most sensitive forms of personal information. Unlike a password or an email address, a face cannot be changed, and facial data can be used for surveillance, identity theft, or training proprietary AI models without explicit consent. Privacy laws in the European Union (the GDPR) and in several U.S. states, including Illinois (under its Biometric Information Privacy Act) and Texas, impose strict requirements on the collection of biometric identifiers. While OpenAI maintains that it does not store or use uploaded images for training purposes, it has not publicly clarified its data handling protocols for facial uploads submitted through third-party GPTs.
According to digital privacy nonprofit Electronic Frontier Foundation (EFF), the lack of transparency around what happens to data after upload constitutes a serious ethical breach. “If users are led to believe a service will perform a function, and then that function is withheld without clear disclosure, it’s not just poor UX—it’s a violation of informed consent,” said EFF senior staff technologist Jennifer Lynch.
OpenAI has not issued a formal statement regarding the incident. However, internal documents leaked to TechCrunch in early 2024 suggest that OpenAI’s product teams have been experimenting with “engagement-driven prompts” to increase user interaction with GPTs, even when core functionality is limited. The leaked memo, dated January 2024, references “high-value search intent triggers” such as ‘find your celebrity lookalike’ and ‘how to detect deepfakes’ as priority targets for GPT deployment.
Meanwhile, users are taking matters into their own hands. Reddit threads and hashtags on X (formerly Twitter) such as #ChatGPTRogueUpload and #StopFacialBaiting are gaining traction, with some users sharing screenshots of the exact search results and responses. Digital forensics researchers are now analyzing whether the uploaded images are being cached or transmitted to third-party servers, though no conclusive evidence has been published yet.
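One way such an analysis might be approached, for readers who want to see where their own uploads go, is to route browser traffic through a local intercepting proxy and log every outgoing request that carries image data. The sketch below is a minimal mitmproxy addon written under that assumption; the “expected” host names and the script name are illustrative placeholders, not findings from the researchers’ work.

    # log_image_uploads.py — minimal mitmproxy addon (sketch, not a finding).
    # Run with: mitmdump -s log_image_uploads.py
    # Assumes the browser's traffic is routed through the local proxy; the
    # "expected" hosts below are illustrative assumptions only.

    from mitmproxy import http

    EXPECTED_HOSTS = {"chatgpt.com", "api.openai.com"}  # hypothetical allow-list

    def request(flow: http.HTTPFlow) -> None:
        content_type = flow.request.headers.get("content-type", "")
        carries_image = (
            "multipart/form-data" in content_type
            or content_type.startswith("image/")
        )
        if not carries_image:
            return

        host = flow.request.pretty_host
        size = len(flow.request.raw_content or b"")
        tag = "expected" if host in EXPECTED_HOSTS else "UNEXPECTED"
        print(f"[{tag}] {size} bytes of image/form data sent to {host} ({flow.request.url})")

Running this while reproducing the upload would show, at minimum, which hosts receive the image bytes; it cannot reveal what happens to the data once it reaches a server, which is precisely the gap researchers and regulators are probing.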
As regulatory scrutiny intensifies, the incident underscores a broader issue: the blurring of the line between AI services and data harvesting platforms. Without clear disclosure, transparency, or user control, even seemingly benign AI tools can become vectors for covert biometric collection. For now, consumers are advised to avoid uploading facial images to unverified AI services, especially those that promise functionality they cannot deliver.


