Fake AI Offer Claims $5/Month Access to Claude 4.6 Opus and GPT-5.2 Pro — Investigation Reveals Scam
A viral Reddit post promises unprecedented access to next-generation AI models at $5/month, but investigations confirm no such models exist. Experts warn users of phishing risks and AI misinformation campaigns.
A viral Reddit post claiming that users can access "Claude 4.6 Opus" and "GPT-5.2 Pro" for just $5 per month has sparked widespread interest among AI enthusiasts, but multiple investigations confirm the offer is entirely fabricated. The post, published on r/OpenAI by user /u/Substantial_Ear_1131, directs users to infiniax.ai, a website promoting a platform that allegedly bundles over 130 AI models, including non-existent versions of Anthropic’s and OpenAI’s flagship models. According to verified sources and technical analysis, neither Claude 4.6 Opus nor GPT-5.2 Pro exists, and the service is a sophisticated phishing operation designed to harvest user credentials and payment information.
Anthropic, the developer of the Claude series, has not released any model beyond Claude 3 Opus as of mid-2024. Similarly, OpenAI has not announced GPT-5, let alone a "5.2 Pro" variant. Discussions on Zhihu about the rumored Claude 4 Opus and Sonnet likewise confirm that no Claude 4 has been officially released. The YouTube video referenced in the post, which purports to demonstrate the platform’s functionality, shows generic AI interface mockups with no verifiable backend or model architecture.
Further scrutiny reveals that InfiniaxAI’s website lacks transparency: no team members are listed, no privacy policy is available in English, and the domain was registered anonymously through a privacy-protected registrar. According to cybersecurity analysts at VirusTotal and domain history tools, the site has been flagged for suspicious redirects and credential harvesting scripts. Users who sign up are often prompted to connect their OpenAI or Anthropic accounts — a clear red flag, as neither company allows third-party platforms to authenticate users via API keys for such bundled offerings.
Meanwhile, Chinese AI communities on Zhihu have raised concerns about similar scams targeting users seeking access to restricted models like Claude Code. As noted in a related Zhihu thread, many users in mainland China resort to unofficial gateways to bypass regional restrictions, making them vulnerable to fraudulent services promising "full access" to banned AI tools. The InfiniaxAI scam exploits this demand, using plausible-sounding model names and exaggerated claims to lure tech-savvy users.
Industry experts warn this is part of a growing trend of "AI vaporware": fabricated products designed to capitalize on public excitement around generative AI. "We’re seeing a surge in scams that weaponize the confusion around model versioning," said Dr. Lena Wu, an AI ethics researcher at Stanford. "Users assume that because models like GPT-4o and Claude 3 Opus are real, GPT-5.2 Pro must be the next logical upgrade. Scammers exploit that cognitive bias."
OpenAI and Anthropic have both issued public statements clarifying their model release schedules. OpenAI’s official blog states that GPT-5 remains in the research phase, with no public release date. Anthropic confirmed that Claude 3 remains its latest public model family, with no Claude 4 in development as of June 2024. The $5/month pricing is also implausible: serving access to more than 130 frontier models would cost far more than such a subscription could recoup, and comparable enterprise AI bundles from reputable vendors cost hundreds of dollars per month.
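One practical defense against this kind of versioning confusion is to compare any claimed model name against a provider's officially published model list before trusting a third-party offer. The Python sketch below illustrates the idea; the hardcoded model names are an illustrative stand-in, and in practice you would fetch the current list from the provider's documented API (for example, OpenAI's list-models endpoint) or its official documentation.

```python
# Sketch: flag claimed model names that do not appear in a provider's
# official model list. The set below is a hardcoded, illustrative
# stand-in; fetch the real list from the provider's documented API
# or official docs before relying on this check.

OFFICIAL_MODELS = {
    "gpt-4o",
    "gpt-4-turbo",
    "claude-3-opus-20240229",
    "claude-3-sonnet-20240229",
}

def looks_legitimate(claimed_name: str) -> bool:
    """Return True only if the claimed name exactly matches an entry
    in the official list. Plausible-sounding near-misses such as
    'gpt-5.2-pro' or 'claude-4.6-opus' are treated as suspect."""
    return claimed_name.lower() in OFFICIAL_MODELS

for claim in ["gpt-4o", "gpt-5.2-pro", "claude-4.6-opus"]:
    status = "listed" if looks_legitimate(claim) else "NOT an official model"
    print(f"{claim}: {status}")
```

An exact-match check like this is deliberately strict: scam listings rely on names that sound one step ahead of real releases, so anything not verbatim in the official list deserves scrutiny rather than benefit of the doubt.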
Security researchers advise users to avoid the site entirely. The video demonstration linked in the Reddit post contains no verifiable timestamps, model logs, or API calls, hallmarks of a staged, mocked-up interface rather than a live system. Users who have already submitted payment information are urged to contact their banks and monitor for identity theft.
As AI adoption accelerates globally, the line between innovation and exploitation grows thinner. This incident underscores the urgent need for public education on AI model verification and the dangers of unregulated third-party platforms. Until users learn to cross-check claims with official sources, such scams will continue to thrive.


