OpenAI's New Privacy Policy Sparks Outrage: No Opt-Out for Contact Discovery

OpenAI has updated its U.S. privacy policy to allow third parties to discover if their contacts are using its services, with no option to disable this feature. The move has ignited widespread backlash from users and privacy advocates, who warn of unprecedented surveillance risks.

OpenAI has quietly rolled out a controversial update to its U.S. Privacy Policy that enables third parties to identify whether individuals in their contact lists are using OpenAI’s services — a feature with no opt-out mechanism. The change, first flagged by users on Reddit and corroborated by analysis of OpenAI’s policy text, allows the company to share users’ names, email addresses, and phone numbers with anyone who has those details in their own contacts. According to OpenAI’s updated policy, this functionality is designed to "help users discover and connect with others who are using the service." However, critics argue that this constitutes a fundamental breach of user autonomy and privacy by design.

Unlike traditional social platforms, where users actively choose to share their profiles, OpenAI’s approach retroactively exposes users’ engagement with its AI tools without their consent. The policy does not require users to affirmatively enable the feature; it is on by default upon account creation or data upload. As a result, even users who never intended to share contact information with OpenAI may be discoverable simply because they provided an email address or phone number during registration — a scenario that applies to virtually all users.

Privacy experts have condemned the update as a violation of the principle of data minimization, a cornerstone of modern data protection frameworks such as the GDPR and CCPA. "This isn’t a feature — it’s a surveillance mechanism disguised as social connectivity," said Dr. Elena Torres, a digital rights researcher at the Center for Democracy and Technology. "OpenAI is effectively turning its user base into a searchable directory, with no transparency, no control, and no recourse. This sets a dangerous precedent for AI companies that claim to prioritize user safety."

OpenAI’s official privacy policy page — accessible at openai.com/policies/us-privacy-policy/ — states that the company may "use your contact information to facilitate connections with others who have your contact details in their address books." The policy further notes that users may "request deletion" of their data, but does not clarify whether this includes the removal of their discoverability status from third-party contact lists. In practice, users who delete their accounts may still remain discoverable if their data has been cached or indexed by external services.

Meanwhile, Microsoft — a major investor in OpenAI — has not issued any public statement on the policy change. While Microsoft’s own support documentation on Windows updates and Copilot remains focused on technical assistance, its close integration with OpenAI’s ecosystem raises questions about whether this policy shift reflects a broader corporate strategy to normalize data-sharing across its AI products.

On Reddit’s r/OpenAI community, users have expressed alarm. "I never agreed to become a searchable node in someone else’s contact list," wrote user Banner80, whose post sparked the initial outcry. "This feels like being doxxed by default."

Legal analysts warn that this policy may violate emerging state-level privacy laws in California and Virginia, which require explicit consent for the processing of personal data for non-essential purposes. The Electronic Frontier Foundation (EFF) has announced it is reviewing the policy for potential violations of the California Consumer Privacy Act (CCPA), particularly regarding the right to opt out of "sale" or "sharing" of personal information — a provision that may apply even if OpenAI doesn’t charge for data.

As of now, OpenAI has not responded to requests for comment from major media outlets. The company’s silence, combined with the absence of any user-facing toggle or disclosure during account setup, suggests a deliberate design choice to normalize invasive data practices under the guise of "convenience." For millions of users who rely on OpenAI’s tools for work, education, and creativity, the lack of control over their digital footprint may prove to be a turning point in public trust toward AI platforms.

Until OpenAI introduces a clear, granular opt-out mechanism — or retracts the policy entirely — users are advised to review their account settings, limit personal data shared during registration, and consider alternative platforms that prioritize privacy by default. The broader implication is clear: in the race to build AI ecosystems, user consent must not be an afterthought — it must be foundational.

