HackerOne Revises Terms Amid AI Training Controversy Among Bug Hunters

HackerOne is updating its Terms and Conditions following backlash from security researchers who feared their vulnerability reports were being used to train generative AI models. The company insists researchers are not "inputs" but acknowledges the need for greater transparency.

HackerOne, the leading platform connecting organizations with ethical hackers to identify cybersecurity vulnerabilities, has announced it is revising its Terms and Conditions after mounting pressure from its community of bug hunters. Researchers expressed concern that their detailed vulnerability submissions—often including code snippets, exploit methodologies, and system configurations—were being ingested without explicit consent to train proprietary generative AI models. In response, the company has pledged to clarify its data usage policies and obtain explicit opt-in consent moving forward.

According to MSN Tech, the controversy erupted after multiple researchers posted on social media and HackerOne’s own forums questioning whether their work was being repurposed for AI development. One prominent researcher, known online as "SecAnalyst99," shared a screenshot of HackerOne’s old Terms of Service, which stated that submissions could be used for "product improvement and research." Critics argued this language was overly broad and failed to mention AI training explicitly.

HackerOne CEO Mårten Mickos addressed the concerns in a company blog post, praising the "unwavering commitment" of the security research community. "Our researchers are not inputs. They are partners in defense," Mickos wrote. "We have never intentionally trained AI models on individual reports without consent. But we acknowledge that our language was ambiguous, and that ambiguity eroded trust. That ends now."

The company confirmed it is drafting new language for its Terms and Conditions that will clearly distinguish between anonymized, aggregated data used for system improvements and individual submissions that may be used for AI training. Under the proposed update, users will be presented with a granular consent form prior to submitting a report, allowing them to choose whether their data can be used for AI development, internal research, or neither.

Industry experts say this marks a pivotal moment for the bug bounty ecosystem. "HackerOne has built its reputation on transparency and community trust," said Dr. Lena Ruiz, a cybersecurity ethics researcher at Stanford. "If they fail to handle this correctly, it could set a dangerous precedent. Other platforms may follow suit, but without the same level of accountability. This isn’t just about AI—it’s about ownership of intellectual labor in digital security."

While Wikipedia notes that HackerOne was founded in 2012 and has since facilitated over $200 million in bug bounties across more than 3,000 organizations—including major clients like Google, Microsoft, and the U.S. Department of Defense—it does not detail the company’s internal data policies. The current controversy, however, underscores a broader industry tension: as AI becomes central to cybersecurity tools, who owns the data that trains them?

HackerOne has not disclosed whether any of its existing AI models have already been trained on user-submitted reports. However, it has committed to a public audit of its data pipelines by an independent third party and plans to release a transparency report by the end of Q2 2024. The company also announced the formation of a Community Advisory Panel, composed of veteran bug hunters, to review future policy changes.

For now, submissions remain active, but new users are seeing a temporary banner on the submission portal stating: "We are reviewing our data policies. Your consent will be required before any data is used for AI training."

As the cybersecurity landscape evolves, HackerOne’s response may serve as a model—or a cautionary tale—for other platforms balancing innovation with ethical responsibility. The message from the research community is clear: trust, once broken, is not easily rebuilt.
