X's 'Open Source' Algorithm Under Scrutiny by Researchers
X's recent release of its 'for you' algorithm code has been lauded by Elon Musk as a triumph for transparency. However, independent researchers argue that the published code is heavily redacted and offers little genuine insight into the platform's recommendation mechanics.

New York, NY – In a move heralded by Elon Musk as a significant stride toward transparency, X (formerly Twitter) recently published the code underpinning its "for you" recommendation algorithm. Musk said the release would let users watch the platform's efforts to improve its often-criticized system in real time, asserting, "No other social media companies do this." However, a closer examination by technology researchers suggests the open-sourcing may be more a symbolic gesture than a genuine leap forward in algorithmic transparency.
While X stands alone among major social networks in making elements of its recommendation engine publicly accessible, experts contend that, even in 2026, the released code falls short of providing the actionable insight needed to genuinely understand its inner workings. John Thickstun, an assistant professor of computer science at Cornell University, described the published code as a "redacted" version, similar to an earlier release in 2023. "What troubles me about these releases is that they give you a pretense that they're being transparent for releasing code and the sense that someone might be able to use this release to do some kind of auditing work or oversight work," Thickstun told Engadget. "And the fact is that that's not really possible at all."
Following the code's publication, a flurry of activity erupted on X itself, with users speculating on strategies for content creators to enhance their visibility. Discussions ranged from theories suggesting that "conversational" posts and those aiming to "raise the vibrations of X" would be rewarded, to claims that posting video content or adhering strictly to a "niche" would boost reach. However, Thickstun cautioned against drawing definitive conclusions from these interpretations, emphasizing that "They can't possibly draw those conclusions from what was released." While some minor details, such as the filtering of content older than a day, were illuminated, much of the information remains "not actionable" for creators seeking to optimize their presence.
A significant structural shift in the current algorithm, compared to its 2023 predecessor, is its reliance on a large language model, akin to Grok, for ranking posts. Ruggero Lazzaroni, a PhD researcher at the University of Graz, explained the evolution: "In the previous version, this was hard coded: you took how many times something was liked, how many times something was shared, how many times something was replied … and then based on that you calculate a score, and then you rank the post based on the score." He continued, "Now the score is derived not by the real amounts of likes and shares, but by how likely Grok thinks that you would like and share a post."
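To make that shift concrete, the sketch below contrasts the two ranking styles Lazzaroni describes. It is a hypothetical illustration, not X's code: the weights, the Post fields, and the fake_predictor standing in for the Grok-like model are all assumptions made for this example.

```python
# Hypothetical sketch of the shift Lazzaroni describes. Nothing here comes
# from X's released code: the weights, fields and predictor are placeholders.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    reposts: int
    replies: int

def score_hard_coded(post: Post) -> float:
    """2023-style ranking: a weighted sum of observed engagement counts.
    The weights are placeholders, not X's actual values."""
    return 1.0 * post.likes + 2.0 * post.reposts + 5.0 * post.replies

def score_model_predicted(post: Post, predict_engagement) -> float:
    """Current-style ranking: the score comes from a model's predicted
    probabilities that a given user would like, repost or reply to the
    post, rather than from the real engagement counts."""
    p_like, p_repost, p_reply = predict_engagement(post.text)
    return 1.0 * p_like + 2.0 * p_repost + 5.0 * p_reply

def fake_predictor(text: str) -> tuple[float, float, float]:
    """Stand-in for the Grok-like model that estimates engagement."""
    return (0.3, 0.1, 0.05)

posts = [Post("hello world", 120, 10, 4), Post("hot take", 40, 30, 25)]
ranked_old = sorted(posts, key=score_hard_coded, reverse=True)
ranked_new = sorted(posts, key=lambda p: score_model_predicted(p, fake_predictor), reverse=True)
print([p.text for p in ranked_old])
print([p.text for p in ranked_new])
```

The practical difference is that the second score depends entirely on an opaque model's output rather than on publicly observable engagement counts.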
This integration of a large language model, according to Thickstun, further obfuscates the algorithm's decision-making process. "So much more of the decisionmaking … is happening within black box neural networks that they're training on their data," he stated. "More and more of the decisionmaking power of these algorithms is shifting not just out of public view, but actually really out of view or understanding of even the internal engineers that are working on these systems, because they're being shifted into these neural networks."
Furthermore, the current release omits details that were previously disclosed. In 2023, X provided insights into how it weighted various user interactions – for instance, a reply was valued at 27 retweets. However, X has now redacted such weighting information, citing "security reasons." This lack of detail extends to the training data used for the algorithm, a crucial element for researchers seeking to understand potential biases. Mohsen Foroughifar, an assistant professor of business technologies at Carnegie Mellon University, highlighted this deficiency: "One of the things I would really want to see is, what is the training data that they're using for this model. If the data that is used for training this model is inherently biased, then the model might actually end up still being biased, regardless of what kind of things that you consider within the model."
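For a sense of what the now-redacted weighting looked like, here is a toy engagement score built around the one ratio cited above from the 2023 disclosure; the base retweet weight of 1.0 and the formula itself are placeholder assumptions, not X's actual code.

```python
# Toy score in the spirit of the 2023 disclosure. Only the 27:1
# reply-to-retweet ratio comes from that release; everything else
# in this snippet is a placeholder assumption.
RETWEET_WEIGHT = 1.0
REPLY_WEIGHT = 27 * RETWEET_WEIGHT  # one reply counted as much as 27 retweets

def engagement_score(retweets: int, replies: int) -> float:
    return RETWEET_WEIGHT * retweets + REPLY_WEIGHT * replies

# A post with 27 retweets scores the same as one with a single reply.
print(engagement_score(retweets=27, replies=0) == engagement_score(retweets=0, replies=1))  # True
```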
The inability of independent researchers to thoroughly study X's recommendation algorithm carries broader implications. Lazzaroni, who is involved in an EU-funded project exploring alternative social media recommendation systems, noted that the current release lacks the components needed to reproduce the algorithm's behavior. "We have the code to run the algorithm, but we don't have the model that you need to run the algorithm," he said.
The challenges and questions surrounding social media algorithms are poised to resurface in the context of AI chatbots. "A lot of these challenges that we're seeing on social media platforms and the recommendation [systems] appear in a very similar way with these generative systems as well," observed Thickstun. "So you can kind of extrapolate forward the kinds of challenges that we've seen with social media platforms to the kind of challenges that we'll see with interaction with GenAI platforms."
Lazzaroni offered a stark perspective on the profit-driven nature of AI development, stating, "AI companies, to maximize profit, optimize the large language models for user engagement and not for telling the truth or caring about the mental health of the users. And this is the same exact problem: they make more profit, but the users get a worse society, or they get worse mental health out of it." The limited transparency offered by X's recent algorithm release thus underscores ongoing concerns about the societal impact of opaque algorithmic systems.


