Elon Musk says he wants to make Twitter’s algorithm transparent in an attempt to fix this broken social network. Discrimination, hatred, fake news, manipulation, even conspiracy theories: the obstacles to a healthy alternative medium are many. And the word “transparency”, taken out of context, can mean a great deal and very little at the same time…
The billionaire’s ambitions need to be unpacked scientifically in order to assess their relevance and effectiveness. Most of us want to browse a new Twitter where life is good, but transparency that is poorly applied can be counterproductive; first of all, it needs to be well defined.
Running an application like Twitter requires not one but several algorithms. This set includes the user categorization algorithm, which classifies users according to their behavior on the platform; the content recommendation algorithm, which suggests posts to read or people to follow; and the algorithm that detects inappropriate content.
Elon Musk wants to publish the source code of the Twitter algorithm, that is, the computer programs in which these algorithms (plural, not a single algorithm) are implemented. But this publication alone may not solve the problems facing the blue-bird network.
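To make the idea of “not one but several algorithms” concrete, here is a toy sketch: three miniature stand-ins for the kinds of programs described above. None of this is Twitter’s actual code; every function name, threshold, and data structure is invented for illustration.

```python
# Toy illustration only: hypothetical stand-ins for three classes of
# platform algorithms (user categorization, recommendation, moderation).

def categorize_user(likes: int, retweets: int) -> str:
    """Classify a user by behavior, using an invented activity score."""
    activity = likes + 2 * retweets
    return "power user" if activity > 100 else "casual user"

def recommend(posts: list, user_interests: set) -> list:
    """Rank posts by overlap between their tags and a user's interests."""
    return sorted(
        posts,
        key=lambda p: len(user_interests & set(p["tags"])),
        reverse=True,
    )

def flag_inappropriate(text: str, banned_terms: set) -> bool:
    """Crude keyword filter standing in for a real moderation model."""
    lowered = text.lower()
    return any(term in lowered for term in banned_terms)
```

Even this caricature shows why publishing “the algorithm” is ambiguous: each of these components could be disclosed, or not, independently of the others.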
Multiple risks
First of all, access to the source code does not reveal how the algorithms were designed and built, nor the sometimes arbitrary choices behind them. Nor does it reveal the tests used to validate these algorithms or to assess the risk of technological discrimination. In short, having the source code in hand will not make it possible to detect biases effectively. Broader transparency, which includes, among other things, transparency about the datasets used, is therefore necessary.
Twitter should be transparent with all its users about the types of algorithms running on the platform, about what they do, and about how they use the behavioral data collected on users. It should also open up the company’s algorithmic governance by explaining the development and testing practices applied to these systems.
Finally, users should be given back the power to choose, more clearly, whether to activate or deactivate the algorithmic editorialization of content. Algorithmic personalization is what makes these tools effective, but, combined with a revenue model based solely on the attention economy, it becomes dangerous.
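The user choice described above can be sketched as a simple setting: when personalization is off, the feed falls back to plain reverse-chronological order. This is a hypothetical illustration, not Twitter’s implementation; the function and field names are invented.

```python
# Hypothetical sketch of a user-controlled personalization toggle.

def build_timeline(posts: list, personalized: bool, score=None) -> list:
    """Return the feed either algorithmically ranked (opt-in) or
    purely chronological (opt-out), based on a user setting."""
    if personalized and score is not None:
        # Editorialized feed: rank posts by an engagement-style score.
        return sorted(posts, key=score, reverse=True)
    # Opt-out: reverse-chronological order, no algorithmic curation.
    return sorted(posts, key=lambda p: p["time"], reverse=True)
```

The point of the sketch is that the opt-out path involves no hidden scoring at all, which is precisely what makes the choice meaningful to the user.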
It is worth noting, however, that some algorithms that are difficult to design and do not give Twitter a competitive edge, such as those that detect hateful or pornographic content, could be shared freely with the rest of the scientific and technological community in the hope of building much better ones.
Transparency is a good idea, but not just any transparency, and not applied just any way, at the risk of making the system we want to repair even more opaque. That would be a pity…