Our vision is to increase transparency around personalization algorithms, so that people can exercise more effective control over their Facebook experience and gain more awareness of the information to which they are exposed.
Our mission is to help researchers assess how current filtering mechanisms work and how personalization algorithms should be modified in order to minimize their dangerous social effects.
Since its launch in 2004, Facebook has grown into the largest social networking service online. With almost 2 billion active users a month, the data generated by the community makes it extremely difficult for users to acquire and process relevant information in a meaningful way. To overcome the problem of information overload, Facebook has deployed personalization algorithms.
Personalization algorithms are tools that customize information by determining its importance for a specific user. On the basis of a user's identity and interactions, specific pieces of information are preferred while others are hidden from her view. By deploying this filtering mechanism, Facebook ensures that each user has a meaningful experience of the service rather than facing an unorganized information flow.
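The mechanism described above can be illustrated with a minimal sketch. Facebook's actual algorithms are proprietary and vastly more complex; the function names, the topic-based profile, and the scoring rule here are illustrative assumptions only.

```python
# Hypothetical sketch of preference-based filtering: score each item by
# how strongly its topics match the user's inferred interest profile,
# then show only the best matches and hide the rest.

def relevance(item_topics, profile):
    """Sum the profile weights of the topics an item touches (assumed scoring rule)."""
    return sum(profile.get(topic, 0) for topic in item_topics)

def personalize(feed, profile, top_k=2):
    """Keep only the top_k items most aligned with the profile; hide the rest."""
    ranked = sorted(feed, key=lambda item: relevance(item["topics"], profile),
                    reverse=True)
    return ranked[:top_k]

# Weights inferred from past interactions (hypothetical data).
profile = {"sports": 3, "music": 1}
feed = [
    {"id": 1, "topics": ["sports"]},
    {"id": 2, "topics": ["politics"]},
    {"id": 3, "topics": ["music", "sports"]},
]

print([item["id"] for item in personalize(feed, profile)])  # → [3, 1]
```

Note that item 2 is silently dropped: the user never learns that a piece of political information existed, which is precisely the property discussed in the following paragraphs.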
Personalization algorithms can be judged positively for their potential to address basic human needs in the information era: reducing information overload, increasing the scope and usability of information, and supporting users' autonomy in making informed choices. However, while algorithms facilitate the management of information, recent research has shown evidence that this benefit is largely offset by the negative social and individual effects of filtering.
The design of personalization algorithms is not just a technical matter, but a political one. We do not claim that personalization algorithms are bad; we claim that bad personalization algorithms are bad.
Facebook's filtering happens silently, before the user ever interacts with the information. The user has no active control over what she is exposed to; this is settled by the algorithm according to her digital identity. As a result, filtering creates a bubble: a bundle containing only the information that matches her preferences.
This phenomenon, known as "the filter bubble", "echo chamber", or "information cocoon", actually decreases users' autonomy in making informed choices. By reducing the quality and variety of the information the user consumes inside her personal bubble, it potentially damages her ability to critically evaluate and engage with contrary opinions, especially in the context of public debates.
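The self-reinforcing dynamic behind the bubble can be sketched as a simple feedback loop. This is a toy model under stated assumptions, not a description of Facebook's actual system: we assume the filter shows the single best-matching topic each round and that every exposure reinforces the profile.

```python
# Toy feedback loop: show what matches the profile, then strengthen the
# profile by what was shown. Diversity collapses even from a small
# initial imbalance.
from collections import Counter

def step(profile, catalog):
    """Show the currently dominant topic, then reinforce it (assumed dynamics)."""
    shown = max(catalog, key=lambda topic: profile[topic])
    profile[shown] += 1
    return shown

catalog = ["sports", "politics", "science"]
profile = Counter({"sports": 2, "politics": 1, "science": 1})  # slight initial bias

seen = [step(profile, catalog) for _ in range(5)]
print(seen)  # the same dominant topic wins every round
```

A slight initial preference for one topic is enough for it to monopolize the feed, which is the sense in which filtering narrows the variety of information a user encounters.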
Facebook's filtering acts on the basis of an identity that is treated as fixed across different contexts: many interactions, one filter. But the interactions that shape a user's identity are much more contextual than the algorithm takes them to be.
As individuals, we know that information shared in one context is often inappropriate to share in another. That is why we have different disclosure expectations for the same piece of information depending on the situation or the people involved. Our ability to keep track of the flow of information we send and receive across different contexts is what makes privacy valuable for our individual autonomy.
However, given Facebook's treatment of identity, personalization algorithms tend to filter out information that is important in one context merely because it is not relevant according to the general profile, potentially damaging our ability to exercise our privacy preferences and to recognize privacy's personal and social value.
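The contrast between one global profile and context-sensitive filtering can be made concrete with a small sketch. All profiles, thresholds, and names here are hypothetical assumptions used only to illustrate the argument.

```python
# Hypothetical contrast: a single global profile vs. per-context profiles.
# A one-size-fits-all filter can drop an item that matters greatly in
# one specific context.

global_profile = {"tech": 5, "family": 1}          # identity averaged across contexts
context_profiles = {
    "work": {"tech": 5},
    "home": {"family": 5},
}

def passes(item_topic, profile, threshold=2):
    """Assumed rule: an item is shown only if its topic weight meets the threshold."""
    return profile.get(item_topic, 0) >= threshold

item = "family"  # e.g. a relative's important announcement
print(passes(item, global_profile))            # False: hidden by the global filter
print(passes(item, context_profiles["home"]))  # True: relevant in its proper context
```

The global profile dilutes the "family" signal below the threshold, so the item is hidden even though, at home, it is exactly what the user would want to see.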
Facebook's personalization algorithms are not yet open to public scrutiny. But the public has the right to demand transparent disclosure of the rules by which those algorithms silently shape the way we manage information, both as individuals and as societies. This is valuable not only for individuals and societies, but also for Facebook itself. In the long run, the effects of those algorithms can disrupt the value underlying Facebook's business partnerships with firms: by narrowing users' preferences, personalization algorithms might create a type of customer hostile to the advertising and testing of new products that do not fit her identity expectations.