The problem of personalized informative experiences is not exclusive to Facebook, but affects all platforms

As our informative experience becomes tailored to our interests, and as content production and delivery become increasingly specific to particular audiences, conversation and debate, whether among friends or in a more public sphere with strangers, become harder: you can no longer be sure whether your audience shares the same background knowledge as you, or where that audience came from. Confrontation and exchange, overall, work only if we have a common ground.

The problem with Facebook content moderation

Content moderation is a complex socio-political issue which Facebook wants to address with artificial intelligence. The existing process of content moderation inside social networks, and especially inside Facebook, is closed, hidden and secret, with no accountability and no possibility of appeal. This problem was recently highlighted by the documentary The Cleaners. Adding AI will only amplify the problems of bias, lack of accountability and lack of understanding in content moderation.

At the moment, Facebook has tried to delegate some critical cases to task forces (e.g. Correctiv for the German elections). In other, more complex situations, such as regional conflicts, the decision on who gets the power to ban or delete is a decision with political consequences, and it is entirely in Facebook's hands.

For example, despite the complexities within the region, Facebook has only one content revision office for all Arabic-speaking countries, and it is situated in the United Arab Emirates. The evident socio-political problems across the Mashriq and Maghreb countries are the use cases we address to highlight the weakness of the centralized platform model.

fbTREX's approach, in this sense, enables citizens, authorities and communities to hold Facebook accountable for its moderation decisions. Mark Zuckerberg has promised multiple times, in front of the US Congress and the EU Parliament, the adoption of more sophisticated AI to address hate speech. This solutionist approach of delegating complex political tasks to AI is an additional reason to hold accountable an automated tool which experts believe cannot catch up with the complexity and diversity of the conflicts around the world. Facebook, and other initiatives financed by Facebook, Google and others, promise transparency and fairness; but considering the technological complexity and the limited access authorities can have, this is far from a fair solution for the connected population.

fbTREX engages everyday people as primary users. They are those most affected by algorithms, and those with the least amount of power and equity. We want all users to be able to audit how Facebook is shaping their informative experiences.

The problem with personalization algorithms

If we can't be on a large network without a filter, users have a right to know the nature of that filter, and some day in the future we should have enough power to determine our own algorithms. This is what we call algorithm diversity: the technical possibility to apply your own algorithm on top of your timeline. Change it when you feel the need, share it, and remix it, similarly to what free licenses such as the GPL permit. This is not something Facebook's current business model allows, but if algorithms have such an impact on your life, it makes sense that you should exert control over them.
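As an illustration of what algorithm diversity could look like in practice (a hypothetical sketch; the post fields and scoring functions are invented for this example, and nothing like this is exposed by Facebook today), a user-supplied scoring function could be applied on top of the raw timeline:

```python
def apply_my_algorithm(timeline, score):
    """Re-rank a timeline with a user-chosen scoring function."""
    return sorted(timeline, key=score, reverse=True)

# Hypothetical post records; 'likes' and 'age_hours' are illustrative fields.
timeline = [
    {"id": "a", "likes": 20,  "age_hours": 1},
    {"id": "b", "likes": 10,  "age_hours": 30},
    {"id": "c", "likes": 400, "age_hours": 5},
]

# One user's algorithm: prefer recent posts regardless of popularity.
chronological = apply_my_algorithm(timeline, lambda p: -p["age_hours"])

# Another user's remix: weight popularity against age.
balanced = apply_my_algorithm(timeline, lambda p: p["likes"] / (1 + p["age_hours"]))

print([p["id"] for p in chronological])  # ['a', 'c', 'b']
print([p["id"] for p in balanced])       # ['c', 'a', 'b']
```

Sharing or remixing an algorithm would then just mean sharing the scoring function, in the same way free licenses let people share and modify code.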

According to Facebook's confirmed strategy in early 2018, the News Feed algorithm prioritizes certain content for the user to see and deprioritizes other content based on a variety of hidden and/or arbitrary variables. In fbTREX talks, we usually quote three historical cases as a reference:

  1. Facebook's massive experiments in emotional manipulation
  2. The personal testimony of someone missing out on a tragic message
  3. The selective appearance of political uprisings such as Black Lives Matter.

We built fbTREX to give scientific accuracy to such observations and to be less dependent on anecdotes. As a first achievement, researchers are able to perform analyses. Future development would enable users themselves to understand more about the algorithm's influence, and perhaps even empower them to spot algorithmic misbehavior.

We've seen through our research that Facebook sometimes shows the same content more than once, and includes content from sources the user may not follow. It is widely known that this happens with advertising, especially targeted advertising. But it also happens for other reasons which are less well understood but have potentially harmful impacts on the user, and which we are exploring in our research.
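A minimal sketch of how such observations can be flagged in a collected timeline (the field names and data are hypothetical, not the fbTREX schema): count repeated post IDs, and mark posts whose source the user does not follow.

```python
from collections import Counter

def flag_anomalies(timeline, followed_sources):
    """Flag posts shown more than once and posts from unfollowed sources."""
    counts = Counter(post["id"] for post in timeline)
    repeated = {pid for pid, n in counts.items() if n > 1}
    unfollowed = {post["id"] for post in timeline
                  if post["source"] not in followed_sources}
    return repeated, unfollowed

timeline = [
    {"id": "p1", "source": "friend"},
    {"id": "p2", "source": "page_x"},   # the user does not follow page_x
    {"id": "p1", "source": "friend"},   # shown a second time
]
repeated, unfollowed = flag_anomalies(timeline, followed_sources={"friend"})
print(repeated)    # {'p1'}
print(unfollowed)  # {'p2'}
```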

The negative consequences that can arise from Facebook showing us what we don't necessarily want to see are known. We are also concerned about what Facebook is not showing us that we would want to see. This is a more subtle kind of censorship, because it exploits our attention economy while leaving no room or space for intervention or course-correction by the user.

Facebook's black box algorithmic system is the equivalent of Coca-Cola's secret recipe: it has a major impact on how we perceive the product, though the components are not readily discoverable. This means that public discourse and the lives of our friends are at the mercy of an unknown system with unknown motivations. In addition to the secrecy, there is also a dominant-position factor in play. A network, it is said, has a value equal to the square of its nodes: the fewer nodes a network has, the less likely users are to join it. This is a monopoly issue which, in telecommunications, has been addressed through interoperability and number portability; the efforts made by regulators so far are still distant from a truly empowering solution, and this makes Facebook's position even more ubiquitous.
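The network-value claim above is commonly attributed to Metcalfe's law, value proportional to n². A toy calculation (illustrative only; the proportionality constant and the user counts are arbitrary assumptions) shows why a much smaller rival network struggles against an incumbent:

```python
def metcalfe_value(nodes, k=1.0):
    """Metcalfe's law: network value grows with the square of its node count."""
    return k * nodes ** 2

incumbent = metcalfe_value(2_000_000_000)   # a Facebook-scale network
challenger = metcalfe_value(20_000_000)     # a network 100x smaller
# The challenger has 1% of the users but only 0.01% of the network value.
print(incumbent / challenger)  # 10000.0
```

The quadratic gap, not just the raw user-count gap, is what interoperability and portability measures try to neutralize.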

Algorithms are the technological answer to information overload: they are as powerful as necessary to manage the overflow of data that reaches us. As explained above, they can also conceal ideas and culture, and alter how users assess and judge the impact of culture and ideas as seen and shared through the digital community.

The impact on society also depends on product design; algorithms are tools for the product. Consider the Facebook Like: it is the most basic interaction, intended to measure the most uncomplicated appreciation, with an emergent property: the number of likes has become the primary measure of social engagement and acknowledgment. But as a society, what we like has nothing to do with quality, accuracy or trustworthiness. Still, this is the indicator used by algorithms, among other values, in deciding what goes viral and what should silently be forgotten. No one should be allowed to abuse such power over connected people; at this stage, consent is neither informed nor optional. In fbTREX we try not to re-use such metadata, or we would be misled by Facebook's design even in our effort to apply different metrics.

Running accurate analysis is hard, because every person has a different profile and different access habits; the algorithm's decision-making can be highlighted only by comparing how two or more users get a different experience out of the same opportunities.
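As a minimal sketch of this comparison (hypothetical data, not the actual fbTREX pipeline), the divergence between two users' timelines can be expressed as set operations over the post IDs each user was shown:

```python
def timeline_divergence(timeline_a, timeline_b):
    """Compare the sets of post IDs two users were shown.

    Returns posts only A saw, posts only B saw, and the shared core.
    """
    a, b = set(timeline_a), set(timeline_b)
    return a - b, b - a, a & b

# Two users following the same sources, observed over the same period:
user_a = ["post1", "post2", "post3", "post4"]
user_b = ["post2", "post3", "post5"]

only_a, only_b, shared = timeline_divergence(user_a, user_b)
print(sorted(only_a))  # ['post1', 'post4']
print(sorted(only_b))  # ['post5']
print(sorted(shared))  # ['post2', 'post3']
```

The posts outside the shared core are where the algorithm's differential treatment of the two users becomes observable.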

An additional issue we faced was figuring out what a fair algorithm is. As a society, we do not yet have the metrics and values: we have no way to measure it, judge it, or understand to what extent the algorithm's agenda matters more than our personal choices. But with our tests, we can begin to develop such a language.

The problems with conducting and maintaining fbTREX

Facebook keeps changing the HTML needed to allow the client (the user providing the data) to access their data.

fbTREX privacy is tricky: although we collect only public posts, knowledge of a user's timeline could leak how Facebook profiled that user, and this can reveal private information or let third parties attribute characteristics to a fbTREX user. Aware of this risk, we do not make users' contributions public at all. Only the users submitting their data have access to it, and it is their decision to opt in and share it with third parties. Because we can't solve all trust problems with technology, it is important to define precisely the security and privacy properties of this component: log access to the database and build privacy-by-design within it.
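The access rule described above can be stated as a tiny permission check (a hypothetical sketch of the policy, not fbTREX's actual code): a contributor's data is readable only by the contributor themselves, or by a third party the contributor has explicitly opted in to share with.

```python
def can_read(requester, owner, opt_ins):
    """Data is readable only by its owner or an explicitly opted-in third party.

    opt_ins maps each owner to the set of third parties they opted in to.
    """
    return requester == owner or requester in opt_ins.get(owner, set())

opt_ins = {"alice": {"university_lab"}}

print(can_read("alice", "alice", opt_ins))           # True: owner access
print(can_read("university_lab", "alice", opt_ins))  # True: explicit opt-in
print(can_read("advertiser", "alice", opt_ins))      # False: no opt-in
```

The default-deny shape of the check is the point: absent an explicit opt-in, no third party sees anything.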

We, as users, want more control and transparency: we should clearly understand the impact of our activity on our information diet. Meanwhile, the wave of misinformation leads to unnecessary pressure on publishers and platforms to obtain new powers, legitimised by public demand. We don't expect to offer a solution which fits every context, but we want to offer perspectives alternative to the one spread by the monopolist, and we especially want to enable local actors to exert their politics over their community, their data and their algorithms.

The hearing at the European Parliament, held on the 22nd of May 2018, showed many political criticisms of the Facebook model and the same amount of non-answers from Mark Zuckerberg. We feel there is a lot of space for third-party independent analysis but it is an effort which requires investment and strategy.

Considering how much social media influences our learning process...

Currently, the only way to compare a person's timeline with mine is by having access to their mobile phone.

At the moment, we see Facebook in front of Congress acknowledging responsibilities and making bold statements such as "with artificial intelligence, in 5 years we will solve the problem of hate speech and False News". This project's position is: Facebook doesn't deserve any additional function in the social sphere. Facebook already has certain responsibilities, and it fulfils them with disheartening results; we argue that third-party actors should be in charge of this instead, decentralized among the different cultures and languages.

Self-imposed limitations and data ethics

We refuse to share full access to the database with those who ask for it; that asset represents the trust of our adopters, and we pledge to protect it. At the moment, the protection model is by policy; as part of our development roadmap, we want to gradually improve the technology and offer protection by design.

By installing fbTREX, we are asking individuals to share some of their data: not their personal data, but what Facebook gives them. The goal is to study the influence of social media, not the subjects participating.

Still, much personally identifiable information can be present in such data.

The only goal of this data collection is the collective interest; transparency and fairness are two essential values. The project's ethics clearly define the limits we set for ourselves, both in collection and in analysis. We don't have a business to develop, nor a user-profiling scheme behind it.

First limitation: we observe only timelines, not individual profiles or pages. This is the difference between self-assessment and enabling social media intelligence, which could be an abusive practice, and we do not want to enable that kind of practice carelessly. We consider the timeline of public posts something linked to your individual profile; therefore, it is for us PII to protect.

Second limitation: we only store public posts on our server.

Third limitation: users who install the extension have full control over their data; they can delete what they submit whenever they want.

Fourth limitation: nobody has access to an individual's data unless the owner grants them access. This means users can opt in to every possible third party they might want to include or interact with.

Fifth limitation: if we, fbTREX, run analysis on the dataset, the analysis is developed to understand the social phenomenon of information served up in an algorithmic timeline, not specific information about the individual profiles we are studying. This can't be formally verified; therefore, during the ALEX development we would like to formulate and publish updates on the safeguards we are implementing in that research.