Written in July 2017. Still valid one year later, now backed by some results.
What is the problem?
- The ways in which FB selects information to be presented to individual users are not well understood.
- Algorithms are like social policies; they should be available for public scrutiny.
- Users have been used as lab rats in psychological and social experiments.
- Among all the algorithms crossing our lives, Facebook's shapes how people perceive reality; besides its enormous audience, it is personalized, which makes it even harder to scrutinize.
- Facebook's Data Policy does not say how the data are treated, assimilated, analyzed, used, and re-purposed.
- Facebook claims the algorithm is so complex that no engineer can understand or explain its internals, making the algorithm the perfect technocratic scapegoat.
What are the goals?
- To observe, evaluate, and better understand the Facebook algorithm, in particular how FB leverages and profits from promoted content.
- To develop awareness of how Facebook operates the business of promoted content.
- To enable researchers from non-technical fields to analyze social media.
For users of FB
- To enable better understanding of the critical issues and their implications.
- To raise awareness around the concept of “algorithm diversity” or “the right to pick your algorithm.”
- To provide users with a persistent record of their timeline.
- To serve as a neutral actor, reliably providing research-quality data for analysis.
- To engage diverse research audiences to utilize the data, prioritizing and encouraging interdisciplinary approaches.
- Over the long term, to get FB to give users greater transparency and greater agency over how they experience the algorithm.
- To encourage FB to publish more open data describing the social phenomena happening on its platform.
How we intend to work toward the goals
- Overall, the approach is based on a broadly distributed collection of individual user experiences with FB feeds from their respective observation points, utilizing a network of volunteers and bots.
- Using a web extension, we will collect data on what each user sees on their public timeline.
- Extract metadata from the submitted data, and make this dataset available to researchers.
- Produce visualizations and other renderings of how different users experience the FB algorithm.
- Use research findings to inform both advocacy and user education.
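To make the collection step concrete, the following sketch shows what one submitted observation might look like. The field names, the salt, and the helper `build_observation` are hypothetical illustrations, not the project's actual schema: the idea is that only metadata is kept, and the supporter is identified by a salted hash rather than by their account.

```python
import hashlib
import json
from datetime import datetime, timezone

def build_observation(user_token: str, position: int, post_meta: dict) -> dict:
    """Assemble one hypothetical observation record (metadata only).

    The supporter is identified by a salted hash, so records from the
    same timeline can be grouped without exposing the account itself.
    """
    pseudonym = hashlib.sha256(("fbtrex-salt:" + user_token).encode()).hexdigest()[:16]
    return {
        "observer": pseudonym,              # pseudonymous supporter id
        "position": position,               # rank of the post in the timeline
        "seen_at": datetime.now(timezone.utc).isoformat(),
        "nature": post_meta.get("nature"),  # e.g. "organic" or "promoted"
        "source": post_meta.get("source"),  # page or friend that authored the post
    }

record = build_observation("alice@example", 3, {"nature": "promoted", "source": "SomeBrand"})
print(json.dumps(record, indent=2))
```

A record like this supports both goals at once: it gives the user a persistent trace of their timeline while keeping the post content itself out of the shared dataset.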
What needs to be done next?
Overall, we are open to advice, partnership, and collaboration with all interested parties.
Our roadmap includes the following milestones:
Improve the software
- Improve the code of the parsers. They are small, self-contained components that extract metadata from the posts seen by the user.
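A parser in this spirit might look like the minimal sketch below. The markup and class names (`author`, `sponsored`) are invented for illustration; real Facebook markup differs and changes often, which is exactly why these components need ongoing maintenance.

```python
from html.parser import HTMLParser

class PostParser(HTMLParser):
    """Self-contained component: extracts metadata from one post's HTML.

    It records whether the post is marked as sponsored and who authored
    it, while ignoring the post text itself.
    """
    def __init__(self):
        super().__init__()
        self.metadata = {"nature": "organic", "source": None}
        self._in_author = False

    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get("class", "")
        if "sponsored" in cls:      # invented marker class
            self.metadata["nature"] = "promoted"
        if "author" in cls:         # invented marker class
            self._in_author = True

    def handle_data(self, data):
        if self._in_author:
            self.metadata["source"] = data.strip()
            self._in_author = False

snippet = ('<div class="post"><span class="author">SomePage</span>'
           '<span class="sponsored">Sponsored</span>Hello</div>')
parser = PostParser()
parser.feed(snippet)
print(parser.metadata)  # {'nature': 'promoted', 'source': 'SomePage'}
```

Keeping each parser this small and independent means a markup change by Facebook breaks one component, not the whole pipeline.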
Establish community processes
- Write a visually and formally clear data policy that explains, in an exemplary way, the life-cycle of the data and the points where third parties and users interact with us.
Write an ethical agreement to:
- permit third parties to access the database of collected observations, while protecting our supporters against social media intelligence;
- stipulate that the third party shall not sell, monetize, publish, reuse, analyze, or copy the data outside the scope of the agreed analysis.
- As a safeguard, the algorithms run on the database must be declared and formalized; we execute the scripts ourselves and provide an API for the owner.
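One way to enforce this safeguard can be sketched as follows. This is an illustration under our assumptions, not the project's actual implementation: third parties may only request analyses that were declared and registered in advance, by name, and never query the raw database directly.

```python
# Registry of declared, formalized analyses. Third parties request
# these by name; any undeclared computation is refused.
DECLARED_ANALYSES = {
    "promoted_share": lambda records: (
        sum(1 for r in records if r["nature"] == "promoted") / len(records)
    ),
}

def run_declared(name: str, records: list) -> float:
    """Execute a declared analysis on behalf of a third party."""
    if name not in DECLARED_ANALYSES:
        raise PermissionError(f"analysis '{name}' was not declared")
    return DECLARED_ANALYSES[name](records)

observations = [
    {"nature": "promoted"}, {"nature": "organic"},
    {"nature": "organic"}, {"nature": "promoted"},
]
print(run_declared("promoted_share", observations))  # 0.5
```

Because the scripts run on our side, the API can return only aggregate results, and every computation performed on supporters' data stays auditable.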
Engage and support researchers
- Provide support for research organizations as early adopters.
- Give presentations, do advocacy, find collaborators, and keep making public appearances.