NPA 2022: Thoughtful design is more effective than AI moderation against misinformation on social media

Author: Kave Noori

The 7th edition of the Nordic Privacy Arena (NPA) was held on September 26-27, 2022. The conference included 23 agenda items on data protection. This is article 2 of 3 highlighting lectures and discussions. Today we cover a keynote speech from the first day, in which the speaker argued that thoughtful design of social media platforms does more than AI-driven moderation to make them a safe space.

Rose-Mharie Åhlfeldt, board member of the Swedish Data Protection Forum, introduced the speaker, Frances Haugen, a former Facebook employee turned whistle-blower and an advocate for accountability and transparency in social media.

Frances Haugen began by telling the audience that in the fall of 2021 she blew the whistle on Facebook by disclosing 22,000 pages of internal documents. The documents showed that Facebook's algorithms were harming children's mental health, distorting how political parties chose which ideas to push, and fuelling ethnic violence. She has also testified before the United States Congress.

She went on to say that the conflict between social media platforms and user safety and privacy stems from the current social media business model. Frances argued that Facebook's chosen approach of AI censorship to make the Internet a safe place is flawed: in her view, AI moderation is ineffective and distracts us from more important strategies for keeping people safe online. She explained that AI systems always face a trade-off between accuracy and recall, and that Facebook made an understandable choice given that trade-off, opting to remove only a small share of bad content (3-5 %) rather than tuning the system so aggressively that it would also remove a lot of good content. The most dangerous part, she added, is that AI censorship only works reasonably well in certain countries such as the US, Germany and France, and it does not work for smaller languages like the Nordic ones.
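
To make this trade-off concrete, here is a minimal sketch of what machine-learning practitioners usually call the precision/recall trade-off. The scores, labels and thresholds below are entirely made up for illustration; they are not Facebook data and do not reflect Facebook's actual moderation logic.

```python
# Hypothetical illustration of the moderation trade-off Haugen describes.
# A high threshold removes little harmful content (low recall) but rarely hits
# legitimate posts (high precision); a low threshold catches more harmful
# content but also takes down more good content.

posts = [
    # (model score that the post is harmful, is it actually harmful?)
    (0.97, True), (0.91, True), (0.85, False), (0.78, True),
    (0.64, False), (0.59, True), (0.42, False), (0.35, True),
    (0.21, False), (0.08, False),
]

def precision_recall(threshold):
    """Precision and recall if every post scoring at or above `threshold` is removed."""
    removed = [harmful for score, harmful in posts if score >= threshold]
    true_positives = sum(removed)
    total_harmful = sum(harmful for _, harmful in posts)
    precision = true_positives / len(removed) if removed else 1.0
    recall = true_positives / total_harmful
    return precision, recall

for threshold in (0.9, 0.5):
    p, r = precision_recall(threshold)
    print(f"threshold {threshold}: precision {p:.2f}, recall {r:.2f}")
```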

Frances said that in countries like Ethiopia and Myanmar, Facebook is the Internet for many people, because the company has subsidized the construction of Internet connections. As a result, most of the content available online is only accessible through Facebook. At the same time, Facebook tries to prevent competitors from offering people alternatives, with disastrous consequences.

In Myanmar and Ethiopia, many people have died as a result of misinformation and ethnic violence fuelled by their governments. Officials from these governments were sent to Russia and trained in information operations, and they then exploited vulnerabilities in the system, such as the lack of support for local languages in Facebook's AI moderation algorithms, to sow chaos and violence. In Myanmar, 200,000 people died as a result of the government-run information operations; in Ethiopia, hundreds of thousands of people died as a result of misinformation and ethnic violence.

According to Frances, Facebook's own studies have shown that users who see more content from friends and family on the site are less likely to encounter violence, hate speech or nudity. The problem is that Facebook's business model is based on users consuming content, which gives the company an incentive to push content at users that does not necessarily come from their friends and family. This has led to problems where the algorithm often pushes users toward extreme groups.

Frances Haugen said that in 2016, 65 % of people in Germany who joined neo-Nazi groups on Facebook did so because Facebook had adjusted its algorithm to get users to join more groups. Facebook went further by also counting group invitations the user had not yet accepted as “joins.” This created a channel for misinformation, as people who engaged with a group's content in the first 30 days were counted as “members” even if they had never actually joined the group.

Haugen also challenged the notion that Facebook needs to see everything to keep its virtual space safe. She pointed out that in many cases people do not feel safe because of how a space is designed, and that this can be changed by designing it differently and more thoughtfully. She drew an analogy to the design of offline spaces:

”When people go into a church, it’s loud, it echoes. As a result, it makes us feel small, it makes us feel quiet… Right? It’s a space for contemplation, because if you talk, you’ll hear the echo while if you go into a conference room, they’re designed to be intimate. They’re meant to foster a sense of safety. They’re meant to make people feel like they can talk as much as they want. They don’t have to shout because it’s quiet.”

Frances went on to say that if you set content moderation aside and focus on the design of the user interface instead, a different Facebook interface could actually lead to less spread of misinformation. Simple things like adding a human in the loop, requiring someone to click on a link before sharing it, or adding a moment of reflection before sharing could reduce the spread of misinformation by 10-15 %. She also pointed out that the effectiveness of these countermeasures does not depend on what language the content is written in.
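
As a rough illustration of the kind of friction she describes, here is a minimal sketch of a share flow that refuses to reshare a link the user has never opened and asks for a moment of reflection first. The function names and the confirmation step are hypothetical and are not Facebook's actual implementation.

```python
# Hypothetical sketch of "friction" in a share flow: the user must have opened
# the link, and is asked to pause, before a reshare goes through.

def confirm(prompt: str) -> bool:
    """Ask the user to pause and confirm; stands in for a real UI dialog."""
    return input(f"{prompt} (y/n) ").strip().lower() == "y"

def share_link(url: str, opened_links: set) -> bool:
    """Return True only if the share is allowed to go through."""
    if url not in opened_links:
        print("Please open the article before sharing it.")
        return False
    if not confirm("Take a moment: do you still want to share this?"):
        return False
    print(f"Shared {url}")
    return True

# Example: a reshare of a link the user never opened is blocked outright.
history = {"https://example.com/read-article"}
share_link("https://example.com/unread-article", history)
```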

Frances Haugen imagined what would happen if Facebook were owned by its users, who had voting rights, and were run as a decentralized, autonomous, non-profit organization. She suggested that this would lead to a Facebook that looks more like it did in 2008, with a focus on family and friends.

Further, she said that end-to-end encryption, which prevents the platform from seeing what users are communicating about, would be another measure that makes social media more secure, since it nudges users to focus their communication on family and friends.

Frances said we know from conversations with law enforcement officers that social media is an open directory for predators to contact children. She argued that there are safety features that could make social media safer for children, such as the app warning a child when they interact with an “outlier”, but that these measures run counter to the business interests of social media companies, which need more and more users to consume more and more content.

Frances said that car companies were reluctant to talk about safety in the 1960s because it would remind people that cars are dangerous. She argued that there are many safety features that could be implemented on platforms like Instagram and TikTok, but that it is now those platforms that do not want to remind users that their children could be in danger. Frances believes social media has the potential to be both safe and private, and she urged that conversations about how to design such safer and more personal social networks need to happen now.

Read more about the speaker

Frances Haugen has a website, https://www.franceshaugen.com/, with links to her social media accounts on Twitter and Instagram.