Social media pollute society. Moderation alone will not solve the problem

Moderation (automated or human) can potentially work for what we call “acute” harm: harm caused directly by individual pieces of content. But we need this new approach because there are also many “structural” problems, such as discrimination, declining mental health, and eroding social trust, that manifest broadly across the product rather than through any single piece of content. A famous example of this kind of structural problem is Facebook’s 2012 “emotional contagion” experiment, which found that users’ affect (their mood, as measured by their behavior on the platform) shifted measurably depending on the version of the product to which they were exposed.

In the backlash after the results became public, Facebook (now Meta) put an end to this kind of deliberate experimentation. But just because the company stopped measuring such effects doesn’t mean product decisions won’t continue to have them.

Structural problems are direct outcomes of product choices. Product managers at tech companies like Facebook, YouTube, and TikTok are incentivized to focus overwhelmingly on maximizing time and engagement on their platforms. And plenty of experimentation still goes on: almost every product change is rolled out through randomized controlled trials on a small test audience. To assess progress, companies use rigorous goal-setting processes (known as Objectives and Key Results, or OKRs) to advance their central missions, even using the results to determine bonuses and promotions. Responsibility for addressing the impact of product decisions is often placed on other teams, which are usually downstream and have less authority to address root causes. Those teams are generally able to respond to acute harm, but often cannot resolve problems caused by the products themselves.
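
To make that experimentation pipeline concrete, here is a minimal sketch (not drawn from the article; the names assign_variant and wellbeing, and all the numbers, are hypothetical) of how the same randomized controlled trial that measures an engagement lift could, in principle, log a diagnostic metric alongside it:

```python
import random
from dataclasses import dataclass

@dataclass
class VariantTotals:
    """Per-variant totals for a hypothetical product A/B test."""
    users: int = 0
    minutes: float = 0.0     # engagement: what teams are rewarded for moving
    wellbeing: float = 0.0   # diagnostic: what could be tracked right beside it

def assign_variant(user_id: int) -> str:
    # Deterministic bucketing, as in a typical randomized rollout.
    return "treatment" if user_id % 2 else "control"

totals = {"control": VariantTotals(), "treatment": VariantTotals()}
random.seed(0)
for user_id in range(10_000):
    variant = assign_variant(user_id)
    # In a real system these would be logged behaviors and survey responses,
    # not random draws; here the simulated treatment lifts engagement slightly
    # while nudging the wellbeing measure down.
    minutes = random.gauss(30, 10) + (2 if variant == "treatment" else 0)
    wellbeing = random.gauss(0, 1) - (0.05 if variant == "treatment" else 0)
    t = totals[variant]
    t.users += 1
    t.minutes += minutes
    t.wellbeing += wellbeing

for name, t in totals.items():
    print(f"{name}: avg minutes={t.minutes / t.users:.1f}, "
          f"avg wellbeing={t.wellbeing / t.users:+.3f}")
```

The point of the sketch is that the measurement machinery is shared; what differs is only which of the logged metrics a team is held accountable for.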

With attention and focus, the same product development structure could be directed at the issue of societal harm. Consider Frances Haugen’s congressional testimony last year, along with media revelations about Facebook’s alleged impact on teen mental health. Facebook responded to the criticism by explaining that it had studied whether teenagers felt the product had a negative effect on their mental health and whether that perception led them to use the product less, not whether the product actually had an adverse effect. While the response may have addressed that particular controversy, it illustrated that a study focused directly on mental health itself, rather than on its impact on user engagement, would not be a big stretch.

Incorporating assessments of systemic harm will not be easy. We need to figure out what we can actually measure rigorously and systematically, what we should expect from companies, and which issues such assessments should prioritize.

Companies could implement such protocols themselves, but their financial interests are too often in conflict with meaningful restrictions on product development and growth. That reality is the standard case for regulation that acts on behalf of the public. Whether it takes the form of a new regulatory mandate from the Federal Trade Commission or harm reduction guidelines from a new government agency, the regulator’s job would be to work with tech companies’ product development teams to design workable protocols, measurable during the course of product development, to assess meaningful signs of harm.

That approach may sound cumbersome, but adding these kinds of protocols should be straightforward for the largest companies (the only ones that should be regulated), because they already have randomized controlled trials built into their development process to measure the efficacy of product changes. The more time-consuming and complex part would be defining the standards; the actual conduct of the tests would require no regulatory participation at all. It would only be necessary to ask diagnostic questions alongside the normal growth-related ones and then make that data accessible to external reviewers. Our forthcoming paper at the 2022 ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization will explain this procedure in more detail and outline how it could be established effectively.
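
As a sketch of what asking diagnostic questions alongside growth questions could look like in practice, here is one hypothetical, machine-readable experiment report; the schema, the metric names, and the numbers are all illustrative assumptions, not anything the paper specifies:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class MetricDelta:
    """Treatment-vs-control change for one metric, with a 95% confidence interval."""
    name: str
    relative_change: float  # +0.02 means a 2% lift over control
    ci_low: float
    ci_high: float

@dataclass
class ExperimentReport:
    experiment_id: str
    growth_metrics: list[MetricDelta]      # what teams already measure
    diagnostic_metrics: list[MetricDelta]  # the added harm-focused questions

report = ExperimentReport(
    experiment_id="feed-ranking-v42",  # hypothetical experiment name
    growth_metrics=[
        MetricDelta("daily_active_minutes", 0.021, 0.012, 0.030),
    ],
    diagnostic_metrics=[
        MetricDelta("self_reported_wellbeing", -0.004, -0.011, 0.003),
        MetricDelta("cross_partisan_exposure", 0.001, -0.006, 0.008),
    ],
)

# Serialize to a form an external reviewer could audit without access
# to any raw user data.
print(json.dumps(asdict(report), indent=2))
```

Because only aggregated deltas would ever leave the company, a report like this could sit at the boundary between internal experimentation and external review.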

When products that reach tens of millions of users are tested for their ability to increase engagement, companies would have to ensure that those products, at least in aggregate, also satisfy a “don’t make the problem worse” principle. Over time, more aggressive standards could be set to roll back the existing effects of already approved products.
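
One way to operationalize that principle (again my illustration, with made-up numbers) is a one-sided gate on each diagnostic metric: a change ships only if the confidence interval rules out a meaningful increase in harm.

```python
def passes_no_worse_gate(harm_delta_ci_upper: float, tolerance: float = 0.0) -> bool:
    """'Don't make the problem worse' as a one-sided check: ship only if the
    upper confidence bound on the change in a harm metric (treatment minus
    control) does not exceed the tolerance. tolerance=0.0 demands that the
    data rule out any measurable increase."""
    return harm_delta_ci_upper <= tolerance

# Hypothetical launch: the 95% CI for the change in "reported harassment
# per 1,000 sessions" is (-0.2, +0.9).
print(passes_no_worse_gate(0.9))       # False: an increase can't be ruled out
print(passes_no_worse_gate(0.9, 1.0))  # True under a looser tolerance
```

Where exactly to set the tolerance, and how to tighten it over time, is precisely the kind of standard a regulator would be responsible for defining.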
