Project Mercury: The Shadowy Saga of Meta's Suppressed Research

A trove of unsealed court documents and whistleblower testimonies has cast a harsh light on Meta's alleged efforts to conceal internal research, codenamed "Project Mercury," which reportedly found a causal link between the use of its platforms, Facebook and Instagram, and a decline in users' mental well-being. The revelations, part of a sprawling legal battle against the social media giant, paint a damning picture of a company purportedly prioritizing profit and growth over the safety of its users, particularly teenagers.

The controversy centers on a 2020 internal study conducted in collaboration with the market research firm Nielsen. This research, known within Meta as Project Mercury, was designed to gauge the impact of deactivating Facebook and Instagram on users' mental health. The results were apparently not what the company had hoped for. According to unredacted filings in a class-action lawsuit brought by U.S. school districts, individuals who stopped using the platforms for just one week reported lower feelings of depression, anxiety, loneliness, and social comparison.

Rather than using the data to prompt a shift in strategy or greater transparency, Meta is accused of burying the findings. The company allegedly shut down further research and internally dismissed the results, attributing them to an "existing media narrative" critical of the company. However, internal communications revealed in the court documents suggest that not all employees agreed with this assessment. Some staff members privately assured Nick Clegg, Meta's then-head of global public policy, that the research conclusions were valid. One concerned employee reportedly drew a parallel to the tobacco industry, expressing unease about "doing research and knowing cigs were bad and then keeping that info to themselves."

The Whistleblower's Damning Testimony

Adding a powerful and personal voice to these allegations is Arturo Béjar, a former engineering director for Protect and Care at Facebook. In riveting testimony before a Senate Judiciary subcommittee, Béjar revealed that Meta's leadership, including CEO Mark Zuckerberg, had been warned for years about the negative impacts of their platforms on young users.

Béjar's concerns were not merely academic; they were deeply personal. He recounted his own 16-year-old daughter's "awful experiences" on Instagram, which included "repeated unwanted sexual advances" and harassment. When she reported these incidents, the platform either did nothing or responded that the content didn't violate its rules. This personal experience, he testified, highlighted a "critical gap" between Meta's stated policies and the reality of harm experienced by users.

In a 2021 email to Zuckerberg and other executives, sent on the same day that whistleblower Frances Haugen testified before Congress, Béjar detailed his concerns. He received no reply from Zuckerberg. "Meta knows the harm that kids are experiencing on their platforms," Béjar told lawmakers, "and executives know that their measures fail to address it. They are deciding time and time again not to tackle these issues."

Béjar's team at Meta had even developed a recurring survey called "Bad Emotional Experience Feedback" (BEEF). The data from this survey was alarming, revealing that 13% of Instagram users between the ages of 13 and 15 had received unwanted sexual advances in the previous week.

A Pattern of Prioritizing Growth Over Safety

The unsealed court documents from the multi-state lawsuit allege a consistent pattern of behavior at Meta that prioritized user engagement and growth over the well-being of its most vulnerable users. The filings claim that Meta intentionally designed safety features for young users to be ineffective and rarely used, and even blocked the testing of safety features that it feared might hinder growth.

The allegations extend to a disturbingly high tolerance for harmful activity. According to the lawsuit, Meta required a user to be reported 17 times for attempting to traffic people for sex before their account would be removed, a threshold described in an internal document as "a very, very, very high strike threshold."

Furthermore, the documents allege that Meta recognized that optimizing its products to increase teen engagement resulted in serving them more harmful content, but proceeded anyway. Efforts by safety staff to prevent child predators from contacting minors were allegedly stalled for years due to growth concerns. A 2021 text message from Mark Zuckerberg reportedly stated that he wouldn't say child safety was his top concern "when I have a number of other areas I'm more focused on like building the metaverse."

The Perils for Young Minds

The internal research that Meta allegedly sought to suppress is part of a growing body of evidence highlighting the potential negative impacts of social media on youth mental health. Peer-reviewed studies have long pointed to a correlation between high social media use and increased rates of anxiety, depression, and body image issues among adolescents.

Meta's own internal presentations, leaked by whistleblower Frances Haugen in 2021, revealed that the company was aware of these problems. One slide from a 2019 presentation stated, "We make body image issues worse for one in three teen girls." Another from March 2020 reported that "Thirty-two per cent of teen girls said that when they felt bad about their bodies, Instagram made them feel worse." The research also found that among teens who reported suicidal thoughts, 13% of UK users and 6% of US users traced them back to Instagram.

The lawsuit also alleges that Meta was aware that its platforms were widely used by children under the age of 13, in violation of its own policies and federal law. Internal research suggested that in 2015, there were 4 million users under 13 on Instagram.

Meta's Defense and the Broader Fallout

In response to the mounting allegations, Meta has maintained that the claims are a misrepresentation of its efforts. A company spokesperson, Andy Stone, has stated that the Project Mercury study was halted due to "methodological flaws" and that the lawsuit relies on "cherry-picked quotes and misinformed opinions." Stone has asserted that Meta has "listened to parents, researched issues that matter most, and made real changes to protect teens." The company points to its development of "30 tools to support teens and families" as evidence of its commitment to safety.

Despite these protestations, the revelations from the unsealed documents and whistleblower testimonies have intensified the legal and regulatory scrutiny facing Meta. The class-action lawsuit by school districts is just one of many legal challenges. In October 2023, 41 states and the District of Columbia filed a lawsuit against Meta, alleging that the company knowingly designed its platforms with features that are addictive and harmful to young users.

These lawsuits and the public outcry have also fueled legislative efforts to impose stricter regulations on social media companies. The Kids Online Safety Act (KOSA), a bipartisan bill aimed at protecting minors online, has gained renewed momentum in the wake of these allegations.

The saga of Project Mercury, as detailed in these explosive documents, raises profound questions about corporate responsibility and the impact of social media on society. The comparison of Meta's alleged actions to those of the tobacco industry, once a far-fetched analogy, now resonates with a chilling new relevance. As the legal battles unfold and lawmakers debate the future of online safety, the shadow of Project Mercury looms large, a stark reminder of the potential human cost of unchecked technological ambition.
