On the same day that whistleblower Frances Haugen was testifying before Congress about Facebook and Instagram’s harms to children in the fall of 2021, Arturo Béjar, then a contractor at the social media giant, sent an alarming email to Meta CEO Mark Zuckerberg about the same topic.
In the memo, as first reported by The Wall Street Journal, Béjar, who worked as an engineering director at Facebook from 2009 to 2015, outlined a “critical gap” between how the company deals with harm and how the people who use its products, most notably young people, experience it.
“Two weeks ago, my 16-year-old daughter, who is an Instagram creator, made a post about cars, and someone commented, ‘Get back to the kitchen.’ It was deeply upsetting to her,” he wrote. “At the same time, the comment is far from being a policy violation, and our tools of blocking or deleting mean that this person will go to other profiles and continue to spread misogyny. I don’t believe policy/reporting or having more content review are the solutions.”
Béjar believes Meta needs to change how it polices its platforms, with a focus on addressing harassment, unwanted sexual advances and other bad experiences even when these issues don’t clearly violate existing policies. For example, sending lewd sexual messages to children doesn’t necessarily break Instagram’s rules, but Béjar said teens should have a way to tell the platform they don’t want to receive these types of messages.
Two years later, Béjar testified before a Senate subcommittee on Tuesday about social media and the teen mental health crisis, hoping to shed light on how Meta executives, including Zuckerberg, knew about the harm Instagram was causing but chose not to make meaningful changes to address it.
“I can safely say that Meta’s executives knew the harm that teenagers were experiencing, that there were things they could do that are very doable, and that they chose not to do them,” Béjar told The Associated Press. This, he said, shows that “we can’t trust them with our children.”
Opening the hearing Tuesday, Sen. Richard Blumenthal, a Connecticut Democrat who chairs the Senate subcommittee on privacy and technology, introduced Béjar as an engineer “widely respected and admired in the industry” who was hired specifically to help prevent harms against children but whose recommendations were ignored.
“What you brought to this committee today is something every parent needs to hear,” added Missouri Sen. Josh Hawley, the subcommittee’s ranking Republican.
Béjar pointed to user surveys showing, for example, that 13% of Instagram users ages 13 to 15 reported receiving unwanted sexual advances on the platform within the previous seven days.
In his prepared remarks, Béjar said he doesn’t believe the reforms he is proposing would significantly affect the revenue or profits of Meta and its peers. They are intended not to punish the companies, he said, but to help teenagers.
“You’ve heard the company say this is really complicated,” Béjar told the AP. “No, it’s not. Just give teens the opportunity to say ‘this content isn’t for me,’ and then use that information to train all the other systems and get feedback that makes it better.”
His testimony comes as Republicans and Democrats in Congress push to adopt regulations aimed at protecting children online.
“Every day, countless people inside and outside of Meta are working on how to help keep young people safe online,” Meta said in a statement. “The issues raised here regarding user surveys highlight one part of this effort, and surveys like these have led us to create features such as anonymous notifications of potentially hurtful content and comment warnings. Working with parents and experts, we have also introduced more than 30 tools to support teens and their families in having safe and positive experiences online. All of this work is ongoing.”
Regarding unwanted content that users encounter but that doesn’t violate Instagram’s rules, Meta points to its 2021 content distribution guidelines, which state that “problematic or low-quality” content automatically receives reduced distribution in users’ feeds. This includes clickbait, misinformation that has been fact-checked, and “borderline” posts, such as “a photo of someone posing in a sexually suggestive manner, speech that includes profanity, borderline hate speech, or gory images.”
In 2022, Meta also introduced “kindness reminders” that prompt users to be respectful in their direct messages, but these apply only to users who send message requests to a creator, not to regular users.
Béjar’s testimony comes just two weeks after dozens of U.S. states sued Meta for harming young people and contributing to the youth mental health crisis. The lawsuits, filed in state and federal courts, claim that Meta knowingly and deliberately designs features on Instagram and Facebook that addict children to its platforms.
Béjar said it is “absolutely imperative” that Congress pass bipartisan legislation “to help ensure there is transparency around these harms and that teens can get help” with the support of the right experts.
“The most effective way to regulate social media companies is to require them to develop metrics that allow both the company and third parties to evaluate and track instances of harm, as experienced by users,” he said in his prepared testimony. “This plays to the strengths of what these companies can do, because to them, data is everything.”