Sept. 21 (Reuters) – Facebook Inc (FB.O) said on Tuesday it has invested more than $13 billion in safety and security measures since 2016, days after a newspaper reported that the company had failed to correct adverse effects of its platform identified by its own researchers.
The social media giant said it now has 40,000 people working on safety and security, up from 10,000 five years ago.
Facebook downplayed the negative effects of its Instagram app on young users and responded poorly to employee alarms about how its platform is used by human traffickers in developing countries, The Wall Street Journal reported last week, citing internal company documents.
“In the past, we didn’t address safety and security challenges early enough in the product development process,” the company said in a blog post. “But we have fundamentally changed that approach.”
Facebook said its artificial intelligence technology helped it block 3 billion fake accounts in the first half of this year. The company has also removed more than 20 million pieces of false COVID-19 and vaccine content.
The company said it now removes 15 times more content that violates its hate speech standards on Facebook and its image-sharing platform Instagram than when it began reporting the figure in 2017.
Reporting by Praveen Paramasivam in Bengaluru; Editing by Maju Samuel
Our standards: Thomson Reuters Trust Principles.