“Explicit video clips of children”: an experiment that tests what Instagram’s algorithms allow

Instagram’s “Reels” feature lets users watch short video clips on topics the application’s algorithms think will interest them, such as sports or entertainment.

But tests conducted by the Wall Street Journal revealed that the app, owned by Meta, applies the same approach to users whom the platform’s algorithms judge to have a prurient interest in children.

To test what the Reels algorithm would recommend, the newspaper created test accounts that followed young gymnasts, teenage cheerleading teams, and other creators of content featuring children and teenagers on the platform.

The newspaper explained that it created the experimental accounts after noticing that the thousands of followers of these young people’s accounts often include large numbers of adult men, many of whom also show an interest in sexual content involving both adults and children.

The newspaper noted that the application’s systems served these experimental accounts “shocking and obscene suggestions and content,” including “racy video clips of children,” alongside videos of an overtly sexual nature and advertisements for some of the biggest brands in the United States.

When the newspaper’s test accounts followed some of those recommended accounts, it found that “the application’s algorithms then began producing more disturbing content, interspersed with advertisements.”

In one series of videos recommended by Instagram’s algorithms, an ad for the dating app Bumble appeared between a video of a person caressing the face of a human-sized doll and another of a young girl, her face blacked out, lifting her shirt to expose her midriff.

In another instance, a Pizza Hut commercial was followed by a video of a man lying on a bed with his arm around what a comment on the video described as a 10-year-old girl.

According to the Wall Street Journal, the Canadian Centre for Child Protection ran similar independent tests and reached similar results.

In its response to the report’s findings, Meta said the tests conducted by the newspaper “produced a manufactured experience that does not represent what billions of users see.”

The company declined to comment on why the app’s algorithms group videos featuring children together with sexual content and ads, but a spokesperson said that in October it introduced new brand-safety tools that give advertisers more control over where their ads appear, and that the app removes, or reduces the prominence of, four million videos suspected of violating its standards every month.

Among the companies whose ads appeared alongside inappropriate content during the newspaper’s tests were Walmart, Disney, online dating company Match Group, and the Wall Street Journal itself, even though most brands require that their ads not appear with sexual or explicit content.

Samantha Stetson, a Meta advertising executive, responded that Instagram’s systems are “effective at reducing harmful content,” adding that the company has invested heavily in reducing it.

The newspaper quoted current and former company employees as saying that the tendency of Instagram’s algorithms to aggregate content that sexualizes children is known internally to be a problem.

Employees said that once Instagram classifies a user as interested in a particular topic, its systems keep pushing more of the same content to that user.

They said that preventing the systems from sending harmful content to such users would require major changes to the recommendation algorithms, the same algorithms that steer ordinary users toward content.

The newspaper said company documents it reviewed show that Meta safety staff are largely barred from making changes to the platform that could reduce the number of daily active users.

For example, ads encouraging users to visit Disneyland for the holidays appeared next to a video of an adult woman pretending to have sex with her father, and another of a young woman in underwear with fake blood dripping from her mouth.

The newspaper said it shared its findings with some of the advertisers, prompting those companies to question Meta, which said it would investigate the matter but gave no timetable for resolving it and did not explain how it would limit the promotion of inappropriate content in the future.
