# Technique T0153.006: Content Recommendation Algorithm
* **Summary**: Many online platforms have Content Recommendation Algorithms, which promote content posted to the platform to users in service of metrics the platform operators are trying to meet. These algorithms typically surface content a user is likely to engage with, based on how they and other users have behaved on the platform (a minimal sketch of this behaviour follows below).
* **Belongs to tactic stage**: TA07
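
As an illustration of the behaviour described in the summary, here is a minimal sketch of an engagement-based recommender in Python (user-based collaborative filtering). The interaction log, user names, and similarity measure are hypothetical illustrations of the general technique, not any specific platform's implementation.

```python
# Minimal sketch of an engagement-based content recommendation algorithm.
# All interaction data here is hypothetical; real platforms use far richer
# signals, but the core "users like you engaged with this" heuristic is the same.
from collections import defaultdict
import math

# Hypothetical engagement log: (user, item) pairs.
interactions = [
    ("alice", "post1"), ("alice", "post2"),
    ("bob", "post1"), ("bob", "post3"),
    ("carol", "post2"), ("carol", "post3"), ("carol", "post4"),
]

def build_profiles(events):
    """Group engaged-with items by user."""
    profiles = defaultdict(set)
    for user, item in events:
        profiles[user].add(item)
    return profiles

def cosine(a, b):
    """Cosine similarity between two sets of items."""
    if not a or not b:
        return 0.0
    return len(a & b) / math.sqrt(len(a) * len(b))

def recommend(target, events, k=3):
    """Score unseen items by the similarity-weighted engagement of other users."""
    profiles = build_profiles(events)
    seen = profiles[target]
    scores = defaultdict(float)
    for other, items in profiles.items():
        if other == target:
            continue
        similarity = cosine(seen, items)
        for item in items - seen:
            scores[item] += similarity
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("alice", interactions))  # ['post3', 'post4']
```
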
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00097 Report: Not Just Algorithms](../../generated_pages/incidents/I00097.md) | <i>This report explores the role of four systems (recommender systems, content moderation systems, ad approval systems and ad management systems) in creating risks around eating disorders.<br><br>[...]<br><br>Content recommender systems can create risks. We created and primed ‘fake’ accounts for 16-year old Australians and found that some recommender systems will promote pro-eating disorder content to children.<br><br>Specifically: On TikTok, 0% of the content recommended was classified as pro-eating disorder content; On Instagram, 23% of the content recommended was classified as pro-eating disorder content; On X, 67% of content recommended was classified as pro-eating disorder content (and disturbingly, another 13% displayed self-harm imagery).</i><br><br>Content recommendation algorithms developed by Instagram (T0151.001: Social Media Platform, T0153.006: Content Recommendation Algorithm) and X (T0151.008: Microblogging Platform, T0153.006: Content Recommendation Algorithm) promoted harmful content to an account presenting as a 16-year-old Australian. |
| [I00108 How you thought you support the animals and you ended up funding white supremacists](../../generated_pages/incidents/I00108.md) | <i>This article examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology:<br><br>Suavelos uses Facebook and other platforms to amplify its message. In order to bypass the platforms’ community standards and keep their public pages active, Facebook pages such as “I support the police” are a good vehicle to spread a specific agenda without claiming to be racist. In looking back at this Facebook page, we followed Facebook’s algorithm for related pages and found suggested Facebook pages<br><br>[...]<br><br>This amplification strategy on Facebook is successful, as according to SimilarWeb figures, it attracts around 111,000 visits every month on the Suavelos.eu website.<br><br>[...]<br><br>Revenue through online advertisements can be achieved by different platforms through targeted advertisements, like Google Adsense or Doubleclick, or related and similar sponsored content, such as Taboola. Accordingly, Suavelos.eu uses both of these websites to display advertisements and consequently receives funding from such advertisements.<br><br>Once visitors are on the website supporting its advertisement revenue, Suavelos’ goal is to then turn these visitors into regular members of Suavelos network through donations or fees, or have them continue to support Suavelos. </i><br><br>Suavelos created a variety of pages on Facebook which presented as centring on prosocial causes. Facebook’s algorithm helped direct users to these pages (T0092: Build Network, T0151.001: Social Media Platform, T0153.006: Content Recommendation Algorithm, T0151.003: Online Community Page, T0143.208: Social Cause Persona).<br><br>Suavelos used these pages to generate traffic for their WordPress site (T0122: Direct Users to Alternative Platforms, T0152.003: Website Hosting Platform, T0152.004: Website Asset), which used accounts on a variety of online advertising platforms to host adverts (T0146: Account Asset, T0153.005: Online Advertising Platform). |
| [I00114 ‘Carol’s Journey’: What Facebook knew about how it radicalized users](../../generated_pages/incidents/I00114.md) | This report examines internal Facebook communications which reveal employees’ concerns about how the platform’s algorithm was recommending users join extremist conspiracy groups.<br><br><i>In summer 2019, a new Facebook user named Carol Smith signed up for the platform, describing herself as a politically conservative mother from Wilmington, North Carolina. Smith’s account indicated an interest in politics, parenting and Christianity and followed a few of her favorite brands, including Fox News and then-President Donald Trump.<br><br>Though Smith had never expressed interest in conspiracy theories, in just two days Facebook was recommending she join groups dedicated to QAnon, a sprawling and baseless conspiracy theory and movement that claimed Trump was secretly saving the world from a cabal of pedophiles and Satanists.<br><br>Smith didn’t follow the recommended QAnon groups, but whatever algorithm Facebook was using to determine how she should engage with the platform pushed ahead just the same. Within one week, Smith’s feed was full of groups and pages that had violated Facebook’s own rules, including those against hate speech and disinformation.<br><br>Smith wasn’t a real person. A researcher employed by Facebook invented the account, along with those of other fictitious “test users” in 2019 and 2020, as part of an experiment in studying the platform’s role in misinforming and polarizing users through its recommendations systems.<br><br>That researcher said Smith’s Facebook experience was “a barrage of extreme, conspiratorial, and graphic content.” </i><br><br>Facebook’s algorithm suggested users join groups which supported the QAnon movement (T0151.001: Social Media Platform, T0151.002: Online Community Group, T0153.006: Content Recommendation Algorithm, T0097.208: Social Cause Persona).<br><br>Further investigation by Facebook uncovered that its advertising platform had been used to promote QAnon narratives (T0146: Account Asset, T0114: Deliver Ads, T0153.005: Online Advertising Platform):<br><br><i>For years, company researchers had been running experiments like Carol Smith’s to gauge the platform’s hand in radicalizing users, according to the documents seen by NBC News.<br><br>This internal work repeatedly found that recommendation tools pushed users into extremist groups, findings that helped inform policy changes and tweaks to recommendations and news feed rankings. Those rankings are a tentacled, ever-evolving system widely known as “the algorithm” that pushes content to users. But the research at that time stopped well short of inspiring any movement to change the groups and pages themselves.<br><br>That reluctance was indicative of “a pattern at Facebook,” Haugen told reporters this month. “They want the shortest path between their current policies and any action.”<br><br>[...]<br><br>By summer 2020, Facebook was hosting thousands of private QAnon groups and pages, with millions of members and followers, according to an unreleased internal investigation.<br><br>A year after the FBI designated QAnon as a potential domestic terrorist threat in the wake of standoffs, alleged planned kidnappings, harassment campaigns and shootings, Facebook labeled QAnon a “Violence Inciting Conspiracy Network” and banned it from the platform, along with militias and other violent social movements.<br><br>A small team working across several of Facebook’s departments found its platforms had hosted hundreds of ads on Facebook and Instagram worth thousands of dollars and millions of views, “praising, supporting, or representing” the conspiracy theory.<br><br>[...]<br><br>For many employees inside Facebook, the enforcement came too late, according to posts left on Workplace, the company’s internal message board.<br><br>“We’ve known for over a year now that our recommendation systems can very quickly lead users down the path to conspiracy theories and groups,” one integrity researcher, whose name had been redacted, wrote in a post announcing she was leaving the company. “This fringe group has grown to national prominence, with QAnon congressional candidates and QAnon hashtags and groups trending in the mainstream. We were willing to act only *after* things had spiraled into a dire state.”<br><br>While Facebook’s ban initially appeared effective, a problem remained: The removal of groups and pages didn’t wipe out QAnon’s most extreme followers, who continued to organize on the platform.<br><br>“There was enough evidence to raise red flags in the expert community that Facebook and other platforms failed to address QAnon’s violent extremist dimension,” said Marc-André Argentino, a research fellow at King’s College London’s International Centre for the Study of Radicalisation, who has extensively studied QAnon.<br><br>Believers simply rebranded as anti-child-trafficking groups or migrated to other communities, including those around the anti-vaccine movement.<br><br>[...]<br><br>These conspiracy groups had become the fastest-growing groups on Facebook, according to the report, but Facebook wasn’t able to control their “meteoric growth,” the researchers wrote, “because we were looking at each entity individually, rather than as a cohesive movement.” A Facebook spokesperson told BuzzFeed News it took many steps to limit election misinformation but that it was unable to catch everything.</i> |
| [I00115 How Facebook shapes your feed](../../generated_pages/incidents/I00115.md) | This 2021 report by The Washington Post explains the mechanics of Facebook’s algorithm (T0151.001: Social Media Platform, T0153.006: Content Recommendation Algorithm):<br><br><i>In its early years, Facebook’s algorithm prioritized signals such as likes, clicks and comments to decide which posts to amplify. Publishers, brands and individual users soon learned how to craft posts and headlines designed to induce likes and clicks, giving rise to what came to be known as “clickbait.” By 2013, upstart publishers such as Upworthy and ViralNova were amassing tens of millions of readers with articles designed specifically to game Facebook’s news feed algorithm.<br><br>Facebook realized that users were growing wary of misleading teaser headlines, and the company recalibrated its algorithm in 2014 and 2015 to downgrade clickbait and focus on new metrics, such as the amount of time a user spent reading a story or watching a video, and incorporating surveys on what content users found most valuable. Around the same time, its executives identified video as a business priority, and used the algorithm to boost “native” videos shared directly to Facebook. By the mid-2010s, the news feed had tilted toward slick, professionally produced content, especially videos that would hold people’s attention.<br><br>In 2016, however, Facebook executives grew worried about a decline in “original sharing.” Users were spending so much time passively watching and reading that they weren’t interacting with each other as much. Young people in particular shifted their personal conversations to rivals such as Snapchat that offered more intimacy.<br><br>Once again, Facebook found its answer in the algorithm: It developed a new set of goal metrics that it called “meaningful social interactions,” designed to show users more posts from friends and family, and fewer from big publishers and brands. In particular, the algorithm began to give outsize weight to posts that sparked lots of comments and replies.<br><br>The downside of this approach was that the posts that sparked the most comments tended to be the ones that made people angry or offended them, the documents show. Facebook became an angrier, more polarizing place. It didn’t help that, starting in 2017, the algorithm had assigned reaction emoji — including the angry emoji — five times the weight of a simple “like,” according to company documents.<br><br>[...]<br><br>Internal documents show Facebook researchers found that, for the most politically oriented 1 million American users, nearly 90 percent of the content that Facebook shows them is about politics and social issues. Those groups also received the most misinformation, especially a set of users associated with mostly right-leaning content, who were shown one misinformation post out of every 40, according to a document from June 2020.<br><br>One takeaway is that Facebook’s algorithm isn’t a runaway train. The company may not directly control what any given user posts, but by choosing which types of posts will be seen, it sculpts the information landscape according to its business priorities. Some within the company would like to see Facebook use the algorithm to explicitly promote certain values, such as democracy and civil discourse. Others have suggested that it develop and prioritize new metrics that align with users’ values, as with a 2020 experiment in which the algorithm was trained to predict what posts they would find “good for the world” and “bad for the world,” and optimize for the former.</i><br><br>A toy sketch of this kind of engagement-weighted ranking appears below this table. |
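
The weighting behaviour described in the I00115 excerpt can be illustrated with a toy ranking function. The five-times weight for reaction emoji is taken from the report; every other weight, post, and engagement count here is a hypothetical assumption, and the whole thing is a sketch of engagement-weighted ranking in general, not Facebook's actual formula.

```python
# Toy sketch of engagement-weighted feed ranking, as described in the
# Washington Post excerpt above. Only the 5x reaction-emoji weight comes
# from the report; all other weights and data are hypothetical.
SIGNAL_WEIGHTS = {
    "like": 1.0,
    "reaction": 5.0,   # reaction emoji weighted five times a "like", per the report
    "comment": 15.0,   # assumed weight: comments/replies given "outsize weight"
    "reshare": 30.0,   # assumed weight
}

def rank_feed(posts):
    """Order candidate posts by their weighted engagement score."""
    def score(post):
        return sum(SIGNAL_WEIGHTS.get(signal, 0.0) * count
                   for signal, count in post["signals"].items())
    return sorted(posts, key=score, reverse=True)

# Hypothetical candidates: a post with many quiet likes versus a post
# drawing angry reactions and long comment threads.
candidates = [
    {"id": "calm_news",  "signals": {"like": 120, "comment": 4}},
    {"id": "angry_rant", "signals": {"like": 10, "reaction": 40, "comment": 25}},
]
for post in rank_feed(candidates):
    print(post["id"])
# Prints "angry_rant" first: despite far fewer likes, weighted reactions and
# comments dominate, showing how such ranking can favour divisive content.
```
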
| Counters | Response types |
| -------- | -------------- |
DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW