Technique T0151.001: Social Media Platform
Summary: Examples of popular Social Media Platforms include Facebook, Instagram, and VK.
Social Media Platforms allow users to create Accounts, which they can configure to present themselves to other platform users. This typically involves Establishing Account Imagery and Presenting a Persona.
Social Media Platforms typically allow the creation of Online Community Groups and Online Community Pages.
Accounts on Social Media Platforms are typically presented with a feed of content posted to the platform. The content that populates this feed can be aggregated by the platform’s proprietary Content Recommendation Algorithm, or users can “friend” or “follow” other accounts to add their posts to their feed.
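The two feed-population routes described above (posts from followed accounts, topped up by a recommendation algorithm) can be illustrated with a minimal sketch. This is a toy model for analysts, not any platform's actual implementation; the `engagement` field stands in for whatever proprietary signals a real recommender scores on.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    text: str
    engagement: int  # stand-in for a recommender's proprietary ranking signal

@dataclass
class Account:
    handle: str
    following: set = field(default_factory=set)

def build_feed(account, all_posts, recommend=2):
    """Populate a feed from followed accounts, then top it up with
    'recommended' posts from unfollowed accounts, ranked by engagement."""
    followed = [p for p in all_posts if p.author in account.following]
    candidates = [p for p in all_posts if p.author not in account.following]
    recommended = sorted(candidates, key=lambda p: p.engagement,
                         reverse=True)[:recommend]
    return followed + recommended
```

The sketch makes the abuse surface visible: content an account never opted into can still reach its feed purely because the recommendation pathway scores it highly.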
Many Social Media Platforms also allow users to send direct messages to other users on the platform.
Belongs to tactic stage: TA07
Incident | Descriptions given for this incident |
---|---|
I00097 Report: Not Just Algorithms | This report explores the role of four systems (recommender systems, content moderation systems, ad approval systems and ad management systems) in creating risks around eating disorders. [...] Content recommender systems can create risks. We created and primed ‘fake’ accounts for 16-year-old Australians and found that some recommender systems will promote pro-eating disorder content to children. Specifically: On TikTok, 0% of the content recommended was classified as pro-eating disorder content; On Instagram, 23% of the content recommended was classified as pro-eating disorder content; On X, 67% of content recommended was classified as pro-eating disorder content (and disturbingly, another 13% displayed self-harm imagery). Content recommendation algorithms developed by Instagram (T0151.001: Social Media Platform, T0153.006: Content Recommendation Algorithm) and X (T0151.008: Microblogging Platform, T0153.006: Content Recommendation Algorithm) promoted harmful content to an account presenting as a 16-year-old Australian. |
I00102 Ignore The Poway Synagogue Shooter’s Manifesto: Pay Attention To 8chan’s /pol/ Board | On April 27, 2019, at around 11:30 a.m. local time, a young man with a semi-automatic rifle walked into the Chabad of Poway Synagogue in Poway, California. He opened fire, killing one worshipper and wounding three others. In the hours since the shooting, a manifesto, believed to be written by the shooter, began circulating online. Evidence has also surfaced that, like the Christchurch Mosque shooter, this killer began his rampage with a post on 8chan’s /pol/ board. Although both of these attacks may seem different, since they targeted worshippers of different faiths, both shooters were united by the same fascist ideology. They were also both radicalized in the same place: 8chan’s /pol/ board. This has been corroborated by posts on the board itself, where “anons,” as the posters call themselves, recirculated the shooter’s since-deleted post. In it, the alleged shooter claims to have been “lurking” on the site for a year and a half. He includes a link to a livestream of his rampage — which thankfully does not appear to have worked — and he also includes a pastebin link to his manifesto. The very first response to his announcement was another anon cheering him on and telling him to “get the high score,” AKA, kill a huge number of people. Before carrying out a mass shooting, the shooter posted a thread to 8chan’s /pol/ board. 
The post directed users to a variety of different platforms (T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0122: Direct Users to Alternative Platforms); a Facebook account on which the shooter attempted to livestream the shooting (T0146: Account Asset, T0151.001: Social Media Platform); and a manifesto they had written hosted on pastebin (T0146.006: Open Access Platform, T0152.005: Paste Platform, T0115: Post Content) and uploaded to the file sharing platform Mediafire (T0152.010: File Hosting Platform, T0085.004: Develop Document). The report looks deeper into 8chan’s /pol/ board: 8chan is a large website, which includes a number of different discussion groups about everything from anime to left-wing politics. /pol/ is one particularly active board on the website, and it is best described as a gathering place for extremely online neo-Nazis. [...] I’ve browsed /pol/ on an almost daily basis since the Christchurch shooting. It has not been difficult to find calls for violence. On Monday, March 25 of this year, I ran across evidence of anons translating the Christchurch shooter’s manifesto into other languages in an attempt to inspire more shootings across the globe. This tactic can work, and today’s shooting is proof. The Poway Synagogue shooter directly cited the Christchurch shooter as his inspiration, saying he decided to carry out his attack roughly two weeks after that shooting. On /pol/, many anons refer to the Christchurch shooter, Brenton Tarrant, as “Saint Tarrant,” complete with medieval-inspired iconography. Manifestos posted to 8chan are translated and reshared by other platform users (T0101: Create Localised Content, T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0084.004: Appropriate Content). 
When I began looking through /pol/ right after the Poway Synagogue shooting, I came across several claims that the shootings had been a “false flag” aimed at making the message board look bad. When Bellingcat tweeted out a warning about shitposting and the shooter’s manifesto, in the immediate wake of the attack, probable anons even commented on the Tweet in an attempt to deny that a channer had been behind the attack. This is a recognizable pattern that occurs in the wake of any crimes committed by members of the board. While the initial response to the Christchurch shooter’s massacre thread was riotous glee, in the days after the shooting many anons began to claim the attack had been a false flag. This actually sparked significant division and debate between the members of /pol/. In the below image, a user mocks other anons for being unable to “believe something in your favor is real.” Another anon responds, “As the evidence comes out, its [sic] quite clear that this was a false flag.” In his manifesto, the Poway Synagogue shooter even weighed in on this debate, accusing other anons who called the Christchurch and Tree of Life Synagogue shootings “false flags” of merely having been scared: “They can’t fathom that there are brave White men alive who have the willpower and courage it takes to say, ‘Fuck my life—I’m willing to sacrifice everything for the benefit of my race.’” Platform users deny that their platform has been used by mass shooters to publish their manifestos (T0129.006: Deny Involvement). |
I00106 Facebook Is Being Flooded With Gross AI-Generated Images of Hurricane Helene Devastation | As families desperately seek to find missing loved ones and communities grapple with immeasurable losses of both life and property in the wake of [2024’s] Hurricane Helene, AI slop scammers appear to be capitalizing on the moment for personal gain. A Facebook account called "Coastal Views" usually shares calmer AI imagery of nature-filled beachside scenes. The account's banner image showcases a signpost reading "OBX Live," OBX being shorthand for North Carolina's Outer Banks islands. But starting this weekend, the account shifted its approach dramatically, as first flagged by a social media user on X. Instead of posting "photos" of leaping dolphins and sandy beaches, the account suddenly started publishing images of flooded mountain neighborhoods, submerged houses, and dogs sitting on top of roofs. But instead of spreading vital information to those affected by the natural disaster, or at the very least sharing real photos of the destruction, the account is seemingly trying to use AI to cash in on all the attention the hurricane has been getting. The account links to an Etsy page for a business called "OuterBanks2023," where somebody who goes by "Alexandr" sells AI-generated prints of horses touching snouts with sea turtles, Santa running down the shoreline with a reindeer, and sunsets over ocean waves. A Facebook page which presented itself as being associated with North Carolina, and which had previously posted AI-generated images of nature scenes, changed to posting AI-generated images of hurricane damage after Hurricane Helene hit North Carolina (T0151.003: Online Community Page, T0151.001: Social Media Platform, T0115: Post Content, T0086.002: Develop AI-Generated Images (Deepfakes), T0068: Respond to Breaking News Event or Active Crisis). 
The account included links (T0122: Direct Users to Alternative Platforms) to an account on Etsy, which sold prints of AI generated images (T0146: Account Asset, T0148.007: eCommerce Platform). |
I00108 How you thought you support the animals and you ended up funding white supremacists | This article examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology: Suavelos uses Facebook and other platforms to amplify its message. In order to bypass the platforms’ community standards and keep their public pages active, Facebook pages such as “I support the police” are a good vehicle to spread a specific agenda without claiming to be racist. In looking back at this Facebook page, we followed Facebook’s algorithm for related pages and found suggested Facebook pages [...] This amplification strategy on Facebook is successful, as according to SimilarWeb figures, it attracts around 111,000 visits every month on the Suavelos.eu website. [...] Revenue through online advertisements can be achieved by different platforms through targeted advertisements, like Google Adsense or Doubleclick, or related and similar sponsored content, such as Taboola. Accordingly, Suavelos.eu uses both of these websites to display advertisements and consequently receives funding from such advertisements. Once visitors are on the website supporting its advertisement revenue, Suavelos’ goal is to then turn these visitors into regular members of Suavelos network through donations or fees, or have them continue to support Suavelos. Suavelos created a variety of pages on Facebook which presented as centring on prosocial causes. Facebook’s algorithm helped direct users to these pages (T0092: Build Network, T0151.001: Social Media Platform, T0153.006: Content Recommendation Algorithm, T0151.003: Online Community Page, T0143.208: Social Cause Persona). 
Suavelos used these pages to generate traffic for their WordPress site (T0122: Direct Users to Alternative Platforms, T0152.003: Website Hosting Platform, T0152.004: Website Asset), which used accounts on a variety of online advertising platforms to host adverts (T0146: Account Asset, T0153.005: Online Advertising Platform). |
I00109 Coordinated Facebook Pages Designed to Fund a White Supremacist Agenda | This report examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at the Suavelos website, and the content it links out to. In going back to Suavelos’ main page, we also found: A link to a page on a web shop: alabastro.eu; A link to a page to donate money to the founders through Tipee and to the website through PayPal; [and] a link to a private forum that gathers 3.000 members: oppidum.suavelos.eu; Suavelos linked out to an online store which it controlled (T0152.004: Website Asset, T0148.004: Payment Processing Capability), and to accounts on payment processing platforms PayPal and Tipee (T0146: Account Asset, T0148.003: Payment Processing Platform). The Suavelos website also hosted a private forum (T0151.009: Legacy Online Forum Platform, T0155: Gated Asset), and linked out to a variety of assets it controlled on other online platforms: accounts on Twitter (T0146: Account Asset, T0151.008: Microblogging Platform), YouTube (T0146: Account Asset, T0152.006: Video Platform), Instagram and VKontakte (T0146: Account Asset, T0151.001: Social Media Platform). |
I00114 ‘Carol’s Journey’: What Facebook knew about how it radicalized users | This report examines internal Facebook communications which reveal employees’ concerns about how the platform’s algorithm was recommending users join extremist conspiracy groups. In summer 2019, a new Facebook user named Carol Smith signed up for the platform, describing herself as a politically conservative mother from Wilmington, North Carolina. Smith’s account indicated an interest in politics, parenting and Christianity and followed a few of her favorite brands, including Fox News and then-President Donald Trump. Though Smith had never expressed interest in conspiracy theories, in just two days Facebook was recommending she join groups dedicated to QAnon, a sprawling and baseless conspiracy theory and movement that claimed Trump was secretly saving the world from a cabal of pedophiles and Satanists. Smith didn’t follow the recommended QAnon groups, but whatever algorithm Facebook was using to determine how she should engage with the platform pushed ahead just the same. Within one week, Smith’s feed was full of groups and pages that had violated Facebook’s own rules, including those against hate speech and disinformation. Smith wasn’t a real person. A researcher employed by Facebook invented the account, along with those of other fictitious “test users” in 2019 and 2020, as part of an experiment in studying the platform’s role in misinforming and polarizing users through its recommendations systems. That researcher said Smith’s Facebook experience was “a barrage of extreme, conspiratorial, and graphic content.” Facebook’s Algorithm suggested users join groups which supported the QAnon movement (T0151.001: Social Media Platform, T0151.002: Online Community Group, T0153.006: Content Recommendation Algorithm, T0097.208: Social Cause Persona). 
Further investigation by Facebook uncovered that its advertising platform had been used to promote QAnon narratives (T0146: Account Asset, T0114: Deliver Ads, T0153.005: Online Advertising Platform): For years, company researchers had been running experiments like Carol Smith’s to gauge the platform’s hand in radicalizing users, according to the documents seen by NBC News. This internal work repeatedly found that recommendation tools pushed users into extremist groups, findings that helped inform policy changes and tweaks to recommendations and news feed rankings. Those rankings are a tentacled, ever-evolving system widely known as “the algorithm” that pushes content to users. But the research at that time stopped well short of inspiring any movement to change the groups and pages themselves. That reluctance was indicative of “a pattern at Facebook,” Haugen told reporters this month. “They want the shortest path between their current policies and any action.” [...] By summer 2020, Facebook was hosting thousands of private QAnon groups and pages, with millions of members and followers, according to an unreleased internal investigation. A year after the FBI designated QAnon as a potential domestic terrorist threat in the wake of standoffs, alleged planned kidnappings, harassment campaigns and shootings, Facebook labeled QAnon a “Violence Inciting Conspiracy Network” and banned it from the platform, along with militias and other violent social movements. A small team working across several of Facebook’s departments found its platforms had hosted hundreds of ads on Facebook and Instagram worth thousands of dollars and millions of views, “praising, supporting, or representing” the conspiracy theory. [...] For many employees inside Facebook, the enforcement came too late, according to posts left on Workplace, the company’s internal message board. 
“We’ve known for over a year now that our recommendation systems can very quickly lead users down the path to conspiracy theories and groups,” one integrity researcher, whose name had been redacted, wrote in a post announcing she was leaving the company. “This fringe group has grown to national prominence, with QAnon congressional candidates and QAnon hashtags and groups trending in the mainstream. We were willing to act only * after * things had spiraled into a dire state.” While Facebook’s ban initially appeared effective, a problem remained: The removal of groups and pages didn’t wipe out QAnon’s most extreme followers, who continued to organize on the platform. “There was enough evidence to raise red flags in the expert community that Facebook and other platforms failed to address QAnon’s violent extremist dimension,” said Marc-André Argentino, a research fellow at King’s College London’s International Centre for the Study of Radicalisation, who has extensively studied QAnon. Believers simply rebranded as anti-child-trafficking groups or migrated to other communities, including those around the anti-vaccine movement. [...] These conspiracy groups had become the fastest-growing groups on Facebook, according to the report, but Facebook wasn’t able to control their “meteoric growth,” the researchers wrote, “because we were looking at each entity individually, rather than as a cohesive movement.” A Facebook spokesperson told BuzzFeed News it took many steps to limit election misinformation but that it was unable to catch everything. |
I00115 How Facebook shapes your feed | This 2021 report by The Washington Post explains the mechanics of Facebook’s algorithm (T0151.001: Social Media Platform, T0153.006: Content Recommendation Algorithm): In its early years, Facebook’s algorithm prioritized signals such as likes, clicks and comments to decide which posts to amplify. Publishers, brands and individual users soon learned how to craft posts and headlines designed to induce likes and clicks, giving rise to what came to be known as “clickbait.” By 2013, upstart publishers such as Upworthy and ViralNova were amassing tens of millions of readers with articles designed specifically to game Facebook’s news feed algorithm. Facebook realized that users were growing wary of misleading teaser headlines, and the company recalibrated its algorithm in 2014 and 2015 to downgrade clickbait and focus on new metrics, such as the amount of time a user spent reading a story or watching a video, and incorporating surveys on what content users found most valuable. Around the same time, its executives identified video as a business priority, and used the algorithm to boost “native” videos shared directly to Facebook. By the mid-2010s, the news feed had tilted toward slick, professionally produced content, especially videos that would hold people’s attention. In 2016, however, Facebook executives grew worried about a decline in “original sharing.” Users were spending so much time passively watching and reading that they weren’t interacting with each other as much. Young people in particular shifted their personal conversations to rivals such as Snapchat that offered more intimacy. Once again, Facebook found its answer in the algorithm: It developed a new set of goal metrics that it called “meaningful social interactions,” designed to show users more posts from friends and family, and fewer from big publishers and brands. 
In particular, the algorithm began to give outsize weight to posts that sparked lots of comments and replies. The downside of this approach was that the posts that sparked the most comments tended to be the ones that made people angry or offended them, the documents show. Facebook became an angrier, more polarizing place. It didn’t help that, starting in 2017, the algorithm had assigned reaction emoji — including the angry emoji — five times the weight of a simple “like,” according to company documents. [...] Internal documents show Facebook researchers found that, for the most politically oriented 1 million American users, nearly 90 percent of the content that Facebook shows them is about politics and social issues. Those groups also received the most misinformation, especially a set of users associated with mostly right-leaning content, who were shown one misinformation post out of every 40, according to a document from June 2020. One takeaway is that Facebook’s algorithm isn’t a runaway train. The company may not directly control what any given user posts, but by choosing which types of posts will be seen, it sculpts the information landscape according to its business priorities. Some within the company would like to see Facebook use the algorithm to explicitly promote certain values, such as democracy and civil discourse. Others have suggested that it develop and prioritize new metrics that align with users’ values, as with a 2020 experiment in which the algorithm was trained to predict what posts they would find “good for the world” and “bad for the world,” and optimize for the former. |
I00128 #TrollTracker: Outward Influence Operation From Iran | [Meta removed a network of assets for coordinated inauthentic behaviour. One page] in the network, @StopMEK, was promoting views against the People’s Mujahedin of Iran (MEK), the largest and most active political opposition group against the Islamic Republic of Iran Leadership. The content on the page drew narratives showing parallels between the Islamic State of Iraq and Syria (ISIS) and the MEK. Apart from images and memes, the @StopMEK page shared a link to an archived report on how the United States was monitoring the MEK’s movement in Iran in the mid-1990’s. The file was embedded as a QR code on one of the page’s images. In this example a Facebook page presented itself as focusing on a political cause (T0097.208: Social Cause Persona, T0151.001: Social Media Platform, T0151.002: Online Community Group). Within the page it embedded a QR code (T0122: Direct Users to Alternative Platforms, T0153.004: QR Code Asset), which took users to a document hosted on another website (T0152.004: Website Asset). |
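Incident I00115 above reports that, from 2017, Facebook's ranking assigned reaction emoji (including "angry") five times the weight of a "like" when scoring posts for amplification. A minimal sketch of that kind of weighted engagement scoring follows; the weight values for comments and shares, and the field names, are illustrative assumptions, not Facebook's actual ranking code.

```python
# Illustrative weights modelled on the reported 2017 change: reactions
# count five times a "like". Comment/share weights are assumptions.
REACTION_WEIGHTS = {"like": 1, "love": 5, "haha": 5, "wow": 5, "sad": 5, "angry": 5}

def engagement_score(post):
    """Score a post by summing weighted reactions, comments, and shares."""
    score = sum(REACTION_WEIGHTS.get(r, 1) * n
                for r, n in post["reactions"].items())
    score += 15 * post.get("comments", 0) + 30 * post.get("shares", 0)
    return score

def rank_feed(posts):
    """Order posts for a feed, highest engagement score first."""
    return sorted(posts, key=engagement_score, reverse=True)
```

The sketch shows the dynamic the incident describes: under such weighting, a post drawing ten angry reactions outranks one drawing ten likes, so content that provokes outrage is systematically amplified.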
Counters | Response types |
---|---|
DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW