Corrected columns for urls sheet, added back asset into technique names, and tidied up mapping of existing incidents to amended techniques

This commit is contained in:
Stephen Campbell 2024-11-21 11:50:30 -05:00
parent 964938bd15
commit 84f0700c2e
200 changed files with 625 additions and 701 deletions


@ -7,8 +7,6 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00033 China 50cent Army](../../generated_pages/incidents/I00033.md) | facilitate state propaganda and defuse crises |
| [I00034 DibaFacebookExpedition](../../generated_pages/incidents/I00034.md) | Netizens from one of the largest discussion forums in China, known as Diba, coordinated to overcome China's Great Firewall |


@ -7,17 +7,6 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00005 Brexit vote](../../generated_pages/incidents/I00005.md) | cultivate, manipulate, exploit useful idiots |
| [I00007 Incirlik terrorists](../../generated_pages/incidents/I00007.md) | cultivate, manipulate, exploit useful idiots (in the case Paul Manafort) |
| [I00010 ParklandTeens](../../generated_pages/incidents/I00010.md) | cultivate, manipulate, exploit useful idiots (Alex Jones... drives conspiracy theories; false flags, crisis actors) |
| [I00017 US presidential elections](../../generated_pages/incidents/I00017.md) | cultivate, manipulate, exploit useful idiots |
| [I00029 MH17 investigation](../../generated_pages/incidents/I00029.md) | cultivate, manipulate, exploit useful idiots |
| [I00032 Kavanaugh](../../generated_pages/incidents/I00032.md) | cultivate, manipulate, exploit useful idiots (Alex Jones... drives conspiracy theories) |
| [I00044 JadeHelm exercise](../../generated_pages/incidents/I00044.md) | cultivate, manipulate, exploit useful idiots (Alex Jones... drives conspiracy theories) |
| [I00049 White Helmets: Chemical Weapons](../../generated_pages/incidents/I00049.md) | cultivate, manipulate, exploit useful idiots (Roger Waters; Vanessa Beeley...) |
| [I00050 #HandsOffVenezuela](../../generated_pages/incidents/I00050.md) | cultivate, manipulate, exploit useful idiots (Roger Waters) |
| [I00051 Integrity Initiative](../../generated_pages/incidents/I00051.md) | cultivate, manipulate, exploit useful idiots |
| [I00063 Olympic Doping Scandal](../../generated_pages/incidents/I00063.md) | cultivate, manipulate, exploit useful idiots |


@ -7,7 +7,6 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00006 Columbian Chemicals](../../generated_pages/incidents/I00006.md) | Create and use hashtag |
| [I00086 #WeAreNotSafe Exposing How a Post-October 7th Disinformation Network Operates on Israeli Social Media](../../generated_pages/incidents/I00086.md) | In this report accounts were identified as part of “a sophisticated and extensive coordinated network orchestrating a disinformation campaign targeting Israeli digital spaces since October 7th, 2023”, which posted hashtags alongside campaign content (T0015: Create Hashtags and Search Artefacts):<br><br><i>“The accounts post generic images to fill their account feed to make the account seem real. They then employ a hidden hashtag in their posts, consisting of a seemingly random string of numbers and letters.<br><br>“The hypothesis regarding this tactic is that the group orchestrating these accounts utilizes these hashtags as a means of indexing them. This system likely serves a dual purpose: firstly, to keep track of the network's expansive network of accounts and unique posts, and secondly, to streamline the process of boosting engagement among these accounts. By searching for these specific, unique hashtags, the group can quickly locate posts from their network and engage with them using other fake accounts, thereby artificially inflating the visibility and perceived authenticity of the fake account.”</i> |


@ -7,7 +7,6 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00017 US presidential elections](../../generated_pages/incidents/I00017.md) | Click-bait (economic actors) fake news sites (ie: Denver Guardian; Macedonian teens) |
| [I00079 Three thousand fake tanks](../../generated_pages/incidents/I00079.md) | <i>“On January 4 [2017], however, the Donbas News International (DNI) agency, based in Donetsk, Ukraine, and (since September 2016) an official state media outlet of the unrecognized separatist Donetsk People's Republic, ran an article under the sensational headline, “US sends 3,600 tanks against Russia — massive NATO deployment under way.” DNI is run by Finnish exile Janus Putkonen, described by the Finnish national broadcaster, YLE, as a “Finnish info warrior”, and the first foreigner to be granted a Donetsk passport.<br><br>“The equally sensational opening paragraph ran, “The NATO war preparation against Russia, Operation Atlantic Resolve, is in full swing. 2,000 US tanks will be sent in coming days from Germany to Eastern Europe, and 1,600 US tanks is deployed to storage facilities in the Netherlands. At the same time, NATO countries are sending thousands of soldiers in to Russian borders.”<br><br>“The report is based around an obvious factual error, conflating the total number of vehicles with the actual number of tanks, and therefore multiplying the actual tank force 20 times over. For context, military website globalfirepower.com puts the total US tank force at 8,848. If the DNI story had been true, it would have meant sending 40% of all the US main battle tanks to Europe in one go.<br><br>“Could this have been an innocent mistake? The simple answer is “no”. The journalist who penned the story had a sufficient command of the details to be able to write, later in the same article, “In January, 26 tanks, 100 other vehicles and 120 containers will be transported by train to Lithuania. Germany will send the 122nd Infantry Battalion.” Yet the same author apparently believed, in the headline and first paragraph, that every single vehicle in Atlantic Resolve is a tank. To call this an innocent mistake is simply not plausible.<br><br>“The DNI story can only realistically be considered a deliberate fake designed to caricaturize and demonize NATO, the United States and Germany (tactfully referred to in the report as having “rolled over Eastern Europe in its war of extermination 75 years ago”) by grossly overstating the number of MBTs involved.”</i><br><br>This behaviour matches T0016: Create Clickbait because the person who wrote the story is shown to be aware of the fact that there were non-tank vehicles later in their story, but still chose to give the article a sensationalist headline claiming that all vehicles being sent were tanks. |


@ -7,8 +7,7 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00002 #VaccinateUS](../../generated_pages/incidents/I00002.md) | Promote "funding" campaign |
| [I00110 How COVID-19 conspiracists and extremists use crowdfunding platforms to fund their activities](../../generated_pages/incidents/I00110.md) | The EU Disinfo Lab produced a report into disinformation published on crowdfunding platforms:<br><br><i>More worrisome is the direct monetisation of disinformation happening on crowdfunding platforms: on Kickstarter, we found a user openly raising money for a documentary project suggesting that COVID-19 is a conspiracy.</i><br><br>A Kickstarter user attempted to use the platform to fund production of a documentary (T0017: Conduct Fundraising, T0087: Develop Video-Based Content, T0146: Account Asset, T0148.006: Crowdfunding Platform).<br><br><i>On Patreon, we found several instances of direct monetisation of COVID-19 disinformation, including posts promoting a device allegedly protecting against COVID-19 and 5G, as well as posts related to the “Plandemic” conspiracy video, which gained attention on YouTube before being removed by the platform.<br><br>We also found an account called “Stranger than fiction” entirely dedicated to disinformation, which openly states that their content was “Banned by screwtube and fakebook, our videos have been viewed over a billion times.”</i><br><br>The “Stranger than fiction” user presented itself as an alternative news source which had been banned from other platforms (T0146: Account Asset, T0097.202: News Outlet Persona, T0121.001: Bypass Content Blocking, T0152.012: Subscription Service Platform).<br><br><i>On the US-based crowdfunding platform IndieGogo, EU DisinfoLab found a successful crowdfunding campaign of €133.903 for a book called Revolution Q. This book, now also available on Amazon, claims to be “Written for both newcomers and long-time QAnon fans alike, this book is a treasure-trove of information designed to help everyone weather The Storm.”</i><br><br>An IndieGogo account was used to gather funds to produce a book on QAnon (T0017: Conduct Fundraising, T0085.005: Develop Book, T0146: Account Asset, T0148.006: Crowdfunding Platform), with the book later sold on Amazon marketplace (T0148.007: eCommerce Platform). |


@ -7,9 +7,6 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00002 #VaccinateUS](../../generated_pages/incidents/I00002.md) | buy FB targeted ads |
| [I00005 Brexit vote](../../generated_pages/incidents/I00005.md) | Targeted FB paid ads |
| [I00017 US presidential elections](../../generated_pages/incidents/I00017.md) | Targeted FB paid ads |
| [I00097 Report: Not Just Algorithms](../../generated_pages/incidents/I00097.md) | <i>This report explores the role of four systems (recommender systems, content moderation systems, ad approval systems and ad management systems) in creating risks around eating disorders.<br><br>[...]<br><br>Ad approval systems can create risks. We created 12 fake ads that promoted dangerous weight loss techniques and behaviours. We tested to see if these ads would be approved to run, and they were. This means dangerous behaviours can be promoted in paid-for advertising. (Requests to run ads were withdrawn after approval or rejection, so no dangerous advertising was published as a result of this experiment.)<br><br>Specifically: On TikTok, 100% of the ads were approved to run; On Facebook, 83% of the ads were approved to run; On Google, 75% of the ads were approved to run.<br><br>Ad management systems can create risks. We investigated how platforms allow advertisers to target users, and found that it is possible to target people who may be interested in pro-eating disorder content.<br><br>Specifically: On TikTok: End-users who interact with pro-eating disorder content on TikTok, download advertisers' eating disorder apps or visit their websites can be targeted; On Meta: End-users who interact with pro-eating disorder content on Meta, download advertisers' eating disorder apps or visit their websites can be targeted; On X: End-users who follow pro-eating disorder accounts, or look like them, can be targeted; On Google: End-users who search specific words or combinations of words (including pro-eating disorder words), watch pro-eating disorder YouTube channels and probably those who download eating disorder and mental health apps can be targeted.</i><br><br>Advertising platforms managed by TikTok, Facebook, and Google approved adverts to be displayed on their platforms. These platforms enabled users to deliver targeted advertising to potentially vulnerable platform users (T0018: Purchase Targeted Advertisements, T0153.005: Online Advertising Platform). |


@ -7,10 +7,6 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00010 ParklandTeens](../../generated_pages/incidents/I00010.md) | 4Chan/8Chan - trial content |
| [I00017 US presidential elections](../../generated_pages/incidents/I00017.md) | 4Chan/8Chan - trial content |
| [I00032 Kavanaugh](../../generated_pages/incidents/I00032.md) | 4Chan/8Chan - trial content |
| [I00044 JadeHelm exercise](../../generated_pages/incidents/I00044.md) | 4Chan/8Chan - trial content |


@ -7,7 +7,6 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00056 Iran Influence Operations](../../generated_pages/incidents/I00056.md) | Memes... anti-Israel/USA/West, conspiracy narratives |


@ -7,8 +7,6 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00047 Sea of Azov](../../generated_pages/incidents/I00047.md) | (Distort) Kremlin-controlled RT cited Russian Minister of Foreign Affairs Sergei Lavrov suggesting that Ukraine deliberately provoked Russia in hopes of gaining additional support from the United States and Europe. |
| [I00053 China Huawei CFO Arrest](../../generated_pages/incidents/I00053.md) | Distorted, saccharine “news” about the Chinese State and Party |
| [I00079 Three thousand fake tanks](../../generated_pages/incidents/I00079.md) | <i>“On January 4 [2017], a little-known news site based in Donetsk, Ukraine, published an article claiming that the United States was sending 3,600 tanks to Europe as part of “the NATO war preparation against Russia”.<br><br> “Like much fake news, this story started with a grain of truth: the US was about to reinforce its armored units in Europe. However, the article converted literally thousands of other vehicles — including hundreds of Humvees and trailers — into tanks, building the US force into something 20 times more powerful than it actually was.<br><br> “The story caught on online. Within three days it had been repeated by a dozen websites in the United States, Canada and Europe, and shared some 40,000 times. It was translated into Norwegian; quoted, unchallenged, by Russian state news agency RIA Novosti; and spread among Russian-language websites.<br><br> “It was also an obvious fake, as any Google news search would have revealed. Yet despite its evident falsehood, it spread widely, and not just in directly Kremlin-run media. Tracking the spread of this fake therefore shines a light on the wider question of how fake stories are dispersed.”</i><br><br> Russian state news agency RIA Novosti presents themselves as a news outlet (T0097.202: News Outlet Persona). RIA Novosti is a real news outlet (T0143.001: Authentic Persona), but it did not carry out the basic investigation into the veracity of the narrative they published which is implicitly expected of institutions presenting themselves as news outlets.<br><br> We can't know how or why this narrative ended up being published by RIA Novosti, but we know that it presented a distorted reality as authentic information (T0023: Distort Facts), claiming that the US was sending 3,600 tanks, instead of 3,600 vehicles which included ~180 tanks. |


@ -7,8 +7,6 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00005 Brexit vote](../../generated_pages/incidents/I00005.md) | manipulate social media "online polls"? |
| [I00017 US presidential elections](../../generated_pages/incidents/I00017.md) | manipulate social media "online polls"? |


@ -7,9 +7,6 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00006 Columbian Chemicals](../../generated_pages/incidents/I00006.md) | bait journalists/media/politicians |
| [I00010 ParklandTeens](../../generated_pages/incidents/I00010.md) | journalist/media baiting |
| [I00015 ConcordDiscovery](../../generated_pages/incidents/I00015.md) | journalist/media baiting |


@ -7,8 +7,6 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00029 MH17 investigation](../../generated_pages/incidents/I00029.md) | Demand insurmountable proof |
| [I00047 Sea of Azov](../../generated_pages/incidents/I00047.md) | Demand insurmountable proof |


@ -7,7 +7,6 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00015 ConcordDiscovery](../../generated_pages/incidents/I00015.md) | Circulate to media via DM, then release publicly |


@ -7,7 +7,6 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00009 PhilippinesExpert](../../generated_pages/incidents/I00009.md) | Using "expert" |


@ -7,18 +7,6 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00002 #VaccinateUS](../../generated_pages/incidents/I00002.md) | SEO optimisation/manipulation ("key words") |
| [I00005 Brexit vote](../../generated_pages/incidents/I00005.md) | SEO optimisation/manipulation ("key words") |
| [I00010 ParklandTeens](../../generated_pages/incidents/I00010.md) | SEO optimisation/manipulation ("key words") |
| [I00017 US presidential elections](../../generated_pages/incidents/I00017.md) | SEO optimisation/manipulation ("key words") |
| [I00029 MH17 investigation](../../generated_pages/incidents/I00029.md) | SEO optimisation/manipulation ("key words") |
| [I00032 Kavanaugh](../../generated_pages/incidents/I00032.md) | SEO optimisation/manipulation ("key words") |
| [I00044 JadeHelm exercise](../../generated_pages/incidents/I00044.md) | SEO optimisation/manipulation ("key words") |
| [I00049 White Helmets: Chemical Weapons](../../generated_pages/incidents/I00049.md) | SEO optimisation/manipulation ("key words") |
| [I00050 #HandsOffVenezuela](../../generated_pages/incidents/I00050.md) | SEO optimisation/manipulation ("key words") |
| [I00051 Integrity Initiative](../../generated_pages/incidents/I00051.md) | SEO optimisation/manipulation ("key words") |
| [I00056 Iran Influence Operations](../../generated_pages/incidents/I00056.md) | SEO optimisation/manipulation ("key words") |
| [I00063 Olympic Doping Scandal](../../generated_pages/incidents/I00063.md) | SEO optimisation/manipulation ("key words") |


@ -7,7 +7,6 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00033 China 50cent Army](../../generated_pages/incidents/I00033.md) | cow online opinion leaders into submission, muzzling social media as a political force |


@ -7,7 +7,6 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00033 China 50cent Army](../../generated_pages/incidents/I00033.md) | cow online opinion leaders into submission, muzzling social media as a political force |
| [I00122 The Extreme Right on Discord](../../generated_pages/incidents/I00122.md) | Discord is an example of a T0151.004: Chat Platform, which allows users to create their own T0151.005: Chat Community Server. The Institute for Strategic Dialog (ISD) conducted an investigation into the extreme right's usage of Discord servers:<br><br><i>Discord is a free service accessible via phones and computers. It allows users to talk to each other in real time via voice, text or video chat and emerged in 2015 as a platform designed to assist gamers in communicating with each other while playing video games. The popularity of the platform has surged in recent years, and it is currently estimated to have 140 million monthly active users.<br><br>Chatrooms - known as servers - in the platform can be created by anyone, and they are used for a range of purposes that extend far beyond gaming. Such purposes include the discussion of extreme right-wing ideologies and the planning of offline extremist activity. Ahead of the far-right Unite the Right rally in Charlottesville, Virginia, in August 2017, organisers used Discord to plan and promote events and posted swastikas and praised Hitler in chat rooms with names like “National Socialist Army” and “Führer's Gas Chamber”.</i><br><br>In this example a Discord server was used to organise the 2017 Charlottesville Unite the Right rally. Chat rooms in the server were used to discuss different topics related to the rally (T0057: Organise Events, T0126.002: Facilitate Logistics or Support for Attendance, T0151.004: Chat Platform, T0151.005: Chat Community Server, T0151.006: Chat Room).<br><br><i>Another primary activity engaged in the servers analysed are raids against other servers associated with political opponents, and in particular those that appear to be pro-LGBTQ. Raids are a phenomenon in which a small group of users will join a Discord server with the sole purpose of spamming the host with offensive or incendiary messages and content with the aim of upsetting local users or having the host server banned by Discord. On two servers examined here, raiding was their primary function.<br><br>Among servers devoted to this activity, specific channels were often created to host links to servers that users were then encouraged to raid. Users are encouraged to be as offensive as possible with the aim of upsetting or angering users on the raided server, and channels often had content banks of offensive memes and content to be shared on raided servers.<br><br>The use of raids demonstrates the gamified nature of extremist activity on Discord, where use of the platform and harassment of political opponents is itself turned into a type of real-life video game designed to strengthen in-group affiliation. This combined with the broader extremist activity identified in these channels suggests that the combative activity of raiding could provide a pathway for younger people to become more engaged with extremist activity.</i><br><br>Discord servers were used by members of the extreme right to coordinate harassment of targeted communities (T0048: Harass, T0049.005: Conduct Swarming, T0151.004: Chat Platform, T0151.005: Chat Community Server). |
| [I00123 The Extreme Right on Steam](../../generated_pages/incidents/I00123.md) | ISD conducted an investigation into the usage of social groups on Steam. Steam is an online platform used to buy and sell digital games, and includes the Steam community feature, which “allows users to find friends and join groups and discussion forums, while also offering in-game voice and text chat”. Actors have used Steam's social capabilities to enable online harm campaigns:<br><br><i>One function of these Steam groups is the organisation of raids - coordinated trolling activity against their political opponents. An example of this can be seen in a white power music group sharing a link to an Israeli Steam group, encouraging other members to “help me raid this juden [German word for Jew] group”. The comments section of said target group show that neo-Nazi and antisemitic comments were consistently posted in the group just two minutes after the instruction had been posted in the extremist group, highlighting the swiftness with which racially motivated harassment can be directed online.</i><br><br>Threat actors used social groups on Steam to organise harassment of targets (T0152.009: Software Delivery Platform, T0151.002: Online Community Group, T0049.005: Conduct Swarming, T0048: Harass). |


@ -7,8 +7,6 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00033 China 50cent Army](../../generated_pages/incidents/I00033.md) | 2,000,000 people (est.) part of state run/sponsored astroturfing |
| [I00034 DibaFacebookExpedition](../../generated_pages/incidents/I00034.md) | flood the Facebook pages of Taiwanese politicians and news agencies with a pro-PRC message, Democratic Progressive Party (DPP), attracted nearly 40,000 Facebook comments in just eight hours. |


@ -7,10 +7,6 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00005 Brexit vote](../../generated_pages/incidents/I00005.md) | Digital to physical "organize+promote" rallies & events? |
| [I00017 US presidential elections](../../generated_pages/incidents/I00017.md) | Digital to physical "organize+promote" rallies & events |
| [I00032 Kavanaugh](../../generated_pages/incidents/I00032.md) | Digital to physical "organize+promote" rallies & events? |
| [I00053 China Huawei CFO Arrest](../../generated_pages/incidents/I00053.md) | Events coordinated and promoted across media platforms, Extend digital the physical space… gatherings ie: support for Meng outside courthouse |
| [I00122 The Extreme Right on Discord](../../generated_pages/incidents/I00122.md) | Discord is an example of a T0151.004: Chat Platform, which allows users to create their own T0151.005: Chat Community Server. The Institute for Strategic Dialog (ISD) conducted an investigation into the extreme right's usage of Discord servers:<br><br><i>Discord is a free service accessible via phones and computers. It allows users to talk to each other in real time via voice, text or video chat and emerged in 2015 as a platform designed to assist gamers in communicating with each other while playing video games. The popularity of the platform has surged in recent years, and it is currently estimated to have 140 million monthly active users.<br><br>Chatrooms - known as servers - in the platform can be created by anyone, and they are used for a range of purposes that extend far beyond gaming. Such purposes include the discussion of extreme right-wing ideologies and the planning of offline extremist activity. Ahead of the far-right Unite the Right rally in Charlottesville, Virginia, in August 2017, organisers used Discord to plan and promote events and posted swastikas and praised Hitler in chat rooms with names like “National Socialist Army” and “Führer's Gas Chamber”.</i><br><br>In this example a Discord server was used to organise the 2017 Charlottesville Unite the Right rally. Chat rooms in the server were used to discuss different topics related to the rally (T0057: Organise Events, T0126.002: Facilitate Logistics or Support for Attendance, T0151.004: Chat Platform, T0151.005: Chat Community Server, T0151.006: Chat Room).<br><br><i>Another primary activity engaged in the servers analysed are raids against other servers associated with political opponents, and in particular those that appear to be pro-LGBTQ. Raids are a phenomenon in which a small group of users will join a Discord server with the sole purpose of spamming the host with offensive or incendiary messages and content with the aim of upsetting local users or having the host server banned by Discord. On two servers examined here, raiding was their primary function.<br><br>Among servers devoted to this activity, specific channels were often created to host links to servers that users were then encouraged to raid. Users are encouraged to be as offensive as possible with the aim of upsetting or angering users on the raided server, and channels often had content banks of offensive memes and content to be shared on raided servers.<br><br>The use of raids demonstrates the gamified nature of extremist activity on Discord, where use of the platform and harassment of political opponents is itself turned into a type of real-life video game designed to strengthen in-group affiliation. This combined with the broader extremist activity identified in these channels suggests that the combative activity of raiding could provide a pathway for younger people to become more engaged with extremist activity.</i><br><br>Discord servers were used by members of the extreme right to coordinate harassment of targeted communities (T0048: Harass, T0049.005: Conduct Swarming, T0151.004: Chat Platform, T0151.005: Chat Community Server). |


@ -7,7 +7,7 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00109 Coordinated Facebook Pages Designed to Fund a White Supremacist Agenda](../../generated_pages/incidents/I00109.md) | This report examines the white nationalist group Suavelos' use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at technical indicators associated with the Suavelos website, and attributions which can be made as a consequence:<i><br><br>[The Google AdSense tag set up on Suavelos.eu was also found on the following domains, indicating that they are controlled by the same actor;] Alabastro.eu: an online shop to buy “white nationalists” t-shirts [and] ARPAC.eu: the website of a registered non-profit organisation advocating to lift regulation on gun control in France.<br><br>Other domains attributed to Suavelos (T0149.001: Domain Asset) reveal a website set up to sell merchandise (T0152.004: Website Asset, T0148.004: Payment Processing Capability, T0061: Sell Merchandise), and a website hosting a registered French non-profit (T0152.004: Website Asset, T0097.207: NGO Persona).<br><br>To learn more about the suavelos.eu domain, we collected the following data: The domain is hosted on OVH; The owner's identity is protected; The IP Address of the server is 94.23.253.173, which is shared with 20 other domains.<br><br>The relatively low number of websites hosted on this IP address could indicate that they all belong to the same people, and are hosted on the same private server.</i><br><br>Suavelos registered a domain using the web hosting provider OVH (T0149.001: Domain Asset, T0152.003: Website Hosting Platform, T0150.006: Purchased). The site's IP address reveals a server hosting other domains potentially owned by the actors (T0149.005: Server Asset, T0149.006: IP Address Asset). |

View file

@ -7,7 +7,7 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00106 Facebook Is Being Flooded With Gross AI-Generated Images of Hurricane Helene Devastation](../../generated_pages/incidents/I00106.md) | <i>As families desperately seek to find missing loved ones and communities grapple with immeasurable losses of both life and property in the wake of [2024's] Hurricane Helene, AI slop scammers appear to be capitalizing on the moment for personal gain.<br><br>A Facebook account called "Coastal Views" usually shares calmer AI imagery of nature-filled beachside scenes. The account's banner image showcases a signpost reading "OBX Live," OBX being shorthand for North Carolina's Outer Banks islands.<br><br>But starting this weekend, the account shifted its approach dramatically, as first flagged by a social media user on X.<br><br>Instead of posting "photos" of leaping dolphins and sandy beaches, the account suddenly started publishing images of flooded mountain neighborhoods, submerged houses, and dogs sitting on top of roofs.<br><br>But instead of spreading vital information to those affected by the natural disaster, or at the very least sharing real photos of the destruction, the account is seemingly trying to use AI to cash in on all the attention the hurricane has been getting.<br><br>The account links to an Etsy page for a business called "OuterBanks2023," where somebody who goes by "Alexandr" sells AI-generated prints of horses touching snouts with sea turtles, Santa running down the shoreline with a reindeer, and sunsets over ocean waves.</i><br><br>A Facebook page which presented itself as being associated with North Carolina, and which posted AI generated images, changed to posting AI generated images of hurricane damage after Hurricane Helene hit North Carolina (T0151.003: Online Community Page, T0151.001: Social Media Platform, T0115: Post Content, T0086.002: Develop AI-Generated Images (Deepfakes), T0068: Respond to Breaking News Event or Active Crisis). 
<br><br>The account included links (T0122: Direct Users to Alternative Platforms) to an account on Etsy, which sold prints of AI generated images (T0146: Account, T0148.007: eCommerce Platform). |
| [I00106 Facebook Is Being Flooded With Gross AI-Generated Images of Hurricane Helene Devastation](../../generated_pages/incidents/I00106.md) | <i>As families desperately seek to find missing loved ones and communities grapple with immeasurable losses of both life and property in the wake of [2024's] Hurricane Helene, AI slop scammers appear to be capitalizing on the moment for personal gain.<br><br>A Facebook account called "Coastal Views" usually shares calmer AI imagery of nature-filled beachside scenes. The account's banner image showcases a signpost reading "OBX Live," OBX being shorthand for North Carolina's Outer Banks islands.<br><br>But starting this weekend, the account shifted its approach dramatically, as first flagged by a social media user on X.<br><br>Instead of posting "photos" of leaping dolphins and sandy beaches, the account suddenly started publishing images of flooded mountain neighborhoods, submerged houses, and dogs sitting on top of roofs.<br><br>But instead of spreading vital information to those affected by the natural disaster, or at the very least sharing real photos of the destruction, the account is seemingly trying to use AI to cash in on all the attention the hurricane has been getting.<br><br>The account links to an Etsy page for a business called "OuterBanks2023," where somebody who goes by "Alexandr" sells AI-generated prints of horses touching snouts with sea turtles, Santa running down the shoreline with a reindeer, and sunsets over ocean waves.</i><br><br>A Facebook page which presented itself as being associated with North Carolina, and which posted AI generated images, changed to posting AI generated images of hurricane damage after Hurricane Helene hit North Carolina (T0151.003: Online Community Page, T0151.001: Social Media Platform, T0115: Post Content, T0086.002: Develop AI-Generated Images (Deepfakes), T0068: Respond to Breaking News Event or Active Crisis). 
<br><br>The account included links (T0122: Direct Users to Alternative Platforms) to an account on Etsy, which sold prints of AI generated images (T0146: Account Asset, T0148.007: eCommerce Platform). |

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View file

@ -7,7 +7,7 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00110 How COVID-19 conspiracists and extremists use crowdfunding platforms to fund their activities](../../generated_pages/incidents/I00110.md) | The EU Disinfo Lab produced a report into disinformation published on crowdfunding platforms:<br><br><i>More worrisome is the direct monetisation of disinformation happening on crowdfunding platforms: on Kickstarter, we found a user openly raising money for a documentary project suggesting that COVID-19 is a conspiracy.</i><br><br>A Kickstarter user attempted to use the platform to fund production of a documentary (T0017: Conduct Fundraising, T0087: Develop Video-Based Content, T0146: Account, T0148.006: Crowdfunding Platform).<br><br><i>On Patreon, we found several instances of direct monetisation of COVID-19 disinformation, including posts promoting a device allegedly protecting against COVID-19 and 5G, as well as posts related to the “Plandemic” conspiracy video, which gained attention on YouTube before being removed by the platform.<br><br>We also found an account called “Stranger than fiction” entirely dedicated to disinformation, which openly states that their content was “Banned by screwtube and fakebook, our videos have been viewed over a billion times.”</i><br><br>The “Stranger than fiction” user presented itself as an alternative news source which had been banned from other platforms (T0146: Account, T0097.202: News Outlet Persona, T0121.001: Bypass Content Blocking, T0152.012: Subscription Service Platform).<br><br><i>On the US-based crowdfunding platform IndieGogo, EU DisinfoLab found a successful crowdfunding campaign of €133.903 for a book called Revolution Q. 
This book, now also available on Amazon, claims to be “Written for both newcomers and long-time QAnon fans alike, this book is a treasure-trove of information designed to help everyone weather The Storm.”</i><br><br>An IndieGogo account was used to gather funds to produce a book on QAnon (T0017: Conduct Fundraising, T0085.005: Develop Book, T0146: Account, T0148.006: Crowdfunding Platform), with the book later sold on Amazon marketplace (T0148.007: eCommerce Platform). |
| [I00110 How COVID-19 conspiracists and extremists use crowdfunding platforms to fund their activities](../../generated_pages/incidents/I00110.md) | The EU Disinfo Lab produced a report into disinformation published on crowdfunding platforms:<br><br><i>More worrisome is the direct monetisation of disinformation happening on crowdfunding platforms: on Kickstarter, we found a user openly raising money for a documentary project suggesting that COVID-19 is a conspiracy.</i><br><br>A Kickstarter user attempted to use the platform to fund production of a documentary (T0017: Conduct Fundraising, T0087: Develop Video-Based Content, T0146: Account Asset, T0148.006: Crowdfunding Platform).<br><br><i>On Patreon, we found several instances of direct monetisation of COVID-19 disinformation, including posts promoting a device allegedly protecting against COVID-19 and 5G, as well as posts related to the “Plandemic” conspiracy video, which gained attention on YouTube before being removed by the platform.<br><br>We also found an account called “Stranger than fiction” entirely dedicated to disinformation, which openly states that their content was “Banned by screwtube and fakebook, our videos have been viewed over a billion times.”</i><br><br>The “Stranger than fiction” user presented itself as an alternative news source which had been banned from other platforms (T0146: Account Asset, T0097.202: News Outlet Persona, T0121.001: Bypass Content Blocking, T0152.012: Subscription Service Platform).<br><br><i>On the US-based crowdfunding platform IndieGogo, EU DisinfoLab found a successful crowdfunding campaign of €133.903 for a book called Revolution Q. 
This book, now also available on Amazon, claims to be “Written for both newcomers and long-time QAnon fans alike, this book is a treasure-trove of information designed to help everyone weather The Storm.”</i><br><br>An IndieGogo account was used to gather funds to produce a book on QAnon (T0017: Conduct Fundraising, T0085.005: Develop Book, T0146: Account Asset, T0148.006: Crowdfunding Platform), with the book later sold on Amazon marketplace (T0148.007: eCommerce Platform). |

View file

@ -7,7 +7,7 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00111 Patreon Is Bankrolling Climate Change Deniers While We All Burn](../../generated_pages/incidents/I00111.md) | In this article VICE News discusses a report produced by Advance Democracy on people who use Patreon to spread the false claim that an impending ice age will reverse the harms of the ongoing climate crisis:<br><br><i>“The spread of climate misinformation is prolific on social media, as well as on sites like Patreon, where users are actually financially compensated through the platform for spreading falsehoods,” Daniel Jones, president of Advance Democracy, told VICE News.<br><br>“Companies hosting and promoting climate misinformation have a responsibility to take action to reduce dangerous misinformation, as falsehoods about climate science are every bit as dangerous as lies about vaccinations and disinformation about our elections.”<br><br>Patreon did not respond to VICE News' request for comment on the report's findings.<br><br>One of the biggest accounts spreading climate conspiracies is ADAPT 2030, which is run by David DuByne, who has 1,100 followers on Patreon. He is currently making over $3,500 every month from his subscribers.<br><br>[The science DuByne relies on does not support his hypothesis. 
However,] this has not prevented DuByne and many others from preying on people's fears about climate change to spread conspiracies about an impending ice age, which they say will miraculously fix all of earth's climate problems.<br><br>DuByne offers seven different membership levels for supporters, beginning at just $1 per month.<br><br>The most expensive costs $100 a month, and gives patrons “a private 20-minute call with David DuByne once per month, to discuss your particular preparedness issues or concerns.” So far just two people are paying this amount.<br><br>The researchers also found at least eight other accounts on Patreon that have spread climate change conspiracy theories as part of wider conspiracy sharing, including baseless claims about COVID-19 and the legitimacy of Joe Biden's presidency. Some of these accounts are earning over $600 per month.</i><br><br>David DuByne created an account on Patreon, which he uses to post text, videos, and podcasts for his subscribers to discuss (T0085: Develop Text-Based Content, T0087: Develop Video-Based Content, T0088: Develop Audio-Based Content, T0146: Account, T0115: Post Content, T0152.012: Subscription Service Platform, T0151.014: Comments Section, T0155.006: Subscription Access). |
| [I00111 Patreon Is Bankrolling Climate Change Deniers While We All Burn](../../generated_pages/incidents/I00111.md) | In this article VICE News discusses a report produced by Advance Democracy on people who use Patreon to spread the false claim that an impending ice age will reverse the harms of the ongoing climate crisis:<br><br><i>“The spread of climate misinformation is prolific on social media, as well as on sites like Patreon, where users are actually financially compensated through the platform for spreading falsehoods,” Daniel Jones, president of Advance Democracy, told VICE News.<br><br>“Companies hosting and promoting climate misinformation have a responsibility to take action to reduce dangerous misinformation, as falsehoods about climate science are every bit as dangerous as lies about vaccinations and disinformation about our elections.”<br><br>Patreon did not respond to VICE News' request for comment on the report's findings.<br><br>One of the biggest accounts spreading climate conspiracies is ADAPT 2030, which is run by David DuByne, who has 1,100 followers on Patreon. He is currently making over $3,500 every month from his subscribers.<br><br>[The science DuByne relies on does not support his hypothesis. 
However,] this has not prevented DuByne and many others from preying on people's fears about climate change to spread conspiracies about an impending ice age, which they say will miraculously fix all of earth's climate problems.<br><br>DuByne offers seven different membership levels for supporters, beginning at just $1 per month.<br><br>The most expensive costs $100 a month, and gives patrons “a private 20-minute call with David DuByne once per month, to discuss your particular preparedness issues or concerns.” So far just two people are paying this amount.<br><br>The researchers also found at least eight other accounts on Patreon that have spread climate change conspiracy theories as part of wider conspiracy sharing, including baseless claims about COVID-19 and the legitimacy of Joe Biden's presidency. Some of these accounts are earning over $600 per month.</i><br><br>David DuByne created an account on Patreon, which he uses to post text, videos, and podcasts for his subscribers to discuss (T0085: Develop Text-Based Content, T0087: Develop Video-Based Content, T0088: Develop Audio-Based Content, T0146: Account Asset, T0115: Post Content, T0152.012: Subscription Service Platform, T0151.014: Comments Section, T0155.006: Subscription Access Asset). |

View file

@ -7,9 +7,9 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00099 More Women Are Facing The Reality Of Deepfakes, And They're Ruining Lives](../../generated_pages/incidents/I00099.md) | <i>Last October, British writer Helen was alerted to a series of deepfakes on a porn site that appear to show her engaging in extreme acts of sexual violence. That night, the images replayed themselves over and over in horrific nightmares and she was gripped by an all-consuming feeling of dread. “It's like you're in a tunnel, going further and further into this enclosed space, where there's no light,” she tells Vogue. This feeling pervaded Helen's life. Whenever she left the house, she felt exposed. On runs, she experienced panic attacks. Helen still has no idea who did this to her.<br><br>[...]<br><br>Amnesty International has been investigating the effects of abuse against women on Twitter, specifically in relation to how they act online thereafter. According to the charity, abuse creates what they've called “the silencing effect” whereby women feel discouraged from participating online. The same can be said for victims of deepfakes.<br><br>Helen has never been afraid to use her voice, writing deeply personal accounts of postnatal depression. But the deepfakes created a feeling of shame so strong she thought she'd be carrying this “dirty secret” forever, and so she stopped writing.<br><br>[...]<br><br>Meanwhile, deepfake communities are thriving. There are now dedicated sites, user-friendly apps and organised request procedures. Some sites allow you to commission custom deepfakes for £25, while on others you can upload a woman's image and a bot will strip her naked.<br><br>“This violation is not something that should be normalised,” says Gibi, an ASMR artist with 3.13 million YouTube subscribers. Gibi has given up trying to keep tabs on the deepfakes of her. 
For Gibi, the most egregious part of all of this is the fact that people are “profiting off my face, doing something that I didn't consent to, like my suffering is your livelihood.” She's even been approached by a company offering to remove the deepfakes — for £500 a video. This has to end. But how?</i><br><br>A website hosting pornographic content provided users the ability to create deepfake content (T0154.002: AI Media Platform, T0086.002: Develop AI-Generated Images (Deepfakes)). <br><br>Another website enabled users to commission custom deepfakes (T0152.004: Website, T0148.004: Payment Processing Capability, T0086.002: Develop AI-Generated Images (Deepfakes), T0155.005: Paid Access). |
| [I00099 More Women Are Facing The Reality Of Deepfakes, And They're Ruining Lives](../../generated_pages/incidents/I00099.md) | <i>Last October, British writer Helen was alerted to a series of deepfakes on a porn site that appear to show her engaging in extreme acts of sexual violence. That night, the images replayed themselves over and over in horrific nightmares and she was gripped by an all-consuming feeling of dread. “It's like you're in a tunnel, going further and further into this enclosed space, where there's no light,” she tells Vogue. This feeling pervaded Helen's life. Whenever she left the house, she felt exposed. On runs, she experienced panic attacks. Helen still has no idea who did this to her.<br><br>[...]<br><br>Amnesty International has been investigating the effects of abuse against women on Twitter, specifically in relation to how they act online thereafter. According to the charity, abuse creates what they've called “the silencing effect” whereby women feel discouraged from participating online. The same can be said for victims of deepfakes.<br><br>Helen has never been afraid to use her voice, writing deeply personal accounts of postnatal depression. But the deepfakes created a feeling of shame so strong she thought she'd be carrying this “dirty secret” forever, and so she stopped writing.<br><br>[...]<br><br>Meanwhile, deepfake communities are thriving. There are now dedicated sites, user-friendly apps and organised request procedures. Some sites allow you to commission custom deepfakes for £25, while on others you can upload a woman's image and a bot will strip her naked.<br><br>“This violation is not something that should be normalised,” says Gibi, an ASMR artist with 3.13 million YouTube subscribers. Gibi has given up trying to keep tabs on the deepfakes of her. 
For Gibi, the most egregious part of all of this is the fact that people are “profiting off my face, doing something that I didn't consent to, like my suffering is your livelihood.” She's even been approached by a company offering to remove the deepfakes — for £500 a video. This has to end. But how?</i><br><br>A website hosting pornographic content provided users the ability to create deepfake content (T0154.002: AI Media Platform, T0086.002: Develop AI-Generated Images (Deepfakes)). <br><br>Another website enabled users to commission custom deepfakes (T0152.004: Website Asset, T0148.004: Payment Processing Capability, T0086.002: Develop AI-Generated Images (Deepfakes), T0155.005: Paid Access Asset). |
| [I00100 Why ThisPersonDoesNotExist (and its copycats) need to be restricted](../../generated_pages/incidents/I00100.md) | <i>You might have heard about the recent viral sensation, ThisPersonDoesNotExist.com, a website, launched two weeks ago, that uses Nvidia's publicly available artificial intelligence technology to draw an invented, photo-realistic human being with each refresh. The tech is impressive and artistically evocative. It's also irresponsible and needs to be restricted immediately.<br><br>[...]<br><br>Prior to this technology, scammers faced three major risks when using fake photos. Each of these risks had the potential to put them out of business, or in jail.<br><br>Risk #1: Someone recognizes the photo. While the odds of this are long-shot, it does happen.<br><br>Risk #2: Someone reverse image searches the photo with a service like TinEye or Google Image Search and finds that it's been posted elsewhere. Reverse image search is one of the top anti-fraud measures recommended by consumer protection advocates.<br><br>Risk #3: If the crime is successful, law enforcement uses the fake photo to figure out the scammer's identity after the fact. Perhaps the scammer used an old classmate's photo. Perhaps their personal account follows the Instagram member they pilfered. And so on: people make mistakes.<br><br>The problem with AI-generated photos is that they carry none of these risks. No one will recognize a human who's never existed before. Google Image Search will return 0 results, possibly instilling a false sense of security in the searcher. And AI-generated photos don't give law enforcement much to work with.</i><br><br>ThisPersonDoesNotExist is an online platform which, when visited, produces AI generated images of people's faces (T0146.006: Open Access Platform, T0154.002: AI Media Platform, T0086.002: Develop AI-Generated Images (Deepfakes)). |
| [I00106 Facebook Is Being Flooded With Gross AI-Generated Images of Hurricane Helene Devastation](../../generated_pages/incidents/I00106.md) | <i>As families desperately seek to find missing loved ones and communities grapple with immeasurable losses of both life and property in the wake of [2024's] Hurricane Helene, AI slop scammers appear to be capitalizing on the moment for personal gain.<br><br>A Facebook account called "Coastal Views" usually shares calmer AI imagery of nature-filled beachside scenes. The account's banner image showcases a signpost reading "OBX Live," OBX being shorthand for North Carolina's Outer Banks islands.<br><br>But starting this weekend, the account shifted its approach dramatically, as first flagged by a social media user on X.<br><br>Instead of posting "photos" of leaping dolphins and sandy beaches, the account suddenly started publishing images of flooded mountain neighborhoods, submerged houses, and dogs sitting on top of roofs.<br><br>But instead of spreading vital information to those affected by the natural disaster, or at the very least sharing real photos of the destruction, the account is seemingly trying to use AI to cash in on all the attention the hurricane has been getting.<br><br>The account links to an Etsy page for a business called "OuterBanks2023," where somebody who goes by "Alexandr" sells AI-generated prints of horses touching snouts with sea turtles, Santa running down the shoreline with a reindeer, and sunsets over ocean waves.</i><br><br>A Facebook page which presented itself as being associated with North Carolina, and which posted AI generated images, changed to posting AI generated images of hurricane damage after Hurricane Helene hit North Carolina (T0151.003: Online Community Page, T0151.001: Social Media Platform, T0115: Post Content, T0086.002: Develop AI-Generated Images (Deepfakes), T0068: Respond to Breaking News Event or Active Crisis). 
<br><br>The account included links (T0122: Direct Users to Alternative Platforms) to an account on Etsy, which sold prints of AI generated images (T0146: Account, T0148.007: eCommerce Platform). |
| [I00106 Facebook Is Being Flooded With Gross AI-Generated Images of Hurricane Helene Devastation](../../generated_pages/incidents/I00106.md) | <i>As families desperately seek to find missing loved ones and communities grapple with immeasurable losses of both life and property in the wake of [2024's] Hurricane Helene, AI slop scammers appear to be capitalizing on the moment for personal gain.<br><br>A Facebook account called "Coastal Views" usually shares calmer AI imagery of nature-filled beachside scenes. The account's banner image showcases a signpost reading "OBX Live," OBX being shorthand for North Carolina's Outer Banks islands.<br><br>But starting this weekend, the account shifted its approach dramatically, as first flagged by a social media user on X.<br><br>Instead of posting "photos" of leaping dolphins and sandy beaches, the account suddenly started publishing images of flooded mountain neighborhoods, submerged houses, and dogs sitting on top of roofs.<br><br>But instead of spreading vital information to those affected by the natural disaster, or at the very least sharing real photos of the destruction, the account is seemingly trying to use AI to cash in on all the attention the hurricane has been getting.<br><br>The account links to an Etsy page for a business called "OuterBanks2023," where somebody who goes by "Alexandr" sells AI-generated prints of horses touching snouts with sea turtles, Santa running down the shoreline with a reindeer, and sunsets over ocean waves.</i><br><br>A Facebook page which presented itself as being associated with North Carolina, and which posted AI generated images, changed to posting AI generated images of hurricane damage after Hurricane Helene hit North Carolina (T0151.003: Online Community Page, T0151.001: Social Media Platform, T0115: Post Content, T0086.002: Develop AI-Generated Images (Deepfakes), T0068: Respond to Breaking News Event or Active Crisis). 
<br><br>The account included links (T0122: Direct Users to Alternative Platforms) to an account on Etsy, which sold prints of AI generated images (T0146: Account Asset, T0148.007: eCommerce Platform). |
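The I00100 row above explains why reverse image search fails against freshly generated faces: services of that kind match near-duplicate images, typically via perceptual hashing, and a never-before-published image has no prior hash to match. A toy sketch of one such scheme (an "average hash" over small grayscale grids; the 4x4 "images" below are fabricated for illustration, and real services use far more sophisticated matching) shows how a lightly re-encoded copy of a stolen photo still matches while an unrelated image does not:

```python
def average_hash(pixels: list[list[int]]) -> int:
    """Perceptual 'average hash' of a small grayscale grid:
    each bit records whether a pixel is above the mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Count of differing bits; a small distance means near-duplicate images."""
    return bin(a ^ b).count("1")

# Toy 4x4 grids: an original photo, a slightly re-encoded copy, an unrelated image.
photo = [[10, 20, 200, 210], [15, 25, 205, 215],
         [12, 22, 202, 212], [18, 28, 208, 218]]
copy_ = [[11, 21, 199, 209], [16, 24, 204, 214],
         [13, 23, 201, 211], [19, 29, 207, 217]]
other = [[200, 10, 200, 10], [10, 200, 10, 200],
         [200, 10, 200, 10], [10, 200, 10, 200]]

print(hamming(average_hash(photo), average_hash(copy_)))  # 0: re-encoded copy still matches
print(hamming(average_hash(photo), average_hash(other)))  # 8: unrelated image does not
```

An AI-generated face behaves like `other` against every previously indexed image, which is exactly the "0 results" false sense of security the quoted article warns about.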

View file

@ -7,7 +7,7 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00096 China ramps up use of AI misinformation](../../generated_pages/incidents/I00096.md) | The Microsoft Threat Analysis Centre (MTAC) published a report documenting the use of AI by pro-Chinese threat actors:<br><br><i>On 13 January, Spamouflage [(a Pro-Chinese Communist Party actor)] posted audio clips to YouTube of independent candidate [for Taiwan's Jan 2024 presidential election] Terry Gou – who also founded electronics giant Foxconn – in which Gou endorsed another candidate in the race. This clip was almost certainly AI-generated, and it was swiftly removed by YouTube. A fake letter purporting to be from Gou, endorsing the same candidate, had already circulated – Gou had of course made no such endorsement.</i><br><br>Here Spamouflage used an account on YouTube to post AI Generated audio impersonating an electoral candidate (T0146: Account, T0152.006: Video Platform, T0115: Post Content, T0088.001: Develop AI-Generated Audio (Deepfakes), T0143.003: Impersonated Persona, T0097.110: Party Official Persona).<br><br><i>Spamouflage also exploited AI-powered video platform CapCut – which is owned by TikTok backers ByteDance – to generate fake news anchors which were used in a variety of campaigns targeting the various presidential candidates in Taiwan.</i><br><br>Spamouflage created accounts on CapCut, which it used to create AI-generated videos of fabricated news anchors (T0146: Account, T0154.002: AI Media Platform, T0087.001: Develop AI-Generated Video (Deepfakes), T0143.002: Fabricated Persona, T0097.102: Journalist Persona). |
| [I00096 China ramps up use of AI misinformation](../../generated_pages/incidents/I00096.md) | The Microsoft Threat Analysis Centre (MTAC) published a report documenting the use of AI by pro-Chinese threat actors:<br><br><i>On 13 January, Spamouflage [(a Pro-Chinese Communist Party actor)] posted audio clips to YouTube of independent candidate [for Taiwan's Jan 2024 presidential election] Terry Gou – who also founded electronics giant Foxconn – in which Gou endorsed another candidate in the race. This clip was almost certainly AI-generated, and it was swiftly removed by YouTube. A fake letter purporting to be from Gou, endorsing the same candidate, had already circulated – Gou had of course made no such endorsement.</i><br><br>Here Spamouflage used an account on YouTube to post AI Generated audio impersonating an electoral candidate (T0146: Account Asset, T0152.006: Video Platform, T0115: Post Content, T0088.001: Develop AI-Generated Audio (Deepfakes), T0143.003: Impersonated Persona, T0097.110: Party Official Persona).<br><br><i>Spamouflage also exploited AI-powered video platform CapCut – which is owned by TikTok backers ByteDance – to generate fake news anchors which were used in a variety of campaigns targeting the various presidential candidates in Taiwan.</i><br><br>Spamouflage created accounts on CapCut, which it used to create AI-generated videos of fabricated news anchors (T0146: Account Asset, T0154.002: AI Media Platform, T0087.001: Develop AI-Generated Video (Deepfakes), T0143.002: Fabricated Persona, T0097.102: Journalist Persona). |

View file

@ -7,9 +7,9 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00110 How COVID-19 conspiracists and extremists use crowdfunding platforms to fund their activities](../../generated_pages/incidents/I00110.md) | The EU Disinfo Lab produced a report into disinformation published on crowdfunding platforms:<br><br><i>More worrisome is the direct monetisation of disinformation happening on crowdfunding platforms: on Kickstarter, we found a user openly raising money for a documentary project suggesting that COVID-19 is a conspiracy.</i><br><br>A Kickstarter user attempted to use the platform to fund production of a documentary (T0017: Conduct Fundraising, T0087: Develop Video-Based Content, T0146: Account, T0148.006: Crowdfunding Platform).<br><br><i>On Patreon, we found several instances of direct monetisation of COVID-19 disinformation, including posts promoting a device allegedly protecting against COVID-19 and 5G, as well as posts related to the “Plandemic” conspiracy video, which gained attention on YouTube before being removed by the platform.<br><br>We also found an account called “Stranger than fiction” entirely dedicated to disinformation, which openly states that their content was “Banned by screwtube and fakebook, our videos have been viewed over a billion times.”</i><br><br>The “Stranger than fiction” user presented itself as an alternative news source which had been banned from other platforms (T0146: Account, T0097.202: News Outlet Persona, T0121.001: Bypass Content Blocking, T0152.012: Subscription Service Platform).<br><br><i>On the US-based crowdfunding platform IndieGogo, EU DisinfoLab found a successful crowdfunding campaign of €133.903 for a book called Revolution Q. 
This book, now also available on Amazon, claims to be “Written for both newcomers and long-time QAnon fans alike, this book is a treasure-trove of information designed to help everyone weather The Storm.”</i><br><br>An IndieGogo account was used to gather funds to produce a book on QAnon (T0017: Conduct Fundraising, T0085.005: Develop Book, T0146: Account, T0148.006: Crowdfunding Platform), with the book later sold on Amazon marketplace (T0148.007: eCommerce Platform). |
| [I00111 Patreon Is Bankrolling Climate Change Deniers While We All Burn](../../generated_pages/incidents/I00111.md) | In this article VICE News discusses a report produced by Advance Democracy on people who use Patreon to spread the false claim that an impending ice age will reverse the harms of the ongoing climate crisis:<br><br><i>“The spread of climate misinformation is prolific on social media, as well as on sites like Patreon, where users are actually financially compensated through the platform for spreading falsehoods,” Daniel Jones, president of Advance Democracy, told VICE News.<br><br>“Companies hosting and promoting climate misinformation have a responsibility to take action to reduce dangerous misinformation, as falsehoods about climate science are every bit as dangerous as lies about vaccinations and disinformation about our elections.”<br><br>Patreon did not respond to VICE News' request for comment on the report's findings.<br><br>One of the biggest accounts spreading climate conspiracies is ADAPT 2030, which is run by David DuByne, who has 1,100 followers on Patreon. He is currently making over $3,500 every month from his subscribers.<br><br>[The science DuByne relies on does not support his hypothesis. 
However,] this has not prevented DuByne and many others from preying on people's fears about climate change to spread conspiracies about an impending ice age, which they say will miraculously fix all of earth's climate problems.<br><br>DuByne offers seven different membership levels for supporters, beginning at just $1 per month.<br><br>The most expensive costs $100 a month, and gives patrons “a private 20-minute call with David DuByne once per month, to discuss your particular preparedness issues or concerns.” So far just two people are paying this amount.<br><br>The researchers also found at least eight other accounts on Patreon that have spread climate change conspiracy theories as part of wider conspiracy sharing, including baseless claims about COVID-19 and the legitimacy of Joe Biden's presidency. Some of these accounts are earning over $600 per month.</i><br><br>David DuByne created an account on Patreon, which he uses to post text, videos, and podcasts for his subscribers to discuss (T0085: Develop Text-Based Content, T0087: Develop Video-Based Content, T0088: Develop Audio-Based Content, T0146: Account, T0115: Post Content, T0152.012: Subscription Service Platform, T0151.014: Comments Section, T0155.006: Subscription Access). |
| [I00125 The Agency](../../generated_pages/incidents/I00125.md) | In 2014 threat actors attributed to Russia spread the false narrative that a local chemical plant had leaked toxic fumes. This report discusses aspects of the operation:<br><br><i>[The chemical plant leak] hoax was just one in a wave of similar attacks during the second half of last year. On Dec. 13, two months after a handful of Ebola cases in the United States touched off a minor media panic, many of the same Twitter accounts used to spread the Columbian Chemicals hoax began to post about an outbreak of Ebola in Atlanta. [...] Again, the attention to detail was remarkable, suggesting a tremendous amount of effort. A YouTube video showed a team of hazmat-suited medical workers transporting a victim from the airport. Beyoncé's recent single “7/11” played in the background, an apparent attempt to establish the video's contemporaneity. A truck in the parking lot sported the logo of the Hartsfield-Jackson Atlanta International Airport.</i><br><br>Accounts which previously presented as Louisiana locals were repurposed for use in a different campaign, this time presenting as locals to Atlanta, a place over 500 miles away from Louisiana and in a different timezone (T0146: Account, T0097.101: Local Persona, T0143.002: Fabricated Persona, T0151.008: Microblogging Platform, T0150.004: Repurposed). <br><br>A video was created which appeared to support the campaign's narrative (T0087: Develop Video-Based Content), with great attention given to small details which made the video appear more legitimate. |
| [I00110 How COVID-19 conspiracists and extremists use crowdfunding platforms to fund their activities](../../generated_pages/incidents/I00110.md) | The EU DisinfoLab produced a report into disinformation published on crowdfunding platforms:<br><br><i>More worrisome is the direct monetisation of disinformation happening on crowdfunding platforms: on Kickstarter, we found a user openly raising money for a documentary project suggesting that COVID-19 is a conspiracy.</i><br><br>A Kickstarter user attempted to use the platform to fund production of a documentary (T0017: Conduct Fundraising, T0087: Develop Video-Based Content, T0146: Account Asset, T0148.006: Crowdfunding Platform).<br><br><i>On Patreon, we found several instances of direct monetisation of COVID-19 disinformation, including posts promoting a device allegedly protecting against COVID-19 and 5G, as well as posts related to the “Plandemic” conspiracy video, which gained attention on YouTube before being removed by the platform.<br><br>We also found an account called “Stranger than fiction” entirely dedicated to disinformation, which openly states that their content was “Banned by screwtube and fakebook, our videos have been viewed over a billion times.”</i><br><br>The “Stranger than fiction” user presented itself as an alternative news source which had been banned from other platforms (T0146: Account Asset, T0097.202: News Outlet Persona, T0121.001: Bypass Content Blocking, T0152.012: Subscription Service Platform).<br><br><i>On the US-based crowdfunding platform IndieGogo, EU DisinfoLab found a successful crowdfunding campaign of €133.903 for a book called Revolution Q. This book, now also available on Amazon, claims to be “Written for both newcomers and long-time QAnon fans alike, this book is a treasure-trove of information designed to help everyone weather The Storm.”</i><br><br>An IndieGogo account was used to gather funds to produce a book on QAnon (T0017: Conduct Fundraising, T0085.005: Develop Book, T0146: Account Asset, T0148.006: Crowdfunding Platform), with the book later sold on Amazon marketplace (T0148.007: eCommerce Platform). |
| [I00111 Patreon Is Bankrolling Climate Change Deniers While We All Burn](../../generated_pages/incidents/I00111.md) | In this article VICE News discusses a report produced by Advance Democracy on people who use Patreon to spread the false claim that an impending ice age will reverse the harms of the ongoing climate crisis:<br><br><i>“The spread of climate misinformation is prolific on social media, as well as on sites like Patreon, where users are actually financially compensated through the platform for spreading falsehoods,” Daniel Jones, president of Advance Democracy, told VICE News.<br><br>“Companies hosting and promoting climate misinformation have a responsibility to take action to reduce dangerous misinformation, as falsehoods about climate science are every bit as dangerous as lies about vaccinations and disinformation about our elections.”<br><br>Patreon did not respond to VICE News' request for comment on the report's findings.<br><br>One of the biggest accounts spreading climate conspiracies is ADAPT 2030, which is run by David DuByne, who has 1,100 followers on Patreon. He is currently making over $3,500 every month from his subscribers.<br><br>[The science DuByne relies on does not support his hypothesis. However,] this has not prevented DuByne and many others from preying on people's fears about climate change to spread conspiracies about an impending ice age, which they say will miraculously fix all of earth's climate problems.<br><br>DuByne offers seven different membership levels for supporters, beginning at just $1 per month.<br><br>The most expensive costs $100 a month, and gives patrons “a private 20-minute call with David DuByne once per month, to discuss your particular preparedness issues or concerns.” So far just two people are paying this amount.<br><br>The researchers also found at least eight other accounts on Patreon that have spread climate change conspiracy theories as part of wider conspiracy sharing, including baseless claims about COVID-19 and the legitimacy of Joe Biden's presidency. Some of these accounts are earning over $600 per month.</i><br><br>David DuByne created an account on Patreon, which he uses to post text, videos, and podcasts for his subscribers to discuss (T0085: Develop Text-Based Content, T0087: Develop Video-Based Content, T0088: Develop Audio-Based Content, T0146: Account Asset, T0115: Post Content, T0152.012: Subscription Service Platform, T0151.014: Comments Section, T0155.006: Subscription Access Asset). |
| [I00125 The Agency](../../generated_pages/incidents/I00125.md) | In 2014 threat actors attributed to Russia spread the false narrative that a local chemical plant had leaked toxic fumes. This report discusses aspects of the operation:<br><br><i>[The chemical plant leak] hoax was just one in a wave of similar attacks during the second half of last year. On Dec. 13, two months after a handful of Ebola cases in the United States touched off a minor media panic, many of the same Twitter accounts used to spread the Columbian Chemicals hoax began to post about an outbreak of Ebola in Atlanta. [...] Again, the attention to detail was remarkable, suggesting a tremendous amount of effort. A YouTube video showed a team of hazmat-suited medical workers transporting a victim from the airport. Beyoncé's recent single “7/11” played in the background, an apparent attempt to establish the video's contemporaneity. A truck in the parking lot sported the logo of the Hartsfield-Jackson Atlanta International Airport.</i><br><br>Accounts which previously presented as Louisiana locals were repurposed for use in a different campaign, this time presenting as locals to Atlanta, a place over 500 miles away from Louisiana and in a different timezone (T0146: Account Asset, T0097.101: Local Persona, T0143.002: Fabricated Persona, T0151.008: Microblogging Platform, T0150.004: Repurposed Asset). <br><br>A video was created which appeared to support the campaign's narrative (T0087: Develop Video-Based Content), with great attention given to small details which made the video appear more legitimate. |


@@ -8,8 +8,8 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00068 Attempted Audio Deepfake Call Targets LastPass Employee](../../generated_pages/incidents/I00068.md) | <i>“While reports of [...] deepfake calls targeting private companies are luckily still rare, LastPass itself experienced a deepfake attempt earlier today that we are sharing with the larger community to help raise awareness that this tactic is spreading and all companies should be on the alert. In our case, an employee received a series of calls, texts, and at least one voicemail featuring an audio deepfake from a threat actor impersonating our CEO via WhatsApp. As the attempted communication was outside of normal business communication channels and due to the employee's suspicion regarding the presence of many of the hallmarks of a social engineering attempt (such as forced urgency), our employee rightly ignored the messages and reported the incident to our internal security team so that we could take steps to both mitigate the threat and raise awareness of the tactic both internally and externally.”</i><br><br> In this example attackers impersonated the CEO of LastPass (T0097.100: Individual Persona, T0143.003: Impersonated Persona), targeting one of its employees over WhatsApp (T0151.004: Chat Platform, T0155.007: Encrypted Communication Channel) using deepfaked audio (T0088.001: Develop AI-Generated Audio (Deepfakes)). |
| [I00096 China ramps up use of AI misinformation](../../generated_pages/incidents/I00096.md) | The Microsoft Threat Analysis Centre (MTAC) published a report documenting the use of AI by pro-Chinese threat actors:<br><br><i>On 13 January, Spamouflage [(a Pro-Chinese Communist Party actor)] posted audio clips to YouTube of independent candidate [for Taiwan's Jan 2024 presidential election] Terry Gou who also founded electronics giant Foxconn in which Gou endorsed another candidate in the race. This clip was almost certainly AI-generated, and it was swiftly removed by YouTube. A fake letter purporting to be from Gou, endorsing the same candidate, had already circulated Gou had of course made no such endorsement.</i><br><br>Here Spamouflage used an account on YouTube to post AI-generated audio impersonating an electoral candidate (T0146: Account, T0152.006: Video Platform, T0115: Post Content, T0088.001: Develop AI-Generated Audio (Deepfakes), T0143.003: Impersonated Persona, T0097.110: Party Official Persona).<br><br><i>Spamouflage also exploited AI-powered video platform CapCut which is owned by TikTok backers ByteDance to generate fake news anchors which were used in a variety of campaigns targeting the various presidential candidates in Taiwan.</i><br><br>Spamouflage created accounts on CapCut, which it used to create AI-generated videos of fabricated news anchors (T0146: Account, T0154.002: AI Media Platform, T0087.001: Develop AI-Generated Video (Deepfakes), T0143.002: Fabricated Persona, T0097.102: Journalist Persona). |
| [I00103 The racist AI deepfake that fooled and divided a community](../../generated_pages/incidents/I00103.md) | <i>“I seriously don't understand why I have to constantly put up with these dumbasses here every day.”<br><br>So began what appeared to be a long tirade from the principal of Pikesville High School, punctuated with racist, antisemitic and offensive tropes. It sounded like it had been secretly recorded.<br><br>The speaker went on to bemoan “ungrateful black kids” and Jewish people in the community.<br><br>The clip, first posted in [January 2024], went viral nationally. But it really struck a nerve in the peaceful, leafy suburb of Pikesville, which has large black and Jewish populations, and in the nearby city of Baltimore, Maryland. Principal Eric Eiswert was put on paid administrative leave pending an investigation.<br><br>[...]<br><br>But what those sharing the clip didn't realise at the time was that another bombshell was about to drop: the clip was an AI-generated fake.<br><br>[...]<br><br>[In April 2024], Baltimore Police Chief Robert McCullough confirmed they now had “conclusive evidence that the recording was not authentic”.<br><br>And they believed they knew who made the fake.<br><br>Police charged 31-year-old Dazhon Darien, the school's athletics director, with several counts related to the fake video. Charges included theft, retaliating against a witness and stalking.<br><br>He was arrested at the airport, where police say he was planning to fly to Houston, Texas.<br><br>Police say that Mr Darien had been under investigation by Principal Eiswert over an alleged theft of $1,916 (£1,460) from the school. They also allege there had been “work performance challenges” and his contract was likely not to be renewed.<br><br>Their theory was that by creating the deepfake recording, he hoped to discredit the principal before he could be fired.<br><br>Investigators say they traced an email used to send the original video to a server connected to Mr Darien, and allege that he used Baltimore County Public Schools' computer network to access AI tools. He is due to stand trial in December 2024.</i><br><br>By associating Mr Darien to the server used to email the original AI generated audio, investigators link Darien to the fabricated content (T0149.005: Server, T0088.001: AI Generated Audio (Deepfakes)). They also assert that Darien used computers owned by the school to access platforms used to generate the audio (T0146: Account, T0154.002: AI Media Platform). |
| [I00096 China ramps up use of AI misinformation](../../generated_pages/incidents/I00096.md) | The Microsoft Threat Analysis Centre (MTAC) published a report documenting the use of AI by pro-Chinese threat actors:<br><br><i>On 13 January, Spamouflage [(a Pro-Chinese Communist Party actor)] posted audio clips to YouTube of independent candidate [for Taiwan's Jan 2024 presidential election] Terry Gou who also founded electronics giant Foxconn in which Gou endorsed another candidate in the race. This clip was almost certainly AI-generated, and it was swiftly removed by YouTube. A fake letter purporting to be from Gou, endorsing the same candidate, had already circulated Gou had of course made no such endorsement.</i><br><br>Here Spamouflage used an account on YouTube to post AI-generated audio impersonating an electoral candidate (T0146: Account Asset, T0152.006: Video Platform, T0115: Post Content, T0088.001: Develop AI-Generated Audio (Deepfakes), T0143.003: Impersonated Persona, T0097.110: Party Official Persona).<br><br><i>Spamouflage also exploited AI-powered video platform CapCut which is owned by TikTok backers ByteDance to generate fake news anchors which were used in a variety of campaigns targeting the various presidential candidates in Taiwan.</i><br><br>Spamouflage created accounts on CapCut, which it used to create AI-generated videos of fabricated news anchors (T0146: Account Asset, T0154.002: AI Media Platform, T0087.001: Develop AI-Generated Video (Deepfakes), T0143.002: Fabricated Persona, T0097.102: Journalist Persona). |
| [I00103 The racist AI deepfake that fooled and divided a community](../../generated_pages/incidents/I00103.md) | <i>“I seriously don't understand why I have to constantly put up with these dumbasses here every day.”<br><br>So began what appeared to be a long tirade from the principal of Pikesville High School, punctuated with racist, antisemitic and offensive tropes. It sounded like it had been secretly recorded.<br><br>The speaker went on to bemoan “ungrateful black kids” and Jewish people in the community.<br><br>The clip, first posted in [January 2024], went viral nationally. But it really struck a nerve in the peaceful, leafy suburb of Pikesville, which has large black and Jewish populations, and in the nearby city of Baltimore, Maryland. Principal Eric Eiswert was put on paid administrative leave pending an investigation.<br><br>[...]<br><br>But what those sharing the clip didn't realise at the time was that another bombshell was about to drop: the clip was an AI-generated fake.<br><br>[...]<br><br>[In April 2024], Baltimore Police Chief Robert McCullough confirmed they now had “conclusive evidence that the recording was not authentic”.<br><br>And they believed they knew who made the fake.<br><br>Police charged 31-year-old Dazhon Darien, the school's athletics director, with several counts related to the fake video. Charges included theft, retaliating against a witness and stalking.<br><br>He was arrested at the airport, where police say he was planning to fly to Houston, Texas.<br><br>Police say that Mr Darien had been under investigation by Principal Eiswert over an alleged theft of $1,916 (£1,460) from the school. They also allege there had been “work performance challenges” and his contract was likely not to be renewed.<br><br>Their theory was that by creating the deepfake recording, he hoped to discredit the principal before he could be fired.<br><br>Investigators say they traced an email used to send the original video to a server connected to Mr Darien, and allege that he used Baltimore County Public Schools' computer network to access AI tools. He is due to stand trial in December 2024.</i><br><br>By associating Mr Darien to the server used to email the original AI generated audio, investigators link Darien to the fabricated content (T0149.005: Server Asset, T0088.001: AI Generated Audio (Deepfakes)). They also assert that Darien used computers owned by the school to access platforms used to generate the audio (T0146: Account Asset, T0154.002: AI Media Platform). |


@@ -7,7 +7,7 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00111 Patreon Is Bankrolling Climate Change Deniers While We All Burn](../../generated_pages/incidents/I00111.md) | In this article VICE News discusses a report produced by Advance Democracy on people who use Patreon to spread the false claim that an impending ice age will reverse the harms of the ongoing climate crisis:<br><br><i>“The spread of climate misinformation is prolific on social media, as well as on sites like Patreon, where users are actually financially compensated through the platform for spreading falsehoods,” Daniel Jones, president of Advance Democracy, told VICE News.<br><br>“Companies hosting and promoting climate misinformation have a responsibility to take action to reduce dangerous misinformation, as falsehoods about climate science are every bit as dangerous as lies about vaccinations and disinformation about our elections.”<br><br>Patreon did not respond to VICE News' request for comment on the report's findings.<br><br>One of the biggest accounts spreading climate conspiracies is ADAPT 2030, which is run by David DuByne, who has 1,100 followers on Patreon. He is currently making over $3,500 every month from his subscribers.<br><br>[The science DuByne relies on does not support his hypothesis. However,] this has not prevented DuByne and many others from preying on people's fears about climate change to spread conspiracies about an impending ice age, which they say will miraculously fix all of earth's climate problems.<br><br>DuByne offers seven different membership levels for supporters, beginning at just $1 per month.<br><br>The most expensive costs $100 a month, and gives patrons “a private 20-minute call with David DuByne once per month, to discuss your particular preparedness issues or concerns.” So far just two people are paying this amount.<br><br>The researchers also found at least eight other accounts on Patreon that have spread climate change conspiracy theories as part of wider conspiracy sharing, including baseless claims about COVID-19 and the legitimacy of Joe Biden's presidency. Some of these accounts are earning over $600 per month.</i><br><br>David DuByne created an account on Patreon, which he uses to post text, videos, and podcasts for his subscribers to discuss (T0085: Develop Text-Based Content, T0087: Develop Video-Based Content, T0088: Develop Audio-Based Content, T0146: Account, T0115: Post Content, T0152.012: Subscription Service Platform, T0151.014: Comments Section, T0155.006: Subscription Access). |
| [I00111 Patreon Is Bankrolling Climate Change Deniers While We All Burn](../../generated_pages/incidents/I00111.md) | In this article VICE News discusses a report produced by Advance Democracy on people who use Patreon to spread the false claim that an impending ice age will reverse the harms of the ongoing climate crisis:<br><br><i>“The spread of climate misinformation is prolific on social media, as well as on sites like Patreon, where users are actually financially compensated through the platform for spreading falsehoods,” Daniel Jones, president of Advance Democracy, told VICE News.<br><br>“Companies hosting and promoting climate misinformation have a responsibility to take action to reduce dangerous misinformation, as falsehoods about climate science are every bit as dangerous as lies about vaccinations and disinformation about our elections.”<br><br>Patreon did not respond to VICE News' request for comment on the report's findings.<br><br>One of the biggest accounts spreading climate conspiracies is ADAPT 2030, which is run by David DuByne, who has 1,100 followers on Patreon. He is currently making over $3,500 every month from his subscribers.<br><br>[The science DuByne relies on does not support his hypothesis. However,] this has not prevented DuByne and many others from preying on people's fears about climate change to spread conspiracies about an impending ice age, which they say will miraculously fix all of earth's climate problems.<br><br>DuByne offers seven different membership levels for supporters, beginning at just $1 per month.<br><br>The most expensive costs $100 a month, and gives patrons “a private 20-minute call with David DuByne once per month, to discuss your particular preparedness issues or concerns.” So far just two people are paying this amount.<br><br>The researchers also found at least eight other accounts on Patreon that have spread climate change conspiracy theories as part of wider conspiracy sharing, including baseless claims about COVID-19 and the legitimacy of Joe Biden's presidency. Some of these accounts are earning over $600 per month.</i><br><br>David DuByne created an account on Patreon, which he uses to post text, videos, and podcasts for his subscribers to discuss (T0085: Develop Text-Based Content, T0087: Develop Video-Based Content, T0088: Develop Audio-Based Content, T0146: Account Asset, T0115: Post Content, T0152.012: Subscription Service Platform, T0151.014: Comments Section, T0155.006: Subscription Access Asset). |


@@ -7,7 +7,7 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00118 War Thunder players are once again leaking sensitive military technology information on a video game forum](../../generated_pages/incidents/I00118.md) | <i>In an effort to prove that the developers behind a popular multiplayer vehicle combat game had made a mistake, a player went ahead and published classified British military documents about one of the real-life tanks featured in the game.<br><br>This truly bizarre turn of events recently occurred in the public forum for War Thunder, a free-to-play multiplayer combat sim featuring modern land, air, and sea craft. Getting a small detail wrong on a piece of equipment might not be a big deal for the average gamer, but for the War Thunder crowd it sure as hell is. With 25,000 devoted players, the game very much bills itself as the military vehicle combat simulator.<br><br>A player, who identified himself as a British tank commander, claimed that the game's developers at Gaijin Entertainment had inaccurately represented the Challenger 2 main battle tank used by the British military.<br><br>The self-described tank commander's bio listed his location as Tidworth Camp in Wiltshire, England, according to the UK Defense Journal, which reported that the base is home to the Royal Tank Regiment, which fields Challenger 2 tanks.<br><br>The player, who went by the handle Pyrophoric, reportedly shared an image on the War Thunder forum of the tank's specs that were pulled from the Challenger 2's Army Equipment Support Publication, which is essentially a technical manual. <br><br>[...]<br><br>A moderator for the forum, whose handle is “Templar_”, explained that the developer had removed the material after they received confirmation from the Ministry of Defense that the document is still in fact classified.</i><br><br>A user of War Thunder's forums posted confidential documents to win an argument (T0089.001: Obtain Authentic Documents, T0146: Account, T0097.105: Military Personnel Persona, T0115: Post Content, T0143.001: Authentic Persona, T0151.009: Legacy Online Forum Platform). |
| [I00118 War Thunder players are once again leaking sensitive military technology information on a video game forum](../../generated_pages/incidents/I00118.md) | <i>In an effort to prove that the developers behind a popular multiplayer vehicle combat game had made a mistake, a player went ahead and published classified British military documents about one of the real-life tanks featured in the game.<br><br>This truly bizarre turn of events recently occurred in the public forum for War Thunder, a free-to-play multiplayer combat sim featuring modern land, air, and sea craft. Getting a small detail wrong on a piece of equipment might not be a big deal for the average gamer, but for the War Thunder crowd it sure as hell is. With 25,000 devoted players, the game very much bills itself as the military vehicle combat simulator.<br><br>A player, who identified himself as a British tank commander, claimed that the game's developers at Gaijin Entertainment had inaccurately represented the Challenger 2 main battle tank used by the British military.<br><br>The self-described tank commander's bio listed his location as Tidworth Camp in Wiltshire, England, according to the UK Defense Journal, which reported that the base is home to the Royal Tank Regiment, which fields Challenger 2 tanks.<br><br>The player, who went by the handle Pyrophoric, reportedly shared an image on the War Thunder forum of the tank's specs that were pulled from the Challenger 2's Army Equipment Support Publication, which is essentially a technical manual. <br><br>[...]<br><br>A moderator for the forum, whose handle is “Templar_”, explained that the developer had removed the material after they received confirmation from the Ministry of Defense that the document is still in fact classified.</i><br><br>A user of War Thunder's forums posted confidential documents to win an argument (T0089.001: Obtain Authentic Documents, T0146: Account Asset, T0097.105: Military Personnel Persona, T0115: Post Content, T0143.001: Authentic Persona, T0151.009: Legacy Online Forum Platform). |


@@ -7,7 +7,7 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00119 Independent journalist publishes Trump campaign document hacked by Iran despite election interference concerns](../../generated_pages/incidents/I00119.md) | <i>An American journalist who runs an independent newsletter published a document [on 26 Sep 2024] that appears to have been stolen from Donald Trump's presidential campaign — the first public posting of a file that is believed to be part of a dossier that federal officials say is part of an Iranian effort to manipulate the [2024] U.S. election.<br><br>The PDF document is a 271-page opposition research file on former President Donald Trump's running mate, Sen. JD Vance, R-Ohio.<br><br>For more than two months, hackers who the U.S. says are tied to Iran have tried to persuade the American media to cover files they stole. No outlets took the bait.<br><br>But on Thursday, reporter Ken Klippenstein, who self-publishes on Substack after he left The Intercept this year, published one of the files.<br><br>[...]<br><br>Reporters who have received the documents describe the same pattern: An AOL account emails them files, signed by a person using the name “Robert,” who is reluctant to speak to their identity or reasons for wanting the documents to receive coverage.<br><br>NBC News was not part of the Robert persona's direct outreach, but it has viewed its correspondence with a reporter at another publication.<br><br> One of the emails from the Robert persona previously viewed by NBC News included three large PDF files, each corresponding to Trump's three reported finalists for vice president. The Vance file appears to be the one Klippenstein hosts on his site.</i><br><br>In this example hackers attributed to Iran used the Robert persona to email journalists hacked documents (T0146: Account, T0097.100: Individual Persona, T0153.001: Email Platform).<br><br>The journalist Ken Klippenstein used his existing blog on Substack to host a link to download the document (T0089: Obtain Private Documents, T0097.102: Journalist Persona, T0115: Post Content, T0143.001: Authentic Persona, T0152.001: Blogging Platform, T0152.002: Blog, T0150.003: Pre-Existing). |
| [I00119 Independent journalist publishes Trump campaign document hacked by Iran despite election interference concerns](../../generated_pages/incidents/I00119.md) | <i>An American journalist who runs an independent newsletter published a document [on 26 Sep 2024] that appears to have been stolen from Donald Trump's presidential campaign — the first public posting of a file that is believed to be part of a dossier that federal officials say is part of an Iranian effort to manipulate the [2024] U.S. election.<br><br>The PDF document is a 271-page opposition research file on former President Donald Trump's running mate, Sen. JD Vance, R-Ohio.<br><br>For more than two months, hackers who the U.S. says are tied to Iran have tried to persuade the American media to cover files they stole. No outlets took the bait.<br><br>But on Thursday, reporter Ken Klippenstein, who self-publishes on Substack after he left The Intercept this year, published one of the files.<br><br>[...]<br><br>Reporters who have received the documents describe the same pattern: An AOL account emails them files, signed by a person using the name “Robert,” who is reluctant to speak to their identity or reasons for wanting the documents to receive coverage.<br><br>NBC News was not part of the Robert persona's direct outreach, but it has viewed its correspondence with a reporter at another publication.<br><br> One of the emails from the Robert persona previously viewed by NBC News included three large PDF files, each corresponding to Trump's three reported finalists for vice president. The Vance file appears to be the one Klippenstein hosts on his site.</i><br><br>In this example hackers attributed to Iran used the Robert persona to email journalists hacked documents (T0146: Account Asset, T0097.100: Individual Persona, T0153.001: Email Platform).<br><br>The journalist Ken Klippenstein used his existing blog on Substack to host a link to download the document (T0089: Obtain Private Documents, T0097.102: Journalist Persona, T0115: Post Content, T0143.001: Authentic Persona, T0152.001: Blogging Platform, T0152.002: Blog Asset, T0150.003: Pre-Existing Asset). |

View file

@@ -7,7 +7,7 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00108 How you thought you support the animals and you ended up funding white supremacists](../../generated_pages/incidents/I00108.md) | <i>This article examines the white nationalist group Suavelos' use of Facebook to draw visitors to its website without overtly revealing their racist ideology:<br><br>Suavelos uses Facebook and other platforms to amplify its message. In order to bypass the platform's community standards and keep their public pages active, Facebook pages such as “I support the police” are a good vehicle to spread a specific agenda without claiming to be racist. In looking back at this Facebook page, we followed Facebook's algorithm for related pages and found suggested Facebook pages<br><br>[...]<br><br>This amplification strategy on Facebook is successful, as according to SimilarWeb figures, it attracts around 111,000 visits every month on the Suavelos.eu website.<br><br>[...]<br><br>Revenue through online advertisements can be achieved by different platforms through targeted advertisements, like Google Adsense or Doubleclick, or related and similar sponsored content, such as Taboola. Accordingly, Suavelos.eu uses both of these websites to display advertisements and consequently receives funding from such advertisements.<br><br>Once visitors are on the website supporting its advertisement revenue, Suavelos' goal is to then turn these visitors into regular members of the Suavelos network through donations or fees, or have them continue to support Suavelos.</i><br><br>Suavelos created a variety of pages on Facebook which presented as centring on prosocial causes. Facebook's algorithm helped direct users to these pages (T0092: Build Network, T0151.001: Social Media Platform, T0153.006: Content Recommendation Algorithm, T0151.003: Online Community Page, T0143.208: Social Cause Persona).<br><br>Suavelos used these pages to generate traffic for their WordPress site (T0122: Direct Users to Alternative Platforms, T0152.003: Website Hosting Platform, T0152.004: Website), which used accounts on a variety of online advertising platforms to host adverts (T0146: Account, T0153.005: Online Advertising Platform). |
| [I00108 How you thought you support the animals and you ended up funding white supremacists](../../generated_pages/incidents/I00108.md) | <i>This article examines the white nationalist group Suavelos' use of Facebook to draw visitors to its website without overtly revealing their racist ideology:<br><br>Suavelos uses Facebook and other platforms to amplify its message. In order to bypass the platform's community standards and keep their public pages active, Facebook pages such as “I support the police” are a good vehicle to spread a specific agenda without claiming to be racist. In looking back at this Facebook page, we followed Facebook's algorithm for related pages and found suggested Facebook pages<br><br>[...]<br><br>This amplification strategy on Facebook is successful, as according to SimilarWeb figures, it attracts around 111,000 visits every month on the Suavelos.eu website.<br><br>[...]<br><br>Revenue through online advertisements can be achieved by different platforms through targeted advertisements, like Google Adsense or Doubleclick, or related and similar sponsored content, such as Taboola. Accordingly, Suavelos.eu uses both of these websites to display advertisements and consequently receives funding from such advertisements.<br><br>Once visitors are on the website supporting its advertisement revenue, Suavelos' goal is to then turn these visitors into regular members of the Suavelos network through donations or fees, or have them continue to support Suavelos.</i><br><br>Suavelos created a variety of pages on Facebook which presented as centring on prosocial causes. Facebook's algorithm helped direct users to these pages (T0092: Build Network, T0151.001: Social Media Platform, T0153.006: Content Recommendation Algorithm, T0151.003: Online Community Page, T0143.208: Social Cause Persona).<br><br>Suavelos used these pages to generate traffic for their WordPress site (T0122: Direct Users to Alternative Platforms, T0152.003: Website Hosting Platform, T0152.004: Website Asset), which used accounts on a variety of online advertising platforms to host adverts (T0146: Account Asset, T0153.005: Online Advertising Platform). |

View file

@@ -9,7 +9,7 @@
| -------- | -------------------- |
| [I00068 Attempted Audio Deepfake Call Targets LastPass Employee](../../generated_pages/incidents/I00068.md) | <i>“While reports of [...] deepfake calls targeting private companies are luckily still rare, LastPass itself experienced a deepfake attempt earlier today that we are sharing with the larger community to help raise awareness that this tactic is spreading and all companies should be on the alert. In our case, an employee received a series of calls, texts, and at least one voicemail featuring an audio deepfake from a threat actor impersonating our CEO via WhatsApp. As the attempted communication was outside of normal business communication channels and due to the employee's suspicion regarding the presence of many of the hallmarks of a social engineering attempt (such as forced urgency), our employee rightly ignored the messages and reported the incident to our internal security team so that we could take steps to both mitigate the threat and raise awareness of the tactic both internally and externally.”</i><br><br>In this example attackers impersonated the CEO of LastPass (T0097.100: Individual Persona, T0143.003: Impersonated Persona), targeting one of its employees over WhatsApp (T0151.004: Chat Platform, T0155.007: Encrypted Communication Channel) using deepfaked audio (T0088.001: Develop AI-Generated Audio (Deepfakes)). |
| [I00069 Uncharmed: Untangling Iran's APT42 Operations](../../generated_pages/incidents/I00069.md) | <i>“[Iranian state-sponsored cyber espionage actor] APT42 cloud operations attack lifecycle can be described in details as follows:<br><br>- “Social engineering schemes involving decoys and trust building, which includes masquerading as legitimate NGOs and conducting ongoing correspondence with the target, sometimes lasting several weeks.<br>- The threat actor masqueraded as well-known international organizations in the legal and NGO fields and sent emails from domains typosquatting the original NGO domains, for example aspenlnstitute[.]org.<br>- The Aspen Institute became aware of this spoofed domain and collaborated with industry partners, including blocking it in SafeBrowsing, thus protecting users of Google Chrome and additional browsers.<br>- To increase their credibility, APT42 impersonated high-ranking personnel working at the aforementioned organizations when creating the email personas.<br>- APT42 enhanced their campaign credibility by using decoy material inviting targets to legitimate and relevant events and conferences. In one instance, the decoy material was hosted on an attacker-controlled SharePoint folder, accessible only after the victim entered their credentials. Mandiant did not identify malicious elements in the files, suggesting they were used solely to gain the victim's trust.”</i><br><br>In this example APT42, an Iranian state-sponsored cyber espionage actor, created a domain impersonating the existing NGO The Aspen Institute (T0143.003: Impersonated Persona, T0097.207: NGO Persona). They increased the perceived legitimacy of the impersonation by also impersonating high-ranking employees of the NGO (T0097.100: Individual Persona, T0143.003: Impersonated Persona). |
| [I00119 Independent journalist publishes Trump campaign document hacked by Iran despite election interference concerns](../../generated_pages/incidents/I00119.md) | <i>An American journalist who runs an independent newsletter published a document [on 26 Sep 2024] that appears to have been stolen from Donald Trump's presidential campaign — the first public posting of a file that is believed to be part of a dossier that federal officials say is part of an Iranian effort to manipulate the [2024] U.S. election.<br><br>The PDF document is a 271-page opposition research file on former President Donald Trump's running mate, Sen. JD Vance, R-Ohio.<br><br>For more than two months, hackers who the U.S. says are tied to Iran have tried to persuade the American media to cover files they stole. No outlets took the bait.<br><br>But on Thursday, reporter Ken Klippenstein, who self-publishes on Substack after he left The Intercept this year, published one of the files.<br><br>[...]<br><br>Reporters who have received the documents describe the same pattern: An AOL account emails them files, signed by a person using the name “Robert,” who is reluctant to speak to their identity or reasons for wanting the documents to receive coverage.<br><br>NBC News was not part of the Robert persona's direct outreach, but it has viewed its correspondence with a reporter at another publication.<br><br>One of the emails from the Robert persona previously viewed by NBC News included three large PDF files, each corresponding to Trump's three reported finalists for vice president. The Vance file appears to be the one Klippenstein hosts on his site.</i><br><br>In this example, hackers attributed to Iran used the Robert persona to email journalists hacked documents (T0146: Account, T0097.100: Individual Persona, T0153.001: Email Platform).<br><br>The journalist Ken Klippenstein used his existing blog on Substack to host a link to download the document (T0089: Obtain Private Documents, T0097.102: Journalist Persona, T0115: Post Content, T0143.001: Authentic Persona, T0152.001: Blogging Platform, T0152.002: Blog, T0150.003: Pre-Existing). |
| [I00119 Independent journalist publishes Trump campaign document hacked by Iran despite election interference concerns](../../generated_pages/incidents/I00119.md) | <i>An American journalist who runs an independent newsletter published a document [on 26 Sep 2024] that appears to have been stolen from Donald Trump's presidential campaign — the first public posting of a file that is believed to be part of a dossier that federal officials say is part of an Iranian effort to manipulate the [2024] U.S. election.<br><br>The PDF document is a 271-page opposition research file on former President Donald Trump's running mate, Sen. JD Vance, R-Ohio.<br><br>For more than two months, hackers who the U.S. says are tied to Iran have tried to persuade the American media to cover files they stole. No outlets took the bait.<br><br>But on Thursday, reporter Ken Klippenstein, who self-publishes on Substack after he left The Intercept this year, published one of the files.<br><br>[...]<br><br>Reporters who have received the documents describe the same pattern: An AOL account emails them files, signed by a person using the name “Robert,” who is reluctant to speak to their identity or reasons for wanting the documents to receive coverage.<br><br>NBC News was not part of the Robert persona's direct outreach, but it has viewed its correspondence with a reporter at another publication.<br><br>One of the emails from the Robert persona previously viewed by NBC News included three large PDF files, each corresponding to Trump's three reported finalists for vice president. The Vance file appears to be the one Klippenstein hosts on his site.</i><br><br>In this example, hackers attributed to Iran used the Robert persona to email journalists hacked documents (T0146: Account Asset, T0097.100: Individual Persona, T0153.001: Email Platform).<br><br>The journalist Ken Klippenstein used his existing blog on Substack to host a link to download the document (T0089: Obtain Private Documents, T0097.102: Journalist Persona, T0115: Post Content, T0143.001: Authentic Persona, T0152.001: Blogging Platform, T0152.002: Blog Asset, T0150.003: Pre-Existing Asset). |

View file

@@ -7,13 +7,12 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00071 Russia-aligned hacktivists stir up anti-Ukrainian sentiments in Poland](../../generated_pages/incidents/I00071.md) | <i>“The August 17 [2022] Telegram post [which contained a falsified letter from the Ukrainian Minister of Foreign Affairs asking Poland to rename Belwederska Street in Warsaw — the location of the Russian embassy building — as Stepan Bandera Street, in honor of the far-right nationalist who led the Ukrainian Insurgent Army during WWII] also contained screenshots of Facebook posts that appeared on two Facebook accounts belonging to Polish nationals Piotr Górka, an expert in the history of the Polish Air Force, and Dariusz Walusiak, a Polish historian and documentary maker. The Górka post suggested that he fully supported the Polish government's decision to change Belwederska Street to Stepan Bandera Street.<br><br>“In a statement to the DFRLab, Górka said his account was accessed without his consent. “This is not my post loaded to my Facebook page,” he explained. “My site was hacked, some days ago.” At the time of publishing, Piotr Górka's post and his Facebook account were no longer accessible.<br><br>“The post on Górka's Facebook page was shared by Dariusz Walusiak's Facebook account; the account also reposted it on the Facebook walls of more than twenty other Facebook users, including Adam Kalita, currently working at Krakow branch of the Institute of National Remembrance; Jan Kasprzyk, head of the Office for War Veterans and Victims of Oppression; and Alicja Kondraciuk, a Polish public figure living in Krakow.<br><br>“Walusiak's Facebook account is also no longer accessible. Given his work on Polish history and identity, it seems highly unlikely he would support the Bandera measure; the DFRLab has also reached out to him for comment.<br><br>“The fact that Joker DPR's Telegram post included screenshots of their Facebook posts raises the strong possibility that both Facebook accounts were compromised, and that hackers planted false statements on their pages that would seem out of character for them in order to gain further attention to the forged documents.”</i><br><br>In this example, threat actors used compromised accounts of Polish historians who have enough relevant knowledge to plausibly weigh in on the forged letter's narrative (T0143.003: Impersonated Persona, T0097.101: Local Persona, T0097.108: Expert Persona, T0146: Account, T0150.005: Compromised, T0151.001: Social Media Platform).<br><br>This matches T0097.108: Expert Persona because the impersonation exploited Górka and Walusiak's existing personas as experts in Polish history. |
| [I00071 Russia-aligned hacktivists stir up anti-Ukrainian sentiments in Poland](../../generated_pages/incidents/I00071.md) | <i>“The August 17 [2022] Telegram post [which contained a falsified letter from the Ukrainian Minister of Foreign Affairs asking Poland to rename Belwederska Street in Warsaw — the location of the Russian embassy building — as Stepan Bandera Street, in honor of the far-right nationalist who led the Ukrainian Insurgent Army during WWII] also contained screenshots of Facebook posts that appeared on two Facebook accounts belonging to Polish nationals Piotr Górka, an expert in the history of the Polish Air Force, and Dariusz Walusiak, a Polish historian and documentary maker. The Górka post suggested that he fully supported the Polish government's decision to change Belwederska Street to Stepan Bandera Street.<br><br>“In a statement to the DFRLab, Górka said his account was accessed without his consent. “This is not my post loaded to my Facebook page,” he explained. “My site was hacked, some days ago.” At the time of publishing, Piotr Górka's post and his Facebook account were no longer accessible.<br><br>“The post on Górka's Facebook page was shared by Dariusz Walusiak's Facebook account; the account also reposted it on the Facebook walls of more than twenty other Facebook users, including Adam Kalita, currently working at Krakow branch of the Institute of National Remembrance; Jan Kasprzyk, head of the Office for War Veterans and Victims of Oppression; and Alicja Kondraciuk, a Polish public figure living in Krakow.<br><br>“Walusiak's Facebook account is also no longer accessible. Given his work on Polish history and identity, it seems highly unlikely he would support the Bandera measure; the DFRLab has also reached out to him for comment.<br><br>“The fact that Joker DPR's Telegram post included screenshots of their Facebook posts raises the strong possibility that both Facebook accounts were compromised, and that hackers planted false statements on their pages that would seem out of character for them in order to gain further attention to the forged documents.”</i><br><br>In this example, threat actors used compromised accounts of Polish historians who have enough relevant knowledge to plausibly weigh in on the forged letter's narrative (T0143.003: Impersonated Persona, T0097.101: Local Persona, T0097.108: Expert Persona, T0146: Account Asset, T0150.005: Compromised Asset, T0151.001: Social Media Platform).<br><br>This matches T0097.108: Expert Persona because the impersonation exploited Górka and Walusiak's existing personas as experts in Polish history. |
| [I00076 Network of Social Media Accounts Impersonates U.S. Political Candidates, Leverages U.S. and Israeli Media in Support of Iranian Interests](../../generated_pages/incidents/I00076.md) | <i>“In addition to directly posting material on social media, we observed some personas in the network [of inauthentic accounts attributed to Iran] leverage legitimate print and online media outlets in the U.S. and Israel to promote Iranian interests via the submission of letters, guest columns, and blog posts that were then published. We also identified personas that we suspect were fabricated for the sole purpose of submitting such letters, but that do not appear to maintain accounts on social media. The personas claimed to be based in varying locations depending on the news outlets they were targeting for submission; for example, a persona that listed their location as Seattle, WA in a letter submitted to the Seattle Times subsequently claimed to be located in Baytown, TX in a letter submitted to The Baytown Sun. Other accounts in the network then posted links to some of these letters on social media.”</i><br><br> In this example actors fabricated individuals who lived in areas which were being targeted for influence through the use of letters to local papers (T0097.101: Local Persona, T0143.002: Fabricated Persona). |
| [I00078 Meta's September 2020 Removal of Coordinated Inauthentic Behavior](../../generated_pages/incidents/I00078.md) | <i>“[Meta has] removed one Page, five Facebook accounts, one Group and three Instagram accounts for foreign or government interference which is coordinated inauthentic behavior on behalf of a foreign or government entity. This small network originated in Russia and focused primarily on Turkey and Europe, and also on the United States.<br><br>“This operation relied on fake accounts — some of which had been already detected and removed by our automated systems — to manage their Page and their Group, and to drive people to their site purporting to be an independent think-tank based primarily in Turkey. These accounts posed as locals based in Turkey, Canada and the US. They also recruited people to write for their website. This network had almost no following on our platforms when we removed it.”</i><br><br>Meta identified that a network of accounts originating in Russia were driving people off platform to a site which presented itself as a think-tank (T0097.204: Think Tank Persona). Meta did not make an attribution about the authenticity of this off-site think tank, so neither T0143.001: Authentic Persona nor T0143.002: Fabricated Persona is used here.<br><br>Meta had access to technical data for accounts on its platform, and asserted that they were fabricated individuals posing as locals who recruited targets to write content for their website (T0097.101: Local Persona, T0097.106: Recruiter Persona, T0143.002: Fabricated Persona). |
| [I00081 Belarus KGB created fake accounts to criticize Poland during border crisis, Facebook parent company says](../../generated_pages/incidents/I00081.md) | <i>“Meta said it also removed 31 Facebook accounts, four groups, two events and four Instagram accounts that it believes originated in Poland and targeted Belarus and Iraq. Those allegedly fake accounts posed as Middle Eastern migrants posting about the border crisis. Meta did not link the accounts to a specific group.<br><br>““These fake personas claimed to be sharing their own negative experiences of trying to get from Belarus to Poland and posted about migrants' difficult lives in Europe,” Meta said. “They also posted about Poland's strict anti-migrant policies and anti-migrant neo-Nazi activity in Poland. They also shared links to news articles criticizing the Belarusian government's handling of the border crisis and off-platform videos alleging migrant abuse in Europe.””</i><br><br>In this example accounts falsely presented themselves as having local insight into the border crisis narrative (T0097.101: Local Persona, T0143.002: Fabricated Persona). |
| [I00086 #WeAreNotSafe Exposing How a Post-October 7th Disinformation Network Operates on Israeli Social Media](../../generated_pages/incidents/I00086.md) | Accounts which were identified as part of “a sophisticated and extensive coordinated network orchestrating a disinformation campaign targeting Israeli digital spaces since October 7th, 2023” were presenting themselves as locals to Israel (T0097.101: Local Persona):<br><br><i>“Unlike usual low-effort fake accounts, these accounts meticulously mimic young Israelis. They stand out due to the extraordinary lengths taken to ensure their authenticity, from unique narratives to the content they produce to their seemingly authentic interactions.”</i> |
| [I00087 Challenging Truth and Trust: A Global Inventory of Organized Social Media Manipulation](../../generated_pages/incidents/I00087.md) | <i>“Another actor operating in China is the American-based company Devumi. Most of the Twitter accounts managed by Devumi resemble real people, and some are even associated with a kind of large-scale social identity theft. At least 55,000 of the accounts use the names, profile pictures, hometowns and other personal details of real Twitter users, including minors, according to The New York Times (Confessore et al., 2018).”</i><br><br>In this example accounts impersonated real locals while spreading operation narratives (T0143.003: Impersonated Persona, T0097.101: Local Persona). The impersonation included stealing the legitimate accounts' profile pictures (T0145.001: Copy Account Imagery). |
| [I00125 The Agency](../../generated_pages/incidents/I00125.md) | In 2014 threat actors attributed to Russia spread the false narrative that a local chemical plant had leaked toxic fumes. This report discusses aspects of the operation:<br><br><i>[The chemical plant leak] hoax was just one in a wave of similar attacks during the second half of last year. On Dec. 13, two months after a handful of Ebola cases in the United States touched off a minor media panic, many of the same Twitter accounts used to spread the Columbian Chemicals hoax began to post about an outbreak of Ebola in Atlanta. [...] Again, the attention to detail was remarkable, suggesting a tremendous amount of effort. A YouTube video showed a team of hazmat-suited medical workers transporting a victim from the airport. Beyoncé's recent single “7/11” played in the background, an apparent attempt to establish the video's contemporaneity. A truck in the parking lot sported the logo of the Hartsfield-Jackson Atlanta International Airport.</i><br><br>Accounts which previously presented as Louisiana locals were repurposed for use in a different campaign, this time presenting as locals to Atlanta, a place over 500 miles away from Louisiana and in a different timezone (T0146: Account, T0097.101: Local Persona, T0143.002: Fabricated Persona, T0151.008: Microblogging Platform, T0150.004: Repurposed).<br><br>A video was created which appeared to support the campaign's narrative (T0087: Develop Video-Based Content), with great attention given to small details which made the video appear more legitimate. |

View file

@@ -10,9 +10,9 @@
| [I00076 Network of Social Media Accounts Impersonates U.S. Political Candidates, Leverages U.S. and Israeli Media in Support of Iranian Interests](../../generated_pages/incidents/I00076.md) | <i>“Accounts in the network [of inauthentic accounts attributed to Iran], under the guise of journalist personas, also solicited various individuals over Twitter for interviews and chats, including real journalists and politicians. The personas appear to have successfully conducted remote video and audio interviews with U.S. and UK-based individuals, including a prominent activist, a radio talk show host, and a former U.S. Government official, and subsequently posted the interviews on social media, showing only the individual being interviewed and not the interviewer. The interviewees expressed views that Iran would likely find favorable, discussing topics such as the February 2019 Warsaw summit, an attack on a military parade in the Iranian city of Ahvaz, and the killing of Jamal Khashoggi.<br><br>“The provenance of these interviews appear to have been misrepresented on at least one occasion, with one persona appearing to have falsely claimed to be operating on behalf of a mainstream news outlet; a remote video interview with a US-based activist about the Jamal Khashoggi killing was posted by an account adopting the persona of a journalist from the outlet Newsday, with the Newsday logo also appearing in the video. We did not identify any Newsday interview with the activist in question on this topic. In another instance, a persona posing as a journalist directed tweets containing audio of an interview conducted with a former U.S. Government official at real media personalities, calling on them to post about the interview.”</i><br><br>In this example actors fabricated journalists (T0097.102: Journalist Persona, T0143.002: Fabricated Persona) who worked at existing news outlets (T0097.202: News Outlet Persona, T0143.003: Impersonated Persona) in order to conduct interviews with targeted individuals. |
| [I00080 Hundreds Of Propaganda Accounts Targeting Iran And Qatar Have Been Removed From Facebook](../../generated_pages/incidents/I00080.md) | <i>“One example of a fake reporter account targeting Americans is “Jenny Powell,” a self-described Washington-based journalist, volunteer, and environmental activist. At first glance, Powell's Twitter timeline looks like it belongs to a young and eager reporter amplifying her interests. But her profile photo is a stock image, and many of her links go to the propaganda sites.<br><br>“Powell, who joined the platform just last month, shares links to stories from major US news media outlets, retweets local news about Washington, DC, and regularly promotes content from The Foreign Code and The Economy Club. Other fake journalist accounts behaved similarly to Powell and had generic descriptions. One of the accounts, for a fake Bruce Lopez in Louisiana, has a bio that describes him as a “Correspondent Traveler (noun) (linking verb) (noun/verb/adjective),” which appears to reveal the formula used to write Twitter bios for the accounts.”</i><br><br>The Jenny Powell account used in this influence operation presents as both a journalist and an activist (T0097.102: Journalist Persona, T0097.103: Activist Persona, T0143.002: Fabricated Persona). This example shows how threat actors can easily follow a template to present a fabricated persona to their target audience (T0144.002: Persona Template). |
| [I00082 Meta's November 2021 Adversarial Threat Report](../../generated_pages/incidents/I00082.md) | <i>“[Meta] removed 41 Facebook accounts, five Groups, and four Instagram accounts for violating our policy against coordinated inauthentic behavior. This activity originated in Belarus and primarily targeted audiences in the Middle East and Europe.<br><br>“The core of this activity began in October 2021, with some accounts created as recently as mid-November. The people behind it used newly-created fake accounts — many of which were detected and disabled by our automated systems soon after creation — to pose as journalists and activists from the European Union, particularly Poland and Lithuania. Some of the accounts used profile photos likely generated using artificial intelligence techniques like generative adversarial networks (GAN). These fictitious personas posted criticism of Poland in English, Polish, and Kurdish, including pictures and videos about Polish border guards allegedly violating migrants' rights, and compared Poland's treatment of migrants against other countries. They also posted to Groups focused on the welfare of migrants in Europe. A few accounts posted in Russian about relations between Belarus and the Baltic States.”</i><br><br>This example shows how accounts identified as participating in coordinated inauthentic behaviour were presenting themselves as journalists and activists while spreading operation narratives (T0097.102: Journalist Persona, T0097.103: Activist Persona).<br><br>Additionally, analysts at Meta identified accounts which were participating in coordinated inauthentic behaviour that had likely used AI-Generated images as their profile pictures (T0145.002: AI-Generated Account Imagery). |
| [I00096 China ramps up use of AI misinformation](../../generated_pages/incidents/I00096.md) | The Microsoft Threat Analysis Centre (MTAC) published a report documenting the use of AI by pro-Chinese threat actors:<br><br><i>On 13 January, Spamouflage [(a Pro-Chinese Communist Party actor)] posted audio clips to YouTube of independent candidate [for Taiwan's Jan 2024 presidential election] Terry Gou, who also founded electronics giant Foxconn, in which Gou endorsed another candidate in the race. This clip was almost certainly AI-generated, and it was swiftly removed by YouTube. A fake letter purporting to be from Gou, endorsing the same candidate, had already circulated; Gou had of course made no such endorsement.</i><br><br>Here Spamouflage used an account on YouTube to post AI-generated audio impersonating an electoral candidate (T0146: Account, T0152.006: Video Platform, T0115: Post Content, T0088.001: Develop AI-Generated Audio (Deepfakes), T0143.003: Impersonated Persona, T0097.110: Party Official Persona).<br><br><i>Spamouflage also exploited AI-powered video platform CapCut, which is owned by TikTok backer ByteDance, to generate fake news anchors which were used in a variety of campaigns targeting the various presidential candidates in Taiwan.</i><br><br>Spamouflage created accounts on CapCut, which it used to create AI-generated videos of fabricated news anchors (T0146: Account, T0154.002: AI Media Platform, T0087.001: Develop AI-Generated Video (Deepfakes), T0143.002: Fabricated Persona, T0097.102: Journalist Persona). |
| [I00119 Independent journalist publishes Trump campaign document hacked by Iran despite election interference concerns](../../generated_pages/incidents/I00119.md) | <i>An American journalist who runs an independent newsletter published a document [on 26 Sep 2024] that appears to have been stolen from Donald Trump's presidential campaign — the first public posting of a file that is believed to be part of a dossier that federal officials say is part of an Iranian effort to manipulate the [2024] U.S. election.<br><br>The PDF document is a 271-page opposition research file on former President Donald Trump's running mate, Sen. JD Vance, R-Ohio.<br><br>For more than two months, hackers who the U.S. says are tied to Iran have tried to persuade the American media to cover files they stole. No outlets took the bait.<br><br>But on Thursday, reporter Ken Klippenstein, who self-publishes on Substack after he left The Intercept this year, published one of the files.<br><br>[...]<br><br>Reporters who have received the documents describe the same pattern: An AOL account emails them files, signed by a person using the name “Robert,” who is reluctant to speak to their identity or reasons for wanting the documents to receive coverage.<br><br>NBC News was not part of the Robert persona's direct outreach, but it has viewed its correspondence with a reporter at another publication.<br><br>One of the emails from the Robert persona previously viewed by NBC News included three large PDF files, each corresponding to Trump's three reported finalists for vice president. The Vance file appears to be the one Klippenstein hosts on his site.</i><br><br>In this example, hackers attributed to Iran used the Robert persona to email journalists hacked documents (T0146: Account, T0097.100: Individual Persona, T0153.001: Email Platform).<br><br>The journalist Ken Klippenstein used his existing blog on Substack to host a link to download the document (T0089: Obtain Private Documents, T0097.102: Journalist Persona, T0115: Post Content, T0143.001: Authentic Persona, T0152.001: Blogging Platform, T0152.002: Blog, T0150.003: Pre-Existing). |
| [I00126 Charming Kitten Updates POWERSTAR with an InterPlanetary Twist](../../generated_pages/incidents/I00126.md) | <i>The target of the recently observed [highly targeted spearphishing attack by “Charming Kitten”, a hacker group attributed to Iran] had published an article related to Iran. The publicity appears to have garnered the attention of Charming Kitten, who subsequently created an email address to impersonate a reporter of an Israeli media organization in order to send the target an email. Prior to sending malware to the target, the attacker simply asked if the target would be open to reviewing a document they had written related to US foreign policy. The target agreed to do so, since this was not an unusual request; they are frequently asked by journalists to review opinion pieces relating to their field of work.<br><br>In an effort to further gain the targets confidence, Charming Kitten continued the interaction with another benign email containing a list of questions, to which the target then responded with answers. After multiple days of benign and seemingly legitimate interaction, Charming Kitten finally sent a “draft report”; this was the first time anything opaquely malicious occurred. The “draft report” was, in fact, a password-protected RAR file containing a malicious LNK file. The password for the RAR file was provided in a subsequent email.</i><br><br>In this example, threat actors created an email address on a domain which impersonated an existing Israeli news organisation impersonating a reporter who worked there (T0097.102: Journalist Persona, T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0149.003: Lookalike Domain, T0149.002: Email Domain) in order to convince the target to download a document containing malware (T0085.004: Develop Document, T0147.003: Malware). |
| [I00096 China ramps up use of AI misinformation](../../generated_pages/incidents/I00096.md) | The Microsoft Threat Analysis Centre (MTAC) published a report documenting the use of AI by pro-Chinese threat actors:<br><br><i>On 13 January, Spamouflage [(a Pro-Chinese Communist Party actor)] posted audio clips to YouTube of independent candidate [for Taiwan's Jan 2024 presidential election] Terry Gou — who also founded electronics giant Foxconn — in which Gou endorsed another candidate in the race. This clip was almost certainly AI-generated, and it was swiftly removed by YouTube. A fake letter purporting to be from Gou, endorsing the same candidate, had already circulated — Gou had of course made no such endorsement.</i><br><br>Here Spamouflage used an account on YouTube to post AI-generated audio impersonating an electoral candidate (T0146: Account Asset, T0152.006: Video Platform, T0115: Post Content, T0088.001: Develop AI-Generated Audio (Deepfakes), T0143.003: Impersonated Persona, T0097.110: Party Official Persona).<br><br><i>Spamouflage also exploited AI-powered video platform CapCut — which is owned by TikTok backers ByteDance — to generate fake news anchors which were used in a variety of campaigns targeting the various presidential candidates in Taiwan.</i><br><br>Spamouflage created accounts on CapCut, which it used to create AI-generated videos of fabricated news anchors (T0146: Account Asset, T0154.002: AI Media Platform, T0087.001: Develop AI-Generated Video (Deepfakes), T0143.002: Fabricated Persona, T0097.102: Journalist Persona). |
| [I00119 Independent journalist publishes Trump campaign document hacked by Iran despite election interference concerns](../../generated_pages/incidents/I00119.md) | <i>An American journalist who runs an independent newsletter published a document [on 26 Sep 2024] that appears to have been stolen from Donald Trump's presidential campaign — the first public posting of a file that is believed to be part of a dossier that federal officials say is part of an Iranian effort to manipulate the [2024] U.S. election.<br><br>The PDF document is a 271-page opposition research file on former President Donald Trump's running mate, Sen. JD Vance, R-Ohio.<br><br>For more than two months, hackers who the U.S. says are tied to Iran have tried to persuade the American media to cover files they stole. No outlets took the bait.<br><br>But on Thursday, reporter Ken Klippenstein, who self-publishes on Substack after he left The Intercept this year, published one of the files.<br><br>[...]<br><br>Reporters who have received the documents describe the same pattern: An AOL account emails them files, signed by a person using the name “Robert,” who is reluctant to speak to their identity or reasons for wanting the documents to receive coverage.<br><br>NBC News was not part of the Robert persona's direct outreach, but it has viewed its correspondence with a reporter at another publication.<br><br>One of the emails from the Robert persona previously viewed by NBC News included three large PDF files, each corresponding to Trump's three reported finalists for vice president. The Vance file appears to be the one Klippenstein hosts on his site.</i><br><br>In this example, hackers attributed to Iran used the Robert persona to email journalists hacked documents (T0146: Account Asset, T0097.100: Individual Persona, T0153.001: Email Platform).<br><br>The journalist Ken Klippenstein used his existing blog on Substack to host a link to download the document (T0089: Obtain Private Documents, T0097.102: Journalist Persona, T0115: Post Content, T0143.001: Authentic Persona, T0152.001: Blogging Platform, T0152.002: Blog Asset, T0150.003: Pre-Existing Asset). |
| [I00126 Charming Kitten Updates POWERSTAR with an InterPlanetary Twist](../../generated_pages/incidents/I00126.md) | <i>The target of the recently observed [highly targeted spearphishing attack by “Charming Kitten”, a hacker group attributed to Iran] had published an article related to Iran. The publicity appears to have garnered the attention of Charming Kitten, who subsequently created an email address to impersonate a reporter of an Israeli media organization in order to send the target an email. Prior to sending malware to the target, the attacker simply asked if the target would be open to reviewing a document they had written related to US foreign policy. The target agreed to do so, since this was not an unusual request; they are frequently asked by journalists to review opinion pieces relating to their field of work.<br><br>In an effort to further gain the target's confidence, Charming Kitten continued the interaction with another benign email containing a list of questions, to which the target then responded with answers. After multiple days of benign and seemingly legitimate interaction, Charming Kitten finally sent a “draft report”; this was the first time anything overtly malicious occurred. The “draft report” was, in fact, a password-protected RAR file containing a malicious LNK file. The password for the RAR file was provided in a subsequent email.</i><br><br>In this example, threat actors created an email address on a domain which impersonated an existing Israeli news organisation, and used it to impersonate a reporter who worked there (T0097.102: Journalist Persona, T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0149.003: Lookalike Domain, T0149.002: Email Domain Asset) in order to convince the target to download a document containing malware (T0085.004: Develop Document, T0147.003: Malware Asset). |


@ -7,7 +7,7 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00118 War Thunder players are once again leaking sensitive military technology information on a video game forum](../../generated_pages/incidents/I00118.md) | <i>In an effort to prove that the developers behind a popular multiplayer vehicle combat game had made a mistake, a player went ahead and published classified British military documents about one of the real-life tanks featured in the game.<br><br>This truly bizarre turn of events recently occurred in the public forum for War Thunder, a free-to-player multiplayer combat sim featuring modern land, air, and sea craft. Getting a small detail wrong on a piece of equipment might not be a big deal for the average gamer, but for the War Thunder crowd it sure as hell is. With 25,000 devoted players, the game very much bills itself as the military vehicle combat simulator.<br><br>A player, who identified himself as a British tank commander, claimed that the games developers at Gaijin Entertainment had inaccurately represented the Challenger 2 main battle tank used by the British military.<br><br>The self-described tank commanders bio listed his location as Tidworth Camp in Wiltshire, England, according to the UK Defense Journal, which reported that the base is home to the Royal Tank Regiment, which fields Challenger 2 tanks.<br><br>The player, who went by the handle Pyrophoric, reportedly shared an image on the War Thunder forum of the tanks specs that were pulled from the Challenger 2s Army Equipment Support Publication, which is essentially a technical manual. 
<br><br>[...]<br><br>A moderator for the forum, whos handle is “Templar_”, explained that the developer had removed the material after they received confirmation from the Ministry of Defense that the document is still in fact classified.</i><br><br>A user of War Thunders forums posted confidential documents to win an argument (T0089.001: Obtain Authentic Documents, T0146: Account, T0097.105: Military Personnel Persona, T0115: Post Content, T0143.001: Authentic Persona, T0151.009: Legacy Online Forum Platform). |
| [I00118 War Thunder players are once again leaking sensitive military technology information on a video game forum](../../generated_pages/incidents/I00118.md) | <i>In an effort to prove that the developers behind a popular multiplayer vehicle combat game had made a mistake, a player went ahead and published classified British military documents about one of the real-life tanks featured in the game.<br><br>This truly bizarre turn of events recently occurred in the public forum for War Thunder, a free-to-play multiplayer combat sim featuring modern land, air, and sea craft. Getting a small detail wrong on a piece of equipment might not be a big deal for the average gamer, but for the War Thunder crowd it sure as hell is. With 25,000 devoted players, the game very much bills itself as the military vehicle combat simulator.<br><br>A player, who identified himself as a British tank commander, claimed that the game's developers at Gaijin Entertainment had inaccurately represented the Challenger 2 main battle tank used by the British military.<br><br>The self-described tank commander's bio listed his location as Tidworth Camp in Wiltshire, England, according to the UK Defense Journal, which reported that the base is home to the Royal Tank Regiment, which fields Challenger 2 tanks.<br><br>The player, who went by the handle Pyrophoric, reportedly shared an image on the War Thunder forum of the tank's specs that were pulled from the Challenger 2's Army Equipment Support Publication, which is essentially a technical manual.<br><br>[...]<br><br>A moderator for the forum, whose handle is “Templar_”, explained that the developer had removed the material after they received confirmation from the Ministry of Defense that the document is still in fact classified.</i><br><br>A user of War Thunder's forums posted confidential documents to win an argument (T0089.001: Obtain Authentic Documents, T0146: Account Asset, T0097.105: Military Personnel Persona, T0115: Post Content, T0143.001: Authentic Persona, T0151.009: Legacy Online Forum Platform). |


@@ -7,7 +7,7 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00071 Russia-aligned hacktivists stir up anti-Ukrainian sentiments in Poland](../../generated_pages/incidents/I00071.md) | <i>“The August 17 [2022] Telegram post [which contained a falsified letter from the Ukrainian Minister of Foreign Affairs asking Poland to rename Belwederska Street in Warsaw — the location of the Russian embassy building — as Stepan Bandera Street, in honor of the far-right nationalist who led the Ukrainian Insurgent Army during WWII] also contained screenshots of Facebook posts that appeared on two Facebook accounts belonging to Polish nationals Piotr Górka, an expert in the history of the Polish Air Force, and Dariusz Walusiak, a Polish historian and documentary maker. The Górka post suggested that he fully supported the Polish governments decision to change Belwederska Street to Stepan Bandera Street.<br><br> “In a statement to the DFRLab, Górka said his account was accessed without his consent. “This is not my post loaded to my Facebook page,” he explained. “My site was hacked, some days ago.” At the time of publishing, Piotr Górkas post and his Facebook account were no longer accessible.<br><br> “The post on Górkas Facebook page was shared by Dariusz Walusiaks Facebook account; the account also reposted it on the Facebook walls of more than twenty other Facebook users, including Adam Kalita, currently working at Krakow branch of the Institute of National Remembrance; Jan Kasprzyk, head of the Office for War Veterans and Victims of Oppression; and Alicja Kondraciuk, a Polish public figure living in Krakow.<br><br> “Walusiaks Facebook account is also no longer accessible. 
Given his work on Polish history and identity, it seems highly unlikely he would support the Bandera measure; the DFRLab has also reached out to him for comment.<br><br> “The fact that Joker DPRs Telegram post included screenshots of their Facebook posts raises the strong possibility that both Facebook accounts were compromised, and that hackers planted false statements on their pages that would seem out of character for them in order to gain further attention to the forged documents.”</I><br><br> In this example, threat actors used compromised accounts of Polish historians who have enough relevant knowledge to plausibly weigh in on the forged letters narrative (T0143.003: Impersonated Persona, T0097.101: Local Persona, T0097.108: Expert Persona, T0146: Account, T0150.005: Compromised, T0151.001: Social Media Platform). <br><br> This matches T0097.108: Expert Persona because the impersonation exploited Górka and Walusiaks existing personas as experts in Polish history. |
| [I00071 Russia-aligned hacktivists stir up anti-Ukrainian sentiments in Poland](../../generated_pages/incidents/I00071.md) | <i>“The August 17 [2022] Telegram post [which contained a falsified letter from the Ukrainian Minister of Foreign Affairs asking Poland to rename Belwederska Street in Warsaw — the location of the Russian embassy building — as Stepan Bandera Street, in honor of the far-right nationalist who led the Ukrainian Insurgent Army during WWII] also contained screenshots of Facebook posts that appeared on two Facebook accounts belonging to Polish nationals Piotr Górka, an expert in the history of the Polish Air Force, and Dariusz Walusiak, a Polish historian and documentary maker. The Górka post suggested that he fully supported the Polish government's decision to change Belwederska Street to Stepan Bandera Street.<br><br> “In a statement to the DFRLab, Górka said his account was accessed without his consent. “This is not my post loaded to my Facebook page,” he explained. “My site was hacked, some days ago.” At the time of publishing, Piotr Górka's post and his Facebook account were no longer accessible.<br><br> “The post on Górka's Facebook page was shared by Dariusz Walusiak's Facebook account; the account also reposted it on the Facebook walls of more than twenty other Facebook users, including Adam Kalita, currently working at Krakow branch of the Institute of National Remembrance; Jan Kasprzyk, head of the Office for War Veterans and Victims of Oppression; and Alicja Kondraciuk, a Polish public figure living in Krakow.<br><br> “Walusiak's Facebook account is also no longer accessible. Given his work on Polish history and identity, it seems highly unlikely he would support the Bandera measure; the DFRLab has also reached out to him for comment.<br><br> “The fact that Joker DPR's Telegram post included screenshots of their Facebook posts raises the strong possibility that both Facebook accounts were compromised, and that hackers planted false statements on their pages that would seem out of character for them in order to gain further attention to the forged documents.”</I><br><br> In this example, threat actors used compromised accounts of Polish historians who have enough relevant knowledge to plausibly weigh in on the forged letter's narrative (T0143.003: Impersonated Persona, T0097.101: Local Persona, T0097.108: Expert Persona, T0146: Account Asset, T0150.005: Compromised Asset, T0151.001: Social Media Platform). <br><br> This matches T0097.108: Expert Persona because the impersonation exploited Górka and Walusiak's existing personas as experts in Polish history. |
| [I00079 Three thousand fake tanks](../../generated_pages/incidents/I00079.md) | <i>“The sixth [website to repost a confirmed false narrative investigated in this report] is an apparent think tank, the Center for Global Strategic Monitoring. This website describes itself, in English apparently written by a non-native speaker, as a “nonprofit and nonpartisan research and analysis institution dedicated to providing insights of the think tank community publications”. It does, indeed, publish think-tank reports on issues such as Turkey and US-China relations; however, the reports are the work of other think tanks, often unattributed (the two mentioned in this sentence were actually produced by the Brookings Institution, although the website makes no mention of the fact). It also fails to provide an address, or any other contact details other than an email, and its (long) list of experts includes entries apparently copied and pasted from other institutions. Thus, the “think tank” website which shared the fake story appears to be a fake itself.”</i> In this example a website which amplified a false narrative presented itself as a think tank (T0097.204: Think Tank Persona).<br><br> This is an entirely fabricated persona (T0143.002: Fabricated Persona); it republished content from other think tanks without attribution (T0084.002: Plagiarise Content) and fabricated experts (T0097.108: Expert Persona, T0143.002: Fabricated Persona) to make it more believable that they were a real think tank. |


@@ -7,6 +7,8 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00064 Tinder nightmares: the promise and peril of political bots](../../generated_pages/incidents/I00064.md) | <I>“In the days leading up to the UK's [2019] general election, youths looking for love online encountered a whole new kind of Tinder nightmare. A group of young activists built a Tinder chatbot to co-opt profiles and persuade swing voters to support Labour. The bot accounts sent 30,000-40,000 messages to targeted 18-25 year olds in battleground constituencies like Dudley North, which Labour ended up winning by only 22 votes. [...]<br><br> “The activists maintain that the project was meant to foster democratic engagement. But screenshots of the bots' activity expose a harsher reality. Images of conversations between real users and these bots, posted on i-D, Mashable, as well as on Fowler and Goodman's public Twitter accounts, show that the bots did not identify themselves as automated accounts, instead posing as the user whose profile they had taken over. While conducting research for this story, it turned out that a number of [the reporter's friends] living in Oxford had interacted with the bot in the lead up to the election and had no idea that it was not a real person.”</i><br><br> In this example people offered up their real accounts for the automation of political messaging; the actors convinced the users to give up access to their accounts to use in the operation. The actors maintained the accounts' existing personas, and presented themselves as potential romantic suitors for legitimate platform users (T0097.109: Romantic Suitor Persona, T0143.003: Impersonated Persona, T0146: Account Asset, T0150.007: Rented Asset, T0151.017: Dating Platform). |
| [I00089 Hackers Use Fake Facebook Profiles of Attractive Women to Spread Viruses, Steal Passwords](../../generated_pages/incidents/I00089.md) | <I>“On Facebook, Rita, Alona and Christina appeared to be just like the millions of other U.S. citizens sharing their lives with the world. They discussed family outings, shared emojis and commented on each other's photographs.<br><br> “In reality, the three accounts were part of a highly-targeted cybercrime operation, used to spread malware that was able to steal passwords and spy on victims.<br><br> “Hackers with links to Lebanon likely ran the covert scheme using a strain of malware dubbed "Tempting Cedar Spyware," according to researchers from Prague-based anti-virus company Avast, which detailed its findings in a report released on Wednesday.<br><br> “In a honey trap tactic as old as time, the culprits' targets were mostly male, and lured by fake attractive women. <br><br> “In the attack, hackers would send flirtatious messages using Facebook to the chosen victims, encouraging them to download a second, booby-trapped, chat application known as Kik Messenger to have "more secure" conversations. Upon analysis, Avast experts found that "many fell for the trap.””</i><br><br> In this example, threat actors took on the persona of a romantic suitor on Facebook, directing their targets to another platform (T0097.109: Romantic Suitor Persona, T0145.006: Attractive Person Account Imagery, T0143.002: Fabricated Persona). |


@@ -7,10 +7,10 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00065 'Ghostwriter' Influence Campaign: Unknown Actors Leverage Website Compromises and Fabricated Content to Push Narratives Aligned With Russian Security Interests](../../generated_pages/incidents/I00065.md) | _”Overall, narratives promoted in the five operations appear to represent a concerted effort to discredit the ruling political coalition, widen existing domestic political divisions and project an image of coalition disunity in Poland. In each incident, content was primarily disseminated via Twitter, Facebook, and/ or Instagram accounts belonging to Polish politicians, all of whom have publicly claimed their accounts were compromised at the times the posts were made."_ <br /> <br />This example demonstrates how threat actors can use compromised accounts to distribute inauthentic content while exploiting the legitimate account holders persona (T0097.110: Party Official Persona, T0143.003: Impersonated Persona, T0146: Account, T0150.005: Compromised). |
| [I00065 'Ghostwriter' Influence Campaign: Unknown Actors Leverage Website Compromises and Fabricated Content to Push Narratives Aligned With Russian Security Interests](../../generated_pages/incidents/I00065.md) | _"Overall, narratives promoted in the five operations appear to represent a concerted effort to discredit the ruling political coalition, widen existing domestic political divisions and project an image of coalition disunity in Poland. In each incident, content was primarily disseminated via Twitter, Facebook, and/or Instagram accounts belonging to Polish politicians, all of whom have publicly claimed their accounts were compromised at the times the posts were made."_ <br /> <br />This example demonstrates how threat actors can use compromised accounts to distribute inauthentic content while exploiting the legitimate account holder's persona (T0097.110: Party Official Persona, T0143.003: Impersonated Persona, T0146: Account Asset, T0150.005: Compromised Asset). |
| [I00075 How Russia Meddles Abroad for Profit: Cash, Trolls and a Cult Leader](../../generated_pages/incidents/I00075.md) | <I>“In the campaign's final weeks, Pastor Mailhol said, the team of Russians made a request: Drop out of the race and support Mr. Rajoelina. He refused.<br><br> “The Russians made the same proposal to the history professor running for president, saying, “If you accept this deal you will have money,” according to Ms. Rasamimanana, the professor's campaign manager.<br><br> When the professor refused, she said, the Russians created a fake Facebook page that mimicked his official page and posted an announcement on it that he was supporting Mr. Rajoelina.”</i><br><br> In this example actors created online accounts styled to look like official pages to trick targets into thinking that the presidential candidate announced that they had dropped out of the election (T0097.110: Party Official Persona, T0143.003: Impersonated Persona). |
| [I00076 Network of Social Media Accounts Impersonates U.S. Political Candidates, Leverages U.S. and Israeli Media in Support of Iranian Interests](../../generated_pages/incidents/I00076.md) | <i>“Some Twitter accounts in the network [of inauthentic accounts attributed to Iran] impersonated Republican political candidates that ran for House of Representatives seats in the 2018 U.S. congressional midterms. These accounts appropriated the candidates' photographs and, in some cases, plagiarized tweets from the real individuals' accounts. Aside from impersonating real U.S. political candidates, the behavior and activity of these accounts resembled that of the others in the network.<br><br> “For example, the account @livengood_marla impersonated Marla Livengood, a 2018 candidate for California's 9th Congressional District, using a photograph of Livengood and a campaign banner for its profile and background pictures. The account began tweeting on Sept. 24, 2018, with its first tweet plagiarizing one from Livengood's official account earlier that month”<br><br> [...]<br><br> “In another example, the account @ButlerJineea impersonated Jineea Butler, a 2018 candidate for New York's 13th Congressional District, using a photograph of Butler for its profile picture and incorporating her campaign slogans into its background picture, as well as claiming in its Twitter bio to be a “US House candidate, NY-13” and linking to Butler's website, jineeabutlerforcongress[.]com.”</I><br><br> In this example actors impersonated existing political candidates (T0097.110: Party Official Persona, T0143.003: Impersonated Persona), strengthening the impersonation by copying legitimate accounts' imagery (T0145.001: Copy Account Imagery), and copying their previous posts (T0084.002: Plagiarise Content). |
| [I00096 China ramps up use of AI misinformation](../../generated_pages/incidents/I00096.md) | The Microsoft Threat Analysis Centre (MTAC) published a report documenting the use of AI by pro-Chinese threat actors:<br><br><i>On 13 January, Spamouflage [(a Pro-Chinese Communist Party actor)] posted audio clips to YouTube of independent candidate [for Taiwans Jan 2024 presidential election] Terry Gou who also founded electronics giant Foxconn in which Gou endorsed another candidate in the race. This clip was almost certainly AI-generated, and it was swiftly removed by YouTube. A fake letter purporting to be from Gou, endorsing the same candidate, had already circulated Gou had of course made no such endorsement.</i><br><br>Here Spamoflage used an account on YouTube to post AI Generated audio impersonating an electoral candidate (T0146: Account, T0152.006: Video Platform, T0115: Post Content, T0088.001: Develop AI-Generated Audio (Deepfakes), T0143.003: Impersonated Persona, T0097.110: Party Official Persona).<br><br><i>Spamouflage also exploited AI-powered video platform CapCut which is owned by TikTok backers ByteDance to generate fake news anchors which were used in a variety of campaigns targeting the various presidential candidates in Taiwan.</i><br><br>Spamoflage created accounts on CapCut, which it used to create AI-generated videos of fabricated news anchors (T0146: Account, T0154.002: AI Media Platform, T0087.001: Develop AI-Generated Video (Deepfakes), T0143.002: Fabricated Persona, T0097.102: Journalist Persona). |
| [I00096 China ramps up use of AI misinformation](../../generated_pages/incidents/I00096.md) | The Microsoft Threat Analysis Centre (MTAC) published a report documenting the use of AI by pro-Chinese threat actors:<br><br><i>On 13 January, Spamouflage [(a Pro-Chinese Communist Party actor)] posted audio clips to YouTube of independent candidate [for Taiwan's Jan 2024 presidential election] Terry Gou — who also founded electronics giant Foxconn — in which Gou endorsed another candidate in the race. This clip was almost certainly AI-generated, and it was swiftly removed by YouTube. A fake letter purporting to be from Gou, endorsing the same candidate, had already circulated — Gou had of course made no such endorsement.</i><br><br>Here Spamouflage used an account on YouTube to post AI-generated audio impersonating an electoral candidate (T0146: Account Asset, T0152.006: Video Platform, T0115: Post Content, T0088.001: Develop AI-Generated Audio (Deepfakes), T0143.003: Impersonated Persona, T0097.110: Party Official Persona).<br><br><i>Spamouflage also exploited AI-powered video platform CapCut — which is owned by TikTok backers ByteDance — to generate fake news anchors which were used in a variety of campaigns targeting the various presidential candidates in Taiwan.</i><br><br>Spamouflage created accounts on CapCut, which it used to create AI-generated videos of fabricated news anchors (T0146: Account Asset, T0154.002: AI Media Platform, T0087.001: Develop AI-Generated Video (Deepfakes), T0143.002: Fabricated Persona, T0097.102: Journalist Persona). |


@@ -10,7 +10,7 @@
| [I00071 Russia-aligned hacktivists stir up anti-Ukrainian sentiments in Poland](../../generated_pages/incidents/I00071.md) | <i>“On August 16, 2022, pro-Kremlin Telegram channel Joker DPR (Джокер ДНР) published a forged letter allegedly written by Ukrainian Foreign Minister Dmytro Kuleba. In the letter, Kuleba supposedly asked relevant Polish authorities to rename Belwederska Street in Warsaw — the location of the Russian embassy building — as Stepan Bandera Street, in honor of the far-right nationalist who led the Ukrainian Insurgent Army during WWII.<br><br> [...]<br><br> The letter is not dated, and Dmytro Kuleba's signature seems to be copied from a publicly available letter signed by him in 2021.”</i><br><br> In this example the Telegram channel Joker DPR published a forged letter (T0085.004: Develop Document) in which they impersonated the Ukrainian Minister of Foreign Affairs (T0097.111: Government Official Persona, T0143.003: Impersonated Persona), using Ministry letterhead (T0097.206: Government Institution Persona, T0143.003: Impersonated Persona). |
| [I00084 Russia turns its diplomats into disinformation warriors](../../generated_pages/incidents/I00084.md) | <i>“After the European Union banned Kremlin-backed media outlets and social media giants demoted their posts for peddling falsehoods about the war in Ukraine, Moscow has turned to its cadre of diplomats, government spokespeople and ministers — many of whom have extensive followings on social media — to promote disinformation about the conflict in Eastern Europe, according to four EU and United States officials.”</i><br><br>In this example authentic Russian government officials used their own accounts to promote false narratives (T0143.001: Authentic Persona, T0097.111: Government Official Persona).<br><br>The use of accounts managed by authentic Government / Diplomats to spread false narratives makes it harder for platforms to enforce content moderation, because of the political ramifications they may face for censoring elected officials (T0131: Exploit TOS/Content Moderation). For example, Twitter previously argued that official channels of world leaders are not removed due to the high public interest associated with their activities. |
| [I00085 Chinas large-scale media push: Attempts to influence Swedish media](../../generated_pages/incidents/I00085.md) | <i>“Four media companies Svenska Dagbladet, Expressen, Sveriges Radio, and Sveriges Television stated that they had been contacted by the Chinese embassy on several occasions, and that they, for instance, had been criticized on their publications, both by letters and e-mails.<br><br> The media company Svenska Dagbladet, had been contacted on several occasions in the past two years, including via e-mails directly from the Chinese ambassador to Sweden. Several times, China and the Chinese ambassador had criticized the media companys publications regarding the conditions in China. Individual reporters also reported having been subjected to criticism.<br><br> The tabloid Expressen had received several letters and e-mails from the embassy, e-mails containing criticism and threatening formulations regarding the coverage of the Swedish book publisher Gui Minhai, who has been imprisoned in China since 2015. Formulations such as “media tyranny” could be found in the e-mails.”</i><br><br> In this case, the Chinese ambassador is using their official role (T0143.001: Authentic Persona, T0097.111: Government Official Persona) to try to influence the Swedish press. A government official trying to interfere in another countrys media activities could be a violation of press freedom. In this specific case, the Chinese diplomats are trying to silence criticism against China (T0139.002: Silence). |
| [I00093 China Falsely Denies Disinformation Campaign Targeting Canadas Prime Minister](../../generated_pages/incidents/I00093.md) | <i>“On October 23, Canadas Foreign Ministry said it had discovered a disinformation campaign, likely tied to China, aimed at discrediting dozens of Canadian politicians, including Prime Minister Justin Trudeau.<br><br> “The ministry said the campaign took place in August and September. It used new and hijacked social media accounts to bulk-post messages targeting Canadian politicians (T0146: Account Asset, T0150.001: Newly Created Asset, T0150.005: Compromised Asset).<br><br> “A Chinese Embassy in Canada spokesperson dismissed Canadas accusation as baseless.<br><br> ““Canada was a downright liar and disseminator of false information… Beijing has never meddled in another nations domestic affairs.”<br><br> “A Chinese Embassy in Canada spokesperson dismissed Canadas accusation as baseless.<br><br> “That is false.<br><br> “The Canadian government's report is based on an investigation conducted by its Rapid Response Mechanism cyber intelligence unit in cooperation with the social media platforms.<br><br> “The investigation exposed Chinas disinformation campaign dubbed “Spamouflage” -- for its tactic of using “a network of new or hijacked social media accounts that posts and increases the number of propaganda messages across multiple social media platforms including Facebook, X/Twitter, Instagram, YouTube, Medium, Reddit, TikTok, and LinkedIn.””</i><br><br> In this case a network of accounts attributed to China were identified operating on multiple platforms. The report was dismissed as false information by an official in the Chinese Embassy in Canada (T0143.001: Authentic Persona, T0097.111: Government Official Persona, T0129.006: Deny Involvement). |


@@ -13,8 +13,8 @@
| [I00079 Three thousand fake tanks](../../generated_pages/incidents/I00079.md) | <i>“On January 4 [2017], a little-known news site based in Donetsk, Ukraine published an article claiming that the United States was sending 3,600 tanks to Europe as part of “the NATO war preparation against Russia”.<br><br> “Like much fake news, this story started with a grain of truth: the US was about to reinforce its armored units in Europe. However, the article converted literally thousands of other vehicles — including hundreds of Humvees and trailers — into tanks, building the US force into something 20 times more powerful than it actually was.<br><br> “The story caught on online. Within three days it had been repeated by a dozen websites in the United States, Canada and Europe, and shared some 40,000 times. It was translated into Norwegian; quoted, unchallenged, by Russian state news agency RIA Novosti; and spread among Russian-language websites.<br><br> “It was also an obvious fake, as any Google news search would have revealed. Yet despite its evident falsehood, it spread widely, and not just in directly Kremlin-run media. Tracking the spread of this fake therefore shines a light on the wider question of how fake stories are dispersed.”</i><br><br> Russian state news agency RIA Novosti presents itself as a news outlet (T0097.202: News Outlet Persona). RIA Novosti is a real news outlet (T0143.001: Authentic Persona), but it did not carry out the basic verification of the narrative it published that is implicitly expected of institutions presenting themselves as news outlets.<br><br> We cant know how or why this narrative ended up being published by RIA Novosti, but we know that it presented a distorted reality as authentic information (T0023: Distort Facts), claiming that the US was sending 3,600 tanks instead of 3,600 vehicles, of which only ~180 were tanks. |
| [I00094 A glimpse inside a Chinese influence campaign: How bogus news websites blur the line between true and false](../../generated_pages/incidents/I00094.md) | Researchers identified websites managed by a Chinese marketing firm which presented themselves as news organisations.<br><br> <i>“On its official website, the Chinese marketing firm boasted that they were in contact with news organizations across the globe, including one in South Korea called the “Chungcheng Times.” According to the joint team, this outlet is a fictional news organization created by the offending company. The Chinese company sought to disguise the sites true identity and purpose by altering the name attached to it by one character—making it very closely resemble the name of a legitimate outlet operating out of Chungchengbuk-do.<br><br> “The marketing firm also established a news organization under the Korean name “Gyeonggido Daily,” which closely resembles legitimate news outlets operating out of Gyeonggi province such as “Gyeonggi Daily,” “Daily Gyeonggi Newspaper,” and “Gyeonggi N Daily.” One of the fake news sites was named “Incheon Focus,” a title that could be easily mistaken for the legitimate local news outlet, “Focus Incheon.” Furthermore, the Chinese marketing company operated two fake news sites with names identical to two separate local news organizations, one of which ceased operations in December 2022.<br><br> “In total, fifteen out of eighteen Chinese fake news sites incorporated the correct names of real regions in their fake company names. “If the operators had created fake news sites similar to major news organizations based in Seoul, however, the intended deception would have easily been uncovered,” explained Song Tae-eun, an assistant professor in the Department of National Security & Unification Studies at the Korea National Diplomatic Academy, to The Readable.<br><br> “There is also the possibility that they are using the regional areas as an attempt to form ties with the local community; that being the government, the private sector, and religious communities.””</i><br><br> The firm styled its news sites to resemble existing local news outlets in its target regions (T0097.201: Local Institution Persona, T0097.202: News Outlet Persona, T0143.003: Impersonated Persona). |
| [I00107 The Lies Russia Tells Itself](../../generated_pages/incidents/I00107.md) | <i>The Moscow firm Social Design Agency (SDA) has been attributed as being behind a Russian disinformation project known as Doppelganger:<br><br>The SDAs deception work first surfaced in 2022, likely almost immediately after Doppelganger got off the ground. In April of that year, Meta, the parent company of Facebook and Instagram, disclosed in a quarterly report that it had removed from its platforms “a network of about 200 accounts operated from Russia.” By August 2022, German investigative journalists revealed that they had discovered forgeries of about 30 news sites, including many of the countrys biggest media outlets—Frankfurter Allgemeine, Der Spiegel, and Bild—but also Britains Daily Mail and Frances 20 Minutes. The sites had deceptive URLs such as www-dailymail-co-uk.dailymail.top. </i><br><br>As part of the SDAs work, they created many websites which impersonated existing media outlets. Sites used domain impersonation tactics to increase perceived legitimacy of their impersonations (T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0152.003: Website Hosting Platform, T0149.003: Lookalike Domain). |
| [I00110 How COVID-19 conspiracists and extremists use crowdfunding platforms to fund their activities](../../generated_pages/incidents/I00110.md) | The EU Disinfo Lab produced a report into disinformation published on crowdfunding platforms:<br><br><i>More worrisome is the direct monetisation of disinformation happening on crowdfunding platforms: on Kickstarter, we found a user openly raising money for a documentary project suggesting that COVID-19 is a conspiracy.</i><br><br>A Kickstarter user attempted to use the platform to fund production of a documentary (T0017: Conduct Fundraising, T0087: Develop Video-Based Content, T0146: Account Asset, T0148.006: Crowdfunding Platform).<br><br><i>On Patreon, we found several instances of direct monetisation of COVID-19 disinformation, including posts promoting a device allegedly protecting against COVID-19 and 5G, as well as posts related to the “Plandemic” conspiracy video, which gained attention on YouTube before being removed by the platform.<br><br>We also found an account called “Stranger than fiction” entirely dedicated to disinformation, which openly states that their content was “Banned by screwtube and fakebook, our videos have been viewed over a billion times.”</i><br><br>The “Stranger than fiction” user presented itself as an alternative news source which had been banned from other platforms (T0146: Account Asset, T0097.202: News Outlet Persona, T0121.001: Bypass Content Blocking, T0152.012: Subscription Service Platform).<br><br><i>On the US-based crowdfunding platform IndieGogo, EU DisinfoLab found a successful crowdfunding campaign of €133.903 for a book called Revolution Q. This book, now also available on Amazon, claims to be “Written for both newcomers and long-time QAnon fans alike, this book is a treasure-trove of information designed to help everyone weather The Storm.”</i><br><br>An IndieGogo account was used to gather funds to produce a book on QAnon (T0017: Conduct Fundraising, T0085.005: Develop Book, T0146: Account Asset, T0148.006: Crowdfunding Platform), with the book later sold on Amazon marketplace (T0148.007: eCommerce Platform). |
| [I00126 Charming Kitten Updates POWERSTAR with an InterPlanetary Twist](../../generated_pages/incidents/I00126.md) | <i>The target of the recently observed [highly targeted spearphishing attack by “Charming Kitten”, a hacker group attributed to Iran] had published an article related to Iran. The publicity appears to have garnered the attention of Charming Kitten, who subsequently created an email address to impersonate a reporter of an Israeli media organization in order to send the target an email. Prior to sending malware to the target, the attacker simply asked if the target would be open to reviewing a document they had written related to US foreign policy. The target agreed to do so, since this was not an unusual request; they are frequently asked by journalists to review opinion pieces relating to their field of work.<br><br>In an effort to further gain the targets confidence, Charming Kitten continued the interaction with another benign email containing a list of questions, to which the target then responded with answers. After multiple days of benign and seemingly legitimate interaction, Charming Kitten finally sent a “draft report”; this was the first time anything opaquely malicious occurred. The “draft report” was, in fact, a password-protected RAR file containing a malicious LNK file. The password for the RAR file was provided in a subsequent email.</i><br><br>In this example, threat actors created an email address on a domain which impersonated an existing Israeli news organisation impersonating a reporter who worked there (T0097.102: Journalist Persona, T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0149.003: Lookalike Domain, T0149.002: Email Domain Asset) in order to convince the target to download a document containing malware (T0085.004: Develop Document, T0147.003: Malware Asset). |


@@ -8,7 +8,7 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00084 Russia turns its diplomats into disinformation warriors](../../generated_pages/incidents/I00084.md) | <I>“[Russias social media] reach isn't the same as Russian state media, but they are trying to recreate what RT and Sputnik had done," said one EU official involved in tracking Russian disinformation. "It's a coordinated effort that goes beyond social media and involves specific websites."<br><br> “Central to that wider online playbook is a Telegram channel called Warfakes and an affiliated website. Since the beginning of the conflict, that social media channel has garnered more than 725,000 members and repeatedly shares alleged fact-checks aimed at debunking Ukrainian narratives, using language similar to Western-style fact-checking outlets.”</i><br><br> In this example a Telegram channel (T0151.004: Chat Platform, T0155.007: Encrypted Communication Channel) was established which presented itself as a source of fact checks (T0097.203: Fact Checking Organisation Persona). |
| [I00120 factcheckUK or fakecheckUK? Reinventing the political faction as the impartial factchecker](../../generated_pages/incidents/I00120.md) | Ahead of the 2019 UK Election during a leaders debate, the Conservative party rebranded their “Conservative Campaign Headquarters Press” account to “FactCheckUK”:<br><br><i>The evening of the 19th November 2019 saw the first of three Leaders Debates on ITV, starting at 8pm and lasting for an hour. Current Prime Minister and leader of the Conservatives, Boris Johnson faced off against Labour party leader, Jeremy Corbyn. Plenty of people will have been watching the debate live, but a good proportion were “watching” (er, “twitching”?) via Twitter. This is something Ive done in the past for certain shows. In some cases I just cant watch or listen, but I can read, and in other cases, the commentary is far more interesting and entertaining than the show itself will ever be. This, for me, is just such a case. But very quickly, all eyes turned upon a modestly sized account with the handle @CCHQPress. Thats short for Conservative Campaign Headquarters Press. According to their (current!) Twitter bio, they are based in Westminster and they provide “snippets of news and commentary from CCHQ” to their 75k followers.<br><br>That is, until a few minutes into the debate.<br><br>All at once, like a person throwing off their street clothes to reveal some sinister new identity underneath, @CCHQPress abruptly shed its name, blue Conservative logo, Boris Johnson banner, and bio description. Moments later, it had entirely reinvented itself.<br><br>The purple banner was emblazoned with white font that read “✓ factcheckUK [with a “FROM CCQH” subheading]”.<br><br>The matching profile picture was a white tick in a purple circle. The bio was updated to: “Fact checking Labour from CCHQ”. And the name now read factcheckUK, with the customary Twitter blue (or white depending on your phone settings!) validation tick still after it</i><br><br>In this example an existing verified social media account on Twitter was repurposed to inauthentically present itself as a Fact Checking service (T0151.008: Microblogging Platform, T0150.003: Pre-Existing Asset, T0146.003: Verified Account Asset, T0097.203: Fact Checking Organisation Persona, T0143.002: Fabricated Persona). |


@@ -7,7 +7,7 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00072 Behind the Dutch Terror Threat Video: The St. Petersburg "Troll Factory" Connection](../../generated_pages/incidents/I00072.md) | <I>“The creator of Geopolitika[.]ru is Aleksandr Dugin, who was sanctioned by the United States Department of Treasury in 2015 for his role in the Eurasian Youth Union “for being responsible for or complicit in actions or policies that threaten the peace, security, stability, or sovereignty or territorial integrity of Ukraine.”<br><br> [...]<br><br> “Currently, the website geopolika[.]ru redirects directly to another partner website, Katehon.<br><br> “Katehon poses itself as a think tank focused on geopolitics in an English edition of its website. In contrast, in Russian, it states its aim to develop “ideological, political, diplomatic, economic and military strategy for Russia of the future” with a special role of religion. The president of Katehons supervisory board is Konstantin Malofeev, a Russian millionaire with connections to the Russian orthodox church and presidential administration, who founded Tsargrad TV, a known source of disinformation. Malofeev was sanctioned by the U.S. Department of Treasury and the European Union in 2014 for material support and financial backing of Russian-backed separatists in eastern Ukraine. Another known figure from the board is Sergei Glaziev, former advisor to Putin in 20122019. Dugin is also on the board in the Russian edition of the website, whereas he is omitted in English.”</i><br><br>In this example a domain managed by an actor previously sanctioned by the US department of treasury has been reconfigured to redirect to another website; Katehon (T0149.004: Redirecting Domain Asset, T0150.004: Repurposed Asset).<br><br> Katehon presents itself as a geopolitical think tank in English, but does not maintain this persona when presenting itself to a Russian speaking audience (T0097.204: Think Tank Persona, T0152.004: Website Asset, T0155.004: Geoblocked Asset). |
| [I00074 The Tactics & Tropes of the Internet Research Agency](../../generated_pages/incidents/I00074.md) | <I>“[Russias Internet Research Agency, the IRA] pushed narratives with longform blog content. They created media properties, websites designed to produce stories that would resonate with those targeted. It appears, based on the data set provided by Alphabet, that the IRA may have also expanded into think tank-style communiques. One such page, previously unattributed to the IRA but included in the Alphabet data, was GI Analytics, a geopolitics blog with an international masthead that included American authors. This page was promoted via AdWords and YouTube videos; it has strong ties to more traditional Russian propaganda networks, which will be discussed later in this analysis. GI Analytics wrote articles articulating nuanced academic positions on a variety of sophisticated topics. From the sites About page:<br><br> ““Our purpose and mission are to provide high-quality analysis at a time when we are faced with a multitude of crises, a collapsing global economy, imperialist wars, environmental disasters, corporate greed, terrorism, deceit, GMO food, a migration crisis and a crackdown on small farmers and ranchers.””</i><br><br> In this example Alphabets technical indicators allowed them to assert that GI Analytics, which presented itself as a think tank, was a fabricated institution associated with Russias Internet Research Agency (T0097.204: Think Tank Persona, T0143.002: Fabricated Persona). |
| [I00078 Metas September 2020 Removal of Coordinated Inauthentic Behavior](../../generated_pages/incidents/I00078.md) | <i>“[Meta has] removed one Page, five Facebook accounts, one Group and three Instagram accounts for foreign or government interference which is coordinated inauthentic behavior on behalf of a foreign or government entity. This small network originated in Russia and focused primarily on Turkey and Europe, and also on the United States.<br><br> “This operation relied on fake accounts — some of which had been already detected and removed by our automated systems — to manage their Page and their Group, and to drive people to their site purporting to be an independent think-tank based primarily in Turkey. These accounts posed as locals based in Turkey, Canada and the US. They also recruited people to write for their website. This network had almost no following on our platforms when we removed it.”</i><br><br> Meta identified that a network of accounts originating in Russia were driving people off platform to a site which presented itself as a think-tank (T0097.204: Think Tank Persona). Meta did not make an attribution about the authenticity of this off-site think tank, so neither T0143.001: Authentic Persona or T0143.002: Fabricated Persona are used here.<br><br> Meta had access to technical data for accounts on its platform, and asserted that they were fabricated individuals posing as locals who recruited targets to write content for their website (T0097.101: Local Persona, T0097.106: Recruiter Persona, T0143.002: Fabricated Persona). |
| [I00079 Three thousand fake tanks](../../generated_pages/incidents/I00079.md) | <i>“The sixth [website to repost a confirmed false narrative investigated in this report] is an apparent think tank, the Center for Global Strategic Monitoring. This website describes itself, in English apparently written by a non-native speaker, as a “nonprofit and nonpartisan research and analysis institution dedicated to providing insights of the think tank community publications”. It does, indeed, publish think-tank reports on issues such as Turkey and US-China relations; however, the reports are the work of other think tanks, often unattributed (the two mentioned in this sentence were actually produced by the Brookings Institution, although the website makes no mention of the fact). It also fails to provide an address, or any other contact details other than an email, and its (long) list of experts includes entries apparently copied and pasted from other institutions. Thus, the “think tank” website which shared the fake story appears to be a fake itself.”</i><br><br>In this example a website which amplified a false narrative presented itself as a think tank (T0097.204: Think Tank Persona).<br><br> This is an entirely fabricated persona (T0143.002: Fabricated Persona); it republished content from other think tanks without attribution (T0084.002: Plagiarise Content) and fabricated experts (T0097.108: Expert Persona, T0143.002: Fabricated Persona) to make it more believable that they were a real think tank. |

View file

@ -9,7 +9,7 @@
| -------- | -------------------- |
| [I00070 Eli Lilly Clarifies Its Not Offering Free Insulin After Tweet From Fake Verified Account—As Chaos Unfolds On Twitter](../../generated_pages/incidents/I00070.md) | <i>“Twitter Blue launched [November 2022], giving any users who pay $8 a month the ability to be verified on the site, a feature previously only available to public figures, government officials and journalists as a way to show they are who they claim to be.<br><br> “[A day after the launch], an account with the handle @EliLillyandCo labeled itself with the name “Eli Lilly and Company,” and by using the same logo as the company in its profile picture and with the verification checkmark, was indistinguishable from the real company (the picture has since been removed and the account has labeled itself as a parody profile).<br><br> The parody account tweeted “we are excited to announce insulin is free now.””</i><br><br> In this example an account impersonated the pharmaceutical company Eli Lilly (T0097.205: Business Persona, T0143.003: Impersonated Persona) by copying its name, profile picture (T0145.001: Copy Account Imagery), and paying for verification. |
| [I00095 Meta: Chinese disinformation network was behind London front company recruiting content creators](../../generated_pages/incidents/I00095.md) | <i>“A Chinese disinformation network operating fictitious employee personas across the internet used a front company in London to recruit content creators and translators around the world, according to Meta.<br><br> “The operation used a company called London New Europe Media, registered to an address on the upmarket Kensington High Street, that attempted to recruit real people to help it produce content. It is not clear how many people it ultimately recruited.<br><br> “London New Europe Media also “tried to engage individuals to record English-language videos scripted by the network,” in one case leading to a recording criticizing the United States being posted on YouTube, said Meta”.</i><br><br> In this example a front company was used (T0097.205: Business Persona) to enable actors to recruit targets for producing content (T0097.106: Recruiter Persona, T0143.002: Fabricated Persona). |
| [I00116 Blue-tick scammers target consumers who complain on X](../../generated_pages/incidents/I00116.md) | <i>Consumers who complain of poor customer service on X are being targeted by scammers after the social media platform formerly known as Twitter changed its account verification process.<br><br>Bank customers and airline passengers are among those at risk of phishing scams when they complain to companies via X. Fraudsters, masquerading as customer service agents, respond under fake X handles and trick victims into disclosing their bank details to get a promised refund.<br><br>They typically win the trust of victims by displaying the blue checkmark icon, which until this year denoted accounts that had been officially verified by X.<br><br>Changes introduced this year allow the icon to be bought by anyone who pays an £11 monthly fee for the site's subscription service, renamed this month from Twitter Blue to X Premium. Businesses that pay £950 a month receive a gold tick. X's terms and conditions do not state whether subscriber accounts are pre-vetted.<br><br>Andrew Thomas was contacted by a scam account after posting a complaint to the travel platform Booking.com. “I'd been trying since April to get a refund after our holiday flights were cancelled and finally resorted to X,” he said.<br><br>“I received a response asking me to follow them, and DM [direct message] them with a contact number. They then called me via WhatsApp asking for my reference number so they could investigate. Later they called back to say that I would be refunded via their payment partner for which I'd need to download an app.”<br><br>Thomas became suspicious and checked the X profile. “It looked like the real thing, but I noticed that there was an unexpected hyphen in the Twitter handle and that it had only joined X in July 2023,” he said.</i><br><br>In this example a paid account was newly created on X and used to direct users to other platforms (T0146.002: Paid Account, T0146.003: Verified Account, T0146.005: Lookalike Account ID, T0097.205: Business Persona, T0122: Direct Users to Alternative Platforms, T0143.003: Impersonated Persona, T0151.008: Microblogging Platform, T0150.001: Newly Created). |
| [I00116 Blue-tick scammers target consumers who complain on X](../../generated_pages/incidents/I00116.md) | <i>Consumers who complain of poor customer service on X are being targeted by scammers after the social media platform formerly known as Twitter changed its account verification process.<br><br>Bank customers and airline passengers are among those at risk of phishing scams when they complain to companies via X. Fraudsters, masquerading as customer service agents, respond under fake X handles and trick victims into disclosing their bank details to get a promised refund.<br><br>They typically win the trust of victims by displaying the blue checkmark icon, which until this year denoted accounts that had been officially verified by X.<br><br>Changes introduced this year allow the icon to be bought by anyone who pays an £11 monthly fee for the site's subscription service, renamed this month from Twitter Blue to X Premium. Businesses that pay £950 a month receive a gold tick. X's terms and conditions do not state whether subscriber accounts are pre-vetted.<br><br>Andrew Thomas was contacted by a scam account after posting a complaint to the travel platform Booking.com. “I'd been trying since April to get a refund after our holiday flights were cancelled and finally resorted to X,” he said.<br><br>“I received a response asking me to follow them, and DM [direct message] them with a contact number. They then called me via WhatsApp asking for my reference number so they could investigate. Later they called back to say that I would be refunded via their payment partner for which I'd need to download an app.”<br><br>Thomas became suspicious and checked the X profile. “It looked like the real thing, but I noticed that there was an unexpected hyphen in the Twitter handle and that it had only joined X in July 2023,” he said.</i><br><br>In this example a paid account was newly created on X and used to direct users to other platforms (T0146.002: Paid Account Asset, T0146.003: Verified Account Asset, T0146.005: Lookalike Account ID, T0097.205: Business Persona, T0122: Direct Users to Alternative Platforms, T0143.003: Impersonated Persona, T0151.008: Microblogging Platform, T0150.001: Newly Created Asset). |
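The lookalike handle described above (T0146.005: Lookalike Account ID) differed from the genuine one only by an inserted hyphen. A minimal sketch of how such handles might be flagged programmatically; the function name and the normalisation rules are illustrative assumptions, not part of the incident report:

```python
import re

def lookalike_score(official: str, suspect: str) -> bool:
    """Heuristic: flag a handle that collapses to the same string as an
    official handle once separators and common digit/letter substitutions
    are stripped, yet is not literally the same handle."""
    def normalise(handle: str) -> str:
        h = handle.lower().lstrip("@")
        h = re.sub(r"[-_.]", "", h)                  # drop separator padding
        h = h.translate(str.maketrans("01", "ol"))   # 0->o, 1->l swaps
        return h
    return (normalise(official) == normalise(suspect)
            and official.lower() != suspect.lower())

# A hyphenated copy of a hypothetical official handle is flagged;
# the official handle itself, and unrelated handles, are not.
print(lookalike_score("@BookingCom", "@Booking-Com"))  # → True
print(lookalike_score("@BookingCom", "@BookingCom"))   # → False
```

A production detector would also consider homoglyphs (e.g. the `l`/`I` swap seen in typosquatted domains) and edit distance, but the normalise-and-compare step above captures the hyphen trick used in this incident.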

View file

@ -8,7 +8,7 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00069 Uncharmed: Untangling Iran's APT42 Operations](../../generated_pages/incidents/I00069.md) | <I>“[Iranian state-sponsored cyber espionage actor] APT42 cloud operations attack lifecycle can be described in details as follows:<br> <br>- “Social engineering schemes involving decoys and trust building, which includes masquerading as legitimate NGOs and conducting ongoing correspondence with the target, sometimes lasting several weeks. <br>- The threat actor masqueraded as well-known international organizations in the legal and NGO fields and sent emails from domains typosquatting the original NGO domains, for example aspenlnstitute[.]org. <br>- The Aspen Institute became aware of this spoofed domain and collaborated with industry partners, including blocking it in SafeBrowsing, thus protecting users of Google Chrome and additional browsers. <br>- To increase their credibility, APT42 impersonated high-ranking personnel working at the aforementioned organizations when creating the email personas. <br>- APT42 enhanced their campaign credibility by using decoy material inviting targets to legitimate and relevant events and conferences. In one instance, the decoy material was hosted on an attacker-controlled SharePoint folder, accessible only after the victim entered their credentials. Mandiant did not identify malicious elements in the files, suggesting they were used solely to gain the victims trust.”</I><br><br> In this example APT42, an Iranian state-sponsored cyber espionage actor, created a domain impersonating the existing NGO The Aspen Institute (T0143.003: Impersonated Persona, T0097.207: NGO Persona). They increased the perceived legitimacy of the impersonation by also impersonating high-ranking employees of the NGO (T0097.100: Individual Persona, T0143.003: Impersonated Persona). |
| [I00109 Coordinated Facebook Pages Designed to Fund a White Supremacist Agenda](../../generated_pages/incidents/I00109.md) | This report examines the white nationalist group Suavelos' use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at technical indicators associated with the Suavelos website, and attributions which can be made as a consequence:<i><br><br>[The Google AdSense tag set up on Suavelos.eu was also found on the following domains, indicating that they are controlled by the same actor;] Alabastro.eu: an online shop to buy “white nationalists” t-shirts [and] ARPAC.eu: the website of a registered non-profit organisation advocating to lift regulation on gun control in France.<br><br>Other domains attributed to Suavelos (T0149.001: Domain) reveal a website set up to sell merchandise (T0152.004: Website, T0148.004: Payment Processing Capability, T0061: Sell Merchandise), and a website hosting a registered French non-profit (T0152.004: Website, T0097.207: NGO Persona).<br><br>To learn more about the suavelos.eu domain, we collected the following data: The domain is hosted on OVH; The owner's identity is protected; The IP Address of the server is 94.23.253.173, which is shared with 20 other domains. <br><br>The relative low number of websites hosted on this IP address could indicate that they all belong to the same people, and are hosted on the same private server.</i><br><br>Suavelos registered a domain using the web hosting provider OVH (T0149.001: Domain, T0152.003: Website Hosting Platform, T0150.006: Purchased). The site's IP address reveals a server hosting other domains potentially owned by the actors (T0149.005: Server, T0149.006: IP Address). |
| [I00109 Coordinated Facebook Pages Designed to Fund a White Supremacist Agenda](../../generated_pages/incidents/I00109.md) | This report examines the white nationalist group Suavelos' use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at technical indicators associated with the Suavelos website, and attributions which can be made as a consequence:<i><br><br>[The Google AdSense tag set up on Suavelos.eu was also found on the following domains, indicating that they are controlled by the same actor;] Alabastro.eu: an online shop to buy “white nationalists” t-shirts [and] ARPAC.eu: the website of a registered non-profit organisation advocating to lift regulation on gun control in France.<br><br>Other domains attributed to Suavelos (T0149.001: Domain Asset) reveal a website set up to sell merchandise (T0152.004: Website Asset, T0148.004: Payment Processing Capability, T0061: Sell Merchandise), and a website hosting a registered French non-profit (T0152.004: Website Asset, T0097.207: NGO Persona).<br><br>To learn more about the suavelos.eu domain, we collected the following data: The domain is hosted on OVH; The owner's identity is protected; The IP Address of the server is 94.23.253.173, which is shared with 20 other domains. <br><br>The relative low number of websites hosted on this IP address could indicate that they all belong to the same people, and are hosted on the same private server.</i><br><br>Suavelos registered a domain using the web hosting provider OVH (T0149.001: Domain Asset, T0152.003: Website Hosting Platform, T0150.006: Purchased). The site's IP address reveals a server hosting other domains potentially owned by the actors (T0149.005: Server Asset, T0149.006: IP Address Asset). |
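The attribution technique in the report above hinges on a shared Google AdSense tag: a publisher's `pub-` ID appears verbatim in the ad code of every page it monetises, so domains embedding the same ID are likely under common control. A minimal sketch of that grouping step; the page snippets and helper name are hypothetical, not taken from the report:

```python
import re
from collections import defaultdict

# AdSense publisher IDs take the form pub- followed by 16 digits.
ADSENSE_ID = re.compile(r"pub-\d{16}")

def group_by_adsense(pages: dict) -> dict:
    """Map each AdSense publisher ID found in page HTML to the set of
    domains embedding it. Domains sharing an ID warrant closer review."""
    groups = defaultdict(set)
    for domain, html in pages.items():
        for pub_id in ADSENSE_ID.findall(html):
            groups[pub_id].add(domain)
    return dict(groups)

# Hypothetical page sources: two domains share one publisher ID.
pages = {
    "suavelos.eu": '<script data-ad-client="ca-pub-1234567890123456">',
    "alabastro.eu": '<ins class="adsbygoogle" data-ad-client="ca-pub-1234567890123456">',
    "unrelated.example": '<script data-ad-client="ca-pub-9999999999999999">',
}
print(group_by_adsense(pages))
```

The same group-by-shared-identifier pattern applies to the report's reverse-IP observation: substituting resolved IP addresses for AdSense IDs clusters domains served from the same host.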

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View file

@ -7,9 +7,9 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00096 China ramps up use of AI misinformation](../../generated_pages/incidents/I00096.md) | The Microsoft Threat Analysis Centre (MTAC) published a report documenting the use of AI by pro-Chinese threat actors:<br><br><i>On 13 January, Spamouflage [(a Pro-Chinese Communist Party actor)] posted audio clips to YouTube of independent candidate [for Taiwan's Jan 2024 presidential election] Terry Gou who also founded electronics giant Foxconn in which Gou endorsed another candidate in the race. This clip was almost certainly AI-generated, and it was swiftly removed by YouTube. A fake letter purporting to be from Gou, endorsing the same candidate, had already circulated Gou had of course made no such endorsement.</i><br><br>Here Spamouflage used an account on YouTube to post AI-generated audio impersonating an electoral candidate (T0146: Account, T0152.006: Video Platform, T0115: Post Content, T0088.001: Develop AI-Generated Audio (Deepfakes), T0143.003: Impersonated Persona, T0097.110: Party Official Persona).<br><br><i>Spamouflage also exploited AI-powered video platform CapCut which is owned by TikTok backers ByteDance to generate fake news anchors which were used in a variety of campaigns targeting the various presidential candidates in Taiwan.</i><br><br>Spamouflage created accounts on CapCut, which it used to create AI-generated videos of fabricated news anchors (T0146: Account, T0154.002: AI Media Platform, T0087.001: Develop AI-Generated Video (Deepfakes), T0143.002: Fabricated Persona, T0097.102: Journalist Persona). |
| [I00096 China ramps up use of AI misinformation](../../generated_pages/incidents/I00096.md) | The Microsoft Threat Analysis Centre (MTAC) published a report documenting the use of AI by pro-Chinese threat actors:<br><br><i>On 13 January, Spamouflage [(a Pro-Chinese Communist Party actor)] posted audio clips to YouTube of independent candidate [for Taiwan's Jan 2024 presidential election] Terry Gou who also founded electronics giant Foxconn in which Gou endorsed another candidate in the race. This clip was almost certainly AI-generated, and it was swiftly removed by YouTube. A fake letter purporting to be from Gou, endorsing the same candidate, had already circulated Gou had of course made no such endorsement.</i><br><br>Here Spamouflage used an account on YouTube to post AI-generated audio impersonating an electoral candidate (T0146: Account Asset, T0152.006: Video Platform, T0115: Post Content, T0088.001: Develop AI-Generated Audio (Deepfakes), T0143.003: Impersonated Persona, T0097.110: Party Official Persona).<br><br><i>Spamouflage also exploited AI-powered video platform CapCut which is owned by TikTok backers ByteDance to generate fake news anchors which were used in a variety of campaigns targeting the various presidential candidates in Taiwan.</i><br><br>Spamouflage created accounts on CapCut, which it used to create AI-generated videos of fabricated news anchors (T0146: Account Asset, T0154.002: AI Media Platform, T0087.001: Develop AI-Generated Video (Deepfakes), T0143.002: Fabricated Persona, T0097.102: Journalist Persona). |
| [I00104 Macron Campaign Hit With “Massive and Coordinated” Hacking Attack](../../generated_pages/incidents/I00104.md) | <i>A massive trove of documents purporting to contain thousands of emails and other files from the [2017 presidential] campaign of Emmanuel Macron—the French centrist candidate squaring off against right-wing nationalist Marine Le Pen—was posted on the internet Friday afternoon. The Macron campaign says that at least some of the documents are fake. The document dump came just over a day before voting is set to begin in the final round of the election and mere hours before candidates are legally required to stop campaigning.<br><br>At about 2:35 p.m. ET, a post appeared on the 4chan online message board announcing the leak. The documents appear to include emails, internal memos, and screenshots of purported banking records.<br><br>“In this pastebin are links to torrents of emails between Macron, his team and other officials, politicians as well as original documents and photos,” the anonymous 4chan poster wrote. “This was passed on to me today so now I am giving it to you, the people. The leak is massvie and released in the hopes that the human search engine here will be able to start sifting through the contents and figure out exactly what we have here.”<br><br>The Macron campaign issued a statement Friday night saying it was the victim of a “massive and coordinated” hacking attack. That campaign said the leak included some fake documents that were intended “to sow doubt and misinformation.”</i><br><br>Actors posted to 4chan a link (T0151.012: Image Board Platform, T0146.006: Open Access Platform, T0115: Post Content, T0122: Direct Users to Alternative Platforms) to text content hosted on pastebin (T0152.005: Paste Platform, T0146.006: Open Access Platform, T0115: Post Content), which contained links to download stolen and fabricated documents. |
| [I00118 War Thunder players are once again leaking sensitive military technology information on a video game forum](../../generated_pages/incidents/I00118.md) | <i>In an effort to prove that the developers behind a popular multiplayer vehicle combat game had made a mistake, a player went ahead and published classified British military documents about one of the real-life tanks featured in the game.<br><br>This truly bizarre turn of events recently occurred in the public forum for War Thunder, a free-to-play multiplayer combat sim featuring modern land, air, and sea craft. Getting a small detail wrong on a piece of equipment might not be a big deal for the average gamer, but for the War Thunder crowd it sure as hell is. With 25,000 devoted players, the game very much bills itself as the military vehicle combat simulator.<br><br>A player, who identified himself as a British tank commander, claimed that the game's developers at Gaijin Entertainment had inaccurately represented the Challenger 2 main battle tank used by the British military.<br><br>The self-described tank commander's bio listed his location as Tidworth Camp in Wiltshire, England, according to the UK Defense Journal, which reported that the base is home to the Royal Tank Regiment, which fields Challenger 2 tanks.<br><br>The player, who went by the handle Pyrophoric, reportedly shared an image on the War Thunder forum of the tank's specs that were pulled from the Challenger 2's Army Equipment Support Publication, which is essentially a technical manual.<br><br>[...]<br><br>A moderator for the forum, whose handle is “Templar_”, explained that the developer had removed the material after they received confirmation from the Ministry of Defense that the document is still in fact classified.</i><br><br>A user of War Thunder's forums posted confidential documents to win an argument (T0089.001: Obtain Authentic Documents, T0146: Account, T0097.105: Military Personnel Persona, T0115: Post Content, T0143.001: Authentic Persona, T0151.009: Legacy Online Forum Platform). |
| [I00118 War Thunder players are once again leaking sensitive military technology information on a video game forum](../../generated_pages/incidents/I00118.md) | <i>In an effort to prove that the developers behind a popular multiplayer vehicle combat game had made a mistake, a player went ahead and published classified British military documents about one of the real-life tanks featured in the game.<br><br>This truly bizarre turn of events recently occurred in the public forum for War Thunder, a free-to-play multiplayer combat sim featuring modern land, air, and sea craft. Getting a small detail wrong on a piece of equipment might not be a big deal for the average gamer, but for the War Thunder crowd it sure as hell is. With 25,000 devoted players, the game very much bills itself as the military vehicle combat simulator.<br><br>A player, who identified himself as a British tank commander, claimed that the game's developers at Gaijin Entertainment had inaccurately represented the Challenger 2 main battle tank used by the British military.<br><br>The self-described tank commander's bio listed his location as Tidworth Camp in Wiltshire, England, according to the UK Defense Journal, which reported that the base is home to the Royal Tank Regiment, which fields Challenger 2 tanks.<br><br>The player, who went by the handle Pyrophoric, reportedly shared an image on the War Thunder forum of the tank's specs that were pulled from the Challenger 2's Army Equipment Support Publication, which is essentially a technical manual.<br><br>[...]<br><br>A moderator for the forum, whose handle is “Templar_”, explained that the developer had removed the material after they received confirmation from the Ministry of Defense that the document is still in fact classified.</i><br><br>A user of War Thunder's forums posted confidential documents to win an argument (T0089.001: Obtain Authentic Documents, T0146: Account Asset, T0097.105: Military Personnel Persona, T0115: Post Content, T0143.001: Authentic Persona, T0151.009: Legacy Online Forum Platform). |

View file

@ -7,8 +7,8 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00110 How COVID-19 conspiracists and extremists use crowdfunding platforms to fund their activities](../../generated_pages/incidents/I00110.md) | The EU Disinfo Lab produced a report into disinformation published on crowdfunding platforms:<br><br><i>More worrisome is the direct monetisation of disinformation happening on crowdfunding platforms: on Kickstarter, we found a user openly raising money for a documentary project suggesting that COVID-19 is a conspiracy.</i><br><br>A Kickstarter user attempted to use the platform to fund production of a documentary (T0017: Conduct Fundraising, T0087: Develop Video-Based Content, T0146: Account, T0148.006: Crowdfunding Platform).<br><br><i>On Patreon, we found several instances of direct monetisation of COVID-19 disinformation, including posts promoting a device allegedly protecting against COVID-19 and 5G, as well as posts related to the “Plandemic” conspiracy video, which gained attention on YouTube before being removed by the platform.<br><br>We also found an account called “Stranger than fiction” entirely dedicated to disinformation, which openly states that their content was “Banned by screwtube and fakebook, our videos have been viewed over a billion times.”</i><br><br>The “Stranger than fiction” user presented itself as an alternative news source which had been banned from other platforms (T0146: Account, T0097.202: News Outlet Persona, T0121.001: Bypass Content Blocking, T0152.012: Subscription Service Platform).<br><br><i>On the US-based crowdfunding platform IndieGogo, EU DisinfoLab found a successful crowdfunding campaign of €133.903 for a book called Revolution Q. This book, now also available on Amazon, claims to be “Written for both newcomers and long-time QAnon fans alike, this book is a treasure-trove of information designed to help everyone weather The Storm.”</i><br><br>An IndieGogo account was used to gather funds to produce a book on QAnon (T0017: Conduct Fundraising, T0085.005: Develop Book, T0146: Account, T0148.006: Crowdfunding Platform), with the book later sold on Amazon marketplace (T0148.007: eCommerce Platform). |
| [I00112 Patreon allows disinformation and conspiracies to be monetised in Spain](../../generated_pages/incidents/I00112.md) | In this report EU DisinfoLab identified 17 Spanish accounts that monetise and/or spread QAnon content or other conspiracy theories on Patreon. Content produced by these accounts goes against Patreon's stated policy of removing creators that “advance disinformation promoting the QAnon conspiracy theory”. EU DisinfoLab found:<br><br><i>In most cases, the creators monetise the content directly on Patreon (posts are only accessible for people sponsoring the creators) but there are also cases of indirect monetization (monetization through links leading to other platforms), an aspect that was flagged and analysed by Eu DisinfoLab in the mentioned previous report.<br><br>Some creators display links that redirects users to other platforms such as YouTube or LBRY where they can monetise their content. Some even offer almost all of their videos for free by redirecting to their YouTube channel.<br><br>Another modus operandi is for the creators to advertise on Patreon that they are looking for financing through PayPal or provide the author's email to explore other financing alternatives.<br><br>[...]<br><br>Sometimes the content offered for a fee on Patreon is freely accessible on other platforms. Creators openly explain that they seek voluntary donation on Patreon, but that their creations will be public on YouTube. This means that the model of these platforms does not always involve a direct monetisation of the content. Creators who have built a strong reputation previously on other platforms can use Patreon as a platform to get some sponsorship which is not related to the content and give them more freedom to create.<br><br>Some users explicitly claim to use Patreon as a secondary channel to evade “censorship” by other platforms such as YouTube. Patreon seems to be perceived as a safe haven for disinformation and fringe content that has been suppressed for violating the policies of other platforms. For example, Jorge Guerra points out how Patreon acts as a back-up channel to go to in case of censorship by YouTube. Meanwhile, Alfa Mind openly claims to use Patreon to publish content that is not allowed on YouTube. “Exclusive access to videos that are prohibited on YouTube. These videos are only accessible on Patreon, as their content is very explicit and shocking”, are offered to the patrons who pay €3 per month.</i><br><br>In spite of Patreon's stated policy, actors use accounts on their platform to generate revenue or donations for their content, and to provide a space to host content which was removed from other platforms (T0146: Account, T0152.012: Subscription Service Platform, T0121.001: Bypass Content Blocking).<br><br>Some actors were observed accepting donations via PayPal (T0146: Account, T0148.003: Payment Processing Platform). |
| [I00110 How COVID-19 conspiracists and extremists use crowdfunding platforms to fund their activities](../../generated_pages/incidents/I00110.md) | The EU Disinfo Lab produced a report into disinformation published on crowdfunding platforms:<br><br><i>More worrisome is the direct monetisation of disinformation happening on crowdfunding platforms: on Kickstarter, we found a user openly raising money for a documentary project suggesting that COVID-19 is a conspiracy.</i><br><br>A Kickstarter user attempted to use the platform to fund production of a documentary (T0017: Conduct Fundraising, T0087: Develop Video-Based Content, T0146: Account Asset, T0148.006: Crowdfunding Platform).<br><br><i>On Patreon, we found several instances of direct monetisation of COVID-19 disinformation, including posts promoting a device allegedly protecting against COVID-19 and 5G, as well as posts related to the “Plandemic” conspiracy video, which gained attention on YouTube before being removed by the platform.<br><br>We also found an account called “Stranger than fiction” entirely dedicated to disinformation, which openly states that their content was “Banned by screwtube and fakebook, our videos have been viewed over a billion times.”</i><br><br>The “Stranger than fiction” user presented itself as an alternative news source which had been banned from other platforms (T0146: Account Asset, T0097.202: News Outlet Persona, T0121.001: Bypass Content Blocking, T0152.012: Subscription Service Platform).<br><br><i>On the US-based crowdfunding platform IndieGogo, EU DisinfoLab found a successful crowdfunding campaign of €133.903 for a book called Revolution Q. This book, now also available on Amazon, claims to be “Written for both newcomers and long-time QAnon fans alike, this book is a treasure-trove of information designed to help everyone weather The Storm.”</i><br><br>An IndieGogo account was used to gather funds to produce a book on QAnon (T0017: Conduct Fundraising, T0085.005: Develop Book, T0146: Account Asset, T0148.006: Crowdfunding Platform), with the book later sold on Amazon marketplace (T0148.007: eCommerce Platform). |
| [I00112 Patreon allows disinformation and conspiracies to be monetised in Spain](../../generated_pages/incidents/I00112.md) | In this report EU DisinfoLab identified 17 Spanish accounts that monetise and/or spread QAnon content or other conspiracy theories on Patreon. Content produced by these accounts goes against Patreon's stated policy of removing creators that “advance disinformation promoting the QAnon conspiracy theory”. EU DisinfoLab found:<br><br><i>In most cases, the creators monetise the content directly on Patreon (posts are only accessible for people sponsoring the creators) but there are also cases of indirect monetization (monetization through links leading to other platforms), an aspect that was flagged and analysed by Eu DisinfoLab in the mentioned previous report.<br><br>Some creators display links that redirects users to other platforms such as YouTube or LBRY where they can monetise their content. Some even offer almost all of their videos for free by redirecting to their YouTube channel.<br><br>Another modus operandi is for the creators to advertise on Patreon that they are looking for financing through PayPal or provide the author's email to explore other financing alternatives.<br><br>[...]<br><br>Sometimes the content offered for a fee on Patreon is freely accessible on other platforms. Creators openly explain that they seek voluntary donation on Patreon, but that their creations will be public on YouTube. This means that the model of these platforms does not always involve a direct monetisation of the content. Creators who have built a strong reputation previously on other platforms can use Patreon as a platform to get some sponsorship which is not related to the content and give them more freedom to create.<br><br>Some users explicitly claim to use Patreon as a secondary channel to evade “censorship” by other platforms such as YouTube. Patreon seems to be perceived as a safe haven for disinformation and fringe content that has been suppressed for violating the policies of other platforms. For example, Jorge Guerra points out how Patreon acts as a back-up channel to go to in case of censorship by YouTube. Meanwhile, Alfa Mind openly claims to use Patreon to publish content that is not allowed on YouTube. “Exclusive access to videos that are prohibited on YouTube. These videos are only accessible on Patreon, as their content is very explicit and shocking”, are offered to the patrons who pay €3 per month.</i><br><br>In spite of Patreon's stated policy, actors use accounts on their platform to generate revenue or donations for their content, and to provide a space to host content which was removed from other platforms (T0146: Account Asset, T0152.012: Subscription Service Platform, T0121.001: Bypass Content Blocking).<br><br>Some actors were observed accepting donations via PayPal (T0146: Account Asset, T0148.003: Payment Processing Platform). |

View file

@ -7,7 +7,7 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00113 Inside the Shadowy World of Disinformation for Hire in Kenya](../../generated_pages/incidents/I00113.md) | Researchers at Mozilla examined influence operations targeting Kenyan citizens on Twitter in 2021, providing “a grim window into the booming and shadowy industry of Twitter influencers for political hire here in Kenya”, and giving insight into the operations' operationalisation:<br><br><i>In our interviews with one of the influencers, they informed us of the agile tactics they use to organize and avoid detection. For example, when it's time to carry out the campaign the influencers would be added to a Whatsapp group. Here, they received direction about what to post, the hashtags to use, which tweets to engage with and who to target. Synchronizing the tweets was also incredibly important for them. It's what enables them to achieve their goal of trending on Twitter and gain amplification.<br><br>[...]<br><br>They revealed to us that those participating in the exercise are paid roughly between $10 and $15 to participate in three campaigns per day. Each campaign execution involves tweeting about the hashtags of the day until it appears on the trending section of Twitter. Additionally, some individuals have managed to reach retainer level and get paid about $250 per month. Their job is to make sure the campaigns are executed on a day-by-day basis with different hashtags.</i><br><br>An M-PESA account (T0148.002: Bank Account, T0148.001: Online Banking Platform) was used to pay campaign participants.<br><br>Participants were organised in WhatsApp groups (T0129.005: Coordinate on Encrypted/Closed Networks, T0151.007: Chat Broadcast Group, T0151.004: Chat Platform), in which they planned how to get campaign content trending on Twitter (T0121: Manipulate Platform Algorithm, T0151.008: Microblogging Platform). |
| [I00113 Inside the Shadowy World of Disinformation for Hire in Kenya](../../generated_pages/incidents/I00113.md) | Researchers at Mozilla examined influence operations targeting Kenyan citizens on Twitter in 2021, providing “a grim window into the booming and shadowy industry of Twitter influencers for political hire here in Kenya”, and giving insight into operations operationalisation:<br><br><i>In our interviews with one of the influencers, they informed us of the agile tactics they use to organize and avoid detection. For example, when its time to carry out the campaign the influencers would be added to a Whatsapp group. Here, they received direction about what to post, the hashtags to use, which tweets to engage with and who to target. Synchronizing the tweets was also incredibly important for them. Its what enables them to achieve their goal of trending on Twitter and gain amplification.<br><br>[...]<br><br>They revealed to us that those participating in the exercise are paid roughly between $10 and $15 to participate in three campaigns per day. Each campaign execution involves tweeting about the hashtags of the day until it appears on the trending section of Twitter. Additionally, some individuals have managed to reach retainer level and get paid about $250 per month. Their job is to make sure the campaigns are executed on a day-by-day basis with different hashtags.</i><br><br>An M-PESA account (T0148.002: Bank Account Asset, T0148.001: Online Banking Platform) was used to pay campaign participants.<br><br>Participants were organised in WhatsApp groups (T0129.005: Coordinate on Encrypted/Closed Networks T0151.007: Chat Broadcast Group, T0151.004: Chat Platform), in which they planned how to get campaign content trending on Twitter (T0121: Manipulate Platform Algorithm, T0151.008: Microblogging Platform). |

File diff suppressed because one or more lines are too long

View file

@ -7,7 +7,7 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00101 Pro-Putin Disinformation Warriors Take War of Aggression to Reddit](../../generated_pages/incidents/I00101.md) | This report looks at changes in content posted to communities on Reddit (called Subreddits) after teams of voluntary moderators are replaced with what appear to be pro-Russian voices:<br><br><i>The r/antiwar subreddit appears to be a very recent takeover target. With 12,900 members it is not the largest community on Reddit, but this does place it squarely within the top 5% of all communities in terms of membership.<br><br>Three months ago a new moderator team was instated by subreddit head u/democracy101. Any posts documenting Russian aggression in Ukraine are now swiftly removed, while the board has been flooded with posts about how Ukraine is losing, or how American “neocons wrecked” the country.<br><br>The pinned post from moderator u/n0ahbody proclaims: “People who call for an end to Russian aggressions but not the Western aggressions Russia is reacting to don't really want peace.” This user takes the view that any negative opinion about Russia is “shaped by what the fanatically Russophobic MSM wants you to think,” and that the United States is not threatened by its neighbors. Russia is.”<br><br>When u/n0ahbody took over the sub, the user posted a triumphant and vitriolic diatribe in another pro-Russia subreddit with some 33,500 members, r/EndlessWar. “We are making progress. We are purging the sub of all NAFO and NAFO-adjacent elements. Hundreds of them have been banned over the last 24 hours for various rule infractions, for being NAFO or NAFO-adjacent,” the user said, referencing the grassroots, pro-Ukrainian North Atlantic Fella Organization (NAFO) meme movement.<br><br>Several former users have reported they have indeed been banned from r/antiwar since the change in moderators. 
“If this subreddit cannot be explicitly against the invasion of Ukraine it will never truly be anti-war,” wrote one user Halcyon_Rein, in the antiwar subreddit on September 6. They then edited the post to say, “Edit: btw, I got f**king banned for this 💀💀💀”</i><br><br>A community hosted on Reddit was taken over by new moderators (T0151.011: Community Sub-Forum, T0150.005: Compromised). These moderators removed content posted to the community which favoured Ukraine over Russia (T0146.004: Administrator Account, T0151.011: Community Sub-Forum, T0124: Suppress Opposition). |
| [I00101 Pro-Putin Disinformation Warriors Take War of Aggression to Reddit](../../generated_pages/incidents/I00101.md) | This report looks at changes in content posted to communities on Reddit (called Subreddits) after teams of voluntary moderators are replaced with what appear to be pro-Russian voices:<br><br><i>The r/antiwar subreddit appears to be a very recent takeover target. With 12,900 members it is not the largest community on Reddit, but this does place it squarely within the top 5% of all communities in terms of membership.<br><br>Three months ago a new moderator team was instated by subreddit head u/democracy101. Any posts documenting Russian aggression in Ukraine are now swiftly removed, while the board has been flooded with posts about how Ukraine is losing, or how American “neocons wrecked” the country.<br><br>The pinned post from moderator u/n0ahbody proclaims: “People who call for an end to Russian aggressions but not the Western aggressions Russia is reacting to don't really want peace.” This user takes the view that any negative opinion about Russia is “shaped by what the fanatically Russophobic MSM wants you to think,” and that the United States is not threatened by its neighbors. Russia is.”<br><br>When u/n0ahbody took over the sub, the user posted a triumphant and vitriolic diatribe in another pro-Russia subreddit with some 33,500 members, r/EndlessWar. “We are making progress. We are purging the sub of all NAFO and NAFO-adjacent elements. Hundreds of them have been banned over the last 24 hours for various rule infractions, for being NAFO or NAFO-adjacent,” the user said, referencing the grassroots, pro-Ukrainian North Atlantic Fella Organization (NAFO) meme movement.<br><br>Several former users have reported they have indeed been banned from r/antiwar since the change in moderators. 
“If this subreddit cannot be explicitly against the invasion of Ukraine it will never truly be anti-war,” wrote one user Halcyon_Rein, in the antiwar subreddit on September 6. They then edited the post to say, “Edit: btw, I got f**king banned for this 💀💀💀”</i><br><br>A community hosted on Reddit was taken over by new moderators (T0151.011: Community Sub-Forum, T0150.005: Compromised Asset). These moderators removed content posted to the community which favoured Ukraine over Russia (T0146.004: Administrator Account Asset, T0151.011: Community Sub-Forum, T0124: Suppress Opposition). |

View file

@ -7,7 +7,7 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00113 Inside the Shadowy World of Disinformation for Hire in Kenya](../../generated_pages/incidents/I00113.md) | Researchers at Mozilla examined influence operations targeting Kenyan citizens on Twitter in 2021, providing “a grim window into the booming and shadowy industry of Twitter influencers for political hire here in Kenya”, and giving insight into the operation's operationalisation:<br><br><i>In our interviews with one of the influencers, they informed us of the agile tactics they use to organize and avoid detection. For example, when it's time to carry out the campaign the influencers would be added to a Whatsapp group. Here, they received direction about what to post, the hashtags to use, which tweets to engage with and who to target. Synchronizing the tweets was also incredibly important for them. It's what enables them to achieve their goal of trending on Twitter and gain amplification.<br><br>[...]<br><br>They revealed to us that those participating in the exercise are paid roughly between $10 and $15 to participate in three campaigns per day. Each campaign execution involves tweeting about the hashtags of the day until it appears on the trending section of Twitter. Additionally, some individuals have managed to reach retainer level and get paid about $250 per month. Their job is to make sure the campaigns are executed on a day-by-day basis with different hashtags.</i><br><br>An M-PESA account (T0148.002: Bank Account, T0148.001: Online Banking Platform) was used to pay campaign participants.<br><br>Participants were organised in WhatsApp groups (T0129.005: Coordinate on Encrypted/Closed Networks, T0151.007: Chat Broadcast Group, T0151.004: Chat Platform), in which they planned how to get campaign content trending on Twitter (T0121: Manipulate Platform Algorithm, T0151.008: Microblogging Platform). |
| [I00113 Inside the Shadowy World of Disinformation for Hire in Kenya](../../generated_pages/incidents/I00113.md) | Researchers at Mozilla examined influence operations targeting Kenyan citizens on Twitter in 2021, providing “a grim window into the booming and shadowy industry of Twitter influencers for political hire here in Kenya”, and giving insight into the operation's operationalisation:<br><br><i>In our interviews with one of the influencers, they informed us of the agile tactics they use to organize and avoid detection. For example, when it's time to carry out the campaign the influencers would be added to a Whatsapp group. Here, they received direction about what to post, the hashtags to use, which tweets to engage with and who to target. Synchronizing the tweets was also incredibly important for them. It's what enables them to achieve their goal of trending on Twitter and gain amplification.<br><br>[...]<br><br>They revealed to us that those participating in the exercise are paid roughly between $10 and $15 to participate in three campaigns per day. Each campaign execution involves tweeting about the hashtags of the day until it appears on the trending section of Twitter. Additionally, some individuals have managed to reach retainer level and get paid about $250 per month. Their job is to make sure the campaigns are executed on a day-by-day basis with different hashtags.</i><br><br>An M-PESA account (T0148.002: Bank Account Asset, T0148.001: Online Banking Platform) was used to pay campaign participants.<br><br>Participants were organised in WhatsApp groups (T0129.005: Coordinate on Encrypted/Closed Networks, T0151.007: Chat Broadcast Group, T0151.004: Chat Platform), in which they planned how to get campaign content trending on Twitter (T0121: Manipulate Platform Algorithm, T0151.008: Microblogging Platform). |

File diff suppressed because one or more lines are too long

View file

@ -12,9 +12,9 @@
| [I00079 Three thousand fake tanks](../../generated_pages/incidents/I00079.md) | <i>“On January 4 [2017], a little-known news site based in Donetsk, Ukraine published an article claiming that the United States was sending 3,600 tanks to Europe as part of “the NATO war preparation against Russia”.<br><br> “Like much fake news, this story started with a grain of truth: the US was about to reinforce its armored units in Europe. However, the article converted literally thousands of other vehicles — including hundreds of Humvees and trailers — into tanks, building the US force into something 20 times more powerful than it actually was.<br><br> “The story caught on online. Within three days it had been repeated by a dozen websites in the United States, Canada and Europe, and shared some 40,000 times. It was translated into Norwegian; quoted, unchallenged, by Russian state news agency RIA Novosti; and spread among Russian-language websites.<br><br> “It was also an obvious fake, as any Google news search would have revealed. Yet despite its evident falsehood, it spread widely, and not just in directly Kremlin-run media. Tracking the spread of this fake therefore shines a light on the wider question of how fake stories are dispersed.”</i><br><br> Russian state news agency RIA Novosti presents itself as a news outlet (T0097.202: News Outlet Persona). RIA Novosti is a real news outlet (T0143.001: Authentic Persona), but it did not carry out the basic investigation into the veracity of the narrative it published that is implicitly expected of institutions presenting themselves as news outlets.<br><br> We can't know how or why this narrative ended up being published by RIA Novosti, but we know that it presented a distorted reality as authentic information (T0023: Distort Facts), claiming that the US was sending 3,600 tanks, instead of 3,600 vehicles which included ~180 tanks. |
| [I00084 Russia turns its diplomats into disinformation warriors](../../generated_pages/incidents/I00084.md) | <i>“After the European Union banned Kremlin-backed media outlets and social media giants demoted their posts for peddling falsehoods about the war in Ukraine, Moscow has turned to its cadre of diplomats, government spokespeople and ministers — many of whom have extensive followings on social media — to promote disinformation about the conflict in Eastern Europe, according to four EU and United States officials.”</i><br><br>In this example authentic Russian government officials used their own accounts to promote false narratives (T0143.001: Authentic Persona, T0097.111: Government Official Persona).<br><br>The use of accounts managed by authentic Government / Diplomats to spread false narratives makes it harder for platforms to enforce content moderation, because of the political ramifications they may face for censoring elected officials (T0131: Exploit TOS/Content Moderation). For example, Twitter previously argued that official channels of world leaders are not removed due to the high public interest associated with their activities. |
| [I00085 China's large-scale media push: Attempts to influence Swedish media](../../generated_pages/incidents/I00085.md) | <i>“Four media companies Svenska Dagbladet, Expressen, Sveriges Radio, and Sveriges Television stated that they had been contacted by the Chinese embassy on several occasions, and that they, for instance, had been criticized on their publications, both by letters and e-mails.<br><br> The media company Svenska Dagbladet, had been contacted on several occasions in the past two years, including via e-mails directly from the Chinese ambassador to Sweden. Several times, China and the Chinese ambassador had criticized the media company's publications regarding the conditions in China. Individual reporters also reported having been subjected to criticism.<br><br> The tabloid Expressen had received several letters and e-mails from the embassy, e-mails containing criticism and threatening formulations regarding the coverage of the Swedish book publisher Gui Minhai, who has been imprisoned in China since 2015. Formulations such as “media tyranny” could be found in the e-mails.”</i><br><br> In this case, the Chinese ambassador is using their official role (T0143.001: Authentic Persona, T0097.111: Government Official Persona) to try to influence Swedish press. A government official trying to interfere in other countries' media activities could be a violation of press freedom. In this specific case, the Chinese diplomats are trying to silence criticism against China (T0139.002: Silence). |
| [I00093 China Falsely Denies Disinformation Campaign Targeting Canada's Prime Minister](../../generated_pages/incidents/I00093.md) | <i>“On October 23, Canada's Foreign Ministry said it had discovered a disinformation campaign, likely tied to China, aimed at discrediting dozens of Canadian politicians, including Prime Minister Justin Trudeau.<br><br> “The ministry said the campaign took place in August and September. It used new and hijacked social media accounts to bulk-post messages targeting Canadian politicians (T0146: Account, T0150.001: Newly Created, T0150.005: Compromised).<br><br> “A Chinese Embassy in Canada spokesperson dismissed Canada's accusation as baseless.<br><br> ““Canada was a downright liar and disseminator of false information… Beijing has never meddled in another nation's domestic affairs.”<br><br> “A Chinese Embassy in Canada spokesperson dismissed Canada's accusation as baseless.<br><br> “That is false.<br><br> “The Canadian government's report is based on an investigation conducted by its Rapid Response Mechanism cyber intelligence unit in cooperation with the social media platforms.<br><br> “The investigation exposed China's disinformation campaign dubbed “Spamouflage” -- for its tactic of using “a network of new or hijacked social media accounts that posts and increases the number of propaganda messages across multiple social media platforms including Facebook, X/Twitter, Instagram, YouTube, Medium, Reddit, TikTok, and LinkedIn.””</i><br><br> In this case a network of accounts attributed to China were identified operating on multiple platforms. The report was dismissed as false information by an official in the Chinese Embassy in Canada (T0143.001: Authentic Persona, T0097.111: Government Official Persona, T0129.006: Deny Involvement). |
| [I00118 War Thunder players are once again leaking sensitive military technology information on a video game forum](../../generated_pages/incidents/I00118.md) | <i>In an effort to prove that the developers behind a popular multiplayer vehicle combat game had made a mistake, a player went ahead and published classified British military documents about one of the real-life tanks featured in the game.<br><br>This truly bizarre turn of events recently occurred in the public forum for War Thunder, a free-to-player multiplayer combat sim featuring modern land, air, and sea craft. Getting a small detail wrong on a piece of equipment might not be a big deal for the average gamer, but for the War Thunder crowd it sure as hell is. With 25,000 devoted players, the game very much bills itself as the military vehicle combat simulator.<br><br>A player, who identified himself as a British tank commander, claimed that the game's developers at Gaijin Entertainment had inaccurately represented the Challenger 2 main battle tank used by the British military.<br><br>The self-described tank commander's bio listed his location as Tidworth Camp in Wiltshire, England, according to the UK Defense Journal, which reported that the base is home to the Royal Tank Regiment, which fields Challenger 2 tanks.<br><br>The player, who went by the handle Pyrophoric, reportedly shared an image on the War Thunder forum of the tank's specs that were pulled from the Challenger 2's Army Equipment Support Publication, which is essentially a technical manual. 
<br><br>[...]<br><br>A moderator for the forum, whose handle is “Templar_”, explained that the developer had removed the material after they received confirmation from the Ministry of Defense that the document is still in fact classified.</i><br><br>A user of War Thunder's forums posted confidential documents to win an argument (T0089.001: Obtain Authentic Documents, T0146: Account, T0097.105: Military Personnel Persona, T0115: Post Content, T0143.001: Authentic Persona, T0151.009: Legacy Online Forum Platform). |
| [I00119 Independent journalist publishes Trump campaign document hacked by Iran despite election interference concerns](../../generated_pages/incidents/I00119.md) | <i>An American journalist who runs an independent newsletter published a document [on 26 Sep 2024] that appears to have been stolen from Donald Trump's presidential campaign — the first public posting of a file that is believed to be part of a dossier that federal officials say is part of an Iranian effort to manipulate the [2024] U.S. election.<br><br>The PDF document is a 271-page opposition research file on former President Donald Trump's running mate, Sen. JD Vance, R-Ohio.<br><br>For more than two months, hackers who the U.S. says are tied to Iran have tried to persuade the American media to cover files they stole. No outlets took the bait.<br><br>But on Thursday, reporter Ken Klippenstein, who self-publishes on Substack after he left The Intercept this year, published one of the files.<br><br>[...]<br><br>Reporters who have received the documents describe the same pattern: An AOL account emails them files, signed by a person using the name “Robert,” who is reluctant to speak to their identity or reasons for wanting the documents to receive coverage.<br><br>NBC News was not part of the Robert persona's direct outreach, but it has viewed its correspondence with a reporter at another publication.<br><br> One of the emails from the Robert persona previously viewed by NBC News included three large PDF files, each corresponding to Trump's three reported finalists for vice president. 
The Vance file appears to be the one Klippenstein hosts on his site.</i><br><br>In this example hackers attributed to Iran used the Robert persona to email journalists hacked documents (T0146: Account, T0097.100: Individual Persona, T0153.001: Email Platform).<br><br>The journalist Ken Klippenstein used his existing blog on Substack to host a link to download the document (T0089: Obtain Private Documents, T0097.102: Journalist Persona, T0115: Post Content, T0143.001: Authentic Persona, T0152.001: Blogging Platform, T0152.002: Blog, T0150.003: Pre-Existing). |
| [I00093 China Falsely Denies Disinformation Campaign Targeting Canada's Prime Minister](../../generated_pages/incidents/I00093.md) | <i>“On October 23, Canada's Foreign Ministry said it had discovered a disinformation campaign, likely tied to China, aimed at discrediting dozens of Canadian politicians, including Prime Minister Justin Trudeau.<br><br> “The ministry said the campaign took place in August and September. It used new and hijacked social media accounts to bulk-post messages targeting Canadian politicians (T0146: Account Asset, T0150.001: Newly Created Asset, T0150.005: Compromised Asset).<br><br> “A Chinese Embassy in Canada spokesperson dismissed Canada's accusation as baseless.<br><br> ““Canada was a downright liar and disseminator of false information… Beijing has never meddled in another nation's domestic affairs.”<br><br> “A Chinese Embassy in Canada spokesperson dismissed Canada's accusation as baseless.<br><br> “That is false.<br><br> “The Canadian government's report is based on an investigation conducted by its Rapid Response Mechanism cyber intelligence unit in cooperation with the social media platforms.<br><br> “The investigation exposed China's disinformation campaign dubbed “Spamouflage” -- for its tactic of using “a network of new or hijacked social media accounts that posts and increases the number of propaganda messages across multiple social media platforms including Facebook, X/Twitter, Instagram, YouTube, Medium, Reddit, TikTok, and LinkedIn.””</i><br><br> In this case a network of accounts attributed to China were identified operating on multiple platforms. The report was dismissed as false information by an official in the Chinese Embassy in Canada (T0143.001: Authentic Persona, T0097.111: Government Official Persona, T0129.006: Deny Involvement). |
| [I00118 War Thunder players are once again leaking sensitive military technology information on a video game forum](../../generated_pages/incidents/I00118.md) | <i>In an effort to prove that the developers behind a popular multiplayer vehicle combat game had made a mistake, a player went ahead and published classified British military documents about one of the real-life tanks featured in the game.<br><br>This truly bizarre turn of events recently occurred in the public forum for War Thunder, a free-to-player multiplayer combat sim featuring modern land, air, and sea craft. Getting a small detail wrong on a piece of equipment might not be a big deal for the average gamer, but for the War Thunder crowd it sure as hell is. With 25,000 devoted players, the game very much bills itself as the military vehicle combat simulator.<br><br>A player, who identified himself as a British tank commander, claimed that the game's developers at Gaijin Entertainment had inaccurately represented the Challenger 2 main battle tank used by the British military.<br><br>The self-described tank commander's bio listed his location as Tidworth Camp in Wiltshire, England, according to the UK Defense Journal, which reported that the base is home to the Royal Tank Regiment, which fields Challenger 2 tanks.<br><br>The player, who went by the handle Pyrophoric, reportedly shared an image on the War Thunder forum of the tank's specs that were pulled from the Challenger 2's Army Equipment Support Publication, which is essentially a technical manual. 
<br><br>[...]<br><br>A moderator for the forum, whose handle is “Templar_”, explained that the developer had removed the material after they received confirmation from the Ministry of Defense that the document is still in fact classified.</i><br><br>A user of War Thunder's forums posted confidential documents to win an argument (T0089.001: Obtain Authentic Documents, T0146: Account Asset, T0097.105: Military Personnel Persona, T0115: Post Content, T0143.001: Authentic Persona, T0151.009: Legacy Online Forum Platform). |
| [I00119 Independent journalist publishes Trump campaign document hacked by Iran despite election interference concerns](../../generated_pages/incidents/I00119.md) | <i>An American journalist who runs an independent newsletter published a document [on 26 Sep 2024] that appears to have been stolen from Donald Trump's presidential campaign — the first public posting of a file that is believed to be part of a dossier that federal officials say is part of an Iranian effort to manipulate the [2024] U.S. election.<br><br>The PDF document is a 271-page opposition research file on former President Donald Trump's running mate, Sen. JD Vance, R-Ohio.<br><br>For more than two months, hackers who the U.S. says are tied to Iran have tried to persuade the American media to cover files they stole. No outlets took the bait.<br><br>But on Thursday, reporter Ken Klippenstein, who self-publishes on Substack after he left The Intercept this year, published one of the files.<br><br>[...]<br><br>Reporters who have received the documents describe the same pattern: An AOL account emails them files, signed by a person using the name “Robert,” who is reluctant to speak to their identity or reasons for wanting the documents to receive coverage.<br><br>NBC News was not part of the Robert persona's direct outreach, but it has viewed its correspondence with a reporter at another publication.<br><br> One of the emails from the Robert persona previously viewed by NBC News included three large PDF files, each corresponding to Trump's three reported finalists for vice president. 
The Vance file appears to be the one Klippenstein hosts on his site.</i><br><br>In this example hackers attributed to Iran used the Robert persona to email journalists hacked documents (T0146: Account Asset, T0097.100: Individual Persona, T0153.001: Email Platform).<br><br>The journalist Ken Klippenstein used his existing blog on Substack to host a link to download the document (T0089: Obtain Private Documents, T0097.102: Journalist Persona, T0115: Post Content, T0143.001: Authentic Persona, T0152.001: Blogging Platform, T0152.002: Blog Asset, T0150.003: Pre-Existing Asset). |

View file

@ -19,10 +19,10 @@
| [I00089 Hackers Use Fake Facebook Profiles of Attractive Women to Spread Viruses, Steal Passwords](../../generated_pages/incidents/I00089.md) | <i>“On Facebook, Rita, Alona and Christina appeared to be just like the millions of other U.S citizens sharing their lives with the world. They discussed family outings, shared emojis and commented on each other's photographs.<br><br> “In reality, the three accounts were part of a highly-targeted cybercrime operation, used to spread malware that was able to steal passwords and spy on victims.<br><br> “Hackers with links to Lebanon likely ran the covert scheme using a strain of malware dubbed "Tempting Cedar Spyware," according to researchers from Prague-based anti-virus company Avast, which detailed its findings in a report released on Wednesday.<br><br> “In a honey trap tactic as old as time, the culprits' targets were mostly male, and lured by fake attractive women. <br><br> “In the attack, hackers would send flirtatious messages using Facebook to the chosen victims, encouraging them to download a second, booby-trapped, chat application known as Kik Messenger to have "more secure" conversations. Upon analysis, Avast experts found that "many fell for the trap.””</i><br><br> In this example threat actors took on the persona of a romantic suitor on Facebook, directing their targets to another platform (T0097.109: Romantic Suitor Persona, T0145.006: Attractive Person Account Imagery, T0143.002: Fabricated Persona). |
| [I00091 Facebook uncovers Chinese network behind fake expert](../../generated_pages/incidents/I00091.md) | <i>“Earlier in July [2021], an account posing as a Swiss biologist called Wilson Edwards had made statements on Facebook and Twitter that the United States was applying pressure on the World Health Organization scientists who were studying the origins of Covid-19 in an attempt to blame the virus on China.<br><br> “State media outlets, including CGTN, Shanghai Daily and Global Times, had cited the so-called biologist based on his Facebook profile.<br><br> “However, the Swiss embassy said in August that the person likely did not exist, as the Facebook account was opened only two weeks prior to its first post and only had three friends.<br><br> “It added "there was no registry of a Swiss citizen with the name "Wilson Edwards" and no academic articles under the name", and urged Chinese media outlets to take down any mention of him.<br><br> [...]<br><br> “It also said that his profile photo also appeared to have been generated using machine-learning capabilities.”</i><br><br> In this example an account created on Facebook presented itself as a Swiss biologist to present a narrative related to COVID-19 (T0143.002: Fabricated Persona, T0097.106: Researcher Persona). It used an AI-Generated profile picture to disguise itself (T0145.002: AI-Generated Account Imagery). |
| [I00095 Meta: Chinese disinformation network was behind London front company recruiting content creators](../../generated_pages/incidents/I00095.md) | <i>“A Chinese disinformation network operating fictitious employee personas across the internet used a front company in London to recruit content creators and translators around the world, according to Meta.<br><br> “The operation used a company called London New Europe Media, registered to an address on the upmarket Kensington High Street, that attempted to recruit real people to help it produce content. It is not clear how many people it ultimately recruited.<br><br> “London New Europe Media also “tried to engage individuals to record English-language videos scripted by the network,” in one case leading to a recording criticizing the United States being posted on YouTube, said Meta.</i><br><br> In this example a front company was used (T0097.205: Business Persona) to enable actors to recruit targets for producing content (T0097.106: Recruiter Persona, T0143.002: Fabricated Persona). |
| [I00096 China ramps up use of AI misinformation](../../generated_pages/incidents/I00096.md) | The Microsoft Threat Analysis Centre (MTAC) published a report documenting the use of AI by pro-Chinese threat actors:<br><br><i>On 13 January, Spamouflage [(a Pro-Chinese Communist Party actor)] posted audio clips to YouTube of independent candidate [for Taiwan's Jan 2024 presidential election] Terry Gou, who also founded electronics giant Foxconn, in which Gou endorsed another candidate in the race. This clip was almost certainly AI-generated, and it was swiftly removed by YouTube. A fake letter purporting to be from Gou, endorsing the same candidate, had already circulated; Gou had of course made no such endorsement.</i><br><br>Here Spamouflage used an account on YouTube to post AI-generated audio impersonating an electoral candidate (T0146: Account, T0152.006: Video Platform, T0115: Post Content, T0088.001: Develop AI-Generated Audio (Deepfakes), T0143.003: Impersonated Persona, T0097.110: Party Official Persona).<br><br><i>Spamouflage also exploited AI-powered video platform CapCut, which is owned by TikTok backers ByteDance, to generate fake news anchors which were used in a variety of campaigns targeting the various presidential candidates in Taiwan.</i><br><br>Spamouflage created accounts on CapCut, which it used to create AI-generated videos of fabricated news anchors (T0146: Account, T0154.002: AI Media Platform, T0087.001: Develop AI-Generated Video (Deepfakes), T0143.002: Fabricated Persona, T0097.102: Journalist Persona). |
| [I00120 factcheckUK or fakecheckUK? Reinventing the political faction as the impartial factchecker](../../generated_pages/incidents/I00120.md) | Ahead of the 2019 UK Election during a leaders debate, the Conservative party rebranded their “Conservative Campaign Headquarters Press” account to “FactCheckUK”:<br><br><iThe evening of the 19th November 2019 saw the first of three Leaders Debates on ITV, starting at 8pm and lasting for an hour. Current Prime Minister and leader of the Conservatives, Boris Johnson faced off against Labour party leader, Jeremy Corbyn. Plenty of people will have been watching the debate live, but a good proportion were watching (er, twitching”?) via Twitter. This is something Ive done in the past for certain shows. In some cases I just cant watch or listen, but I can read, and in other cases, the commentary is far more interesting and entertaining than the show itself will ever be. This, for me, is just such a case. But very quickly, all eyes turned upon a modestly sized account with the handle @CCHQPress. Thats short for Conservative Campaign Headquarters Press. According to their (current!) Twitter bio, they are based in Westminster and they provide snippets of news and commentary from CCHQ to their 75k followers.<br><br>That is, until a few minutes into the debate.<br><br>All at once, like a person throwing off their street clothes to reveal some sinister new identity underneath, @CCHQPress abruptly shed its name, blue Conservative logo, Boris Johnson banner, and bio description. Moments later, it had entirely reinvented itself.<br><br>The purple banner was emblazoned with white font that read “✓ factcheckUK [with a “FROM CCQH” subheading]”.<br><br>The matching profile picture was a white tick in a purple circle. The bio was updated to: “Fact checking Labour from CCHQ”. And the name now read factcheckUK, with the customary Twitter blue (or white depending on your phone settings!) 
validation tick still after it</i><br><br>In this example an existing verified social media account on Twitter was repurposed to inauthentically present itself as a Fact Checking service (T0151.008: Microblogging Platform, T0150.003: Pre-Existing, T0146.003: Verified Account, T0097.203: Fact Checking Organisation Persona, T0143.002: Fabricated Persona). |
| [I00121 Operation Overload: how pro-Russian actors flood newsrooms with fake content and seek to divert their efforts](../../generated_pages/incidents/I00121.md) | <i>The unique aspect of Operation Overload is a barrage of emails sent to newsrooms and fact-checkers across Europe. The authors of these messages urge recipients to verify content allegedly found online. The email subject lines often include an incitement to verify the claims briefly described in the message body. This is followed by a short list of links directing recipients to posts on Telegram, X, or known pro-Russian websites, including Pravda and Sputnik. <br><br>We have collected 221 emails sent to 20 organisations. The organisations mostly received identical emails urging them to fact-check specific false stories, which demonstrates that the emails were sent as part of a larger coordinated campaign.<br><br>[...]<br><br>The authors of the emails do not hide their intention to see the fake content widely spread. In February 2024, a journalist at the German outlet CORRECTIV engaged with the sender of one of the emails, providing feedback on the narratives which were originally sent. CORRECTIV received a response from the same Gmail address, initially expressing respect and trust in CORRECTIVs assessment, while asking: “is it possible for your work to be seen by as many people as possible?”, thereby clearly stating the goal of the operation.<br><br>[...]<br><br>All the emails come from authors posing as concerned citizens. All emails are sent with Gmail accounts, which is typical for personal use. This makes it challenging to identify the individuals behind these emails, as anyone can open a Gmail account for free. 
The email headers indicate that the messages were sent from the Gmail interface, not from a personal client which would disclose the senders IP address.</i><br><br>In this example, threat actors used gmail accounts (T0146.001: Free Account, T0097.100: Individual Persona, T0143.002: Fabricated Persona, T0153.001: Email Platform) to target journalists and fact-checkers, with the apparent goal of having them amplify operation narratives through fact checks. |
| [I00125 The Agency](../../generated_pages/incidents/I00125.md) | In 2014 threat actors attributed to Russia spread the false narrative that a local chemical plant had leaked toxic fumes. This report discusses aspects of the operation:<br><br><i>[The chemical plant leak] hoax was just one in a wave of similar attacks during the second half of last year. On Dec. 13, two months after a handful of Ebola cases in the United States touched off a minor media panic, many of the same Twitter accounts used to spread the Columbian Chemicals hoax began to post about an outbreak of Ebola in Atlanta. [...] Again, the attention to detail was remarkable, suggesting a tremendous amount of effort. A YouTube video showed a team of hazmat-suited medical workers transporting a victim from the airport. Beyoncés recent single “7/11” played in the background, an apparent attempt to establish the videos contemporaneity. A truck in the parking lot sported the logo of the Hartsfield-Jackson Atlanta International Airport.</i><br><br>Accounts which previously presented as Louisiana locals were repurposed for use in a different campaign, this time presenting as locals to Atlanta, a place over 500 miles away from Louisiana and in a different timezone (T0146: Account, T0097.101: Local Persona, T0143.002: Fabricated Persona, T0151.008: Microblogging Platform, T0150.004: Repurposed). <br><br>A video was created which appeared to support the campaigns narrative (T0087: Develop Video-Based Content), with great attention given to small details which made the video appear more legitimate. |
| [I00096 China ramps up use of AI misinformation](../../generated_pages/incidents/I00096.md) | The Microsoft Threat Analysis Centre (MTAC) published a report documenting the use of AI by pro-Chinese threat actors:<br><br><i>On 13 January, Spamouflage [(a Pro-Chinese Communist Party actor)] posted audio clips to YouTube of independent candidate [for Taiwan's Jan 2024 presidential election] Terry Gou – who also founded electronics giant Foxconn – in which Gou endorsed another candidate in the race. This clip was almost certainly AI-generated, and it was swiftly removed by YouTube. A fake letter purporting to be from Gou, endorsing the same candidate, had already circulated – Gou had of course made no such endorsement.</i><br><br>Here Spamouflage used an account on YouTube to post AI-generated audio impersonating an electoral candidate (T0146: Account Asset, T0152.006: Video Platform, T0115: Post Content, T0088.001: Develop AI-Generated Audio (Deepfakes), T0143.003: Impersonated Persona, T0097.110: Party Official Persona).<br><br><i>Spamouflage also exploited AI-powered video platform CapCut – which is owned by TikTok backers ByteDance – to generate fake news anchors which were used in a variety of campaigns targeting the various presidential candidates in Taiwan.</i><br><br>Spamouflage created accounts on CapCut, which it used to create AI-generated videos of fabricated news anchors (T0146: Account Asset, T0154.002: AI Media Platform, T0087.001: Develop AI-Generated Video (Deepfakes), T0143.002: Fabricated Persona, T0097.102: Journalist Persona). |
| [I00120 factcheckUK or fakecheckUK? Reinventing the political faction as the impartial factchecker](../../generated_pages/incidents/I00120.md) | Ahead of the 2019 UK Election, during a leaders' debate, the Conservative party rebranded their “Conservative Campaign Headquarters Press” account to “FactCheckUK”:<br><br><i>The evening of the 19th November 2019 saw the first of three Leaders Debates on ITV, starting at 8pm and lasting for an hour. Current Prime Minister and leader of the Conservatives, Boris Johnson faced off against Labour party leader, Jeremy Corbyn. Plenty of people will have been watching the debate live, but a good proportion were “watching” (er, “twitching”?) via Twitter. This is something I've done in the past for certain shows. In some cases I just can't watch or listen, but I can read, and in other cases, the commentary is far more interesting and entertaining than the show itself will ever be. This, for me, is just such a case. But very quickly, all eyes turned upon a modestly sized account with the handle @CCHQPress. That's short for Conservative Campaign Headquarters Press. According to their (current!) Twitter bio, they are based in Westminster and they provide “snippets of news and commentary from CCHQ” to their 75k followers.<br><br>That is, until a few minutes into the debate.<br><br>All at once, like a person throwing off their street clothes to reveal some sinister new identity underneath, @CCHQPress abruptly shed its name, blue Conservative logo, Boris Johnson banner, and bio description. Moments later, it had entirely reinvented itself.<br><br>The purple banner was emblazoned with white font that read “✓ factcheckUK [with a “FROM CCHQ” subheading]”.<br><br>The matching profile picture was a white tick in a purple circle. The bio was updated to: “Fact checking Labour from CCHQ”. And the name now read factcheckUK, with the customary Twitter blue (or white depending on your phone settings!) validation tick still after it</i><br><br>In this example an existing verified social media account on Twitter was repurposed to inauthentically present itself as a Fact Checking service (T0151.008: Microblogging Platform, T0150.003: Pre-Existing Asset, T0146.003: Verified Account Asset, T0097.203: Fact Checking Organisation Persona, T0143.002: Fabricated Persona). |
| [I00121 Operation Overload: how pro-Russian actors flood newsrooms with fake content and seek to divert their efforts](../../generated_pages/incidents/I00121.md) | <i>The unique aspect of Operation Overload is a barrage of emails sent to newsrooms and fact-checkers across Europe. The authors of these messages urge recipients to verify content allegedly found online. The email subject lines often include an incitement to verify the claims briefly described in the message body. This is followed by a short list of links directing recipients to posts on Telegram, X, or known pro-Russian websites, including Pravda and Sputnik. <br><br>We have collected 221 emails sent to 20 organisations. The organisations mostly received identical emails urging them to fact-check specific false stories, which demonstrates that the emails were sent as part of a larger coordinated campaign.<br><br>[...]<br><br>The authors of the emails do not hide their intention to see the fake content widely spread. In February 2024, a journalist at the German outlet CORRECTIV engaged with the sender of one of the emails, providing feedback on the narratives which were originally sent. CORRECTIV received a response from the same Gmail address, initially expressing respect and trust in CORRECTIV's assessment, while asking: “is it possible for your work to be seen by as many people as possible?”, thereby clearly stating the goal of the operation.<br><br>[...]<br><br>All the emails come from authors posing as concerned citizens. All emails are sent with Gmail accounts, which is typical for personal use. This makes it challenging to identify the individuals behind these emails, as anyone can open a Gmail account for free. The email headers indicate that the messages were sent from the Gmail interface, not from a personal client which would disclose the sender's IP address.</i><br><br>In this example, threat actors used Gmail accounts (T0146.001: Free Account Asset, T0097.100: Individual Persona, T0143.002: Fabricated Persona, T0153.001: Email Platform) to target journalists and fact-checkers, with the apparent goal of having them amplify operation narratives through fact checks. |
| [I00125 The Agency](../../generated_pages/incidents/I00125.md) | In 2014 threat actors attributed to Russia spread the false narrative that a local chemical plant had leaked toxic fumes. This report discusses aspects of the operation:<br><br><i>[The chemical plant leak] hoax was just one in a wave of similar attacks during the second half of last year. On Dec. 13, two months after a handful of Ebola cases in the United States touched off a minor media panic, many of the same Twitter accounts used to spread the Columbian Chemicals hoax began to post about an outbreak of Ebola in Atlanta. [...] Again, the attention to detail was remarkable, suggesting a tremendous amount of effort. A YouTube video showed a team of hazmat-suited medical workers transporting a victim from the airport. Beyoncé's recent single “7/11” played in the background, an apparent attempt to establish the video's contemporaneity. A truck in the parking lot sported the logo of the Hartsfield-Jackson Atlanta International Airport.</i><br><br>Accounts which previously presented as Louisiana locals were repurposed for use in a different campaign, this time presenting as locals to Atlanta, a place over 500 miles away from Louisiana and in a different timezone (T0146: Account Asset, T0097.101: Local Persona, T0143.002: Fabricated Persona, T0151.008: Microblogging Platform, T0150.004: Repurposed Asset). <br><br>A video was created which appeared to support the campaign's narrative (T0087: Develop Video-Based Content), with great attention given to small details which made the video appear more legitimate. |
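The Operation Overload entry above rests on a header-level detail: mail submitted through the Gmail web interface enters the delivery chain directly at a Google host, while mail sent from a personal client discloses the sender's IP in the oldest `Received` hop. A minimal sketch of that check, using Python's standard `email` module — the host names and the `by ...`/`from ...` heuristic are illustrative assumptions, not a definitive attribution method:

```python
import email

def received_chain(raw_message: str) -> list[str]:
    """Return Received header values oldest-first.

    Each relay prepends its own Received header, so the last one in the
    message is the oldest hop (closest to the sender)."""
    msg = email.message_from_string(raw_message)
    return list(reversed(msg.get_all("Received", [])))

def looks_like_gmail_web_submission(raw_message: str) -> bool:
    """Heuristic: mail submitted via the Gmail web interface typically
    enters the chain at a google.com host with only a 'by ...' clause
    (no client IP disclosed), whereas mail from a personal client
    usually shows an oldest hop like 'from <client> by smtp.gmail.com'."""
    chain = received_chain(raw_message)
    if not chain:
        return False
    oldest = chain[0].strip().lower()
    return oldest.startswith("by ") and "google.com" in oldest
```

Checks like this can support, but never establish, attribution: any relay the sender controls can forge `Received` lines, so header analysis is one signal among many.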


@ -7,21 +7,21 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00064 Tinder nightmares: the promise and peril of political bots](../../generated_pages/incidents/I00064.md) | <I>“In the days leading up to the UK's [2019] general election, youths looking for love online encountered a whole new kind of Tinder nightmare. A group of young activists built a Tinder chatbot to co-opt profiles and persuade swing voters to support Labour. The bot accounts sent 30,000-40,000 messages to targeted 18-25 year olds in battleground constituencies like Dudley North, which Labour ended up winning by only 22 votes. [...]<br><br> “The activists maintain that the project was meant to foster democratic engagement. But screenshots of the bots' activity expose a harsher reality. Images of conversations between real users and these bots, posted on i-D, Mashable, as well as on Fowler and Goodman's public Twitter accounts, show that the bots did not identify themselves as automated accounts, instead posing as the user whose profile they had taken over. While conducting research for this story, it turned out that a number of [the reporter's friends] living in Oxford had interacted with the bot in the lead up to the election and had no idea that it was not a real person.”</i><br><br> In this example people offered up their real accounts for the automation of political messaging; the actors convinced the users to give up access to their accounts to use in the operation. The actors maintained the accounts' existing personas, and presented themselves as potential romantic suitors for legitimate platform users (T0097.109: Romantic Suitor Persona, T0143.003: Impersonated Persona, T0146: Account Asset, T0150.007: Rented Asset, T0151.017: Dating Platform). |
| [I00068 Attempted Audio Deepfake Call Targets LastPass Employee](../../generated_pages/incidents/I00068.md) | <i>“While reports of [...] deepfake calls targeting private companies are luckily still rare, LastPass itself experienced a deepfake attempt earlier today that we are sharing with the larger community to help raise awareness that this tactic is spreading and all companies should be on the alert. In our case, an employee received a series of calls, texts, and at least one voicemail featuring an audio deepfake from a threat actor impersonating our CEO via WhatsApp. As the attempted communication was outside of normal business communication channels and due to the employee's suspicion regarding the presence of many of the hallmarks of a social engineering attempt (such as forced urgency), our employee rightly ignored the messages and reported the incident to our internal security team so that we could take steps to both mitigate the threat and raise awareness of the tactic both internally and externally.”</i><br><br>In this example attackers created an account on WhatsApp which impersonated the CEO of LastPass (T0097.100: Individual Persona, T0143.003: Impersonated Persona, T0146: Account Asset, T0151.004: Chat Platform, T0155.007: Encrypted Communication Channel). They used this asset to target an employee using deepfaked audio (T0088.001: Develop AI-Generated Audio (Deepfakes)). |
| [I00069 Uncharmed: Untangling Iran's APT42 Operations](../../generated_pages/incidents/I00069.md) | <I>“[Iranian state-sponsored cyber espionage actor] APT42's cloud operations attack lifecycle can be described in details as follows:<br> <br>- “Social engineering schemes involving decoys and trust building, which includes masquerading as legitimate NGOs and conducting ongoing correspondence with the target, sometimes lasting several weeks. <br>- The threat actor masqueraded as well-known international organizations in the legal and NGO fields and sent emails from domains typosquatting the original NGO domains, for example aspenlnstitute[.]org. <br>- The Aspen Institute became aware of this spoofed domain and collaborated with industry partners, including blocking it in SafeBrowsing, thus protecting users of Google Chrome and additional browsers. <br>- To increase their credibility, APT42 impersonated high-ranking personnel working at the aforementioned organizations when creating the email personas. <br>- APT42 enhanced their campaign credibility by using decoy material inviting targets to legitimate and relevant events and conferences. In one instance, the decoy material was hosted on an attacker-controlled SharePoint folder, accessible only after the victim entered their credentials. Mandiant did not identify malicious elements in the files, suggesting they were used solely to gain the victim's trust.”</I><br><br> In this example APT42, an Iranian state-sponsored cyber espionage actor, created a domain impersonating the existing NGO The Aspen Institute (T0143.003: Impersonated Persona, T0097.207: NGO Persona). They increased the perceived legitimacy of the impersonation by also impersonating high-ranking employees of the NGO (T0097.100: Individual Persona, T0143.003: Impersonated Persona). |
| [I00070 Eli Lilly Clarifies Its Not Offering Free Insulin After Tweet From Fake Verified Account—As Chaos Unfolds On Twitter](../../generated_pages/incidents/I00070.md) | <i>“Twitter Blue launched [November 2022], giving any users who pay $8 a month the ability to be verified on the site, a feature previously only available to public figures, government officials and journalists as a way to show they are who they claim to be.<br><br> “[A day after the launch], an account with the handle @EliLillyandCo labeled itself with the name “Eli Lilly and Company,” and by using the same logo as the company in its profile picture and with the verification checkmark, was indistinguishable from the real company (the picture has since been removed and the account has labeled itself as a parody profile).<br><br> The parody account tweeted “we are excited to announce insulin is free now.””</i><br><br> In this example an account impersonated the pharmaceutical company Eli Lilly (T0097.205: Business Persona, T0143.003: Impersonated Persona) by copying its name, profile picture (T0145.001: Copy Account Imagery), and paying for verification. |
| [I00071 Russia-aligned hacktivists stir up anti-Ukrainian sentiments in Poland](../../generated_pages/incidents/I00071.md) | <i>“The August 17 [2022] Telegram post [which contained a falsified letter from the Ukrainian Minister of Foreign Affairs asking Poland to rename Belwederska Street in Warsaw — the location of the Russian embassy building — as Stepan Bandera Street, in honor of the far-right nationalist who led the Ukrainian Insurgent Army during WWII] also contained screenshots of Facebook posts that appeared on two Facebook accounts belonging to Polish nationals Piotr Górka, an expert in the history of the Polish Air Force, and Dariusz Walusiak, a Polish historian and documentary maker. The Górka post suggested that he fully supported the Polish government's decision to change Belwederska Street to Stepan Bandera Street.<br><br> “In a statement to the DFRLab, Górka said his account was accessed without his consent. “This is not my post loaded to my Facebook page,” he explained. “My site was hacked, some days ago.” At the time of publishing, Piotr Górka's post and his Facebook account were no longer accessible.<br><br> “The post on Górka's Facebook page was shared by Dariusz Walusiak's Facebook account; the account also reposted it on the Facebook walls of more than twenty other Facebook users, including Adam Kalita, currently working at Krakow branch of the Institute of National Remembrance; Jan Kasprzyk, head of the Office for War Veterans and Victims of Oppression; and Alicja Kondraciuk, a Polish public figure living in Krakow.<br><br> “Walusiak's Facebook account is also no longer accessible. Given his work on Polish history and identity, it seems highly unlikely he would support the Bandera measure; the DFRLab has also reached out to him for comment.<br><br> “The fact that Joker DPR's Telegram post included screenshots of their Facebook posts raises the strong possibility that both Facebook accounts were compromised, and that hackers planted false statements on their pages that would seem out of character for them in order to gain further attention to the forged documents.”</I><br><br> In this example, threat actors used compromised accounts of Polish historians who have enough relevant knowledge to plausibly weigh in on the forged letter's narrative (T0143.003: Impersonated Persona, T0097.101: Local Persona, T0097.108: Expert Persona, T0146: Account Asset, T0150.005: Compromised Asset, T0151.001: Social Media Platform). <br><br> This matches T0097.108: Expert Persona because the impersonation exploited Górka and Walusiak's existing personas as experts in Polish history. |
| [I00075 How Russia Meddles Abroad for Profit: Cash, Trolls and a Cult Leader](../../generated_pages/incidents/I00075.md) | <I>“In the campaign's final weeks, Pastor Mailhol said, the team of Russians made a request: Drop out of the race and support Mr. Rajoelina. He refused.<br><br> “The Russians made the same proposal to the history professor running for president, saying, “If you accept this deal you will have money,” according to Ms. Rasamimanana, the professor's campaign manager.<br><br> When the professor refused, she said, the Russians created a fake Facebook page that mimicked his official page and posted an announcement on it that he was supporting Mr. Rajoelina.”</i><br><br> In this example actors created online accounts styled to look like official pages to trick targets into thinking that the presidential candidate announced that they had dropped out of the election (T0097.110: Party Official Persona, T0143.003: Impersonated Persona). |
| [I00076 Network of Social Media Accounts Impersonates U.S. Political Candidates, Leverages U.S. and Israeli Media in Support of Iranian Interests](../../generated_pages/incidents/I00076.md) | <i>“Some Twitter accounts in the network [of inauthentic accounts attributed to Iran] impersonated Republican political candidates that ran for House of Representatives seats in the 2018 U.S. congressional midterms. These accounts appropriated the candidates' photographs and, in some cases, plagiarized tweets from the real individuals' accounts. Aside from impersonating real U.S. political candidates, the behavior and activity of these accounts resembled that of the others in the network.<br><br> “For example, the account @livengood_marla impersonated Marla Livengood, a 2018 candidate for California's 9th Congressional District, using a photograph of Livengood and a campaign banner for its profile and background pictures. The account began tweeting on Sept. 24, 2018, with its first tweet plagiarizing one from Livengood's official account earlier that month”<br><br> [...]<br><br> “In another example, the account @ButlerJineea impersonated Jineea Butler, a 2018 candidate for New York's 13th Congressional District, using a photograph of Butler for its profile picture and incorporating her campaign slogans into its background picture, as well as claiming in its Twitter bio to be a “US House candidate, NY-13” and linking to Butler's website, jineeabutlerforcongress[.]com.”</I><br><br> In this example actors impersonated existing political candidates (T0097.110: Member of Political Party Persona, T0143.003: Impersonated Persona), strengthening the impersonation by copying legitimate accounts' imagery (T0145.001: Copy Account Imagery), and copying their previous posts (T0084.002: Plagiarise Content). |
| [I00082 Meta's November 2021 Adversarial Threat Report ](../../generated_pages/incidents/I00082.md) | <i>“[Meta] removed a network of accounts in Vietnam for violating our Inauthentic Behavior policy against mass reporting. They coordinated the targeting of activists and other people who publicly criticized the Vietnamese government and used false reports of various violations in an attempt to have these users removed from our platform. The people behind this activity relied primarily on authentic and duplicate accounts to submit hundreds — in some cases, thousands — of complaints against their targets through our abuse reporting flows.<br><br>“Many operators also maintained fake accounts — some of which were detected and disabled by our automated systems — to pose as their targets so they could then report the legitimate accounts as fake. They would frequently change the gender and name of their fake accounts to resemble the target individual. Among the most common claims in this misleading reporting activity were complaints of impersonation, and to a much lesser extent inauthenticity. The network also advertised abusive services in their bios and constantly evolved their tactics in an attempt to evade detection.“</i><br><br>In this example actors repurposed their accounts to impersonate targeted activists (T0097.103: Activist Persona, T0143.003: Impersonated Persona) in order to falsely report the activists' legitimate accounts as impersonations (T0124.001: Report Non-Violative Opposing Content). |
| [I00087 Challenging Truth and Trust: A Global Inventory of Organized Social Media Manipulation](../../generated_pages/incidents/I00087.md) | <i>“Another actor operating in China is the American-based company Devumi. Most of the Twitter accounts managed by Devumi resemble real people, and some are even associated with a kind of large-scale social identity theft. At least 55,000 of the accounts use the names, profile pictures, hometowns and other personal details of real Twitter users, including minors, according to The New York Times (Confessore et al., 2018).”</i><br><br>In this example accounts impersonated real locals while spreading operation narratives (T0143.003: Impersonated Persona, T0097.101: Local Persona). The impersonation included stealing the legitimate accounts' profile pictures (T0145.001: Copy Account Imagery). |
| [I00094 A glimpse inside a Chinese influence campaign: How bogus news websites blur the line between true and false](../../generated_pages/incidents/I00094.md) | Researchers identified websites managed by a Chinese marketing firm which presented themselves as news organisations.<br><br> <i>“On its official website, the Chinese marketing firm boasted that they were in contact with news organizations across the globe, including one in South Korea called the “Chungcheng Times.” According to the joint team, this outlet is a fictional news organization created by the offending company. The Chinese company sought to disguise the site's true identity and purpose by altering the name attached to it by one character—making it very closely resemble the name of a legitimate outlet operating out of Chungchengbuk-do.<br><br> “The marketing firm also established a news organization under the Korean name “Gyeonggido Daily,” which closely resembles legitimate news outlets operating out of Gyeonggi province such as “Gyeonggi Daily,” “Daily Gyeonggi Newspaper,” and “Gyeonggi N Daily.” One of the fake news sites was named “Incheon Focus,” a title that could be easily mistaken for the legitimate local news outlet, “Focus Incheon.” Furthermore, the Chinese marketing company operated two fake news sites with names identical to two separate local news organizations, one of which ceased operations in December 2022.<br><br> “In total, fifteen out of eighteen Chinese fake news sites incorporated the correct names of real regions in their fake company names. “If the operators had created fake news sites similar to major news organizations based in Seoul, however, the intended deception would have easily been uncovered,” explained Song Tae-eun, an assistant professor in the Department of National Security & Unification Studies at the Korea National Diplomatic Academy, to The Readable. 
“There is also the possibility that they are using the regional areas as an attempt to form ties with the local community; that being the government, the private sector, and religious communities.””</i><br><br> The firm styled their news site to resemble existing local news outlets in their target region (T0097.201: Local Institution Persona, T0097.202: News Outlet Persona, T0143.003: Impersonated Persona). |
| [I00107 The Lies Russia Tells Itself](../../generated_pages/incidents/I00107.md) | <i>The Moscow firm Social Design Agency (SDA) has been attributed as being behind a Russian disinformation project known as Doppelganger:<br><br>The SDA's deception work first surfaced in 2022, likely almost immediately after Doppelganger got off the ground. In April of that year, Meta, the parent company of Facebook and Instagram, disclosed in a quarterly report that it had removed from its platforms “a network of about 200 accounts operated from Russia.” By August 2022, German investigative journalists revealed that they had discovered forgeries of about 30 news sites, including many of the country's biggest media outlets—Frankfurter Allgemeine, Der Spiegel, and Bild—but also Britain's Daily Mail and France's 20 Minutes. The sites had deceptive URLs such as www-dailymail-co-uk.dailymail.top. </i><br><br>As part of the SDA's work, they created many websites which impersonated existing media outlets. Sites used domain impersonation tactics to increase perceived legitimacy of their impersonations (T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0152.003: Website Hosting Platform, T0149.003: Lookalike Domain). |
| [I00116 Blue-tick scammers target consumers who complain on X](../../generated_pages/incidents/I00116.md) | <i>Consumers who complain of poor customer service on X are being targeted by scammers after the social media platform formerly known as Twitter changed its account verification process.<br><br>Bank customers and airline passengers are among those at risk of phishing scams when they complain to companies via X. Fraudsters, masquerading as customer service agents, respond under fake X handles and trick victims into disclosing their bank details to get a promised refund.<br><br>They typically win the trust of victims by displaying the blue checkmark icon, which until this year denoted accounts that had been officially verified by X.<br><br>Changes introduced this year allow the icon to be bought by anyone who pays an £11 monthly fee for the site's subscription service, renamed this month from Twitter Blue to X Premium. Businesses that pay £950 a month receive a gold tick. X's terms and conditions do not state whether subscriber accounts are pre-vetted.<br><br>Andrew Thomas was contacted by a scam account after posting a complaint to the travel platform Booking.com. “I'd been trying since April to get a refund after our holiday flights were cancelled and finally resorted to X,” he said.<br><br>“I received a response asking me to follow them, and DM [direct message] them with a contact number. They then called me via WhatsApp asking for my reference number so they could investigate. Later they called back to say that I would be refunded via their payment partner for which I'd need to download an app.”<br><br>Thomas became suspicious and checked the X profile. 
“It looked like the real thing, but I noticed that there was an unexpected hyphen in the Twitter handle and that it had only joined X in July 2023,” he said.</i><br><br>In this example a newly created paid account on X was used to direct users to other platforms (T0146.002: Paid Account, T0146.003: Verified Account, T0146.005: Lookalike Account ID, T0097.205: Business Persona, T0122: Direct Users to Alternative Platforms, T0143.003: Impersonated Persona, T0151.008: Microblogging Platform, T0150.001: Newly Created). |
| [I00126 Charming Kitten Updates POWERSTAR with an InterPlanetary Twist](../../generated_pages/incidents/I00126.md) | <i>The target of the recently observed [highly targeted spearphishing attack by “Charming Kitten”, a hacker group attributed to Iran] had published an article related to Iran. The publicity appears to have garnered the attention of Charming Kitten, who subsequently created an email address to impersonate a reporter of an Israeli media organization in order to send the target an email. Prior to sending malware to the target, the attacker simply asked if the target would be open to reviewing a document they had written related to US foreign policy. The target agreed to do so, since this was not an unusual request; they are frequently asked by journalists to review opinion pieces relating to their field of work.<br><br>In an effort to further gain the target's confidence, Charming Kitten continued the interaction with another benign email containing a list of questions, to which the target then responded with answers. After multiple days of benign and seemingly legitimate interaction, Charming Kitten finally sent a “draft report”; this was the first time anything opaquely malicious occurred. The “draft report” was, in fact, a password-protected RAR file containing a malicious LNK file. The password for the RAR file was provided in a subsequent email.</i><br><br>In this example, threat actors created an email address on a domain which impersonated an existing Israeli news organisation, posing as a reporter who worked there (T0097.102: Journalist Persona, T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0149.003: Lookalike Domain, T0149.002: Email Domain) in order to convince the target to download a document containing malware (T0085.004: Develop Document, T0147.003: Malware). |
| [I00116 Blue-tick scammers target consumers who complain on X](../../generated_pages/incidents/I00116.md) | <i>Consumers who complain of poor customer service on X are being targeted by scammers after the social media platform formerly known as Twitter changed its account verification process.<br><br>Bank customers and airline passengers are among those at risk of phishing scams when they complain to companies via X. Fraudsters, masquerading as customer service agents, respond under fake X handles and trick victims into disclosing their bank details to get a promised refund.<br><br>They typically win the trust of victims by displaying the blue checkmark icon, which until this year denoted accounts that had been officially verified by X.<br><br>Changes introduced this year allow the icon to be bought by anyone who pays an £11 monthly fee for the site's subscription service, renamed this month from Twitter Blue to X Premium. Businesses that pay £950 a month receive a gold tick. X's terms and conditions do not state whether subscriber accounts are pre-vetted.<br><br>Andrew Thomas was contacted by a scam account after posting a complaint to the travel platform Booking.com. “I'd been trying since April to get a refund after our holiday flights were cancelled and finally resorted to X,” he said.<br><br>“I received a response asking me to follow them, and DM [direct message] them with a contact number. They then called me via WhatsApp asking for my reference number so they could investigate. Later they called back to say that I would be refunded via their payment partner for which I'd need to download an app.”<br><br>Thomas became suspicious and checked the X profile. 
“It looked like the real thing, but I noticed that there was an unexpected hyphen in the Twitter handle and that it had only joined X in July 2023,” he said.</i><br><br>In this example a newly created paid account on X was used to direct users to other platforms (T0146.002: Paid Account Asset, T0146.003: Verified Account Asset, T0146.005: Lookalike Account ID, T0097.205: Business Persona, T0122: Direct Users to Alternative Platforms, T0143.003: Impersonated Persona, T0151.008: Microblogging Platform, T0150.001: Newly Created Asset). |
| [I00126 Charming Kitten Updates POWERSTAR with an InterPlanetary Twist](../../generated_pages/incidents/I00126.md) | <i>The target of the recently observed [highly targeted spearphishing attack by “Charming Kitten”, a hacker group attributed to Iran] had published an article related to Iran. The publicity appears to have garnered the attention of Charming Kitten, who subsequently created an email address to impersonate a reporter of an Israeli media organization in order to send the target an email. Prior to sending malware to the target, the attacker simply asked if the target would be open to reviewing a document they had written related to US foreign policy. The target agreed to do so, since this was not an unusual request; they are frequently asked by journalists to review opinion pieces relating to their field of work.<br><br>In an effort to further gain the target's confidence, Charming Kitten continued the interaction with another benign email containing a list of questions, to which the target then responded with answers. After multiple days of benign and seemingly legitimate interaction, Charming Kitten finally sent a “draft report”; this was the first time anything opaquely malicious occurred. The “draft report” was, in fact, a password-protected RAR file containing a malicious LNK file. The password for the RAR file was provided in a subsequent email.</i><br><br>In this example, threat actors created an email address on a domain which impersonated an existing Israeli news organisation, posing as a reporter who worked there (T0097.102: Journalist Persona, T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0149.003: Lookalike Domain, T0149.002: Email Domain Asset) in order to convince the target to download a document containing malware (T0085.004: Develop Document, T0147.003: Malware Asset). |
| [I00127 Iranian APTs Dress Up as Hacktivists for Disruption, Influence Ops](../../generated_pages/incidents/I00127.md) | <i>Iranian state-backed advanced persistent threat (APT) groups have been masquerading as hacktivists, claiming attacks against Israeli critical infrastructure and air defense systems.<br><br>[...]<br><br>What's clearer are the benefits of the model itself: creating a layer of plausible deniability for the state, and the impression among the public that their attacks are grassroots-inspired. While this deniability has always been a key driver with state-sponsored cyberattacks, researchers characterized this instance as noteworthy for the effort behind the charade.<br><br>"We've seen a lot of hacktivist activity that seems to be nation-states trying to have that 'deniable' capability," Adam Meyers, CrowdStrike senior vice president for counter adversary operations said in a press conference this week. "And so these groups continue to maintain activity, moving from what was traditionally website defacements and DDoS attacks, into a lot of hack and leak operations."<br><br>To sell the persona, faketivists like to adopt the aesthetic, rhetoric, tactics, techniques, and procedures (TTPs), and sometimes the actual names and iconography associated with legitimate hacktivist outfits. Keen eyes will spot that they typically arise just after major geopolitical events, without an established history of activity, in alignment with the interests of their government sponsors.<br><br>Oftentimes, it's difficult to separate the faketivists from the hacktivists, as each might promote and support the activities of the other.</i><br><br>In this example analysts from CrowdStrike assert that hacker groups took on the persona of hacktivists to disguise the state-backed nature of their cyber attack campaign (T0097.104: Hacktivist Persona). 
At times state-backed hacktivists will impersonate existing hacktivist organisations (T0097.104: Hacktivist Persona, T0143.003: Impersonated Persona). |
| [I00129 Teen who hacked Joe Biden and Bill Gates' Twitter accounts sentenced to three years in prison](../../generated_pages/incidents/I00129.md) | <i>An 18-year-old hacker who pulled off a huge breach in 2020, infiltrating several high profile Twitter accounts to solicit bitcoin transactions, has agreed to serve three years in prison for his actions.<br><br>Graham Ivan Clark, of Florida, was 17 years old at the time of the hack in July, during which he took over a number of major accounts including those of Joe Biden, Bill Gates and Kim Kardashian West.<br><br>Once he accessed them, Clark tweeted a link to a bitcoin address and wrote “all bitcoin sent to our address below will be sent back to you doubled!” According to court documents, Clark made more than $100,000 from the scheme, which his lawyers say he has since returned.<br><br>Clark was able to access the accounts after convincing an employee at Twitter he worked in the company's information technology department, according to the Tampa Bay Times.</i><br><br>In this example a threat actor gained access to Twitter's customer service portal through social engineering (T0146.004: Administrator Account, T0150.005: Compromised, T0151.008: Microblogging Platform), which they used to take over accounts of public figures (T0146.003: Verified Account, T0143.003: Impersonated Persona, T0150.005: Compromised, T0151.008: Microblogging Platform).<br><br>The threat actor used these compromised accounts to trick their followers into sending bitcoin to their wallet (T0148.009: Cryptocurrency Wallet). |
| [I00129 Teen who hacked Joe Biden and Bill Gates' Twitter accounts sentenced to three years in prison](../../generated_pages/incidents/I00129.md) | <i>An 18-year-old hacker who pulled off a huge breach in 2020, infiltrating several high profile Twitter accounts to solicit bitcoin transactions, has agreed to serve three years in prison for his actions.<br><br>Graham Ivan Clark, of Florida, was 17 years old at the time of the hack in July, during which he took over a number of major accounts including those of Joe Biden, Bill Gates and Kim Kardashian West.<br><br>Once he accessed them, Clark tweeted a link to a bitcoin address and wrote “all bitcoin sent to our address below will be sent back to you doubled!” According to court documents, Clark made more than $100,000 from the scheme, which his lawyers say he has since returned.<br><br>Clark was able to access the accounts after convincing an employee at Twitter he worked in the company's information technology department, according to the Tampa Bay Times.</i><br><br>In this example a threat actor gained access to Twitter's customer service portal through social engineering (T0146.004: Administrator Account Asset, T0150.005: Compromised Asset, T0151.008: Microblogging Platform), which they used to take over accounts of public figures (T0146.003: Verified Account Asset, T0143.003: Impersonated Persona, T0150.005: Compromised Asset, T0151.008: Microblogging Platform).<br><br>The threat actor used these compromised accounts to trick their followers into sending bitcoin to their wallet (T0148.009: Cryptocurrency Wallet). |


@ -1,4 +1,4 @@
# Technique T0146.001: Free Account
# Technique T0146.001: Free Account Asset
* **Summary**: Many online platforms allow users to create free accounts on their platform. A Free Account is an Account which does not require payment at account creation and is not subscribed to paid platform features.
@ -7,7 +7,7 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00121 Operation Overload: how pro-Russian actors flood newsrooms with fake content and seek to divert their efforts](../../generated_pages/incidents/I00121.md) | <i>The unique aspect of Operation Overload is a barrage of emails sent to newsrooms and fact-checkers across Europe. The authors of these messages urge recipients to verify content allegedly found online. The email subject lines often include an incitement to verify the claims briefly described in the message body. This is followed by a short list of links directing recipients to posts on Telegram, X, or known pro-Russian websites, including Pravda and Sputnik. <br><br>We have collected 221 emails sent to 20 organisations. The organisations mostly received identical emails urging them to fact-check specific false stories, which demonstrates that the emails were sent as part of a larger coordinated campaign.<br><br>[...]<br><br>The authors of the emails do not hide their intention to see the fake content widely spread. In February 2024, a journalist at the German outlet CORRECTIV engaged with the sender of one of the emails, providing feedback on the narratives which were originally sent. CORRECTIV received a response from the same Gmail address, initially expressing respect and trust in CORRECTIV's assessment, while asking: “is it possible for your work to be seen by as many people as possible?”, thereby clearly stating the goal of the operation.<br><br>[...]<br><br>All the emails come from authors posing as concerned citizens. All emails are sent with Gmail accounts, which is typical for personal use. This makes it challenging to identify the individuals behind these emails, as anyone can open a Gmail account for free. 
The email headers indicate that the messages were sent from the Gmail interface, not from a personal client which would disclose the sender's IP address.</i><br><br>In this example, threat actors used Gmail accounts (T0146.001: Free Account, T0097.100: Individual Persona, T0143.002: Fabricated Persona, T0153.001: Email Platform) to target journalists and fact-checkers, with the apparent goal of having them amplify operation narratives through fact checks. |
| [I00121 Operation Overload: how pro-Russian actors flood newsrooms with fake content and seek to divert their efforts](../../generated_pages/incidents/I00121.md) | <i>The unique aspect of Operation Overload is a barrage of emails sent to newsrooms and fact-checkers across Europe. The authors of these messages urge recipients to verify content allegedly found online. The email subject lines often include an incitement to verify the claims briefly described in the message body. This is followed by a short list of links directing recipients to posts on Telegram, X, or known pro-Russian websites, including Pravda and Sputnik. <br><br>We have collected 221 emails sent to 20 organisations. The organisations mostly received identical emails urging them to fact-check specific false stories, which demonstrates that the emails were sent as part of a larger coordinated campaign.<br><br>[...]<br><br>The authors of the emails do not hide their intention to see the fake content widely spread. In February 2024, a journalist at the German outlet CORRECTIV engaged with the sender of one of the emails, providing feedback on the narratives which were originally sent. CORRECTIV received a response from the same Gmail address, initially expressing respect and trust in CORRECTIV's assessment, while asking: “is it possible for your work to be seen by as many people as possible?”, thereby clearly stating the goal of the operation.<br><br>[...]<br><br>All the emails come from authors posing as concerned citizens. All emails are sent with Gmail accounts, which is typical for personal use. This makes it challenging to identify the individuals behind these emails, as anyone can open a Gmail account for free. 
The email headers indicate that the messages were sent from the Gmail interface, not from a personal client which would disclose the sender's IP address.</i><br><br>In this example, threat actors used Gmail accounts (T0146.001: Free Account Asset, T0097.100: Individual Persona, T0143.002: Fabricated Persona, T0153.001: Email Platform) to target journalists and fact-checkers, with the apparent goal of having them amplify operation narratives through fact checks. |


@ -1,4 +1,4 @@
# Technique T0146.002: Paid Account
# Technique T0146.002: Paid Account Asset
* **Summary**: Some online platforms afford accounts extra features, or other benefits, if the user pays a fee. For example, as of September 2024, content posted by a Paid Account on X (previously Twitter) is prioritised in the platform's algorithm.
@ -7,7 +7,7 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00116 Blue-tick scammers target consumers who complain on X](../../generated_pages/incidents/I00116.md) | <i>Consumers who complain of poor customer service on X are being targeted by scammers after the social media platform formerly known as Twitter changed its account verification process.<br><br>Bank customers and airline passengers are among those at risk of phishing scams when they complain to companies via X. Fraudsters, masquerading as customer service agents, respond under fake X handles and trick victims into disclosing their bank details to get a promised refund.<br><br>They typically win the trust of victims by displaying the blue checkmark icon, which until this year denoted accounts that had been officially verified by X.<br><br>Changes introduced this year allow the icon to be bought by anyone who pays an £11 monthly fee for the site's subscription service, renamed this month from Twitter Blue to X Premium. Businesses that pay £950 a month receive a gold tick. X's terms and conditions do not state whether subscriber accounts are pre-vetted.<br><br>Andrew Thomas was contacted by a scam account after posting a complaint to the travel platform Booking.com. “I'd been trying since April to get a refund after our holiday flights were cancelled and finally resorted to X,” he said.<br><br>“I received a response asking me to follow them, and DM [direct message] them with a contact number. They then called me via WhatsApp asking for my reference number so they could investigate. Later they called back to say that I would be refunded via their payment partner for which I'd need to download an app.”<br><br>Thomas became suspicious and checked the X profile. 
“It looked like the real thing, but I noticed that there was an unexpected hyphen in the Twitter handle and that it had only joined X in July 2023,” he said.</i><br><br>In this example a newly created paid account on X was used to direct users to other platforms (T0146.002: Paid Account, T0146.003: Verified Account, T0146.005: Lookalike Account ID, T0097.205: Business Persona, T0122: Direct Users to Alternative Platforms, T0143.003: Impersonated Persona, T0151.008: Microblogging Platform, T0150.001: Newly Created). |
| [I00116 Blue-tick scammers target consumers who complain on X](../../generated_pages/incidents/I00116.md) | <i>Consumers who complain of poor customer service on X are being targeted by scammers after the social media platform formerly known as Twitter changed its account verification process.<br><br>Bank customers and airline passengers are among those at risk of phishing scams when they complain to companies via X. Fraudsters, masquerading as customer service agents, respond under fake X handles and trick victims into disclosing their bank details to get a promised refund.<br><br>They typically win the trust of victims by displaying the blue checkmark icon, which until this year denoted accounts that had been officially verified by X.<br><br>Changes introduced this year allow the icon to be bought by anyone who pays an £11 monthly fee for the site's subscription service, renamed this month from Twitter Blue to X Premium. Businesses that pay £950 a month receive a gold tick. X's terms and conditions do not state whether subscriber accounts are pre-vetted.<br><br>Andrew Thomas was contacted by a scam account after posting a complaint to the travel platform Booking.com. “I'd been trying since April to get a refund after our holiday flights were cancelled and finally resorted to X,” he said.<br><br>“I received a response asking me to follow them, and DM [direct message] them with a contact number. They then called me via WhatsApp asking for my reference number so they could investigate. Later they called back to say that I would be refunded via their payment partner for which I'd need to download an app.”<br><br>Thomas became suspicious and checked the X profile. 
“It looked like the real thing, but I noticed that there was an unexpected hyphen in the Twitter handle and that it had only joined X in July 2023,” he said.</i><br><br>In this example a newly created paid account on X was used to direct users to other platforms (T0146.002: Paid Account Asset, T0146.003: Verified Account Asset, T0146.005: Lookalike Account ID, T0097.205: Business Persona, T0122: Direct Users to Alternative Platforms, T0143.003: Impersonated Persona, T0151.008: Microblogging Platform, T0150.001: Newly Created Asset). |


@ -1,4 +1,4 @@
# Technique T0146.003: Verified Account
# Technique T0146.003: Verified Account Asset
* **Summary**: Some online platforms apply badges of verification to accounts which meet certain criteria.<br><br>On some platforms (such as dating apps) a verification badge signifies that the account has passed the platform's identity verification checks. On some platforms (such as X (previously Twitter)) a verification badge signifies that an account has paid for the platform's service.
@ -7,9 +7,9 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00113 Inside the Shadowy World of Disinformation for Hire in Kenya](../../generated_pages/incidents/I00113.md) | Researchers at Mozilla examined influence operations targeting Kenyan citizens on Twitter in 2021, providing <i>“a grim window into the booming and shadowy industry of Twitter influencers for political hire here in Kenya”.</i> The report touches upon how actors gained access to Twitter accounts, and what personas they presented:<br><br><i>Verified accounts are complicit. One influencer we spoke to mentioned that the people who own coveted “blue check” accounts will often rent them out for disinformation campaigns. These verified accounts can improve the campaign's chances of trending. Says one interviewee: “The owner of the account usually receives a cut of the campaign loot”.<br><br>[...]<br><br>Many of the accounts we examined appear to give an aura of authenticity, but in reality they are not authentic. Simply looking at their date of creation won't give you a hint as to their purpose. We had to dig deeper. The profile pictures and content of some of the accounts gave us the answers we were looking for. A common tactic these accounts utilize is using suggestive pictures of women to bait men into following them, or at least pay attention. In terms of content, many of these accounts tweeted off the same hashtags for days on end and will constantly retweet a specific set of accounts.</i><br><br>Actors participating in this operation rented out verified Twitter accounts (in 2021 a checkmark on Twitter verified a user's identity), which were repurposed and given updated account imagery (T0146.003: Verified Account, T0150.007: Rented, T0150.004: Repurposed, T0145.006: Attractive Person Account Imagery). |
| [I00116 Blue-tick scammers target consumers who complain on X](../../generated_pages/incidents/I00116.md) | <i>Consumers who complain of poor customer service on X are being targeted by scammers after the social media platform formerly known as Twitter changed its account verification process.<br><br>Bank customers and airline passengers are among those at risk of phishing scams when they complain to companies via X. Fraudsters, masquerading as customer service agents, respond under fake X handles and trick victims into disclosing their bank details to get a promised refund.<br><br>They typically win the trust of victims by displaying the blue checkmark icon, which until this year denoted accounts that had been officially verified by X.<br><br>Changes introduced this year allow the icon to be bought by anyone who pays an £11 monthly fee for the site's subscription service, renamed this month from Twitter Blue to X Premium. Businesses that pay £950 a month receive a gold tick. X's terms and conditions do not state whether subscriber accounts are pre-vetted.<br><br>Andrew Thomas was contacted by a scam account after posting a complaint to the travel platform Booking.com. “I'd been trying since April to get a refund after our holiday flights were cancelled and finally resorted to X,” he said.<br><br>“I received a response asking me to follow them, and DM [direct message] them with a contact number. They then called me via WhatsApp asking for my reference number so they could investigate. Later they called back to say that I would be refunded via their payment partner for which I'd need to download an app.”<br><br>Thomas became suspicious and checked the X profile. 
“It looked like the real thing, but I noticed that there was an unexpected hyphen in the Twitter handle and that it had only joined X in July 2023,” he said.</i><br><br>In this example a newly created paid account on X was used to direct users to other platforms (T0146.002: Paid Account, T0146.003: Verified Account, T0146.005: Lookalike Account ID, T0097.205: Business Persona, T0122: Direct Users to Alternative Platforms, T0143.003: Impersonated Persona, T0151.008: Microblogging Platform, T0150.001: Newly Created). |
| [I00120 factcheckUK or fakecheckUK? Reinventing the political faction as the impartial factchecker](../../generated_pages/incidents/I00120.md) | Ahead of the 2019 UK Election during a leaders' debate, the Conservative party rebranded their “Conservative Campaign Headquarters Press” account to “FactCheckUK”:<br><br><i>The evening of the 19th November 2019 saw the first of three Leaders Debates on ITV, starting at 8pm and lasting for an hour. Current Prime Minister and leader of the Conservatives, Boris Johnson faced off against Labour party leader, Jeremy Corbyn. Plenty of people will have been watching the debate live, but a good proportion were watching (er, “twitching”?) via Twitter. This is something I've done in the past for certain shows. In some cases I just can't watch or listen, but I can read, and in other cases, the commentary is far more interesting and entertaining than the show itself will ever be. This, for me, is just such a case. But very quickly, all eyes turned upon a modestly sized account with the handle @CCHQPress. That's short for Conservative Campaign Headquarters Press. According to their (current!) Twitter bio, they are based in Westminster and they provide snippets of news and commentary from CCHQ to their 75k followers.<br><br>That is, until a few minutes into the debate.<br><br>All at once, like a person throwing off their street clothes to reveal some sinister new identity underneath, @CCHQPress abruptly shed its name, blue Conservative logo, Boris Johnson banner, and bio description. Moments later, it had entirely reinvented itself.<br><br>The purple banner was emblazoned with white font that read “✓ factcheckUK [with a “FROM CCHQ” subheading]”.<br><br>The matching profile picture was a white tick in a purple circle. The bio was updated to: “Fact checking Labour from CCHQ”. And the name now read factcheckUK, with the customary Twitter blue (or white depending on your phone settings!) 
validation tick still after it.</i><br><br>In this example an existing verified social media account on Twitter was repurposed to inauthentically present itself as a Fact Checking service (T0151.008: Microblogging Platform, T0150.003: Pre-Existing, T0146.003: Verified Account, T0097.203: Fact Checking Organisation Persona, T0143.002: Fabricated Persona). |
| [I00113 Inside the Shadowy World of Disinformation for Hire in Kenya](../../generated_pages/incidents/I00113.md) | Researchers at Mozilla examined influence operations targeting Kenyan citizens on Twitter in 2021, providing <i>“a grim window into the booming and shadowy industry of Twitter influencers for political hire here in Kenya”.</i> The report touches upon how actors gained access to Twitter accounts, and what personas they presented:<br><br><i>Verified accounts are complicit. One influencer we spoke to mentioned that the people who own coveted “blue check” accounts will often rent them out for disinformation campaigns. These verified accounts can improve the campaigns chances of trending. Says one interviewee: “The owner of the account usually receives a cut of the campaign loot”.<br><br>[...]<br><br>Many of the accounts we examined appear to give an aura of authenticity, but in reality they are not authentic. Simply looking at their date of creation wont give you a hint as to their purpose. We had to dig deeper. The profile pictures and content of some of the accounts gave us the answers we were looking for. A common tactic these accounts utilize is using suggestive pictures of women to bait men into following them, or at least pay attention. In terms of content, many of these accounts tweeted off the same hashtags for days on end and will constantly retweet a specific set of accounts.</i><br><br>Actors participating in this operation rented out verified Twitter accounts (in 2021 a checkmark on Twitter verified a users identity), which were repurposed and given updated account imagery (T0146.003: Verified Account Asset, T0150.007: Rented Asset, T0150.004: Repurposed Asset, T0145.006: Attractive Person Account Imagery). |
| [I00116 Blue-tick scammers target consumers who complain on X](../../generated_pages/incidents/I00116.md) | <i>Consumers who complain of poor customer service on X are being targeted by scammers after the social media platform formerly known as Twitter changed its account verification process.<br><br>Bank customers and airline passengers are among those at risk of phishing scams when they complain to companies via X. Fraudsters, masquerading as customer service agents, respond under fake X handles and trick victims into disclosing their bank details to get a promised refund.<br><br>They typically win the trust of victims by displaying the blue checkmark icon, which until this year denoted accounts that had been officially verified by X.<br><br>Changes introduced this year allow the icon to be bought by anyone who pays an £11 monthly fee for the sites subscription service, renamed this month from Twitter Blue to X Premium. Businesses that pay £950 a month receive a gold tick. Xs terms and conditions do not state whether subscriber accounts are pre-vetted.<br><br>Andrew Thomas was contacted by a scam account after posting a complaint to the travel platform Booking.com. “Id been trying since April to get a refund after our holiday flights were cancelled and finally resorted to X,” he said.<br><br>“I received a response asking me to follow them, and DM [direct message] them with a contact number. They then called me via WhatsApp asking for my reference number so they could investigate. Later they called back to say that I would be refunded via their payment partner for which Id need to download an app.”<br><br>Thomas became suspicious and checked the X profile. “It looked like the real thing, but I noticed that there was an unexpected hyphen in the Twitter handle and that it had only joined X in July 2023,” he said.</i><br><br>In this example a paid account was newly created on X and used to direct users to other platforms (T0146.002: Paid Account Asset, T0146.003: Verified Account Asset, T0146.005: Lookalike Account ID, T0097.205: Business Persona, T0122: Direct Users to Alternative Platforms, T0143.003: Impersonated Persona, T0151.008: Microblogging Platform, T0150.001: Newly Created Asset). |
| [I00120 factcheckUK or fakecheckUK? Reinventing the political faction as the impartial factchecker](../../generated_pages/incidents/I00120.md) | Ahead of the 2019 UK Election during a leaders debate, the Conservative party rebranded their “Conservative Campaign Headquarters Press” account to “FactCheckUK”:<br><br><i>The evening of the 19th November 2019 saw the first of three Leaders Debates on ITV, starting at 8pm and lasting for an hour. Current Prime Minister and leader of the Conservatives, Boris Johnson faced off against Labour party leader, Jeremy Corbyn. Plenty of people will have been watching the debate live, but a good proportion were “watching” (er, “twitching”?) via Twitter. This is something Ive done in the past for certain shows. In some cases I just cant watch or listen, but I can read, and in other cases, the commentary is far more interesting and entertaining than the show itself will ever be. This, for me, is just such a case. But very quickly, all eyes turned upon a modestly sized account with the handle @CCHQPress. Thats short for Conservative Campaign Headquarters Press. According to their (current!) Twitter bio, they are based in Westminster and they provide “snippets of news and commentary from CCHQ” to their 75k followers.<br><br>That is, until a few minutes into the debate.<br><br>All at once, like a person throwing off their street clothes to reveal some sinister new identity underneath, @CCHQPress abruptly shed its name, blue Conservative logo, Boris Johnson banner, and bio description. Moments later, it had entirely reinvented itself.<br><br>The purple banner was emblazoned with white font that read “✓ factcheckUK [with a “FROM CCQH” subheading]”.<br><br>The matching profile picture was a white tick in a purple circle. The bio was updated to: “Fact checking Labour from CCHQ”. And the name now read factcheckUK, with the customary Twitter blue (or white depending on your phone settings!) validation tick still after it</i><br><br>In this example an existing verified social media account on Twitter was repurposed to inauthentically present itself as a Fact Checking service (T0151.008: Microblogging Platform, T0150.003: Pre-Existing Asset, T0146.003: Verified Account Asset, T0097.203: Fact Checking Organisation Persona, T0143.002: Fabricated Persona). |


@@ -1,4 +1,4 @@
# Technique T0146.004: Administrator Account
# Technique T0146.004: Administrator Account Asset
* **Summary**: Some accounts have special privileges or are in control of a Digital Community Hosting Asset; for example, the Admin of a Facebook Page or a Moderator of a Subreddit.
@@ -7,8 +7,8 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00101 Pro-Putin Disinformation Warriors Take War of Aggression to Reddit](../../generated_pages/incidents/I00101.md) | This report looks at changes in content posted to communities on Reddit (called Subreddits) after teams of voluntary moderators were replaced with what appear to be pro-Russian voices:<br><br><i>The r/antiwar subreddit appears to be a very recent takeover target. With 12,900 members it is not the largest community on Reddit, but this does place it squarely within the top 5% of all communities in terms of membership.<br><br>Three months ago a new moderator team was instated by subreddit head u/democracy101. Any posts documenting Russian aggression in Ukraine are now swiftly removed, while the board has been flooded with posts about how Ukraine is losing, or how American “neocons wrecked” the country.<br><br>The pinned post from moderator u/n0ahbody proclaims: “People who call for an end to Russian aggressions but not the Western aggressions Russia is reacting to dont really want peace.” This user takes the view that any negative opinion about Russia is “shaped by what the fanatically Russophobic MSM wants you to think,” and that the United States “is not threatened by its neighbors. Russia is.”<br><br>When u/n0ahbody took over the sub, the user posted a triumphant and vitriolic diatribe in another pro-Russia subreddit with some 33,500 members, r/EndlessWar. “We are making progress. We are purging the sub of all NAFO and NAFO-adjacent elements. Hundreds of them have been banned over the last 24 hours for various rule infractions, for being NAFO or NAFO-adjacent,” the user said, referencing the grassroots, pro-Ukrainian North Atlantic Fella Organization (NAFO) meme movement.<br><br>Several former users have reported they have indeed been banned from r/antiwar since the change in moderators. 
“If this subreddit cannot be explicitly against the invasion of Ukraine it will never truly be anti-war,” wrote one user Halcyon_Rein, in the antiwar subreddit on September 6. They then edited the post to say, “Edit: btw, I got f**king banned for this 💀💀💀”</i><br><br>A community hosted on Reddit was taken over by new moderators (T0151.011: Community Sub-Forum, T0150.005: Compromised). These moderators removed content posted to the community which favoured Ukraine over Russia (T0146.004: Administrator Account, T0151.011: Community Sub-Forum, T0124: Suppress Opposition). |
| [I00129 Teen who hacked Joe Biden and Bill Gates' Twitter accounts sentenced to three years in prison](../../generated_pages/incidents/I00129.md) | <i>An 18-year-old hacker who pulled off a huge breach in 2020, infiltrating several high profile Twitter accounts to solicit bitcoin transactions, has agreed to serve three years in prison for his actions.<br><br>Graham Ivan Clark, of Florida, was 17 years old at the time of the hack in July, during which he took over a number of major accounts including those of Joe Biden, Bill Gates and Kim Kardashian West.<br><br>Once he accessed them, Clark tweeted a link to a bitcoin address and wrote “all bitcoin sent to our address below will be sent back to you doubled!” According to court documents, Clark made more than $100,000 from the scheme, which his lawyers say he has since returned.<br><br>Clark was able to access the accounts after convincing an employee at Twitter he worked in the companys information technology department, according to the Tampa Bay Times.</i><br><br>In this example a threat actor gained access to Twitters customer service portal through social engineering (T0146.004: Administrator Account, T0150.005: Compromised, T0151.008: Microblogging Platform), which they used to take over accounts of public figures (T0146.003: Verified Account, T0143.003: Impersonated Persona, T0150.005: Compromised, T0151.008: Microblogging Platform).<br><br>The threat actor used these compromised accounts to trick their followers into sending bitcoin to their wallet (T0148.009: Cryptocurrency Wallet). |
| [I00101 Pro-Putin Disinformation Warriors Take War of Aggression to Reddit](../../generated_pages/incidents/I00101.md) | This report looks at changes in content posted to communities on Reddit (called Subreddits) after teams of voluntary moderators were replaced with what appear to be pro-Russian voices:<br><br><i>The r/antiwar subreddit appears to be a very recent takeover target. With 12,900 members it is not the largest community on Reddit, but this does place it squarely within the top 5% of all communities in terms of membership.<br><br>Three months ago a new moderator team was instated by subreddit head u/democracy101. Any posts documenting Russian aggression in Ukraine are now swiftly removed, while the board has been flooded with posts about how Ukraine is losing, or how American “neocons wrecked” the country.<br><br>The pinned post from moderator u/n0ahbody proclaims: “People who call for an end to Russian aggressions but not the Western aggressions Russia is reacting to dont really want peace.” This user takes the view that any negative opinion about Russia is “shaped by what the fanatically Russophobic MSM wants you to think,” and that the United States “is not threatened by its neighbors. Russia is.”<br><br>When u/n0ahbody took over the sub, the user posted a triumphant and vitriolic diatribe in another pro-Russia subreddit with some 33,500 members, r/EndlessWar. “We are making progress. We are purging the sub of all NAFO and NAFO-adjacent elements. Hundreds of them have been banned over the last 24 hours for various rule infractions, for being NAFO or NAFO-adjacent,” the user said, referencing the grassroots, pro-Ukrainian North Atlantic Fella Organization (NAFO) meme movement.<br><br>Several former users have reported they have indeed been banned from r/antiwar since the change in moderators. 
“If this subreddit cannot be explicitly against the invasion of Ukraine it will never truly be anti-war,” wrote one user Halcyon_Rein, in the antiwar subreddit on September 6. They then edited the post to say, “Edit: btw, I got f**king banned for this 💀💀💀”</i><br><br>A community hosted on Reddit was taken over by new moderators (T0151.011: Community Sub-Forum, T0150.005: Compromised Asset). These moderators removed content posted to the community which favoured Ukraine over Russia (T0146.004: Administrator Account Asset, T0151.011: Community Sub-Forum, T0124: Suppress Opposition). |
| [I00129 Teen who hacked Joe Biden and Bill Gates' Twitter accounts sentenced to three years in prison](../../generated_pages/incidents/I00129.md) | <i>An 18-year-old hacker who pulled off a huge breach in 2020, infiltrating several high profile Twitter accounts to solicit bitcoin transactions, has agreed to serve three years in prison for his actions.<br><br>Graham Ivan Clark, of Florida, was 17 years old at the time of the hack in July, during which he took over a number of major accounts including those of Joe Biden, Bill Gates and Kim Kardashian West.<br><br>Once he accessed them, Clark tweeted a link to a bitcoin address and wrote “all bitcoin sent to our address below will be sent back to you doubled!” According to court documents, Clark made more than $100,000 from the scheme, which his lawyers say he has since returned.<br><br>Clark was able to access the accounts after convincing an employee at Twitter he worked in the companys information technology department, according to the Tampa Bay Times.</i><br><br>In this example a threat actor gained access to Twitters customer service portal through social engineering (T0146.004: Administrator Account Asset, T0150.005: Compromised Asset, T0151.008: Microblogging Platform), which they used to take over accounts of public figures (T0146.003: Verified Account Asset, T0143.003: Impersonated Persona, T0150.005: Compromised Asset, T0151.008: Microblogging Platform).<br><br>The threat actor used these compromised accounts to trick their followers into sending bitcoin to their wallet (T0148.009: Cryptocurrency Wallet). |


@@ -7,8 +7,7 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00116 Blue-tick scammers target consumers who complain on X](../../generated_pages/incidents/I00116.md) | <i>Consumers who complain of poor customer service on X are being targeted by scammers after the social media platform formerly known as Twitter changed its account verification process.<br><br>Bank customers and airline passengers are among those at risk of phishing scams when they complain to companies via X. Fraudsters, masquerading as customer service agents, respond under fake X handles and trick victims into disclosing their bank details to get a promised refund.<br><br>They typically win the trust of victims by displaying the blue checkmark icon, which until this year denoted accounts that had been officially verified by X.<br><br>Changes introduced this year allow the icon to be bought by anyone who pays an £11 monthly fee for the sites subscription service, renamed this month from Twitter Blue to X Premium. Businesses that pay £950 a month receive a gold tick. Xs terms and conditions do not state whether subscriber accounts are pre-vetted.<br><br>Andrew Thomas was contacted by a scam account after posting a complaint to the travel platform Booking.com. “Id been trying since April to get a refund after our holiday flights were cancelled and finally resorted to X,” he said.<br><br>“I received a response asking me to follow them, and DM [direct message] them with a contact number. They then called me via WhatsApp asking for my reference number so they could investigate. Later they called back to say that I would be refunded via their payment partner for which Id need to download an app.”<br><br>Thomas became suspicious and checked the X profile. “It looked like the real thing, but I noticed that there was an unexpected hyphen in the Twitter handle and that it had only joined X in July 2023,” he said.</i><br><br>In this example a paid account was newly created on X and used to direct users to other platforms (T0146.002: Paid Account, T0146.003: Verified Account, T0146.005: Lookalike Account ID, T0097.205: Business Persona, T0122: Direct Users to Alternative Platforms, T0143.003: Impersonated Persona, T0151.008: Microblogging Platform, T0150.001: Newly Created). |
| [I00129 Teen who hacked Joe Biden and Bill Gates' Twitter accounts sentenced to three years in prison](../../generated_pages/incidents/I00129.md) | <i>An 18-year-old hacker who pulled off a huge breach in 2020, infiltrating several high profile Twitter accounts to solicit bitcoin transactions, has agreed to serve three years in prison for his actions.<br><br>Graham Ivan Clark, of Florida, was 17 years old at the time of the hack in July, during which he took over a number of major accounts including those of Joe Biden, Bill Gates and Kim Kardashian West.<br><br>Once he accessed them, Clark tweeted a link to a bitcoin address and wrote “all bitcoin sent to our address below will be sent back to you doubled!” According to court documents, Clark made more than $100,000 from the scheme, which his lawyers say he has since returned.<br><br>Clark was able to access the accounts after convincing an employee at Twitter he worked in the companys information technology department, according to the Tampa Bay Times.</i><br><br>In this example a threat actor gained access to Twitters customer service portal through social engineering (T0146.004: Administrator Account, T0150.005: Compromised, T0151.008: Microblogging Platform), which they used to take over accounts of public figures (T0146.003: Verified Account, T0143.003: Impersonated Persona, T0150.005: Compromised, T0151.008: Microblogging Platform).<br><br>The threat actor used these compromised accounts to trick their followers into sending bitcoin to their wallet (T0148.009: Cryptocurrency Wallet). |
| [I00116 Blue-tick scammers target consumers who complain on X](../../generated_pages/incidents/I00116.md) | <i>Consumers who complain of poor customer service on X are being targeted by scammers after the social media platform formerly known as Twitter changed its account verification process.<br><br>Bank customers and airline passengers are among those at risk of phishing scams when they complain to companies via X. Fraudsters, masquerading as customer service agents, respond under fake X handles and trick victims into disclosing their bank details to get a promised refund.<br><br>They typically win the trust of victims by displaying the blue checkmark icon, which until this year denoted accounts that had been officially verified by X.<br><br>Changes introduced this year allow the icon to be bought by anyone who pays an £11 monthly fee for the sites subscription service, renamed this month from Twitter Blue to X Premium. Businesses that pay £950 a month receive a gold tick. Xs terms and conditions do not state whether subscriber accounts are pre-vetted.<br><br>Andrew Thomas was contacted by a scam account after posting a complaint to the travel platform Booking.com. “Id been trying since April to get a refund after our holiday flights were cancelled and finally resorted to X,” he said.<br><br>“I received a response asking me to follow them, and DM [direct message] them with a contact number. They then called me via WhatsApp asking for my reference number so they could investigate. Later they called back to say that I would be refunded via their payment partner for which Id need to download an app.”<br><br>Thomas became suspicious and checked the X profile. “It looked like the real thing, but I noticed that there was an unexpected hyphen in the Twitter handle and that it had only joined X in July 2023,” he said.</i><br><br>In this example a paid account was newly created on X and used to direct users to other platforms (T0146.002: Paid Account Asset, T0146.003: Verified Account Asset, T0146.005: Lookalike Account ID, T0097.205: Business Persona, T0122: Direct Users to Alternative Platforms, T0143.003: Impersonated Persona, T0151.008: Microblogging Platform, T0150.001: Newly Created Asset). |

File diff suppressed because one or more lines are too long


@@ -1,4 +1,4 @@
# Technique T0146.007: Automated Account
# Technique T0146.007: Automated Account Asset
* **Summary**: An Automated Account is an account which displays automated behaviour, such as republishing or liking other accounts' content, or publishing its own content.


@@ -1,4 +1,4 @@
# Technique T0146: Account
# Technique T0146: Account Asset
* **Summary**: An Account is a user-specific profile that allows access to the features and services of an online platform, typically requiring a username and password for authentication.
@@ -7,12 +7,12 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00103 The racist AI deepfake that fooled and divided a community](../../generated_pages/incidents/I00103.md) | <i>“I seriously don't understand why I have to constantly put up with these dumbasses here every day.”<br><br>So began what appeared to be a long tirade from the principal of Pikesville High School, punctuated with racist, antisemitic and offensive tropes. It sounded like it had been secretly recorded.<br><br>The speaker went on to bemoan “ungrateful black kids” and Jewish people in the community.<br><br>The clip, first posted in [January 2024], went viral nationally. But it really struck a nerve in the peaceful, leafy suburb of Pikesville, which has large black and Jewish populations, and in the nearby city of Baltimore, Maryland. Principal Eric Eiswert was put on paid administrative leave pending an investigation.<br><br>[...]<br><br>But what those sharing the clip didnt realise at the time was that another bombshell was about to drop: the clip was an AI-generated fake.<br><br>[...]<br><br>[In April 2024], Baltimore Police Chief Robert McCullough confirmed they now had “conclusive evidence that the recording was not authentic”.<br><br>And they believed they knew who made the fake.<br><br>Police charged 31-year-old Dazhon Darien, the schools athletics director, with several counts related to the fake video. Charges included theft, retaliating against a witness and stalking.<br><br>He was arrested at the airport, where police say he was planning to fly to Houston, Texas.<br><br>Police say that Mr Darien had been under investigation by Principal Eiswert over an alleged theft of $1,916 (£1,460) from the school. They also allege there had been “work performance challenges” and his contract was likely not to be renewed.<br><br>Their theory was that by creating the deepfake recording, he hoped to discredit the principal before he could be fired.<br><br>Investigators say they traced an email used to send the original video to a server connected to Mr Darien, and allege that he used Baltimore County Public Schools' computer network to access AI tools. He is due to stand trial in December 2024.</i><br><br>By associating Mr Darien with the server used to email the original AI generated audio, investigators link Darien to the fabricated content (T0149.005: Server, T0088.001: AI Generated Audio (Deepfakes)). They also assert that Darien used computers owned by the school to access platforms used to generate the audio (T0146: Account, T0154.002: AI Media Platform). |
| [I00109 Coordinated Facebook Pages Designed to Fund a White Supremacist Agenda](../../generated_pages/incidents/I00109.md) | This report examines the white nationalist group Suavelos' use of Facebook to draw visitors to its website without overtly revealing its racist ideology. This section of the report looks at the Suavelos website, and the content it links out to.<br><br><i>In going back to Suavelos' main page, we also found: A link to a page on a web shop: alabastro.eu; A link to a page to donate money to the founders through Tipee and to the website through PayPal; [and] a link to a private forum that gathers 3.000 members: oppidum.suavelos.eu;</i><br><br>Suavelos linked out to an online store which it controlled (T0152.004: Website, T0148.004: Payment Processing Capability), and to accounts on payment processing platforms PayPal and Tipee (T0146: Account, T0148.003: Payment Processing Platform). <br><br>The Suavelos website also hosted a private forum (T0151.009: Legacy Online Forum Platform, T0155: Gated Asset), and linked out to a variety of assets it controlled on other online platforms: accounts on Twitter (T0146: Account, T0151.008: Microblogging Platform), YouTube (T0146: Account, T0152.006: Video Platform), Instagram and VKontakte (T0146: Account, T0151.001: Social Media Platform). |
| [I00110 How COVID-19 conspiracists and extremists use crowdfunding platforms to fund their activities](../../generated_pages/incidents/I00110.md) | The EU Disinfo Lab produced a report into disinformation published on crowdfunding platforms:<br><br><i>More worrisome is the direct monetisation of disinformation happening on crowdfunding platforms: on Kickstarter, we found a user openly raising money for a documentary project suggesting that COVID-19 is a conspiracy.</i><br><br>A Kickstarter user attempted to use the platform to fund production of a documentary (T0017: Conduct Fundraising, T0087: Develop Video-Based Content, T0146: Account, T0148.006: Crowdfunding Platform).<br><br><i>On Patreon, we found several instances of direct monetisation of COVID-19 disinformation, including posts promoting a device allegedly protecting against COVID-19 and 5G, as well as posts related to the “Plandemic” conspiracy video, which gained attention on YouTube before being removed by the platform.<br><br>We also found an account called “Stranger than fiction” entirely dedicated to disinformation, which openly states that their content was “Banned by screwtube and fakebook, our videos have been viewed over a billion times.”</i><br><br>The “Stranger than fiction” user presented itself as an alternative news source which had been banned from other platforms (T0146: Account, T0097.202: News Outlet Persona, T0121.001: Bypass Content Blocking, T0152.012: Subscription Service Platform).<br><br><i>On the US-based crowdfunding platform IndieGogo, EU DisinfoLab found a successful crowdfunding campaign of €133.903 for a book called Revolution Q. 
This book, now also available on Amazon, claims to be “Written for both newcomers and long-time QAnon fans alike, this book is a treasure-trove of information designed to help everyone weather The Storm.”</i><br><br>An IndieGogo account was used to gather funds to produce a book on QAnon (T0017: Conduct Fundraising, T0085.005: Develop Book, T0146: Account, T0148.006: Crowdfunding Platform), with the book later sold on Amazon marketplace (T0148.007: eCommerce Platform). |
| [I00111 Patreon Is Bankrolling Climate Change Deniers While We All Burn](../../generated_pages/incidents/I00111.md) | In this article VICE News discusses a report produced by Advance Democracy on people who use Patreon to spread the false claim that an impending ice age will reverse the harms of the ongoing climate crisis:<br><br><i>“The spread of climate misinformation is prolific on social media, as well as on sites like Patreon, where users are actually financially compensated through the platform for spreading falsehoods,” Daniel Jones, president of Advance Democracy, told VICE News.<br><br>“Companies hosting and promoting climate misinformation have a responsibility to take action to reduce dangerous misinformation, as falsehoods about climate science are every bit as dangerous as lies about vaccinations and disinformation about our elections.”<br><br>Patreon did not respond to VICE News' request for comment on the report's findings.<br><br>One of the biggest accounts spreading climate conspiracies is ADAPT 2030, which is run by David DuByne, who has 1,100 followers on Patreon. He is currently making over $3,500 every month from his subscribers.<br><br>[The science DuByne relies on does not support his hypothesis. However,] this has not prevented DuByne and many others from preying on people's fears about climate change to spread conspiracies about an impending ice age, which they say will miraculously fix all of earth's climate problems.<br><br>DuByne offers seven different membership levels for supporters, beginning at just $1 per month.<br><br>The most expensive costs $100 a month, and gives patrons “a private 20-minute call with David DuByne once per month, to discuss your particular preparedness issues or concerns.” So far just two people are paying this amount.<br><br>The researchers also found at least eight other accounts on Patreon that have spread climate change conspiracy theories as part of wider conspiracy sharing, including baseless claims about COVID-19 and the legitimacy of Joe Biden's presidency. Some of these accounts are earning over $600 per month.</i><br><br>David DuByne created an account on Patreon, which he uses to post text, videos, and podcasts for his subscribers to discuss (T0085: Develop Text-Based Content, T0087: Develop Video-Based Content, T0088: Develop Audio-Based Content, T0146: Account, T0115: Post Content, T0152.012: Subscription Service Platform, T0151.014: Comments Section, T0155.006: Subscription Access). |
| [I00118 War Thunder players are once again leaking sensitive military technology information on a video game forum](../../generated_pages/incidents/I00118.md) | <i>In an effort to prove that the developers behind a popular multiplayer vehicle combat game had made a mistake, a player went ahead and published classified British military documents about one of the real-life tanks featured in the game.<br><br>This truly bizarre turn of events recently occurred in the public forum for War Thunder, a free-to-play multiplayer combat sim featuring modern land, air, and sea craft. Getting a small detail wrong on a piece of equipment might not be a big deal for the average gamer, but for the War Thunder crowd it sure as hell is. With 25,000 devoted players, the game very much bills itself as the military vehicle combat simulator.<br><br>A player, who identified himself as a British tank commander, claimed that the game's developers at Gaijin Entertainment had inaccurately represented the Challenger 2 main battle tank used by the British military.<br><br>The self-described tank commander's bio listed his location as Tidworth Camp in Wiltshire, England, according to the UK Defense Journal, which reported that the base is home to the Royal Tank Regiment, which fields Challenger 2 tanks.<br><br>The player, who went by the handle Pyrophoric, reportedly shared an image on the War Thunder forum of the tank's specs that were pulled from the Challenger 2's Army Equipment Support Publication, which is essentially a technical manual.<br><br>[...]<br><br>A moderator for the forum, whose handle is “Templar_”, explained that the developer had removed the material after they received confirmation from the Ministry of Defense that the document is still in fact classified.</i><br><br>A user of War Thunder's forums posted confidential documents to win an argument (T0089.001: Obtain Authentic Documents, T0146: Account, T0097.105: Military Personnel Persona, T0115: Post Content, T0143.001: Authentic Persona, T0151.009: Legacy Online Forum Platform). |
| [I00125 The Agency](../../generated_pages/incidents/I00125.md) | In 2014 threat actors attributed to Russia spread the false narrative that a local chemical plant had leaked toxic fumes. This report discusses aspects of the operation:<br><br><i>[The chemical plant leak] hoax was just one in a wave of similar attacks during the second half of last year. On Dec. 13, two months after a handful of Ebola cases in the United States touched off a minor media panic, many of the same Twitter accounts used to spread the Columbian Chemicals hoax began to post about an outbreak of Ebola in Atlanta. [...] Again, the attention to detail was remarkable, suggesting a tremendous amount of effort. A YouTube video showed a team of hazmat-suited medical workers transporting a victim from the airport. Beyoncé’s recent single “7/11” played in the background, an apparent attempt to establish the video’s contemporaneity. A truck in the parking lot sported the logo of the Hartsfield-Jackson Atlanta International Airport.</i><br><br>Accounts which previously presented as Louisiana locals were repurposed for use in a different campaign, this time presenting as locals to Atlanta, a place over 500 miles away from Louisiana and in a different timezone (T0146: Account, T0097.101: Local Persona, T0143.002: Fabricated Persona, T0151.008: Microblogging Platform, T0150.004: Repurposed). <br><br>A video was created which appeared to support the campaign’s narrative (T0087: Develop Video-Based Content), with great attention given to small details which made the video appear more legitimate. |
| [I00103 The racist AI deepfake that fooled and divided a community](../../generated_pages/incidents/I00103.md) | <i>“I seriously don't understand why I have to constantly put up with these dumbasses here every day.”<br><br>So began what appeared to be a long tirade from the principal of Pikesville High School, punctuated with racist, antisemitic and offensive tropes. It sounded like it had been secretly recorded.<br><br>The speaker went on to bemoan “ungrateful black kids” and Jewish people in the community.<br><br>The clip, first posted in [January 2024], went viral nationally. But it really struck a nerve in the peaceful, leafy suburb of Pikesville, which has large black and Jewish populations, and in the nearby city of Baltimore, Maryland. Principal Eric Eiswert was put on paid administrative leave pending an investigation.<br><br>[...]<br><br>But what those sharing the clip didn’t realise at the time was that another bombshell was about to drop: the clip was an AI-generated fake.<br><br>[...]<br><br>[In April 2024], Baltimore Police Chief Robert McCullough confirmed they now had “conclusive evidence that the recording was not authentic”.<br><br>And they believed they knew who made the fake.<br><br>Police charged 31-year-old Dazhon Darien, the school’s athletics director, with several counts related to the fake video. Charges included theft, retaliating against a witness and stalking.<br><br>He was arrested at the airport, where police say he was planning to fly to Houston, Texas.<br><br>Police say that Mr Darien had been under investigation by Principal Eiswert over an alleged theft of $1,916 (£1,460) from the school. 
They also allege there had been “work performance challenges” and his contract was likely not to be renewed.<br><br>Their theory was that by creating the deepfake recording, he hoped to discredit the principal before he could be fired.<br><br>Investigators say they traced an email used to send the original video to a server connected to Mr Darien, and allege that he used Baltimore County Public Schools' computer network to access AI tools. He is due to stand trial in December 2024.</i><br><br>By associating Mr Darien with the server used to email the original AI-generated audio, investigators link Darien to the fabricated content (T0149.005: Server Asset, T0088.001: AI Generated Audio (Deepfakes)). They also assert that Darien used computers owned by the school to access platforms used to generate the audio (T0146: Account Asset, T0154.002: AI Media Platform). |
| [I00109 Coordinated Facebook Pages Designed to Fund a White Supremacist Agenda](../../generated_pages/incidents/I00109.md) | This report examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing its racist ideology. This section of the report looks at the Suavelos website, and the content it links out to.<br><br><i>In going back to Suavelos’ main page, we also found: A link to a page on a web shop: alabastro.eu; A link to a page to donate money to the founders through Tipee and to the website through PayPal; [and] a link to a private forum that gathers 3.000 members: oppidum.suavelos.eu;</i><br><br>Suavelos linked out to an online store which it controlled (T0152.004: Website Asset, T0148.004: Payment Processing Capability), and to accounts on payment processing platforms PayPal and Tipee (T0146: Account Asset, T0148.003: Payment Processing Platform). <br><br>The Suavelos website also hosted a private forum (T0151.009: Legacy Online Forum Platform, T0155: Gated Asset), and linked out to a variety of assets it controlled on other online platforms: accounts on Twitter (T0146: Account Asset, T0151.008: Microblogging Platform), YouTube (T0146: Account Asset, T0152.006: Video Platform), Instagram and VKontakte (T0146: Account Asset, T0151.001: Social Media Platform). |
| [I00110 How COVID-19 conspiracists and extremists use crowdfunding platforms to fund their activities](../../generated_pages/incidents/I00110.md) | The EU Disinfo Lab produced a report into disinformation published on crowdfunding platforms:<br><br><i>More worrisome is the direct monetisation of disinformation happening on crowdfunding platforms: on Kickstarter, we found a user openly raising money for a documentary project suggesting that COVID-19 is a conspiracy.</i><br><br>A Kickstarter user attempted to use the platform to fund production of a documentary (T0017: Conduct Fundraising, T0087: Develop Video-Based Content, T0146: Account Asset, T0148.006: Crowdfunding Platform).<br><br><i>On Patreon, we found several instances of direct monetisation of COVID-19 disinformation, including posts promoting a device allegedly protecting against COVID-19 and 5G, as well as posts related to the “Plandemic” conspiracy video, which gained attention on YouTube before being removed by the platform.<br><br>We also found an account called “Stranger than fiction” entirely dedicated to disinformation, which openly states that their content was “Banned by screwtube and fakebook, our videos have been viewed over a billion times.”</i><br><br>The “Stranger than fiction” user presented itself as an alternative news source which had been banned from other platforms (T0146: Account Asset, T0097.202: News Outlet Persona, T0121.001: Bypass Content Blocking, T0152.012: Subscription Service Platform).<br><br><i>On the US-based crowdfunding platform IndieGogo, EU DisinfoLab found a successful crowdfunding campaign of €133.903 for a book called Revolution Q. 
This book, now also available on Amazon, claims to be “Written for both newcomers and long-time QAnon fans alike, this book is a treasure-trove of information designed to help everyone weather The Storm.”</i><br><br>An IndieGogo account was used to gather funds to produce a book on QAnon (T0017: Conduct Fundraising, T0085.005: Develop Book, T0146: Account Asset, T0148.006: Crowdfunding Platform), with the book later sold on Amazon marketplace (T0148.007: eCommerce Platform). |
| [I00111 Patreon Is Bankrolling Climate Change Deniers While We All Burn](../../generated_pages/incidents/I00111.md) | In this article VICE News discusses a report produced by Advance Democracy on people who use Patreon to spread the false claim that an impending ice age will reverse the harms of the ongoing climate crisis:<br><br><i>“The spread of climate misinformation is prolific on social media, as well as on sites like Patreon, where users are actually financially compensated through the platform for spreading falsehoods,” Daniel Jones, president of Advance Democracy, told VICE News.<br><br>“Companies hosting and promoting climate misinformation have a responsibility to take action to reduce dangerous misinformation, as falsehoods about climate science are every bit as dangerous as lies about vaccinations and disinformation about our elections.”<br><br>Patreon did not respond to VICE News’ request for comment on the report’s findings.<br><br>One of the biggest accounts spreading climate conspiracies is ADAPT 2030, which is run by David DuByne, who has 1,100 followers on Patreon. He is currently making over $3,500 every month from his subscribers.<br><br>[The science DuByne relies on does not support his hypothesis. 
However,] this has not prevented DuByne and many others from preying on people’s fears about climate change to spread conspiracies about an impending ice age, which they say will miraculously fix all of earth’s climate problems.<br><br>DuByne offers seven different membership levels for supporters, beginning at just $1 per month.<br><br>The most expensive costs $100 a month, and gives patrons “a private 20-minute call with David DuByne once per month, to discuss your particular preparedness issues or concerns.” So far just two people are paying this amount.<br><br>The researchers also found at least eight other accounts on Patreon that have spread climate change conspiracy theories as part of wider conspiracy sharing, including baseless claims about COVID-19 and the legitimacy of Joe Biden’s presidency. Some of these accounts are earning over $600 per month.</i><br><br>David DuByne created an account on Patreon, which he uses to post text, videos, and podcasts for his subscribers to discuss (T0085: Develop Text-Based Content, T0087: Develop Video-Based Content, T0088: Develop Audio-Based Content, T0146: Account Asset, T0115: Post Content, T0152.012: Subscription Service Platform, T0151.014: Comments Section, T0155.006: Subscription Access Asset). |
| [I00118 War Thunder players are once again leaking sensitive military technology information on a video game forum](../../generated_pages/incidents/I00118.md) | <i>In an effort to prove that the developers behind a popular multiplayer vehicle combat game had made a mistake, a player went ahead and published classified British military documents about one of the real-life tanks featured in the game.<br><br>This truly bizarre turn of events recently occurred in the public forum for War Thunder, a free-to-play multiplayer combat sim featuring modern land, air, and sea craft. Getting a small detail wrong on a piece of equipment might not be a big deal for the average gamer, but for the War Thunder crowd it sure as hell is. With 25,000 devoted players, the game very much bills itself as the military vehicle combat simulator.<br><br>A player, who identified himself as a British tank commander, claimed that the game’s developers at Gaijin Entertainment had inaccurately represented the Challenger 2 main battle tank used by the British military.<br><br>The self-described tank commander’s bio listed his location as Tidworth Camp in Wiltshire, England, according to the UK Defense Journal, which reported that the base is home to the Royal Tank Regiment, which fields Challenger 2 tanks.<br><br>The player, who went by the handle Pyrophoric, reportedly shared an image on the War Thunder forum of the tank’s specs that were pulled from the Challenger 2’s Army Equipment Support Publication, which is essentially a technical manual. 
<br><br>[...]<br><br>A moderator for the forum, whose handle is “Templar_”, explained that the developer had removed the material after they received confirmation from the Ministry of Defense that the document is still in fact classified.</i><br><br>A user of War Thunder’s forums posted confidential documents to win an argument (T0089.001: Obtain Authentic Documents, T0146: Account Asset, T0097.105: Military Personnel Persona, T0115: Post Content, T0143.001: Authentic Persona, T0151.009: Legacy Online Forum Platform). |
| [I00125 The Agency](../../generated_pages/incidents/I00125.md) | In 2014 threat actors attributed to Russia spread the false narrative that a local chemical plant had leaked toxic fumes. This report discusses aspects of the operation:<br><br><i>[The chemical plant leak] hoax was just one in a wave of similar attacks during the second half of last year. On Dec. 13, two months after a handful of Ebola cases in the United States touched off a minor media panic, many of the same Twitter accounts used to spread the Columbian Chemicals hoax began to post about an outbreak of Ebola in Atlanta. [...] Again, the attention to detail was remarkable, suggesting a tremendous amount of effort. A YouTube video showed a team of hazmat-suited medical workers transporting a victim from the airport. Beyoncé’s recent single “7/11” played in the background, an apparent attempt to establish the video’s contemporaneity. A truck in the parking lot sported the logo of the Hartsfield-Jackson Atlanta International Airport.</i><br><br>Accounts which previously presented as Louisiana locals were repurposed for use in a different campaign, this time presenting as locals to Atlanta, a place over 500 miles away from Louisiana and in a different timezone (T0146: Account Asset, T0097.101: Local Persona, T0143.002: Fabricated Persona, T0151.008: Microblogging Platform, T0150.004: Repurposed Asset). <br><br>A video was created which appeared to support the campaign’s narrative (T0087: Develop Video-Based Content), with great attention given to small details which made the video appear more legitimate. |


@@ -1,4 +1,4 @@
# Technique T0147.001: Game
# Technique T0147.001: Game Asset
* **Summary**: A Game is Software which has been designed for interactive entertainment, where users take on challenges set by the game’s designers.<br><br>While Online Game Platforms allow people to play with each other, Games are designed for single player experiences.
@@ -7,8 +7,8 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00098 Gaming The System: How Extremists Exploit Gaming Sites And What Can Be Done To Counter Them](../../generated_pages/incidents/I00098.md) | This report looks at how extremists exploit games and gaming adjacent platforms:<br><br><i>Ethnic Cleansing, a first-person shooter game created by the U.S. white supremacist organization National Alliance in 2002, was advertised as follows: “[T]he Race War has already begun. Your character, Will, runs through a ghetto blasting away at various blacks and spics to attempt to gain entrance to the subway system...where the jews have hidden to avoid the carnage. Then you get to blow away jews as they scream Oy Vey! on your way to the command center.”<br><br>While games like Ethnic Cleansing —and others of a similar genocidal variety, including titles like Shoot the Blacks, Hatred, and Muslim Massacre —generate some press attention and controversy, they tend to be ignored by the large majority of gamers, who consider them “crude” and “badly designed.” Nevertheless, these games are still available in some corners of the Internet and have been downloaded by hundreds of thousands of users. <br><br>A second strategy involves the modification (“modding”) of existing video games to twist the narrative into an extremist message or fantasy. One example is the Islamic State terrorist group’s modding of the first-person shooter video game, ARMA III, to make Islamic fighters the heroic protagonists rather than the villains. With their powerful immersive quality, these video games have, in some instances, been effective at inspiring and allegedly training extremists to perpetrate real-world attacks.<br><br>Third, extremist actors have used in-game chat functions to open lines of communication with ideological sympathizers and potential recruits. 
Although the lack of in-game communication data has made it impossible for academic researchers to track the presence of extremists in a systematic way, there is enough anecdotal evidence—including evidence obtained from police investigation files—to infer that in-game chatrooms can and do function as “radicalization funnels” in at least some cases.</i><br><br>White supremacists created a game aligned with their ideology (T0147.001: Game). The Islamic State terrorist group created a mod of the game ARMA 3 (T0151.015: Online Game Platform, T0147.002: Game Mod). Extremists also use communication features available in online games to recruit new members. |
| [I00105 Gaming the System: The Use of Gaming-Adjacent Communication, Game and Mod Platforms by Extremist Actors](../../generated_pages/incidents/I00105.md) | <i>In this report, researchers look at online platforms commonly used by people who play videogames, looking at how these platforms can contribute to radicalisation of gamers:<br><br>Indie DB [is a platform that serves] to present indie games, which are titles from independent, small developer teams, which can be discussed and downloaded [Indie DB]. <br><br>[...]<br><br>[On Indie DB we] found antisemitic, Islamist, sexist and other discriminatory content during the exploration. Both games and comments were located that made positive references to National Socialism. Radicalised users seem to use the opportunities to network, create groups, and communicate in forums. In addition, a number of member profiles with graphic propaganda content could be located. We found several games with propagandistic content advertised on the platform, which caused apparently radicalised users to form a fan community around those games. For example, there are titles such as the antisemitic Fursan Al-Aqsa: The Knights of the Al-Aqsa Mosque, which has won the Best Hardcore Game award at the Game Connection America 2024 Game Development Awards and which has received antisemitic praise on its review page. In the game, players need to target members of the Israeli Defence Forces, and attacks by Palestinian terrorist groups such as Hamas or Lions’ Den can be re-enacted. These include bomb attacks, beheadings and the re-enactment of the terrorist attack in Israel on 7 October 2023. 
We, therefore, deem Indie DB to be relevant for further analyses of extremist activities in digital gaming spaces.</i><br><br>Indie DB is an online software delivery platform on which users can create groups, and participate in discussion forums (T0152.009: Software Delivery Platform, T0151.002: Online Community Group, T0151.009: Legacy Online Forum Platform). The platform hosted games which allowed players to reenact terrorist attacks (T0147.001: Game). |
| [I00098 Gaming The System: How Extremists Exploit Gaming Sites And What Can Be Done To Counter Them](../../generated_pages/incidents/I00098.md) | This report looks at how extremists exploit games and gaming adjacent platforms:<br><br><i>Ethnic Cleansing, a first-person shooter game created by the U.S. white supremacist organization National Alliance in 2002, was advertised as follows: “[T]he Race War has already begun. Your character, Will, runs through a ghetto blasting away at various blacks and spics to attempt to gain entrance to the subway system...where the jews have hidden to avoid the carnage. Then you get to blow away jews as they scream Oy Vey! on your way to the command center.”<br><br>While games like Ethnic Cleansing —and others of a similar genocidal variety, including titles like Shoot the Blacks, Hatred, and Muslim Massacre —generate some press attention and controversy, they tend to be ignored by the large majority of gamers, who consider them “crude” and “badly designed.” Nevertheless, these games are still available in some corners of the Internet and have been downloaded by hundreds of thousands of users. <br><br>A second strategy involves the modification (“modding”) of existing video games to twist the narrative into an extremist message or fantasy. One example is the Islamic State terrorist group’s modding of the first-person shooter video game, ARMA III, to make Islamic fighters the heroic protagonists rather than the villains. With their powerful immersive quality, these video games have, in some instances, been effective at inspiring and allegedly training extremists to perpetrate real-world attacks.<br><br>Third, extremist actors have used in-game chat functions to open lines of communication with ideological sympathizers and potential recruits. 
Although the lack of in-game communication data has made it impossible for academic researchers to track the presence of extremists in a systematic way, there is enough anecdotal evidence—including evidence obtained from police investigation files—to infer that in-game chatrooms can and do function as “radicalization funnels” in at least some cases.</i><br><br>White supremacists created a game aligned with their ideology (T0147.001: Game Asset). The Islamic State terrorist group created a mod of the game ARMA 3 (T0151.015: Online Game Platform, T0147.002: Game Mod Asset). Extremists also use communication features available in online games to recruit new members. |
| [I00105 Gaming the System: The Use of Gaming-Adjacent Communication, Game and Mod Platforms by Extremist Actors](../../generated_pages/incidents/I00105.md) | <i>In this report, researchers look at online platforms commonly used by people who play videogames, looking at how these platforms can contribute to radicalisation of gamers:<br><br>Indie DB [is a platform that serves] to present indie games, which are titles from independent, small developer teams, which can be discussed and downloaded [Indie DB]. <br><br>[...]<br><br>[On Indie DB we] found antisemitic, Islamist, sexist and other discriminatory content during the exploration. Both games and comments were located that made positive references to National Socialism. Radicalised users seem to use the opportunities to network, create groups, and communicate in forums. In addition, a number of member profiles with graphic propaganda content could be located. We found several games with propagandistic content advertised on the platform, which caused apparently radicalised users to form a fan community around those games. For example, there are titles such as the antisemitic Fursan Al-Aqsa: The Knights of the Al-Aqsa Mosque, which has won the Best Hardcore Game award at the Game Connection America 2024 Game Development Awards and which has received antisemitic praise on its review page. In the game, players need to target members of the Israeli Defence Forces, and attacks by Palestinian terrorist groups such as Hamas or Lions’ Den can be re-enacted. These include bomb attacks, beheadings and the re-enactment of the terrorist attack in Israel on 7 October 2023. 
We, therefore, deem Indie DB to be relevant for further analyses of extremist activities in digital gaming spaces.</i><br><br>Indie DB is an online software delivery platform on which users can create groups, and participate in discussion forums (T0152.009: Software Delivery Platform, T0151.002: Online Community Group, T0151.009: Legacy Online Forum Platform). The platform hosted games which allowed players to reenact terrorist attacks (T0147.001: Game Asset). |


@@ -1,4 +1,4 @@
# Technique T0147.002: Game Mod
# Technique T0147.002: Game Mod Asset
* **Summary**: A Game Mod is a modification which can be applied to a Game or Multiplayer Online Game to add new content or functionality to the game.<br><br>Users can Modify Games to introduce new content to the game. Modified Games can be distributed on Software Delivery Platforms such as Steam or can be distributed within the Game or Multiplayer Online Game.
@@ -7,8 +7,8 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00098 Gaming The System: How Extremists Exploit Gaming Sites And What Can Be Done To Counter Them](../../generated_pages/incidents/I00098.md) | This report looks at how extremists exploit games and gaming adjacent platforms:<br><br><i>Ethnic Cleansing, a first-person shooter game created by the U.S. white supremacist organization National Alliance in 2002, was advertised as follows: “[T]he Race War has already begun. Your character, Will, runs through a ghetto blasting away at various blacks and spics to attempt to gain entrance to the subway system...where the jews have hidden to avoid the carnage. Then you get to blow away jews as they scream Oy Vey! on your way to the command center.”<br><br>While games like Ethnic Cleansing —and others of a similar genocidal variety, including titles like Shoot the Blacks, Hatred, and Muslim Massacre —generate some press attention and controversy, they tend to be ignored by the large majority of gamers, who consider them “crude” and “badly designed.” Nevertheless, these games are still available in some corners of the Internet and have been downloaded by hundreds of thousands of users. <br><br>A second strategy involves the modification (“modding”) of existing video games to twist the narrative into an extremist message or fantasy. One example is the Islamic State terrorist group’s modding of the first-person shooter video game, ARMA III, to make Islamic fighters the heroic protagonists rather than the villains. With their powerful immersive quality, these video games have, in some instances, been effective at inspiring and allegedly training extremists to perpetrate real-world attacks.<br><br>Third, extremist actors have used in-game chat functions to open lines of communication with ideological sympathizers and potential recruits. 
Although the lack of in-game communication data has made it impossible for academic researchers to track the presence of extremists in a systematic way, there is enough anecdotal evidence—including evidence obtained from police investigation files—to infer that in-game chatrooms can and do function as “radicalization funnels” in at least some cases.</i><br><br>White supremacists created a game aligned with their ideology (T0147.001: Game). The Islamic State terrorist group created a mod of the game ARMA 3 (T0151.015: Online Game Platform, T0147.002: Game Mod). Extremists also use communication features available in online games to recruit new members. |
| [I00105 Gaming the System: The Use of Gaming-Adjacent Communication, Game and Mod Platforms by Extremist Actors](../../generated_pages/incidents/I00105.md) | In this report, researchers look at online platforms commonly used by people who play videogames, looking at how these platforms can contribute to radicalisation of gamers:<br><br><i>Gamebanana and Mod DB are so-called modding platforms that allow users to post their modifications of existing (popular) games. In the process of modding, highly radicalised content can be inserted into games that did not originally contain it. All of these platforms also have communication functions and customisable profiles.<br><br>[...]<br><br>During the explorations, several modifications with hateful themes were located, including right-wing extremist, racist, antisemitic and Islamist content. This includes mods that make it possible to play as terrorists or National Socialists. So-called “skins” (textures that change the appearance of models in the game) for characters from first-person shooters are particularly popular and contain references to National Socialism or Islamist terrorist organisations. Although some of this content could be justified with reference to historical accuracy and realism, the user profiles of the creators and commentators often reveal political motivations. Names with neo-Nazi codes or the use of avatars showing members of the Wehrmacht or the Waffen SS, for example, indicate a certain degree of positive appreciation or fascination with right-wing ideology, as do affirmations in the comment columns.<br><br>Mod DB in particular has attracted public attention in the past. For example, a mod for the game Half-Life 2 made it possible to play a school shooting with the weapons used during the attacks at Columbine High School (1999) and Virginia Polytechnic Institute and State University (2007). Antisemitic memes and jokes are shared in several groups on the platform. 
It seems as if users partially connect with each other because of shared political views. There were also indications that Islamist and right-wing extremist users network on the basis of shared views on women, Jews or homosexuals. In addition to relevant usernames and avatars, we found profiles featuring picture galleries, backgrounds and banners dedicated to the SS. Extremist propaganda and radicalisation processes on modding platforms have not been explored yet, but our exploration suggests these digital spaces to be highly relevant for our field.</i><br><br>Mod DB is a platform which allows users to upload mods for games, which other users can download (T0152.009: Software Delivery Platform, T0147.002: Game Mod). |
| [I00098 Gaming The System: How Extremists Exploit Gaming Sites And What Can Be Done To Counter Them](../../generated_pages/incidents/I00098.md) | This report looks at how extremists exploit games and gaming adjacent platforms:<br><br><i>Ethnic Cleansing, a first-person shooter game created by the U.S. white supremacist organization National Alliance in 2002, was advertised as follows: “[T]he Race War has already begun. Your character, Will, runs through a ghetto blasting away at various blacks and spics to attempt to gain entrance to the subway system...where the jews have hidden to avoid the carnage. Then you get to blow away jews as they scream Oy Vey! on your way to the command center.”<br><br>While games like Ethnic Cleansing —and others of a similar genocidal variety, including titles like Shoot the Blacks, Hatred, and Muslim Massacre —generate some press attention and controversy, they tend to be ignored by the large majority of gamers, who consider them “crude” and “badly designed.” Nevertheless, these games are still available in some corners of the Internet and have been downloaded by hundreds of thousands of users. <br><br>A second strategy involves the modification (“modding”) of existing video games to twist the narrative into an extremist message or fantasy. One example is the Islamic State terrorist group’s modding of the first-person shooter video game, ARMA III, to make Islamic fighters the heroic protagonists rather than the villains. With their powerful immersive quality, these video games have, in some instances, been effective at inspiring and allegedly training extremists to perpetrate real-world attacks.<br><br>Third, extremist actors have used in-game chat functions to open lines of communication with ideological sympathizers and potential recruits. 
Although the lack of in-game communication data has made it impossible for academic researchers to track the presence of extremists in a systematic way, there is enough anecdotal evidence—including evidence obtained from police investigation files—to infer that in-game chatrooms can and do function as “radicalization funnels” in at least some cases.</i><br><br>White supremacists created a game aligned with their ideology (T0147.001: Game Asset). The Islamic State terrorist group created a mod of the game ARMA 3 (T0151.015: Online Game Platform, T0147.002: Game Mod Asset). Extremists also use communication features available in online games to recruit new members. |
| [I00105 Gaming the System: The Use of Gaming-Adjacent Communication, Game and Mod Platforms by Extremist Actors](../../generated_pages/incidents/I00105.md) | In this report, researchers look at online platforms commonly used by people who play videogames, looking at how these platforms can contribute to radicalisation of gamers:<br><br><i>Gamebanana and Mod DB are so-called modding platforms that allow users to post their modifications of existing (popular) games. In the process of modding, highly radicalised content can be inserted into games that did not originally contain it. All of these platforms also have communication functions and customisable profiles.<br><br>[...]<br><br>During the explorations, several modifications with hateful themes were located, including right-wing extremist, racist, antisemitic and Islamist content. This includes mods that make it possible to play as terrorists or National Socialists. So-called “skins” (textures that change the appearance of models in the game) for characters from first-person shooters are particularly popular and contain references to National Socialism or Islamist terrorist organisations. Although some of this content could be justified with reference to historical accuracy and realism, the user profiles of the creators and commentators often reveal political motivations. Names with neo-Nazi codes or the use of avatars showing members of the Wehrmacht or the Waffen SS, for example, indicate a certain degree of positive appreciation or fascination with right-wing ideology, as do affirmations in the comment columns.<br><br>Mod DB in particular has attracted public attention in the past. For example, a mod for the game Half-Life 2 made it possible to play a school shooting with the weapons used during the attacks at Columbine High School (1999) and Virginia Polytechnic Institute and State University (2007). Antisemitic memes and jokes are shared in several groups on the platform. 
It seems as if users partially connect with each other because of shared political views. There were also indications that Islamist and right-wing extremist users network on the basis of shared views on women, Jews or homosexuals. In addition to relevant usernames and avatars, we found profiles featuring picture galleries, backgrounds and banners dedicated to the SS. Extremist propaganda and radicalisation processes on modding platforms have not been explored yet, but our exploration suggests these digital spaces to be highly relevant for our field.</i><br><br>Mod DB is a platform which allows users to upload mods for games, which other users can download (T0152.009: Software Delivery Platform, T0147.002: Game Mod Asset). |

View file

@ -1,4 +1,4 @@
# Technique T0147.003: Malware
# Technique T0147.003: Malware Asset
* **Summary**: Malware is Software which has been designed to cause harm or facilitate malicious behaviour on electronic devices.<br><br>DISARM recommends using the [MITRE ATT&CK Framework](https://attack.mitre.org/) to document malware types and their usage.
@ -7,7 +7,7 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00126 Charming Kitten Updates POWERSTAR with an InterPlanetary Twist](../../generated_pages/incidents/I00126.md) | <i>The target of the recently observed [highly targeted spearphishing attack by “Charming Kitten”, a hacker group attributed to Iran] had published an article related to Iran. The publicity appears to have garnered the attention of Charming Kitten, who subsequently created an email address to impersonate a reporter of an Israeli media organization in order to send the target an email. Prior to sending malware to the target, the attacker simply asked if the target would be open to reviewing a document they had written related to US foreign policy. The target agreed to do so, since this was not an unusual request; they are frequently asked by journalists to review opinion pieces relating to their field of work.<br><br>In an effort to further gain the targets confidence, Charming Kitten continued the interaction with another benign email containing a list of questions, to which the target then responded with answers. After multiple days of benign and seemingly legitimate interaction, Charming Kitten finally sent a “draft report”; this was the first time anything opaquely malicious occurred. The “draft report” was, in fact, a password-protected RAR file containing a malicious LNK file. The password for the RAR file was provided in a subsequent email.</i><br><br>In this example, threat actors created an email address on a domain which impersonated an existing Israeli news organisation impersonating a reporter who worked there (T0097.102: Journalist Persona, T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0149.003: Lookalike Domain, T0149.002: Email Domain) in order to convince the target to download a document containing malware (T0085.004: Develop Document, T0147.003: Malware). |
| [I00126 Charming Kitten Updates POWERSTAR with an InterPlanetary Twist](../../generated_pages/incidents/I00126.md) | <i>The target of the recently observed [highly targeted spearphishing attack by “Charming Kitten”, a hacker group attributed to Iran] had published an article related to Iran. The publicity appears to have garnered the attention of Charming Kitten, who subsequently created an email address to impersonate a reporter of an Israeli media organization in order to send the target an email. Prior to sending malware to the target, the attacker simply asked if the target would be open to reviewing a document they had written related to US foreign policy. The target agreed to do so, since this was not an unusual request; they are frequently asked by journalists to review opinion pieces relating to their field of work.<br><br>In an effort to further gain the target's confidence, Charming Kitten continued the interaction with another benign email containing a list of questions, to which the target then responded with answers. After multiple days of benign and seemingly legitimate interaction, Charming Kitten finally sent a “draft report”; this was the first time anything opaquely malicious occurred. The “draft report” was, in fact, a password-protected RAR file containing a malicious LNK file. The password for the RAR file was provided in a subsequent email.</i><br><br>In this example, threat actors created an email address on a domain which impersonated an existing Israeli news organisation, posing as a reporter who worked there (T0097.102: Journalist Persona, T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0149.003: Lookalike Domain, T0149.002: Email Domain Asset) in order to convince the target to download a document containing malware (T0085.004: Develop Document, T0147.003: Malware Asset). |

View file

@ -1,4 +1,4 @@
# Technique T0147.004: Mobile App
# Technique T0147.004: Mobile App Asset
* **Summary**: A Mobile App is an application which has been designed to run on mobile operating systems, such as Android or iOS.<br><br>Mobile Apps can enable access to online platforms (e.g. Facebook's mobile app) or can provide software which users can run offline on their device.

View file

@ -1,4 +1,4 @@
# Technique T0147: Software
# Technique T0147: Software Asset
* **Summary**: Software is a program developed to run on computers or devices that helps users achieve specific goals, such as improving productivity, automating tasks, or having fun.

View file

@ -7,7 +7,7 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00113 Inside the Shadowy World of Disinformation for Hire in Kenya](../../generated_pages/incidents/I00113.md) | Researchers at Mozilla examined influence operations targeting Kenyan citizens on Twitter in 2021, providing “a grim window into the booming and shadowy industry of Twitter influencers for political hire here in Kenya”, and giving insight into operations operationalisation:<br><br><i>In our interviews with one of the influencers, they informed us of the agile tactics they use to organize and avoid detection. For example, when its time to carry out the campaign the influencers would be added to a Whatsapp group. Here, they received direction about what to post, the hashtags to use, which tweets to engage with and who to target. Synchronizing the tweets was also incredibly important for them. Its what enables them to achieve their goal of trending on Twitter and gain amplification.<br><br>[...]<br><br>They revealed to us that those participating in the exercise are paid roughly between $10 and $15 to participate in three campaigns per day. Each campaign execution involves tweeting about the hashtags of the day until it appears on the trending section of Twitter. Additionally, some individuals have managed to reach retainer level and get paid about $250 per month. Their job is to make sure the campaigns are executed on a day-by-day basis with different hashtags.</i><br><br>An M-PESA account (T0148.002: Bank Account, T0148.001: Online Banking Platform) was used to pay campaign participants.<br><br>Participants were organised in WhatsApp groups (T0129.005: Coordinate on Encrypted/Closed Networks, T0151.007: Chat Broadcast Group, T0151.004: Chat Platform), in which they planned how to get campaign content trending on Twitter (T0121: Manipulate Platform Algorithm, T0151.008: Microblogging Platform). |
| [I00113 Inside the Shadowy World of Disinformation for Hire in Kenya](../../generated_pages/incidents/I00113.md) | Researchers at Mozilla examined influence operations targeting Kenyan citizens on Twitter in 2021, providing “a grim window into the booming and shadowy industry of Twitter influencers for political hire here in Kenya”, and giving insight into the operations' operationalisation:<br><br><i>In our interviews with one of the influencers, they informed us of the agile tactics they use to organize and avoid detection. For example, when it's time to carry out the campaign the influencers would be added to a Whatsapp group. Here, they received direction about what to post, the hashtags to use, which tweets to engage with and who to target. Synchronizing the tweets was also incredibly important for them. It's what enables them to achieve their goal of trending on Twitter and gain amplification.<br><br>[...]<br><br>They revealed to us that those participating in the exercise are paid roughly between $10 and $15 to participate in three campaigns per day. Each campaign execution involves tweeting about the hashtags of the day until it appears on the trending section of Twitter. Additionally, some individuals have managed to reach retainer level and get paid about $250 per month. Their job is to make sure the campaigns are executed on a day-by-day basis with different hashtags.</i><br><br>An M-PESA account (T0148.002: Bank Account Asset, T0148.001: Online Banking Platform) was used to pay campaign participants.<br><br>Participants were organised in WhatsApp groups (T0129.005: Coordinate on Encrypted/Closed Networks, T0151.007: Chat Broadcast Group, T0151.004: Chat Platform), in which they planned how to get campaign content trending on Twitter (T0121: Manipulate Platform Algorithm, T0151.008: Microblogging Platform). |

View file

@ -1,4 +1,4 @@
# Technique T0148.002: Bank Account
# Technique T0148.002: Bank Account Asset
* **Summary**: A Bank Account is a financial account that allows individuals or organisations to store, manage, and access their money, typically for saving, spending, or investment purposes.
@ -7,7 +7,7 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00113 Inside the Shadowy World of Disinformation for Hire in Kenya](../../generated_pages/incidents/I00113.md) | Researchers at Mozilla examined influence operations targeting Kenyan citizens on Twitter in 2021, providing “a grim window into the booming and shadowy industry of Twitter influencers for political hire here in Kenya”, and giving insight into operations operationalisation:<br><br><i>In our interviews with one of the influencers, they informed us of the agile tactics they use to organize and avoid detection. For example, when its time to carry out the campaign the influencers would be added to a Whatsapp group. Here, they received direction about what to post, the hashtags to use, which tweets to engage with and who to target. Synchronizing the tweets was also incredibly important for them. Its what enables them to achieve their goal of trending on Twitter and gain amplification.<br><br>[...]<br><br>They revealed to us that those participating in the exercise are paid roughly between $10 and $15 to participate in three campaigns per day. Each campaign execution involves tweeting about the hashtags of the day until it appears on the trending section of Twitter. Additionally, some individuals have managed to reach retainer level and get paid about $250 per month. Their job is to make sure the campaigns are executed on a day-by-day basis with different hashtags.</i><br><br>An M-PESA account (T0148.002: Bank Account, T0148.001: Online Banking Platform) was used to pay campaign participants.<br><br>Participants were organised in WhatsApp groups (T0129.005: Coordinate on Encrypted/Closed Networks, T0151.007: Chat Broadcast Group, T0151.004: Chat Platform), in which they planned how to get campaign content trending on Twitter (T0121: Manipulate Platform Algorithm, T0151.008: Microblogging Platform). |
| [I00113 Inside the Shadowy World of Disinformation for Hire in Kenya](../../generated_pages/incidents/I00113.md) | Researchers at Mozilla examined influence operations targeting Kenyan citizens on Twitter in 2021, providing “a grim window into the booming and shadowy industry of Twitter influencers for political hire here in Kenya”, and giving insight into the operations' operationalisation:<br><br><i>In our interviews with one of the influencers, they informed us of the agile tactics they use to organize and avoid detection. For example, when it's time to carry out the campaign the influencers would be added to a Whatsapp group. Here, they received direction about what to post, the hashtags to use, which tweets to engage with and who to target. Synchronizing the tweets was also incredibly important for them. It's what enables them to achieve their goal of trending on Twitter and gain amplification.<br><br>[...]<br><br>They revealed to us that those participating in the exercise are paid roughly between $10 and $15 to participate in three campaigns per day. Each campaign execution involves tweeting about the hashtags of the day until it appears on the trending section of Twitter. Additionally, some individuals have managed to reach retainer level and get paid about $250 per month. Their job is to make sure the campaigns are executed on a day-by-day basis with different hashtags.</i><br><br>An M-PESA account (T0148.002: Bank Account Asset, T0148.001: Online Banking Platform) was used to pay campaign participants.<br><br>Participants were organised in WhatsApp groups (T0129.005: Coordinate on Encrypted/Closed Networks, T0151.007: Chat Broadcast Group, T0151.004: Chat Platform), in which they planned how to get campaign content trending on Twitter (T0121: Manipulate Platform Algorithm, T0151.008: Microblogging Platform). |

View file

@ -7,8 +7,8 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00109 Coordinated Facebook Pages Designed to Fund a White Supremacist Agenda](../../generated_pages/incidents/I00109.md) | This report examines the white nationalist group Suavelos use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at the Suavelos website, and the content it links out to.<br><br><i>In going back to Suavelos main page, we also found: A link to a page on a web shop: alabastro.eu; A link to a page to donate money to the founders through Tipee and to the website through PayPal; [and] a link to a private forum that gathers 3.000 members: oppidum.suavelos.eu;</i><br><br>Suavelos linked out to an online store which it controlled (T0152.004: Website, T0148.004: Payment Processing Capability), and to accounts on payment processing platforms PayPal and Tipee (T0146: Account, T0148.003: Payment Processing Platform). <br><br>The Suavelos website also hosted a private forum (T0151.009: Legacy Online Forum Platform, T0155: Gated Asset), and linked out to a variety of assets it controlled on other online platforms: accounts on Twitter (T0146: Account, T0151.008: Microblogging Platform), YouTube (T0146: Account, T0152.006: Video Platform), Instagram and VKontakte (T0146: Account, T0151.001: Social Media Platform). |
| [I00112 Patreon allows disinformation and conspiracies to be monetised in Spain](../../generated_pages/incidents/I00112.md) | In this report EU DisinfoLab identified 17 Spanish accounts that monetise and/or spread QAnon content or other conspiracy theories on Patreon. Content produced by these accounts go against Patreons stated policy of removing creators that “advance disinformation promoting the QAnon conspiracy theory”. EU DisinfoLab found:<br><br>In most cases, the creators monetise the content directly on Patreon (posts are only accessible for people sponsoring the creators) but there are also cases of indirect monetization (monetization through links leading to other platforms), an aspect that was flagged and analysed by Eu DisinfoLab in the mentioned previous report.<br><br>Some creators display links that redirects users to other platforms such as YouTube or LBRY where they can monetise their content. Some even offer almost all of their videos for free by redirecting to their YouTube channel.<br><br>Another modus operandi is for the creators to advertise on Patreon that they are looking for financing through PayPal or provide the authors email to explore other financing alternatives.<br><br>[...]<br><br>Sometimes the content offered for a fee on Patreon is freely accessible on other platforms. Creators openly explain that they seek voluntary donation on Patreon, but that their creations will be public on YouTube. This means that the model of these platforms does not always involve a direct monetisation of the content. Creators who have built a strong reputation previously on other platforms can use Patreon as a platform to get some sponsorship which is not related to the content and give them more freedom to create.<br><br>Some users explicitly claim to use Patreon as a secondary channel to evade “censorship” by other platforms such as YouTube. 
Patreon seems to be perceived as a safe haven for disinformation and fringe content that has been suppressed for violating the policies of other platforms. For example, Jorge Guerra points out how Patreon acts as a back-up channel to go to in case of censorship by YouTube. Meanwhile, Alfa Mind openly claims to use Patreon to publish content that is not allowed on YouTube. “Exclusive access to videos that are prohibited on YouTube. These videos are only accessible on Patreon, as their content is very explicit and shocking”, are offered to the patrons who pay €3 per month.</i><br><br>In spite of Patreons stated policy, actors use accounts on their platform to generate revenue or donations for their content, and to provide a space to host content which was removed from other platforms (T0146: Account, T0152.012: Subscription Service Platform, T0121.001: Bypass Content Bocking).<br><br>Some actors were observed accepting donations via PayPal (T0146: Account, T0148.003: Payment Processing Platform). |
| [I00109 Coordinated Facebook Pages Designed to Fund a White Supremacist Agenda](../../generated_pages/incidents/I00109.md) | This report examines the white nationalist group Suavelos' use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at the Suavelos website, and the content it links out to.<br><br><i>In going back to Suavelos' main page, we also found: A link to a page on a web shop: alabastro.eu; A link to a page to donate money to the founders through Tipee and to the website through PayPal; [and] a link to a private forum that gathers 3.000 members: oppidum.suavelos.eu;</i><br><br>Suavelos linked out to an online store which it controlled (T0152.004: Website Asset, T0148.004: Payment Processing Capability), and to accounts on payment processing platforms PayPal and Tipee (T0146: Account Asset, T0148.003: Payment Processing Platform). <br><br>The Suavelos website also hosted a private forum (T0151.009: Legacy Online Forum Platform, T0155: Gated Asset), and linked out to a variety of assets it controlled on other online platforms: accounts on Twitter (T0146: Account Asset, T0151.008: Microblogging Platform), YouTube (T0146: Account Asset, T0152.006: Video Platform), Instagram and VKontakte (T0146: Account Asset, T0151.001: Social Media Platform). |
| [I00112 Patreon allows disinformation and conspiracies to be monetised in Spain](../../generated_pages/incidents/I00112.md) | In this report EU DisinfoLab identified 17 Spanish accounts that monetise and/or spread QAnon content or other conspiracy theories on Patreon. Content produced by these accounts goes against Patreon's stated policy of removing creators that “advance disinformation promoting the QAnon conspiracy theory”. EU DisinfoLab found:<br><br><i>In most cases, the creators monetise the content directly on Patreon (posts are only accessible for people sponsoring the creators) but there are also cases of indirect monetization (monetization through links leading to other platforms), an aspect that was flagged and analysed by Eu DisinfoLab in the mentioned previous report.<br><br>Some creators display links that redirects users to other platforms such as YouTube or LBRY where they can monetise their content. Some even offer almost all of their videos for free by redirecting to their YouTube channel.<br><br>Another modus operandi is for the creators to advertise on Patreon that they are looking for financing through PayPal or provide the author's email to explore other financing alternatives.<br><br>[...]<br><br>Sometimes the content offered for a fee on Patreon is freely accessible on other platforms. Creators openly explain that they seek voluntary donation on Patreon, but that their creations will be public on YouTube. This means that the model of these platforms does not always involve a direct monetisation of the content. Creators who have built a strong reputation previously on other platforms can use Patreon as a platform to get some sponsorship which is not related to the content and give them more freedom to create.<br><br>Some users explicitly claim to use Patreon as a secondary channel to evade “censorship” by other platforms such as YouTube. 
Patreon seems to be perceived as a safe haven for disinformation and fringe content that has been suppressed for violating the policies of other platforms. For example, Jorge Guerra points out how Patreon acts as a back-up channel to go to in case of censorship by YouTube. Meanwhile, Alfa Mind openly claims to use Patreon to publish content that is not allowed on YouTube. “Exclusive access to videos that are prohibited on YouTube. These videos are only accessible on Patreon, as their content is very explicit and shocking”, are offered to the patrons who pay €3 per month.</i><br><br>In spite of Patreon's stated policy, actors use accounts on their platform to generate revenue or donations for their content, and to provide a space to host content which was removed from other platforms (T0146: Account Asset, T0152.012: Subscription Service Platform, T0121.001: Bypass Content Blocking).<br><br>Some actors were observed accepting donations via PayPal (T0146: Account Asset, T0148.003: Payment Processing Platform). |

View file

@ -7,7 +7,7 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00109 Coordinated Facebook Pages Designed to Fund a White Supremacist Agenda](../../generated_pages/incidents/I00109.md) | This report examines the white nationalist group Suavelos use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at the Suavelos website, and the content it links out to.<br><br><i>In going back to Suavelos main page, we also found: A link to a page on a web shop: alabastro.eu; A link to a page to donate money to the founders through Tipee and to the website through PayPal; [and] a link to a private forum that gathers 3.000 members: oppidum.suavelos.eu;</i><br><br>Suavelos linked out to an online store which it controlled (T0152.004: Website, T0148.004: Payment Processing Capability), and to accounts on payment processing platforms PayPal and Tipee (T0146: Account, T0148.003: Payment Processing Platform). <br><br>The Suavelos website also hosted a private forum (T0151.009: Legacy Online Forum Platform, T0155: Gated Asset), and linked out to a variety of assets it controlled on other online platforms: accounts on Twitter (T0146: Account, T0151.008: Microblogging Platform), YouTube (T0146: Account, T0152.006: Video Platform), Instagram and VKontakte (T0146: Account, T0151.001: Social Media Platform)., This report examines the white nationalist group Suavelos use of Facebook to draw visitors to its website without overtly revealing their racist ideology. 
This section of the report looks at technical indicators associated with the Suavelos website, and attributions which can be made as a consequence:<i><br><br>[The Google AdSense tag set up on Suavelos.eu was also found on the following domains, indicating that they are controlled by the same actor;] Alabastro.eu: an online shop to buy “white nationalists” t-shirts [and] ARPAC.eu: the website of a registered non-profit organisation advocating to lift regulation on gun control in France.<br><br>Other domains attributed to Suavelos (T0149.001: Domain) reveal a website set up to sell merchandise (T0152.004: Website, T0148.004: Payment Processing Capability, T0061: Sell Merchandise), and a website hosting a registered French non-profit (T0152.004: Website, T0097.207: NGO Persona).<br><br>To learn more about the suavelos.eu domain, we collected the following data: The domain is hosted on OVH; The owners identity is protected; The IP Address of the server is 94.23.253.173, which is shared with 20 other domains. <br><br>The relative low number of websites hosted on this IP address could indicate that they all belong to the same people, and are hosted on the same private server.</i><br><br>Suavelos registered a domain using the web hosting provider OVH (T0149.001: Domain, T0152.003: Website Hosting Platform, T0150.006: Purchased). The sites IP address reveals a server hosting other domains potentially owned by the actors (T0149.005: Server, T0149.006: IP Address). |
| [I00109 Coordinated Facebook Pages Designed to Fund a White Supremacist Agenda](../../generated_pages/incidents/I00109.md) | This report examines the white nationalist group Suavelos' use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at the Suavelos website, and the content it links out to.<br><br><i>In going back to Suavelos' main page, we also found: A link to a page on a web shop: alabastro.eu; A link to a page to donate money to the founders through Tipee and to the website through PayPal; [and] a link to a private forum that gathers 3.000 members: oppidum.suavelos.eu;</i><br><br>Suavelos linked out to an online store which it controlled (T0152.004: Website Asset, T0148.004: Payment Processing Capability), and to accounts on payment processing platforms PayPal and Tipee (T0146: Account Asset, T0148.003: Payment Processing Platform). <br><br>The Suavelos website also hosted a private forum (T0151.009: Legacy Online Forum Platform, T0155: Gated Asset), and linked out to a variety of assets it controlled on other online platforms: accounts on Twitter (T0146: Account Asset, T0151.008: Microblogging Platform), YouTube (T0146: Account Asset, T0152.006: Video Platform), Instagram and VKontakte (T0146: Account Asset, T0151.001: Social Media Platform)., This report examines the white nationalist group Suavelos' use of Facebook to draw visitors to its website without overtly revealing their racist ideology. 
This section of the report looks at technical indicators associated with the Suavelos website, and attributions which can be made as a consequence:<i><br><br>[The Google AdSense tag set up on Suavelos.eu was also found on the following domains, indicating that they are controlled by the same actor;] Alabastro.eu: an online shop to buy “white nationalists” t-shirts [and] ARPAC.eu: the website of a registered non-profit organisation advocating to lift regulation on gun control in France.<br><br>Other domains attributed to Suavelos (T0149.001: Domain Asset) reveal a website set up to sell merchandise (T0152.004: Website Asset, T0148.004: Payment Processing Capability, T0061: Sell Merchandise), and a website hosting a registered French non-profit (T0152.004: Website Asset, T0097.207: NGO Persona).<br><br>To learn more about the suavelos.eu domain, we collected the following data: The domain is hosted on OVH; The owner's identity is protected; The IP Address of the server is 94.23.253.173, which is shared with 20 other domains. <br><br>The relative low number of websites hosted on this IP address could indicate that they all belong to the same people, and are hosted on the same private server.</i><br><br>Suavelos registered a domain using the web hosting provider OVH (T0149.001: Domain Asset, T0152.003: Website Hosting Platform, T0150.006: Purchased). The site's IP address reveals a server hosting other domains potentially owned by the actors (T0149.005: Server Asset, T0149.006: IP Address Asset). |
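The attribution pivot described in this incident (linking multiple domains to one operator because their pages embed the same Google AdSense publisher tag) can be sketched in a few lines of Python. The function name and sample HTML below are illustrative, not taken from the report:

```python
import re

def adsense_publisher_ids(html: str) -> set[str]:
    """Extract Google AdSense publisher IDs (ca-pub-...) from page HTML.

    Pages carrying the same publisher ID pay ad revenue into the same
    AdSense account, which is why analysts pivot on it to attribute
    common ownership across otherwise unrelated domains.
    """
    return set(re.findall(r"ca-pub-\d{10,16}", html))

# Two hypothetical pages sharing a publisher ID would be linked:
page_a = '<script data-ad-client="ca-pub-1234567890123456" async></script>'
page_b = '<ins class="adsbygoogle" data-ad-client="ca-pub-1234567890123456"></ins>'
assert adsense_publisher_ids(page_a) == adsense_publisher_ids(page_b)
```

Shared IDs only establish that the sites are monetised through the same AdSense account holder; further indicators (hosting, registrant data, shared IP addresses, as in this report) are needed to strengthen the attribution.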

View file

@ -7,7 +7,7 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00110 How COVID-19 conspiracists and extremists use crowdfunding platforms to fund their activities](../../generated_pages/incidents/I00110.md) | The EU Disinfo Lab produced a report into disinformation published on crowdfunding platforms:<br><br><br><br><i>More worrisome is the direct monetisation of disinformation happening on crowdfunding platforms: on Kickstarter, we found a user openly raising money for a documentary project suggesting that COVID-19 is a conspiracy.</i><br><br>A Kickstarter user attempted to use the platform to fund production of a documentary (T0017: Conduct Fundraising, T0087: Develop Video-Based Content, T0146: Account, T0148.006: Crowdfunding Platform).<br><br><i>On Patreon, we found several instances of direct monetisation of COVID-19 disinformation, including posts promoting a device allegedly protecting against COVID-19 and 5G, as well as posts related to the “Plandemic” conspiracy video, which gained attention on YouTube before being removed by the platform.<br><br>We also found an account called “Stranger than fiction” entirely dedicated to disinformation, which openly states that their content was “Banned by screwtube and fakebook, our videos have been viewed over a billion times.”</i><br><br>The “Stranger than fiction” user presented itself as an alternative news source which had been banned from other platforms (T0146: Account, T0097.202: News Outlet Persona, T0121.001: Bypass Content Bocking, T0152.012: Subscription Service Platform).<br><br><i>On the US-based crowdfunding platform IndieGogo, EU DisinfoLab found a successful crowdfunding campaign of €133.903 for a book called Revolution Q. 
This book, now also available on Amazon, claims to be “Written for both newcomers and long-time QAnon fans alike, this book is a treasure-trove of information designed to help everyone weather The Storm.”</i><br><br>An IndieGogo account was used to gather funds to produce a book on QAnon (T0017: Conduct Fundraising, T0085.005: Develop Book, T0146: Account, T0148.006: Crowdfunding Platform), with the book later sold on Amazon marketplace (T0148.007: eCommerce Platform). |
| [I00110 How COVID-19 conspiracists and extremists use crowdfunding platforms to fund their activities](../../generated_pages/incidents/I00110.md) | The EU Disinfo Lab produced a report into disinformation published on crowdfunding platforms:<br><br><i>More worrisome is the direct monetisation of disinformation happening on crowdfunding platforms: on Kickstarter, we found a user openly raising money for a documentary project suggesting that COVID-19 is a conspiracy.</i><br><br>A Kickstarter user attempted to use the platform to fund production of a documentary (T0017: Conduct Fundraising, T0087: Develop Video-Based Content, T0146: Account Asset, T0148.006: Crowdfunding Platform).<br><br><i>On Patreon, we found several instances of direct monetisation of COVID-19 disinformation, including posts promoting a device allegedly protecting against COVID-19 and 5G, as well as posts related to the “Plandemic” conspiracy video, which gained attention on YouTube before being removed by the platform.<br><br>We also found an account called “Stranger than fiction” entirely dedicated to disinformation, which openly states that their content was “Banned by screwtube and fakebook, our videos have been viewed over a billion times.”</i><br><br>The “Stranger than fiction” user presented itself as an alternative news source which had been banned from other platforms (T0146: Account Asset, T0097.202: News Outlet Persona, T0121.001: Bypass Content Blocking, T0152.012: Subscription Service Platform).<br><br><i>On the US-based crowdfunding platform IndieGogo, EU DisinfoLab found a successful crowdfunding campaign of €133.903 for a book called Revolution Q. 
This book, now also available on Amazon, claims to be “Written for both newcomers and long-time QAnon fans alike, this book is a treasure-trove of information designed to help everyone weather The Storm.”</i><br><br>An IndieGogo account was used to gather funds to produce a book on QAnon (T0017: Conduct Fundraising, T0085.005: Develop Book, T0146: Account Asset, T0148.006: Crowdfunding Platform), with the book later sold on Amazon marketplace (T0148.007: eCommerce Platform). |

View file

@ -7,8 +7,8 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00106 Facebook Is Being Flooded With Gross AI-Generated Images of Hurricane Helene Devastation](../../generated_pages/incidents/I00106.md) | <i>As families desperately seek to find missing loved ones and communities grapple with immeasurable losses of both life and property in the wake of [2024's] Hurricane Helene, AI slop scammers appear to be capitalizing on the moment for personal gain.<br><br>A Facebook account called "Coastal Views" usually shares calmer AI imagery of nature-filled beachside scenes. The account's banner image showcases a signpost reading "OBX Live," OBX being shorthand for North Carolina's Outer Banks islands.<br><br>But starting this weekend, the account shifted its approach dramatically, as first flagged by a social media user on X.<br><br>Instead of posting "photos" of leaping dolphins and sandy beaches, the account suddenly started publishing images of flooded mountain neighborhoods, submerged houses, and dogs sitting on top of roofs.<br><br>But instead of spreading vital information to those affected by the natural disaster, or at the very least sharing real photos of the destruction, the account is seemingly trying to use AI to cash in on all the attention the hurricane has been getting.<br><br>The account links to an Etsy page for a business called "OuterBanks2023," where somebody who goes by "Alexandr" sells AI-generated prints of horses touching snouts with sea turtles, Santa running down the shoreline with a reindeer, and sunsets over ocean waves.</i><br><br>A Facebook page which presented itself as being associated with North Carolina, and which had posted AI generated images, changed to posting AI generated images of hurricane damage after Hurricane Helene hit North Carolina (T0151.003: Online Community Page, T0151.001: Social Media Platform, T0115: Post Content, T0086.002: Develop AI-Generated Images (Deepfakes), T0068: Respond to Breaking News Event or Active Crisis). 
<br><br>The account included links (T0122: Direct Users to Alternative Platforms) to an account on Etsy, which sold prints of AI generated images (T0146: Account, T0148.007: eCommerce Platform). |
| [I00110 How COVID-19 conspiracists and extremists use crowdfunding platforms to fund their activities](../../generated_pages/incidents/I00110.md) | The EU Disinfo Lab produced a report into disinformation published on crowdfunding platforms:<br><br><br><br><i>More worrisome is the direct monetisation of disinformation happening on crowdfunding platforms: on Kickstarter, we found a user openly raising money for a documentary project suggesting that COVID-19 is a conspiracy.</i><br><br>A Kickstarter user attempted to use the platform to fund production of a documentary (T0017: Conduct Fundraising, T0087: Develop Video-Based Content, T0146: Account, T0148.006: Crowdfunding Platform).<br><br><i>On Patreon, we found several instances of direct monetisation of COVID-19 disinformation, including posts promoting a device allegedly protecting against COVID-19 and 5G, as well as posts related to the “Plandemic” conspiracy video, which gained attention on YouTube before being removed by the platform.<br><br>We also found an account called “Stranger than fiction” entirely dedicated to disinformation, which openly states that their content was “Banned by screwtube and fakebook, our videos have been viewed over a billion times.”</i><br><br>The “Stranger than fiction” user presented itself as an alternative news source which had been banned from other platforms (T0146: Account, T0097.202: News Outlet Persona, T0121.001: Bypass Content Blocking, T0152.012: Subscription Service Platform).<br><br><i>On the US-based crowdfunding platform IndieGogo, EU DisinfoLab found a successful crowdfunding campaign of €133.903 for a book called Revolution Q. 
This book, now also available on Amazon, claims to be “Written for both newcomers and long-time QAnon fans alike, this book is a treasure-trove of information designed to help everyone weather The Storm.”</i><br><br>An IndieGogo account was used to gather funds to produce a book on QAnon (T0017: Conduct Fundraising, T0085.005: Develop Book, T0146: Account, T0148.006: Crowdfunding Platform), with the book later sold on Amazon marketplace (T0148.007: eCommerce Platform). |
| [I00106 Facebook Is Being Flooded With Gross AI-Generated Images of Hurricane Helene Devastation](../../generated_pages/incidents/I00106.md) | <i>As families desperately seek to find missing loved ones and communities grapple with immeasurable losses of both life and property in the wake of [2024's] Hurricane Helene, AI slop scammers appear to be capitalizing on the moment for personal gain.<br><br>A Facebook account called "Coastal Views" usually shares calmer AI imagery of nature-filled beachside scenes. The account's banner image showcases a signpost reading "OBX Live," OBX being shorthand for North Carolina's Outer Banks islands.<br><br>But starting this weekend, the account shifted its approach dramatically, as first flagged by a social media user on X.<br><br>Instead of posting "photos" of leaping dolphins and sandy beaches, the account suddenly started publishing images of flooded mountain neighborhoods, submerged houses, and dogs sitting on top of roofs.<br><br>But instead of spreading vital information to those affected by the natural disaster, or at the very least sharing real photos of the destruction, the account is seemingly trying to use AI to cash in on all the attention the hurricane has been getting.<br><br>The account links to an Etsy page for a business called "OuterBanks2023," where somebody who goes by "Alexandr" sells AI-generated prints of horses touching snouts with sea turtles, Santa running down the shoreline with a reindeer, and sunsets over ocean waves.</i><br><br>A Facebook page which presented itself as being associated with North Carolina, and which had posted AI generated images, changed to posting AI generated images of hurricane damage after Hurricane Helene hit North Carolina (T0151.003: Online Community Page, T0151.001: Social Media Platform, T0115: Post Content, T0086.002: Develop AI-Generated Images (Deepfakes), T0068: Respond to Breaking News Event or Active Crisis). 
<br><br>The account included links (T0122: Direct Users to Alternative Platforms) to an account on Etsy, which sold prints of AI generated images (T0146: Account Asset, T0148.007: eCommerce Platform). |
| [I00110 How COVID-19 conspiracists and extremists use crowdfunding platforms to fund their activities](../../generated_pages/incidents/I00110.md) | The EU Disinfo Lab produced a report into disinformation published on crowdfunding platforms:<br><br><i>More worrisome is the direct monetisation of disinformation happening on crowdfunding platforms: on Kickstarter, we found a user openly raising money for a documentary project suggesting that COVID-19 is a conspiracy.</i><br><br>A Kickstarter user attempted to use the platform to fund production of a documentary (T0017: Conduct Fundraising, T0087: Develop Video-Based Content, T0146: Account Asset, T0148.006: Crowdfunding Platform).<br><br><i>On Patreon, we found several instances of direct monetisation of COVID-19 disinformation, including posts promoting a device allegedly protecting against COVID-19 and 5G, as well as posts related to the “Plandemic” conspiracy video, which gained attention on YouTube before being removed by the platform.<br><br>We also found an account called “Stranger than fiction” entirely dedicated to disinformation, which openly states that their content was “Banned by screwtube and fakebook, our videos have been viewed over a billion times.”</i><br><br>The “Stranger than fiction” user presented itself as an alternative news source which had been banned from other platforms (T0146: Account Asset, T0097.202: News Outlet Persona, T0121.001: Bypass Content Blocking, T0152.012: Subscription Service Platform).<br><br><i>On the US-based crowdfunding platform IndieGogo, EU DisinfoLab found a successful crowdfunding campaign of €133.903 for a book called Revolution Q. 
This book, now also available on Amazon, claims to be “Written for both newcomers and long-time QAnon fans alike, this book is a treasure-trove of information designed to help everyone weather The Storm.”</i><br><br>An IndieGogo account was used to gather funds to produce a book on QAnon (T0017: Conduct Fundraising, T0085.005: Develop Book, T0146: Account Asset, T0148.006: Crowdfunding Platform), with the book later sold on Amazon marketplace (T0148.007: eCommerce Platform). |

View file

@ -7,7 +7,7 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00129 Teen who hacked Joe Biden and Bill Gates' Twitter accounts sentenced to three years in prison](../../generated_pages/incidents/I00129.md) | <i>An 18-year-old hacker who pulled off a huge breach in 2020, infiltrating several high profile Twitter accounts to solicit bitcoin transactions, has agreed to serve three years in prison for his actions.<br><br>Graham Ivan Clark, of Florida, was 17 years old at the time of the hack in July, during which he took over a number of major accounts including those of Joe Biden, Bill Gates and Kim Kardashian West.<br><br>Once he accessed them, Clark tweeted a link to a bitcoin address and wrote “all bitcoin sent to our address below will be sent back to you doubled!” According to court documents, Clark made more than $100,000 from the scheme, which his lawyers say he has since returned.<br><br>Clark was able to access the accounts after convincing an employee at Twitter he worked in the company's information technology department, according to the Tampa Bay Times.</i><br><br>In this example a threat actor gained access to Twitter's customer service portal through social engineering (T0146.004: Administrator Account, T0150.005: Compromised, T0151.008: Microblogging Platform), which they used to take over accounts of public figures (T0146.003: Verified Account, T0143.003: Impersonated Persona, T0150.005: Compromised, T0151.008: Microblogging Platform).<br><br>The threat actor used these compromised accounts to trick their followers into sending bitcoin to their wallet (T0148.009: Cryptocurrency Wallet). |
| [I00129 Teen who hacked Joe Biden and Bill Gates' Twitter accounts sentenced to three years in prison](../../generated_pages/incidents/I00129.md) | <i>An 18-year-old hacker who pulled off a huge breach in 2020, infiltrating several high profile Twitter accounts to solicit bitcoin transactions, has agreed to serve three years in prison for his actions.<br><br>Graham Ivan Clark, of Florida, was 17 years old at the time of the hack in July, during which he took over a number of major accounts including those of Joe Biden, Bill Gates and Kim Kardashian West.<br><br>Once he accessed them, Clark tweeted a link to a bitcoin address and wrote “all bitcoin sent to our address below will be sent back to you doubled!” According to court documents, Clark made more than $100,000 from the scheme, which his lawyers say he has since returned.<br><br>Clark was able to access the accounts after convincing an employee at Twitter he worked in the company's information technology department, according to the Tampa Bay Times.</i><br><br>In this example a threat actor gained access to Twitter's customer service portal through social engineering (T0146.004: Administrator Account Asset, T0150.005: Compromised Asset, T0151.008: Microblogging Platform), which they used to take over accounts of public figures (T0146.003: Verified Account Asset, T0143.003: Impersonated Persona, T0150.005: Compromised Asset, T0151.008: Microblogging Platform).<br><br>The threat actor used these compromised accounts to trick their followers into sending bitcoin to their wallet (T0148.009: Cryptocurrency Wallet). |

View file

@ -1,13 +1,13 @@
# Technique T0149.001: Domain
# Technique T0149.001: Domain Asset
* **Summary**: A Domain is a web address (such as “www.google.com”), used to navigate to Websites on the internet.<br><br>Domains differ from Websites in that Websites are considered to be developed web pages which host content, whereas Domains do not necessarily host public-facing web content. <br><br>A threat actor may register a new domain to bypass the old domain being blocked.
* **Summary**: A Domain is a web address (such as “google[.]com”), used to navigate to Websites on the internet.<br><br>Domains differ from Websites in that Websites are considered to be developed web pages which host content, whereas Domains do not necessarily host public-facing web content. <br><br>A threat actor may register a new domain to bypass the old domain being blocked.
* **Belongs to tactic stage**: TA06
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00109 Coordinated Facebook Pages Designed to Fund a White Supremacist Agenda](../../generated_pages/incidents/I00109.md) | This report examines the white nationalist group Suavelos' use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at technical indicators associated with the Suavelos website, and attributions which can be made as a consequence:<i><br><br>[The Google AdSense tag set up on Suavelos.eu was also found on the following domains, indicating that they are controlled by the same actor;] Alabastro.eu: an online shop to buy “white nationalists” t-shirts [and] ARPAC.eu: the website of a registered non-profit organisation advocating to lift regulation on gun control in France.<br><br>Other domains attributed to Suavelos (T0149.001: Domain) reveal a website set up to sell merchandise (T0152.004: Website, T0148.004: Payment Processing Capability, T0061: Sell Merchandise), and a website hosting a registered French non-profit (T0152.004: Website, T0097.207: NGO Persona).<br><br>To learn more about the suavelos.eu domain, we collected the following data: The domain is hosted on OVH; The owner's identity is protected; The IP Address of the server is 94.23.253.173, which is shared with 20 other domains. <br><br>The relative low number of websites hosted on this IP address could indicate that they all belong to the same people, and are hosted on the same private server.</i><br><br>Suavelos registered a domain using the web hosting provider OVH (T0149.001: Domain, T0152.003: Website Hosting Platform, T0150.006: Purchased). The site's IP address reveals a server hosting other domains potentially owned by the actors (T0149.005: Server, T0149.006: IP Address). |
| [I00109 Coordinated Facebook Pages Designed to Fund a White Supremacist Agenda](../../generated_pages/incidents/I00109.md) | This report examines the white nationalist group Suavelos' use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at technical indicators associated with the Suavelos website, and attributions which can be made as a consequence:<i><br><br>[The Google AdSense tag set up on Suavelos.eu was also found on the following domains, indicating that they are controlled by the same actor;] Alabastro.eu: an online shop to buy “white nationalists” t-shirts [and] ARPAC.eu: the website of a registered non-profit organisation advocating to lift regulation on gun control in France.<br><br>Other domains attributed to Suavelos (T0149.001: Domain Asset) reveal a website set up to sell merchandise (T0152.004: Website Asset, T0148.004: Payment Processing Capability, T0061: Sell Merchandise), and a website hosting a registered French non-profit (T0152.004: Website Asset, T0097.207: NGO Persona).<br><br>To learn more about the suavelos.eu domain, we collected the following data: The domain is hosted on OVH; The owner's identity is protected; The IP Address of the server is 94.23.253.173, which is shared with 20 other domains. <br><br>The relative low number of websites hosted on this IP address could indicate that they all belong to the same people, and are hosted on the same private server.</i><br><br>Suavelos registered a domain using the web hosting provider OVH (T0149.001: Domain Asset, T0152.003: Website Hosting Platform, T0150.006: Purchased). The site's IP address reveals a server hosting other domains potentially owned by the actors (T0149.005: Server Asset, T0149.006: IP Address Asset). |

View file

@ -1,13 +1,13 @@
# Technique T0149.002: Email Domain
# Technique T0149.002: Email Domain Asset
* **Summary**: An Email Domain is a Domain (such as “meta.com”) which has the ability to send emails. <br><br>Any Domain which has an MX (Mail Exchange) record and configured SMTP (Simple Mail Transfer Protocol) settings can send and receive emails, and is therefore an Email Domain.
* **Summary**: An Email Domain is a Domain (such as “meta[.]com”) which has the ability to send emails (e.g. from an @meta[.]com address). <br><br>Any Domain which has an MX (Mail Exchange) record and configured SMTP (Simple Mail Transfer Protocol) settings can send and receive emails, and is therefore an Email Domain.
* **Belongs to tactic stage**: TA06
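
The MX/SMTP mechanics described above can be sketched with a hypothetical zone file (illustrative records for example.com; the names and addresses are placeholders, not drawn from any incident):

```
; Illustrative zone file fragment for example.com (hypothetical values).
; The MX record is what makes this Domain an Email Domain: mail addressed
; to anyone@example.com is routed to the host mail.example.com.
example.com.       IN  MX   10 mail.example.com.  ; mail exchanger, priority 10
mail.example.com.  IN  A    192.0.2.25            ; host running an SMTP server
; Senders are commonly authorised via an SPF TXT record:
example.com.       IN  TXT  "v=spf1 mx -all"
```

Analysts can check whether a suspicious domain is configured to send email by querying its MX record, e.g. `dig +short MX example.com`.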
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00126 Charming Kitten Updates POWERSTAR with an InterPlanetary Twist](../../generated_pages/incidents/I00126.md) | <i>The target of the recently observed [highly targeted spearphishing attack by “Charming Kitten”, a hacker group attributed to Iran] had published an article related to Iran. The publicity appears to have garnered the attention of Charming Kitten, who subsequently created an email address to impersonate a reporter of an Israeli media organization in order to send the target an email. Prior to sending malware to the target, the attacker simply asked if the target would be open to reviewing a document they had written related to US foreign policy. The target agreed to do so, since this was not an unusual request; they are frequently asked by journalists to review opinion pieces relating to their field of work.<br><br>In an effort to further gain the target's confidence, Charming Kitten continued the interaction with another benign email containing a list of questions, to which the target then responded with answers. After multiple days of benign and seemingly legitimate interaction, Charming Kitten finally sent a “draft report”; this was the first time anything opaquely malicious occurred. The “draft report” was, in fact, a password-protected RAR file containing a malicious LNK file. The password for the RAR file was provided in a subsequent email.</i><br><br>In this example, threat actors created an email address on a domain which impersonated an existing Israeli news organisation, and impersonated a reporter who worked there (T0097.102: Journalist Persona, T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0149.003: Lookalike Domain, T0149.002: Email Domain) in order to convince the target to download a document containing malware (T0085.004: Develop Document, T0147.003: Malware). |
| [I00126 Charming Kitten Updates POWERSTAR with an InterPlanetary Twist](../../generated_pages/incidents/I00126.md) | <i>The target of the recently observed [highly targeted spearphishing attack by “Charming Kitten”, a hacker group attributed to Iran] had published an article related to Iran. The publicity appears to have garnered the attention of Charming Kitten, who subsequently created an email address to impersonate a reporter of an Israeli media organization in order to send the target an email. Prior to sending malware to the target, the attacker simply asked if the target would be open to reviewing a document they had written related to US foreign policy. The target agreed to do so, since this was not an unusual request; they are frequently asked by journalists to review opinion pieces relating to their field of work.<br><br>In an effort to further gain the target's confidence, Charming Kitten continued the interaction with another benign email containing a list of questions, to which the target then responded with answers. After multiple days of benign and seemingly legitimate interaction, Charming Kitten finally sent a “draft report”; this was the first time anything opaquely malicious occurred. The “draft report” was, in fact, a password-protected RAR file containing a malicious LNK file. The password for the RAR file was provided in a subsequent email.</i><br><br>In this example, threat actors created an email address on a domain which impersonated an existing Israeli news organisation, and impersonated a reporter who worked there (T0097.102: Journalist Persona, T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0149.003: Lookalike Domain, T0149.002: Email Domain Asset) in order to convince the target to download a document containing malware (T0085.004: Develop Document, T0147.003: Malware Asset). |

View file

@ -8,7 +8,7 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00107 The Lies Russia Tells Itself](../../generated_pages/incidents/I00107.md) | <i>The Moscow firm Social Design Agency (SDA) has been attributed as being behind a Russian disinformation project known as Doppelganger:<br><br>The SDA's deception work first surfaced in 2022, likely almost immediately after Doppelganger got off the ground. In April of that year, Meta, the parent company of Facebook and Instagram, disclosed in a quarterly report that it had removed from its platforms “a network of about 200 accounts operated from Russia.” By August 2022, German investigative journalists revealed that they had discovered forgeries of about 30 news sites, including many of the country's biggest media outlets—Frankfurter Allgemeine, Der Spiegel, and Bild—but also Britain's Daily Mail and France's 20 Minutes. The sites had deceptive URLs such as www-dailymail-co-uk.dailymail.top. </i><br><br>As part of the SDA's work, they created many websites which impersonated existing media outlets. Sites used domain impersonation tactics to increase perceived legitimacy of their impersonations (T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0152.003: Website Hosting Platform, T0149.003: Lookalike Domain). |
| [I00126 Charming Kitten Updates POWERSTAR with an InterPlanetary Twist](../../generated_pages/incidents/I00126.md) | <i>The target of the recently observed [highly targeted spearphishing attack by “Charming Kitten”, a hacker group attributed to Iran] had published an article related to Iran. The publicity appears to have garnered the attention of Charming Kitten, who subsequently created an email address to impersonate a reporter of an Israeli media organization in order to send the target an email. Prior to sending malware to the target, the attacker simply asked if the target would be open to reviewing a document they had written related to US foreign policy. The target agreed to do so, since this was not an unusual request; they are frequently asked by journalists to review opinion pieces relating to their field of work.<br><br>In an effort to further gain the target's confidence, Charming Kitten continued the interaction with another benign email containing a list of questions, to which the target then responded with answers. After multiple days of benign and seemingly legitimate interaction, Charming Kitten finally sent a “draft report”; this was the first time anything opaquely malicious occurred. The “draft report” was, in fact, a password-protected RAR file containing a malicious LNK file. The password for the RAR file was provided in a subsequent email.</i><br><br>In this example, threat actors created an email address on a domain which impersonated an existing Israeli news organisation, and impersonated a reporter who worked there (T0097.102: Journalist Persona, T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0149.003: Lookalike Domain, T0149.002: Email Domain) in order to convince the target to download a document containing malware (T0085.004: Develop Document, T0147.003: Malware). |
| [I00126 Charming Kitten Updates POWERSTAR with an InterPlanetary Twist](../../generated_pages/incidents/I00126.md) | <i>The target of the recently observed [highly targeted spearphishing attack by “Charming Kitten”, a hacker group attributed to Iran] had published an article related to Iran. The publicity appears to have garnered the attention of Charming Kitten, who subsequently created an email address to impersonate a reporter of an Israeli media organization in order to send the target an email. Prior to sending malware to the target, the attacker simply asked if the target would be open to reviewing a document they had written related to US foreign policy. The target agreed to do so, since this was not an unusual request; they are frequently asked by journalists to review opinion pieces relating to their field of work.<br><br>In an effort to further gain the target's confidence, Charming Kitten continued the interaction with another benign email containing a list of questions, to which the target then responded with answers. After multiple days of benign and seemingly legitimate interaction, Charming Kitten finally sent a “draft report”; this was the first time anything opaquely malicious occurred. The “draft report” was, in fact, a password-protected RAR file containing a malicious LNK file. The password for the RAR file was provided in a subsequent email.</i><br><br>In this example, threat actors created an email address on a domain which impersonated an existing Israeli news organisation, and impersonated a reporter who worked there (T0097.102: Journalist Persona, T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0149.003: Lookalike Domain, T0149.002: Email Domain Asset) in order to convince the target to download a document containing malware (T0085.004: Develop Document, T0147.003: Malware Asset). |

View file

@ -1,4 +1,4 @@
# Technique T0149.004: Redirecting Domain
# Technique T0149.004: Redirecting Domain Asset
* **Summary**: A Redirecting Domain is a Domain which has been configured to redirect users to another Domain when visited.
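
As a hedged sketch of how such a redirect is commonly configured (hypothetical nginx config with placeholder domains, not taken from any incident), a Redirecting Domain can be implemented with a single HTTP 301 response:

```nginx
# Hypothetical nginx server block: any request to old-domain.example
# is answered with a permanent (301) redirect to new-domain.example,
# preserving the originally requested path.
server {
    listen 80;
    server_name old-domain.example www.old-domain.example;
    return 301 https://new-domain.example$request_uri;
}
```

Because the redirect happens server-side, visitors following old links (or blocklists keyed to the old domain) are silently carried to the new destination.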
@ -7,7 +7,7 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00072 Behind the Dutch Terror Threat Video: The St. Petersburg "Troll Factory" Connection](../../generated_pages/incidents/I00072.md) | <I>“The creator of Geopolitika[.]ru is Aleksandr Dugin, who was sanctioned by the United States Department of Treasury in 2015 for his role in the Eurasian Youth Union “for being responsible for or complicit in actions or policies that threaten the peace, security, stability, or sovereignty or territorial integrity of Ukraine.”<br><br> [...]<br><br> “Currently, the website geopolika[.]ru redirects directly to another partner website, Katehon.<br><br> “Katehon poses itself as a think tank focused on geopolitics in an English edition of its website. In contrast, in Russian, it states its aim to develop “ideological, political, diplomatic, economic and military strategy for Russia of the future” with a special role of religion. The president of Katehon's supervisory board is Konstantin Malofeev, a Russian millionaire with connections to the Russian orthodox church and presidential administration, who founded Tsargrad TV, a known source of disinformation. Malofeev was sanctioned by the U.S. Department of Treasury and the European Union in 2014 for material support and financial backing of Russian-backed separatists in eastern Ukraine. Another known figure from the board is Sergei Glaziev, former advisor to Putin in 2012–2019. Dugin is also on the board in the Russian edition of the website, whereas he is omitted in English.”</i>In this example a domain managed by an actor previously sanctioned by the US department of treasury has been reconfigured to redirect to another website; Katehon (T0149.004: Redirecting Domain, T0150.004: Repurposed).<br><br> Katehon presents itself as a geopolitical think tank in English, but does not maintain this persona when presenting itself to a Russian speaking audience (T0097.204: Think Tank Persona, T0152.004: Website, T0155.004: Geoblocked). |
| [I00072 Behind the Dutch Terror Threat Video: The St. Petersburg "Troll Factory" Connection](../../generated_pages/incidents/I00072.md) | <I>“The creator of Geopolitika[.]ru is Aleksandr Dugin, who was sanctioned by the United States Department of Treasury in 2015 for his role in the Eurasian Youth Union “for being responsible for or complicit in actions or policies that threaten the peace, security, stability, or sovereignty or territorial integrity of Ukraine.”<br><br> [...]<br><br> “Currently, the website geopolika[.]ru redirects directly to another partner website, Katehon.<br><br> “Katehon poses itself as a think tank focused on geopolitics in an English edition of its website. In contrast, in Russian, it states its aim to develop “ideological, political, diplomatic, economic and military strategy for Russia of the future” with a special role of religion. The president of Katehon's supervisory board is Konstantin Malofeev, a Russian millionaire with connections to the Russian orthodox church and presidential administration, who founded Tsargrad TV, a known source of disinformation. Malofeev was sanctioned by the U.S. Department of Treasury and the European Union in 2014 for material support and financial backing of Russian-backed separatists in eastern Ukraine. Another known figure from the board is Sergei Glaziev, former advisor to Putin in 2012–2019. Dugin is also on the board in the Russian edition of the website, whereas he is omitted in English.”</i><br><br>In this example a domain managed by an actor previously sanctioned by the US department of treasury has been reconfigured to redirect to another website; Katehon (T0149.004: Redirecting Domain Asset, T0150.004: Repurposed Asset).<br><br> Katehon presents itself as a geopolitical think tank in English, but does not maintain this persona when presenting itself to a Russian speaking audience (T0097.204: Think Tank Persona, T0152.004: Website Asset, T0155.004: Geoblocked Asset). |

View file

@ -1,4 +1,4 @@
# Technique T0149.005: Server
# Technique T0149.005: Server Asset
* **Summary**: A Server is a computer which provides resources, services, or data to other computers over a network. There are different types of servers, such as web servers (which serve web pages and applications to users), database servers (which manage and provide access to databases), and file servers (which store and share files across a network).
@ -7,8 +7,8 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00103 The racist AI deepfake that fooled and divided a community](../../generated_pages/incidents/I00103.md) | <i>“I seriously don't understand why I have to constantly put up with these dumbasses here every day.”<br><br>So began what appeared to be a long tirade from the principal of Pikesville High School, punctuated with racist, antisemitic and offensive tropes. It sounded like it had been secretly recorded.<br><br>The speaker went on to bemoan “ungrateful black kids” and Jewish people in the community.<br><br>The clip, first posted in [January 2024], went viral nationally. But it really struck a nerve in the peaceful, leafy suburb of Pikesville, which has large black and Jewish populations, and in the nearby city of Baltimore, Maryland. Principal Eric Eiswert was put on paid administrative leave pending an investigation.<br><br>[...]<br><br>But what those sharing the clip didn't realise at the time was that another bombshell was about to drop: the clip was an AI-generated fake.<br><br>[...]<br><br>[In April 2024], Baltimore Police Chief Robert McCullough confirmed they now had “conclusive evidence that the recording was not authentic”.<br><br>And they believed they knew who made the fake.<br><br>Police charged 31-year-old Dazhon Darien, the school's athletics director, with several counts related to the fake video. Charges included theft, retaliating against a witness and stalking.<br><br>He was arrested at the airport, where police say he was planning to fly to Houston, Texas.<br><br>Police say that Mr Darien had been under investigation by Principal Eiswert over an alleged theft of $1,916 (£1,460) from the school. 
They also allege there had been “work performance challenges” and his contract was likely not to be renewed.<br><br>Their theory was that by creating the deepfake recording, he hoped to discredit the principal before he could be fired.<br><br>Investigators say they traced an email used to send the original video to a server connected to Mr Darien, and allege that he used Baltimore County Public Schools' computer network to access AI tools. He is due to stand trial in December 2024.</i><br><br>By associating Mr Darien to the server used to email the original AI generated audio, investigators link Darien to the fabricated content (T0149.005: Server, T0088.001: AI Generated Audio (Deepfakes)). They also assert that Darien used computers owned by the school to access platforms used to generate the audio (T0146: Account, T0154.002: AI Media Platform). |
| [I00109 Coordinated Facebook Pages Designed to Fund a White Supremacist Agenda](../../generated_pages/incidents/I00109.md) | This report examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at technical indicators associated with the Suavelos website, and attributions which can be made as a consequence:<i><br><br>[The Google AdSense tag set up on Suavelos.eu was also found on the following domains, indicating that they are controlled by the same actor;] Alabastro.eu: an online shop to buy “white nationalists” t-shirts [and] ARPAC.eu: the website of a registered non-profit organisation advocating to lift regulation on gun control in France.<br><br>Other domains attributed to Suavelos (T0149.001: Domain) reveal a website set up to sell merchandise (T0152.004: Website, T0148.004: Payment Processing Capability, T0061: Sell Merchandise), and a website hosting a registered French non-profit (T0152.004: Website, T0097.207: NGO Persona).<br><br>To learn more about the suavelos.eu domain, we collected the following data: The domain is hosted on OVH; The owner’s identity is protected; The IP Address of the server is 94.23.253.173, which is shared with 20 other domains. <br><br>The relative low number of websites hosted on this IP address could indicate that they all belong to the same people, and are hosted on the same private server.<.i><br><br>Suavelos registered a domain using the web hosting provider OVH (T0149.001: Domain, T0152.003: Website Hosting Platform, T0150.006: Purchased). The site’s IP address reveals a server hosting other domains potentially owned by the actors (T0149.005: Server, T0149.006: IP Address). |
| [I00103 The racist AI deepfake that fooled and divided a community](../../generated_pages/incidents/I00103.md) | <i>“I seriously don't understand why I have to constantly put up with these dumbasses here every day.”<br><br>So began what appeared to be a long tirade from the principal of Pikesville High School, punctuated with racist, antisemitic and offensive tropes. It sounded like it had been secretly recorded.<br><br>The speaker went on to bemoan “ungrateful black kids” and Jewish people in the community.<br><br>The clip, first posted in [January 2024], went viral nationally. But it really struck a nerve in the peaceful, leafy suburb of Pikesville, which has large black and Jewish populations, and in the nearby city of Baltimore, Maryland. Principal Eric Eiswert was put on paid administrative leave pending an investigation.<br><br>[...]<br><br>But what those sharing the clip didn’t realise at the time was that another bombshell was about to drop: the clip was an AI-generated fake.<br><br>[...]<br><br>[In April 2024], Baltimore Police Chief Robert McCullough confirmed they now had “conclusive evidence that the recording was not authentic”.<br><br>And they believed they knew who made the fake.<br><br>Police charged 31-year-old Dazhon Darien, the school’s athletics director, with several counts related to the fake video. Charges included theft, retaliating against a witness and stalking.<br><br>He was arrested at the airport, where police say he was planning to fly to Houston, Texas.<br><br>Police say that Mr Darien had been under investigation by Principal Eiswert over an alleged theft of $1,916 (£1,460) from the school. They also allege there had been “work performance challenges” and his contract was likely not to be renewed.<br><br>Their theory was that by creating the deepfake recording, he hoped to discredit the principal before he could be fired.<br><br>Investigators say they traced an email used to send the original video to a server connected to Mr Darien, and allege that he used Baltimore County Public Schools' computer network to access AI tools. He is due to stand trial in December 2024.</i><br><br>By associating Mr Darien with the server used to email the original AI generated audio, investigators link Darien to the fabricated content (T0149.005: Server Asset, T0088.001: AI Generated Audio (Deepfakes)). They also assert that Darien used computers owned by the school to access platforms used to generate the audio (T0146: Account Asset, T0154.002: AI Media Platform). |
| [I00109 Coordinated Facebook Pages Designed to Fund a White Supremacist Agenda](../../generated_pages/incidents/I00109.md) | This report examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at technical indicators associated with the Suavelos website, and attributions which can be made as a consequence:<i><br><br>[The Google AdSense tag set up on Suavelos.eu was also found on the following domains, indicating that they are controlled by the same actor;] Alabastro.eu: an online shop to buy “white nationalists” t-shirts [and] ARPAC.eu: the website of a registered non-profit organisation advocating to lift regulation on gun control in France.<br><br>Other domains attributed to Suavelos (T0149.001: Domain Asset) reveal a website set up to sell merchandise (T0152.004: Website Asset, T0148.004: Payment Processing Capability, T0061: Sell Merchandise), and a website hosting a registered French non-profit (T0152.004: Website Asset, T0097.207: NGO Persona).<br><br>To learn more about the suavelos.eu domain, we collected the following data: The domain is hosted on OVH; The owner’s identity is protected; The IP Address of the server is 94.23.253.173, which is shared with 20 other domains. <br><br>The relative low number of websites hosted on this IP address could indicate that they all belong to the same people, and are hosted on the same private server.</i><br><br>Suavelos registered a domain using the web hosting provider OVH (T0149.001: Domain Asset, T0152.003: Website Hosting Platform, T0150.006: Purchased). The site’s IP address reveals a server hosting other domains potentially owned by the actors (T0149.005: Server Asset, T0149.006: IP Address Asset). |


@ -1,4 +1,4 @@
# Technique T0149.006: IP Address
# Technique T0149.006: IP Address Asset
* **Summary**: An IP Address is a unique numerical label assigned to each device connected to a computer network that uses the Internet Protocol for communication. IP addresses are commonly a part of any online infrastructure.<br><br>IP addresses can be in IPV4 dotted decimal (x.x.x.x) or IPV6 colon-separated hexadecimal (y:y:y:y:y:y:y:y) formats.
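The two notations described above can be told apart programmatically. As an illustrative sketch (not part of the DISARM framework itself), Python's standard `ipaddress` module parses both formats and rejects malformed strings:

```python
import ipaddress

def classify_ip(raw: str) -> str:
    """Return 'IPv4' or 'IPv6' for a valid address, or 'invalid'."""
    try:
        addr = ipaddress.ip_address(raw)
    except ValueError:
        return "invalid"
    return "IPv4" if addr.version == 4 else "IPv6"

print(classify_ip("94.23.253.173"))  # dotted-decimal IPv4 -> "IPv4"
print(classify_ip("2001:db8::1"))    # colon-separated IPv6 -> "IPv6"
print(classify_ip("not-an-ip"))     # -> "invalid"
```

An analyst normalising infrastructure indicators from a report could run collected strings through a check like this before pivoting on them.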
@ -7,7 +7,7 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00109 Coordinated Facebook Pages Designed to Fund a White Supremacist Agenda](../../generated_pages/incidents/I00109.md) | This report examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at technical indicators associated with the Suavelos website, and attributions which can be made as a consequence:<i><br><br>[The Google AdSense tag set up on Suavelos.eu was also found on the following domains, indicating that they are controlled by the same actor;] Alabastro.eu: an online shop to buy “white nationalists” t-shirts [and] ARPAC.eu: the website of a registered non-profit organisation advocating to lift regulation on gun control in France.<br><br>Other domains attributed to Suavelos (T0149.001: Domain) reveal a website set up to sell merchandise (T0152.004: Website, T0148.004: Payment Processing Capability, T0061: Sell Merchandise), and a website hosting a registered French non-profit (T0152.004: Website, T0097.207: NGO Persona).<br><br>To learn more about the suavelos.eu domain, we collected the following data: The domain is hosted on OVH; The owner’s identity is protected; The IP Address of the server is 94.23.253.173, which is shared with 20 other domains. <br><br>The relative low number of websites hosted on this IP address could indicate that they all belong to the same people, and are hosted on the same private server.<.i><br><br>Suavelos registered a domain using the web hosting provider OVH (T0149.001: Domain, T0152.003: Website Hosting Platform, T0150.006: Purchased). The site’s IP address reveals a server hosting other domains potentially owned by the actors (T0149.005: Server, T0149.006: IP Address). |
| [I00109 Coordinated Facebook Pages Designed to Fund a White Supremacist Agenda](../../generated_pages/incidents/I00109.md) | This report examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at technical indicators associated with the Suavelos website, and attributions which can be made as a consequence:<i><br><br>[The Google AdSense tag set up on Suavelos.eu was also found on the following domains, indicating that they are controlled by the same actor;] Alabastro.eu: an online shop to buy “white nationalists” t-shirts [and] ARPAC.eu: the website of a registered non-profit organisation advocating to lift regulation on gun control in France.<br><br>Other domains attributed to Suavelos (T0149.001: Domain Asset) reveal a website set up to sell merchandise (T0152.004: Website Asset, T0148.004: Payment Processing Capability, T0061: Sell Merchandise), and a website hosting a registered French non-profit (T0152.004: Website Asset, T0097.207: NGO Persona).<br><br>To learn more about the suavelos.eu domain, we collected the following data: The domain is hosted on OVH; The owner’s identity is protected; The IP Address of the server is 94.23.253.173, which is shared with 20 other domains. <br><br>The relative low number of websites hosted on this IP address could indicate that they all belong to the same people, and are hosted on the same private server.</i><br><br>Suavelos registered a domain using the web hosting provider OVH (T0149.001: Domain Asset, T0152.003: Website Hosting Platform, T0150.006: Purchased). The site’s IP address reveals a server hosting other domains potentially owned by the actors (T0149.005: Server Asset, T0149.006: IP Address Asset). |


@ -1,4 +1,4 @@
# Technique T0149.007: VPN
# Technique T0149.007: VPN Asset
* **Summary**: A VPN (Virtual Private Network) is a service which creates secure, encrypted connections over the internet, allowing users to transmit data safely and access network resources remotely. It masks IP Addresses, enhancing privacy and security by preventing unauthorised access and tracking. VPNs are commonly used for protecting sensitive information, bypassing geographic restrictions, and maintaining online anonymity.<br><br>VPNs can also allow a threat actor to pose as if they are located in one country while in reality being based in another. By doing so, they can try to either mis-attribute their activities to another actor or better hide their own identity.


@ -1,4 +1,4 @@
# Technique T0149.008: Proxy IP Address
# Technique T0149.008: Proxy IP Address Asset
* **Summary**: A Proxy IP Address allows a threat actor to mask their real IP Address by putting a layer between them and the online content theyre connecting with. <br><br>Proxy IP Addresses can hide the connection between the threat actor and their online infrastructure.
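As an illustrative sketch of the layer described above (not taken from the source), Python's standard library can route all HTTP(S) requests through a proxy, so the destination server logs the proxy's IP Address rather than the client's; the `proxy_url` endpoint here is a hypothetical placeholder:

```python
import urllib.request

# Hypothetical proxy endpoint; an investigation would substitute the
# proxy actually under analysis.
proxy_url = "http://203.0.113.10:8080"

# Requests made through this opener are relayed via the proxy, masking
# the real client IP Address from the destination server.
opener = urllib.request.build_opener(
    urllib.request.ProxyHandler({"http": proxy_url, "https": proxy_url})
)
# urllib.request.install_opener(opener)  # would make it the process-wide default
```

From the defender's side, this is why the connecting IP Address alone may not identify the threat actor's true infrastructure.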


@ -1,4 +1,4 @@
# Technique T0150.001: Newly Created
# Technique T0150.001: Newly Created Asset
* **Summary**: A Newly Created Asset is an asset which has been created and used for the first time in a documented potential incident.<br><br>For example, analysts who can identify a recent creation date for Accounts participating in the spread of a new narrative can assert that these are Newly Created Assets.<br><br>Analysts should use Dormant if the asset was created and then lay dormant for an extended period of time before activity.
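The heuristic above can be sketched in a few lines; the handles, dates, and 30-day window below are illustrative assumptions for the sketch, not values defined by this framework:

```python
from datetime import datetime, timedelta

def flag_newly_created(accounts, narrative_start, window_days=30):
    """Return handles of accounts created shortly before a narrative began.

    `accounts` maps handle -> creation datetime; `window_days` is an
    analyst-chosen threshold.
    """
    cutoff = narrative_start - timedelta(days=window_days)
    return sorted(h for h, created in accounts.items() if created >= cutoff)

# Hypothetical data: two accounts created days before the narrative spread.
accounts = {
    "@long_time_user": datetime(2019, 5, 1),
    "@fresh_account_1": datetime(2024, 1, 10),
    "@fresh_account_2": datetime(2024, 1, 12),
}
print(flag_newly_created(accounts, narrative_start=datetime(2024, 1, 15)))
# ['@fresh_account_1', '@fresh_account_2']
```

A real analysis would pull creation dates from platform metadata and tune the window to the campaign being studied.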
@ -7,7 +7,7 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00116 Blue-tick scammers target consumers who complain on X](../../generated_pages/incidents/I00116.md) | <i>Consumers who complain of poor customer service on X are being targeted by scammers after the social media platform formerly known as Twitter changed its account verification process.<br><br>Bank customers and airline passengers are among those at risk of phishing scams when they complain to companies via X. Fraudsters, masquerading as customer service agents, respond under fake X handles and trick victims into disclosing their bank details to get a promised refund.<br><br>They typically win the trust of victims by displaying the blue checkmark icon, which until this year denoted accounts that had been officially verified by X.<br><br>Changes introduced this year allow the icon to be bought by anyone who pays an £11 monthly fee for the site’s subscription service, renamed this month from Twitter Blue to X Premium. Businesses that pay £950 a month receive a gold tick. X’s terms and conditions do not state whether subscriber accounts are pre-vetted.<br><br>Andrew Thomas was contacted by a scam account after posting a complaint to the travel platform Booking.com. “I’d been trying since April to get a refund after our holiday flights were cancelled and finally resorted to X,” he said.<br><br>“I received a response asking me to follow them, and DM [direct message] them with a contact number. They then called me via WhatsApp asking for my reference number so they could investigate. Later they called back to say that I would be refunded via their payment partner for which I’d need to download an app.”<br><br>Thomas became suspicious and checked the X profile. “It looked like the real thing, but I noticed that there was an unexpected hyphen in the Twitter handle and that it had only joined X in July 2023,” he said.</i><br><br>In this example a newly created paid account was created on X, used to direct users to other platforms (T0146.002: Paid Account, T0146.003: Verified Account, T0146.005: Lookalike Account ID, T0097.205: Business Persona, T0122: Direct Users to Alternative Platforms, T0143.003: Impersonated Persona, T0151.008: Microblogging Platform, T0150.001: Newly Created). |
| [I00116 Blue-tick scammers target consumers who complain on X](../../generated_pages/incidents/I00116.md) | <i>Consumers who complain of poor customer service on X are being targeted by scammers after the social media platform formerly known as Twitter changed its account verification process.<br><br>Bank customers and airline passengers are among those at risk of phishing scams when they complain to companies via X. Fraudsters, masquerading as customer service agents, respond under fake X handles and trick victims into disclosing their bank details to get a promised refund.<br><br>They typically win the trust of victims by displaying the blue checkmark icon, which until this year denoted accounts that had been officially verified by X.<br><br>Changes introduced this year allow the icon to be bought by anyone who pays an £11 monthly fee for the site’s subscription service, renamed this month from Twitter Blue to X Premium. Businesses that pay £950 a month receive a gold tick. X’s terms and conditions do not state whether subscriber accounts are pre-vetted.<br><br>Andrew Thomas was contacted by a scam account after posting a complaint to the travel platform Booking.com. “I’d been trying since April to get a refund after our holiday flights were cancelled and finally resorted to X,” he said.<br><br>“I received a response asking me to follow them, and DM [direct message] them with a contact number. They then called me via WhatsApp asking for my reference number so they could investigate. Later they called back to say that I would be refunded via their payment partner for which I’d need to download an app.”<br><br>Thomas became suspicious and checked the X profile. “It looked like the real thing, but I noticed that there was an unexpected hyphen in the Twitter handle and that it had only joined X in July 2023,” he said.</i><br><br>In this example a newly created paid account on X was used to direct users to other platforms (T0146.002: Paid Account Asset, T0146.003: Verified Account Asset, T0146.005: Lookalike Account ID, T0097.205: Business Persona, T0122: Direct Users to Alternative Platforms, T0143.003: Impersonated Persona, T0151.008: Microblogging Platform, T0150.001: Newly Created Asset). |


@ -1,4 +1,4 @@
# Technique T0150.002: Dormant
# Technique T0150.002: Dormant Asset
* **Summary**: A Dormant Asset is an asset which was inactive for an extended period before being used in a documented potential incident.
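One illustrative way an analyst might operationalise dormancy (a sketch under assumed data, not a framework-defined method) is to look for a long silent gap immediately before an asset's latest activity; the posting timestamps and 180-day threshold below are hypothetical:

```python
from datetime import datetime, timedelta

def is_dormant_reactivation(post_times, min_gap_days=180):
    """True if the gap before the most recent post exceeds the threshold."""
    times = sorted(post_times)
    if len(times) < 2:
        return False
    gap = times[-1] - times[-2]
    return gap >= timedelta(days=min_gap_days)

# Hypothetical account: active in 2021, silent, then reactivated in 2024.
posts = [datetime(2021, 3, 1), datetime(2021, 3, 5), datetime(2024, 2, 1)]
print(is_dormant_reactivation(posts))  # True
```

Combined with a creation-date check, this helps separate Dormant Assets from Newly Created ones.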


@ -1,4 +1,4 @@
# Technique T0150.003: Pre-Existing
# Technique T0150.003: Pre-Existing Asset
* **Summary**: Pre-Existing Assets are assets which existed before the observed incident which have not been Repurposed; i.e. they are still being used for their original purpose. <br><br>An example could be an Account which presented itself with a Journalist Persona prior to and during the observed potential incident.
@ -7,8 +7,8 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00119 Independent journalist publishes Trump campaign document hacked by Iran despite election interference concerns](../../generated_pages/incidents/I00119.md) | <i>An American journalist who runs an independent newsletter published a document [on 26 Sep 2024] that appears to have been stolen from Donald Trump’s presidential campaign — the first public posting of a file that is believed to be part of a dossier that federal officials say is part of an Iranian effort to manipulate the [2024] U.S. election.<br><br>The PDF document is a 271-page opposition research file on former President Donald Trump’s running mate, Sen. JD Vance, R-Ohio.<br><br>For more than two months, hackers who the U.S. says are tied to Iran have tried to persuade the American media to cover files they stole. No outlets took the bait.<br><br>But on Thursday, reporter Ken Klippenstein, who self-publishes on Substack after he left The Intercept this year, published one of the files.<br><br>[...]<br><br>Reporters who have received the documents describe the same pattern: An AOL account emails them files, signed by a person using the name “Robert,” who is reluctant to speak to their identity or reasons for wanting the documents to receive coverage.<br><br>NBC News was not part of the Robert persona’s direct outreach, but it has viewed its correspondence with a reporter at another publication.<br><br> One of the emails from the Robert persona previously viewed by NBC News included three large PDF files, each corresponding to Trump’s three reported finalists for vice president. The Vance file appears to be the one Klippenstein hosts on his site.</i><br><br>In this example hackers attributed to Iran used the Robert persona to email journalists hacked documents (T0146: Account, T0097.100: Individual Persona, T0153.001: Email Platform).<br><br>The journalist Ken Kippenstien used his existing blog on substack to host a link to download the document (T0089: Obtain Private Documents, T0097.102: Journalist Persona, T0115: Post Content, T0143.001: Authentic Persona, T0152.001: Blogging Platform, T0152.002: Blog, T0150.003: Pre-Existing). |
| [I00120 factcheckUK or fakecheckUK? Reinventing the political faction as the impartial factchecker](../../generated_pages/incidents/I00120.md) | Ahead of the 2019 UK Election during a leaders’ debate, the Conservative party rebranded their “Conservative Campaign Headquarters Press” account to “FactCheckUK”:<br><br><iThe evening of the 19th November 2019 saw the first of three Leaders Debates on ITV, starting at 8pm and lasting for an hour. Current Prime Minister and leader of the Conservatives, Boris Johnson faced off against Labour party leader, Jeremy Corbyn. Plenty of people will have been watching the debate live, but a good proportion were watching (er, twitching”?) via Twitter. This is something I’ve done in the past for certain shows. In some cases I just can’t watch or listen, but I can read, and in other cases, the commentary is far more interesting and entertaining than the show itself will ever be. This, for me, is just such a case. But very quickly, all eyes turned upon a modestly sized account with the handle @CCHQPress. That’s short for Conservative Campaign Headquarters Press. According to their (current!) Twitter bio, they are based in Westminster and they provide snippets of news and commentary from CCHQ to their 75k followers.<br><br>That is, until a few minutes into the debate.<br><br>All at once, like a person throwing off their street clothes to reveal some sinister new identity underneath, @CCHQPress abruptly shed its name, blue Conservative logo, Boris Johnson banner, and bio description. Moments later, it had entirely reinvented itself.<br><br>The purple banner was emblazoned with white font that read “✓ factcheckUK [with a “FROM CCQH” subheading]”.<br><br>The matching profile picture was a white tick in a purple circle. The bio was updated to: “Fact checking Labour from CCHQ”. And the name now read factcheckUK, with the customary Twitter blue (or white depending on your phone settings!) validation tick still after it</i><br><br>In this example an existing verified social media account on Twitter was repurposed to inauthentically present itself as a Fact Checking service (T0151.008: Microblogging Platform, T0150.003: Pre-Existing, T0146.003: Verified Account, T0097.203: Fact Checking Organisation Persona, T0143.002: Fabricated Persona). |
| [I00119 Independent journalist publishes Trump campaign document hacked by Iran despite election interference concerns](../../generated_pages/incidents/I00119.md) | <i>An American journalist who runs an independent newsletter published a document [on 26 Sep 2024] that appears to have been stolen from Donald Trump’s presidential campaign — the first public posting of a file that is believed to be part of a dossier that federal officials say is part of an Iranian effort to manipulate the [2024] U.S. election.<br><br>The PDF document is a 271-page opposition research file on former President Donald Trump’s running mate, Sen. JD Vance, R-Ohio.<br><br>For more than two months, hackers who the U.S. says are tied to Iran have tried to persuade the American media to cover files they stole. No outlets took the bait.<br><br>But on Thursday, reporter Ken Klippenstein, who self-publishes on Substack after he left The Intercept this year, published one of the files.<br><br>[...]<br><br>Reporters who have received the documents describe the same pattern: An AOL account emails them files, signed by a person using the name “Robert,” who is reluctant to speak to their identity or reasons for wanting the documents to receive coverage.<br><br>NBC News was not part of the Robert persona’s direct outreach, but it has viewed its correspondence with a reporter at another publication.<br><br> One of the emails from the Robert persona previously viewed by NBC News included three large PDF files, each corresponding to Trump’s three reported finalists for vice president. The Vance file appears to be the one Klippenstein hosts on his site.</i><br><br>In this example hackers attributed to Iran used the Robert persona to email journalists hacked documents (T0146: Account Asset, T0097.100: Individual Persona, T0153.001: Email Platform).<br><br>The journalist Ken Klippenstein used his existing blog on Substack to host a link to download the document (T0089: Obtain Private Documents, T0097.102: Journalist Persona, T0115: Post Content, T0143.001: Authentic Persona, T0152.001: Blogging Platform, T0152.002: Blog Asset, T0150.003: Pre-Existing Asset). |
| [I00120 factcheckUK or fakecheckUK? Reinventing the political faction as the impartial factchecker](../../generated_pages/incidents/I00120.md) | Ahead of the 2019 UK Election during a leaders’ debate, the Conservative party rebranded their “Conservative Campaign Headquarters Press” account to “FactCheckUK”:<br><br><i>The evening of the 19th November 2019 saw the first of three Leaders Debates on ITV, starting at 8pm and lasting for an hour. Current Prime Minister and leader of the Conservatives, Boris Johnson faced off against Labour party leader, Jeremy Corbyn. Plenty of people will have been watching the debate live, but a good proportion were “watching” (er, “twitching”?) via Twitter. This is something I’ve done in the past for certain shows. In some cases I just can’t watch or listen, but I can read, and in other cases, the commentary is far more interesting and entertaining than the show itself will ever be. This, for me, is just such a case. But very quickly, all eyes turned upon a modestly sized account with the handle @CCHQPress. That’s short for Conservative Campaign Headquarters Press. According to their (current!) Twitter bio, they are based in Westminster and they provide “snippets of news and commentary from CCHQ” to their 75k followers.<br><br>That is, until a few minutes into the debate.<br><br>All at once, like a person throwing off their street clothes to reveal some sinister new identity underneath, @CCHQPress abruptly shed its name, blue Conservative logo, Boris Johnson banner, and bio description. Moments later, it had entirely reinvented itself.<br><br>The purple banner was emblazoned with white font that read “✓ factcheckUK [with a “FROM CCQH” subheading]”.<br><br>The matching profile picture was a white tick in a purple circle. The bio was updated to: “Fact checking Labour from CCHQ”. And the name now read factcheckUK, with the customary Twitter blue (or white depending on your phone settings!) validation tick still after it</i><br><br>In this example an existing verified social media account on Twitter was repurposed to inauthentically present itself as a Fact Checking service (T0151.008: Microblogging Platform, T0150.003: Pre-Existing Asset, T0146.003: Verified Account Asset, T0097.203: Fact Checking Organisation Persona, T0143.002: Fabricated Persona). |


@ -1,4 +1,4 @@
# Technique T0150.004: Repurposed
# Technique T0150.004: Repurposed Asset
* **Summary**: Repurposed Assets are assets which have been identified as being used previously, but are now being used for different purposes, or have new Presented Personas.<br><br>Actors have been documented compromising assets, and then repurposing them to present Inauthentic Personas as part of their operations.
@ -7,9 +7,9 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00072 Behind the Dutch Terror Threat Video: The St. Petersburg "Troll Factory" Connection](../../generated_pages/incidents/I00072.md) | <I>“The creator of Geopolitika[.]ru is Aleksandr Dugin, who was sanctioned by the United States Department of Treasury in 2015 for his role in the Eurasian Youth Union “for being responsible for or complicit in actions or policies that threaten the peace, security, stability, or sovereignty or territorial integrity of Ukraine.”<br><br> [...]<br><br> “Currently, the website geopolika[.]ru redirects directly to another partner website, Katehon.<br><br> “Katehon poses itself as a think tank focused on geopolitics in an English edition of its website. In contrast, in Russian, it states its aim to develop “ideological, political, diplomatic, economic and military strategy for Russia of the future” with a special role of religion. The president of Katehon’s supervisory board is Konstantin Malofeev, a Russian millionaire with connections to the Russian orthodox church and presidential administration, who founded Tsargrad TV, a known source of disinformation. Malofeev was sanctioned by the U.S. Department of Treasury and the European Union in 2014 for material support and financial backing of Russian-backed separatists in eastern Ukraine. Another known figure from the board is Sergei Glaziev, former advisor to Putin in 2012–2019. Dugin is also on the board in the Russian edition of the website, whereas he is omitted in English.”</i>In this example a domain managed by an actor previously sanctioned by the US department of treasury has been reconfigured to redirect to another website; Katehon (T0149.004: Redirecting Domain, T0150.004: Repurposed).<br><br> Katehon presents itself as a geopolitical think tank in English, but does not maintain this persona when presenting itself to a Russian speaking audience (T0097.204: Think Tank Persona, T0152.004: Website, T0155.004: Geoblocked). |
| [I00113 Inside the Shadowy World of Disinformation for Hire in Kenya](../../generated_pages/incidents/I00113.md) | Researchers at Mozilla examined influence operations targeting Kenyan citizens on Twitter in 2021, providing <i>“a grim window into the booming and shadowy industry of Twitter influencers for political hire here in Kenya”.</i> The report touches upon how actors gained access to twitter accounts, and what personas they presented:<br><br><i>Verified accounts are complicit. One influencer we spoke to mentioned that the people who own coveted “blue check” accounts will often rent them out for disinformation campaigns. These verified accounts can improve the campaign’s chances of trending. Says one interviewee: “The owner of the account usually receives a cut of the campaign loot”.<br><br>[...]<br><br>Many of the accounts we examined appear to give an aura of authenticity, but in reality they are not authentic. Simply looking at their date of creation won’t give you a hint as to their purpose. We had to dig deeper. The profile pictures and content of some of the accounts gave us the answers we were looking for. A common tactic these accounts utilize is using suggestive pictures of women to bait men into following them, or at least pay attention. In terms of content, many of these accounts tweeted off the same hashtags for days on end and will constantly retweet a specific set of accounts.</i><br><br>Actors participating in this operation rented out verified Twitter accounts (in 2021 a checkmark on Twitter verified a user’s identity), which were repurposed and used updated account imagery (T0146.003: Verified Account, T0150.007: Rented, T0150.004: Repurposed, T00145.006: Attractive Person Account Imagery). |
| [I00125 The Agency](../../generated_pages/incidents/I00125.md) | In 2014 threat actors attributed to Russia spread the false narrative that a local chemical plant had leaked toxic fumes. This report discusses aspects of the operation:<br><br><i>[The chemical plant leak] hoax was just one in a wave of similar attacks during the second half of last year. On Dec. 13, two months after a handful of Ebola cases in the United States touched off a minor media panic, many of the same Twitter accounts used to spread the Columbian Chemicals hoax began to post about an outbreak of Ebola in Atlanta. [...] Again, the attention to detail was remarkable, suggesting a tremendous amount of effort. A YouTube video showed a team of hazmat-suited medical workers transporting a victim from the airport. Beyoncé’s recent single “7/11” played in the background, an apparent attempt to establish the video’s contemporaneity. A truck in the parking lot sported the logo of the Hartsfield-Jackson Atlanta International Airport.</i><br><br>Accounts which previously presented as Louisiana locals were repurposed for use in a different campaign, this time presenting as locals to Atlanta, a place over 500 miles away from Louisiana and in a different timezone (T0146: Account, T0097.101: Local Persona, T0143.002: Fabricated Persona, T0151.008: Microblogging Platform, T0150.004: Repurposed). <br><br>A video was created which appeared to support the campaign’s narrative (T0087: Develop Video-Based Content), with great attention given to small details which made the video appear more legitimate. |
| [I00072 Behind the Dutch Terror Threat Video: The St. Petersburg "Troll Factory" Connection](../../generated_pages/incidents/I00072.md) | <I>“The creator of Geopolitika[.]ru is Aleksandr Dugin, who was sanctioned by the United States Department of Treasury in 2015 for his role in the Eurasian Youth Union “for being responsible for or complicit in actions or policies that threaten the peace, security, stability, or sovereignty or territorial integrity of Ukraine.”<br><br> [...]<br><br> “Currently, the website geopolika[.]ru redirects directly to another partner website, Katehon.<br><br> “Katehon poses itself as a think tank focused on geopolitics in an English edition of its website. In contrast, in Russian, it states its aim to develop “ideological, political, diplomatic, economic and military strategy for Russia of the future” with a special role of religion. The president of Katehon’s supervisory board is Konstantin Malofeev, a Russian millionaire with connections to the Russian orthodox church and presidential administration, who founded Tsargrad TV, a known source of disinformation. Malofeev was sanctioned by the U.S. Department of Treasury and the European Union in 2014 for material support and financial backing of Russian-backed separatists in eastern Ukraine. Another known figure from the board is Sergei Glaziev, former advisor to Putin in 2012–2019. Dugin is also on the board in the Russian edition of the website, whereas he is omitted in English.”</i><br><br>In this example a domain managed by an actor previously sanctioned by the US Department of the Treasury has been reconfigured to redirect to another website, Katehon (T0149.004: Redirecting Domain Asset, T0150.004: Repurposed Asset).<br><br> Katehon presents itself as a geopolitical think tank in English, but does not maintain this persona when presenting itself to a Russian speaking audience (T0097.204: Think Tank Persona, T0152.004: Website Asset, T0155.004: Geoblocked Asset). |
| [I00113 Inside the Shadowy World of Disinformation for Hire in Kenya](../../generated_pages/incidents/I00113.md) | Researchers at Mozilla examined influence operations targeting Kenyan citizens on Twitter in 2021, providing <i>“a grim window into the booming and shadowy industry of Twitter influencers for political hire here in Kenya”.</i> The report touches upon how actors gained access to Twitter accounts, and what personas they presented:<br><br><i>Verified accounts are complicit. One influencer we spoke to mentioned that the people who own coveted “blue check” accounts will often rent them out for disinformation campaigns. These verified accounts can improve the campaign’s chances of trending. Says one interviewee: “The owner of the account usually receives a cut of the campaign loot”.<br><br>[...]<br><br>Many of the accounts we examined appear to give an aura of authenticity, but in reality they are not authentic. Simply looking at their date of creation won’t give you a hint as to their purpose. We had to dig deeper. The profile pictures and content of some of the accounts gave us the answers we were looking for. A common tactic these accounts utilize is using suggestive pictures of women to bait men into following them, or at least pay attention. In terms of content, many of these accounts tweeted off the same hashtags for days on end and will constantly retweet a specific set of accounts.</i><br><br>Actors participating in this operation rented out verified Twitter accounts (in 2021 a checkmark on Twitter verified a user’s identity), which were repurposed and used updated account imagery (T0146.003: Verified Account Asset, T0150.007: Rented Asset, T0150.004: Repurposed Asset, T0145.006: Attractive Person Account Imagery). |
| [I00125 The Agency](../../generated_pages/incidents/I00125.md) | In 2014 threat actors attributed to Russia spread the false narrative that a local chemical plant had leaked toxic fumes. This report discusses aspects of the operation:<br><br><i>[The chemical plant leak] hoax was just one in a wave of similar attacks during the second half of last year. On Dec. 13, two months after a handful of Ebola cases in the United States touched off a minor media panic, many of the same Twitter accounts used to spread the Columbian Chemicals hoax began to post about an outbreak of Ebola in Atlanta. [...] Again, the attention to detail was remarkable, suggesting a tremendous amount of effort. A YouTube video showed a team of hazmat-suited medical workers transporting a victim from the airport. Beyoncé’s recent single “7/11” played in the background, an apparent attempt to establish the video’s contemporaneity. A truck in the parking lot sported the logo of the Hartsfield-Jackson Atlanta International Airport.</i><br><br>Accounts which previously presented as Louisiana locals were repurposed for use in a different campaign, this time presenting as locals to Atlanta, a place over 500 miles away from Louisiana and in a different timezone (T0146: Account Asset, T0097.101: Local Persona, T0143.002: Fabricated Persona, T0151.008: Microblogging Platform, T0150.004: Repurposed Asset). <br><br>A video was created which appeared to support the campaign’s narrative (T0087: Develop Video-Based Content), with great attention given to small details which made the video appear more legitimate. |

View file

@ -1,4 +1,4 @@
# Technique T0150.005: Compromised
# Technique T0150.005: Compromised Asset
* **Summary**: A Compromised Asset is an asset which was originally created by or belonged to another person or organisation, but which an actor has gained access to without their consent.<br><br>See also MITRE ATT&CK T1078: Valid Accounts.
@ -7,10 +7,10 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00066 The online war between Qatar and Saudi Arabia](../../generated_pages/incidents/I00066.md) | _"In the early hours of 24 May 2017, a news story appeared on the website of Qatar's official news agency, QNA, reporting that the country's emir, Sheikh Tamim bin Hamad al-Thani, had made an astonishing speech."_ <br /> <br />_"[…]_ <br /> <br />_"Qatar claimed that the QNA had been hacked. And they said the hack was designed to deliberately spread fake news about the country's leader and its foreign policies. The Qataris specifically blamed UAE, an allegation later repeated by a Washington Post report which cited US intelligence sources. The UAE categorically denied those reports._ <br /> <br />_"But the story of the emir's speech unleashed a media free-for-all. Within minutes, Saudi and UAE-owned TV networks - Al Arabiya and Sky News Arabia - picked up on the comments attributed to al-Thani. Both networks accused Qatar of funding extremist groups and of destabilising the region."_ <br /> <br />This incident demonstrates how threat actors used a compromised website to lend an inauthentic narrative a level of credibility which caused significant political fallout (T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0152.004: Website, T0150.005: Compromised). |
| [I00071 Russia-aligned hacktivists stir up anti-Ukrainian sentiments in Poland](../../generated_pages/incidents/I00071.md) | <i>“The August 17 [2022] Telegram post [which contained a falsified letter from the Ukrainian Minister of Foreign Affairs asking Poland to rename Belwederska Street in Warsaw — the location of the Russian embassy building — as Stepan Bandera Street, in honor of the far-right nationalist who led the Ukrainian Insurgent Army during WWII] also contained screenshots of Facebook posts that appeared on two Facebook accounts belonging to Polish nationals Piotr Górka, an expert in the history of the Polish Air Force, and Dariusz Walusiak, a Polish historian and documentary maker. The Górka post suggested that he fully supported the Polish government’s decision to change Belwederska Street to Stepan Bandera Street.<br><br> “In a statement to the DFRLab, Górka said his account was accessed without his consent. “This is not my post loaded to my Facebook page,” he explained. “My site was hacked, some days ago.” At the time of publishing, Piotr Górka’s post and his Facebook account were no longer accessible.<br><br> “The post on Górka’s Facebook page was shared by Dariusz Walusiak’s Facebook account; the account also reposted it on the Facebook walls of more than twenty other Facebook users, including Adam Kalita, currently working at Krakow branch of the Institute of National Remembrance; Jan Kasprzyk, head of the Office for War Veterans and Victims of Oppression; and Alicja Kondraciuk, a Polish public figure living in Krakow.<br><br> “Walusiak’s Facebook account is also no longer accessible. Given his work on Polish history and identity, it seems highly unlikely he would support the Bandera measure; the DFRLab has also reached out to him for comment.<br><br> “The fact that Joker DPR’s Telegram post included screenshots of their Facebook posts raises the strong possibility that both Facebook accounts were compromised, and that hackers planted false statements on their pages that would seem out of character for them in order to gain further attention to the forged documents.”</I><br><br> In this example, threat actors used compromised accounts of Polish historians who have enough relevant knowledge to plausibly weigh in on the forged letter’s narrative (T0143.003: Impersonated Persona, T0097.101: Local Persona, T0097.108: Expert Persona, T0146: Account, T0150.005: Compromised, T0151.001: Social Media Platform). <br><br> This matches T0097.108: Expert Persona because the impersonation exploited Górka and Walusiak’s existing personas as experts in Polish history. |
| [I00101 Pro-Putin Disinformation Warriors Take War of Aggression to Reddit](../../generated_pages/incidents/I00101.md) | This report looks at changes in content posted to communities on Reddit (called Subreddits) after teams of voluntary moderators are replaced with what appear to be pro-Russian voices:<br><br><i>The r/antiwar subreddit appears to be a very recent takeover target. With 12,900 members it is not the largest community on Reddit, but this does place it squarely within the top 5% of all communities in terms of membership.<br><br>Three months ago a new moderator team was instated by subreddit head u/democracy101. Any posts documenting Russian aggression in Ukraine are now swiftly removed, while the board has been flooded with posts about how Ukraine is losing, or how American “neocons wrecked” the country.<br><br>The pinned post from moderator u/n0ahbody proclaims: “People who call for an end to Russian aggressions but not the Western aggressions Russia is reacting to don’t really want peace.” This user takes the view that any negative opinion about Russia is “shaped by what the fanatically Russophobic MSM wants you to think,” and that the United States is not threatened by its neighbors. Russia is.”<br><br>When u/n0ahbody took over the sub, the user posted a triumphant and vitriolic diatribe in another pro-Russia subreddit with some 33,500 members, r/EndlessWar. “We are making progress. We are purging the sub of all NAFO and NAFO-adjacent elements. Hundreds of them have been banned over the last 24 hours for various rule infractions, for being NAFO or NAFO-adjacent,” the user said, referencing the grassroots, pro-Ukrainian North Atlantic Fella Organization (NAFO) meme movement.<br><br>Several former users have reported they have indeed been banned from r/antiwar since the change in moderators. “If this subreddit cannot be explicitly against the invasion of Ukraine it will never truly be anti-war,” wrote one user Halcyon_Rein, in the antiwar subreddit on September 6. They then edited the post to say, “Edit: btw, I got f**king banned for this 💀💀💀”</i><br><br>A community hosted on Reddit was taken over by new moderators (T0151.011: Community Sub-Forum, T0150.005: Compromised). These moderators removed content posted to the community which favoured Ukraine over Russia (T0146.004: Administrator Account, T0151.011: Community Sub-Forum, T0124: Suppress Opposition). |
| [I00129 Teen who hacked Joe Biden and Bill Gates' Twitter accounts sentenced to three years in prison](../../generated_pages/incidents/I00129.md) | <i>An 18-year-old hacker who pulled off a huge breach in 2020, infiltrating several high profile Twitter accounts to solicit bitcoin transactions, has agreed to serve three years in prison for his actions.<br><br>Graham Ivan Clark, of Florida, was 17 years old at the time of the hack in July, during which he took over a number of major accounts including those of Joe Biden, Bill Gates and Kim Kardashian West.<br><br>Once he accessed them, Clark tweeted a link to a bitcoin address and wrote “all bitcoin sent to our address below will be sent back to you doubled!” According to court documents, Clark made more than $100,000 from the scheme, which his lawyers say he has since returned.<br><br>Clark was able to access the accounts after convincing an employee at Twitter he worked in the company’s information technology department, according to the Tampa Bay Times.</i><br><br>In this example a threat actor gained access to Twitter’s customer service portal through social engineering (T0146.004: Administrator Account, T0150.005: Compromised, T0151.008: Microblogging Platform), which they used to take over accounts of public figures (T0146.003: Verified Account, T0143.003: Impersonated Persona, T0150.005: Compromised, T0151.008: Microblogging Platform).<br><br>The threat actor used these compromised accounts to trick their followers into sending bitcoin to their wallet (T0148.009: Cryptocurrency Wallet). |
| [I00066 The online war between Qatar and Saudi Arabia](../../generated_pages/incidents/I00066.md) | _"In the early hours of 24 May 2017, a news story appeared on the website of Qatar's official news agency, QNA, reporting that the country's emir, Sheikh Tamim bin Hamad al-Thani, had made an astonishing speech."_ <br /> <br />_"[…]_ <br /> <br />_"Qatar claimed that the QNA had been hacked. And they said the hack was designed to deliberately spread fake news about the country's leader and its foreign policies. The Qataris specifically blamed UAE, an allegation later repeated by a Washington Post report which cited US intelligence sources. The UAE categorically denied those reports._ <br /> <br />_"But the story of the emir's speech unleashed a media free-for-all. Within minutes, Saudi and UAE-owned TV networks - Al Arabiya and Sky News Arabia - picked up on the comments attributed to al-Thani. Both networks accused Qatar of funding extremist groups and of destabilising the region."_ <br /> <br />This incident demonstrates how threat actors used a compromised website to lend an inauthentic narrative a level of credibility which caused significant political fallout (T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0152.004: Website Asset, T0150.005: Compromised Asset). |
| [I00071 Russia-aligned hacktivists stir up anti-Ukrainian sentiments in Poland](../../generated_pages/incidents/I00071.md) | <i>“The August 17 [2022] Telegram post [which contained a falsified letter from the Ukrainian Minister of Foreign Affairs asking Poland to rename Belwederska Street in Warsaw — the location of the Russian embassy building — as Stepan Bandera Street, in honor of the far-right nationalist who led the Ukrainian Insurgent Army during WWII] also contained screenshots of Facebook posts that appeared on two Facebook accounts belonging to Polish nationals Piotr Górka, an expert in the history of the Polish Air Force, and Dariusz Walusiak, a Polish historian and documentary maker. The Górka post suggested that he fully supported the Polish government’s decision to change Belwederska Street to Stepan Bandera Street.<br><br> “In a statement to the DFRLab, Górka said his account was accessed without his consent. “This is not my post loaded to my Facebook page,” he explained. “My site was hacked, some days ago.” At the time of publishing, Piotr Górka’s post and his Facebook account were no longer accessible.<br><br> “The post on Górka’s Facebook page was shared by Dariusz Walusiak’s Facebook account; the account also reposted it on the Facebook walls of more than twenty other Facebook users, including Adam Kalita, currently working at Krakow branch of the Institute of National Remembrance; Jan Kasprzyk, head of the Office for War Veterans and Victims of Oppression; and Alicja Kondraciuk, a Polish public figure living in Krakow.<br><br> “Walusiak’s Facebook account is also no longer accessible. Given his work on Polish history and identity, it seems highly unlikely he would support the Bandera measure; the DFRLab has also reached out to him for comment.<br><br> “The fact that Joker DPR’s Telegram post included screenshots of their Facebook posts raises the strong possibility that both Facebook accounts were compromised, and that hackers planted false statements on their pages that would seem out of character for them in order to gain further attention to the forged documents.”</I><br><br> In this example, threat actors used compromised accounts of Polish historians who have enough relevant knowledge to plausibly weigh in on the forged letter’s narrative (T0143.003: Impersonated Persona, T0097.101: Local Persona, T0097.108: Expert Persona, T0146: Account Asset, T0150.005: Compromised Asset, T0151.001: Social Media Platform). <br><br> This matches T0097.108: Expert Persona because the impersonation exploited Górka and Walusiak’s existing personas as experts in Polish history. |
| [I00101 Pro-Putin Disinformation Warriors Take War of Aggression to Reddit](../../generated_pages/incidents/I00101.md) | This report looks at changes in content posted to communities on Reddit (called Subreddits) after teams of voluntary moderators are replaced with what appear to be pro-Russian voices:<br><br><i>The r/antiwar subreddit appears to be a very recent takeover target. With 12,900 members it is not the largest community on Reddit, but this does place it squarely within the top 5% of all communities in terms of membership.<br><br>Three months ago a new moderator team was instated by subreddit head u/democracy101. Any posts documenting Russian aggression in Ukraine are now swiftly removed, while the board has been flooded with posts about how Ukraine is losing, or how American “neocons wrecked” the country.<br><br>The pinned post from moderator u/n0ahbody proclaims: “People who call for an end to Russian aggressions but not the Western aggressions Russia is reacting to don’t really want peace.” This user takes the view that any negative opinion about Russia is “shaped by what the fanatically Russophobic MSM wants you to think,” and that the United States is not threatened by its neighbors. Russia is.”<br><br>When u/n0ahbody took over the sub, the user posted a triumphant and vitriolic diatribe in another pro-Russia subreddit with some 33,500 members, r/EndlessWar. “We are making progress. We are purging the sub of all NAFO and NAFO-adjacent elements. Hundreds of them have been banned over the last 24 hours for various rule infractions, for being NAFO or NAFO-adjacent,” the user said, referencing the grassroots, pro-Ukrainian North Atlantic Fella Organization (NAFO) meme movement.<br><br>Several former users have reported they have indeed been banned from r/antiwar since the change in moderators. “If this subreddit cannot be explicitly against the invasion of Ukraine it will never truly be anti-war,” wrote one user Halcyon_Rein, in the antiwar subreddit on September 6. They then edited the post to say, “Edit: btw, I got f**king banned for this 💀💀💀”</i><br><br>A community hosted on Reddit was taken over by new moderators (T0151.011: Community Sub-Forum, T0150.005: Compromised Asset). These moderators removed content posted to the community which favoured Ukraine over Russia (T0146.004: Administrator Account Asset, T0151.011: Community Sub-Forum, T0124: Suppress Opposition). |
| [I00129 Teen who hacked Joe Biden and Bill Gates' Twitter accounts sentenced to three years in prison](../../generated_pages/incidents/I00129.md) | <i>An 18-year-old hacker who pulled off a huge breach in 2020, infiltrating several high profile Twitter accounts to solicit bitcoin transactions, has agreed to serve three years in prison for his actions.<br><br>Graham Ivan Clark, of Florida, was 17 years old at the time of the hack in July, during which he took over a number of major accounts including those of Joe Biden, Bill Gates and Kim Kardashian West.<br><br>Once he accessed them, Clark tweeted a link to a bitcoin address and wrote “all bitcoin sent to our address below will be sent back to you doubled!” According to court documents, Clark made more than $100,000 from the scheme, which his lawyers say he has since returned.<br><br>Clark was able to access the accounts after convincing an employee at Twitter he worked in the company’s information technology department, according to the Tampa Bay Times.</i><br><br>In this example a threat actor gained access to Twitter’s customer service portal through social engineering (T0146.004: Administrator Account Asset, T0150.005: Compromised Asset, T0151.008: Microblogging Platform), which they used to take over accounts of public figures (T0146.003: Verified Account Asset, T0143.003: Impersonated Persona, T0150.005: Compromised Asset, T0151.008: Microblogging Platform).<br><br>The threat actor used these compromised accounts to trick their followers into sending bitcoin to their wallet (T0148.009: Cryptocurrency Wallet). |

View file

@ -1,4 +1,4 @@
# Technique T0150.006: Purchased
# Technique T0150.006: Purchased Asset
* **Summary**: A Purchased Asset is an asset which actors have paid to own. <br><br>For example, threat actors have been observed selling compromised social media accounts on dark web marketplaces, which can be used to disguise operation activity.
@ -7,7 +7,7 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00109 Coordinated Facebook Pages Designed to Fund a White Supremacist Agenda](../../generated_pages/incidents/I00109.md) | This report examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at technical indicators associated with the Suavelos website, and attributions which can be made as a consequence:<i><br><br>[The Google AdSense tag set up on Suavelos.eu was also found on the following domains, indicating that they are controlled by the same actor;] Alabastro.eu: an online shop to buy “white nationalists” t-shirts [and] ARPAC.eu: the website of a registered non-profit organisation advocating to lift regulation on gun control in France.<br><br>Other domains attributed to Suavelos (T0149.001: Domain) reveal a website set up to sell merchandise (T0152.004: Website, T0148.004: Payment Processing Capability, T0061: Sell Merchandise), and a website hosting a registered French non-profit (T0152.004: Website, T0097.207: NGO Persona).<br><br>To learn more about the suavelos.eu domain, we collected the following data: The domain is hosted on OVH; The owner’s identity is protected; The IP Address of the server is 94.23.253.173, which is shared with 20 other domains. <br><br>The relative low number of websites hosted on this IP address could indicate that they all belong to the same people, and are hosted on the same private server.</i><br><br>Suavelos registered a domain using the web hosting provider OVH (T0149.001: Domain, T0152.003: Website Hosting Platform, T0150.006: Purchased). The site’s IP address reveals a server hosting other domains potentially owned by the actors (T0149.005: Server, T0149.006: IP Address). |
| [I00109 Coordinated Facebook Pages Designed to Fund a White Supremacist Agenda](../../generated_pages/incidents/I00109.md) | This report examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at technical indicators associated with the Suavelos website, and attributions which can be made as a consequence:<i><br><br>[The Google AdSense tag set up on Suavelos.eu was also found on the following domains, indicating that they are controlled by the same actor;] Alabastro.eu: an online shop to buy “white nationalists” t-shirts [and] ARPAC.eu: the website of a registered non-profit organisation advocating to lift regulation on gun control in France.<br><br>Other domains attributed to Suavelos (T0149.001: Domain Asset) reveal a website set up to sell merchandise (T0152.004: Website Asset, T0148.004: Payment Processing Capability, T0061: Sell Merchandise), and a website hosting a registered French non-profit (T0152.004: Website Asset, T0097.207: NGO Persona).<br><br>To learn more about the suavelos.eu domain, we collected the following data: The domain is hosted on OVH; The owner’s identity is protected; The IP Address of the server is 94.23.253.173, which is shared with 20 other domains. <br><br>The relative low number of websites hosted on this IP address could indicate that they all belong to the same people, and are hosted on the same private server.</i><br><br>Suavelos registered a domain using the web hosting provider OVH (T0149.001: Domain Asset, T0152.003: Website Hosting Platform, T0150.006: Purchased Asset). The site’s IP address reveals a server hosting other domains potentially owned by the actors (T0149.005: Server Asset, T0149.006: IP Address Asset). |

View file

@ -1,4 +1,4 @@
# Technique T0150.007: Rented
# Technique T0150.007: Rented Asset
* **Summary**: A Rented Asset is an asset which actors are temporarily renting or subscribing to. <br><br>For example, threat actors have been observed renting temporary access to legitimate accounts on online platforms in order to disguise operation activity.
@ -7,8 +7,8 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00064 Tinder nightmares: the promise and peril of political bots](../../generated_pages/incidents/I00064.md) | <I>“In the days leading up to the UK’s [2019] general election, youths looking for love online encountered a whole new kind of Tinder nightmare. A group of young activists built a Tinder chatbot to co-opt profiles and persuade swing voters to support Labour. The bot accounts sent 30,000-40,000 messages to targeted 18-25 year olds in battleground constituencies like Dudley North, which Labour ended up winning by only 22 votes. [...]<br><br> “The activists maintain that the project was meant to foster democratic engagement. But screenshots of the bots’ activity expose a harsher reality. Images of conversations between real users and these bots, posted on i-D, Mashable, as well as on Fowler and Goodman’s public Twitter accounts, show that the bots did not identify themselves as automated accounts, instead posing as the user whose profile they had taken over. While conducting research for this story, it turned out that a number of [the reporter’s friends] living in Oxford had interacted with the bot in the lead up to the election and had no idea that it was not a real person.”</i><br><br> In this example people offered up their real accounts for the automation of political messaging; the actors convinced the users to give up access to their accounts to use in the operation. The actors maintained the accounts’ existing personas, and presented themselves as potential romantic suitors for legitimate platform users (T0097.109: Romantic Suitor Persona, T0143.003: Impersonated Persona, T0146: Account, T0150.007: Rented, T0151.017: Dating Platform). |
| [I00113 Inside the Shadowy World of Disinformation for Hire in Kenya](../../generated_pages/incidents/I00113.md) | Researchers at Mozilla examined influence operations targeting Kenyan citizens on Twitter in 2021, providing <i>“a grim window into the booming and shadowy industry of Twitter influencers for political hire here in Kenya”.</i> The report touches upon how actors gained access to Twitter accounts, and what personas they presented:<br><br><i>Verified accounts are complicit. One influencer we spoke to mentioned that the people who own coveted “blue check” accounts will often rent them out for disinformation campaigns. These verified accounts can improve the campaign’s chances of trending. Says one interviewee: “The owner of the account usually receives a cut of the campaign loot”.<br><br>[...]<br><br>Many of the accounts we examined appear to give an aura of authenticity, but in reality they are not authentic. Simply looking at their date of creation won’t give you a hint as to their purpose. We had to dig deeper. The profile pictures and content of some of the accounts gave us the answers we were looking for. A common tactic these accounts utilize is using suggestive pictures of women to bait men into following them, or at least pay attention. In terms of content, many of these accounts tweeted off the same hashtags for days on end and will constantly retweet a specific set of accounts.</i><br><br>Actors participating in this operation rented out verified Twitter accounts (in 2021 a checkmark on Twitter verified a user’s identity), which were repurposed and used updated account imagery (T0146.003: Verified Account, T0150.007: Rented, T0150.004: Repurposed, T0145.006: Attractive Person Account Imagery). |
| [I00064 Tinder nightmares: the promise and peril of political bots](../../generated_pages/incidents/I00064.md) | <I>“In the days leading up to the UK's [2019] general election, youths looking for love online encountered a whole new kind of Tinder nightmare. A group of young activists built a Tinder chatbot to co-opt profiles and persuade swing voters to support Labour. The bot accounts sent 30,000-40,000 messages to targeted 18-25 year olds in battleground constituencies like Dudley North, which Labour ended up winning by only 22 votes. [...]<br><br> “The activists maintain that the project was meant to foster democratic engagement. But screenshots of the bots' activity expose a harsher reality. Images of conversations between real users and these bots, posted on i-D, Mashable, as well as on Fowler and Goodman's public Twitter accounts, show that the bots did not identify themselves as automated accounts, instead posing as the user whose profile they had taken over. While conducting research for this story, it turned out that a number of [the reporter's friends] living in Oxford had interacted with the bot in the lead up to the election and had no idea that it was not a real person.”</i><br><br> In this example people offered up their real accounts for the automation of political messaging; the actors convinced the users to give up access to their accounts to use in the operation. The actors maintained the accounts' existing personas, and presented themselves as potential romantic suitors for legitimate platform users (T0097.109: Romantic Suitor Persona, T0143.003: Impersonated Persona, T0146: Account Asset, T0150.007: Rented Asset, T0151.017: Dating Platform). |
| [I00113 Inside the Shadowy World of Disinformation for Hire in Kenya](../../generated_pages/incidents/I00113.md) | Researchers at Mozilla examined influence operations targeting Kenyan citizens on Twitter in 2021, providing <i>“a grim window into the booming and shadowy industry of Twitter influencers for political hire here in Kenya”.</i> The report touches upon how actors gained access to Twitter accounts, and what personas they presented:<br><br><i>Verified accounts are complicit. One influencer we spoke to mentioned that the people who own coveted “blue check” accounts will often rent them out for disinformation campaigns. These verified accounts can improve the campaign's chances of trending. Says one interviewee: “The owner of the account usually receives a cut of the campaign loot”.<br><br>[...]<br><br>Many of the accounts we examined appear to give an aura of authenticity, but in reality they are not authentic. Simply looking at their date of creation won't give you a hint as to their purpose. We had to dig deeper. The profile pictures and content of some of the accounts gave us the answers we were looking for. A common tactic these accounts utilize is using suggestive pictures of women to bait men into following them, or at least pay attention. In terms of content, many of these accounts tweeted off the same hashtags for days on end and will constantly retweet a specific set of accounts.</i><br><br>Actors participating in this operation rented out verified Twitter accounts (in 2021 a checkmark on Twitter verified a user's identity), which were repurposed and given updated account imagery (T0146.003: Verified Account Asset, T0150.007: Rented Asset, T0150.004: Repurposed Asset, T0145.006: Attractive Person Account Imagery). |

View file

@ -1,4 +1,4 @@
# Technique T0150.008: Bulk Created
# Technique T0150.008: Bulk Created Asset
* **Summary**: A Bulk Created Asset is an asset which was created alongside many other instances of the same asset.<br><br>Actors have been observed bulk creating Accounts on Social Media Platforms such as Facebook. Indicators of bulk asset creation include an asset's creation date, its naming conventions, its configuration (e.g. templated personas, visually similar profile pictures), or its activity (e.g. post timings, narratives posted).
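The indicators listed in the summary (shared creation dates, templated naming conventions) lend themselves to simple clustering heuristics. Below is a minimal, hypothetical sketch of that idea; the account data, template reduction, and `min_cluster` threshold are all illustrative assumptions, not part of the framework:

```python
# Hypothetical sketch: flag groups of accounts that may have been bulk created
# (T0150.008) using two indicators named above -- a shared creation date and a
# shared username naming template. Thresholds and data are illustrative only.
from collections import defaultdict
from datetime import date
import re

def naming_template(username: str) -> str:
    """Reduce a username to a coarse template, e.g. 'jane84321' -> '<alpha><digits>'."""
    return re.sub(r"\d+", "<digits>", re.sub(r"[A-Za-z]+", "<alpha>", username))

def flag_bulk_created(accounts, min_cluster: int = 3):
    """Group accounts by (creation date, naming template); keep clusters at or above the threshold."""
    clusters = defaultdict(list)
    for username, created in accounts:
        clusters[(created, naming_template(username))].append(username)
    return {key: names for key, names in clusters.items() if len(names) >= min_cluster}

accounts = [
    ("jane84321", date(2024, 3, 1)),
    ("mark19452", date(2024, 3, 1)),
    ("lucy77310", date(2024, 3, 1)),
    ("real_person", date(2019, 6, 12)),
]
print(flag_bulk_created(accounts))
# One cluster: three accounts created 2024-03-01 sharing the '<alpha><digits>' template
```

In practice analysts combine many more signals (profile imagery similarity, post timing correlation), but the grouping step usually looks like this.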

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View file

@ -7,8 +7,8 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00106 Facebook Is Being Flooded With Gross AI-Generated Images of Hurricane Helene Devastation](../../generated_pages/incidents/I00106.md) | <i>As families desperately seek to find missing loved ones and communities grapple with immeasurable losses of both life and property in the wake of [2024s] Hurricane Helene, AI slop scammers appear to be capitalizing on the moment for personal gain.<br><br>A Facebook account called "Coastal Views" usually shares calmer AI imagery of nature-filled beachside scenes. The account's banner image showcases a signpost reading "OBX Live," OBX being shorthand for North Carolina's Outer Banks islands.<br><br>But starting this weekend, the account shifted its approach dramatically, as first flagged by a social media user on X.<br><br>Instead of posting "photos" of leaping dolphins and sandy beaches, the account suddenly started publishing images of flooded mountain neighborhoods, submerged houses, and dogs sitting on top of roofs.<br><br>But instead of spreading vital information to those affected by the natural disaster, or at the very least sharing real photos of the destruction, the account is seemingly trying to use AI to cash in on all the attention the hurricane has been getting.<br><br>The account links to an Etsy page for a business called" OuterBanks2023," where somebody who goes by "Alexandr" sells AI-generated prints of horses touching snouts with sea turtles, Santa running down the shoreline with a reindeer, and sunsets over ocean waves.</i><br><br>A Facebook page which presented itself as being associated with North Carolina which posted AI generated images changed to posting AI generated images of hurricane damage after Hurricane Helene hit North Carolina (T0151.003: Online Community Page, T0151.001: Social Media Platform, T0115: Post Content, T0086.002: Develop AI-Generated Images (Deepfakes), T0068: Respond to Breaking News Event or Active Crisis). 
<br><br>The account included links (T0122: Direct Users to Alternative Platforms) to an account on Etsy, which sold prints of AI generated images (T0146: Account, T0148.007: eCommerce Platform). |
| [I00108 How you thought you support the animals and you ended up funding white supremacists](../../generated_pages/incidents/I00108.md) | <i>This article examines the white nationalist group Suavelos use of Facebook to draw visitors to its website without overtly revealing their racist ideology:<br><br>Suavelos uses Facebook and other platforms to amplify its message. In order to bypass the platforms community standards and keep their public pages active, Facebook pages such as “I support the police” are a good vehicle to spread a specific agenda without claiming to be racist. In looking back at this Facebook page, we followed Facebooks algorithm for related pages and found suggested Facebook pages<br><br>[...]<br><br>This amplification strategy on Facebook is successful, as according to SimilarWeb figures, it attracts around 111,000 visits every month on the Suavelos.eu website.<br><br>[...]<br><br>Revenue through online advertisements can be achieved by different platforms through targeted advertisements, like Google Adsense or Doubleclick, or related and similar sponsored content, such as Taboola. Accordingly, Suavelos.eu uses both of these websites to display advertisements and consequently receives funding from such advertisements.<br><br>Once visitors are on the website supporting its advertisement revenue, Suavelos goal is to then turn these visitors into regular members of Suavelos network through donations or fees, or have them continue to support Suavelos. </i><br><br>Suevelos created a variety of pages on Facebook which presented as centring on prosocial causes. 
Facebooks algorithm helped direct users to these pages (T0092: Build Network, T0151.001: Social Media Platform, T0153.006: Content Recommendation Algorithm, T0151.003: Online Community Page, T0143.208: Social Cause Persona).<br><br>Suevelos used these pages to generate traffic for their WordPress site (T0122: Direct Users to Alternative Platforms, T0152.003: Website Hosting Platform, T0152.004: Website), which used accounts on a variety of online advertising platforms to host adverts (T0146: Account, T0153.005: Online Advertising Platform). |
| [I00106 Facebook Is Being Flooded With Gross AI-Generated Images of Hurricane Helene Devastation](../../generated_pages/incidents/I00106.md) | <i>As families desperately seek to find missing loved ones and communities grapple with immeasurable losses of both life and property in the wake of [2024's] Hurricane Helene, AI slop scammers appear to be capitalizing on the moment for personal gain.<br><br>A Facebook account called "Coastal Views" usually shares calmer AI imagery of nature-filled beachside scenes. The account's banner image showcases a signpost reading "OBX Live," OBX being shorthand for North Carolina's Outer Banks islands.<br><br>But starting this weekend, the account shifted its approach dramatically, as first flagged by a social media user on X.<br><br>Instead of posting "photos" of leaping dolphins and sandy beaches, the account suddenly started publishing images of flooded mountain neighborhoods, submerged houses, and dogs sitting on top of roofs.<br><br>But instead of spreading vital information to those affected by the natural disaster, or at the very least sharing real photos of the destruction, the account is seemingly trying to use AI to cash in on all the attention the hurricane has been getting.<br><br>The account links to an Etsy page for a business called "OuterBanks2023," where somebody who goes by "Alexandr" sells AI-generated prints of horses touching snouts with sea turtles, Santa running down the shoreline with a reindeer, and sunsets over ocean waves.</i><br><br>A Facebook page which presented itself as being associated with North Carolina, and which had posted AI-generated imagery of nature scenes, changed to posting AI-generated images of hurricane damage after Hurricane Helene hit North Carolina (T0151.003: Online Community Page, T0151.001: Social Media Platform, T0115: Post Content, T0086.002: Develop AI-Generated Images (Deepfakes), T0068: Respond to Breaking News Event or Active Crisis).<br><br>The account included links (T0122: Direct Users to Alternative Platforms) to an account on Etsy, which sold prints of AI-generated images (T0146: Account Asset, T0148.007: eCommerce Platform). |
| [I00108 How you thought you support the animals and you ended up funding white supremacists](../../generated_pages/incidents/I00108.md) | <i>This article examines the white nationalist group Suavelos' use of Facebook to draw visitors to its website without overtly revealing their racist ideology:<br><br>Suavelos uses Facebook and other platforms to amplify its message. In order to bypass the platforms' community standards and keep their public pages active, Facebook pages such as “I support the police” are a good vehicle to spread a specific agenda without claiming to be racist. In looking back at this Facebook page, we followed Facebook's algorithm for related pages and found suggested Facebook pages<br><br>[...]<br><br>This amplification strategy on Facebook is successful, as according to SimilarWeb figures, it attracts around 111,000 visits every month on the Suavelos.eu website.<br><br>[...]<br><br>Revenue through online advertisements can be achieved by different platforms through targeted advertisements, like Google Adsense or Doubleclick, or related and similar sponsored content, such as Taboola. Accordingly, Suavelos.eu uses both of these websites to display advertisements and consequently receives funding from such advertisements.<br><br>Once visitors are on the website supporting its advertisement revenue, Suavelos' goal is to then turn these visitors into regular members of the Suavelos network through donations or fees, or have them continue to support Suavelos.</i><br><br>Suavelos created a variety of pages on Facebook which presented themselves as centring on prosocial causes. Facebook's algorithm helped direct users to these pages (T0092: Build Network, T0151.001: Social Media Platform, T0153.006: Content Recommendation Algorithm, T0151.003: Online Community Page, T0143.208: Social Cause Persona).<br><br>Suavelos used these pages to generate traffic for their WordPress site (T0122: Direct Users to Alternative Platforms, T0152.003: Website Hosting Platform, T0152.004: Website Asset), which used accounts on a variety of online advertising platforms to host adverts (T0146: Account Asset, T0153.005: Online Advertising Platform). |

View file

@ -7,10 +7,9 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00006 Columbian Chemicals](../../generated_pages/incidents/I00006.md) | Use SMS/text messages |
| [I00068 Attempted Audio Deepfake Call Targets LastPass Employee](../../generated_pages/incidents/I00068.md) | <i>“While reports of [...] deepfake calls targeting private companies are luckily still rare, LastPass itself experienced a deepfake attempt earlier today that we are sharing with the larger community to help raise awareness that this tactic is spreading and all companies should be on the alert. In our case, an employee received a series of calls, texts, and at least one voicemail featuring an audio deepfake from a threat actor impersonating our CEO via WhatsApp. As the attempted communication was outside of normal business communication channels and due to the employees suspicion regarding the presence of many of the hallmarks of a social engineering attempt (such as forced urgency), our employee rightly ignored the messages and reported the incident to our internal security team so that we could take steps to both mitigate the threat and raise awareness of the tactic both internally and externally.”</i><br><br>In this example attackers created an account on WhatsApp which impersonated the CEO of lastpass (T0097.100: Individual Persona, T0143.003: Impersonated Persona, T0146: Account, T0151.004: Chat Platform, T0155.007: Encrypted Communication Channel). They used this asset to target an employee using deepfaked audio (T0088.001: Develop AI-Generated Audio (Deepfakes)). |
| [I00084 Russia turns its diplomats into disinformation warriors](../../generated_pages/incidents/I00084.md) | <I>“[Russias social media] reach isn't the same as Russian state media, but they are trying to recreate what RT and Sputnik had done," said one EU official involved in tracking Russian disinformation. "It's a coordinated effort that goes beyond social media and involves specific websites."<br><br> “Central to that wider online playbook is a Telegram channel called Warfakes and an affiliated website. Since the beginning of the conflict, that social media channel has garnered more than 725,000 members and repeatedly shares alleged fact-checks aimed at debunking Ukrainian narratives, using language similar to Western-style fact-checking outlets.”</i><br><br> In this example a Telegram channel (T0151.004: Chat Platform, T0155.007: Encrypted Communication Channel) was established which presented itself as a source of fact checks (T0097.203: Fact Checking Organisation Persona). |
| [I00113 Inside the Shadowy World of Disinformation for Hire in Kenya](../../generated_pages/incidents/I00113.md) | Researchers at Mozilla examined influence operations targeting Kenyan citizens on Twitter in 2021, providing “a grim window into the booming and shadowy industry of Twitter influencers for political hire here in Kenya”, and giving insight into operations operationalisation:<br><br><i>In our interviews with one of the influencers, they informed us of the agile tactics they use to organize and avoid detection. For example, when its time to carry out the campaign the influencers would be added to a Whatsapp group. Here, they received direction about what to post, the hashtags to use, which tweets to engage with and who to target. Synchronizing the tweets was also incredibly important for them. Its what enables them to achieve their goal of trending on Twitter and gain amplification.<br><br>[...]<br><br>They revealed to us that those participating in the exercise are paid roughly between $10 and $15 to participate in three campaigns per day. Each campaign execution involves tweeting about the hashtags of the day until it appears on the trending section of Twitter. Additionally, some individuals have managed to reach retainer level and get paid about $250 per month. Their job is to make sure the campaigns are executed on a day-by-day basis with different hashtags.</i><br><br>An M-PESA account (T0148.002: Bank Account, T0148.001: Online Banking Platform) was used to pay campaign participants.<br><br>Participants were organised in WhatsApp groups (T0129.005: Coordinate on Encrypted/Closed Networks, T0151.007: Chat Broadcast Group, T0151.004: Chat Platform), in which they planned how to get campaign content trending on Twitter (T0121: Manipulate Platform Algorithm, T0151.008: Microblogging Platform). |
| [I00068 Attempted Audio Deepfake Call Targets LastPass Employee](../../generated_pages/incidents/I00068.md) | <i>“While reports of [...] deepfake calls targeting private companies are luckily still rare, LastPass itself experienced a deepfake attempt earlier today that we are sharing with the larger community to help raise awareness that this tactic is spreading and all companies should be on the alert. In our case, an employee received a series of calls, texts, and at least one voicemail featuring an audio deepfake from a threat actor impersonating our CEO via WhatsApp. As the attempted communication was outside of normal business communication channels and due to the employee's suspicion regarding the presence of many of the hallmarks of a social engineering attempt (such as forced urgency), our employee rightly ignored the messages and reported the incident to our internal security team so that we could take steps to both mitigate the threat and raise awareness of the tactic both internally and externally.”</i><br><br>In this example attackers created an account on WhatsApp which impersonated the CEO of LastPass (T0097.100: Individual Persona, T0143.003: Impersonated Persona, T0146: Account Asset, T0151.004: Chat Platform, T0155.007: Encrypted Communication Channel). They used this asset to target an employee using deepfaked audio (T0088.001: Develop AI-Generated Audio (Deepfakes)). |
| [I00084 Russia turns its diplomats into disinformation warriors](../../generated_pages/incidents/I00084.md) | <I>“[Russia's social media] reach isn't the same as Russian state media, but they are trying to recreate what RT and Sputnik had done," said one EU official involved in tracking Russian disinformation. "It's a coordinated effort that goes beyond social media and involves specific websites."<br><br> “Central to that wider online playbook is a Telegram channel called Warfakes and an affiliated website. Since the beginning of the conflict, that social media channel has garnered more than 725,000 members and repeatedly shares alleged fact-checks aimed at debunking Ukrainian narratives, using language similar to Western-style fact-checking outlets.”</i><br><br> In this example a Telegram channel (T0151.004: Chat Platform, T0155.007: Encrypted Communication Channel) was established which presented itself as a source of fact checks (T0097.203: Fact Checking Organisation Persona). |
| [I00113 Inside the Shadowy World of Disinformation for Hire in Kenya](../../generated_pages/incidents/I00113.md) | Researchers at Mozilla examined influence operations targeting Kenyan citizens on Twitter in 2021, providing “a grim window into the booming and shadowy industry of Twitter influencers for political hire here in Kenya”, and giving insight into the operation's operationalisation:<br><br><i>In our interviews with one of the influencers, they informed us of the agile tactics they use to organize and avoid detection. For example, when it's time to carry out the campaign the influencers would be added to a Whatsapp group. Here, they received direction about what to post, the hashtags to use, which tweets to engage with and who to target. Synchronizing the tweets was also incredibly important for them. It's what enables them to achieve their goal of trending on Twitter and gain amplification.<br><br>[...]<br><br>They revealed to us that those participating in the exercise are paid roughly between $10 and $15 to participate in three campaigns per day. Each campaign execution involves tweeting about the hashtags of the day until it appears on the trending section of Twitter. Additionally, some individuals have managed to reach retainer level and get paid about $250 per month. Their job is to make sure the campaigns are executed on a day-by-day basis with different hashtags.</i><br><br>An M-PESA account (T0148.002: Bank Account Asset, T0148.001: Online Banking Platform) was used to pay campaign participants.<br><br>Participants were organised in WhatsApp groups (T0129.005: Coordinate on Encrypted/Closed Networks, T0151.007: Chat Broadcast Group, T0151.004: Chat Platform), in which they planned how to get campaign content trending on Twitter (T0121: Manipulate Platform Algorithm, T0151.008: Microblogging Platform). |
| [I00122 The Extreme Right on Discord](../../generated_pages/incidents/I00122.md) | Discord is an example of a T0151.004: Chat Platform, which allows users to create their own T0151.005: Chat Community Server. The Institute for Strategic Dialogue (ISD) conducted an investigation into the extreme right's usage of Discord servers:<br><br><i>Discord is a free service accessible via phones and computers. It allows users to talk to each other in real time via voice, text or video chat and emerged in 2015 as a platform designed to assist gamers in communicating with each other while playing video games. The popularity of the platform has surged in recent years, and it is currently estimated to have 140 million monthly active users.<br><br>Chatrooms known as “servers” can be created by anyone on the platform, and they are used for a range of purposes that extend far beyond gaming. Such purposes include the discussion of extreme right-wing ideologies and the planning of offline extremist activity. Ahead of the far-right Unite the Right rally in Charlottesville, Virginia, in August 2017, organisers used Discord to plan and promote events and posted swastikas and praised Hitler in chat rooms with names like “National Socialist Army” and “Führer's Gas Chamber”.</i><br><br>In this example a Discord server was used to organise the 2017 Charlottesville Unite the Right rally. Chat rooms in the server were used to discuss different topics related to the rally (T0057: Organise Events, T0126.002: Facilitate Logistics or Support for Attendance, T0151.004: Chat Platform, T0151.005: Chat Community Server, T0151.006: Chat Room).<br><br><i>Another primary activity engaged in the servers analysed are raids against other servers associated with political opponents, and in particular those that appear to be pro-LGBTQ. Raids are a phenomenon in which a small group of users will join a Discord server with the sole purpose of spamming the host with offensive or incendiary messages and content with the aim of upsetting local users or having the host server banned by Discord. On two servers examined here, raiding was their primary function.<br><br>Among servers devoted to this activity, specific channels were often created to host links to servers that users were then encouraged to raid. Users are encouraged to be as offensive as possible with the aim of upsetting or angering users on the raided server, and channels often had content banks of offensive memes and content to be shared on raided servers.<br><br>The use of raids demonstrates the gamified nature of extremist activity on Discord, where use of the platform and harassment of political opponents is itself turned into a type of real-life video game designed to strengthen in-group affiliation. This combined with the broader extremist activity identified in these channels suggests that the combative activity of raiding could provide a pathway for younger people to become more engaged with extremist activity.</i><br><br>Discord servers were used by members of the extreme right to coordinate harassment of targeted communities (T0048: Harass, T0049.005: Conduct Swarming, T0151.004: Chat Platform, T0151.005: Chat Community Server). |

View file

@ -7,7 +7,7 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00113 Inside the Shadowy World of Disinformation for Hire in Kenya](../../generated_pages/incidents/I00113.md) | Researchers at Mozilla examined influence operations targeting Kenyan citizens on Twitter in 2021, providing “a grim window into the booming and shadowy industry of Twitter influencers for political hire here in Kenya”, and giving insight into operations operationalisation:<br><br><i>In our interviews with one of the influencers, they informed us of the agile tactics they use to organize and avoid detection. For example, when its time to carry out the campaign the influencers would be added to a Whatsapp group. Here, they received direction about what to post, the hashtags to use, which tweets to engage with and who to target. Synchronizing the tweets was also incredibly important for them. Its what enables them to achieve their goal of trending on Twitter and gain amplification.<br><br>[...]<br><br>They revealed to us that those participating in the exercise are paid roughly between $10 and $15 to participate in three campaigns per day. Each campaign execution involves tweeting about the hashtags of the day until it appears on the trending section of Twitter. Additionally, some individuals have managed to reach retainer level and get paid about $250 per month. Their job is to make sure the campaigns are executed on a day-by-day basis with different hashtags.</i><br><br>An M-PESA account (T0148.002: Bank Account, T0148.001: Online Banking Platform) was used to pay campaign participants.<br><br>Participants were organised in WhatsApp groups (T0129.005: Coordinate on Encrypted/Closed Networks, T0151.007: Chat Broadcast Group, T0151.004: Chat Platform), in which they planned how to get campaign content trending on Twitter (T0121: Manipulate Platform Algorithm, T0151.008: Microblogging Platform). |
| [I00113 Inside the Shadowy World of Disinformation for Hire in Kenya](../../generated_pages/incidents/I00113.md) | Researchers at Mozilla examined influence operations targeting Kenyan citizens on Twitter in 2021, providing “a grim window into the booming and shadowy industry of Twitter influencers for political hire here in Kenya”, and giving insight into the operation's operationalisation:<br><br><i>In our interviews with one of the influencers, they informed us of the agile tactics they use to organize and avoid detection. For example, when it's time to carry out the campaign the influencers would be added to a Whatsapp group. Here, they received direction about what to post, the hashtags to use, which tweets to engage with and who to target. Synchronizing the tweets was also incredibly important for them. It's what enables them to achieve their goal of trending on Twitter and gain amplification.<br><br>[...]<br><br>They revealed to us that those participating in the exercise are paid roughly between $10 and $15 to participate in three campaigns per day. Each campaign execution involves tweeting about the hashtags of the day until it appears on the trending section of Twitter. Additionally, some individuals have managed to reach retainer level and get paid about $250 per month. Their job is to make sure the campaigns are executed on a day-by-day basis with different hashtags.</i><br><br>An M-PESA account (T0148.002: Bank Account Asset, T0148.001: Online Banking Platform) was used to pay campaign participants.<br><br>Participants were organised in WhatsApp groups (T0129.005: Coordinate on Encrypted/Closed Networks, T0151.007: Chat Broadcast Group, T0151.004: Chat Platform), in which they planned how to get campaign content trending on Twitter (T0121: Manipulate Platform Algorithm, T0151.008: Microblogging Platform). |

Some files were not shown because too many files have changed in this diff