diff --git a/CODE/generate_DISARM_pages.py b/CODE/generate_DISARM_pages.py index 80a19df..3e5ecbf 100644 --- a/CODE/generate_DISARM_pages.py +++ b/CODE/generate_DISARM_pages.py @@ -540,8 +540,14 @@ class Disarm: tactic=row['tactic_id'], summary=row['summary']) if objecttype == 'technique': tactic_name = self.df_tactics.loc[self.df_tactics['disarm_id'] == row['tactic_id'], 'name'].values[0] + if "." in row['disarm_id']: + parent_technique_id = row['disarm_id'].split(".")[0] + parent_technique_name = self.df_techniques.loc[self.df_techniques['disarm_id'] == parent_technique_id, 'name'].values[0] + parent_technique = "**Parent Technique:** " + parent_technique_id + ' ' + parent_technique_name + else: + parent_technique = '' metatext = template.format(type = 'Technique', id=row['disarm_id'], name=row['name'], - tactic=f"{row['tactic_id']} {tactic_name}", summary=row['summary'], + tactic=f"{row['tactic_id']} {tactic_name} {parent_technique}", summary=row['summary'], associatedtechniques=self.create_associated_techniques_string(row['disarm_id']), incidents=self.create_technique_incidents_string(row['disarm_id']), counters=self.create_technique_counters_string(row['disarm_id'])) @@ -583,7 +589,7 @@ class Disarm: print('Updating {}'.format(datafile)) with open(datafile, 'w') as f: f.write(metatext) - f.write(warntext) + #f.write(warntext) f.write(usertext) f.close() return diff --git a/DISARM_MASTER_DATA/DISARM_DATA_MASTER.xlsx b/DISARM_MASTER_DATA/DISARM_DATA_MASTER.xlsx index e3f3deb..8f02c93 100644 Binary files a/DISARM_MASTER_DATA/DISARM_DATA_MASTER.xlsx and b/DISARM_MASTER_DATA/DISARM_DATA_MASTER.xlsx differ diff --git a/generated_pages/incidents/I00071.md b/generated_pages/incidents/I00071.md index 3f71200..2c5a957 100644 --- a/generated_pages/incidents/I00071.md +++ b/generated_pages/incidents/I00071.md @@ -23,11 +23,9 @@ | --------- | ------------------------- | | [T0085.004 Develop Document](../../generated_pages/techniques/T0085.004.md) | “On August 16, 2022, pro-Kremlin Telegram channel Joker DPR (Джокер ДНР) published a forged letter allegedly written by Ukrainian Foreign Minister Dmytro Kuleba. In the letter, Kuleba supposedly asked relevant Polish authorities to rename Belwederska Street in Warsaw — the location of the Russian embassy building — as Stepan Bandera Street, in honor of the far-right nationalist who led the Ukrainian Insurgent Army during WWII.

[...]

The letter is not dated, and Dmytro Kuleba’s signature seems to be copied from a publicly available letter signed by him in 2021.”


In this example the Telegram channel Joker DPR published a forged letter (T0085.004: Develop Document) in which they impersonated the Ukrainian Minister of Foreign Affairs (T0097.111: Government Official Persona, T0143.003: Impersonated Persona), using Ministry letterhead (T0097.206: Government Institution Persona, T0143.003: Impersonated Persona). | | [T0097.101 Local Persona](../../generated_pages/techniques/T0097.101.md) | “The August 17 [2022] Telegram post [which contained a falsified letter from the Ukrainian Minister of Foreign Affairs asking Poland to rename Belwederska Street in Warsaw — the location of the Russian embassy building — as Stepan Bandera Street, in honor of the far-right nationalist who led the Ukrainian Insurgent Army during WWII] also contained screenshots of Facebook posts that appeared on two Facebook accounts belonging to Polish nationals Piotr Górka, an expert in the history of the Polish Air Force, and Dariusz Walusiak, a Polish historian and documentary maker. The Górka post suggested that he fully supported the Polish government’s decision to change Belwederska Street to Stepan Bandera Street.

“In a statement to the DFRLab, Górka said his account was accessed without his consent. “This is not my post loaded to my Facebook page,” he explained. “My site was hacked, some days ago.” At the time of publishing, Piotr Górka’s post and his Facebook account were no longer accessible.

“The post on Górka’s Facebook page was shared by Dariusz Walusiak’s Facebook account; the account also reposted it on the Facebook walls of more than twenty other Facebook users, including Adam Kalita, currently working at Krakow branch of the Institute of National Remembrance; Jan Kasprzyk, head of the Office for War Veterans and Victims of Oppression; and Alicja Kondraciuk, a Polish public figure living in Krakow.

“Walusiak’s Facebook account is also no longer accessible. Given his work on Polish history and identity, it seems highly unlikely he would support the Bandera measure; the DFRLab has also reached out to him for comment.

“The fact that Joker DPR’s Telegram post included screenshots of their Facebook posts raises the strong possibility that both Facebook accounts were compromised, and that hackers planted false statements on their pages that would seem out of character for them in order to gain further attention to the forged documents.”


In this example, threat actors used compromised accounts of Polish historians who have enough relevant knowledge to plausibly weigh in on the forged letter’s narrative (T0143.003: Impersonated Persona, T0097.101: Local Persona, T0097.108: Expert Persona, T0146: Account Asset, T0150.005: Compromised Asset, T0151.001: Social Media Platform).

This matches T0097.108: Expert Persona because the impersonation exploited Górka and Walusiak’s existing personas as experts in Polish history. | -| [T0097.108 Expert Persona](../../generated_pages/techniques/T0097.108.md) | “The August 17 [2022] Telegram post [which contained a falsified letter from the Ukrainian Minister of Foreign Affairs asking Poland to rename Belwederska Street in Warsaw — the location of the Russian embassy building — as Stepan Bandera Street, in honor of the far-right nationalist who led the Ukrainian Insurgent Army during WWII] also contained screenshots of Facebook posts that appeared on two Facebook accounts belonging to Polish nationals Piotr Górka, an expert in the history of the Polish Air Force, and Dariusz Walusiak, a Polish historian and documentary maker. The Górka post suggested that he fully supported the Polish government’s decision to change Belwederska Street to Stepan Bandera Street.

“In a statement to the DFRLab, Górka said his account was accessed without his consent. “This is not my post loaded to my Facebook page,” he explained. “My site was hacked, some days ago.” At the time of publishing, Piotr Górka’s post and his Facebook account were no longer accessible.

“The post on Górka’s Facebook page was shared by Dariusz Walusiak’s Facebook account; the account also reposted it on the Facebook walls of more than twenty other Facebook users, including Adam Kalita, currently working at Krakow branch of the Institute of National Remembrance; Jan Kasprzyk, head of the Office for War Veterans and Victims of Oppression; and Alicja Kondraciuk, a Polish public figure living in Krakow.

“Walusiak’s Facebook account is also no longer accessible. Given his work on Polish history and identity, it seems highly unlikely he would support the Bandera measure; the DFRLab has also reached out to him for comment.

“The fact that Joker DPR’s Telegram post included screenshots of their Facebook posts raises the strong possibility that both Facebook accounts were compromised, and that hackers planted false statements on their pages that would seem out of character for them in order to gain further attention to the forged documents.”


In this example, threat actors used compromised accounts of Polish historians who have enough relevant knowledge to plausibly weigh in on the forged letter’s narrative (T0143.003: Impersonated Persona, T0097.101: Local Persona, T0097.108: Expert Persona, T0146: Account Asset, T0150.005: Compromised Asset, T0151.001: Social Media Platform).

This matches T0097.108: Expert Persona because the impersonation exploited Górka and Walusiak’s existing personas as experts in Polish history. | | [T0097.111 Government Official Persona](../../generated_pages/techniques/T0097.111.md) | “On August 16, 2022, pro-Kremlin Telegram channel Joker DPR (Джокер ДНР) published a forged letter allegedly written by Ukrainian Foreign Minister Dmytro Kuleba. In the letter, Kuleba supposedly asked relevant Polish authorities to rename Belwederska Street in Warsaw — the location of the Russian embassy building — as Stepan Bandera Street, in honor of the far-right nationalist who led the Ukrainian Insurgent Army during WWII.

[...]

The letter is not dated, and Dmytro Kuleba’s signature seems to be copied from a publicly available letter signed by him in 2021.”


In this example the Telegram channel Joker DPR published a forged letter (T0085.004: Develop Document) in which they impersonated the Ukrainian Minister of Foreign Affairs (T0097.111: Government Official Persona, T0143.003: Impersonated Persona), using Ministry letterhead (T0097.206: Government Institution Persona, T0143.003: Impersonated Persona). | | [T0097.206 Government Institution Persona](../../generated_pages/techniques/T0097.206.md) | “On August 16, 2022, pro-Kremlin Telegram channel Joker DPR (Джокер ДНР) published a forged letter allegedly written by Ukrainian Foreign Minister Dmytro Kuleba. In the letter, Kuleba supposedly asked relevant Polish authorities to rename Belwederska Street in Warsaw — the location of the Russian embassy building — as Stepan Bandera Street, in honor of the far-right nationalist who led the Ukrainian Insurgent Army during WWII.

[...]

The letter is not dated, and Dmytro Kuleba’s signature seems to be copied from a publicly available letter signed by him in 2021.”


In this example the Telegram channel Joker DPR published a forged letter (T0085.004: Develop Document) in which they impersonated the Ukrainian Minister of Foreign Affairs (T0097.111: Government Official Persona, T0143.003: Impersonated Persona), using Ministry letterhead (T0097.206: Government Institution Persona, T0143.003: Impersonated Persona). | | [T0143.003 Impersonated Persona](../../generated_pages/techniques/T0143.003.md) | “The August 17 [2022] Telegram post [which contained a falsified letter from the Ukrainian Minister of Foreign Affairs asking Poland to rename Belwederska Street in Warsaw — the location of the Russian embassy building — as Stepan Bandera Street, in honor of the far-right nationalist who led the Ukrainian Insurgent Army during WWII] also contained screenshots of Facebook posts that appeared on two Facebook accounts belonging to Polish nationals Piotr Górka, an expert in the history of the Polish Air Force, and Dariusz Walusiak, a Polish historian and documentary maker. The Górka post suggested that he fully supported the Polish government’s decision to change Belwederska Street to Stepan Bandera Street.

“In a statement to the DFRLab, Górka said his account was accessed without his consent. “This is not my post loaded to my Facebook page,” he explained. “My site was hacked, some days ago.” At the time of publishing, Piotr Górka’s post and his Facebook account were no longer accessible.

“The post on Górka’s Facebook page was shared by Dariusz Walusiak’s Facebook account; the account also reposted it on the Facebook walls of more than twenty other Facebook users, including Adam Kalita, currently working at Krakow branch of the Institute of National Remembrance; Jan Kasprzyk, head of the Office for War Veterans and Victims of Oppression; and Alicja Kondraciuk, a Polish public figure living in Krakow.

“Walusiak’s Facebook account is also no longer accessible. Given his work on Polish history and identity, it seems highly unlikely he would support the Bandera measure; the DFRLab has also reached out to him for comment.

“The fact that Joker DPR’s Telegram post included screenshots of their Facebook posts raises the strong possibility that both Facebook accounts were compromised, and that hackers planted false statements on their pages that would seem out of character for them in order to gain further attention to the forged documents.”


In this example, threat actors used compromised accounts of Polish historians who have enough relevant knowledge to plausibly weigh in on the forged letter’s narrative (T0143.003: Impersonated Persona, T0097.101: Local Persona, T0097.108: Expert Persona, T0146: Account Asset, T0150.005: Compromised Asset, T0151.001: Social Media Platform).

This matches T0097.108: Expert Persona because the impersonation exploited Górka and Walusiak’s existing personas as experts in Polish history. | | [T0150.005 Compromised Asset](../../generated_pages/techniques/T0150.005.md) | “The August 17 [2022] Telegram post [which contained a falsified letter from the Ukrainian Minister of Foreign Affairs asking Poland to rename Belwederska Street in Warsaw — the location of the Russian embassy building — as Stepan Bandera Street, in honor of the far-right nationalist who led the Ukrainian Insurgent Army during WWII] also contained screenshots of Facebook posts that appeared on two Facebook accounts belonging to Polish nationals Piotr Górka, an expert in the history of the Polish Air Force, and Dariusz Walusiak, a Polish historian and documentary maker. The Górka post suggested that he fully supported the Polish government’s decision to change Belwederska Street to Stepan Bandera Street.

“In a statement to the DFRLab, Górka said his account was accessed without his consent. “This is not my post loaded to my Facebook page,” he explained. “My site was hacked, some days ago.” At the time of publishing, Piotr Górka’s post and his Facebook account were no longer accessible.

“The post on Górka’s Facebook page was shared by Dariusz Walusiak’s Facebook account; the account also reposted it on the Facebook walls of more than twenty other Facebook users, including Adam Kalita, currently working at Krakow branch of the Institute of National Remembrance; Jan Kasprzyk, head of the Office for War Veterans and Victims of Oppression; and Alicja Kondraciuk, a Polish public figure living in Krakow.

“Walusiak’s Facebook account is also no longer accessible. Given his work on Polish history and identity, it seems highly unlikely he would support the Bandera measure; the DFRLab has also reached out to him for comment.

“The fact that Joker DPR’s Telegram post included screenshots of their Facebook posts raises the strong possibility that both Facebook accounts were compromised, and that hackers planted false statements on their pages that would seem out of character for them in order to gain further attention to the forged documents.”


In this example, threat actors used compromised accounts of Polish historians who have enough relevant knowledge to plausibly weigh in on the forged letter’s narrative (T0143.003: Impersonated Persona, T0097.101: Local Persona, T0097.108: Expert Persona, T0146: Account Asset, T0150.005: Compromised Asset, T0151.001: Social Media Platform).

This matches T0097.108: Expert Persona because the impersonation exploited Górka and Walusiak’s existing personas as experts in Polish history. | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/incidents/I00074.md b/generated_pages/incidents/I00074.md index 5a560c3..147a8a9 100644 --- a/generated_pages/incidents/I00074.md +++ b/generated_pages/incidents/I00074.md @@ -22,6 +22,7 @@ | Technique | Description given for this incident | | --------- | ------------------------- | | [T0097.106 Recruiter Persona](../../generated_pages/techniques/T0097.106.md) | “A few press investigations have alluded to the [Russia’s Internet Research Agency]’s job ads. The extent of the human asset recruitment strategy is revealed in the organic data set. It is expansive, and was clearly a priority. Posts encouraging Americans to perform various types of tasks for IRA handlers appeared in Black, Left, and Right-targeted groups, though they were most numerous in the Black community. They included:

- Requests for contact with preachers from Black churches (Black_Baptist_Church)
- Offers of free counseling to people with sexual addiction (Army of Jesus)
- Soliciting volunteers to hand out fliers
- Soliciting volunteers to teach self-defense classes
- Offering free self-defense classes (Black Fist/Fit Black)
- Requests for followers to attend political rallies
- Requests for photographers to document protests
- Requests for speakers at protests
- Requests to protest the Westborough Baptist Church (LGBT United)
- Job offers for designers to help design fliers, sites, Facebook sticker packs
- Requests for female followers to send photos for a calendar
- Requests for followers to send photos to be shared to the Page (Back the Badge)
- Soliciting videos for a YouTube contest called “Pee on Hillary”
- Encouraging people to apply to be part of a Black reality TV show
- Posting a wide variety of job ads (write for BlackMattersUS and others)
- Requests for lawyers to volunteer to assist with immigration cases”


This behaviour matches T0097.106: Recruiter Persona because the threat actors are presenting tasks for their target audience to complete in the style of a job posting (even though some of the tasks were presented as voluntary / unpaid efforts), including calls for people to attend political rallies (T0126.001: Call to Action to Attend). | +| [T0097.106 Recruiter Persona](../../generated_pages/techniques/T0097.106.md) | “A few press investigations have alluded to the [Russia’s Internet Research Agency]’s job ads. The extent of the human asset recruitment strategy is revealed in the organic data set. It is expansive, and was clearly a priority. Posts encouraging Americans to perform various types of tasks for IRA handlers appeared in Black, Left, and Right-targeted groups, though they were most numerous in the Black community. They included:

- Requests for contact with preachers from Black churches (Black_Baptist_Church)
- Offers of free counseling to people with sexual addiction (Army of Jesus)
- Soliciting volunteers to hand out fliers
- Soliciting volunteers to teach self-defense classes
- Offering free self-defense classes (Black Fist/Fit Black)
- Requests for followers to attend political rallies
- Requests for photographers to document protests
- Requests for speakers at protests
- Requests to protest the Westborough Baptist Church (LGBT United)
- Job offers for designers to help design fliers, sites, Facebook sticker packs
- Requests for female followers to send photos for a calendar
- Requests for followers to send photos to be shared to the Page (Back the Badge)
- Soliciting videos for a YouTube contest called “Pee on Hillary”
- Encouraging people to apply to be part of a Black reality TV show
- Posting a wide variety of job ads (write for BlackMattersUS and others)
- Requests for lawyers to volunteer to assist with immigration cases”


This behaviour matches T0097.106: Recruiter Persona because the threat actors are presenting tasks for their target audience to complete in the style of a job posting (even though some of the tasks were presented as voluntary / unpaid efforts), including calls for people to attend political rallies (T0126.001: Call to Action to Attend). | | [T0097.202 News Outlet Persona](../../generated_pages/techniques/T0097.202.md) | “The Black Matters Facebook Page [operated by Russia’s Internet Research Agency] explored several visual brand identities, moving from a plain logo to a gothic typeface on Jan 19th, 2016. On February 4th, 2016, the person who ran the Facebook Page announced the launch of the website, blackmattersus[.]com, emphasizing media distrust and a desire to build Black independent media; [“I DIDN’T BELIEVE THE MEDIA / SO I BECAME ONE”]”

In this example an asset controlled by Russia’s Internet Research Agency began to present itself as a source of “Black independent media”, claiming that the media could not be trusted (T0097.208: Social Cause Persona, T0097.202: News Outlet Persona, T0143.002: Fabricated Persona). | | [T0097.204 Think Tank Persona](../../generated_pages/techniques/T0097.204.md) | “[Russia’s Internet Research Agency, the IRA] pushed narratives with longform blog content. They created media properties, websites designed to produce stories that would resonate with those targeted. It appears, based on the data set provided by Alphabet, that the IRA may have also expanded into think tank-style communiques. One such page, previously unattributed to the IRA but included in the Alphabet data, was GI Analytics, a geopolitics blog with an international masthead that included American authors. This page was promoted via AdWords and YouTube videos; it has strong ties to more traditional Russian propaganda networks, which will be discussed later in this analysis. GI Analytics wrote articles articulating nuanced academic positions on a variety of sophisticated topics. From the site’s About page:

““Our purpose and mission are to provide high-quality analysis at a time when we are faced with a multitude of crises, a collapsing global economy, imperialist wars, environmental disasters, corporate greed, terrorism, deceit, GMO food, a migration crisis and a crackdown on small farmers and ranchers.””


In this example Alphabet’s technical indicators allowed them to assert that GI Analytics, which presented itself as a think tank, was a fabricated institution associated with Russia’s Internet Research Agency (T0097.204: Think Tank Persona, T0143.002: Fabricated Persona). | | [T0097.208 Social Cause Persona](../../generated_pages/techniques/T0097.208.md) | “The Black Matters Facebook Page [operated by Russia’s Internet Research Agency] explored several visual brand identities, moving from a plain logo to a gothic typeface on Jan 19th, 2016. On February 4th, 2016, the person who ran the Facebook Page announced the launch of the website, blackmattersus[.]com, emphasizing media distrust and a desire to build Black independent media; [“I DIDN’T BELIEVE THE MEDIA / SO I BECAME ONE”]”

In this example an asset controlled by Russia’s Internet Research Agency began to present itself as a source of “Black independent media”, claiming that the media could not be trusted (T0097.208: Social Cause Persona, T0097.202: News Outlet Persona, T0143.002: Fabricated Persona). | @@ -29,4 +30,3 @@ | [T0143.002 Fabricated Persona](../../generated_pages/techniques/T0143.002.md) | “The Black Matters Facebook Page [operated by Russia’s Internet Research Agency] explored several visual brand identities, moving from a plain logo to a gothic typeface on Jan 19th, 2016. On February 4th, 2016, the person who ran the Facebook Page announced the launch of the website, blackmattersus[.]com, emphasizing media distrust and a desire to build Black independent media; [“I DIDN’T BELIEVE THE MEDIA / SO I BECAME ONE”]”

In this example an asset controlled by Russia’s Internet Research Agency began to present itself as a source of “Black independent media”, claiming that the media could not be trusted (T0097.208: Social Cause Persona, T0097.202: News Outlet Persona, T0143.002: Fabricated Persona). | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/incidents/I00076.md b/generated_pages/incidents/I00076.md index 4a4256b..6eb1aed 100644 --- a/generated_pages/incidents/I00076.md +++ b/generated_pages/incidents/I00076.md @@ -21,7 +21,6 @@ | Technique | Description given for this incident | | --------- | ------------------------- | -| [T0084.002 Plagiarise Content](../../generated_pages/techniques/T0084.002.md) | “Some Twitter accounts in the network [of inauthentic accounts attributed to Iran] impersonated Republican political candidates that ran for House of Representatives seats in the 2018 U.S. congressional midterms. These accounts appropriated the candidates’ photographs and, in some cases, plagiarized tweets from the real individuals’ accounts. Aside from impersonating real U.S. political candidates, the behavior and activity of these accounts resembled that of the others in the network.

“For example, the account @livengood_marla impersonated Marla Livengood, a 2018 candidate for California’s 9th Congressional District, using a photograph of Livengood and a campaign banner for its profile and background pictures. The account began tweeting on Sept. 24, 2018, with its first tweet plagiarizing one from Livengood’s official account earlier that month”

[...]

“In another example, the account @ButlerJineea impersonated Jineea Butler, a 2018 candidate for New York’s 13th Congressional District, using a photograph of Butler for its profile picture and incorporating her campaign slogans into its background picture, as well as claiming in its Twitter bio to be a “US House candidate, NY-13” and linking to Butler’s website, jineeabutlerforcongress[.]com.”


In this example actors impersonated existing political candidates (T0097.110: Member of Political Party Persona, T0143.003: Impersonated Persona), strengthening the impersonation by copying legitimate accounts’ imagery (T0145.001: Copy Account Imagery), and copying their previous posts (T0084.002: Plagiarise Content). | | [T0097.101 Local Persona](../../generated_pages/techniques/T0097.101.md) | “In addition to directly posting material on social media, we observed some personas in the network [of inauthentic accounts attributed to Iran] leverage legitimate print and online media outlets in the U.S. and Israel to promote Iranian interests via the submission of letters, guest columns, and blog posts that were then published. We also identified personas that we suspect were fabricated for the sole purpose of submitting such letters, but that do not appear to maintain accounts on social media. The personas claimed to be based in varying locations depending on the news outlets they were targeting for submission; for example, a persona that listed their location as Seattle, WA in a letter submitted to the Seattle Times subsequently claimed to be located in Baytown, TX in a letter submitted to The Baytown Sun. Other accounts in the network then posted links to some of these letters on social media.”

In this example actors fabricated individuals who lived in areas which were being targeted for influence through the use of letters to local papers (T0097.101: Local Persona, T0143.002: Fabricated Persona). | | [T0097.102 Journalist Persona](../../generated_pages/techniques/T0097.102.md) | “Accounts in the network [of inauthentic accounts attributed to Iran], under the guise of journalist personas, also solicited various individuals over Twitter for interviews and chats, including real journalists and politicians. The personas appear to have successfully conducted remote video and audio interviews with U.S. and UK-based individuals, including a prominent activist, a radio talk show host, and a former U.S. Government official, and subsequently posted the interviews on social media, showing only the individual being interviewed and not the interviewer. The interviewees expressed views that Iran would likely find favorable, discussing topics such as the February 2019 Warsaw summit, an attack on a military parade in the Iranian city of Ahvaz, and the killing of Jamal Khashoggi.

“The provenance of these interviews appear to have been misrepresented on at least one occasion, with one persona appearing to have falsely claimed to be operating on behalf of a mainstream news outlet; a remote video interview with a US-based activist about the Jamal Khashoggi killing was posted by an account adopting the persona of a journalist from the outlet Newsday, with the Newsday logo also appearing in the video. We did not identify any Newsday interview with the activist in question on this topic. In another instance, a persona posing as a journalist directed tweets containing audio of an interview conducted with a former U.S. Government official at real media personalities, calling on them to post about the interview.”


In this example actors fabricated journalists (T0097.102: Journalist Persona, T0143.002: Fabricated Persona) who worked at existing news outlets (T0097.202: News Outlet Persona, T0143.003: Impersonated Persona) in order to conduct interviews with targeted individuals. | | [T0097.110 Party Official Persona](../../generated_pages/techniques/T0097.110.md) | “Some Twitter accounts in the network [of inauthentic accounts attributed to Iran] impersonated Republican political candidates that ran for House of Representatives seats in the 2018 U.S. congressional midterms. These accounts appropriated the candidates’ photographs and, in some cases, plagiarized tweets from the real individuals’ accounts. Aside from impersonating real U.S. political candidates, the behavior and activity of these accounts resembled that of the others in the network.

“For example, the account @livengood_marla impersonated Marla Livengood, a 2018 candidate for California’s 9th Congressional District, using a photograph of Livengood and a campaign banner for its profile and background pictures. The account began tweeting on Sept. 24, 2018, with its first tweet plagiarizing one from Livengood’s official account earlier that month”

[...]

“In another example, the account @ButlerJineea impersonated Jineea Butler, a 2018 candidate for New York’s 13th Congressional District, using a photograph of Butler for its profile picture and incorporating her campaign slogans into its background picture, as well as claiming in its Twitter bio to be a “US House candidate, NY-13” and linking to Butler’s website, jineeabutlerforcongress[.]com.”


In this example actors impersonated existing political candidates (T0097.110: Member of Political Party Persona, T0143.003: Impersonated Persona), strengthening the impersonation by copying legitimate accounts’ imagery (T0145.001: Copy Account Imagery), and copying their previous posts (T0084.002: Plagiarise Content). | @@ -31,4 +30,3 @@ | [T0145.001 Copy Account Imagery](../../generated_pages/techniques/T0145.001.md) | “Some Twitter accounts in the network [of inauthentic accounts attributed to Iran] impersonated Republican political candidates that ran for House of Representatives seats in the 2018 U.S. congressional midterms. These accounts appropriated the candidates’ photographs and, in some cases, plagiarized tweets from the real individuals’ accounts. Aside from impersonating real U.S. political candidates, the behavior and activity of these accounts resembled that of the others in the network.

“For example, the account @livengood_marla impersonated Marla Livengood, a 2018 candidate for California’s 9th Congressional District, using a photograph of Livengood and a campaign banner for its profile and background pictures. The account began tweeting on Sept. 24, 2018, with its first tweet plagiarizing one from Livengood’s official account earlier that month”

[...]

“In another example, the account @ButlerJineea impersonated Jineea Butler, a 2018 candidate for New York’s 13th Congressional District, using a photograph of Butler for its profile picture and incorporating her campaign slogans into its background picture, as well as claiming in its Twitter bio to be a “US House candidate, NY-13” and linking to Butler’s website, jineeabutlerforcongress[.]com.”


In this example actors impersonated existing political candidates (T0097.110: Member of Political Party Persona, T0143.003: Impersonated Persona), strengthening the impersonation by copying legitimate accounts’ imagery (T0145.001: Copy Account Imagery), and copying their previous posts (T0084.002: Plagiarise Content). | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/incidents/I00078.md b/generated_pages/incidents/I00078.md index 26a9447..b000c23 100644 --- a/generated_pages/incidents/I00078.md +++ b/generated_pages/incidents/I00078.md @@ -21,11 +21,8 @@ | Technique | Description given for this incident | | --------- | ------------------------- | -| [T0097.101 Local Persona](../../generated_pages/techniques/T0097.101.md) | “[Meta has] removed one Page, five Facebook accounts, one Group and three Instagram accounts for foreign or government interference which is coordinated inauthentic behavior on behalf of a foreign or government entity. This small network originated in Russia and focused primarily on Turkey and Europe, and also on the United States.

“This operation relied on fake accounts — some of which had been already detected and removed by our automated systems — to manage their Page and their Group, and to drive people to their site purporting to be an independent think-tank based primarily in Turkey. These accounts posed as locals based in Turkey, Canada and the US. They also recruited people to write for their website. This network had almost no following on our platforms when we removed it.”


Meta identified that a network of accounts originating in Russia was driving people off platform to a site which presented itself as a think tank (T0097.204: Think Tank Persona). Meta did not make an attribution about the authenticity of this off-site think tank, so neither T0143.001: Authentic Persona nor T0143.002: Fabricated Persona is used here.

Meta had access to technical data for accounts on its platform, and asserted that they were fabricated individuals posing as locals who recruited targets to write content for their website (T0097.101: Local Persona, T0097.106: Recruiter Persona, T0143.002: Fabricated Persona). | -| [T0097.106 Recruiter Persona](../../generated_pages/techniques/T0097.106.md) | “[Meta has] removed one Page, five Facebook accounts, one Group and three Instagram accounts for foreign or government interference which is coordinated inauthentic behavior on behalf of a foreign or government entity. This small network originated in Russia and focused primarily on Turkey and Europe, and also on the United States.

“This operation relied on fake accounts — some of which had been already detected and removed by our automated systems — to manage their Page and their Group, and to drive people to their site purporting to be an independent think-tank based primarily in Turkey. These accounts posed as locals based in Turkey, Canada and the US. They also recruited people to write for their website. This network had almost no following on our platforms when we removed it.”


Meta identified that a network of accounts originating in Russia was driving people off platform to a site which presented itself as a think tank (T0097.204: Think Tank Persona). Meta did not make an attribution about the authenticity of this off-site think tank, so neither T0143.001: Authentic Persona nor T0143.002: Fabricated Persona is used here.

Meta had access to technical data for accounts on its platform, and asserted that they were fabricated individuals posing as locals who recruited targets to write content for their website (T0097.101: Local Persona, T0097.106: Recruiter Persona, T0143.002: Fabricated Persona). | | [T0097.204 Think Tank Persona](../../generated_pages/techniques/T0097.204.md) | “[Meta has] removed one Page, five Facebook accounts, one Group and three Instagram accounts for foreign or government interference which is coordinated inauthentic behavior on behalf of a foreign or government entity. This small network originated in Russia and focused primarily on Turkey and Europe, and also on the United States.

“This operation relied on fake accounts — some of which had been already detected and removed by our automated systems — to manage their Page and their Group, and to drive people to their site purporting to be an independent think-tank based primarily in Turkey. These accounts posed as locals based in Turkey, Canada and the US. They also recruited people to write for their website. This network had almost no following on our platforms when we removed it.”


Meta identified that a network of accounts originating in Russia was driving people off platform to a site which presented itself as a think tank (T0097.204: Think Tank Persona). Meta did not make an attribution about the authenticity of this off-site think tank, so neither T0143.001: Authentic Persona nor T0143.002: Fabricated Persona is used here.

Meta had access to technical data for accounts on its platform, and asserted that they were fabricated individuals posing as locals who recruited targets to write content for their website (T0097.101: Local Persona, T0097.106: Recruiter Persona, T0143.002: Fabricated Persona). | | [T0143.001 Authentic Persona](../../generated_pages/techniques/T0143.001.md) | “[Meta has] removed one Page, five Facebook accounts, one Group and three Instagram accounts for foreign or government interference which is coordinated inauthentic behavior on behalf of a foreign or government entity. This small network originated in Russia and focused primarily on Turkey and Europe, and also on the United States.

“This operation relied on fake accounts — some of which had been already detected and removed by our automated systems — to manage their Page and their Group, and to drive people to their site purporting to be an independent think-tank based primarily in Turkey. These accounts posed as locals based in Turkey, Canada and the US. They also recruited people to write for their website. This network had almost no following on our platforms when we removed it.”


Meta identified that a network of accounts originating in Russia was driving people off platform to a site which presented itself as a think tank (T0097.204: Think Tank Persona). Meta did not make an attribution about the authenticity of this off-site think tank, so neither T0143.001: Authentic Persona nor T0143.002: Fabricated Persona is used here.

Meta had access to technical data for accounts on its platform, and asserted that they were fabricated individuals posing as locals who recruited targets to write content for their website (T0097.101: Local Persona, T0097.106: Recruiter Persona, T0143.002: Fabricated Persona). | | [T0143.002 Fabricated Persona](../../generated_pages/techniques/T0143.002.md) | “[Meta has] removed one Page, five Facebook accounts, one Group and three Instagram accounts for foreign or government interference which is coordinated inauthentic behavior on behalf of a foreign or government entity. This small network originated in Russia and focused primarily on Turkey and Europe, and also on the United States.

“This operation relied on fake accounts — some of which had been already detected and removed by our automated systems — to manage their Page and their Group, and to drive people to their site purporting to be an independent think-tank based primarily in Turkey. These accounts posed as locals based in Turkey, Canada and the US. They also recruited people to write for their website. This network had almost no following on our platforms when we removed it.”


Meta identified that a network of accounts originating in Russia was driving people off platform to a site which presented itself as a think tank (T0097.204: Think Tank Persona). Meta did not make an attribution about the authenticity of this off-site think tank, so neither T0143.001: Authentic Persona nor T0143.002: Fabricated Persona is used here.

Meta had access to technical data for accounts on its platform, and asserted that they were fabricated individuals posing as locals who recruited targets to write content for their website (T0097.101: Local Persona, T0097.106: Recruiter Persona, T0143.002: Fabricated Persona). | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/incidents/I00079.md b/generated_pages/incidents/I00079.md index 13b3e95..b97d582 100644 --- a/generated_pages/incidents/I00079.md +++ b/generated_pages/incidents/I00079.md @@ -28,7 +28,5 @@ | [T0097.202 News Outlet Persona](../../generated_pages/techniques/T0097.202.md) | “On January 4 [2017], a little-known news site based in Donetsk, Ukraine published an article claiming that the United States was sending 3,600 tanks to Europe as part of “the NATO war preparation against Russia”.

“Like much fake news, this story started with a grain of truth: the US was about to reinforce its armored units in Europe. However, the article converted literally thousands of other vehicles — including hundreds of Humvees and trailers — into tanks, building the US force into something 20 times more powerful than it actually was.

“The story caught on online. Within three days it had been repeated by a dozen websites in the United States, Canada and Europe, and shared some 40,000 times. It was translated into Norwegian; quoted, unchallenged, by Russian state news agency RIA Novosti; and spread among Russian-language websites.

“It was also an obvious fake, as any Google news search would have revealed. Yet despite its evident falsehood, it spread widely, and not just in directly Kremlin-run media. Tracking the spread of this fake therefore shines a light on the wider question of how fake stories are dispersed.”


Russian state news agency RIA Novosti presents itself as a news outlet (T0097.202: News Outlet Persona). RIA Novosti is a real news outlet (T0143.001: Authentic Persona), but it did not carry out the basic investigation into the veracity of the narrative it published that is implicitly expected of institutions presenting themselves as news outlets.

We can’t know how or why this narrative ended up being published by RIA Novosti, but we know that it presented a distorted reality as authentic information (T0023: Distort Facts), claiming that the US was sending 3,600 tanks, instead of 3,600 vehicles which included ~180 tanks. | | [T0097.204 Think Tank Persona](../../generated_pages/techniques/T0097.204.md) | “The sixth [website to repost a confirmed false narrative investigated in this report] is an apparent think tank, the Center for Global Strategic Monitoring. This website describes itself, in English apparently written by a non-native speaker, as a “nonprofit and nonpartisan research and analysis institution dedicated to providing insights of the think tank community publications”. It does, indeed, publish think-tank reports on issues such as Turkey and US-China relations; however, the reports are the work of other think tanks, often unattributed (the two mentioned in this sentence were actually produced by the Brookings Institution, although the website makes no mention of the fact). It also fails to provide an address, or any other contact details other than an email, and its (long) list of experts includes entries apparently copied and pasted from other institutions. Thus, the “think tank” website which shared the fake story appears to be a fake itself.” In this example a website which amplified a false narrative presented itself as a think tank (T0097.204: Think Tank Persona).

This is an entirely fabricated persona (T0143.002: Fabricated Persona); it republished content from other think tanks without attribution (T0084.002: Plagiarise Content) and fabricated experts (T0097.108: Expert Persona, T0143.002: Fabricated Persona) to make it more believable that they were a real think tank. | | [T0143.001 Authentic Persona](../../generated_pages/techniques/T0143.001.md) | “On January 4 [2017], a little-known news site based in Donetsk, Ukraine published an article claiming that the United States was sending 3,600 tanks to Europe as part of “the NATO war preparation against Russia”.

“Like much fake news, this story started with a grain of truth: the US was about to reinforce its armored units in Europe. However, the article converted literally thousands of other vehicles — including hundreds of Humvees and trailers — into tanks, building the US force into something 20 times more powerful than it actually was.

“The story caught on online. Within three days it had been repeated by a dozen websites in the United States, Canada and Europe, and shared some 40,000 times. It was translated into Norwegian; quoted, unchallenged, by Russian state news agency RIA Novosti; and spread among Russian-language websites.

“It was also an obvious fake, as any Google news search would have revealed. Yet despite its evident falsehood, it spread widely, and not just in directly Kremlin-run media. Tracking the spread of this fake therefore shines a light on the wider question of how fake stories are dispersed.”


Russian state news agency RIA Novosti presents itself as a news outlet (T0097.202: News Outlet Persona). RIA Novosti is a real news outlet (T0143.001: Authentic Persona), but it did not carry out the basic investigation into the veracity of the narrative it published that is implicitly expected of institutions presenting themselves as news outlets.

We can’t know how or why this narrative ended up being published by RIA Novosti, but we know that it presented a distorted reality as authentic information (T0023: Distort Facts), claiming that the US was sending 3,600 tanks, instead of 3,600 vehicles which included ~180 tanks. | -| [T0143.002 Fabricated Persona](../../generated_pages/techniques/T0143.002.md) | “The sixth [website to repost a confirmed false narrative investigated in this report] is an apparent think tank, the Center for Global Strategic Monitoring. This website describes itself, in English apparently written by a non-native speaker, as a “nonprofit and nonpartisan research and analysis institution dedicated to providing insights of the think tank community publications”. It does, indeed, publish think-tank reports on issues such as Turkey and US-China relations; however, the reports are the work of other think tanks, often unattributed (the two mentioned in this sentence were actually produced by the Brookings Institution, although the website makes no mention of the fact). It also fails to provide an address, or any other contact details other than an email, and its (long) list of experts includes entries apparently copied and pasted from other institutions. Thus, the “think tank” website which shared the fake story appears to be a fake itself.” In this example a website which amplified a false narrative presented itself as a think tank (T0097.204: Think Tank Persona).

This is an entirely fabricated persona (T0143.002: Fabricated Persona); it republished content from other think tanks without attribution (T0084.002: Plagiarise Content) and fabricated experts (T0097.108: Expert Persona, T0143.002: Fabricated Persona) to make it more believable that they were a real think tank. | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/incidents/I00080.md b/generated_pages/incidents/I00080.md index 0fcd18b..4439fcd 100644 --- a/generated_pages/incidents/I00080.md +++ b/generated_pages/incidents/I00080.md @@ -24,8 +24,6 @@ | [T0097.102 Journalist Persona](../../generated_pages/techniques/T0097.102.md) | “One example of a fake reporter account targeting Americans is “Jenny Powell,” a self-described Washington-based journalist, volunteer, and environmental activist. At first glance, Powell’s Twitter timeline looks like it belongs to a young and eager reporter amplifying her interests. But her profile photo is a stock image, and many of her links go to the propaganda sites.

“Powell, who joined the platform just last month, shares links to stories from major US news media outlets, retweets local news about Washington, DC, and regularly promotes content from The Foreign Code and The Economy Club. Other fake journalist accounts behaved similarly to Powell and had generic descriptions. One of the accounts, for a fake Bruce Lopez in Louisiana, has a bio that describes him as a “Correspondent Traveler (noun) (linking verb) (noun/verb/adjective),” which appears to reveal the formula used to write Twitter bios for the accounts.”


The Jenny Powell account used in this influence operation presents as both a journalist and an activist (T0097.102: Journalist Persona, T0097.103: Activist Persona, T0143.002: Fabricated Persona). This example shows how threat actors can easily follow a template to present a fabricated persona to their target audience (T0144.002: Persona Template). | | [T0097.103 Activist Persona](../../generated_pages/techniques/T0097.103.md) | “One example of a fake reporter account targeting Americans is “Jenny Powell,” a self-described Washington-based journalist, volunteer, and environmental activist. At first glance, Powell’s Twitter timeline looks like it belongs to a young and eager reporter amplifying her interests. But her profile photo is a stock image, and many of her links go to the propaganda sites.

“Powell, who joined the platform just last month, shares links to stories from major US news media outlets, retweets local news about Washington, DC, and regularly promotes content from The Foreign Code and The Economy Club. Other fake journalist accounts behaved similarly to Powell and had generic descriptions. One of the accounts, for a fake Bruce Lopez in Louisiana, has a bio that describes him as a “Correspondent Traveler (noun) (linking verb) (noun/verb/adjective),” which appears to reveal the formula used to write Twitter bios for the accounts.”


The Jenny Powell account used in this influence operation presents as both a journalist and an activist (T0097.102: Journalist Persona, T0097.103: Activist Persona, T0143.002: Fabricated Persona). This example shows how threat actors can easily follow a template to present a fabricated persona to their target audience (T0144.002: Persona Template). | | [T0143.002 Fabricated Persona](../../generated_pages/techniques/T0143.002.md) | “One example of a fake reporter account targeting Americans is “Jenny Powell,” a self-described Washington-based journalist, volunteer, and environmental activist. At first glance, Powell’s Twitter timeline looks like it belongs to a young and eager reporter amplifying her interests. But her profile photo is a stock image, and many of her links go to the propaganda sites.

“Powell, who joined the platform just last month, shares links to stories from major US news media outlets, retweets local news about Washington, DC, and regularly promotes content from The Foreign Code and The Economy Club. Other fake journalist accounts behaved similarly to Powell and had generic descriptions. One of the accounts, for a fake Bruce Lopez in Louisiana, has a bio that describes him as a “Correspondent Traveler (noun) (linking verb) (noun/verb/adjective),” which appears to reveal the formula used to write Twitter bios for the accounts.”


The Jenny Powell account used in this influence operation presents as both a journalist and an activist (T0097.102: Journalist Persona, T0097.103: Activist Persona, T0143.002: Fabricated Persona). This example shows how threat actors can easily follow a template to present a fabricated persona to their target audience (T0144.002: Persona Template). | -| [T0144.002 Persona Template](../../generated_pages/techniques/T0144.002.md) | “One example of a fake reporter account targeting Americans is “Jenny Powell,” a self-described Washington-based journalist, volunteer, and environmental activist. At first glance, Powell’s Twitter timeline looks like it belongs to a young and eager reporter amplifying her interests. But her profile photo is a stock image, and many of her links go to the propaganda sites.

“Powell, who joined the platform just last month, shares links to stories from major US news media outlets, retweets local news about Washington, DC, and regularly promotes content from The Foreign Code and The Economy Club. Other fake journalist accounts behaved similarly to Powell and had generic descriptions. One of the accounts, for a fake Bruce Lopez in Louisiana, has a bio that describes him as a “Correspondent Traveler (noun) (linking verb) (noun/verb/adjective),” which appears to reveal the formula used to write Twitter bios for the accounts.”


The Jenny Powell account used in this influence operation presents as both a journalist and an activist (T0097.102: Journalist Persona, T0097.103: Activist Persona, T0143.002: Fabricated Persona). This example shows how threat actors can easily follow a template to present a fabricated persona to their target audience (T0144.002: Persona Template). | | [T0145.007 Stock Image Account Imagery](../../generated_pages/techniques/T0145.007.md) | “One example of a fake reporter account targeting Americans is “Jenny Powell,” a self-described Washington-based journalist, volunteer, and environmental activist. At first glance, Powell’s Twitter timeline looks like it belongs to a young and eager reporter amplifying her interests. But her profile photo is a stock image, and many of her links go to the propaganda sites.

“Powell, who joined the platform just last month, shares links to stories from major US news media outlets, retweets local news about Washington, DC, and regularly promotes content from The Foreign Code and The Economy Club. Other fake journalist accounts behaved similarly to Powell and had generic descriptions. One of the accounts, for a fake Bruce Lopez in Louisiana, has a bio that describes him as a “Correspondent Traveler (noun) (linking verb) (noun/verb/adjective),” which appears to reveal the formula used to write Twitter bios for the accounts.”


This behaviour matches T0145.007: Stock Image Account Imagery because the account was identified as using a stock image as its profile picture. | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/incidents/I00086.md b/generated_pages/incidents/I00086.md index e5b0678..fa76fb3 100644 --- a/generated_pages/incidents/I00086.md +++ b/generated_pages/incidents/I00086.md @@ -26,8 +26,8 @@ | [T0097.101 Local Persona](../../generated_pages/techniques/T0097.101.md) | Accounts which were identified as part of “a sophisticated and extensive coordinated network orchestrating a disinformation campaign targeting Israeli digital spaces since October 7th, 2023” were presenting themselves as locals to Israel (T0097.101: Local Persona):

“Unlike usual low-effort fake accounts, these accounts meticulously mimic young Israelis. They stand out due to the extraordinary lengths taken to ensure their authenticity, from unique narratives to the content they produce to their seemingly authentic interactions.” | | [T0143.002 Fabricated Persona](../../generated_pages/techniques/T0143.002.md) | In this report accounts were identified as part of “a sophisticated and extensive coordinated network orchestrating a disinformation campaign targeting Israeli digital spaces since October 7th, 2023”.

“A core component of the detection methodology was applying qualitative linguistic analysis. This involved checking the fingerprint of language, syntax, and style used in the comments and profile of the suspected account. Each account bio consistently incorporated a combination of specific elements: emojis, nationality, location, educational institution or occupation, age, and a personal quote, sports team or band. The recurrence of this specific formula across multiple accounts hinted at a standardized template for bio construction.”

This example shows how actors can follow a templated formula to present a persona on social media platforms (T0143.002: Fabricated Persona, T0144.002: Persona Template). | | [T0144.002 Persona Template](../../generated_pages/techniques/T0144.002.md) | In this report accounts were identified as part of “a sophisticated and extensive coordinated network orchestrating a disinformation campaign targeting Israeli digital spaces since October 7th, 2023”.

“A core component of the detection methodology was applying qualitative linguistic analysis. This involved checking the fingerprint of language, syntax, and style used in the comments and profile of the suspected account. Each account bio consistently incorporated a combination of specific elements: emojis, nationality, location, educational institution or occupation, age, and a personal quote, sports team or band. The recurrence of this specific formula across multiple accounts hinted at a standardized template for bio construction.”

This example shows how actors can follow a templated formula to present a persona on social media platforms (T0143.002: Fabricated Persona, T0144.002: Persona Template). |
| [T0145.001 Copy Account Imagery](../../generated_pages/techniques/T0145.001.md) | “In the wake of the Hamas attack on October 7th, the Israel Defense Forces (IDF) Information Security Department revealed a campaign of Instagram accounts impersonating young, attractive Israeli women who were actively engaging Israeli soldiers, attempting to extract information through direct messages.

[...]

“Some profiles underwent a reverse-image search of their photos to ascertain their authenticity. Many of the images searched were found to be appropriated from genuine social media profiles or sites such as Pinterest. When this was the case, the account was marked as confirmed to be inauthentic. One innovative method involves using photos that are initially frames from videos, which allows for evading reverse searches in most cases. This is seen in Figure 4, where an image uploaded by an inauthentic account was a screenshot taken from a TikTok video.”


In this example accounts associated with an influence operation used account imagery showing “young, attractive Israeli women” (T0145.006: Attractive Person Account Imagery), with some of these assets taken from existing accounts not associated with the operation (T0145.001: Copy Account Imagery). | | [T0145.006 Attractive Person Account Imagery](../../generated_pages/techniques/T0145.006.md) | “In the wake of the Hamas attack on October 7th, the Israel Defense Forces (IDF) Information Security Department revealed a campaign of Instagram accounts impersonating young, attractive Israeli women who were actively engaging Israeli soldiers, attempting to extract information through direct messages.

[...]

“Some profiles underwent a reverse-image search of their photos to ascertain their authenticity. Many of the images searched were found to be appropriated from genuine social media profiles or sites such as Pinterest. When this was the case, the account was marked as confirmed to be inauthentic. One innovative method involves using photos that are initially frames from videos, which allows for evading reverse searches in most cases. This is seen in Figure 4, where an image uploaded by an inauthentic account was a screenshot taken from a TikTok video.”


In this example accounts associated with an influence operation used account imagery showing “young, attractive Israeli women” (T0145.006: Attractive Person Account Imagery), with some of these assets taken from existing accounts not associated with the operation (T0145.001: Copy Account Imagery). |
-DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW
\ No newline at end of file
diff --git a/generated_pages/incidents/I00096.md b/generated_pages/incidents/I00096.md
index 57ab169..7e6589a 100644
--- a/generated_pages/incidents/I00096.md
+++ b/generated_pages/incidents/I00096.md
@@ -22,14 +22,8 @@ Alex Scroxton | ComputerWeekly | [https://web.archive.org/web/20240405154259/htt
| Technique | Description given for this incident |
| --------- | ------------------------- |
-| [T0087.001 Develop AI-Generated Videos (Deepfakes)](../../generated_pages/techniques/T0087.001.md) | The Microsoft Threat Analysis Centre (MTAC) published a report documenting the use of AI by pro-Chinese threat actors:

On 13 January, Spamouflage [(a Pro-Chinese Communist Party actor)] posted audio clips to YouTube of independent candidate [for Taiwan’s Jan 2024 presidential election] Terry Gou – who also founded electronics giant Foxconn – in which Gou endorsed another candidate in the race. This clip was almost certainly AI-generated, and it was swiftly removed by YouTube. A fake letter purporting to be from Gou, endorsing the same candidate, had already circulated – Gou had of course made no such endorsement.

Here Spamoflage used an account on YouTube to post AI Generated audio impersonating an electoral candidate (T0146: Account Asset, T0152.006: Video Platform, T0115: Post Content, T0088.001: Develop AI-Generated Audio (Deepfakes), T0143.003: Impersonated Persona, T0097.110: Party Official Persona).

Spamouflage also exploited AI-powered video platform CapCut – which is owned by TikTok backers ByteDance – to generate fake news anchors which were used in a variety of campaigns targeting the various presidential candidates in Taiwan.

Spamoflage created accounts on CapCut, which it used to create AI-generated videos of fabricated news anchors (T0146: Account Asset, T0154.002: AI Media Platform, T0087.001: Develop AI-Generated Video (Deepfakes), T0143.002: Fabricated Persona, T0097.102: Journalist Persona). | | [T0088.001 Develop AI-Generated Audio (Deepfakes)](../../generated_pages/techniques/T0088.001.md) | The Microsoft Threat Analysis Centre (MTAC) published a report documenting the use of AI by pro-Chinese threat actors:

On 13 January, Spamouflage [(a Pro-Chinese Communist Party actor)] posted audio clips to YouTube of independent candidate [for Taiwan’s Jan 2024 presidential election] Terry Gou – who also founded electronics giant Foxconn – in which Gou endorsed another candidate in the race. This clip was almost certainly AI-generated, and it was swiftly removed by YouTube. A fake letter purporting to be from Gou, endorsing the same candidate, had already circulated – Gou had of course made no such endorsement.

Here Spamouflage used an account on YouTube to post AI-generated audio impersonating an electoral candidate (T0146: Account Asset, T0152.006: Video Platform, T0115: Post Content, T0088.001: Develop AI-Generated Audio (Deepfakes), T0143.003: Impersonated Persona, T0097.110: Party Official Persona).

Spamouflage also exploited AI-powered video platform CapCut – which is owned by TikTok backers ByteDance – to generate fake news anchors which were used in a variety of campaigns targeting the various presidential candidates in Taiwan.

Spamouflage created accounts on CapCut, which it used to create AI-generated videos of fabricated news anchors (T0146: Account Asset, T0154.002: AI Media Platform, T0087.001: Develop AI-Generated Videos (Deepfakes), T0143.002: Fabricated Persona, T0097.102: Journalist Persona). |
-| [T0097.102 Journalist Persona](../../generated_pages/techniques/T0097.102.md) | The Microsoft Threat Analysis Centre (MTAC) published a report documenting the use of AI by pro-Chinese threat actors:

On 13 January, Spamouflage [(a Pro-Chinese Communist Party actor)] posted audio clips to YouTube of independent candidate [for Taiwan’s Jan 2024 presidential election] Terry Gou – who also founded electronics giant Foxconn – in which Gou endorsed another candidate in the race. This clip was almost certainly AI-generated, and it was swiftly removed by YouTube. A fake letter purporting to be from Gou, endorsing the same candidate, had already circulated – Gou had of course made no such endorsement.

Here Spamoflage used an account on YouTube to post AI Generated audio impersonating an electoral candidate (T0146: Account Asset, T0152.006: Video Platform, T0115: Post Content, T0088.001: Develop AI-Generated Audio (Deepfakes), T0143.003: Impersonated Persona, T0097.110: Party Official Persona).

Spamouflage also exploited AI-powered video platform CapCut – which is owned by TikTok backers ByteDance – to generate fake news anchors which were used in a variety of campaigns targeting the various presidential candidates in Taiwan.

Spamoflage created accounts on CapCut, which it used to create AI-generated videos of fabricated news anchors (T0146: Account Asset, T0154.002: AI Media Platform, T0087.001: Develop AI-Generated Video (Deepfakes), T0143.002: Fabricated Persona, T0097.102: Journalist Persona). | | [T0097.110 Party Official Persona](../../generated_pages/techniques/T0097.110.md) | The Microsoft Threat Analysis Centre (MTAC) published a report documenting the use of AI by pro-Chinese threat actors:

On 13 January, Spamouflage [(a Pro-Chinese Communist Party actor)] posted audio clips to YouTube of independent candidate [for Taiwan’s Jan 2024 presidential election] Terry Gou – who also founded electronics giant Foxconn – in which Gou endorsed another candidate in the race. This clip was almost certainly AI-generated, and it was swiftly removed by YouTube. A fake letter purporting to be from Gou, endorsing the same candidate, had already circulated – Gou had of course made no such endorsement.

Here Spamouflage used an account on YouTube to post AI-generated audio impersonating an electoral candidate (T0146: Account Asset, T0152.006: Video Platform, T0115: Post Content, T0088.001: Develop AI-Generated Audio (Deepfakes), T0143.003: Impersonated Persona, T0097.110: Party Official Persona).

Spamouflage also exploited AI-powered video platform CapCut – which is owned by TikTok backers ByteDance – to generate fake news anchors which were used in a variety of campaigns targeting the various presidential candidates in Taiwan.

Spamouflage created accounts on CapCut, which it used to create AI-generated videos of fabricated news anchors (T0146: Account Asset, T0154.002: AI Media Platform, T0087.001: Develop AI-Generated Videos (Deepfakes), T0143.002: Fabricated Persona, T0097.102: Journalist Persona). |
-| [T0115 Post Content](../../generated_pages/techniques/T0115.md) | The Microsoft Threat Analysis Centre (MTAC) published a report documenting the use of AI by pro-Chinese threat actors:

On 13 January, Spamouflage [(a Pro-Chinese Communist Party actor)] posted audio clips to YouTube of independent candidate [for Taiwan’s Jan 2024 presidential election] Terry Gou – who also founded electronics giant Foxconn – in which Gou endorsed another candidate in the race. This clip was almost certainly AI-generated, and it was swiftly removed by YouTube. A fake letter purporting to be from Gou, endorsing the same candidate, had already circulated – Gou had of course made no such endorsement.

Here Spamoflage used an account on YouTube to post AI Generated audio impersonating an electoral candidate (T0146: Account Asset, T0152.006: Video Platform, T0115: Post Content, T0088.001: Develop AI-Generated Audio (Deepfakes), T0143.003: Impersonated Persona, T0097.110: Party Official Persona).

Spamouflage also exploited AI-powered video platform CapCut – which is owned by TikTok backers ByteDance – to generate fake news anchors which were used in a variety of campaigns targeting the various presidential candidates in Taiwan.

Spamoflage created accounts on CapCut, which it used to create AI-generated videos of fabricated news anchors (T0146: Account Asset, T0154.002: AI Media Platform, T0087.001: Develop AI-Generated Video (Deepfakes), T0143.002: Fabricated Persona, T0097.102: Journalist Persona). | -| [T0143.002 Fabricated Persona](../../generated_pages/techniques/T0143.002.md) | The Microsoft Threat Analysis Centre (MTAC) published a report documenting the use of AI by pro-Chinese threat actors:

On 13 January, Spamouflage [(a Pro-Chinese Communist Party actor)] posted audio clips to YouTube of independent candidate [for Taiwan’s Jan 2024 presidential election] Terry Gou – who also founded electronics giant Foxconn – in which Gou endorsed another candidate in the race. This clip was almost certainly AI-generated, and it was swiftly removed by YouTube. A fake letter purporting to be from Gou, endorsing the same candidate, had already circulated – Gou had of course made no such endorsement.

Here Spamoflage used an account on YouTube to post AI Generated audio impersonating an electoral candidate (T0146: Account Asset, T0152.006: Video Platform, T0115: Post Content, T0088.001: Develop AI-Generated Audio (Deepfakes), T0143.003: Impersonated Persona, T0097.110: Party Official Persona).

Spamouflage also exploited AI-powered video platform CapCut – which is owned by TikTok backers ByteDance – to generate fake news anchors which were used in a variety of campaigns targeting the various presidential candidates in Taiwan.

Spamoflage created accounts on CapCut, which it used to create AI-generated videos of fabricated news anchors (T0146: Account Asset, T0154.002: AI Media Platform, T0087.001: Develop AI-Generated Video (Deepfakes), T0143.002: Fabricated Persona, T0097.102: Journalist Persona). | | [T0152.006 Video Platform](../../generated_pages/techniques/T0152.006.md) | The Microsoft Threat Analysis Centre (MTAC) published a report documenting the use of AI by pro-Chinese threat actors:

On 13 January, Spamouflage [(a Pro-Chinese Communist Party actor)] posted audio clips to YouTube of independent candidate [for Taiwan’s Jan 2024 presidential election] Terry Gou – who also founded electronics giant Foxconn – in which Gou endorsed another candidate in the race. This clip was almost certainly AI-generated, and it was swiftly removed by YouTube. A fake letter purporting to be from Gou, endorsing the same candidate, had already circulated – Gou had of course made no such endorsement.

Here Spamouflage used an account on YouTube to post AI-generated audio impersonating an electoral candidate (T0146: Account Asset, T0152.006: Video Platform, T0115: Post Content, T0088.001: Develop AI-Generated Audio (Deepfakes), T0143.003: Impersonated Persona, T0097.110: Party Official Persona).

Spamouflage also exploited AI-powered video platform CapCut – which is owned by TikTok backers ByteDance – to generate fake news anchors which were used in a variety of campaigns targeting the various presidential candidates in Taiwan.

Spamouflage created accounts on CapCut, which it used to create AI-generated videos of fabricated news anchors (T0146: Account Asset, T0154.002: AI Media Platform, T0087.001: Develop AI-Generated Videos (Deepfakes), T0143.002: Fabricated Persona, T0097.102: Journalist Persona). |
-| [T0154.002 AI Media Platform](../../generated_pages/techniques/T0154.002.md) | The Microsoft Threat Analysis Centre (MTAC) published a report documenting the use of AI by pro-Chinese threat actors:

On 13 January, Spamouflage [(a Pro-Chinese Communist Party actor)] posted audio clips to YouTube of independent candidate [for Taiwan’s Jan 2024 presidential election] Terry Gou – who also founded electronics giant Foxconn – in which Gou endorsed another candidate in the race. This clip was almost certainly AI-generated, and it was swiftly removed by YouTube. A fake letter purporting to be from Gou, endorsing the same candidate, had already circulated – Gou had of course made no such endorsement.

Here Spamoflage used an account on YouTube to post AI Generated audio impersonating an electoral candidate (T0146: Account Asset, T0152.006: Video Platform, T0115: Post Content, T0088.001: Develop AI-Generated Audio (Deepfakes), T0143.003: Impersonated Persona, T0097.110: Party Official Persona).

Spamouflage also exploited AI-powered video platform CapCut – which is owned by TikTok backers ByteDance – to generate fake news anchors which were used in a variety of campaigns targeting the various presidential candidates in Taiwan.

Spamoflage created accounts on CapCut, which it used to create AI-generated videos of fabricated news anchors (T0146: Account Asset, T0154.002: AI Media Platform, T0087.001: Develop AI-Generated Video (Deepfakes), T0143.002: Fabricated Persona, T0097.102: Journalist Persona). |
-DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW
\ No newline at end of file
diff --git a/generated_pages/incidents/I00099.md b/generated_pages/incidents/I00099.md
index ac59eea..6747da7 100644
--- a/generated_pages/incidents/I00099.md
+++ b/generated_pages/incidents/I00099.md
@@ -24,7 +24,5 @@
| [T0086.002 Develop AI-Generated Images (Deepfakes)](../../generated_pages/techniques/T0086.002.md) | Last October, British writer Helen was alerted to a series of deepfakes on a porn site that appear to show her engaging in extreme acts of sexual violence. That night, the images replayed themselves over and over in horrific nightmares and she was gripped by an all-consuming feeling of dread. “It’s like you’re in a tunnel, going further and further into this enclosed space, where there’s no light,” she tells Vogue. This feeling pervaded Helen’s life. Whenever she left the house, she felt exposed. On runs, she experienced panic attacks. Helen still has no idea who did this to her.

[...]

Amnesty International has been investigating the effects of abuse against women on Twitter, specifically in relation to how they act online thereafter. According to the charity, abuse creates what they’ve called “the silencing effect” whereby women feel discouraged from participating online. The same can be said for victims of deepfakes.

Helen has never been afraid to use her voice, writing deeply personal accounts of postnatal depression. But the deepfakes created a feeling of shame so strong she thought she’d be carrying this “dirty secret” forever, and so she stopped writing.

[...]

Meanwhile, deepfake ‘communities’ are thriving. There are now dedicated sites, user-friendly apps and organised ‘request’ procedures. Some sites allow you to commission custom deepfakes for £25, while on others you can upload a woman’s image and a bot will strip her naked.

“This violation is not something that should be normalised,” says Gibi, an ASMR artist with 3.13 million YouTube subscribers. Gibi has given up trying to keep tabs on the deepfakes of her. For Gibi, the most egregious part of all of this is the fact that people are “profiting off my face, doing something that I didn’t consent to, like my suffering is your livelihood.” She’s even been approached by a company offering to remove the deepfakes — for £500 a video. This has to end. But how?


A website hosting pornographic content provided users the ability to create deepfake content (T0154.002: AI Media Platform, T0086.002: Develop AI-Generated Images (Deepfakes)).

Another website enabled users to commission custom deepfakes (T0152.004: Website Asset, T0148.004: Payment Processing Capability, T0086.002: Develop AI-Generated Images (Deepfakes), T0155.005: Paid Access Asset). | | [T0152.004 Website Asset](../../generated_pages/techniques/T0152.004.md) | Last October, British writer Helen was alerted to a series of deepfakes on a porn site that appear to show her engaging in extreme acts of sexual violence. That night, the images replayed themselves over and over in horrific nightmares and she was gripped by an all-consuming feeling of dread. “It’s like you’re in a tunnel, going further and further into this enclosed space, where there’s no light,” she tells Vogue. This feeling pervaded Helen’s life. Whenever she left the house, she felt exposed. On runs, she experienced panic attacks. Helen still has no idea who did this to her.

[...]

Amnesty International has been investigating the effects of abuse against women on Twitter, specifically in relation to how they act online thereafter. According to the charity, abuse creates what they’ve called “the silencing effect” whereby women feel discouraged from participating online. The same can be said for victims of deepfakes.

Helen has never been afraid to use her voice, writing deeply personal accounts of postnatal depression. But the deepfakes created a feeling of shame so strong she thought she’d be carrying this “dirty secret” forever, and so she stopped writing.

[...]

Meanwhile, deepfake ‘communities’ are thriving. There are now dedicated sites, user-friendly apps and organised ‘request’ procedures. Some sites allow you to commission custom deepfakes for £25, while on others you can upload a woman’s image and a bot will strip her naked.

“This violation is not something that should be normalised,” says Gibi, an ASMR artist with 3.13 million YouTube subscribers. Gibi has given up trying to keep tabs on the deepfakes of her. For Gibi, the most egregious part of all of this is the fact that people are “profiting off my face, doing something that I didn’t consent to, like my suffering is your livelihood.” She’s even been approached by a company offering to remove the deepfakes — for £500 a video. This has to end. But how?


A website hosting pornographic content provided users the ability to create deepfake content (T0154.002: AI Media Platform, T0086.002: Develop AI-Generated Images (Deepfakes)).

Another website enabled users to commission custom deepfakes (T0152.004: Website Asset, T0148.004: Payment Processing Capability, T0086.002: Develop AI-Generated Images (Deepfakes), T0155.005: Paid Access Asset). | | [T0154.002 AI Media Platform](../../generated_pages/techniques/T0154.002.md) | Last October, British writer Helen was alerted to a series of deepfakes on a porn site that appear to show her engaging in extreme acts of sexual violence. That night, the images replayed themselves over and over in horrific nightmares and she was gripped by an all-consuming feeling of dread. “It’s like you’re in a tunnel, going further and further into this enclosed space, where there’s no light,” she tells Vogue. This feeling pervaded Helen’s life. Whenever she left the house, she felt exposed. On runs, she experienced panic attacks. Helen still has no idea who did this to her.

[...]

Amnesty International has been investigating the effects of abuse against women on Twitter, specifically in relation to how they act online thereafter. According to the charity, abuse creates what they’ve called “the silencing effect” whereby women feel discouraged from participating online. The same can be said for victims of deepfakes.

Helen has never been afraid to use her voice, writing deeply personal accounts of postnatal depression. But the deepfakes created a feeling of shame so strong she thought she’d be carrying this “dirty secret” forever, and so she stopped writing.

[...]

Meanwhile, deepfake ‘communities’ are thriving. There are now dedicated sites, user-friendly apps and organised ‘request’ procedures. Some sites allow you to commission custom deepfakes for £25, while on others you can upload a woman’s image and a bot will strip her naked.

“This violation is not something that should be normalised,” says Gibi, an ASMR artist with 3.13 million YouTube subscribers. Gibi has given up trying to keep tabs on the deepfakes of her. For Gibi, the most egregious part of all of this is the fact that people are “profiting off my face, doing something that I didn’t consent to, like my suffering is your livelihood.” She’s even been approached by a company offering to remove the deepfakes — for £500 a video. This has to end. But how?


A website hosting pornographic content provided users the ability to create deepfake content (T0154.002: AI Media Platform, T0086.002: Develop AI-Generated Images (Deepfakes)).

Another website enabled users to commission custom deepfakes (T0152.004: Website Asset, T0148.004: Payment Processing Capability, T0086.002: Develop AI-Generated Images (Deepfakes), T0155.005: Paid Access Asset). | -| [T0155.005 Paid Access Asset](../../generated_pages/techniques/T0155.005.md) | Last October, British writer Helen was alerted to a series of deepfakes on a porn site that appear to show her engaging in extreme acts of sexual violence. That night, the images replayed themselves over and over in horrific nightmares and she was gripped by an all-consuming feeling of dread. “It’s like you’re in a tunnel, going further and further into this enclosed space, where there’s no light,” she tells Vogue. This feeling pervaded Helen’s life. Whenever she left the house, she felt exposed. On runs, she experienced panic attacks. Helen still has no idea who did this to her.

[...]

Amnesty International has been investigating the effects of abuse against women on Twitter, specifically in relation to how they act online thereafter. According to the charity, abuse creates what they’ve called “the silencing effect” whereby women feel discouraged from participating online. The same can be said for victims of deepfakes.

Helen has never been afraid to use her voice, writing deeply personal accounts of postnatal depression. But the deepfakes created a feeling of shame so strong she thought she’d be carrying this “dirty secret” forever, and so she stopped writing.

[...]

Meanwhile, deepfake ‘communities’ are thriving. There are now dedicated sites, user-friendly apps and organised ‘request’ procedures. Some sites allow you to commission custom deepfakes for £25, while on others you can upload a woman’s image and a bot will strip her naked.

“This violation is not something that should be normalised,” says Gibi, an ASMR artist with 3.13 million YouTube subscribers. Gibi has given up trying to keep tabs on the deepfakes of her. For Gibi, the most egregious part of all of this is the fact that people are “profiting off my face, doing something that I didn’t consent to, like my suffering is your livelihood.” She’s even been approached by a company offering to remove the deepfakes — for £500 a video. This has to end. But how?


A website hosting pornographic content provided users the ability to create deepfake content (T0154.002: AI Media Platform, T0086.002: Develop AI-Generated Images (Deepfakes)).

Another website enabled users to commission custom deepfakes (T0152.004: Website Asset, T0148.004: Payment Processing Capability, T0086.002: Develop AI-Generated Images (Deepfakes), T0155.005: Paid Access Asset). |
-DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW
\ No newline at end of file
diff --git a/generated_pages/incidents/I00101.md b/generated_pages/incidents/I00101.md
index 1dbc135..cbe92ec 100644
--- a/generated_pages/incidents/I00101.md
+++ b/generated_pages/incidents/I00101.md
@@ -21,10 +21,8 @@
| Technique | Description given for this incident |
| --------- | ------------------------- |
-| [T0124 Suppress Opposition](../../generated_pages/techniques/T0124.md) | This report looks at changes content posted to communities on Reddit (called Subreddits) after teams of voluntary moderators are replaced with what appear to be pro-Russian voices:

The r/antiwar subreddit appears to be a very recent takeover target. With 12,900 members it is not the largest community on Reddit, but this does place it squarely within the top 5% of all communities in terms of membership.

Three months ago a new moderator team was instated by subreddit head u/democracy101. Any posts documenting Russian aggression in Ukraine are now swiftly removed, while the board has been flooded with posts about how Ukraine is losing, or how American “neocons wrecked” the country.

The pinned post from moderator u/n0ahbody proclaims: “People who call for an end to Russian aggressions but not the Western aggressions Russia is reacting to don’t really want peace.” This user takes the view that any negative opinion about Russia is “shaped by what the fanatically Russophobic MSM wants you to think,” and that the United States is not threatened by its neighbors. Russia is.”

When u/n0ahbody took over the sub, the user posted a triumphant and vitriolic diatribe in another pro-Russia subreddit with some 33,500 members, r/EndlessWar. “We are making progress. We are purging the sub of all NAFO and NAFO-adjacent elements. Hundreds of them have been banned over the last 24 hours for various rule infractions, for being NAFO or NAFO-adjacent,” the user said, referencing the grassroots, pro-Ukrainian North Atlantic Fella Organization (NAFO) meme movement.

Several former users have reported they have indeed been banned from r/antiwar since the change in moderators. “If this subreddit cannot be explicitly against the invasion of Ukraine it will never truly be anti-war,” wrote one user Halcyon_Rein, in the antiwar subreddit on September 6. They then edited the post to say, “Edit: btw, I got f**king banned for this 💀💀💀”


A community hosted on Reddit was taken over by new moderators (T0151.011: Community Sub-Forum, T0150.005: Compromised Asset). These moderators removed content posted to the community which favoured Ukraine over Russia (T0146.004: Administrator Account Asset, T0151.011: Community Sub-Forum, T0124: Suppress Opposition). |
| [T0146.004 Administrator Account Asset](../../generated_pages/techniques/T0146.004.md) | This report looks at changes in content posted to communities on Reddit (called Subreddits) after teams of volunteer moderators are replaced with what appear to be pro-Russian voices:

The r/antiwar subreddit appears to be a very recent takeover target. With 12,900 members it is not the largest community on Reddit, but this does place it squarely within the top 5% of all communities in terms of membership.

Three months ago a new moderator team was instated by subreddit head u/democracy101. Any posts documenting Russian aggression in Ukraine are now swiftly removed, while the board has been flooded with posts about how Ukraine is losing, or how American “neocons wrecked” the country.

The pinned post from moderator u/n0ahbody proclaims: “People who call for an end to Russian aggressions but not the Western aggressions Russia is reacting to don’t really want peace.” This user takes the view that any negative opinion about Russia is “shaped by what the fanatically Russophobic MSM wants you to think,” and that the United States is not threatened by its neighbors. Russia is.”

When u/n0ahbody took over the sub, the user posted a triumphant and vitriolic diatribe in another pro-Russia subreddit with some 33,500 members, r/EndlessWar. “We are making progress. We are purging the sub of all NAFO and NAFO-adjacent elements. Hundreds of them have been banned over the last 24 hours for various rule infractions, for being NAFO or NAFO-adjacent,” the user said, referencing the grassroots, pro-Ukrainian North Atlantic Fella Organization (NAFO) meme movement.

Several former users have reported they have indeed been banned from r/antiwar since the change in moderators. “If this subreddit cannot be explicitly against the invasion of Ukraine it will never truly be anti-war,” wrote one user Halcyon_Rein, in the antiwar subreddit on September 6. They then edited the post to say, “Edit: btw, I got f**king banned for this 💀💀💀”


A community hosted on Reddit was taken over by new moderators (T0151.011: Community Sub-Forum, T0150.005: Compromised Asset). These moderators removed content posted to the community which favoured Ukraine over Russia (T0146.004: Administrator Account Asset, T0151.011: Community Sub-Forum, T0124: Suppress Opposition). |
| [T0150.005 Compromised Asset](../../generated_pages/techniques/T0150.005.md) | This report looks at changes in content posted to communities on Reddit (called Subreddits) after teams of volunteer moderators are replaced with what appear to be pro-Russian voices:

The r/antiwar subreddit appears to be a very recent takeover target. With 12,900 members it is not the largest community on Reddit, but this does place it squarely within the top 5% of all communities in terms of membership.

Three months ago a new moderator team was instated by subreddit head u/democracy101. Any posts documenting Russian aggression in Ukraine are now swiftly removed, while the board has been flooded with posts about how Ukraine is losing, or how American “neocons wrecked” the country.

The pinned post from moderator u/n0ahbody proclaims: “People who call for an end to Russian aggressions but not the Western aggressions Russia is reacting to don’t really want peace.” This user takes the view that any negative opinion about Russia is “shaped by what the fanatically Russophobic MSM wants you to think,” and that the United States is not threatened by its neighbors. Russia is.”

When u/n0ahbody took over the sub, the user posted a triumphant and vitriolic diatribe in another pro-Russia subreddit with some 33,500 members, r/EndlessWar. “We are making progress. We are purging the sub of all NAFO and NAFO-adjacent elements. Hundreds of them have been banned over the last 24 hours for various rule infractions, for being NAFO or NAFO-adjacent,” the user said, referencing the grassroots, pro-Ukrainian North Atlantic Fella Organization (NAFO) meme movement.

Several former users have reported they have indeed been banned from r/antiwar since the change in moderators. “If this subreddit cannot be explicitly against the invasion of Ukraine it will never truly be anti-war,” wrote one user Halcyon_Rein, in the antiwar subreddit on September 6. They then edited the post to say, “Edit: btw, I got f**king banned for this 💀💀💀”


A community hosted on Reddit was taken over by new moderators (T0151.011: Community Sub-Forum, T0150.005: Compromised Asset). These moderators removed content posted to the community which favoured Ukraine over Russia (T0146.004: Administrator Account Asset, T0151.011: Community Sub-Forum, T0124: Suppress Opposition). |
| [T0151.011 Community Sub-Forum](../../generated_pages/techniques/T0151.011.md) | This report looks at changes in content posted to communities on Reddit (called Subreddits) after teams of volunteer moderators are replaced with what appear to be pro-Russian voices:

The r/antiwar subreddit appears to be a very recent takeover target. With 12,900 members it is not the largest community on Reddit, but this does place it squarely within the top 5% of all communities in terms of membership.

Three months ago a new moderator team was instated by subreddit head u/democracy101. Any posts documenting Russian aggression in Ukraine are now swiftly removed, while the board has been flooded with posts about how Ukraine is losing, or how American “neocons wrecked” the country.

The pinned post from moderator u/n0ahbody proclaims: “People who call for an end to Russian aggressions but not the Western aggressions Russia is reacting to don’t really want peace.” This user takes the view that any negative opinion about Russia is “shaped by what the fanatically Russophobic MSM wants you to think,” and that the United States is not threatened by its neighbors. Russia is.”

When u/n0ahbody took over the sub, the user posted a triumphant and vitriolic diatribe in another pro-Russia subreddit with some 33,500 members, r/EndlessWar. “We are making progress. We are purging the sub of all NAFO and NAFO-adjacent elements. Hundreds of them have been banned over the last 24 hours for various rule infractions, for being NAFO or NAFO-adjacent,” the user said, referencing the grassroots, pro-Ukrainian North Atlantic Fella Organization (NAFO) meme movement.

Several former users have reported they have indeed been banned from r/antiwar since the change in moderators. “If this subreddit cannot be explicitly against the invasion of Ukraine it will never truly be anti-war,” wrote one user Halcyon_Rein, in the antiwar subreddit on September 6. They then edited the post to say, “Edit: btw, I got f**king banned for this 💀💀💀”


A community hosted on Reddit was taken over by new moderators (T0151.011: Community Sub-Forum, T0150.005: Compromised Asset). These moderators removed content posted to the community which favoured Ukraine over Russia (T0146.004: Administrator Account Asset, T0151.011: Community Sub-Forum, T0124: Suppress Opposition). |
-DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW
\ No newline at end of file
diff --git a/generated_pages/incidents/I00102.md b/generated_pages/incidents/I00102.md
index 5b23ac2..10ea91b 100644
--- a/generated_pages/incidents/I00102.md
+++ b/generated_pages/incidents/I00102.md
@@ -21,16 +21,8 @@
| Technique | Description given for this incident |
| --------- | ------------------------- |
-| [T0084.004 Appropriate Content](../../generated_pages/techniques/T0084.004.md) | On April 27, 2019, at around 11:30 a.m. local time, a young man with a semi-automatic rifle walked into the Chabad of Poway Synagogue in Poway, California. He opened fire, killing one worshipper and wounding three others. In the hours since the shooting, a manifesto, believed to be written by the shooter, began circulating online. Evidence has also surfaced that, like the Christchurch Mosque shooter, this killer began his rampage with a post on 8chan’s /pol/ board.

Although both of these attacks may seem different, since they targeted worshippers of different faiths, both shooters were united by the same fascist ideology. They were also both radicalized in the same place: 8chan’s /pol/ board.

This has been corroborated by posts on the board itself, where “anons,” as the posters call themselves, recirculated the shooter’s since-deleted post. In it, the alleged shooter claims to have been “lurking” on the site for a year and a half. He includes a link to a livestream of his rampage — which thankfully does not appear to have worked — and he also includes a pastebin link to his manifesto.

The very first response to his announcement was another anon cheering him on and telling him to “get the high score,” AKA, kill a huge number of people.


Before carrying out a mass shooting, the shooter posted a thread to 8chan’s /pol/ board. The post directed users to a variety of different platforms (T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0122: Direct Users to Alternative Platforms); a Facebook account on which the shooter attempted to livestream the shooting (T0146: Account Asset, T0151.001: Social Media Platform); and a manifesto they had written hosted on pastebin (T0146.006: Open Access Platform, T0152.005: Paste Platform, T0115: Post Content) and uploaded to the file sharing platform Mediafire (T0152.010: File Hosting Platform, T0085.004: Develop Document).

The report looks deeper into 8chan’s /pol/ board:

8chan is a large website, which includes a number of different discussion groups about everything from anime to left-wing politics. /pol/ is one particularly active board on the website, and it is best described as a gathering place for extremely online neo-Nazis.

[...]

I’ve browsed /pol/ on an almost daily basis since the Christchurch shooting. It has not been difficult to find calls for violence. On Monday, March 25 of this year, I ran across evidence of anons translating the Christchurch shooter’s manifesto into other languages in an attempt to inspire more shootings across the globe.

This tactic can work, and today’s shooting is proof. The Poway Synagogue shooter directly cited the Christchurch shooter as his inspiration, saying he decided to carry out his attack roughly two weeks after that shooting. On /pol/, many anons refer to the Christchurch shooter, Brenton Tarrant, as “Saint Tarrant,” complete with medieval-inspired iconography.


Manifestos posted to 8chan are translated and reshared by other platform users (T0101: Create Localised Content, T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0084.004: Appropriate Content).

When I began looking through /pol/ right after the Poway Synagogue shooting, I came across several claims that the shootings had been a “false flag” aimed at making the message board look bad.

When Bellingcat tweeted out a warning about shitposting and the shooter’s manifesto, in the immediate wake of the attack, probable anons even commented on the Tweet in an attempt to deny that a channer had been behind the attack.

This is a recognizable pattern that occurs in the wake of any crimes committed by members of the board. While the initial response to the Christchurch shooter’s massacre thread was riotous glee, in the days after the shooting many anons began to claim the attack had been a false flag. This actually sparked significant division and debate between the members of /pol/. In the below image, a user mocks other anons for being unable to “believe something in your favor is real.” Another anon responds, “As the evidence comes out, its [sic] quite clear that this was a false flag.”

In his manifesto, the Poway Synagogue shooter even weighed in on this debate, accusing other anons who called the Christchurch and Tree of Life Synagogue shootings “false flags” to merely have been scared: “They can’t fathom that there are brave White men alive who have the willpower and courage it takes to say, ‘Fuck my life—I’m willing to sacrifice everything for the benefit of my race.’”


Platform users deny that their platform has been used by mass shooters to publish their manifestos (T0129.006: Deny Involvement). | -| [T0085.004 Develop Document](../../generated_pages/techniques/T0085.004.md) | On April 27, 2019, at around 11:30 a.m. local time, a young man with a semi-automatic rifle walked into the Chabad of Poway Synagogue in Poway, California. He opened fire, killing one worshipper and wounding three others. In the hours since the shooting, a manifesto, believed to be written by the shooter, began circulating online. Evidence has also surfaced that, like the Christchurch Mosque shooter, this killer began his rampage with a post on 8chan’s /pol/ board.

Although both of these attacks may seem different, since they targeted worshippers of different faiths, both shooters were united by the same fascist ideology. They were also both radicalized in the same place: 8chan’s /pol/ board.

This has been corroborated by posts on the board itself, where “anons,” as the posters call themselves, recirculated the shooter’s since-deleted post. In it, the alleged shooter claims to have been “lurking” on the site for a year and a half. He includes a link to a livestream of his rampage — which thankfully does not appear to have worked — and he also includes a pastebin link to his manifesto.

The very first response to his announcement was another anon cheering him on and telling him to “get the high score,” AKA, kill a huge number of people.


Before carrying out a mass shooting, the shooter posted a thread to 8chan’s /pol/ board. The post directed users to a variety of different platforms (T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0122: Direct Users to Alternative Platforms); a Facebook account on which the shooter attempted to livestream the shooting (T0146: Account Asset, T0151.001: Social Media Platform); and a manifesto they had written hosted on pastebin (T0146.006: Open Access Platform, T0152.005: Paste Platform, T0115: Post Content) and uploaded to the file sharing platform Mediafire (T0152.010: File Hosting Platform, T0085.004: Develop Document).

The report looks deeper into 8chan’s /pol/ board:

8chan is a large website, which includes a number of different discussion groups about everything from anime to left-wing politics. /pol/ is one particularly active board on the website, and it is best described as a gathering place for extremely online neo-Nazis.

[...]

I’ve browsed /pol/ on an almost daily basis since the Christchurch shooting. It has not been difficult to find calls for violence. On Monday, March 25 of this year, I ran across evidence of anons translating the Christchurch shooter’s manifesto into other languages in an attempt to inspire more shootings across the globe.

This tactic can work, and today’s shooting is proof. The Poway Synagogue shooter directly cited the Christchurch shooter as his inspiration, saying he decided to carry out his attack roughly two weeks after that shooting. On /pol/, many anons refer to the Christchurch shooter, Brenton Tarrant, as “Saint Tarrant,” complete with medieval-inspired iconography.


Manifestos posted to 8chan are translated and reshared by other platform users (T0101: Create Localised Content, T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0084.004: Appropriate Content).

When I began looking through /pol/ right after the Poway Synagogue shooting, I came across several claims that the shootings had been a “false flag” aimed at making the message board look bad.

When Bellingcat tweeted out a warning about shitposting and the shooter’s manifesto, in the immediate wake of the attack, probable anons even commented on the Tweet in an attempt to deny that a channer had been behind the attack.

This is a recognizable pattern that occurs in the wake of any crimes committed by members of the board. While the initial response to the Christchurch shooter’s massacre thread was riotous glee, in the days after the shooting many anons began to claim the attack had been a false flag. This actually sparked significant division and debate between the members of /pol/. In the below image, a user mocks other anons for being unable to “believe something in your favor is real.” Another anon responds, “As the evidence comes out, its [sic] quite clear that this was a false flag.”

In his manifesto, the Poway Synagogue shooter even weighed in on this debate, accusing other anons who called the Christchurch and Tree of Life Synagogue shootings “false flags” to merely have been scared: “They can’t fathom that there are brave White men alive who have the willpower and courage it takes to say, ‘Fuck my life—I’m willing to sacrifice everything for the benefit of my race.’”


Platform users deny that their platform has been used by mass shooters to publish their manifestos (T0129.006: Deny Involvement). | | [T0101 Create Localised Content](../../generated_pages/techniques/T0101.md) | On April 27, 2019, at around 11:30 a.m. local time, a young man with a semi-automatic rifle walked into the Chabad of Poway Synagogue in Poway, California. He opened fire, killing one worshipper and wounding three others. In the hours since the shooting, a manifesto, believed to be written by the shooter, began circulating online. Evidence has also surfaced that, like the Christchurch Mosque shooter, this killer began his rampage with a post on 8chan’s /pol/ board.

Although both of these attacks may seem different, since they targeted worshippers of different faiths, both shooters were united by the same fascist ideology. They were also both radicalized in the same place: 8chan’s /pol/ board.

This has been corroborated by posts on the board itself, where “anons,” as the posters call themselves, recirculated the shooter’s since-deleted post. In it, the alleged shooter claims to have been “lurking” on the site for a year and a half. He includes a link to a livestream of his rampage — which thankfully does not appear to have worked — and he also includes a pastebin link to his manifesto.

The very first response to his announcement was another anon cheering him on and telling him to “get the high score,” AKA, kill a huge number of people.


Before carrying out a mass shooting, the shooter posted a thread to 8chan’s /pol/ board. The post directed users to a variety of platforms (T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0122: Direct Users to Alternative Platforms); a Facebook account on which the shooter attempted to livestream the shooting (T0146: Account Asset, T0151.001: Social Media Platform); and a manifesto they had written, hosted on Pastebin (T0146.006: Open Access Platform, T0152.005: Paste Platform, T0115: Post Content) and uploaded to the file-sharing platform Mediafire (T0152.010: File Hosting Platform, T0085.004: Develop Document).

The report looks deeper into 8chan’s /pol/ board:

8chan is a large website, which includes a number of different discussion groups about everything from anime to left-wing politics. /pol/ is one particularly active board on the website, and it is best described as a gathering place for extremely online neo-Nazis.

[...]

I’ve browsed /pol/ on an almost daily basis since the Christchurch shooting. It has not been difficult to find calls for violence. On Monday, March 25 of this year, I ran across evidence of anons translating the Christchurch shooter’s manifesto into other languages in an attempt to inspire more shootings across the globe.

This tactic can work, and today’s shooting is proof. The Poway Synagogue shooter directly cited the Christchurch shooter as his inspiration, saying he decided to carry out his attack roughly two weeks after that shooting. On /pol/, many anons refer to the Christchurch shooter, Brenton Tarrant, as “Saint Tarrant,” complete with medieval-inspired iconography.


Manifestos posted to 8chan are translated and reshared by other platform users (T0101: Create Localised Content, T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0084.004: Appropriate Content).

When I began looking through /pol/ right after the Poway Synagogue shooting, I came across several claims that the shootings had been a “false flag” aimed at making the message board look bad.

When Bellingcat tweeted out a warning about shitposting and the shooter’s manifesto, in the immediate wake of the attack, probable anons even commented on the Tweet in an attempt to deny that a channer had been behind the attack.

This is a recognizable pattern that occurs in the wake of any crimes committed by members of the board. While the initial response to the Christchurch shooter’s massacre thread was riotous glee, in the days after the shooting many anons began to claim the attack had been a false flag. This actually sparked significant division and debate between the members of /pol/. In the below image, a user mocks other anons for being unable to “believe something in your favor is real.” Another anon responds, “As the evidence comes out, its [sic] quite clear that this was a false flag.”

In his manifesto, the Poway Synagogue shooter even weighed in on this debate, accusing other anons who called the Christchurch and Tree of Life Synagogue shootings “false flags” to merely have been scared: “They can’t fathom that there are brave White men alive who have the willpower and courage it takes to say, ‘Fuck my life—I’m willing to sacrifice everything for the benefit of my race.’”


Platform users deny that their platform has been used by mass shooters to publish their manifestos (T0129.006: Deny Involvement). | -| [T0122 Direct Users to Alternative Platforms](../../generated_pages/techniques/T0122.md) | On April 27, 2019, at around 11:30 a.m. local time, a young man with a semi-automatic rifle walked into the Chabad of Poway Synagogue in Poway, California. He opened fire, killing one worshipper and wounding three others. In the hours since the shooting, a manifesto, believed to be written by the shooter, began circulating online. Evidence has also surfaced that, like the Christchurch Mosque shooter, this killer began his rampage with a post on 8chan’s /pol/ board.

Although both of these attacks may seem different, since they targeted worshippers of different faiths, both shooters were united by the same fascist ideology. They were also both radicalized in the same place: 8chan’s /pol/ board.

This has been corroborated by posts on the board itself, where “anons,” as the posters call themselves, recirculated the shooter’s since-deleted post. In it, the alleged shooter claims to have been “lurking” on the site for a year and a half. He includes a link to a livestream of his rampage — which thankfully does not appear to have worked — and he also includes a pastebin link to his manifesto.

The very first response to his announcement was another anon cheering him on and telling him to “get the high score,” AKA, kill a huge number of people.


Before carrying out a mass shooting, the shooter posted a thread to 8chan’s /pol/ board. The post directed users to a variety of different platforms (T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0122: Direct Users to Alternative Platforms); a Facebook account on which the shooter attempted to livestream the shooting (T0146: Account Asset, T0151.001: Social Media Platform); and a manifesto they had written hosted on pastebin (T0146.006: Open Access Platform, T0152.005: Paste Platform, T0115: Post Content) and uploaded to the file sharing platform Mediafire (T0152.010: File Hosting Platform, T0085.004: Develop Document).

The report looks deeper into 8chan’s /pol/ board:

8chan is a large website, which includes a number of different discussion groups about everything from anime to left-wing politics. /pol/ is one particularly active board on the website, and it is best described as a gathering place for extremely online neo-Nazis.

[...]

I’ve browsed /pol/ on an almost daily basis since the Christchurch shooting. It has not been difficult to find calls for violence. On Monday, March 25 of this year, I ran across evidence of anons translating the Christchurch shooter’s manifesto into other languages in an attempt to inspire more shootings across the globe.

This tactic can work, and today’s shooting is proof. The Poway Synagogue shooter directly cited the Christchurch shooter as his inspiration, saying he decided to carry out his attack roughly two weeks after that shooting. On /pol/, many anons refer to the Christchurch shooter, Brenton Tarrant, as “Saint Tarrant,” complete with medieval-inspired iconography.


Manifestos posted to 8chan are translated and reshared by other platform users (T0101: Create Localised Content, T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0084.004: Appropriate Content).

When I began looking through /pol/ right after the Poway Synagogue shooting, I came across several claims that the shootings had been a “false flag” aimed at making the message board look bad.

When Bellingcat tweeted out a warning about shitposting and the shooter’s manifesto, in the immediate wake of the attack, probable anons even commented on the Tweet in an attempt to deny that a channer had been behind the attack.

This is a recognizable pattern that occurs in the wake of any crimes committed by members of the board. While the initial response to the Christchurch shooter’s massacre thread was riotous glee, in the days after the shooting many anons began to claim the attack had been a false flag. This actually sparked significant division and debate between the members of /pol/. In the below image, a user mocks other anons for being unable to “believe something in your favor is real.” Another anon responds, “As the evidence comes out, its [sic] quite clear that this was a false flag.”

In his manifesto, the Poway Synagogue shooter even weighed in on this debate, accusing other anons who called the Christchurch and Tree of Life Synagogue shootings “false flags” to merely have been scared: “They can’t fathom that there are brave White men alive who have the willpower and courage it takes to say, ‘Fuck my life—I’m willing to sacrifice everything for the benefit of my race.’”


Platform users deny that their platform has been used by mass shooters to publish their manifestos (T0129.006: Deny Involvement). | -| [T0129.006 Deny Involvement](../../generated_pages/techniques/T0129.006.md) | On April 27, 2019, at around 11:30 a.m. local time, a young man with a semi-automatic rifle walked into the Chabad of Poway Synagogue in Poway, California. He opened fire, killing one worshipper and wounding three others. In the hours since the shooting, a manifesto, believed to be written by the shooter, began circulating online. Evidence has also surfaced that, like the Christchurch Mosque shooter, this killer began his rampage with a post on 8chan’s /pol/ board.

Although both of these attacks may seem different, since they targeted worshippers of different faiths, both shooters were united by the same fascist ideology. They were also both radicalized in the same place: 8chan’s /pol/ board.

This has been corroborated by posts on the board itself, where “anons,” as the posters call themselves, recirculated the shooter’s since-deleted post. In it, the alleged shooter claims to have been “lurking” on the site for a year and a half. He includes a link to a livestream of his rampage — which thankfully does not appear to have worked — and he also includes a pastebin link to his manifesto.

The very first response to his announcement was another anon cheering him on and telling him to “get the high score,” AKA, kill a huge number of people.


Before carrying out a mass shooting, the shooter posted a thread to 8chan’s /pol/ board. The post directed users to a variety of different platforms (T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0122: Direct Users to Alternative Platforms); a Facebook account on which the shooter attempted to livestream the shooting (T0146: Account Asset, T0151.001: Social Media Platform); and a manifesto they had written hosted on pastebin (T0146.006: Open Access Platform, T0152.005: Paste Platform, T0115: Post Content) and uploaded to the file sharing platform Mediafire (T0152.010: File Hosting Platform, T0085.004: Develop Document).

The report looks deeper into 8chan’s /pol/ board:

8chan is a large website, which includes a number of different discussion groups about everything from anime to left-wing politics. /pol/ is one particularly active board on the website, and it is best described as a gathering place for extremely online neo-Nazis.

[...]

I’ve browsed /pol/ on an almost daily basis since the Christchurch shooting. It has not been difficult to find calls for violence. On Monday, March 25 of this year, I ran across evidence of anons translating the Christchurch shooter’s manifesto into other languages in an attempt to inspire more shootings across the globe.

This tactic can work, and today’s shooting is proof. The Poway Synagogue shooter directly cited the Christchurch shooter as his inspiration, saying he decided to carry out his attack roughly two weeks after that shooting. On /pol/, many anons refer to the Christchurch shooter, Brenton Tarrant, as “Saint Tarrant,” complete with medieval-inspired iconography.


Manifestos posted to 8chan are translated and reshared by other platform users (T0101: Create Localised Content, T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0084.004: Appropriate Content).

When I began looking through /pol/ right after the Poway Synagogue shooting, I came across several claims that the shootings had been a “false flag” aimed at making the message board look bad.

When Bellingcat tweeted out a warning about shitposting and the shooter’s manifesto, in the immediate wake of the attack, probable anons even commented on the Tweet in an attempt to deny that a channer had been behind the attack.

This is a recognizable pattern that occurs in the wake of any crimes committed by members of the board. While the initial response to the Christchurch shooter’s massacre thread was riotous glee, in the days after the shooting many anons began to claim the attack had been a false flag. This actually sparked significant division and debate between the members of /pol/. In the below image, a user mocks other anons for being unable to “believe something in your favor is real.” Another anon responds, “As the evidence comes out, its [sic] quite clear that this was a false flag.”

In his manifesto, the Poway Synagogue shooter even weighed in on this debate, accusing other anons who called the Christchurch and Tree of Life Synagogue shootings “false flags” of merely being scared: “They can’t fathom that there are brave White men alive who have the willpower and courage it takes to say, ‘Fuck my life—I’m willing to sacrifice everything for the benefit of my race.’”


Platform users deny that their platform has been used by mass shooters to publish their manifestos (T0129.006: Deny Involvement). | | [T0146.006 Open Access Platform](../../generated_pages/techniques/T0146.006.md) | On April 27, 2019, at around 11:30 a.m. local time, a young man with a semi-automatic rifle walked into the Chabad of Poway Synagogue in Poway, California. He opened fire, killing one worshipper and wounding three others. In the hours since the shooting, a manifesto, believed to be written by the shooter, began circulating online. Evidence has also surfaced that, like the Christchurch Mosque shooter, this killer began his rampage with a post on 8chan’s /pol/ board.

Although both of these attacks may seem different, since they targeted worshippers of different faiths, both shooters were united by the same fascist ideology. They were also both radicalized in the same place: 8chan’s /pol/ board.

This has been corroborated by posts on the board itself, where “anons,” as the posters call themselves, recirculated the shooter’s since-deleted post. In it, the alleged shooter claims to have been “lurking” on the site for a year and a half. He includes a link to a livestream of his rampage — which thankfully does not appear to have worked — and he also includes a pastebin link to his manifesto.

The very first response to his announcement was another anon cheering him on and telling him to “get the high score,” AKA, kill a huge number of people.


Before carrying out a mass shooting, the shooter posted a thread to 8chan’s /pol/ board. The post directed users to a variety of platforms (T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0122: Direct Users to Alternative Platforms): a Facebook account on which the shooter attempted to livestream the shooting (T0146: Account Asset, T0151.001: Social Media Platform); and a manifesto they had written, hosted on pastebin (T0146.006: Open Access Platform, T0152.005: Paste Platform, T0115: Post Content) and uploaded to the file-sharing platform Mediafire (T0152.010: File Hosting Platform, T0085.004: Develop Document).

The report looks deeper into 8chan’s /pol/ board:

8chan is a large website, which includes a number of different discussion groups about everything from anime to left-wing politics. /pol/ is one particularly active board on the website, and it is best described as a gathering place for extremely online neo-Nazis.

[...]

I’ve browsed /pol/ on an almost daily basis since the Christchurch shooting. It has not been difficult to find calls for violence. On Monday, March 25 of this year, I ran across evidence of anons translating the Christchurch shooter’s manifesto into other languages in an attempt to inspire more shootings across the globe.

This tactic can work, and today’s shooting is proof. The Poway Synagogue shooter directly cited the Christchurch shooter as his inspiration, saying he decided to carry out his attack roughly two weeks after that shooting. On /pol/, many anons refer to the Christchurch shooter, Brenton Tarrant, as “Saint Tarrant,” complete with medieval-inspired iconography.


Manifestos posted to 8chan are translated and reshared by other platform users (T0101: Create Localised Content, T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0084.004: Appropriate Content).

When I began looking through /pol/ right after the Poway Synagogue shooting, I came across several claims that the shootings had been a “false flag” aimed at making the message board look bad.

When Bellingcat tweeted out a warning about shitposting and the shooter’s manifesto, in the immediate wake of the attack, probable anons even commented on the Tweet in an attempt to deny that a channer had been behind the attack.

This is a recognizable pattern that occurs in the wake of any crimes committed by members of the board. While the initial response to the Christchurch shooter’s massacre thread was riotous glee, in the days after the shooting many anons began to claim the attack had been a false flag. This actually sparked significant division and debate between the members of /pol/. In the below image, a user mocks other anons for being unable to “believe something in your favor is real.” Another anon responds, “As the evidence comes out, its [sic] quite clear that this was a false flag.”

In his manifesto, the Poway Synagogue shooter even weighed in on this debate, accusing other anons who called the Christchurch and Tree of Life Synagogue shootings “false flags” of merely being scared: “They can’t fathom that there are brave White men alive who have the willpower and courage it takes to say, ‘Fuck my life—I’m willing to sacrifice everything for the benefit of my race.’”


Platform users deny that their platform has been used by mass shooters to publish their manifestos (T0129.006: Deny Involvement). | -| [T0151.001 Social Media Platform](../../generated_pages/techniques/T0151.001.md) | On April 27, 2019, at around 11:30 a.m. local time, a young man with a semi-automatic rifle walked into the Chabad of Poway Synagogue in Poway, California. He opened fire, killing one worshipper and wounding three others. In the hours since the shooting, a manifesto, believed to be written by the shooter, began circulating online. Evidence has also surfaced that, like the Christchurch Mosque shooter, this killer began his rampage with a post on 8chan’s /pol/ board.

Although both of these attacks may seem different, since they targeted worshippers of different faiths, both shooters were united by the same fascist ideology. They were also both radicalized in the same place: 8chan’s /pol/ board.

This has been corroborated by posts on the board itself, where “anons,” as the posters call themselves, recirculated the shooter’s since-deleted post. In it, the alleged shooter claims to have been “lurking” on the site for a year and a half. He includes a link to a livestream of his rampage — which thankfully does not appear to have worked — and he also includes a pastebin link to his manifesto.

The very first response to his announcement was another anon cheering him on and telling him to “get the high score,” AKA, kill a huge number of people.


Before carrying out a mass shooting, the shooter posted a thread to 8chan’s /pol/ board. The post directed users to a variety of platforms (T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0122: Direct Users to Alternative Platforms): a Facebook account on which the shooter attempted to livestream the shooting (T0146: Account Asset, T0151.001: Social Media Platform); and a manifesto they had written, hosted on pastebin (T0146.006: Open Access Platform, T0152.005: Paste Platform, T0115: Post Content) and uploaded to the file-sharing platform Mediafire (T0152.010: File Hosting Platform, T0085.004: Develop Document).

The report looks deeper into 8chan’s /pol/ board:

8chan is a large website, which includes a number of different discussion groups about everything from anime to left-wing politics. /pol/ is one particularly active board on the website, and it is best described as a gathering place for extremely online neo-Nazis.

[...]

I’ve browsed /pol/ on an almost daily basis since the Christchurch shooting. It has not been difficult to find calls for violence. On Monday, March 25 of this year, I ran across evidence of anons translating the Christchurch shooter’s manifesto into other languages in an attempt to inspire more shootings across the globe.

This tactic can work, and today’s shooting is proof. The Poway Synagogue shooter directly cited the Christchurch shooter as his inspiration, saying he decided to carry out his attack roughly two weeks after that shooting. On /pol/, many anons refer to the Christchurch shooter, Brenton Tarrant, as “Saint Tarrant,” complete with medieval-inspired iconography.


Manifestos posted to 8chan are translated and reshared by other platform users (T0101: Create Localised Content, T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0084.004: Appropriate Content).

When I began looking through /pol/ right after the Poway Synagogue shooting, I came across several claims that the shootings had been a “false flag” aimed at making the message board look bad.

When Bellingcat tweeted out a warning about shitposting and the shooter’s manifesto, in the immediate wake of the attack, probable anons even commented on the Tweet in an attempt to deny that a channer had been behind the attack.

This is a recognizable pattern that occurs in the wake of any crimes committed by members of the board. While the initial response to the Christchurch shooter’s massacre thread was riotous glee, in the days after the shooting many anons began to claim the attack had been a false flag. This actually sparked significant division and debate between the members of /pol/. In the below image, a user mocks other anons for being unable to “believe something in your favor is real.” Another anon responds, “As the evidence comes out, its [sic] quite clear that this was a false flag.”

In his manifesto, the Poway Synagogue shooter even weighed in on this debate, accusing other anons who called the Christchurch and Tree of Life Synagogue shootings “false flags” of merely being scared: “They can’t fathom that there are brave White men alive who have the willpower and courage it takes to say, ‘Fuck my life—I’m willing to sacrifice everything for the benefit of my race.’”


Platform users deny that their platform has been used by mass shooters to publish their manifestos (T0129.006: Deny Involvement). | | [T0151.012 Image Board Platform](../../generated_pages/techniques/T0151.012.md) | On April 27, 2019, at around 11:30 a.m. local time, a young man with a semi-automatic rifle walked into the Chabad of Poway Synagogue in Poway, California. He opened fire, killing one worshipper and wounding three others. In the hours since the shooting, a manifesto, believed to be written by the shooter, began circulating online. Evidence has also surfaced that, like the Christchurch Mosque shooter, this killer began his rampage with a post on 8chan’s /pol/ board.

Although both of these attacks may seem different, since they targeted worshippers of different faiths, both shooters were united by the same fascist ideology. They were also both radicalized in the same place: 8chan’s /pol/ board.

This has been corroborated by posts on the board itself, where “anons,” as the posters call themselves, recirculated the shooter’s since-deleted post. In it, the alleged shooter claims to have been “lurking” on the site for a year and a half. He includes a link to a livestream of his rampage — which thankfully does not appear to have worked — and he also includes a pastebin link to his manifesto.

The very first response to his announcement was another anon cheering him on and telling him to “get the high score,” AKA, kill a huge number of people.


Before carrying out a mass shooting, the shooter posted a thread to 8chan’s /pol/ board. The post directed users to a variety of platforms (T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0122: Direct Users to Alternative Platforms): a Facebook account on which the shooter attempted to livestream the shooting (T0146: Account Asset, T0151.001: Social Media Platform); and a manifesto they had written, hosted on pastebin (T0146.006: Open Access Platform, T0152.005: Paste Platform, T0115: Post Content) and uploaded to the file-sharing platform Mediafire (T0152.010: File Hosting Platform, T0085.004: Develop Document).

The report looks deeper into 8chan’s /pol/ board:

8chan is a large website, which includes a number of different discussion groups about everything from anime to left-wing politics. /pol/ is one particularly active board on the website, and it is best described as a gathering place for extremely online neo-Nazis.

[...]

I’ve browsed /pol/ on an almost daily basis since the Christchurch shooting. It has not been difficult to find calls for violence. On Monday, March 25 of this year, I ran across evidence of anons translating the Christchurch shooter’s manifesto into other languages in an attempt to inspire more shootings across the globe.

This tactic can work, and today’s shooting is proof. The Poway Synagogue shooter directly cited the Christchurch shooter as his inspiration, saying he decided to carry out his attack roughly two weeks after that shooting. On /pol/, many anons refer to the Christchurch shooter, Brenton Tarrant, as “Saint Tarrant,” complete with medieval-inspired iconography.


Manifestos posted to 8chan are translated and reshared by other platform users (T0101: Create Localised Content, T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0084.004: Appropriate Content).

When I began looking through /pol/ right after the Poway Synagogue shooting, I came across several claims that the shootings had been a “false flag” aimed at making the message board look bad.

When Bellingcat tweeted out a warning about shitposting and the shooter’s manifesto, in the immediate wake of the attack, probable anons even commented on the Tweet in an attempt to deny that a channer had been behind the attack.

This is a recognizable pattern that occurs in the wake of any crimes committed by members of the board. While the initial response to the Christchurch shooter’s massacre thread was riotous glee, in the days after the shooting many anons began to claim the attack had been a false flag. This actually sparked significant division and debate between the members of /pol/. In the below image, a user mocks other anons for being unable to “believe something in your favor is real.” Another anon responds, “As the evidence comes out, its [sic] quite clear that this was a false flag.”

In his manifesto, the Poway Synagogue shooter even weighed in on this debate, accusing other anons who called the Christchurch and Tree of Life Synagogue shootings “false flags” of merely being scared: “They can’t fathom that there are brave White men alive who have the willpower and courage it takes to say, ‘Fuck my life—I’m willing to sacrifice everything for the benefit of my race.’”


Platform users deny that their platform has been used by mass shooters to publish their manifestos (T0129.006: Deny Involvement). | -| [T0152.005 Paste Platform](../../generated_pages/techniques/T0152.005.md) | On April 27, 2019, at around 11:30 a.m. local time, a young man with a semi-automatic rifle walked into the Chabad of Poway Synagogue in Poway, California. He opened fire, killing one worshipper and wounding three others. In the hours since the shooting, a manifesto, believed to be written by the shooter, began circulating online. Evidence has also surfaced that, like the Christchurch Mosque shooter, this killer began his rampage with a post on 8chan’s /pol/ board.

Although both of these attacks may seem different, since they targeted worshippers of different faiths, both shooters were united by the same fascist ideology. They were also both radicalized in the same place: 8chan’s /pol/ board.

This has been corroborated by posts on the board itself, where “anons,” as the posters call themselves, recirculated the shooter’s since-deleted post. In it, the alleged shooter claims to have been “lurking” on the site for a year and a half. He includes a link to a livestream of his rampage — which thankfully does not appear to have worked — and he also includes a pastebin link to his manifesto.

The very first response to his announcement was another anon cheering him on and telling him to “get the high score,” AKA, kill a huge number of people.


Before carrying out a mass shooting, the shooter posted a thread to 8chan’s /pol/ board. The post directed users to a variety of platforms (T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0122: Direct Users to Alternative Platforms): a Facebook account on which the shooter attempted to livestream the shooting (T0146: Account Asset, T0151.001: Social Media Platform); and a manifesto they had written, hosted on pastebin (T0146.006: Open Access Platform, T0152.005: Paste Platform, T0115: Post Content) and uploaded to the file-sharing platform Mediafire (T0152.010: File Hosting Platform, T0085.004: Develop Document).

The report looks deeper into 8chan’s /pol/ board:

8chan is a large website, which includes a number of different discussion groups about everything from anime to left-wing politics. /pol/ is one particularly active board on the website, and it is best described as a gathering place for extremely online neo-Nazis.

[...]

I’ve browsed /pol/ on an almost daily basis since the Christchurch shooting. It has not been difficult to find calls for violence. On Monday, March 25 of this year, I ran across evidence of anons translating the Christchurch shooter’s manifesto into other languages in an attempt to inspire more shootings across the globe.

This tactic can work, and today’s shooting is proof. The Poway Synagogue shooter directly cited the Christchurch shooter as his inspiration, saying he decided to carry out his attack roughly two weeks after that shooting. On /pol/, many anons refer to the Christchurch shooter, Brenton Tarrant, as “Saint Tarrant,” complete with medieval-inspired iconography.


Manifestos posted to 8chan are translated and reshared by other platform users (T0101: Create Localised Content, T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0084.004: Appropriate Content).

When I began looking through /pol/ right after the Poway Synagogue shooting, I came across several claims that the shootings had been a “false flag” aimed at making the message board look bad.

When Bellingcat tweeted out a warning about shitposting and the shooter’s manifesto, in the immediate wake of the attack, probable anons even commented on the Tweet in an attempt to deny that a channer had been behind the attack.

This is a recognizable pattern that occurs in the wake of any crimes committed by members of the board. While the initial response to the Christchurch shooter’s massacre thread was riotous glee, in the days after the shooting many anons began to claim the attack had been a false flag. This actually sparked significant division and debate between the members of /pol/. In the below image, a user mocks other anons for being unable to “believe something in your favor is real.” Another anon responds, “As the evidence comes out, its [sic] quite clear that this was a false flag.”

In his manifesto, the Poway Synagogue shooter even weighed in on this debate, accusing other anons who called the Christchurch and Tree of Life Synagogue shootings “false flags” of merely being scared: “They can’t fathom that there are brave White men alive who have the willpower and courage it takes to say, ‘Fuck my life—I’m willing to sacrifice everything for the benefit of my race.’”


Platform users deny that their platform has been used by mass shooters to publish their manifestos (T0129.006: Deny Involvement). | -| [T0152.010 File Hosting Platform](../../generated_pages/techniques/T0152.010.md) | On April 27, 2019, at around 11:30 a.m. local time, a young man with a semi-automatic rifle walked into the Chabad of Poway Synagogue in Poway, California. He opened fire, killing one worshipper and wounding three others. In the hours since the shooting, a manifesto, believed to be written by the shooter, began circulating online. Evidence has also surfaced that, like the Christchurch Mosque shooter, this killer began his rampage with a post on 8chan’s /pol/ board.

Although both of these attacks may seem different, since they targeted worshippers of different faiths, both shooters were united by the same fascist ideology. They were also both radicalized in the same place: 8chan’s /pol/ board.

This has been corroborated by posts on the board itself, where “anons,” as the posters call themselves, recirculated the shooter’s since-deleted post. In it, the alleged shooter claims to have been “lurking” on the site for a year and a half. He includes a link to a livestream of his rampage — which thankfully does not appear to have worked — and he also includes a pastebin link to his manifesto.

The very first response to his announcement was another anon cheering him on and telling him to “get the high score,” AKA, kill a huge number of people.


Before carrying out a mass shooting, the shooter posted a thread to 8chan’s /pol/ board. The post directed users to a variety of platforms (T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0122: Direct Users to Alternative Platforms): a Facebook account on which the shooter attempted to livestream the shooting (T0146: Account Asset, T0151.001: Social Media Platform); and a manifesto they had written, hosted on pastebin (T0146.006: Open Access Platform, T0152.005: Paste Platform, T0115: Post Content) and uploaded to the file-sharing platform Mediafire (T0152.010: File Hosting Platform, T0085.004: Develop Document).

The report looks deeper into 8chan’s /pol/ board:

8chan is a large website, which includes a number of different discussion groups about everything from anime to left-wing politics. /pol/ is one particularly active board on the website, and it is best described as a gathering place for extremely online neo-Nazis.

[...]

I’ve browsed /pol/ on an almost daily basis since the Christchurch shooting. It has not been difficult to find calls for violence. On Monday, March 25 of this year, I ran across evidence of anons translating the Christchurch shooter’s manifesto into other languages in an attempt to inspire more shootings across the globe.

This tactic can work, and today’s shooting is proof. The Poway Synagogue shooter directly cited the Christchurch shooter as his inspiration, saying he decided to carry out his attack roughly two weeks after that shooting. On /pol/, many anons refer to the Christchurch shooter, Brenton Tarrant, as “Saint Tarrant,” complete with medieval-inspired iconography.


Manifestos posted to 8chan are translated and reshared by other platform users (T0101: Create Localised Content, T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0084.004: Appropriate Content).

When I began looking through /pol/ right after the Poway Synagogue shooting, I came across several claims that the shootings had been a “false flag” aimed at making the message board look bad.

When Bellingcat tweeted out a warning about shitposting and the shooter’s manifesto, in the immediate wake of the attack, probable anons even commented on the Tweet in an attempt to deny that a channer had been behind the attack.

This is a recognizable pattern that occurs in the wake of any crimes committed by members of the board. While the initial response to the Christchurch shooter’s massacre thread was riotous glee, in the days after the shooting many anons began to claim the attack had been a false flag. This actually sparked significant division and debate between the members of /pol/. In the below image, a user mocks other anons for being unable to “believe something in your favor is real.” Another anon responds, “As the evidence comes out, its [sic] quite clear that this was a false flag.”

In his manifesto, the Poway Synagogue shooter even weighed in on this debate, accusing other anons who called the Christchurch and Tree of Life Synagogue shootings “false flags” of merely being scared: “They can’t fathom that there are brave White men alive who have the willpower and courage it takes to say, ‘Fuck my life—I’m willing to sacrifice everything for the benefit of my race.’”


Platform users deny that their platform has been used by mass shooters to publish their manifestos (T0129.006: Deny Involvement). | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/incidents/I00103.md b/generated_pages/incidents/I00103.md index 0eb4e30..5474ed8 100644 --- a/generated_pages/incidents/I00103.md +++ b/generated_pages/incidents/I00103.md @@ -22,9 +22,7 @@ | Technique | Description given for this incident | | --------- | ------------------------- | | [T0088.001 Develop AI-Generated Audio (Deepfakes)](../../generated_pages/techniques/T0088.001.md) | “I seriously don't understand why I have to constantly put up with these dumbasses here every day.”

So began what appeared to be a long tirade from the principal of Pikesville High School, punctuated with racist, antisemitic and offensive tropes. It sounded like it had been secretly recorded.

The speaker went on to bemoan “ungrateful black kids” and Jewish people in the community.

The clip, first posted in [January 2024], went viral nationally. But it really struck a nerve in the peaceful, leafy suburb of Pikesville, which has large black and Jewish populations, and in the nearby city of Baltimore, Maryland. Principal Eric Eiswert was put on paid administrative leave pending an investigation.

[...]

But what those sharing the clip didn’t realise at the time was that another bombshell was about to drop: the clip was an AI-generated fake.

[...]

[In April 2024], Baltimore Police Chief Robert McCullough confirmed they now had “conclusive evidence that the recording was not authentic”.

And they believed they knew who made the fake.

Police charged 31-year-old Dazhon Darien, the school’s athletics director, with several counts related to the fake video. Charges included theft, retaliating against a witness and stalking.

He was arrested at the airport, where police say he was planning to fly to Houston, Texas.

Police say that Mr Darien had been under investigation by Principal Eiswert over an alleged theft of $1,916 (£1,460) from the school. They also allege there had been “work performance challenges” and his contract was likely not to be renewed.

Their theory was that by creating the deepfake recording, he hoped to discredit the principal before he could be fired.

Investigators say they traced an email used to send the original video to a server connected to Mr Darien, and allege that he used Baltimore County Public Schools' computer network to access AI tools. He is due to stand trial in December 2024.


By associating Mr Darien with the server used to email the original AI-generated audio, investigators link Darien to the fabricated content (T0149.005: Server Asset, T0088.001: AI Generated Audio (Deepfakes)). They also assert that Darien used computers owned by the school to access platforms used to generate the audio (T0146: Account Asset, T0154.002: AI Media Platform). | -| [T0146 Account Asset](../../generated_pages/techniques/T0146.md) | “I seriously don't understand why I have to constantly put up with these dumbasses here every day.”

So began what appeared to be a long tirade from the principal of Pikesville High School, punctuated with racist, antisemitic and offensive tropes. It sounded like it had been secretly recorded.

The speaker went on to bemoan “ungrateful black kids” and Jewish people in the community.

The clip, first posted in [January 2024], went viral nationally. But it really struck a nerve in the peaceful, leafy suburb of Pikesville, which has large black and Jewish populations, and in the nearby city of Baltimore, Maryland. Principal Eric Eiswert was put on paid administrative leave pending an investigation.

[...]

But what those sharing the clip didn’t realise at the time was that another bombshell was about to drop: the clip was an AI-generated fake.

[...]

[In April 2024], Baltimore Police Chief Robert McCullough confirmed they now had “conclusive evidence that the recording was not authentic”.

And they believed they knew who made the fake.

Police charged 31-year-old Dazhon Darien, the school’s athletics director, with several counts related to the fake video. Charges included theft, retaliating against a witness and stalking.

He was arrested at the airport, where police say he was planning to fly to Houston, Texas.

Police say that Mr Darien had been under investigation by Principal Eiswert over an alleged theft of $1,916 (£1,460) from the school. They also allege there had been “work performance challenges” and his contract was likely not to be renewed.

Their theory was that by creating the deepfake recording, he hoped to discredit the principal before he could be fired.

Investigators say they traced an email used to send the original video to a server connected to Mr Darien, and allege that he used Baltimore County Public Schools' computer network to access AI tools. He is due to stand trial in December 2024.


By associating Mr Darien with the server used to email the original AI-generated audio, investigators link Darien to the fabricated content (T0149.005: Server Asset, T0088.001: AI Generated Audio (Deepfakes)). They also assert that Darien used computers owned by the school to access platforms used to generate the audio (T0146: Account Asset, T0154.002: AI Media Platform). | | [T0149.005 Server Asset](../../generated_pages/techniques/T0149.005.md) | “I seriously don't understand why I have to constantly put up with these dumbasses here every day.”

So began what appeared to be a long tirade from the principal of Pikesville High School, punctuated with racist, antisemitic and offensive tropes. It sounded like it had been secretly recorded.

The speaker went on to bemoan “ungrateful black kids” and Jewish people in the community.

The clip, first posted in [January 2024], went viral nationally. But it really struck a nerve in the peaceful, leafy suburb of Pikesville, which has large black and Jewish populations, and in the nearby city of Baltimore, Maryland. Principal Eric Eiswert was put on paid administrative leave pending an investigation.

[...]

But what those sharing the clip didn’t realise at the time was that another bombshell was about to drop: the clip was an AI-generated fake.

[...]

[In April 2024], Baltimore Police Chief Robert McCullough confirmed they now had “conclusive evidence that the recording was not authentic”.

And they believed they knew who made the fake.

Police charged 31-year-old Dazhon Darien, the school’s athletics director, with several counts related to the fake video. Charges included theft, retaliating against a witness and stalking.

He was arrested at the airport, where police say he was planning to fly to Houston, Texas.

Police say that Mr Darien had been under investigation by Principal Eiswert over an alleged theft of $1,916 (£1,460) from the school. They also allege there had been “work performance challenges” and his contract was likely not to be renewed.

Their theory was that by creating the deepfake recording, he hoped to discredit the principal before he could be fired.

Investigators say they traced an email used to send the original video to a server connected to Mr Darien, and allege that he used Baltimore County Public Schools' computer network to access AI tools. He is due to stand trial in December 2024.


By associating Mr Darien with the server used to email the original AI-generated audio, investigators link Darien to the fabricated content (T0149.005: Server Asset, T0088.001: AI Generated Audio (Deepfakes)). They also assert that Darien used computers owned by the school to access platforms used to generate the audio (T0146: Account Asset, T0154.002: AI Media Platform). | | [T0154.002 AI Media Platform](../../generated_pages/techniques/T0154.002.md) | “I seriously don't understand why I have to constantly put up with these dumbasses here every day.”

So began what appeared to be a long tirade from the principal of Pikesville High School, punctuated with racist, antisemitic and offensive tropes. It sounded like it had been secretly recorded.

The speaker went on to bemoan “ungrateful black kids” and Jewish people in the community.

The clip, first posted in [January 2024], went viral nationally. But it really struck a nerve in the peaceful, leafy suburb of Pikesville, which has large black and Jewish populations, and in the nearby city of Baltimore, Maryland. Principal Eric Eiswert was put on paid administrative leave pending an investigation.

[...]

But what those sharing the clip didn’t realise at the time was that another bombshell was about to drop: the clip was an AI-generated fake.

[...]

[In April 2024], Baltimore Police Chief Robert McCullough confirmed they now had “conclusive evidence that the recording was not authentic”.

And they believed they knew who made the fake.

Police charged 31-year-old Dazhon Darien, the school’s athletics director, with several counts related to the fake video. Charges included theft, retaliating against a witness and stalking.

He was arrested at the airport, where police say he was planning to fly to Houston, Texas.

Police say that Mr Darien had been under investigation by Principal Eiswert over an alleged theft of $1,916 (£1,460) from the school. They also allege there had been “work performance challenges” and his contract was likely not to be renewed.

Their theory was that by creating the deepfake recording, he hoped to discredit the principal before he could be fired.

Investigators say they traced an email used to send the original video to a server connected to Mr Darien, and allege that he used Baltimore County Public Schools' computer network to access AI tools. He is due to stand trial in December 2024.


By associating Mr Darien with the server used to email the original AI-generated audio, investigators link Darien to the fabricated content (T0149.005: Server Asset, T0088.001: AI Generated Audio (Deepfakes)). They also assert that Darien used computers owned by the school to access platforms used to generate the audio (T0146: Account Asset, T0154.002: AI Media Platform). | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/incidents/I00104.md index b0edba4..ef7ecf9 100644 --- a/generated_pages/incidents/I00104.md +++ b/generated_pages/incidents/I00104.md @@ -21,11 +21,8 @@ | Technique | Description given for this incident | | --------- | ------------------------- | -| [T0115 Post Content](../../generated_pages/techniques/T0115.md) | A massive trove of documents purporting to contain thousands of emails and other files from the [2017 presidential] campaign of Emmanuel Macron—the French centrist candidate squaring off against right-wing nationalist Marine Le Pen—was posted on the internet Friday afternoon. The Macron campaign says that at least some of the documents are fake. The document dump came just over a day before voting is set to begin in the final round of the election and mere hours before candidates are legally required to stop campaigning.

At about 2:35 p.m. ET, a post appeared on the 4chan online message board announcing the leak. The documents appear to include emails, internal memos, and screenshots of purported banking records.

“In this pastebin are links to torrents of emails between Macron, his team and other officials, politicians as well as original documents and photos,” the anonymous 4chan poster wrote. “This was passed on to me today so now I am giving it to you, the people. The leak is massvie [sic] and released in the hopes that the human search engine here will be able to start sifting through the contents and figure out exactly what we have here.”

The Macron campaign issued a statement Friday night saying it was the victim of a “massive and coordinated” hacking attack. That campaign said the leak included some fake documents that were intended “to sow doubt and misinformation.”


Actors posted to 4chan a link (T0151.012: Image Board Platform, T0146.006: Open Access Platform, T0115: Post Content, T0122: Direct Users to Alternative Platforms) to text content hosted on pastebin (T0152.005: Paste Platform, T0146.006: Open Access Platform, T0115: Post Content), which contained links to download stolen and fabricated documents. | -| [T0122 Direct Users to Alternative Platforms](../../generated_pages/techniques/T0122.md) | A massive trove of documents purporting to contain thousands of emails and other files from the [2017 presidential] campaign of Emmanuel Macron—the French centrist candidate squaring off against right-wing nationalist Marine Le Pen—was posted on the internet Friday afternoon. The Macron campaign says that at least some of the documents are fake. The document dump came just over a day before voting is set to begin in the final round of the election and mere hours before candidates are legally required to stop campaigning.

At about 2:35 p.m. ET, a post appeared on the 4chan online message board announcing the leak. The documents appear to include emails, internal memos, and screenshots of purported banking records.

“In this pastebin are links to torrents of emails between Macron, his team and other officials, politicians as well as original documents and photos,” the anonymous 4chan poster wrote. “This was passed on to me today so now I am giving it to you, the people. The leak is massvie [sic] and released in the hopes that the human search engine here will be able to start sifting through the contents and figure out exactly what we have here.”

The Macron campaign issued a statement Friday night saying it was the victim of a “massive and coordinated” hacking attack. That campaign said the leak included some fake documents that were intended “to sow doubt and misinformation.”


Actors posted to 4chan a link (T0151.012: Image Board Platform, T0146.006: Open Access Platform, T0115: Post Content, T0122: Direct Users to Alternative Platforms) to text content hosted on pastebin (T0152.005: Paste Platform, T0146.006: Open Access Platform, T0115: Post Content), which contained links to download stolen and fabricated documents. | | [T0146.007 Automated Account Asset](../../generated_pages/techniques/T0146.007.md) | A massive trove of documents purporting to contain thousands of emails and other files from the [2017 presidential] campaign of Emmanuel Macron—the French centrist candidate squaring off against right-wing nationalist Marine Le Pen—was posted on the internet Friday afternoon. The Macron campaign says that at least some of the documents are fake. The document dump came just over a day before voting is set to begin in the final round of the election and mere hours before candidates are legally required to stop campaigning.

At about 2:35 p.m. ET, a post appeared on the 4chan online message board announcing the leak. The documents appear to include emails, internal memos, and screenshots of purported banking records.

“In this pastebin are links to torrents of emails between Macron, his team and other officials, politicians as well as original documents and photos,” the anonymous 4chan poster wrote. “This was passed on to me today so now I am giving it to you, the people. The leak is massvie [sic] and released in the hopes that the human search engine here will be able to start sifting through the contents and figure out exactly what we have here.”

The Macron campaign issued a statement Friday night saying it was the victim of a “massive and coordinated” hacking attack. That campaign said the leak included some fake documents that were intended “to sow doubt and misinformation.”


Actors posted to 4chan a link (T0151.012: Image Board Platform, T0146.006: Open Access Platform, T0115: Post Content, T0122: Direct Users to Alternative Platforms) to text content hosted on pastebin (T0152.005: Paste Platform, T0146.006: Open Access Platform, T0115: Post Content), which contained links to download stolen and fabricated documents. | | [T0151.012 Image Board Platform](../../generated_pages/techniques/T0151.012.md) | A massive trove of documents purporting to contain thousands of emails and other files from the [2017 presidential] campaign of Emmanuel Macron—the French centrist candidate squaring off against right-wing nationalist Marine Le Pen—was posted on the internet Friday afternoon. The Macron campaign says that at least some of the documents are fake. The document dump came just over a day before voting is set to begin in the final round of the election and mere hours before candidates are legally required to stop campaigning.

At about 2:35 p.m. ET, a post appeared on the 4chan online message board announcing the leak. The documents appear to include emails, internal memos, and screenshots of purported banking records.

“In this pastebin are links to torrents of emails between Macron, his team and other officials, politicians as well as original documents and photos,” the anonymous 4chan poster wrote. “This was passed on to me today so now I am giving it to you, the people. The leak is massvie [sic] and released in the hopes that the human search engine here will be able to start sifting through the contents and figure out exactly what we have here.”

The Macron campaign issued a statement Friday night saying it was the victim of a “massive and coordinated” hacking attack. That campaign said the leak included some fake documents that were intended “to sow doubt and misinformation.”


Actors posted to 4chan a link (T0151.012: Image Board Platform, T0146.006: Open Access Platform, T0115: Post Content, T0122: Direct Users to Alternative Platforms) to text content hosted on pastebin (T0152.005: Paste Platform, T0146.006: Open Access Platform, T0115: Post Content), which contained links to download stolen and fabricated documents. | | [T0152.005 Paste Platform](../../generated_pages/techniques/T0152.005.md) | A massive trove of documents purporting to contain thousands of emails and other files from the [2017 presidential] campaign of Emmanuel Macron—the French centrist candidate squaring off against right-wing nationalist Marine Le Pen—was posted on the internet Friday afternoon. The Macron campaign says that at least some of the documents are fake. The document dump came just over a day before voting is set to begin in the final round of the election and mere hours before candidates are legally required to stop campaigning.

At about 2:35 p.m. ET, a post appeared on the 4chan online message board announcing the leak. The documents appear to include emails, internal memos, and screenshots of purported banking records.

“In this pastebin are links to torrents of emails between Macron, his team and other officials, politicians as well as original documents and photos,” the anonymous 4chan poster wrote. “This was passed on to me today so now I am giving it to you, the people. The leak is massvie [sic] and released in the hopes that the human search engine here will be able to start sifting through the contents and figure out exactly what we have here.”

The Macron campaign issued a statement Friday night saying it was the victim of a “massive and coordinated” hacking attack. That campaign said the leak included some fake documents that were intended “to sow doubt and misinformation.”


Actors posted to 4chan a link (T0151.012: Image Board Platform, T0146.006: Open Access Platform, T0115: Post Content, T0122: Direct Users to Alternative Platforms) to text content hosted on pastebin (T0152.005: Paste Platform, T0146.006: Open Access Platform, T0115: Post Content), which contained links to download stolen and fabricated documents. | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/incidents/I00106.md index ba5830a..bf6d5a1 100644 --- a/generated_pages/incidents/I00106.md +++ b/generated_pages/incidents/I00106.md @@ -21,12 +21,8 @@ | Technique | Description given for this incident | | --------- | ------------------------- | -| [T0068 Respond to Breaking News Event or Active Crisis](../../generated_pages/techniques/T0068.md) | As families desperately seek to find missing loved ones and communities grapple with immeasurable losses of both life and property in the wake of [2024’s] Hurricane Helene, AI slop scammers appear to be capitalizing on the moment for personal gain.

A Facebook account called "Coastal Views" usually shares calmer AI imagery of nature-filled beachside scenes. The account's banner image showcases a signpost reading "OBX Live," OBX being shorthand for North Carolina's Outer Banks islands.

But starting this weekend, the account shifted its approach dramatically, as first flagged by a social media user on X.

Instead of posting "photos" of leaping dolphins and sandy beaches, the account suddenly started publishing images of flooded mountain neighborhoods, submerged houses, and dogs sitting on top of roofs.

But instead of spreading vital information to those affected by the natural disaster, or at the very least sharing real photos of the destruction, the account is seemingly trying to use AI to cash in on all the attention the hurricane has been getting.

The account links to an Etsy page for a business called "OuterBanks2023," where somebody who goes by "Alexandr" sells AI-generated prints of horses touching snouts with sea turtles, Santa running down the shoreline with a reindeer, and sunsets over ocean waves.


A Facebook page which presented itself as being associated with North Carolina, and which had posted AI-generated images, changed to posting AI-generated images of hurricane damage after Hurricane Helene hit North Carolina (T0151.003: Online Community Page, T0151.001: Social Media Platform, T0115: Post Content, T0086.002: Develop AI-Generated Images (Deepfakes), T0068: Respond to Breaking News Event or Active Crisis).

The account included links (T0122: Direct Users to Alternative Platforms) to an account on Etsy, which sold prints of AI-generated images (T0146: Account Asset, T0148.007: eCommerce Platform). | | [T0086.002 Develop AI-Generated Images (Deepfakes)](../../generated_pages/techniques/T0086.002.md) | As families desperately seek to find missing loved ones and communities grapple with immeasurable losses of both life and property in the wake of [2024’s] Hurricane Helene, AI slop scammers appear to be capitalizing on the moment for personal gain.

A Facebook account called "Coastal Views" usually shares calmer AI imagery of nature-filled beachside scenes. The account's banner image showcases a signpost reading "OBX Live," OBX being shorthand for North Carolina's Outer Banks islands.

But starting this weekend, the account shifted its approach dramatically, as first flagged by a social media user on X.

Instead of posting "photos" of leaping dolphins and sandy beaches, the account suddenly started publishing images of flooded mountain neighborhoods, submerged houses, and dogs sitting on top of roofs.

But instead of spreading vital information to those affected by the natural disaster, or at the very least sharing real photos of the destruction, the account is seemingly trying to use AI to cash in on all the attention the hurricane has been getting.

The account links to an Etsy page for a business called "OuterBanks2023," where somebody who goes by "Alexandr" sells AI-generated prints of horses touching snouts with sea turtles, Santa running down the shoreline with a reindeer, and sunsets over ocean waves.


A Facebook page which presented itself as being associated with North Carolina, and which had posted AI-generated images, changed to posting AI-generated images of hurricane damage after Hurricane Helene hit North Carolina (T0151.003: Online Community Page, T0151.001: Social Media Platform, T0115: Post Content, T0086.002: Develop AI-Generated Images (Deepfakes), T0068: Respond to Breaking News Event or Active Crisis).

The account included links (T0122: Direct Users to Alternative Platforms) to an account on Etsy, which sold prints of AI-generated images (T0146: Account Asset, T0148.007: eCommerce Platform). | -| [T0122 Direct Users to Alternative Platforms](../../generated_pages/techniques/T0122.md) | As families desperately seek to find missing loved ones and communities grapple with immeasurable losses of both life and property in the wake of [2024’s] Hurricane Helene, AI slop scammers appear to be capitalizing on the moment for personal gain.

A Facebook account called "Coastal Views" usually shares calmer AI imagery of nature-filled beachside scenes. The account's banner image showcases a signpost reading "OBX Live," OBX being shorthand for North Carolina's Outer Banks islands.

But starting this weekend, the account shifted its approach dramatically, as first flagged by a social media user on X.

Instead of posting "photos" of leaping dolphins and sandy beaches, the account suddenly started publishing images of flooded mountain neighborhoods, submerged houses, and dogs sitting on top of roofs.

But instead of spreading vital information to those affected by the natural disaster, or at the very least sharing real photos of the destruction, the account is seemingly trying to use AI to cash in on all the attention the hurricane has been getting.

The account links to an Etsy page for a business called "OuterBanks2023," where somebody who goes by "Alexandr" sells AI-generated prints of horses touching snouts with sea turtles, Santa running down the shoreline with a reindeer, and sunsets over ocean waves.


A Facebook page which presented itself as being associated with North Carolina, and which had posted AI-generated images, changed to posting AI-generated images of hurricane damage after Hurricane Helene hit North Carolina (T0151.003: Online Community Page, T0151.001: Social Media Platform, T0115: Post Content, T0086.002: Develop AI-Generated Images (Deepfakes), T0068: Respond to Breaking News Event or Active Crisis).

The account included links (T0122: Direct Users to Alternative Platforms) to an account on Etsy, which sold prints of AI-generated images (T0146: Account Asset, T0148.007: eCommerce Platform). | -| [T0148.007 eCommerce Platform](../../generated_pages/techniques/T0148.007.md) | As families desperately seek to find missing loved ones and communities grapple with immeasurable losses of both life and property in the wake of [2024’s] Hurricane Helene, AI slop scammers appear to be capitalizing on the moment for personal gain.

A Facebook account called "Coastal Views" usually shares calmer AI imagery of nature-filled beachside scenes. The account's banner image showcases a signpost reading "OBX Live," OBX being shorthand for North Carolina's Outer Banks islands.

But starting this weekend, the account shifted its approach dramatically, as first flagged by a social media user on X.

Instead of posting "photos" of leaping dolphins and sandy beaches, the account suddenly started publishing images of flooded mountain neighborhoods, submerged houses, and dogs sitting on top of roofs.

But instead of spreading vital information to those affected by the natural disaster, or at the very least sharing real photos of the destruction, the account is seemingly trying to use AI to cash in on all the attention the hurricane has been getting.

The account links to an Etsy page for a business called "OuterBanks2023," where somebody who goes by "Alexandr" sells AI-generated prints of horses touching snouts with sea turtles, Santa running down the shoreline with a reindeer, and sunsets over ocean waves.


A Facebook page which presented itself as being associated with North Carolina, and which had posted AI-generated images, changed to posting AI-generated images of hurricane damage after Hurricane Helene hit North Carolina (T0151.003: Online Community Page, T0151.001: Social Media Platform, T0115: Post Content, T0086.002: Develop AI-Generated Images (Deepfakes), T0068: Respond to Breaking News Event or Active Crisis).

The account included links (T0122: Direct Users to Alternative Platforms) to an account on Etsy, which sold prints of AI-generated images (T0146: Account Asset, T0148.007: eCommerce Platform). | | [T0151.001 Social Media Platform](../../generated_pages/techniques/T0151.001.md) | As families desperately seek to find missing loved ones and communities grapple with immeasurable losses of both life and property in the wake of [2024’s] Hurricane Helene, AI slop scammers appear to be capitalizing on the moment for personal gain.

A Facebook account called "Coastal Views" usually shares calmer AI imagery of nature-filled beachside scenes. The account's banner image showcases a signpost reading "OBX Live," OBX being shorthand for North Carolina's Outer Banks islands.

But starting this weekend, the account shifted its approach dramatically, as first flagged by a social media user on X.

Instead of posting "photos" of leaping dolphins and sandy beaches, the account suddenly started publishing images of flooded mountain neighborhoods, submerged houses, and dogs sitting on top of roofs.

But instead of spreading vital information to those affected by the natural disaster, or at the very least sharing real photos of the destruction, the account is seemingly trying to use AI to cash in on all the attention the hurricane has been getting.

The account links to an Etsy page for a business called "OuterBanks2023," where somebody who goes by "Alexandr" sells AI-generated prints of horses touching snouts with sea turtles, Santa running down the shoreline with a reindeer, and sunsets over ocean waves.


A Facebook page which presented itself as being associated with North Carolina, and which had posted AI-generated images, changed to posting AI-generated images of hurricane damage after Hurricane Helene hit North Carolina (T0151.003: Online Community Page, T0151.001: Social Media Platform, T0115: Post Content, T0086.002: Develop AI-Generated Images (Deepfakes), T0068: Respond to Breaking News Event or Active Crisis).

The account included links (T0122: Direct Users to Alternative Platforms) to an account on Etsy, which sold prints of AI-generated images (T0146: Account Asset, T0148.007: eCommerce Platform). | | [T0151.003 Online Community Page](../../generated_pages/techniques/T0151.003.md) | As families desperately seek to find missing loved ones and communities grapple with immeasurable losses of both life and property in the wake of [2024’s] Hurricane Helene, AI slop scammers appear to be capitalizing on the moment for personal gain.

A Facebook account called "Coastal Views" usually shares calmer AI imagery of nature-filled beachside scenes. The account's banner image showcases a signpost reading "OBX Live," OBX being shorthand for North Carolina's Outer Banks islands.

But starting this weekend, the account shifted its approach dramatically, as first flagged by a social media user on X.

Instead of posting "photos" of leaping dolphins and sandy beaches, the account suddenly started publishing images of flooded mountain neighborhoods, submerged houses, and dogs sitting on top of roofs.

But instead of spreading vital information to those affected by the natural disaster, or at the very least sharing real photos of the destruction, the account is seemingly trying to use AI to cash in on all the attention the hurricane has been getting.

The account links to an Etsy page for a business called "OuterBanks2023," where somebody who goes by "Alexandr" sells AI-generated prints of horses touching snouts with sea turtles, Santa running down the shoreline with a reindeer, and sunsets over ocean waves.


A Facebook page which presented itself as being associated with North Carolina, and which had previously posted AI-generated images of beachside scenes, changed to posting AI-generated images of hurricane damage after Hurricane Helene hit North Carolina (T0151.003: Online Community Page, T0151.001: Social Media Platform, T0115: Post Content, T0086.002: Develop AI-Generated Images (Deepfakes), T0068: Respond to Breaking News Event or Active Crisis).

The account included links (T0122: Direct Users to Alternative Platforms) to an account on Etsy, which sold prints of AI-generated images (T0146: Account Asset, T0148.007: eCommerce Platform). |
-DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW
\ No newline at end of file
diff --git a/generated_pages/incidents/I00107.md b/generated_pages/incidents/I00107.md
index d008c9d..8abca0b 100644
--- a/generated_pages/incidents/I00107.md
+++ b/generated_pages/incidents/I00107.md
@@ -21,10 +21,8 @@
| Technique | Description given for this incident |
| --------- | ------------------------- |
-| [T0097.202 News Outlet Persona](../../generated_pages/techniques/T0097.202.md) | The Moscow firm Social Design Agency (SDA) has been attributed as being behind a Russian disinformation project known as Doppelganger:

The SDA’s deception work first surfaced in 2022, likely almost immediately after Doppelganger got off the ground. In April of that year, Meta, the parent company of Facebook and Instagram, disclosed in a quarterly report that it had removed from its platforms “a network of about 200 accounts operated from Russia.” By August 2022, German investigative journalists revealed that they had discovered forgeries of about 30 news sites, including many of the country’s biggest media outlets—Frankfurter Allgemeine, Der Spiegel, and Bild—but also Britain’s Daily Mail and France’s 20 Minutes. The sites had deceptive URLs such as www-dailymail-co-uk.dailymail.top.
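The hyphen-for-dot pattern in URLs like www-dailymail-co-uk.dailymail.top lends itself to simple automated screening. A minimal sketch in Python, assuming a curated list of legitimate outlet domains to test candidate hostnames against (the list and function below are illustrative, not drawn from the report):

```python
import re

# Illustrative list of legitimate outlet domains that the lookalikes imitated.
KNOWN_BRANDS = ["dailymail.co.uk", "spiegel.de", "faz.net", "20minutes.fr"]

def imitated_brand(hostname: str):
    """Return the brand a hostname appears to imitate, if one of its labels
    embeds a known domain with the dots rewritten as hyphens; otherwise None."""
    for brand in KNOWN_BRANDS:
        flattened = brand.replace(".", "-")  # "dailymail.co.uk" -> "dailymail-co-uk"
        if re.search(rf"(^|[.-]){re.escape(flattened)}([.-]|$)", hostname):
            return brand
    return None

print(imitated_brand("www-dailymail-co-uk.dailymail.top"))  # dailymail.co.uk
print(imitated_brand("www.dailymail.co.uk"))                # None
```

A heuristic like this only flags candidates for human review; since operators rotate registered domains, it would need to run over newly observed hostnames (for example from certificate transparency logs) rather than a fixed list.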


As part of the SDA’s work, they created many websites which impersonated existing media outlets. Sites used domain impersonation tactics to increase perceived legitimacy of their impersonations (T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0152.003: Website Hosting Platform, T0149.003: Lookalike Domain). | | [T0143.003 Impersonated Persona](../../generated_pages/techniques/T0143.003.md) | The Moscow firm Social Design Agency (SDA) has been attributed as being behind a Russian disinformation project known as Doppelganger:

The SDA’s deception work first surfaced in 2022, likely almost immediately after Doppelganger got off the ground. In April of that year, Meta, the parent company of Facebook and Instagram, disclosed in a quarterly report that it had removed from its platforms “a network of about 200 accounts operated from Russia.” By August 2022, German investigative journalists revealed that they had discovered forgeries of about 30 news sites, including many of the country’s biggest media outlets—Frankfurter Allgemeine, Der Spiegel, and Bild—but also Britain’s Daily Mail and France’s 20 Minutes. The sites had deceptive URLs such as www-dailymail-co-uk.dailymail.top.


As part of the SDA’s work, they created many websites which impersonated existing media outlets. Sites used domain impersonation tactics to increase perceived legitimacy of their impersonations (T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0152.003: Website Hosting Platform, T0149.003: Lookalike Domain). | | [T0149.003 Lookalike Domain](../../generated_pages/techniques/T0149.003.md) | The Moscow firm Social Design Agency (SDA) has been attributed as being behind a Russian disinformation project known as Doppelganger:

The SDA’s deception work first surfaced in 2022, likely almost immediately after Doppelganger got off the ground. In April of that year, Meta, the parent company of Facebook and Instagram, disclosed in a quarterly report that it had removed from its platforms “a network of about 200 accounts operated from Russia.” By August 2022, German investigative journalists revealed that they had discovered forgeries of about 30 news sites, including many of the country’s biggest media outlets—Frankfurter Allgemeine, Der Spiegel, and Bild—but also Britain’s Daily Mail and France’s 20 Minutes. The sites had deceptive URLs such as www-dailymail-co-uk.dailymail.top.


As part of the SDA’s work, they created many websites which impersonated existing media outlets. Sites used domain impersonation tactics to increase perceived legitimacy of their impersonations (T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0152.003: Website Hosting Platform, T0149.003: Lookalike Domain). | | [T0152.003 Website Hosting Platform](../../generated_pages/techniques/T0152.003.md) | The Moscow firm Social Design Agency (SDA) has been attributed as being behind a Russian disinformation project known as Doppelganger:

The SDA’s deception work first surfaced in 2022, likely almost immediately after Doppelganger got off the ground. In April of that year, Meta, the parent company of Facebook and Instagram, disclosed in a quarterly report that it had removed from its platforms “a network of about 200 accounts operated from Russia.” By August 2022, German investigative journalists revealed that they had discovered forgeries of about 30 news sites, including many of the country’s biggest media outlets—Frankfurter Allgemeine, Der Spiegel, and Bild—but also Britain’s Daily Mail and France’s 20 Minutes. The sites had deceptive URLs such as www-dailymail-co-uk.dailymail.top.


As part of the SDA’s work, they created many websites which impersonated existing media outlets. Sites used domain impersonation tactics to increase perceived legitimacy of their impersonations (T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0152.003: Website Hosting Platform, T0149.003: Lookalike Domain). |
-DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW
\ No newline at end of file
diff --git a/generated_pages/incidents/I00108.md b/generated_pages/incidents/I00108.md
index e29de39..8c89204 100644
--- a/generated_pages/incidents/I00108.md
+++ b/generated_pages/incidents/I00108.md
@@ -22,13 +22,7 @@
| Technique | Description given for this incident |
| --------- | ------------------------- |
| [T0092 Build Network](../../generated_pages/techniques/T0092.md) | This article examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology:

Suavelos uses Facebook and other platforms to amplify its message. In order to bypass the platforms’ community standards and keep their public pages active, Facebook pages such as “I support the police” are a good vehicle to spread a specific agenda without claiming to be racist. In looking back at this Facebook page, we followed Facebook’s algorithm for related pages and found suggested Facebook pages

[...]

This amplification strategy on Facebook is successful, as according to SimilarWeb figures, it attracts around 111,000 visits every month on the Suavelos.eu website.

[...]

Revenue through online advertisements can be achieved by different platforms through targeted advertisements, like Google Adsense or Doubleclick, or related and similar sponsored content, such as Taboola. Accordingly, Suavelos.eu uses both of these websites to display advertisements and consequently receives funding from such advertisements.

Once visitors are on the website supporting its advertisement revenue, Suavelos’ goal is to then turn these visitors into regular members of Suavelos network through donations or fees, or have them continue to support Suavelos.
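The related-pages walk quoted above (start from a seed page and follow the platform's suggestions outward) is a breadth-first crawl. A minimal sketch of that mapping step; `related_pages` is a hypothetical caller-supplied function, since Facebook exposes no public API for these suggestions and the researchers appear to have recorded them manually:

```python
from collections import deque

def map_page_network(seed_page, related_pages, max_depth=2):
    """Breadth-first crawl of a platform's 'related pages' graph from a seed.
    `related_pages` is a caller-supplied function (hypothetical here) returning
    the pages recommended alongside a given page."""
    seen = {seed_page}
    queue = deque([(seed_page, 0)])
    edges = []  # (page, recommended_page) pairs observed during the crawl
    while queue:
        page, depth = queue.popleft()
        if depth == max_depth:
            continue
        for neighbour in related_pages(page):
            edges.append((page, neighbour))
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append((neighbour, depth + 1))
    return edges

# Toy graph standing in for recorded recommendations:
toy = {"I support the police": ["Page A", "Page B"], "Page A": ["Suavelos"], "Page B": []}
print(map_page_network("I support the police", lambda p: toy.get(p, [])))
```

Capping the depth matters: recommendation graphs fan out quickly, and edges beyond one or two hops say progressively less about the seed page.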


Suavelos created a variety of pages on Facebook which presented themselves as centring on prosocial causes. Facebook’s algorithm helped direct users to these pages (T0092: Build Network, T0151.001: Social Media Platform, T0153.006: Content Recommendation Algorithm, T0151.003: Online Community Page, T0143.208: Social Cause Persona).

Suavelos used these pages to generate traffic for their WordPress site (T0122: Direct Users to Alternative Platforms, T0152.003: Website Hosting Platform, T0152.004: Website Asset), which used accounts on a variety of online advertising platforms to host adverts (T0146: Account Asset, T0153.005: Online Advertising Platform). | -| [T0122 Direct Users to Alternative Platforms](../../generated_pages/techniques/T0122.md) | This article examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology:

Suavelos uses Facebook and other platforms to amplify its message. In order to bypass the platforms’ community standards and keep their public pages active, Facebook pages such as “I support the police” are a good vehicle to spread a specific agenda without claiming to be racist. In looking back at this Facebook page, we followed Facebook’s algorithm for related pages and found suggested Facebook pages

[...]

This amplification strategy on Facebook is successful, as according to SimilarWeb figures, it attracts around 111,000 visits every month on the Suavelos.eu website.

[...]

Revenue through online advertisements can be achieved by different platforms through targeted advertisements, like Google Adsense or Doubleclick, or related and similar sponsored content, such as Taboola. Accordingly, Suavelos.eu uses both of these websites to display advertisements and consequently receives funding from such advertisements.

Once visitors are on the website supporting its advertisement revenue, Suavelos’ goal is to then turn these visitors into regular members of Suavelos network through donations or fees, or have them continue to support Suavelos.


Suavelos created a variety of pages on Facebook which presented themselves as centring on prosocial causes. Facebook’s algorithm helped direct users to these pages (T0092: Build Network, T0151.001: Social Media Platform, T0153.006: Content Recommendation Algorithm, T0151.003: Online Community Page, T0143.208: Social Cause Persona).

Suavelos used these pages to generate traffic for their WordPress site (T0122: Direct Users to Alternative Platforms, T0152.003: Website Hosting Platform, T0152.004: Website Asset), which used accounts on a variety of online advertising platforms to host adverts (T0146: Account Asset, T0153.005: Online Advertising Platform). | | [T0151.001 Social Media Platform](../../generated_pages/techniques/T0151.001.md) | This article examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology:

Suavelos uses Facebook and other platforms to amplify its message. In order to bypass the platforms’ community standards and keep their public pages active, Facebook pages such as “I support the police” are a good vehicle to spread a specific agenda without claiming to be racist. In looking back at this Facebook page, we followed Facebook’s algorithm for related pages and found suggested Facebook pages

[...]

This amplification strategy on Facebook is successful, as according to SimilarWeb figures, it attracts around 111,000 visits every month on the Suavelos.eu website.

[...]

Revenue through online advertisements can be achieved by different platforms through targeted advertisements, like Google Adsense or Doubleclick, or related and similar sponsored content, such as Taboola. Accordingly, Suavelos.eu uses both of these websites to display advertisements and consequently receives funding from such advertisements.

Once visitors are on the website supporting its advertisement revenue, Suavelos’ goal is to then turn these visitors into regular members of Suavelos network through donations or fees, or have them continue to support Suavelos.


Suavelos created a variety of pages on Facebook which presented themselves as centring on prosocial causes. Facebook’s algorithm helped direct users to these pages (T0092: Build Network, T0151.001: Social Media Platform, T0153.006: Content Recommendation Algorithm, T0151.003: Online Community Page, T0143.208: Social Cause Persona).

Suavelos used these pages to generate traffic for their WordPress site (T0122: Direct Users to Alternative Platforms, T0152.003: Website Hosting Platform, T0152.004: Website Asset), which used accounts on a variety of online advertising platforms to host adverts (T0146: Account Asset, T0153.005: Online Advertising Platform). | -| [T0151.003 Online Community Page](../../generated_pages/techniques/T0151.003.md) | This article examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology:

Suavelos uses Facebook and other platforms to amplify its message. In order to bypass the platforms’ community standards and keep their public pages active, Facebook pages such as “I support the police” are a good vehicle to spread a specific agenda without claiming to be racist. In looking back at this Facebook page, we followed Facebook’s algorithm for related pages and found suggested Facebook pages

[...]

This amplification strategy on Facebook is successful, as according to SimilarWeb figures, it attracts around 111,000 visits every month on the Suavelos.eu website.

[...]

Revenue through online advertisements can be achieved by different platforms through targeted advertisements, like Google Adsense or Doubleclick, or related and similar sponsored content, such as Taboola. Accordingly, Suavelos.eu uses both of these websites to display advertisements and consequently receives funding from such advertisements.

Once visitors are on the website supporting its advertisement revenue, Suavelos’ goal is to then turn these visitors into regular members of Suavelos network through donations or fees, or have them continue to support Suavelos.


Suavelos created a variety of pages on Facebook which presented themselves as centring on prosocial causes. Facebook’s algorithm helped direct users to these pages (T0092: Build Network, T0151.001: Social Media Platform, T0153.006: Content Recommendation Algorithm, T0151.003: Online Community Page, T0143.208: Social Cause Persona).

Suavelos used these pages to generate traffic for their WordPress site (T0122: Direct Users to Alternative Platforms, T0152.003: Website Hosting Platform, T0152.004: Website Asset), which used accounts on a variety of online advertising platforms to host adverts (T0146: Account Asset, T0153.005: Online Advertising Platform). | -| [T0152.003 Website Hosting Platform](../../generated_pages/techniques/T0152.003.md) | This article examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology:

Suavelos uses Facebook and other platforms to amplify its message. In order to bypass the platforms’ community standards and keep their public pages active, Facebook pages such as “I support the police” are a good vehicle to spread a specific agenda without claiming to be racist. In looking back at this Facebook page, we followed Facebook’s algorithm for related pages and found suggested Facebook pages

[...]

This amplification strategy on Facebook is successful, as according to SimilarWeb figures, it attracts around 111,000 visits every month on the Suavelos.eu website.

[...]

Revenue through online advertisements can be achieved by different platforms through targeted advertisements, like Google Adsense or Doubleclick, or related and similar sponsored content, such as Taboola. Accordingly, Suavelos.eu uses both of these websites to display advertisements and consequently receives funding from such advertisements.

Once visitors are on the website supporting its advertisement revenue, Suavelos’ goal is to then turn these visitors into regular members of Suavelos network through donations or fees, or have them continue to support Suavelos.


Suavelos created a variety of pages on Facebook which presented themselves as centring on prosocial causes. Facebook’s algorithm helped direct users to these pages (T0092: Build Network, T0151.001: Social Media Platform, T0153.006: Content Recommendation Algorithm, T0151.003: Online Community Page, T0143.208: Social Cause Persona).

Suavelos used these pages to generate traffic for their WordPress site (T0122: Direct Users to Alternative Platforms, T0152.003: Website Hosting Platform, T0152.004: Website Asset), which used accounts on a variety of online advertising platforms to host adverts (T0146: Account Asset, T0153.005: Online Advertising Platform). | -| [T0152.004 Website Asset](../../generated_pages/techniques/T0152.004.md) | This article examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology:

Suavelos uses Facebook and other platforms to amplify its message. In order to bypass the platforms’ community standards and keep their public pages active, Facebook pages such as “I support the police” are a good vehicle to spread a specific agenda without claiming to be racist. In looking back at this Facebook page, we followed Facebook’s algorithm for related pages and found suggested Facebook pages

[...]

This amplification strategy on Facebook is successful, as according to SimilarWeb figures, it attracts around 111,000 visits every month on the Suavelos.eu website.

[...]

Revenue through online advertisements can be achieved by different platforms through targeted advertisements, like Google Adsense or Doubleclick, or related and similar sponsored content, such as Taboola. Accordingly, Suavelos.eu uses both of these websites to display advertisements and consequently receives funding from such advertisements.

Once visitors are on the website supporting its advertisement revenue, Suavelos’ goal is to then turn these visitors into regular members of Suavelos network through donations or fees, or have them continue to support Suavelos.


Suavelos created a variety of pages on Facebook which presented themselves as centring on prosocial causes. Facebook’s algorithm helped direct users to these pages (T0092: Build Network, T0151.001: Social Media Platform, T0153.006: Content Recommendation Algorithm, T0151.003: Online Community Page, T0143.208: Social Cause Persona).

Suavelos used these pages to generate traffic for their WordPress site (T0122: Direct Users to Alternative Platforms, T0152.003: Website Hosting Platform, T0152.004: Website Asset), which used accounts on a variety of online advertising platforms to host adverts (T0146: Account Asset, T0153.005: Online Advertising Platform). | -| [T0153.005 Online Advertising Platform](../../generated_pages/techniques/T0153.005.md) | This article examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology:

Suavelos uses Facebook and other platforms to amplify its message. In order to bypass the platforms’ community standards and keep their public pages active, Facebook pages such as “I support the police” are a good vehicle to spread a specific agenda without claiming to be racist. In looking back at this Facebook page, we followed Facebook’s algorithm for related pages and found suggested Facebook pages

[...]

This amplification strategy on Facebook is successful, as according to SimilarWeb figures, it attracts around 111,000 visits every month on the Suavelos.eu website.

[...]

Revenue through online advertisements can be achieved by different platforms through targeted advertisements, like Google Adsense or Doubleclick, or related and similar sponsored content, such as Taboola. Accordingly, Suavelos.eu uses both of these websites to display advertisements and consequently receives funding from such advertisements.

Once visitors are on the website supporting its advertisement revenue, Suavelos’ goal is to then turn these visitors into regular members of Suavelos network through donations or fees, or have them continue to support Suavelos.


Suavelos created a variety of pages on Facebook which presented themselves as centring on prosocial causes. Facebook’s algorithm helped direct users to these pages (T0092: Build Network, T0151.001: Social Media Platform, T0153.006: Content Recommendation Algorithm, T0151.003: Online Community Page, T0143.208: Social Cause Persona).

Suavelos used these pages to generate traffic for their WordPress site (T0122: Direct Users to Alternative Platforms, T0152.003: Website Hosting Platform, T0152.004: Website Asset), which used accounts on a variety of online advertising platforms to host adverts (T0146: Account Asset, T0153.005: Online Advertising Platform). | | [T0153.006 Content Recommendation Algorithm](../../generated_pages/techniques/T0153.006.md) | This article examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology:

Suavelos uses Facebook and other platforms to amplify its message. In order to bypass the platforms’ community standards and keep their public pages active, Facebook pages such as “I support the police” are a good vehicle to spread a specific agenda without claiming to be racist. In looking back at this Facebook page, we followed Facebook’s algorithm for related pages and found suggested Facebook pages

[...]

This amplification strategy on Facebook is successful, as according to SimilarWeb figures, it attracts around 111,000 visits every month on the Suavelos.eu website.

[...]

Revenue through online advertisements can be achieved by different platforms through targeted advertisements, like Google Adsense or Doubleclick, or related and similar sponsored content, such as Taboola. Accordingly, Suavelos.eu uses both of these websites to display advertisements and consequently receives funding from such advertisements.

Once visitors are on the website supporting its advertisement revenue, Suavelos’ goal is to then turn these visitors into regular members of Suavelos network through donations or fees, or have them continue to support Suavelos.


Suavelos created a variety of pages on Facebook which presented themselves as centring on prosocial causes. Facebook’s algorithm helped direct users to these pages (T0092: Build Network, T0151.001: Social Media Platform, T0153.006: Content Recommendation Algorithm, T0151.003: Online Community Page, T0143.208: Social Cause Persona).

Suavelos used these pages to generate traffic for their WordPress site (T0122: Direct Users to Alternative Platforms, T0152.003: Website Hosting Platform, T0152.004: Website Asset), which used accounts on a variety of online advertising platforms to host adverts (T0146: Account Asset, T0153.005: Online Advertising Platform). |
-DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW
\ No newline at end of file
diff --git a/generated_pages/incidents/I00109.md b/generated_pages/incidents/I00109.md
index 0b68a41..25320d1 100644
--- a/generated_pages/incidents/I00109.md
+++ b/generated_pages/incidents/I00109.md
@@ -22,21 +22,10 @@
| Technique | Description given for this incident |
| --------- | ------------------------- |
| [T0061 Sell Merchandise](../../generated_pages/techniques/T0061.md) | This report examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at technical indicators associated with the Suavelos website, and attributions which can be made as a consequence:

[The Google AdSense tag set up on Suavelos.eu was also found on the following domains, indicating that they are controlled by the same actor;] Alabastro.eu: an online shop to buy “white nationalists” t-shirts [and] ARPAC.eu: the website of a registered non-profit organisation advocating to lift regulation on gun control in France.
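The AdSense pivot quoted above is mechanically straightforward: pages serving Google ads embed the operator's publisher ID in their source, so extracting those IDs and grouping candidate sites by them surfaces likely common control. A minimal sketch; the candidate URLs are illustrative and the ID pattern assumes AdSense's usual "ca-pub-" format:

```python
import re
import urllib.request
from collections import defaultdict

# AdSense client IDs usually appear in page source as "ca-pub-" plus a long digit string.
PUB_ID = re.compile(r"ca-pub-\d{10,20}")

def publisher_ids(url: str) -> set:
    """Fetch a page and return any AdSense publisher IDs found in its HTML."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    return set(PUB_ID.findall(html))

# Illustrative candidates; a shared publisher ID suggests a shared beneficiary.
candidates = ["https://suavelos.eu", "https://alabastro.eu", "https://arpac.eu"]
by_pub = defaultdict(list)
for url in candidates:
    for pub in publisher_ids(url):
        by_pub[pub].append(url)

for pub, urls in by_pub.items():
    if len(urls) > 1:
        print(f"{pub} appears on: {', '.join(urls)}")
```

Because the publisher ID points at a payout account, a match is stronger evidence of common benefit than shared hosting; analytics tags support the same pivot.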

Other domains attributed to Suavelos (T0149.001: Domain Asset) reveal a website set up to sell merchandise (T0152.004: Website Asset, T0148.004: Payment Processing Capability, T0061: Sell Merchandise), and a website hosting a registered French non-profit (T0152.004: Website Asset, T0097.207: NGO Persona).

To learn more about the suavelos.eu domain, we collected the following data: The domain is hosted on OVH; The owner’s identity is protected; The IP Address of the server is 94.23.253.173, which is shared with 20 other domains.

The relatively low number of websites hosted on this IP address could indicate that they all belong to the same people, and are hosted on the same private server.
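The co-hosting observation above can be reproduced with ordinary DNS resolution: resolve each candidate domain and group by IP address. A minimal sketch; the domain list is illustrative, and (as the report itself hedges) a shared IP is weak evidence of common ownership on its own:

```python
import socket
from collections import defaultdict

# Illustrative candidates; in practice the list would come from a reverse-IP lookup service.
domains = ["suavelos.eu", "alabastro.eu", "arpac.eu"]

hosts_by_ip = defaultdict(list)
for domain in domains:
    try:
        ip = socket.gethostbyname(domain)  # resolve the domain's A record
    except socket.gaierror:
        continue                           # skip domains that no longer resolve
    hosts_by_ip[ip].append(domain)

for ip, hosted in hosts_by_ip.items():
    if len(hosted) > 1:
        print(f"{ip} co-hosts: {', '.join(hosted)}")
```

Twenty domains on one IP, as here, is consistent with a small private server; tens of thousands would point to commodity shared hosting and carry no attribution weight.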


Suavelos registered a domain using the web hosting provider OVH (T0149.001: Domain Asset, T0152.003: Website Hosting Platform, T0150.006: Purchased Asset). The site’s IP address reveals a server hosting other domains potentially owned by the actors (T0149.005: Server Asset, T0149.006: IP Address Asset). | -| [T0097.207 NGO Persona](../../generated_pages/techniques/T0097.207.md) | This report examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at technical indicators associated with the Suavelos website, and attributions which can be made as a consequence:

[The Google AdSense tag set up on Suavelos.eu was also found on the following domains, indicating that they are controlled by the same actor;] Alabastro.eu: an online shop to buy “white nationalists” t-shirts [and] ARPAC.eu: the website of a registered non-profit organisation advocating to lift regulation on gun control in France.

Other domains attributed to Suavelos (T0149.001: Domain Asset) reveal a website set up to sell merchandise (T0152.004: Website Asset, T0148.004: Payment Processing Capability, T0061: Sell Merchandise), and a website hosting a registered French non-profit (T0152.004: Website Asset, T0097.207: NGO Persona).

To learn more about the suavelos.eu domain, we collected the following data: The domain is hosted on OVH; The owner’s identity is protected; The IP Address of the server is 94.23.253.173, which is shared with 20 other domains.

The relatively low number of websites hosted on this IP address could indicate that they all belong to the same people, and are hosted on the same private server.


Suavelos registered a domain using the web hosting provider OVH (T0149.001: Domain Asset, T0152.003: Website Hosting Platform, T0150.006: Purchased Asset). The site’s IP address reveals a server hosting other domains potentially owned by the actors (T0149.005: Server Asset, T0149.006: IP Address Asset). | -| [T0146 Account Asset](../../generated_pages/techniques/T0146.md) | This report examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at the Suavelos website, and the content it links out to.

In going back to Suavelos’ main page, we also found: A link to a page on a web shop: alabastro.eu; A link to a page to donate money to the founders through Tipee and to the website through PayPal; [and] a link to a private forum that gathers 3.000 members: oppidum.suavelos.eu;

Suavelos linked out to an online store which it controlled (T0152.004: Website Asset, T0148.004: Payment Processing Capability), and to accounts on payment processing platforms PayPal and Tipee (T0146: Account Asset, T0148.003: Payment Processing Platform).

The Suavelos website also hosted a private forum (T0151.009: Legacy Online Forum Platform, T0155: Gated Asset), and linked out to a variety of assets it controlled on other online platforms: accounts on Twitter (T0146: Account Asset, T0151.008: Microblogging Platform), YouTube (T0146: Account Asset, T0152.006: Video Platform), Instagram and VKontakte (T0146: Account Asset, T0151.001: Social Media Platform). | | [T0148.003 Payment Processing Platform](../../generated_pages/techniques/T0148.003.md) | This report examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at the Suavelos website, and the content it links out to.

In going back to Suavelos’ main page, we also found: A link to a page on a web shop: alabastro.eu; A link to a page to donate money to the founders through Tipee and to the website through PayPal; [and] a link to a private forum that gathers 3.000 members: oppidum.suavelos.eu;

Suavelos linked out to an online store which it controlled (T0152.004: Website Asset, T0148.004: Payment Processing Capability), and to accounts on payment processing platforms PayPal and Tipee (T0146: Account Asset, T0148.003: Payment Processing Platform).

The Suavelos website also hosted a private forum (T0151.009: Legacy Online Forum Platform, T0155: Gated Asset), and linked out to a variety of assets it controlled on other online platforms: accounts on Twitter (T0146: Account Asset, T0151.008: Microblogging Platform), YouTube (T0146: Account Asset, T0152.006: Video Platform), Instagram and VKontakte (T0146: Account Asset, T0151.001: Social Media Platform). | | [T0148.004 Payment Processing Capability](../../generated_pages/techniques/T0148.004.md) | This report examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at the Suavelos website, and the content it links out to.

In going back to Suavelos’ main page, we also found: A link to a page on a web shop: alabastro.eu; A link to a page to donate money to the founders through Tipee and to the website through PayPal; [and] a link to a private forum that gathers 3.000 members: oppidum.suavelos.eu;

Suavelos linked out to an online store which it controlled (T0152.004: Website Asset, T0148.004: Payment Processing Capability), and to accounts on payment processing platforms PayPal and Tipee (T0146: Account Asset, T0148.003: Payment Processing Platform).

The Suavelos website also hosted a private forum (T0151.009: Legacy Online Forum Platform, T0155: Gated Asset), and linked out to a variety of assets it controlled on other online platforms: accounts on Twitter (T0146: Account Asset, T0151.008: Microblogging Platform), YouTube (T0146: Account Asset, T0152.006: Video Platform), Instagram and VKontakte (T0146: Account Asset, T0151.001: Social Media Platform). | | [T0148.004 Payment Processing Capability](../../generated_pages/techniques/T0148.004.md) | This report examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at technical indicators associated with the Suavelos website, and attributions which can be made as a consequence:

[The Google AdSense tag set up on Suavelos.eu was also found on the following domains, indicating that they are controlled by the same actor;] Alabastro.eu: an online shop to buy “white nationalists” t-shirts [and] ARPAC.eu: the website of a registered non-profit organisation advocating to lift regulation on gun control in France.

Other domains attributed to Suavelos (T0149.001: Domain Asset) reveal a website set up to sell merchandise (T0152.004: Website Asset, T0148.004: Payment Processing Capability, T0061: Sell Merchandise), and a website hosting a registered French non-profit (T0152.004: Website Asset, T0097.207: NGO Persona).

To learn more about the suavelos.eu domain, we collected the following data: The domain is hosted on OVH; The owner’s identity is protected; The IP Address of the server is 94.23.253.173, which is shared with 20 other domains.

The relatively low number of websites hosted on this IP address could indicate that they all belong to the same people, and are hosted on the same private server.


Suavelos registered a domain using the web hosting provider OVH (T0149.001: Domain Asset, T0152.003: Website Hosting Platform, T0150.006: Purchased Asset). The site’s IP address reveals a server hosting other domains potentially owned by the actors (T0149.005: Server Asset, T0149.006: IP Address Asset). | | [T0149.001 Domain Asset](../../generated_pages/techniques/T0149.001.md) | This report examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at technical indicators associated with the Suavelos website, and attributions which can be made as a consequence:

[The Google AdSense tag set up on Suavelos.eu was also found on the following domains, indicating that they are controlled by the same actor;] Alabastro.eu: an online shop to buy “white nationalists” t-shirts [and] ARPAC.eu: the website of a registered non-profit organisation advocating to lift regulation on gun control in France.

Other domains attributed to Suavelos (T0149.001: Domain Asset) reveal a website set up to sell merchandise (T0152.004: Website Asset, T0148.004: Payment Processing Capability, T0061: Sell Merchandise), and a website hosting a registered French non-profit (T0152.004: Website Asset, T0097.207: NGO Persona).

To learn more about the suavelos.eu domain, we collected the following data: The domain is hosted on OVH; The owner’s identity is protected; The IP Address of the server is 94.23.253.173, which is shared with 20 other domains.

The relatively low number of websites hosted on this IP address could indicate that they all belong to the same people, and are hosted on the same private server.


Suavelos registered a domain using the web hosting provider OVH (T0149.001: Domain Asset, T0152.003: Website Hosting Platform, T0150.006: Purchased Asset). The site’s IP address reveals a server hosting other domains potentially owned by the actors (T0149.005: Server Asset, T0149.006: IP Address Asset). | -| [T0149.005 Server Asset](../../generated_pages/techniques/T0149.005.md) | This report examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at technical indicators associated with the Suavelos website, and attributions which can be made as a consequence:

[The Google AdSense tag set up on Suavelos.eu was also found on the following domains, indicating that they are controlled by the same actor;] Alabastro.eu: an online shop to buy “white nationalists” t-shirts [and] ARPAC.eu: the website of a registered non-profit organisation advocating to lift regulation on gun control in France.

Other domains attributed to Suavelos (T0149.001: Domain Asset) reveal a website set up to sell merchandise (T0152.004: Website Asset, T0148.004: Payment Processing Capability, T0061: Sell Merchandise), and a website hosting a registered French non-profit (T0152.004: Website Asset, T0097.207: NGO Persona).

To learn more about the suavelos.eu domain, we collected the following data: The domain is hosted on OVH; The owner’s identity is protected; The IP Address of the server is 94.23.253.173, which is shared with 20 other domains.

The relatively low number of websites hosted on this IP address could indicate that they all belong to the same people, and are hosted on the same private server.


Suavelos registered a domain using the web hosting provider OVH (T0149.001: Domain Asset, T0152.003: Website Hosting Platform, T0150.006: Purchased Asset). The site’s IP address reveals a server hosting other domains potentially owned by the actors (T0149.005: Server Asset, T0149.006: IP Address Asset). | -| [T0149.006 IP Address Asset](../../generated_pages/techniques/T0149.006.md) | This report examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at technical indicators associated with the Suavelos website, and attributions which can be made as a consequence:

[The Google AdSense tag set up on Suavelos.eu was also found on the following domains, indicating that they are controlled by the same actor;] Alabastro.eu: an online shop to buy “white nationalists” t-shirts [and] ARPAC.eu: the website of a registered non-profit organisation advocating to lift regulation on gun control in France.

Other domains attributed to Suavelos (T0149.001: Domain Asset) reveal a website set up to sell merchandise (T0152.004: Website Asset, T0148.004: Payment Processing Capability, T0061: Sell Merchandise), and a website hosting a registered French non-profit (T0152.004: Website Asset, T0097.207: NGO Persona).

To learn more about the suavelos.eu domain, we collected the following data: The domain is hosted on OVH; The owner’s identity is protected; The IP Address of the server is 94.23.253.173, which is shared with 20 other domains.

The relatively low number of websites hosted on this IP address could indicate that they all belong to the same people, and are hosted on the same private server.


Suavelos registered a domain using the web hosting provider OVH (T0149.001: Domain Asset, T0152.003: Website Hosting Platform, T0150.006: Purchased Asset). The site’s IP address reveals a server hosting other domains potentially owned by the actors (T0149.005: Server Asset, T0149.006: IP Address Asset). | -| [T0150.006 Purchased Asset](../../generated_pages/techniques/T0150.006.md) | This report examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at technical indicators associated with the Suavelos website, and attributions which can be made as a consequence:

[The Google AdSense tag set up on Suavelos.eu was also found on the following domains, indicating that they are controlled by the same actor;] Alabastro.eu: an online shop to buy “white nationalists” t-shirts [and] ARPAC.eu: the website of a registered non-profit organisation advocating to lift regulation on gun control in France.

Other domains attributed to Suavelos (T0149.001: Domain Asset) reveal a website set up to sell merchandise (T0152.004: Website Asset, T0148.004: Payment Processing Capability, T0061: Sell Merchandise), and a website hosting a registered French non-profit (T0152.004: Website Asset, T0097.207: NGO Persona).

To learn more about the suavelos.eu domain, we collected the following data: The domain is hosted on OVH; The owner’s identity is protected; The IP Address of the server is 94.23.253.173, which is shared with 20 other domains.

The relatively low number of websites hosted on this IP address could indicate that they all belong to the same people, and are hosted on the same private server.


Suavelos registered a domain using the web hosting provider OVH (T0149.001: Domain Asset, T0152.003: Website Hosting Platform, T0150.006: Purchased Asset). The site’s IP address reveals a server hosting other domains potentially owned by the actors (T0149.005: Server Asset, T0149.006: IP Address Asset). | -| [T0151.001 Social Media Platform](../../generated_pages/techniques/T0151.001.md) | This report examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at the Suavelos website, and the content it links out to.

In going back to Suavelos’ main page, we also found: A link to a page on a web shop: alabastro.eu; A link to a page to donate money to the founders through Tipee and to the website through PayPal; [and] a link to a private forum that gathers 3.000 members: oppidum.suavelos.eu;

Suavelos linked out to an online store which it controlled (T0152.004: Website Asset, T0148.004: Payment Processing Capability), and to accounts on payment processing platforms PayPal and Tipee (T0146: Account Asset, T0148.003: Payment Processing Platform).

The Suavelos website also hosted a private forum (T0151.009: Legacy Online Forum Platform, T0155: Gated Asset), and linked out to a variety of assets it controlled on other online platforms: accounts on Twitter (T0146: Account Asset, T0151.008: Microblogging Platform), YouTube (T0146: Account Asset, T0152.006: Video Platform), Instagram and VKontakte (T0146: Account Asset, T0151.001: Social Media Platform). | -| [T0151.008 Microblogging Platform](../../generated_pages/techniques/T0151.008.md) | This report examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at the Suavelos website, and the content it links out to.

In going back to Suavelos’ main page, we also found: A link to a page on a web shop: alabastro.eu; A link to a page to donate money to the founders through Tipee and to the website through PayPal; [and] a link to a private forum that gathers 3.000 members: oppidum.suavelos.eu;

Suavelos linked out to an online store which it controlled (T0152.004: Website Asset, T0148.004: Payment Processing Capability), and to accounts on payment processing platforms PayPal and Tipee (T0146: Account Asset, T0148.003: Payment Processing Platform).

The Suavelos website also hosted a private forum (T0151.009: Legacy Online Forum Platform, T0155: Gated Asset), and linked out to a variety of assets it controlled on other online platforms: accounts on Twitter (T0146: Account Asset, T0151.008: Microblogging Platform), YouTube (T0146: Account Asset, T0152.006: Video Platform), Instagram and VKontakte (T0146: Account Asset, T0151.001: Social Media Platform). | -| [T0151.009 Legacy Online Forum Platform](../../generated_pages/techniques/T0151.009.md) | This report examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at the Suavelos website, and the content it links out to.

In going back to Suavelos’ main page, we also found: A link to a page on a web shop: alabastro.eu; A link to a page to donate money to the founders through Tipee and to the website through PayPal; [and] a link to a private forum that gathers 3.000 members: oppidum.suavelos.eu;

Suavelos linked out to an online store which it controlled (T0152.004: Website Asset, T0148.004: Payment Processing Capability), and to accounts on payment processing platforms PayPal and Tipee (T0146: Account Asset, T0148.003: Payment Processing Platform).

The Suavelos website also hosted a private forum (T0151.009: Legacy Online Forum Platform, T0155: Gated Asset), and linked out to a variety of assets it controlled on other online platforms: accounts on Twitter (T0146: Account Asset, T0151.008: Microblogging Platform), YouTube (T0146: Account Asset, T0152.006: Video Platform), Instagram and VKontakte (T0146: Account Asset, T0151.001: Social Media Platform). | | [T0152.004 Website Asset](../../generated_pages/techniques/T0152.004.md) | This report examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at the Suavelos website, and the content it links out to.

In going back to Suavelos’ main page, we also found: A link to a page on a web shop: alabastro.eu; A link to a page to donate money to the founders through Tipee and to the website through PayPal; [and] a link to a private forum that gathers 3.000 members: oppidum.suavelos.eu;

Suavelos linked out to an online store which it controlled (T0152.004: Website Asset, T0148.004: Payment Processing Capability), and to accounts on payment processing platforms PayPal and Tipee (T0146: Account Asset, T0148.003: Payment Processing Platform).

The Suavelos website also hosted a private forum (T0151.009: Legacy Online Forum Platform, T0155: Gated Asset), and linked out to a variety of assets it controlled on other online platforms: accounts on Twitter (T0146: Account Asset, T0151.008: Microblogging Platform), YouTube (T0146: Account Asset, T0152.006: Video Platform), Instagram and VKontakte (T0146: Account Asset, T0151.001: Social Media Platform). | -| [T0152.006 Video Platform](../../generated_pages/techniques/T0152.006.md) | This report examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at the Suavelos website, and the content it links out to.

In going back to Suavelos’ main page, we also found: A link to a page on a web shop: alabastro.eu; A link to a page to donate money to the founders through Tipee and to the website through PayPal; [and] a link to a private forum that gathers 3.000 members: oppidum.suavelos.eu;

Suavelos linked out to an online store which it controlled (T0152.004: Website Asset, T0148.004: Payment Processing Capability), and to accounts on payment processing platforms PayPal and Tipee (T0146: Account Asset, T0148.003: Payment Processing Platform).

The Suavelos website also hosted a private forum (T0151.009: Legacy Online Forum Platform, T0155: Gated Asset), and linked out to a variety of assets it controlled on other online platforms: accounts on Twitter (T0146: Account Asset, T0151.008: Microblogging Platform), YouTube (T0146: Account Asset, T0152.006: Video Platform), Instagram and VKontakte (T0146: Account Asset, T0151.001: Social Media Platform). | -| [T0155 Gated Asset](../../generated_pages/techniques/T0155.md) | This report examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at the Suavelos website, and the content it links out to.

In going back to Suavelos’ main page, we also found: A link to a page on a web shop: alabastro.eu; A link to a page to donate money to the founders through Tipee and to the website through PayPal; [and] a link to a private forum that gathers 3.000 members: oppidum.suavelos.eu;

Suavelos linked out to an online store which it controlled (T0152.004: Website Asset, T0148.004: Payment Processing Capability), and to accounts on payment processing platforms PayPal and Tipee (T0146: Account Asset, T0148.003: Payment Processing Platform).

The Suavelos website also hosted a private forum (T0151.009: Legacy Online Forum Platform, T0155: Gated Asset), and linked out to a variety of assets it controlled on other online platforms: accounts on Twitter (T0146: Account Asset, T0151.008: Microblogging Platform), YouTube (T0146: Account Asset, T0152.006: Video Platform), Instagram and VKontakte (T0146: Account Asset, T0151.001: Social Media Platform). |
-DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW
\ No newline at end of file
diff --git a/generated_pages/incidents/I00110.md b/generated_pages/incidents/I00110.md
index 0e8ee60..07bfc11 100644
--- a/generated_pages/incidents/I00110.md
+++ b/generated_pages/incidents/I00110.md
@@ -22,14 +22,10 @@
| Technique | Description given for this incident |
| --------- | ------------------------- |
| [T0017 Conduct Fundraising](../../generated_pages/techniques/T0017.md) | The EU DisinfoLab produced a report into disinformation published on crowdfunding platforms:

More worrisome is the direct monetisation of disinformation happening on crowdfunding platforms: on Kickstarter, we found a user openly raising money for a documentary project suggesting that COVID-19 is a conspiracy.

A Kickstarter user attempted to use the platform to fund production of a documentary (T0017: Conduct Fundraising, T0087: Develop Video-Based Content, T0146: Account Asset, T0148.006: Crowdfunding Platform).

On Patreon, we found several instances of direct monetisation of COVID-19 disinformation, including posts promoting a device allegedly protecting against COVID-19 and 5G, as well as posts related to the “Plandemic” conspiracy video, which gained attention on YouTube before being removed by the platform.

We also found an account called “Stranger than fiction” entirely dedicated to disinformation, which openly states that their content was “Banned by screwtube and fakebook, our videos have been viewed over a billion times.”


The “Stranger than fiction” user presented itself as an alternative news source which had been banned from other platforms (T0146: Account Asset, T0097.202: News Outlet Persona, T0121.001: Bypass Content Blocking, T0152.012: Subscription Service Platform).

On the US-based crowdfunding platform IndieGogo, EU DisinfoLab found a successful crowdfunding campaign of €133.903 for a book called Revolution Q. This book, now also available on Amazon, claims to be “Written for both newcomers and long-time QAnon fans alike, this book is a treasure-trove of information designed to help everyone weather The Storm.”

An IndieGogo account was used to gather funds to produce a book on QAnon (T0017: Conduct Fundraising, T0085.005: Develop Book, T0146: Account Asset, T0148.006: Crowdfunding Platform), with the book later sold on Amazon’s marketplace (T0148.007: eCommerce Platform). | -| [T0085.005 Develop Book](../../generated_pages/techniques/T0085.005.md) | The EU DisinfoLab produced a report into disinformation published on crowdfunding platforms:

More worrisome is the direct monetisation of disinformation happening on crowdfunding platforms: on Kickstarter, we found a user openly raising money for a documentary project suggesting that COVID-19 is a conspiracy.

A Kickstarter user attempted to use the platform to fund production of a documentary (T0017: Conduct Fundraising, T0087: Develop Video-Based Content, T0146: Account Asset, T0148.006: Crowdfunding Platform).

On Patreon, we found several instances of direct monetisation of COVID-19 disinformation, including posts promoting a device allegedly protecting against COVID-19 and 5G, as well as posts related to the “Plandemic” conspiracy video, which gained attention on YouTube before being removed by the platform.

We also found an account called “Stranger than fiction” entirely dedicated to disinformation, which openly states that their content was “Banned by screwtube and fakebook, our videos have been viewed over a billion times.”


The “Stranger than fiction” user presented itself as an alternative news source which had been banned from other platforms (T0146: Account Asset, T0097.202: News Outlet Persona, T0121.001: Bypass Content Blocking, T0152.012: Subscription Service Platform).

On the US-based crowdfunding platform IndieGogo, EU DisinfoLab found a successful crowdfunding campaign of €133.903 for a book called Revolution Q. This book, now also available on Amazon, claims to be “Written for both newcomers and long-time QAnon fans alike, this book is a treasure-trove of information designed to help everyone weather The Storm.”

An IndieGogo account was used to gather funds to produce a book on QAnon (T0017: Conduct Fundraising, T0085.005: Develop Book, T0146: Account Asset, T0148.006: Crowdfunding Platform), with the book later sold on Amazon’s marketplace (T0148.007: eCommerce Platform). | | [T0087 Develop Video-Based Content](../../generated_pages/techniques/T0087.md) | The EU DisinfoLab produced a report into disinformation published on crowdfunding platforms:

More worrisome is the direct monetisation of disinformation happening on crowdfunding platforms: on Kickstarter, we found a user openly raising money for a documentary project suggesting that COVID-19 is a conspiracy.

A Kickstarter user attempted to use the platform to fund production of a documentary (T0017: Conduct Fundraising, T0087: Develop Video-Based Content, T0146: Account Asset, T0148.006: Crowdfunding Platform).

On Patreon, we found several instances of direct monetisation of COVID-19 disinformation, including posts promoting a device allegedly protecting against COVID-19 and 5G, as well as posts related to the “Plandemic” conspiracy video, which gained attention on YouTube before being removed by the platform.

We also found an account called “Stranger than fiction” entirely dedicated to disinformation, which openly states that their content was “Banned by screwtube and fakebook, our videos have been viewed over a billion times.”


The “Stranger than fiction” user presented itself as an alternative news source which had been banned from other platforms (T0146: Account Asset, T0097.202: News Outlet Persona, T0121.001: Bypass Content Blocking, T0152.012: Subscription Service Platform).

On the US-based crowdfunding platform IndieGogo, EU DisinfoLab found a successful crowdfunding campaign of €133.903 for a book called Revolution Q. This book, now also available on Amazon, claims to be “Written for both newcomers and long-time QAnon fans alike, this book is a treasure-trove of information designed to help everyone weather The Storm.”

An IndieGogo account was used to gather funds to produce a book on QAnon (T0017: Conduct Fundraising, T0085.005: Develop Book, T0146: Account Asset, T0148.006: Crowdfunding Platform), with the book later sold on Amazon’s marketplace (T0148.007: eCommerce Platform). | -| [T0097.202 News Outlet Persona](../../generated_pages/techniques/T0097.202.md) | The EU DisinfoLab produced a report into disinformation published on crowdfunding platforms:

More worrisome is the direct monetisation of disinformation happening on crowdfunding platforms: on Kickstarter, we found a user openly raising money for a documentary project suggesting that COVID-19 is a conspiracy.

A Kickstarter user attempted to use the platform to fund production of a documentary (T0017: Conduct Fundraising, T0087: Develop Video-Based Content, T0146: Account Asset, T0148.006: Crowdfunding Platform).

On Patreon, we found several instances of direct monetisation of COVID-19 disinformation, including posts promoting a device allegedly protecting against COVID-19 and 5G, as well as posts related to the “Plandemic” conspiracy video, which gained attention on YouTube before being removed by the platform.

We also found an account called “Stranger than fiction” entirely dedicated to disinformation, which openly states that their content was “Banned by screwtube and fakebook, our videos have been viewed over a billion times.”


The “Stranger than fiction” user presented itself as an alternative news source which had been banned from other platforms (T0146: Account Asset, T0097.202: News Outlet Persona, T0121.001: Bypass Content Blocking, T0152.012: Subscription Service Platform).

On the US-based crowdfunding platform IndieGogo, EU DisinfoLab found a successful crowdfunding campaign of €133.903 for a book called Revolution Q. This book, now also available on Amazon, claims to be “Written for both newcomers and long-time QAnon fans alike, this book is a treasure-trove of information designed to help everyone weather The Storm.”

An IndieGogo account was used to gather funds to produce a book on QAnon (T0017: Conduct Fundraising, T0085.005: Develop Book, T0146: Account Asset, T0148.006: Crowdfunding Platform), with the book later sold on Amazon’s marketplace (T0148.007: eCommerce Platform). | | [T0121.001 Bypass Content Blocking](../../generated_pages/techniques/T0121.001.md) | The EU DisinfoLab produced a report into disinformation published on crowdfunding platforms:

More worrisome is the direct monetisation of disinformation happening on crowdfunding platforms: on Kickstarter, we found a user openly raising money for a documentary project suggesting that COVID-19 is a conspiracy.

A Kickstarter user attempted to use the platform to fund production of a documentary (T0017: Conduct Fundraising, T0087: Develop Video-Based Content, T0146: Account Asset, T0148.006: Crowdfunding Platform).

On Patreon, we found several instances of direct monetisation of COVID-19 disinformation, including posts promoting a device allegedly protecting against COVID-19 and 5G, as well as posts related to the “Plandemic” conspiracy video, which gained attention on YouTube before being removed by the platform.

We also found an account called “Stranger than fiction” entirely dedicated to disinformation, which openly states that their content was “Banned by screwtube and fakebook, our videos have been viewed over a billion times.”


The “Stranger than fiction” user presented itself as an alternative news source which had been banned from other platforms (T0146: Account Asset, T0097.202: News Outlet Persona, T0121.001: Bypass Content Blocking, T0152.012: Subscription Service Platform).

On the US-based crowdfunding platform IndieGogo, EU DisinfoLab found a successful crowdfunding campaign of €133,903 for a book called Revolution Q. This book, now also available on Amazon, claims to be “Written for both newcomers and long-time QAnon fans alike, this book is a treasure-trove of information designed to help everyone weather The Storm.”

An IndieGogo account was used to gather funds to produce a book on QAnon (T0017: Conduct Fundraising, T0085.005: Develop Book, T0146: Account Asset, T0148.006: Crowdfunding Platform), with the book later sold on Amazon marketplace (T0148.007: eCommerce Platform). | -| [T0146 Account Asset](../../generated_pages/techniques/T0146.md) | The EU DisinfoLab produced a report into disinformation published on crowdfunding platforms:<br>

More worrisome is the direct monetisation of disinformation happening on crowdfunding platforms: on Kickstarter, we found a user openly raising money for a documentary project suggesting that COVID-19 is a conspiracy.

A Kickstarter user attempted to use the platform to fund production of a documentary (T0017: Conduct Fundraising, T0087: Develop Video-Based Content, T0146: Account Asset, T0148.006: Crowdfunding Platform).

On Patreon, we found several instances of direct monetisation of COVID-19 disinformation, including posts promoting a device allegedly protecting against COVID-19 and 5G, as well as posts related to the “Plandemic” conspiracy video, which gained attention on YouTube before being removed by the platform.

We also found an account called “Stranger than fiction” entirely dedicated to disinformation, which openly states that their content was “Banned by screwtube and fakebook, our videos have been viewed over a billion times.”


The “Stranger than fiction” user presented itself as an alternative news source which had been banned from other platforms (T0146: Account Asset, T0097.202: News Outlet Persona, T0121.001: Bypass Content Blocking, T0152.012: Subscription Service Platform).

On the US-based crowdfunding platform IndieGogo, EU DisinfoLab found a successful crowdfunding campaign of €133,903 for a book called Revolution Q. This book, now also available on Amazon, claims to be “Written for both newcomers and long-time QAnon fans alike, this book is a treasure-trove of information designed to help everyone weather The Storm.”

An IndieGogo account was used to gather funds to produce a book on QAnon (T0017: Conduct Fundraising, T0085.005: Develop Book, T0146: Account Asset, T0148.006: Crowdfunding Platform), with the book later sold on Amazon marketplace (T0148.007: eCommerce Platform). | | [T0148.006 Crowdfunding Platform](../../generated_pages/techniques/T0148.006.md) | The EU DisinfoLab produced a report into disinformation published on crowdfunding platforms:<br>

More worrisome is the direct monetisation of disinformation happening on crowdfunding platforms: on Kickstarter, we found a user openly raising money for a documentary project suggesting that COVID-19 is a conspiracy.

A Kickstarter user attempted to use the platform to fund production of a documentary (T0017: Conduct Fundraising, T0087: Develop Video-Based Content, T0146: Account Asset, T0148.006: Crowdfunding Platform).

On Patreon, we found several instances of direct monetisation of COVID-19 disinformation, including posts promoting a device allegedly protecting against COVID-19 and 5G, as well as posts related to the “Plandemic” conspiracy video, which gained attention on YouTube before being removed by the platform.

We also found an account called “Stranger than fiction” entirely dedicated to disinformation, which openly states that their content was “Banned by screwtube and fakebook, our videos have been viewed over a billion times.”


The “Stranger than fiction” user presented itself as an alternative news source which had been banned from other platforms (T0146: Account Asset, T0097.202: News Outlet Persona, T0121.001: Bypass Content Blocking, T0152.012: Subscription Service Platform).

On the US-based crowdfunding platform IndieGogo, EU DisinfoLab found a successful crowdfunding campaign of €133,903 for a book called Revolution Q. This book, now also available on Amazon, claims to be “Written for both newcomers and long-time QAnon fans alike, this book is a treasure-trove of information designed to help everyone weather The Storm.”

An IndieGogo account was used to gather funds to produce a book on QAnon (T0017: Conduct Fundraising, T0085.005: Develop Book, T0146: Account Asset, T0148.006: Crowdfunding Platform), with the book later sold on Amazon marketplace (T0148.007: eCommerce Platform). | | [T0148.007 eCommerce Platform](../../generated_pages/techniques/T0148.007.md) | The EU DisinfoLab produced a report into disinformation published on crowdfunding platforms:<br>

More worrisome is the direct monetisation of disinformation happening on crowdfunding platforms: on Kickstarter, we found a user openly raising money for a documentary project suggesting that COVID-19 is a conspiracy.

A Kickstarter user attempted to use the platform to fund production of a documentary (T0017: Conduct Fundraising, T0087: Develop Video-Based Content, T0146: Account Asset, T0148.006: Crowdfunding Platform).

On Patreon, we found several instances of direct monetisation of COVID-19 disinformation, including posts promoting a device allegedly protecting against COVID-19 and 5G, as well as posts related to the “Plandemic” conspiracy video, which gained attention on YouTube before being removed by the platform.

We also found an account called “Stranger than fiction” entirely dedicated to disinformation, which openly states that their content was “Banned by screwtube and fakebook, our videos have been viewed over a billion times.”


The “Stranger than fiction” user presented itself as an alternative news source which had been banned from other platforms (T0146: Account Asset, T0097.202: News Outlet Persona, T0121.001: Bypass Content Blocking, T0152.012: Subscription Service Platform).

On the US-based crowdfunding platform IndieGogo, EU DisinfoLab found a successful crowdfunding campaign of €133,903 for a book called Revolution Q. This book, now also available on Amazon, claims to be “Written for both newcomers and long-time QAnon fans alike, this book is a treasure-trove of information designed to help everyone weather The Storm.”

An IndieGogo account was used to gather funds to produce a book on QAnon (T0017: Conduct Fundraising, T0085.005: Develop Book, T0146: Account Asset, T0148.006: Crowdfunding Platform), with the book later sold on Amazon marketplace (T0148.007: eCommerce Platform). | | [T0152.012 Subscription Service Platform](../../generated_pages/techniques/T0152.012.md) | The EU DisinfoLab produced a report into disinformation published on crowdfunding platforms:<br>

More worrisome is the direct monetisation of disinformation happening on crowdfunding platforms: on Kickstarter, we found a user openly raising money for a documentary project suggesting that COVID-19 is a conspiracy.

A Kickstarter user attempted to use the platform to fund production of a documentary (T0017: Conduct Fundraising, T0087: Develop Video-Based Content, T0146: Account Asset, T0148.006: Crowdfunding Platform).

On Patreon, we found several instances of direct monetisation of COVID-19 disinformation, including posts promoting a device allegedly protecting against COVID-19 and 5G, as well as posts related to the “Plandemic” conspiracy video, which gained attention on YouTube before being removed by the platform.

We also found an account called “Stranger than fiction” entirely dedicated to disinformation, which openly states that their content was “Banned by screwtube and fakebook, our videos have been viewed over a billion times.”


The “Stranger than fiction” user presented itself as an alternative news source which had been banned from other platforms (T0146: Account Asset, T0097.202: News Outlet Persona, T0121.001: Bypass Content Blocking, T0152.012: Subscription Service Platform).

On the US-based crowdfunding platform IndieGogo, EU DisinfoLab found a successful crowdfunding campaign of €133,903 for a book called Revolution Q. This book, now also available on Amazon, claims to be “Written for both newcomers and long-time QAnon fans alike, this book is a treasure-trove of information designed to help everyone weather The Storm.”

An IndieGogo account was used to gather funds to produce a book on QAnon (T0017: Conduct Fundraising, T0085.005: Develop Book, T0146: Account Asset, T0148.006: Crowdfunding Platform), with the book later sold on Amazon marketplace (T0148.007: eCommerce Platform). | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/incidents/I00111.md b/generated_pages/incidents/I00111.md index 590d04e..a147231 100644 --- a/generated_pages/incidents/I00111.md +++ b/generated_pages/incidents/I00111.md @@ -24,10 +24,7 @@ | [T0085 Develop Text-Based Content](../../generated_pages/techniques/T0085.md) | In this article VICE News discusses a report produced by Advance Democracy on people who use Patreon to spread the false claim that an impending ice age will reverse the harms of the ongoing climate crisis:

“The spread of climate misinformation is prolific on social media, as well as on sites like Patreon, where users are actually financially compensated through the platform for spreading falsehoods,” Daniel Jones, president of Advance Democracy, told VICE News.

“Companies hosting and promoting climate misinformation have a responsibility to take action to reduce dangerous misinformation, as falsehoods about climate science are every bit as dangerous as lies about vaccinations and disinformation about our elections.”

Patreon did not respond to VICE News’ request for comment on the report’s findings.

One of the biggest accounts spreading climate conspiracies is ADAPT 2030, which is run by David DuByne, who has 1,100 followers on Patreon. He is currently making over $3,500 every month from his subscribers.

[The science DuByne relies on does not support his hypothesis. However,] this has not prevented DuByne and many others from preying on people’s fears about climate change to spread conspiracies about an impending ice age, which they say will miraculously fix all of earth’s climate problems.

DuByne offers seven different membership levels for supporters, beginning at just $1 per month.

The most expensive costs $100 a month, and gives patrons “a private 20-minute call with David DuByne once per month, to discuss your particular preparedness issues or concerns.” So far just two people are paying this amount.

The researchers also found at least eight other accounts on Patreon that have spread climate change conspiracy theories as part of wider conspiracy sharing, including baseless claims about COVID-19 and the legitimacy of Joe Biden’s presidency. Some of these accounts are earning over $600 per month.


David DuByne created an account on Patreon, which he uses to post text, videos, and podcasts for his subscribers to discuss (T0085: Develop Text-Based Content, T0087: Develop Video-Based Content, T0088: Develop Audio-Based Content, T0146: Account Asset, T0115: Post Content, T0152.012: Subscription Service Platform, T0151.014: Comments Section, T0155.006: Subscription Access Asset). | | [T0087 Develop Video-Based Content](../../generated_pages/techniques/T0087.md) | In this article VICE News discusses a report produced by Advance Democracy on people who use Patreon to spread the false claim that an impending ice age will reverse the harms of the ongoing climate crisis:

“The spread of climate misinformation is prolific on social media, as well as on sites like Patreon, where users are actually financially compensated through the platform for spreading falsehoods,” Daniel Jones, president of Advance Democracy, told VICE News.

“Companies hosting and promoting climate misinformation have a responsibility to take action to reduce dangerous misinformation, as falsehoods about climate science are every bit as dangerous as lies about vaccinations and disinformation about our elections.”

Patreon did not respond to VICE News’ request for comment on the report’s findings.

One of the biggest accounts spreading climate conspiracies is ADAPT 2030, which is run by David DuByne, who has 1,100 followers on Patreon. He is currently making over $3,500 every month from his subscribers.

[The science DuByne relies on does not support his hypothesis. However,] this has not prevented DuByne and many others from preying on people’s fears about climate change to spread conspiracies about an impending ice age, which they say will miraculously fix all of earth’s climate problems.

DuByne offers seven different membership levels for supporters, beginning at just $1 per month.

The most expensive costs $100 a month, and gives patrons “a private 20-minute call with David DuByne once per month, to discuss your particular preparedness issues or concerns.” So far just two people are paying this amount.

The researchers also found at least eight other accounts on Patreon that have spread climate change conspiracy theories as part of wider conspiracy sharing, including baseless claims about COVID-19 and the legitimacy of Joe Biden’s presidency. Some of these accounts are earning over $600 per month.


David DuByne created an account on Patreon, which he uses to post text, videos, and podcasts for his subscribers to discuss (T0085: Develop Text-Based Content, T0087: Develop Video-Based Content, T0088: Develop Audio-Based Content, T0146: Account Asset, T0115: Post Content, T0152.012: Subscription Service Platform, T0151.014: Comments Section, T0155.006: Subscription Access Asset). | | [T0088 Develop Audio-Based Content](../../generated_pages/techniques/T0088.md) | In this article VICE News discusses a report produced by Advance Democracy on people who use Patreon to spread the false claim that an impending ice age will reverse the harms of the ongoing climate crisis:

“The spread of climate misinformation is prolific on social media, as well as on sites like Patreon, where users are actually financially compensated through the platform for spreading falsehoods,” Daniel Jones, president of Advance Democracy, told VICE News.

“Companies hosting and promoting climate misinformation have a responsibility to take action to reduce dangerous misinformation, as falsehoods about climate science are every bit as dangerous as lies about vaccinations and disinformation about our elections.”

Patreon did not respond to VICE News’ request for comment on the report’s findings.

One of the biggest accounts spreading climate conspiracies is ADAPT 2030, which is run by David DuByne, who has 1,100 followers on Patreon. He is currently making over $3,500 every month from his subscribers.

[The science DuByne relies on does not support his hypothesis. However,] this has not prevented DuByne and many others from preying on people’s fears about climate change to spread conspiracies about an impending ice age, which they say will miraculously fix all of earth’s climate problems.

DuByne offers seven different membership levels for supporters, beginning at just $1 per month.

The most expensive costs $100 a month, and gives patrons “a private 20-minute call with David DuByne once per month, to discuss your particular preparedness issues or concerns.” So far just two people are paying this amount.

The researchers also found at least eight other accounts on Patreon that have spread climate change conspiracy theories as part of wider conspiracy sharing, including baseless claims about COVID-19 and the legitimacy of Joe Biden’s presidency. Some of these accounts are earning over $600 per month.


David DuByne created an account on Patreon, which he uses to post text, videos, and podcasts for his subscribers to discuss (T0085: Develop Text-Based Content, T0087: Develop Video-Based Content, T0088: Develop Audio-Based Content, T0146: Account Asset, T0115: Post Content, T0152.012: Subscription Service Platform, T0151.014: Comments Section, T0155.006: Subscription Access Asset). | -| [T0146 Account Asset](../../generated_pages/techniques/T0146.md) | In this article VICE News discusses a report produced by Advance Democracy on people who use Patreon to spread the false claim that an impending ice age will reverse the harms of the ongoing climate crisis:

“The spread of climate misinformation is prolific on social media, as well as on sites like Patreon, where users are actually financially compensated through the platform for spreading falsehoods,” Daniel Jones, president of Advance Democracy, told VICE News.

“Companies hosting and promoting climate misinformation have a responsibility to take action to reduce dangerous misinformation, as falsehoods about climate science are every bit as dangerous as lies about vaccinations and disinformation about our elections.”

Patreon did not respond to VICE News’ request for comment on the report’s findings.

One of the biggest accounts spreading climate conspiracies is ADAPT 2030, which is run by David DuByne, who has 1,100 followers on Patreon. He is currently making over $3,500 every month from his subscribers.

[The science DuByne relies on does not support his hypothesis. However,] this has not prevented DuByne and many others from preying on people’s fears about climate change to spread conspiracies about an impending ice age, which they say will miraculously fix all of earth’s climate problems.

DuByne offers seven different membership levels for supporters, beginning at just $1 per month.

The most expensive costs $100 a month, and gives patrons “a private 20-minute call with David DuByne once per month, to discuss your particular preparedness issues or concerns.” So far just two people are paying this amount.

The researchers also found at least eight other accounts on Patreon that have spread climate change conspiracy theories as part of wider conspiracy sharing, including baseless claims about COVID-19 and the legitimacy of Joe Biden’s presidency. Some of these accounts are earning over $600 per month.


David DuByne created an account on Patreon, which he uses to post text, videos, and podcasts for his subscribers to discuss (T0085: Develop Text-Based Content, T0087: Develop Video-Based Content, T0088: Develop Audio-Based Content, T0146: Account Asset, T0115: Post Content, T0152.012: Subscription Service Platform, T0151.014: Comments Section, T0155.006: Subscription Access Asset). | | [T0151.014 Comments Section](../../generated_pages/techniques/T0151.014.md) | In this article VICE News discusses a report produced by Advance Democracy on people who use Patreon to spread the false claim that an impending ice age will reverse the harms of the ongoing climate crisis:

“The spread of climate misinformation is prolific on social media, as well as on sites like Patreon, where users are actually financially compensated through the platform for spreading falsehoods,” Daniel Jones, president of Advance Democracy, told VICE News.

“Companies hosting and promoting climate misinformation have a responsibility to take action to reduce dangerous misinformation, as falsehoods about climate science are every bit as dangerous as lies about vaccinations and disinformation about our elections.”

Patreon did not respond to VICE News’ request for comment on the report’s findings.

One of the biggest accounts spreading climate conspiracies is ADAPT 2030, which is run by David DuByne, who has 1,100 followers on Patreon. He is currently making over $3,500 every month from his subscribers.

[The science DuByne relies on does not support his hypothesis. However,] this has not prevented DuByne and many others from preying on people’s fears about climate change to spread conspiracies about an impending ice age, which they say will miraculously fix all of earth’s climate problems.

DuByne offers seven different membership levels for supporters, beginning at just $1 per month.

The most expensive costs $100 a month, and gives patrons “a private 20-minute call with David DuByne once per month, to discuss your particular preparedness issues or concerns.” So far just two people are paying this amount.

The researchers also found at least eight other accounts on Patreon that have spread climate change conspiracy theories as part of wider conspiracy sharing, including baseless claims about COVID-19 and the legitimacy of Joe Biden’s presidency. Some of these accounts are earning over $600 per month.


David DuByne created an account on Patreon, which he uses to post text, videos, and podcasts for his subscribers to discuss (T0085: Develop Text-Based Content, T0087: Develop Video-Based Content, T0088: Develop Audio-Based Content, T0146: Account Asset, T0115: Post Content, T0152.012: Subscription Service Platform, T0151.014: Comments Section, T0155.006: Subscription Access Asset). | | [T0152.012 Subscription Service Platform](../../generated_pages/techniques/T0152.012.md) | In this article VICE News discusses a report produced by Advance Democracy on people who use Patreon to spread the false claim that an impending ice age will reverse the harms of the ongoing climate crisis:

“The spread of climate misinformation is prolific on social media, as well as on sites like Patreon, where users are actually financially compensated through the platform for spreading falsehoods,” Daniel Jones, president of Advance Democracy, told VICE News.

“Companies hosting and promoting climate misinformation have a responsibility to take action to reduce dangerous misinformation, as falsehoods about climate science are every bit as dangerous as lies about vaccinations and disinformation about our elections.”

Patreon did not respond to VICE News’ request for comment on the report’s findings.

One of the biggest accounts spreading climate conspiracies is ADAPT 2030, which is run by David DuByne, who has 1,100 followers on Patreon. He is currently making over $3,500 every month from his subscribers.

[The science DuByne relies on does not support his hypothesis. However,] this has not prevented DuByne and many others from preying on people’s fears about climate change to spread conspiracies about an impending ice age, which they say will miraculously fix all of earth’s climate problems.

DuByne offers seven different membership levels for supporters, beginning at just $1 per month.

The most expensive costs $100 a month, and gives patrons “a private 20-minute call with David DuByne once per month, to discuss your particular preparedness issues or concerns.” So far just two people are paying this amount.

The researchers also found at least eight other accounts on Patreon that have spread climate change conspiracy theories as part of wider conspiracy sharing, including baseless claims about COVID-19 and the legitimacy of Joe Biden’s presidency. Some of these accounts are earning over $600 per month.


David DuByne created an account on Patreon, which he uses to post text, videos, and podcasts for his subscribers to discuss (T0085: Develop Text-Based Content, T0087: Develop Video-Based Content, T0088: Develop Audio-Based Content, T0146: Account Asset, T0115: Post Content, T0152.012: Subscription Service Platform, T0151.014: Comments Section, T0155.006: Subscription Access Asset). | -| [T0155.006 Subscription Access Asset](../../generated_pages/techniques/T0155.006.md) | In this article VICE News discusses a report produced by Advance Democracy on people who use Patreon to spread the false claim that an impending ice age will reverse the harms of the ongoing climate crisis:

“The spread of climate misinformation is prolific on social media, as well as on sites like Patreon, where users are actually financially compensated through the platform for spreading falsehoods,” Daniel Jones, president of Advance Democracy, told VICE News.

“Companies hosting and promoting climate misinformation have a responsibility to take action to reduce dangerous misinformation, as falsehoods about climate science are every bit as dangerous as lies about vaccinations and disinformation about our elections.”

Patreon did not respond to VICE News’ request for comment on the report’s findings.

One of the biggest accounts spreading climate conspiracies is ADAPT 2030, which is run by David DuByne, who has 1,100 followers on Patreon. He is currently making over $3,500 every month from his subscribers.

[The science DuByne relies on does not support his hypothesis. However,] this has not prevented DuByne and many others from preying on people’s fears about climate change to spread conspiracies about an impending ice age, which they say will miraculously fix all of earth’s climate problems.

DuByne offers seven different membership levels for supporters, beginning at just $1 per month.

The most expensive costs $100 a month, and gives patrons “a private 20-minute call with David DuByne once per month, to discuss your particular preparedness issues or concerns.” So far just two people are paying this amount.

The researchers also found at least eight other accounts on Patreon that have spread climate change conspiracy theories as part of wider conspiracy sharing, including baseless claims about COVID-19 and the legitimacy of Joe Biden’s presidency. Some of these accounts are earning over $600 per month.


David DuByne created an account on Patreon, which he uses to post text, videos, and podcasts for his subscribers to discuss (T0085: Develop Text-Based Content, T0087: Develop Video-Based Content, T0088: Develop Audio-Based Content, T0146: Account Asset, T0115: Post Content, T0152.012: Subscription Service Platform, T0151.014: Comments Section, T0155.006: Subscription Access Asset). | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/incidents/I00113.md b/generated_pages/incidents/I00113.md index 6889665..93197e4 100644 --- a/generated_pages/incidents/I00113.md +++ b/generated_pages/incidents/I00113.md @@ -29,8 +29,6 @@ | [T0150.004 Repurposed Asset](../../generated_pages/techniques/T0150.004.md) | Researchers at Mozilla examined influence operations targeting Kenyan citizens on Twitter in 2021, providing “a grim window into the booming and shadowy industry of Twitter influencers for political hire here in Kenya”. The report touches upon how actors gained access to Twitter accounts, and what personas they presented:<br>

Verified accounts are complicit. One influencer we spoke to mentioned that the people who own coveted “blue check” accounts will often rent them out for disinformation campaigns. These verified accounts can improve the campaign’s chances of trending. Says one interviewee: “The owner of the account usually receives a cut of the campaign loot”.

[...]

Many of the accounts we examined appear to give an aura of authenticity, but in reality they are not authentic. Simply looking at their date of creation won’t give you a hint as to their purpose. We had to dig deeper. The profile pictures and content of some of the accounts gave us the answers we were looking for. A common tactic these accounts utilize is using suggestive pictures of women to bait men into following them, or at least pay attention. In terms of content, many of these accounts tweeted off the same hashtags for days on end and will constantly retweet a specific set of accounts.


Actors participating in this operation rented verified Twitter accounts (in 2021 a checkmark on Twitter verified a user’s identity), which were repurposed and given updated account imagery (T0146.003: Verified Account Asset, T0150.007: Rented Asset, T0150.004: Repurposed Asset, T0145.006: Attractive Person Account Imagery). | | [T0150.007 Rented Asset](../../generated_pages/techniques/T0150.007.md) | Researchers at Mozilla examined influence operations targeting Kenyan citizens on Twitter in 2021, providing “a grim window into the booming and shadowy industry of Twitter influencers for political hire here in Kenya”. The report touches upon how actors gained access to Twitter accounts, and what personas they presented:<br>

Verified accounts are complicit. One influencer we spoke to mentioned that the people who own coveted “blue check” accounts will often rent them out for disinformation campaigns. These verified accounts can improve the campaign’s chances of trending. Says one interviewee: “The owner of the account usually receives a cut of the campaign loot”.

[...]

Many of the accounts we examined appear to give an aura of authenticity, but in reality they are not authentic. Simply looking at their date of creation won’t give you a hint as to their purpose. We had to dig deeper. The profile pictures and content of some of the accounts gave us the answers we were looking for. A common tactic these accounts utilize is using suggestive pictures of women to bait men into following them, or at least pay attention. In terms of content, many of these accounts tweeted off the same hashtags for days on end and will constantly retweet a specific set of accounts.


Actors participating in this operation rented verified Twitter accounts (in 2021 a checkmark on Twitter verified a user’s identity), which were repurposed and given updated account imagery (T0146.003: Verified Account Asset, T0150.007: Rented Asset, T0150.004: Repurposed Asset, T0145.006: Attractive Person Account Imagery). | | [T0151.004 Chat Platform](../../generated_pages/techniques/T0151.004.md) | Researchers at Mozilla examined influence operations targeting Kenyan citizens on Twitter in 2021, providing “a grim window into the booming and shadowy industry of Twitter influencers for political hire here in Kenya”, and giving insight into operations’ operationalisation:<br>

In our interviews with one of the influencers, they informed us of the agile tactics they use to organize and avoid detection. For example, when it’s time to carry out the campaign the influencers would be added to a Whatsapp group. Here, they received direction about what to post, the hashtags to use, which tweets to engage with and who to target. Synchronizing the tweets was also incredibly important for them. It’s what enables them to achieve their goal of trending on Twitter and gain amplification.

[...]

They revealed to us that those participating in the exercise are paid roughly between $10 and $15 to participate in three campaigns per day. Each campaign execution involves tweeting about the hashtags of the day until it appears on the trending section of Twitter. Additionally, some individuals have managed to reach retainer level and get paid about $250 per month. Their job is to make sure the campaigns are executed on a day-by-day basis with different hashtags.


An M-PESA account (T0148.002: Bank Account Asset, T0148.001: Online Banking Platform) was used to pay campaign participants.

Participants were organised in WhatsApp groups (T0129.005: Coordinate on Encrypted/Closed Networks, T0151.007: Chat Broadcast Group, T0151.004: Chat Platform), in which they planned how to get campaign content trending on Twitter (T0121: Manipulate Platform Algorithm, T0151.008: Microblogging Platform). | -| [T0151.007 Chat Broadcast Group](../../generated_pages/techniques/T0151.007.md) | Researchers at Mozilla examined influence operations targeting Kenyan citizens on Twitter in 2021, providing “a grim window into the booming and shadowy industry of Twitter influencers for political hire here in Kenya”, and giving insight into operations’ operationalisation:

In our interviews with one of the influencers, they informed us of the agile tactics they use to organize and avoid detection. For example, when it’s time to carry out the campaign the influencers would be added to a Whatsapp group. Here, they received direction about what to post, the hashtags to use, which tweets to engage with and who to target. Synchronizing the tweets was also incredibly important for them. It’s what enables them to achieve their goal of trending on Twitter and gain amplification.

[...]

They revealed to us that those participating in the exercise are paid roughly between $10 and $15 to participate in three campaigns per day. Each campaign execution involves tweeting about the hashtags of the day until it appears on the trending section of Twitter. Additionally, some individuals have managed to reach retainer level and get paid about $250 per month. Their job is to make sure the campaigns are executed on a day-by-day basis with different hashtags.


An M-PESA account (T0148.002: Bank Account Asset, T0148.001: Online Banking Platform) was used to pay campaign participants.

Participants were organised in WhatsApp groups (T0129.005: Coordinate on Encrypted/Closed Networks, T0151.007: Chat Broadcast Group, T0151.004: Chat Platform), in which they planned how to get campaign content trending on Twitter (T0121: Manipulate Platform Algorithm, T0151.008: Microblogging Platform). | | [T0151.008 Microblogging Platform](../../generated_pages/techniques/T0151.008.md) | Researchers at Mozilla examined influence operations targeting Kenyan citizens on Twitter in 2021, providing “a grim window into the booming and shadowy industry of Twitter influencers for political hire here in Kenya”, and giving insight into operations’ operationalisation:

In our interviews with one of the influencers, they informed us of the agile tactics they use to organize and avoid detection. For example, when it’s time to carry out the campaign the influencers would be added to a Whatsapp group. Here, they received direction about what to post, the hashtags to use, which tweets to engage with and who to target. Synchronizing the tweets was also incredibly important for them. It’s what enables them to achieve their goal of trending on Twitter and gain amplification.

[...]

They revealed to us that those participating in the exercise are paid roughly between $10 and $15 to participate in three campaigns per day. Each campaign execution involves tweeting about the hashtags of the day until it appears on the trending section of Twitter. Additionally, some individuals have managed to reach retainer level and get paid about $250 per month. Their job is to make sure the campaigns are executed on a day-by-day basis with different hashtags.


An M-PESA account (T0148.002: Bank Account Asset, T0148.001: Online Banking Platform) was used to pay campaign participants.

Participants were organised in WhatsApp groups (T0129.005: Coordinate on Encrypted/Closed Networks, T0151.007: Chat Broadcast Group, T0151.004: Chat Platform), in which they planned how to get campaign content trending on Twitter (T0121: Manipulate Platform Algorithm, T0151.008: Microblogging Platform). | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/incidents/I00114.md b/generated_pages/incidents/I00114.md index 2a7335a..da98278 100644 --- a/generated_pages/incidents/I00114.md +++ b/generated_pages/incidents/I00114.md @@ -21,7 +21,6 @@ | Technique | Description given for this incident | | --------- | ------------------------- | -| [T0097.208 Social Cause Persona](../../generated_pages/techniques/T0097.208.md) | This report examines internal Facebook communications which reveal employees’ concerns about how the platform’s algorithm was recommending users join extremist conspiracy groups.

In summer 2019, a new Facebook user named Carol Smith signed up for the platform, describing herself as a politically conservative mother from Wilmington, North Carolina. Smith’s account indicated an interest in politics, parenting and Christianity and followed a few of her favorite brands, including Fox News and then-President Donald Trump.

Though Smith had never expressed interest in conspiracy theories, in just two days Facebook was recommending she join groups dedicated to QAnon, a sprawling and baseless conspiracy theory and movement that claimed Trump was secretly saving the world from a cabal of pedophiles and Satanists.

Smith didn’t follow the recommended QAnon groups, but whatever algorithm Facebook was using to determine how she should engage with the platform pushed ahead just the same. Within one week, Smith’s feed was full of groups and pages that had violated Facebook’s own rules, including those against hate speech and disinformation.

Smith wasn’t a real person. A researcher employed by Facebook invented the account, along with those of other fictitious “test users” in 2019 and 2020, as part of an experiment in studying the platform’s role in misinforming and polarizing users through its recommendations systems.

That researcher said Smith’s Facebook experience was “a barrage of extreme, conspiratorial, and graphic content.”


Facebook’s algorithm suggested users join groups which supported the QAnon movement (T0151.001: Social Media Platform, T0151.002: Online Community Group, T0153.006: Content Recommendation Algorithm, T0097.208: Social Cause Persona).

Further investigation by Facebook uncovered that its advertising platform had been used to promote QAnon narratives (T0146: Account Asset, T0114: Deliver Ads, T0153.005: Online Advertising Platform):

For years, company researchers had been running experiments like Carol Smith’s to gauge the platform’s hand in radicalizing users, according to the documents seen by NBC News.

This internal work repeatedly found that recommendation tools pushed users into extremist groups, findings that helped inform policy changes and tweaks to recommendations and news feed rankings. Those rankings are a tentacled, ever-evolving system widely known as “the algorithm” that pushes content to users. But the research at that time stopped well short of inspiring any movement to change the groups and pages themselves.

That reluctance was indicative of “a pattern at Facebook,” Haugen told reporters this month. “They want the shortest path between their current policies and any action.”

[...]

By summer 2020, Facebook was hosting thousands of private QAnon groups and pages, with millions of members and followers, according to an unreleased internal investigation.

A year after the FBI designated QAnon as a potential domestic terrorist threat in the wake of standoffs, alleged planned kidnappings, harassment campaigns and shootings, Facebook labeled QAnon a “Violence Inciting Conspiracy Network” and banned it from the platform, along with militias and other violent social movements. A small team working across several of Facebook’s departments found its platforms had hosted hundreds of ads on Facebook and Instagram worth thousands of dollars and millions of views, “praising, supporting, or representing” the conspiracy theory.

[...]

For many employees inside Facebook, the enforcement came too late, according to posts left on Workplace, the company’s internal message board.

“We’ve known for over a year now that our recommendation systems can very quickly lead users down the path to conspiracy theories and groups,” one integrity researcher, whose name had been redacted, wrote in a post announcing she was leaving the company. “This fringe group has grown to national prominence, with QAnon congressional candidates and QAnon hashtags and groups trending in the mainstream. We were willing to act only * after * things had spiraled into a dire state.”

While Facebook’s ban initially appeared effective, a problem remained: The removal of groups and pages didn’t wipe out QAnon’s most extreme followers, who continued to organize on the platform.

“There was enough evidence to raise red flags in the expert community that Facebook and other platforms failed to address QAnon’s violent extremist dimension,” said Marc-André Argentino, a research fellow at King’s College London’s International Centre for the Study of Radicalisation, who has extensively studied QAnon.

Believers simply rebranded as anti-child-trafficking groups or migrated to other communities, including those around the anti-vaccine movement.

[...]

These conspiracy groups had become the fastest-growing groups on Facebook, according to the report, but Facebook wasn’t able to control their “meteoric growth,” the researchers wrote, “because we were looking at each entity individually, rather than as a cohesive movement.” A Facebook spokesperson told BuzzFeed News it took many steps to limit election misinformation but that it was unable to catch everything.
| | [T0114 Deliver Ads](../../generated_pages/techniques/T0114.md) | This report examines internal Facebook communications which reveal employees’ concerns about how the platform’s algorithm was recommending users join extremist conspiracy groups.

In summer 2019, a new Facebook user named Carol Smith signed up for the platform, describing herself as a politically conservative mother from Wilmington, North Carolina. Smith’s account indicated an interest in politics, parenting and Christianity and followed a few of her favorite brands, including Fox News and then-President Donald Trump.

Though Smith had never expressed interest in conspiracy theories, in just two days Facebook was recommending she join groups dedicated to QAnon, a sprawling and baseless conspiracy theory and movement that claimed Trump was secretly saving the world from a cabal of pedophiles and Satanists.

Smith didn’t follow the recommended QAnon groups, but whatever algorithm Facebook was using to determine how she should engage with the platform pushed ahead just the same. Within one week, Smith’s feed was full of groups and pages that had violated Facebook’s own rules, including those against hate speech and disinformation.

Smith wasn’t a real person. A researcher employed by Facebook invented the account, along with those of other fictitious “test users” in 2019 and 2020, as part of an experiment in studying the platform’s role in misinforming and polarizing users through its recommendations systems.

That researcher said Smith’s Facebook experience was “a barrage of extreme, conspiratorial, and graphic content.”


Facebook’s algorithm suggested users join groups which supported the QAnon movement (T0151.001: Social Media Platform, T0151.002: Online Community Group, T0153.006: Content Recommendation Algorithm, T0097.208: Social Cause Persona).

Further investigation by Facebook uncovered that its advertising platform had been used to promote QAnon narratives (T0146: Account Asset, T0114: Deliver Ads, T0153.005: Online Advertising Platform):

For years, company researchers had been running experiments like Carol Smith’s to gauge the platform’s hand in radicalizing users, according to the documents seen by NBC News.

This internal work repeatedly found that recommendation tools pushed users into extremist groups, findings that helped inform policy changes and tweaks to recommendations and news feed rankings. Those rankings are a tentacled, ever-evolving system widely known as “the algorithm” that pushes content to users. But the research at that time stopped well short of inspiring any movement to change the groups and pages themselves.

That reluctance was indicative of “a pattern at Facebook,” Haugen told reporters this month. “They want the shortest path between their current policies and any action.”

[...]

By summer 2020, Facebook was hosting thousands of private QAnon groups and pages, with millions of members and followers, according to an unreleased internal investigation.

A year after the FBI designated QAnon as a potential domestic terrorist threat in the wake of standoffs, alleged planned kidnappings, harassment campaigns and shootings, Facebook labeled QAnon a “Violence Inciting Conspiracy Network” and banned it from the platform, along with militias and other violent social movements. A small team working across several of Facebook’s departments found its platforms had hosted hundreds of ads on Facebook and Instagram worth thousands of dollars and millions of views, “praising, supporting, or representing” the conspiracy theory.

[...]

For many employees inside Facebook, the enforcement came too late, according to posts left on Workplace, the company’s internal message board.

“We’ve known for over a year now that our recommendation systems can very quickly lead users down the path to conspiracy theories and groups,” one integrity researcher, whose name had been redacted, wrote in a post announcing she was leaving the company. “This fringe group has grown to national prominence, with QAnon congressional candidates and QAnon hashtags and groups trending in the mainstream. We were willing to act only * after * things had spiraled into a dire state.”

While Facebook’s ban initially appeared effective, a problem remained: The removal of groups and pages didn’t wipe out QAnon’s most extreme followers, who continued to organize on the platform.

“There was enough evidence to raise red flags in the expert community that Facebook and other platforms failed to address QAnon’s violent extremist dimension,” said Marc-André Argentino, a research fellow at King’s College London’s International Centre for the Study of Radicalisation, who has extensively studied QAnon.

Believers simply rebranded as anti-child-trafficking groups or migrated to other communities, including those around the anti-vaccine movement.

[...]

These conspiracy groups had become the fastest-growing groups on Facebook, according to the report, but Facebook wasn’t able to control their “meteoric growth,” the researchers wrote, “because we were looking at each entity individually, rather than as a cohesive movement.” A Facebook spokesperson told BuzzFeed News it took many steps to limit election misinformation but that it was unable to catch everything.
| | [T0151.001 Social Media Platform](../../generated_pages/techniques/T0151.001.md) | This report examines internal Facebook communications which reveal employees’ concerns about how the platform’s algorithm was recommending users join extremist conspiracy groups.

In summer 2019, a new Facebook user named Carol Smith signed up for the platform, describing herself as a politically conservative mother from Wilmington, North Carolina. Smith’s account indicated an interest in politics, parenting and Christianity and followed a few of her favorite brands, including Fox News and then-President Donald Trump.

Though Smith had never expressed interest in conspiracy theories, in just two days Facebook was recommending she join groups dedicated to QAnon, a sprawling and baseless conspiracy theory and movement that claimed Trump was secretly saving the world from a cabal of pedophiles and Satanists.

Smith didn’t follow the recommended QAnon groups, but whatever algorithm Facebook was using to determine how she should engage with the platform pushed ahead just the same. Within one week, Smith’s feed was full of groups and pages that had violated Facebook’s own rules, including those against hate speech and disinformation.

Smith wasn’t a real person. A researcher employed by Facebook invented the account, along with those of other fictitious “test users” in 2019 and 2020, as part of an experiment in studying the platform’s role in misinforming and polarizing users through its recommendations systems.

That researcher said Smith’s Facebook experience was “a barrage of extreme, conspiratorial, and graphic content.”


Facebook’s algorithm suggested users join groups which supported the QAnon movement (T0151.001: Social Media Platform, T0151.002: Online Community Group, T0153.006: Content Recommendation Algorithm, T0097.208: Social Cause Persona).

Further investigation by Facebook uncovered that its advertising platform had been used to promote QAnon narratives (T0146: Account Asset, T0114: Deliver Ads, T0153.005: Online Advertising Platform):

For years, company researchers had been running experiments like Carol Smith’s to gauge the platform’s hand in radicalizing users, according to the documents seen by NBC News.

This internal work repeatedly found that recommendation tools pushed users into extremist groups, findings that helped inform policy changes and tweaks to recommendations and news feed rankings. Those rankings are a tentacled, ever-evolving system widely known as “the algorithm” that pushes content to users. But the research at that time stopped well short of inspiring any movement to change the groups and pages themselves.

That reluctance was indicative of “a pattern at Facebook,” Haugen told reporters this month. “They want the shortest path between their current policies and any action.”

[...]

By summer 2020, Facebook was hosting thousands of private QAnon groups and pages, with millions of members and followers, according to an unreleased internal investigation.

A year after the FBI designated QAnon as a potential domestic terrorist threat in the wake of standoffs, alleged planned kidnappings, harassment campaigns and shootings, Facebook labeled QAnon a “Violence Inciting Conspiracy Network” and banned it from the platform, along with militias and other violent social movements. A small team working across several of Facebook’s departments found its platforms had hosted hundreds of ads on Facebook and Instagram worth thousands of dollars and millions of views, “praising, supporting, or representing” the conspiracy theory.

[...]

For many employees inside Facebook, the enforcement came too late, according to posts left on Workplace, the company’s internal message board.

“We’ve known for over a year now that our recommendation systems can very quickly lead users down the path to conspiracy theories and groups,” one integrity researcher, whose name had been redacted, wrote in a post announcing she was leaving the company. “This fringe group has grown to national prominence, with QAnon congressional candidates and QAnon hashtags and groups trending in the mainstream. We were willing to act only * after * things had spiraled into a dire state.”

While Facebook’s ban initially appeared effective, a problem remained: The removal of groups and pages didn’t wipe out QAnon’s most extreme followers, who continued to organize on the platform.

“There was enough evidence to raise red flags in the expert community that Facebook and other platforms failed to address QAnon’s violent extremist dimension,” said Marc-André Argentino, a research fellow at King’s College London’s International Centre for the Study of Radicalisation, who has extensively studied QAnon.

Believers simply rebranded as anti-child-trafficking groups or migrated to other communities, including those around the anti-vaccine movement.

[...]

These conspiracy groups had become the fastest-growing groups on Facebook, according to the report, but Facebook wasn’t able to control their “meteoric growth,” the researchers wrote, “because we were looking at each entity individually, rather than as a cohesive movement.” A Facebook spokesperson told BuzzFeed News it took many steps to limit election misinformation but that it was unable to catch everything.
| | [T0151.002 Online Community Group](../../generated_pages/techniques/T0151.002.md) | This report examines internal Facebook communications which reveal employees’ concerns about how the platform’s algorithm was recommending users join extremist conspiracy groups.

In summer 2019, a new Facebook user named Carol Smith signed up for the platform, describing herself as a politically conservative mother from Wilmington, North Carolina. Smith’s account indicated an interest in politics, parenting and Christianity and followed a few of her favorite brands, including Fox News and then-President Donald Trump.

Though Smith had never expressed interest in conspiracy theories, in just two days Facebook was recommending she join groups dedicated to QAnon, a sprawling and baseless conspiracy theory and movement that claimed Trump was secretly saving the world from a cabal of pedophiles and Satanists.

Smith didn’t follow the recommended QAnon groups, but whatever algorithm Facebook was using to determine how she should engage with the platform pushed ahead just the same. Within one week, Smith’s feed was full of groups and pages that had violated Facebook’s own rules, including those against hate speech and disinformation.

Smith wasn’t a real person. A researcher employed by Facebook invented the account, along with those of other fictitious “test users” in 2019 and 2020, as part of an experiment in studying the platform’s role in misinforming and polarizing users through its recommendations systems.

That researcher said Smith’s Facebook experience was “a barrage of extreme, conspiratorial, and graphic content.”


Facebook’s algorithm suggested users join groups which supported the QAnon movement (T0151.001: Social Media Platform, T0151.002: Online Community Group, T0153.006: Content Recommendation Algorithm, T0097.208: Social Cause Persona).

Further investigation by Facebook uncovered that its advertising platform had been used to promote QAnon narratives (T0146: Account Asset, T0114: Deliver Ads, T0153.005: Online Advertising Platform):

For years, company researchers had been running experiments like Carol Smith’s to gauge the platform’s hand in radicalizing users, according to the documents seen by NBC News.

This internal work repeatedly found that recommendation tools pushed users into extremist groups, findings that helped inform policy changes and tweaks to recommendations and news feed rankings. Those rankings are a tentacled, ever-evolving system widely known as “the algorithm” that pushes content to users. But the research at that time stopped well short of inspiring any movement to change the groups and pages themselves.

That reluctance was indicative of “a pattern at Facebook,” Haugen told reporters this month. “They want the shortest path between their current policies and any action.”

[...]

By summer 2020, Facebook was hosting thousands of private QAnon groups and pages, with millions of members and followers, according to an unreleased internal investigation.

A year after the FBI designated QAnon as a potential domestic terrorist threat in the wake of standoffs, alleged planned kidnappings, harassment campaigns and shootings, Facebook labeled QAnon a “Violence Inciting Conspiracy Network” and banned it from the platform, along with militias and other violent social movements. A small team working across several of Facebook’s departments found its platforms had hosted hundreds of ads on Facebook and Instagram worth thousands of dollars and millions of views, “praising, supporting, or representing” the conspiracy theory.

[...]

For many employees inside Facebook, the enforcement came too late, according to posts left on Workplace, the company’s internal message board.

“We’ve known for over a year now that our recommendation systems can very quickly lead users down the path to conspiracy theories and groups,” one integrity researcher, whose name had been redacted, wrote in a post announcing she was leaving the company. “This fringe group has grown to national prominence, with QAnon congressional candidates and QAnon hashtags and groups trending in the mainstream. We were willing to act only * after * things had spiraled into a dire state.”

While Facebook’s ban initially appeared effective, a problem remained: The removal of groups and pages didn’t wipe out QAnon’s most extreme followers, who continued to organize on the platform.

“There was enough evidence to raise red flags in the expert community that Facebook and other platforms failed to address QAnon’s violent extremist dimension,” said Marc-André Argentino, a research fellow at King’s College London’s International Centre for the Study of Radicalisation, who has extensively studied QAnon.

Believers simply rebranded as anti-child-trafficking groups or migrated to other communities, including those around the anti-vaccine movement.

[...]

These conspiracy groups had become the fastest-growing groups on Facebook, according to the report, but Facebook wasn’t able to control their “meteoric growth,” the researchers wrote, “because we were looking at each entity individually, rather than as a cohesive movement.” A Facebook spokesperson told BuzzFeed News it took many steps to limit election misinformation but that it was unable to catch everything.
| @@ -29,4 +28,3 @@ | [T0153.006 Content Recommendation Algorithm](../../generated_pages/techniques/T0153.006.md) | This report examines internal Facebook communications which reveal employees’ concerns about how the platform’s algorithm was recommending users join extremist conspiracy groups.

In summer 2019, a new Facebook user named Carol Smith signed up for the platform, describing herself as a politically conservative mother from Wilmington, North Carolina. Smith’s account indicated an interest in politics, parenting and Christianity and followed a few of her favorite brands, including Fox News and then-President Donald Trump.

Though Smith had never expressed interest in conspiracy theories, in just two days Facebook was recommending she join groups dedicated to QAnon, a sprawling and baseless conspiracy theory and movement that claimed Trump was secretly saving the world from a cabal of pedophiles and Satanists.

Smith didn’t follow the recommended QAnon groups, but whatever algorithm Facebook was using to determine how she should engage with the platform pushed ahead just the same. Within one week, Smith’s feed was full of groups and pages that had violated Facebook’s own rules, including those against hate speech and disinformation.

Smith wasn’t a real person. A researcher employed by Facebook invented the account, along with those of other fictitious “test users” in 2019 and 2020, as part of an experiment in studying the platform’s role in misinforming and polarizing users through its recommendation systems.

That researcher said Smith’s Facebook experience was “a barrage of extreme, conspiratorial, and graphic content.”


Facebook’s algorithm suggested that users join groups supporting the QAnon movement (T0151.001: Social Media Platform, T0151.002: Online Community Group, T0153.006: Content Recommendation Algorithm, T0097.208: Social Cause Persona).

Further investigation by Facebook uncovered that its advertising platform had been used to promote QAnon narratives (T0146: Account Asset, T0114: Deliver Ads, T0153.005: Online Advertising Platform):

For years, company researchers had been running experiments like Carol Smith’s to gauge the platform’s hand in radicalizing users, according to the documents seen by NBC News.

This internal work repeatedly found that recommendation tools pushed users into extremist groups, findings that helped inform policy changes and tweaks to recommendations and news feed rankings. Those rankings are a tentacled, ever-evolving system widely known as “the algorithm” that pushes content to users. But the research at that time stopped well short of inspiring any movement to change the groups and pages themselves.

That reluctance was indicative of “a pattern at Facebook,” Haugen told reporters this month. “They want the shortest path between their current policies and any action.”

[...]

By summer 2020, Facebook was hosting thousands of private QAnon groups and pages, with millions of members and followers, according to an unreleased internal investigation.

A year after the FBI designated QAnon as a potential domestic terrorist threat in the wake of standoffs, alleged planned kidnappings, harassment campaigns and shootings, Facebook labeled QAnon a “Violence Inciting Conspiracy Network” and banned it from the platform, along with militias and other violent social movements. A small team working across several of Facebook’s departments found its platforms had hosted hundreds of ads on Facebook and Instagram worth thousands of dollars and millions of views, “praising, supporting, or representing” the conspiracy theory.

[...]

For many employees inside Facebook, the enforcement came too late, according to posts left on Workplace, the company’s internal message board.

“We’ve known for over a year now that our recommendation systems can very quickly lead users down the path to conspiracy theories and groups,” one integrity researcher, whose name had been redacted, wrote in a post announcing she was leaving the company. “This fringe group has grown to national prominence, with QAnon congressional candidates and QAnon hashtags and groups trending in the mainstream. We were willing to act only *after* things had spiraled into a dire state.”

While Facebook’s ban initially appeared effective, a problem remained: The removal of groups and pages didn’t wipe out QAnon’s most extreme followers, who continued to organize on the platform.

“There was enough evidence to raise red flags in the expert community that Facebook and other platforms failed to address QAnon’s violent extremist dimension,” said Marc-André Argentino, a research fellow at King’s College London’s International Centre for the Study of Radicalisation, who has extensively studied QAnon.

Believers simply rebranded as anti-child-trafficking groups or migrated to other communities, including those around the anti-vaccine movement.

[...]

These conspiracy groups had become the fastest-growing groups on Facebook, according to the report, but Facebook wasn’t able to control their “meteoric growth,” the researchers wrote, “because we were looking at each entity individually, rather than as a cohesive movement.” A Facebook spokesperson told BuzzFeed News it took many steps to limit election misinformation but that it was unable to catch everything.
| -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/incidents/I00116.md b/generated_pages/incidents/I00116.md index dfc96f1..07e9c3c 100644 --- a/generated_pages/incidents/I00116.md +++ b/generated_pages/incidents/I00116.md @@ -21,14 +21,11 @@ | Technique | Description given for this incident | | --------- | ------------------------- | -| [T0097.205 Business Persona](../../generated_pages/techniques/T0097.205.md) | Consumers who complain of poor customer service on X are being targeted by scammers after the social media platform formerly known as Twitter changed its account verification process.

Bank customers and airline passengers are among those at risk of phishing scams when they complain to companies via X. Fraudsters, masquerading as customer service agents, respond under fake X handles and trick victims into disclosing their bank details to get a promised refund.

They typically win the trust of victims by displaying the blue checkmark icon, which until this year denoted accounts that had been officially verified by X.

Changes introduced this year allow the icon to be bought by anyone who pays an £11 monthly fee for the site’s subscription service, renamed this month from Twitter Blue to X Premium. Businesses that pay £950 a month receive a gold tick. X’s terms and conditions do not state whether subscriber accounts are pre-vetted.

Andrew Thomas was contacted by a scam account after posting a complaint to the travel platform Booking.com. “I’d been trying since April to get a refund after our holiday flights were cancelled and finally resorted to X,” he said.

“I received a response asking me to follow them, and DM [direct message] them with a contact number. They then called me via WhatsApp asking for my reference number so they could investigate. Later they called back to say that I would be refunded via their payment partner for which I’d need to download an app.”

Thomas became suspicious and checked the X profile. “It looked like the real thing, but I noticed that there was an unexpected hyphen in the Twitter handle and that it had only joined X in July 2023,” he said.


In this example a newly created paid account on X was used to direct users to other platforms (T0146.002: Paid Account Asset, T0146.003: Verified Account Asset, T0146.005: Lookalike Account ID, T0097.205: Business Persona, T0122: Direct Users to Alternative Platforms, T0143.003: Impersonated Persona, T0151.008: Microblogging Platform, T0150.001: Newly Created Asset). | | [T0122 Direct Users to Alternative Platforms](../../generated_pages/techniques/T0122.md) | Consumers who complain of poor customer service on X are being targeted by scammers after the social media platform formerly known as Twitter changed its account verification process.

Bank customers and airline passengers are among those at risk of phishing scams when they complain to companies via X. Fraudsters, masquerading as customer service agents, respond under fake X handles and trick victims into disclosing their bank details to get a promised refund.

They typically win the trust of victims by displaying the blue checkmark icon, which until this year denoted accounts that had been officially verified by X.

Changes introduced this year allow the icon to be bought by anyone who pays an £11 monthly fee for the site’s subscription service, renamed this month from Twitter Blue to X Premium. Businesses that pay £950 a month receive a gold tick. X’s terms and conditions do not state whether subscriber accounts are pre-vetted.

Andrew Thomas was contacted by a scam account after posting a complaint to the travel platform Booking.com. “I’d been trying since April to get a refund after our holiday flights were cancelled and finally resorted to X,” he said.

“I received a response asking me to follow them, and DM [direct message] them with a contact number. They then called me via WhatsApp asking for my reference number so they could investigate. Later they called back to say that I would be refunded via their payment partner for which I’d need to download an app.”

Thomas became suspicious and checked the X profile. “It looked like the real thing, but I noticed that there was an unexpected hyphen in the Twitter handle and that it had only joined X in July 2023,” he said.


In this example a newly created paid account on X was used to direct users to other platforms (T0146.002: Paid Account Asset, T0146.003: Verified Account Asset, T0146.005: Lookalike Account ID, T0097.205: Business Persona, T0122: Direct Users to Alternative Platforms, T0143.003: Impersonated Persona, T0151.008: Microblogging Platform, T0150.001: Newly Created Asset). | | [T0143.003 Impersonated Persona](../../generated_pages/techniques/T0143.003.md) | Consumers who complain of poor customer service on X are being targeted by scammers after the social media platform formerly known as Twitter changed its account verification process.

Bank customers and airline passengers are among those at risk of phishing scams when they complain to companies via X. Fraudsters, masquerading as customer service agents, respond under fake X handles and trick victims into disclosing their bank details to get a promised refund.

They typically win the trust of victims by displaying the blue checkmark icon, which until this year denoted accounts that had been officially verified by X.

Changes introduced this year allow the icon to be bought by anyone who pays an £11 monthly fee for the site’s subscription service, renamed this month from Twitter Blue to X Premium. Businesses that pay £950 a month receive a gold tick. X’s terms and conditions do not state whether subscriber accounts are pre-vetted.

Andrew Thomas was contacted by a scam account after posting a complaint to the travel platform Booking.com. “I’d been trying since April to get a refund after our holiday flights were cancelled and finally resorted to X,” he said.

“I received a response asking me to follow them, and DM [direct message] them with a contact number. They then called me via WhatsApp asking for my reference number so they could investigate. Later they called back to say that I would be refunded via their payment partner for which I’d need to download an app.”

Thomas became suspicious and checked the X profile. “It looked like the real thing, but I noticed that there was an unexpected hyphen in the Twitter handle and that it had only joined X in July 2023,” he said.


In this example a newly created paid account on X was used to direct users to other platforms (T0146.002: Paid Account Asset, T0146.003: Verified Account Asset, T0146.005: Lookalike Account ID, T0097.205: Business Persona, T0122: Direct Users to Alternative Platforms, T0143.003: Impersonated Persona, T0151.008: Microblogging Platform, T0150.001: Newly Created Asset). | | [T0146.002 Paid Account Asset](../../generated_pages/techniques/T0146.002.md) | Consumers who complain of poor customer service on X are being targeted by scammers after the social media platform formerly known as Twitter changed its account verification process.

Bank customers and airline passengers are among those at risk of phishing scams when they complain to companies via X. Fraudsters, masquerading as customer service agents, respond under fake X handles and trick victims into disclosing their bank details to get a promised refund.

They typically win the trust of victims by displaying the blue checkmark icon, which until this year denoted accounts that had been officially verified by X.

Changes introduced this year allow the icon to be bought by anyone who pays an £11 monthly fee for the site’s subscription service, renamed this month from Twitter Blue to X Premium. Businesses that pay £950 a month receive a gold tick. X’s terms and conditions do not state whether subscriber accounts are pre-vetted.

Andrew Thomas was contacted by a scam account after posting a complaint to the travel platform Booking.com. “I’d been trying since April to get a refund after our holiday flights were cancelled and finally resorted to X,” he said.

“I received a response asking me to follow them, and DM [direct message] them with a contact number. They then called me via WhatsApp asking for my reference number so they could investigate. Later they called back to say that I would be refunded via their payment partner for which I’d need to download an app.”

Thomas became suspicious and checked the X profile. “It looked like the real thing, but I noticed that there was an unexpected hyphen in the Twitter handle and that it had only joined X in July 2023,” he said.


In this example a newly created paid account on X was used to direct users to other platforms (T0146.002: Paid Account Asset, T0146.003: Verified Account Asset, T0146.005: Lookalike Account ID, T0097.205: Business Persona, T0122: Direct Users to Alternative Platforms, T0143.003: Impersonated Persona, T0151.008: Microblogging Platform, T0150.001: Newly Created Asset). | | [T0146.003 Verified Account Asset](../../generated_pages/techniques/T0146.003.md) | Consumers who complain of poor customer service on X are being targeted by scammers after the social media platform formerly known as Twitter changed its account verification process.

Bank customers and airline passengers are among those at risk of phishing scams when they complain to companies via X. Fraudsters, masquerading as customer service agents, respond under fake X handles and trick victims into disclosing their bank details to get a promised refund.

They typically win the trust of victims by displaying the blue checkmark icon, which until this year denoted accounts that had been officially verified by X.

Changes introduced this year allow the icon to be bought by anyone who pays an £11 monthly fee for the site’s subscription service, renamed this month from Twitter Blue to X Premium. Businesses that pay £950 a month receive a gold tick. X’s terms and conditions do not state whether subscriber accounts are pre-vetted.

Andrew Thomas was contacted by a scam account after posting a complaint to the travel platform Booking.com. “I’d been trying since April to get a refund after our holiday flights were cancelled and finally resorted to X,” he said.

“I received a response asking me to follow them, and DM [direct message] them with a contact number. They then called me via WhatsApp asking for my reference number so they could investigate. Later they called back to say that I would be refunded via their payment partner for which I’d need to download an app.”

Thomas became suspicious and checked the X profile. “It looked like the real thing, but I noticed that there was an unexpected hyphen in the Twitter handle and that it had only joined X in July 2023,” he said.


In this example a newly created paid account on X was used to direct users to other platforms (T0146.002: Paid Account Asset, T0146.003: Verified Account Asset, T0146.005: Lookalike Account ID, T0097.205: Business Persona, T0122: Direct Users to Alternative Platforms, T0143.003: Impersonated Persona, T0151.008: Microblogging Platform, T0150.001: Newly Created Asset). | | [T0146.005 Lookalike Account ID](../../generated_pages/techniques/T0146.005.md) | Consumers who complain of poor customer service on X are being targeted by scammers after the social media platform formerly known as Twitter changed its account verification process.

Bank customers and airline passengers are among those at risk of phishing scams when they complain to companies via X. Fraudsters, masquerading as customer service agents, respond under fake X handles and trick victims into disclosing their bank details to get a promised refund.

They typically win the trust of victims by displaying the blue checkmark icon, which until this year denoted accounts that had been officially verified by X.

Changes introduced this year allow the icon to be bought by anyone who pays an £11 monthly fee for the site’s subscription service, renamed this month from Twitter Blue to X Premium. Businesses that pay £950 a month receive a gold tick. X’s terms and conditions do not state whether subscriber accounts are pre-vetted.

Andrew Thomas was contacted by a scam account after posting a complaint to the travel platform Booking.com. “I’d been trying since April to get a refund after our holiday flights were cancelled and finally resorted to X,” he said.

“I received a response asking me to follow them, and DM [direct message] them with a contact number. They then called me via WhatsApp asking for my reference number so they could investigate. Later they called back to say that I would be refunded via their payment partner for which I’d need to download an app.”

Thomas became suspicious and checked the X profile. “It looked like the real thing, but I noticed that there was an unexpected hyphen in the Twitter handle and that it had only joined X in July 2023,” he said.


In this example a newly created paid account on X was used to direct users to other platforms (T0146.002: Paid Account Asset, T0146.003: Verified Account Asset, T0146.005: Lookalike Account ID, T0097.205: Business Persona, T0122: Direct Users to Alternative Platforms, T0143.003: Impersonated Persona, T0151.008: Microblogging Platform, T0150.001: Newly Created Asset). | | [T0150.001 Newly Created Asset](../../generated_pages/techniques/T0150.001.md) | Consumers who complain of poor customer service on X are being targeted by scammers after the social media platform formerly known as Twitter changed its account verification process.

Bank customers and airline passengers are among those at risk of phishing scams when they complain to companies via X. Fraudsters, masquerading as customer service agents, respond under fake X handles and trick victims into disclosing their bank details to get a promised refund.

They typically win the trust of victims by displaying the blue checkmark icon, which until this year denoted accounts that had been officially verified by X.

Changes introduced this year allow the icon to be bought by anyone who pays an £11 monthly fee for the site’s subscription service, renamed this month from Twitter Blue to X Premium. Businesses that pay £950 a month receive a gold tick. X’s terms and conditions do not state whether subscriber accounts are pre-vetted.

Andrew Thomas was contacted by a scam account after posting a complaint to the travel platform Booking.com. “I’d been trying since April to get a refund after our holiday flights were cancelled and finally resorted to X,” he said.

“I received a response asking me to follow them, and DM [direct message] them with a contact number. They then called me via WhatsApp asking for my reference number so they could investigate. Later they called back to say that I would be refunded via their payment partner for which I’d need to download an app.”

Thomas became suspicious and checked the X profile. “It looked like the real thing, but I noticed that there was an unexpected hyphen in the Twitter handle and that it had only joined X in July 2023,” he said.


In this example a newly created paid account on X was used to direct users to other platforms (T0146.002: Paid Account Asset, T0146.003: Verified Account Asset, T0146.005: Lookalike Account ID, T0097.205: Business Persona, T0122: Direct Users to Alternative Platforms, T0143.003: Impersonated Persona, T0151.008: Microblogging Platform, T0150.001: Newly Created Asset). | -| [T0151.008 Microblogging Platform](../../generated_pages/techniques/T0151.008.md) | Consumers who complain of poor customer service on X are being targeted by scammers after the social media platform formerly known as Twitter changed its account verification process.

Bank customers and airline passengers are among those at risk of phishing scams when they complain to companies via X. Fraudsters, masquerading as customer service agents, respond under fake X handles and trick victims into disclosing their bank details to get a promised refund.

They typically win the trust of victims by displaying the blue checkmark icon, which until this year denoted accounts that had been officially verified by X.

Changes introduced this year allow the icon to be bought by anyone who pays an £11 monthly fee for the site’s subscription service, renamed this month from Twitter Blue to X Premium. Businesses that pay £950 a month receive a gold tick. X’s terms and conditions do not state whether subscriber accounts are pre-vetted.

Andrew Thomas was contacted by a scam account after posting a complaint to the travel platform Booking.com. “I’d been trying since April to get a refund after our holiday flights were cancelled and finally resorted to X,” he said.

“I received a response asking me to follow them, and DM [direct message] them with a contact number. They then called me via WhatsApp asking for my reference number so they could investigate. Later they called back to say that I would be refunded via their payment partner for which I’d need to download an app.”

Thomas became suspicious and checked the X profile. “It looked like the real thing, but I noticed that there was an unexpected hyphen in the Twitter handle and that it had only joined X in July 2023,” he said.


In this example a newly created paid account on X was used to direct users to other platforms (T0146.002: Paid Account Asset, T0146.003: Verified Account Asset, T0146.005: Lookalike Account ID, T0097.205: Business Persona, T0122: Direct Users to Alternative Platforms, T0143.003: Impersonated Persona, T0151.008: Microblogging Platform, T0150.001: Newly Created Asset). | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/incidents/I00118.md b/generated_pages/incidents/I00118.md index b5763a9..193c928 100644 --- a/generated_pages/incidents/I00118.md +++ b/generated_pages/incidents/I00118.md @@ -23,10 +23,8 @@ | --------- | ------------------------- | | [T0089.001 Obtain Authentic Documents](../../generated_pages/techniques/T0089.001.md) | In an effort to prove that the developers behind a popular multiplayer vehicle combat game had made a mistake, a player went ahead and published classified British military documents about one of the real-life tanks featured in the game.

This truly bizarre turn of events recently occurred in the public forum for War Thunder, a free-to-play multiplayer combat sim featuring modern land, air, and sea craft. Getting a small detail wrong on a piece of equipment might not be a big deal for the average gamer, but for the War Thunder crowd it sure as hell is. With 25,000 devoted players, the game very much bills itself as the military vehicle combat simulator.

A player, who identified himself as a British tank commander, claimed that the game’s developers at Gaijin Entertainment had inaccurately represented the Challenger 2 main battle tank used by the British military.

The self-described tank commander’s bio listed his location as Tidworth Camp in Wiltshire, England, according to the UK Defence Journal, which reported that the base is home to the Royal Tank Regiment, which fields Challenger 2 tanks.

The player, who went by the handle Pyrophoric, reportedly shared an image on the War Thunder forum of the tank’s specs that were pulled from the Challenger 2’s Army Equipment Support Publication, which is essentially a technical manual.

[...]

A moderator for the forum, whose handle is “Templar_”, explained that the developer had removed the material after they received confirmation from the Ministry of Defence that the document is still in fact classified.


A user of War Thunder’s forums posted confidential documents to win an argument (T0089.001: Obtain Authentic Documents, T0146: Account Asset, T0097.105: Military Personnel Persona, T0115: Post Content, T0143.001: Authentic Persona, T0151.009: Legacy Online Forum Platform). | | [T0097.105 Military Personnel Persona](../../generated_pages/techniques/T0097.105.md) | In an effort to prove that the developers behind a popular multiplayer vehicle combat game had made a mistake, a player went ahead and published classified British military documents about one of the real-life tanks featured in the game.

This truly bizarre turn of events recently occurred in the public forum for War Thunder, a free-to-play multiplayer combat sim featuring modern land, air, and sea craft. Getting a small detail wrong on a piece of equipment might not be a big deal for the average gamer, but for the War Thunder crowd it sure as hell is. With 25,000 devoted players, the game very much bills itself as the military vehicle combat simulator.

A player, who identified himself as a British tank commander, claimed that the game’s developers at Gaijin Entertainment had inaccurately represented the Challenger 2 main battle tank used by the British military.

The self-described tank commander’s bio listed his location as Tidworth Camp in Wiltshire, England, according to the UK Defence Journal, which reported that the base is home to the Royal Tank Regiment, which fields Challenger 2 tanks.

The player, who went by the handle Pyrophoric, reportedly shared an image on the War Thunder forum of the tank’s specs that were pulled from the Challenger 2’s Army Equipment Support Publication, which is essentially a technical manual.

[...]

A moderator for the forum, whose handle is “Templar_”, explained that the developer had removed the material after they received confirmation from the Ministry of Defence that the document is still in fact classified.


A user of War Thunder’s forums posted confidential documents to win an argument (T0089.001: Obtain Authentic Documents, T0146: Account Asset, T0097.105: Military Personnel Persona, T0115: Post Content, T0143.001: Authentic Persona, T0151.009: Legacy Online Forum Platform). | -| [T0115 Post Content](../../generated_pages/techniques/T0115.md) | In an effort to prove that the developers behind a popular multiplayer vehicle combat game had made a mistake, a player went ahead and published classified British military documents about one of the real-life tanks featured in the game.

This truly bizarre turn of events recently occurred in the public forum for War Thunder, a free-to-play multiplayer combat sim featuring modern land, air, and sea craft. Getting a small detail wrong on a piece of equipment might not be a big deal for the average gamer, but for the War Thunder crowd it sure as hell is. With 25,000 devoted players, the game very much bills itself as the military vehicle combat simulator.

A player, who identified himself as a British tank commander, claimed that the game’s developers at Gaijin Entertainment had inaccurately represented the Challenger 2 main battle tank used by the British military.

The self-described tank commander’s bio listed his location as Tidworth Camp in Wiltshire, England, according to the UK Defence Journal, which reported that the base is home to the Royal Tank Regiment, which fields Challenger 2 tanks.

The player, who went by the handle Pyrophoric, reportedly shared an image on the War Thunder forum of the tank’s specs that were pulled from the Challenger 2’s Army Equipment Support Publication, which is essentially a technical manual.

[...]

A moderator for the forum, whose handle is “Templar_”, explained that the developer had removed the material after they received confirmation from the Ministry of Defence that the document is still in fact classified.


A user of War Thunder’s forums posted confidential documents to win an argument (T0089.001: Obtain Authentic Documents, T0146: Account Asset, T0097.105: Military Personnel Persona, T0115: Post Content, T0143.001: Authentic Persona, T0151.009: Legacy Online Forum Platform). | | [T0143.001 Authentic Persona](../../generated_pages/techniques/T0143.001.md) | In an effort to prove that the developers behind a popular multiplayer vehicle combat game had made a mistake, a player went ahead and published classified British military documents about one of the real-life tanks featured in the game.

This truly bizarre turn of events recently occurred in the public forum for War Thunder, a free-to-play multiplayer combat sim featuring modern land, air, and sea craft. Getting a small detail wrong on a piece of equipment might not be a big deal for the average gamer, but for the War Thunder crowd it sure as hell is. With 25,000 devoted players, the game very much bills itself as the military vehicle combat simulator.

A player, who identified himself as a British tank commander, claimed that the game’s developers at Gaijin Entertainment had inaccurately represented the Challenger 2 main battle tank used by the British military.

The self-described tank commander’s bio listed his location as Tidworth Camp in Wiltshire, England, according to the UK Defence Journal, which reported that the base is home to the Royal Tank Regiment, which fields Challenger 2 tanks.

The player, who went by the handle Pyrophoric, reportedly shared an image on the War Thunder forum of the tank’s specs that were pulled from the Challenger 2’s Army Equipment Support Publication, which is essentially a technical manual.

[...]

A moderator for the forum, whose handle is “Templar_”, explained that the developer had removed the material after they received confirmation from the Ministry of Defence that the document is still in fact classified.


A user of War Thunder’s forums posted confidential documents to win an argument (T0089.001: Obtain Authentic Documents, T0146: Account Asset, T0097.105: Military Personnel Persona, T0115: Post Content, T0143.001: Authentic Persona, T0151.009: Legacy Online Forum Platform). | | [T0146 Account Asset](../../generated_pages/techniques/T0146.md) | In an effort to prove that the developers behind a popular multiplayer vehicle combat game had made a mistake, a player went ahead and published classified British military documents about one of the real-life tanks featured in the game.

This truly bizarre turn of events recently occurred in the public forum for War Thunder, a free-to-play multiplayer combat sim featuring modern land, air, and sea craft. Getting a small detail wrong on a piece of equipment might not be a big deal for the average gamer, but for the War Thunder crowd it sure as hell is. With 25,000 devoted players, the game very much bills itself as the military vehicle combat simulator.

A player, who identified himself as a British tank commander, claimed that the game’s developers at Gaijin Entertainment had inaccurately represented the Challenger 2 main battle tank used by the British military.

The self-described tank commander’s bio listed his location as Tidworth Camp in Wiltshire, England, according to the UK Defence Journal, which reported that the base is home to the Royal Tank Regiment, which fields Challenger 2 tanks.

The player, who went by the handle Pyrophoric, reportedly shared an image on the War Thunder forum of the tank’s specs that were pulled from the Challenger 2’s Army Equipment Support Publication, which is essentially a technical manual.

[...]

A moderator for the forum, whose handle is “Templar_”, explained that the developer had removed the material after they received confirmation from the Ministry of Defence that the document is still in fact classified.


A user of War Thunder’s forums posted confidential documents to win an argument (T0089.001: Obtain Authentic Documents, T0146: Account Asset, T0097.105: Military Personnel Persona, T0115: Post Content, T0143.001: Authentic Persona, T0151.009: Legacy Online Forum Platform). | | [T0151.009 Legacy Online Forum Platform](../../generated_pages/techniques/T0151.009.md) | In an effort to prove that the developers behind a popular multiplayer vehicle combat game had made a mistake, a player went ahead and published classified British military documents about one of the real-life tanks featured in the game.

This truly bizarre turn of events recently occurred in the public forum for War Thunder, a free-to-play multiplayer combat sim featuring modern land, air, and sea craft. Getting a small detail wrong on a piece of equipment might not be a big deal for the average gamer, but for the War Thunder crowd it sure as hell is. With 25,000 devoted players, the game very much bills itself as the military vehicle combat simulator.

A player, who identified himself as a British tank commander, claimed that the game’s developers at Gaijin Entertainment had inaccurately represented the Challenger 2 main battle tank used by the British military.

The self-described tank commander’s bio listed his location as Tidworth Camp in Wiltshire, England, according to the UK Defence Journal, which reported that the base is home to the Royal Tank Regiment, which fields Challenger 2 tanks.

The player, who went by the handle Pyrophoric, reportedly shared an image on the War Thunder forum of the tank’s specs that were pulled from the Challenger 2’s Army Equipment Support Publication, which is essentially a technical manual.

[...]

A moderator for the forum, whose handle is “Templar_”, explained that the developer had removed the material after they received confirmation from the Ministry of Defence that the document is still in fact classified.


A user of War Thunder’s forums posted confidential documents to win an argument (T0089.001: Obtain Authentic Documents, T0146: Account Asset, T0097.105: Military Personnel Persona, T0115: Post Content, T0143.001: Authentic Persona, T0151.009: Legacy Online Forum Platform). | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/incidents/I00119.md b/generated_pages/incidents/I00119.md index 38e6a86..20a66f5 100644 --- a/generated_pages/incidents/I00119.md +++ b/generated_pages/incidents/I00119.md @@ -23,12 +23,9 @@ | --------- | ------------------------- | | [T0089 Obtain Private Documents](../../generated_pages/techniques/T0089.md) | An American journalist who runs an independent newsletter published a document [on 26 Sep 2024] that appears to have been stolen from Donald Trump’s presidential campaign — the first public posting of a file that is believed to be part of a dossier that federal officials say is part of an Iranian effort to manipulate the [2024] U.S. election.

The PDF document is a 271-page opposition research file on former President Donald Trump’s running mate, Sen. JD Vance, R-Ohio.

For more than two months, hackers who the U.S. says are tied to Iran have tried to persuade the American media to cover files they stole. No outlets took the bait.

But on Thursday, reporter Ken Klippenstein, who self-publishes on Substack after he left The Intercept this year, published one of the files.

[...]

Reporters who have received the documents describe the same pattern: An AOL account emails them files, signed by a person using the name “Robert,” who is reluctant to speak to their identity or reasons for wanting the documents to receive coverage.

NBC News was not part of the Robert persona’s direct outreach, but it has viewed its correspondence with a reporter at another publication.

One of the emails from the Robert persona previously viewed by NBC News included three large PDF files, each corresponding to Trump’s three reported finalists for vice president. The Vance file appears to be the one Klippenstein hosts on his site.


In this example hackers attributed to Iran used the Robert persona to email journalists hacked documents (T0146: Account Asset, T0097.100: Individual Persona, T0153.001: Email Platform).

The journalist Ken Klippenstein used his existing blog on Substack to host a link to download the document (T0089: Obtain Private Documents, T0097.102: Journalist Persona, T0115: Post Content, T0143.001: Authentic Persona, T0152.001: Blogging Platform, T0152.002: Blog Asset, T0150.003: Pre-Existing Asset). | | [T0097.100 Individual Persona](../../generated_pages/techniques/T0097.100.md) | An American journalist who runs an independent newsletter published a document [on 26 Sep 2024] that appears to have been stolen from Donald Trump’s presidential campaign — the first public posting of a file that is believed to be part of a dossier that federal officials say is part of an Iranian effort to manipulate the [2024] U.S. election.

The PDF document is a 271-page opposition research file on former President Donald Trump’s running mate, Sen. JD Vance, R-Ohio.

For more than two months, hackers who the U.S. says are tied to Iran have tried to persuade the American media to cover files they stole. No outlets took the bait.

But on Thursday, reporter Ken Klippenstein, who self-publishes on Substack after he left The Intercept this year, published one of the files.

[...]

Reporters who have received the documents describe the same pattern: An AOL account emails them files, signed by a person using the name “Robert,” who is reluctant to speak to their identity or reasons for wanting the documents to receive coverage.

NBC News was not part of the Robert persona’s direct outreach, but it has viewed its correspondence with a reporter at another publication.

One of the emails from the Robert persona previously viewed by NBC News included three large PDF files, each corresponding to Trump’s three reported finalists for vice president. The Vance file appears to be the one Klippenstein hosts on his site.


In this example hackers attributed to Iran used the Robert persona to email journalists hacked documents (T0146: Account Asset, T0097.100: Individual Persona, T0153.001: Email Platform).

The journalist Ken Klippenstein used his existing blog on Substack to host a link to download the document (T0089: Obtain Private Documents, T0097.102: Journalist Persona, T0115: Post Content, T0143.001: Authentic Persona, T0152.001: Blogging Platform, T0152.002: Blog Asset, T0150.003: Pre-Existing Asset). | -| [T0097.102 Journalist Persona](../../generated_pages/techniques/T0097.102.md) | An American journalist who runs an independent newsletter published a document [on 26 Sep 2024] that appears to have been stolen from Donald Trump’s presidential campaign — the first public posting of a file that is believed to be part of a dossier that federal officials say is part of an Iranian effort to manipulate the [2024] U.S. election.

The PDF document is a 271-page opposition research file on former President Donald Trump’s running mate, Sen. JD Vance, R-Ohio.

For more than two months, hackers who the U.S. says are tied to Iran have tried to persuade the American media to cover files they stole. No outlets took the bait.

But on Thursday, reporter Ken Klippenstein, who self-publishes on Substack after he left The Intercept this year, published one of the files.

[...]

Reporters who have received the documents describe the same pattern: An AOL account emails them files, signed by a person using the name “Robert,” who is reluctant to speak to their identity or reasons for wanting the documents to receive coverage.

NBC News was not part of the Robert persona’s direct outreach, but it has viewed its correspondence with a reporter at another publication.

One of the emails from the Robert persona previously viewed by NBC News included three large PDF files, each corresponding to Trump’s three reported finalists for vice president. The Vance file appears to be the one Klippenstein hosts on his site.


In this example hackers attributed to Iran used the Robert persona to email journalists hacked documents (T0146: Account Asset, T0097.100: Individual Persona, T0153.001: Email Platform).

The journalist Ken Klippenstein used his existing blog on Substack to host a link to download the document (T0089: Obtain Private Documents, T0097.102: Journalist Persona, T0115: Post Content, T0143.001: Authentic Persona, T0152.001: Blogging Platform, T0152.002: Blog Asset, T0150.003: Pre-Existing Asset). | | [T0143.001 Authentic Persona](../../generated_pages/techniques/T0143.001.md) | An American journalist who runs an independent newsletter published a document [on 26 Sep 2024] that appears to have been stolen from Donald Trump’s presidential campaign — the first public posting of a file that is believed to be part of a dossier that federal officials say is part of an Iranian effort to manipulate the [2024] U.S. election.

The PDF document is a 271-page opposition research file on former President Donald Trump’s running mate, Sen. JD Vance, R-Ohio.

For more than two months, hackers who the U.S. says are tied to Iran have tried to persuade the American media to cover files they stole. No outlets took the bait.

But on Thursday, reporter Ken Klippenstein, who self-publishes on Substack after he left The Intercept this year, published one of the files.

[...]

Reporters who have received the documents describe the same pattern: An AOL account emails them files, signed by a person using the name “Robert,” who is reluctant to speak to their identity or reasons for wanting the documents to receive coverage.

NBC News was not part of the Robert persona’s direct outreach, but it has viewed its correspondence with a reporter at another publication.

One of the emails from the Robert persona previously viewed by NBC News included three large PDF files, each corresponding to Trump’s three reported finalists for vice president. The Vance file appears to be the one Klippenstein hosts on his site.


In this example hackers attributed to Iran used the Robert persona to email journalists hacked documents (T0146: Account Asset, T0097.100: Individual Persona, T0153.001: Email Platform).

The journalist Ken Klippenstein used his existing blog on Substack to host a link to download the document (T0089: Obtain Private Documents, T0097.102: Journalist Persona, T0115: Post Content, T0143.001: Authentic Persona, T0152.001: Blogging Platform, T0152.002: Blog Asset, T0150.003: Pre-Existing Asset). | | [T0150.003 Pre-Existing Asset](../../generated_pages/techniques/T0150.003.md) | An American journalist who runs an independent newsletter published a document [on 26 Sep 2024] that appears to have been stolen from Donald Trump’s presidential campaign — the first public posting of a file that is believed to be part of a dossier that federal officials say is part of an Iranian effort to manipulate the [2024] U.S. election.

The PDF document is a 271-page opposition research file on former President Donald Trump’s running mate, Sen. JD Vance, R-Ohio.

For more than two months, hackers who the U.S. says are tied to Iran have tried to persuade the American media to cover files they stole. No outlets took the bait.

But on Thursday, reporter Ken Klippenstein, who self-publishes on Substack after he left The Intercept this year, published one of the files.

[...]

Reporters who have received the documents describe the same pattern: An AOL account emails them files, signed by a person using the name “Robert,” who is reluctant to speak to their identity or reasons for wanting the documents to receive coverage.

NBC News was not part of the Robert persona’s direct outreach, but it has viewed its correspondence with a reporter at another publication.

One of the emails from the Robert persona previously viewed by NBC News included three large PDF files, each corresponding to Trump’s three reported finalists for vice president. The Vance file appears to be the one Klippenstein hosts on his site.


In this example hackers attributed to Iran used the Robert persona to email journalists hacked documents (T0146: Account Asset, T0097.100: Individual Persona, T0153.001: Email Platform).

The journalist Ken Klippenstein used his existing blog on Substack to host a link to download the document (T0089: Obtain Private Documents, T0097.102: Journalist Persona, T0115: Post Content, T0143.001: Authentic Persona, T0152.001: Blogging Platform, T0152.002: Blog Asset, T0150.003: Pre-Existing Asset). | | [T0152.001 Blogging Platform](../../generated_pages/techniques/T0152.001.md) | An American journalist who runs an independent newsletter published a document [on 26 Sep 2024] that appears to have been stolen from Donald Trump’s presidential campaign — the first public posting of a file that is believed to be part of a dossier that federal officials say is part of an Iranian effort to manipulate the [2024] U.S. election.

The PDF document is a 271-page opposition research file on former President Donald Trump’s running mate, Sen. JD Vance, R-Ohio.

For more than two months, hackers who the U.S. says are tied to Iran have tried to persuade the American media to cover files they stole. No outlets took the bait.

But on Thursday, reporter Ken Klippenstein, who self-publishes on Substack after he left The Intercept this year, published one of the files.

[...]

Reporters who have received the documents describe the same pattern: An AOL account emails them files, signed by a person using the name “Robert,” who is reluctant to speak to their identity or reasons for wanting the documents to receive coverage.

NBC News was not part of the Robert persona’s direct outreach, but it has viewed its correspondence with a reporter at another publication.

One of the emails from the Robert persona previously viewed by NBC News included three large PDF files, each corresponding to Trump’s three reported finalists for vice president. The Vance file appears to be the one Klippenstein hosts on his site.


In this example hackers attributed to Iran used the Robert persona to email journalists hacked documents (T0146: Account Asset, T0097.100: Individual Persona, T0153.001: Email Platform).

The journalist Ken Klippenstein used his existing blog on Substack to host a link to download the document (T0089: Obtain Private Documents, T0097.102: Journalist Persona, T0115: Post Content, T0143.001: Authentic Persona, T0152.001: Blogging Platform, T0152.002: Blog Asset, T0150.003: Pre-Existing Asset). | -| [T0152.002 Blog Asset](../../generated_pages/techniques/T0152.002.md) | An American journalist who runs an independent newsletter published a document [on 26 Sep 2024] that appears to have been stolen from Donald Trump’s presidential campaign — the first public posting of a file that is believed to be part of a dossier that federal officials say is part of an Iranian effort to manipulate the [2024] U.S. election.

The PDF document is a 271-page opposition research file on former President Donald Trump’s running mate, Sen. JD Vance, R-Ohio.

For more than two months, hackers who the U.S. says are tied to Iran have tried to persuade the American media to cover files they stole. No outlets took the bait.

But on Thursday, reporter Ken Klippenstein, who self-publishes on Substack after he left The Intercept this year, published one of the files.

[...]

Reporters who have received the documents describe the same pattern: An AOL account emails them files, signed by a person using the name “Robert,” who is reluctant to speak to their identity or reasons for wanting the documents to receive coverage.

NBC News was not part of the Robert persona’s direct outreach, but it has viewed its correspondence with a reporter at another publication.

One of the emails from the Robert persona previously viewed by NBC News included three large PDF files, each corresponding to Trump’s three reported finalists for vice president. The Vance file appears to be the one Klippenstein hosts on his site.


In this example hackers attributed to Iran used the Robert persona to email journalists hacked documents (T0146: Account Asset, T0097.100: Individual Persona, T0153.001: Email Platform).

The journalist Ken Klippenstein used his existing blog on Substack to host a link to download the document (T0089: Obtain Private Documents, T0097.102: Journalist Persona, T0115: Post Content, T0143.001: Authentic Persona, T0152.001: Blogging Platform, T0152.002: Blog Asset, T0150.003: Pre-Existing Asset). | | [T0153.001 Email Platform](../../generated_pages/techniques/T0153.001.md) | An American journalist who runs an independent newsletter published a document [on 26 Sep 2024] that appears to have been stolen from Donald Trump’s presidential campaign — the first public posting of a file that is believed to be part of a dossier that federal officials say is part of an Iranian effort to manipulate the [2024] U.S. election.

The PDF document is a 271-page opposition research file on former President Donald Trump’s running mate, Sen. JD Vance, R-Ohio.

For more than two months, hackers who the U.S. says are tied to Iran have tried to persuade the American media to cover files they stole. No outlets took the bait.

But on Thursday, reporter Ken Klippenstein, who self-publishes on Substack after he left The Intercept this year, published one of the files.

[...]

Reporters who have received the documents describe the same pattern: An AOL account emails them files, signed by a person using the name “Robert,” who is reluctant to speak to their identity or reasons for wanting the documents to receive coverage.

NBC News was not part of the Robert persona’s direct outreach, but it has viewed its correspondence with a reporter at another publication.

One of the emails from the Robert persona previously viewed by NBC News included three large PDF files, each corresponding to Trump’s three reported finalists for vice president. The Vance file appears to be the one Klippenstein hosts on his site.


In this example hackers attributed to Iran used the Robert persona to email journalists hacked documents (T0146: Account Asset, T0097.100: Individual Persona, T0153.001: Email Platform).

The journalist Ken Klippenstein used his existing blog on Substack to host a link to download the document (T0089: Obtain Private Documents, T0097.102: Journalist Persona, T0115: Post Content, T0143.001: Authentic Persona, T0152.001: Blogging Platform, T0152.002: Blog Asset, T0150.003: Pre-Existing Asset). | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/incidents/I00120.md b/generated_pages/incidents/I00120.md index 7f8beda..0eb440e 100644 --- a/generated_pages/incidents/I00120.md +++ b/generated_pages/incidents/I00120.md @@ -22,10 +22,8 @@ | Technique | Description given for this incident | | --------- | ------------------------- | | [T0097.203 Fact Checking Organisation Persona](../../generated_pages/techniques/T0097.203.md) | Ahead of the 2019 UK Election during a leaders’ debate, the Conservative party rebranded their “Conservative Campaign Headquarters Press” account to “FactCheckUK”:

The evening of the 19th November 2019 saw the first of three Leaders’ Debates on ITV, starting at 8pm and lasting for an hour. Current Prime Minister and leader of the Conservatives, Boris Johnson faced off against Labour party leader, Jeremy Corbyn. Plenty of people will have been watching the debate live, but a good proportion were “watching” (er, “twitching”?) via Twitter. This is something I’ve done in the past for certain shows. In some cases I just can’t watch or listen, but I can read, and in other cases, the commentary is far more interesting and entertaining than the show itself will ever be. This, for me, is just such a case. But very quickly, all eyes turned upon a modestly sized account with the handle @CCHQPress. That’s short for Conservative Campaign Headquarters Press. According to their (current!) Twitter bio, they are based in Westminster and they provide “snippets of news and commentary from CCHQ” to their 75k followers.

That is, until a few minutes into the debate.

All at once, like a person throwing off their street clothes to reveal some sinister new identity underneath, @CCHQPress abruptly shed its name, blue Conservative logo, Boris Johnson banner, and bio description. Moments later, it had entirely reinvented itself.

The purple banner was emblazoned with white font that read “✓ factcheckUK [with a “FROM CCHQ” subheading]”.

The matching profile picture was a white tick in a purple circle. The bio was updated to: “Fact checking Labour from CCHQ”. And the name now read factcheckUK, with the customary Twitter blue (or white depending on your phone settings!) validation tick still after it.


In this example an existing verified social media account on Twitter was repurposed to inauthentically present itself as a fact checking service (T0151.008: Microblogging Platform, T0150.003: Pre-Existing Asset, T0146.003: Verified Account Asset, T0097.203: Fact Checking Organisation Persona, T0143.002: Fabricated Persona). | -| [T0143.002 Fabricated Persona](../../generated_pages/techniques/T0143.002.md) | Ahead of the 2019 UK Election during a leaders’ debate, the Conservative party rebranded their “Conservative Campaign Headquarters Press” account to “FactCheckUK”:

The evening of the 19th November 2019 saw the first of three Leaders’ Debates on ITV, starting at 8pm and lasting for an hour. Current Prime Minister and leader of the Conservatives, Boris Johnson faced off against Labour party leader, Jeremy Corbyn. Plenty of people will have been watching the debate live, but a good proportion were “watching” (er, “twitching”?) via Twitter. This is something I’ve done in the past for certain shows. In some cases I just can’t watch or listen, but I can read, and in other cases, the commentary is far more interesting and entertaining than the show itself will ever be. This, for me, is just such a case. But very quickly, all eyes turned upon a modestly sized account with the handle @CCHQPress. That’s short for Conservative Campaign Headquarters Press. According to their (current!) Twitter bio, they are based in Westminster and they provide “snippets of news and commentary from CCHQ” to their 75k followers.

That is, until a few minutes into the debate.

All at once, like a person throwing off their street clothes to reveal some sinister new identity underneath, @CCHQPress abruptly shed its name, blue Conservative logo, Boris Johnson banner, and bio description. Moments later, it had entirely reinvented itself.

The purple banner was emblazoned with white font that read “✓ factcheckUK [with a “FROM CCHQ” subheading]”.

The matching profile picture was a white tick in a purple circle. The bio was updated to: “Fact checking Labour from CCHQ”. And the name now read factcheckUK, with the customary Twitter blue (or white depending on your phone settings!) validation tick still after it.


In this example an existing verified social media account on Twitter was repurposed to inauthentically present itself as a fact checking service (T0151.008: Microblogging Platform, T0150.003: Pre-Existing Asset, T0146.003: Verified Account Asset, T0097.203: Fact Checking Organisation Persona, T0143.002: Fabricated Persona). | | [T0146.003 Verified Account Asset](../../generated_pages/techniques/T0146.003.md) | Ahead of the 2019 UK Election during a leaders’ debate, the Conservative party rebranded their “Conservative Campaign Headquarters Press” account to “FactCheckUK”:

The evening of the 19th November 2019 saw the first of three Leaders’ Debates on ITV, starting at 8pm and lasting for an hour. Current Prime Minister and leader of the Conservatives, Boris Johnson faced off against Labour party leader, Jeremy Corbyn. Plenty of people will have been watching the debate live, but a good proportion were “watching” (er, “twitching”?) via Twitter. This is something I’ve done in the past for certain shows. In some cases I just can’t watch or listen, but I can read, and in other cases, the commentary is far more interesting and entertaining than the show itself will ever be. This, for me, is just such a case. But very quickly, all eyes turned upon a modestly sized account with the handle @CCHQPress. That’s short for Conservative Campaign Headquarters Press. According to their (current!) Twitter bio, they are based in Westminster and they provide “snippets of news and commentary from CCHQ” to their 75k followers.

That is, until a few minutes into the debate.

All at once, like a person throwing off their street clothes to reveal some sinister new identity underneath, @CCHQPress abruptly shed its name, blue Conservative logo, Boris Johnson banner, and bio description. Moments later, it had entirely reinvented itself.

The purple banner was emblazoned with white font that read “✓ factcheckUK [with a “FROM CCHQ” subheading]”.

The matching profile picture was a white tick in a purple circle. The bio was updated to: “Fact checking Labour from CCHQ”. And the name now read factcheckUK, with the customary Twitter blue (or white depending on your phone settings!) validation tick still after it


In this example an existing verified social media account on Twitter was repurposed to inauthentically present itself as a Fact Checking service (T0151.008: Microblogging Platform, T0150.003: Pre-Existing Asset, T0146.003: Verified Account Asset, T0097.203: Fact Checking Organisation Persona, T0143.002: Fabricated Persona). | | [T0150.003 Pre-Existing Asset](../../generated_pages/techniques/T0150.003.md) | Ahead of the 2019 UK Election, during a leaders’ debate, the Conservative party rebranded their “Conservative Campaign Headquarters Press” account to “FactCheckUK”:

The evening of the 19th November 2019 saw the first of three Leaders’ Debates on ITV, starting at 8pm and lasting for an hour. Current Prime Minister and leader of the Conservatives, Boris Johnson faced off against Labour party leader, Jeremy Corbyn. Plenty of people will have been watching the debate live, but a good proportion were “watching” (er, “twitching”?) via Twitter. This is something I’ve done in the past for certain shows. In some cases I just can’t watch or listen, but I can read, and in other cases, the commentary is far more interesting and entertaining than the show itself will ever be. This, for me, is just such a case. But very quickly, all eyes turned upon a modestly sized account with the handle @CCHQPress. That’s short for Conservative Campaign Headquarters Press. According to their (current!) Twitter bio, they are based in Westminster and they provide “snippets of news and commentary from CCHQ” to their 75k followers.

That is, until a few minutes into the debate.

All at once, like a person throwing off their street clothes to reveal some sinister new identity underneath, @CCHQPress abruptly shed its name, blue Conservative logo, Boris Johnson banner, and bio description. Moments later, it had entirely reinvented itself.

The purple banner was emblazoned with white font that read “✓ factcheckUK [with a “FROM CCHQ” subheading]”.

The matching profile picture was a white tick in a purple circle. The bio was updated to: “Fact checking Labour from CCHQ”. And the name now read factcheckUK, with the customary Twitter blue (or white depending on your phone settings!) validation tick still after it


In this example an existing verified social media account on Twitter was repurposed to inauthentically present itself as a Fact Checking service (T0151.008: Microblogging Platform, T0150.003: Pre-Existing Asset, T0146.003: Verified Account Asset, T0097.203: Fact Checking Organisation Persona, T0143.002: Fabricated Persona). | | [T0151.008 Microblogging Platform](../../generated_pages/techniques/T0151.008.md) | Ahead of the 2019 UK Election, during a leaders’ debate, the Conservative party rebranded their “Conservative Campaign Headquarters Press” account to “FactCheckUK”:

The evening of the 19th November 2019 saw the first of three Leaders’ Debates on ITV, starting at 8pm and lasting for an hour. Current Prime Minister and leader of the Conservatives, Boris Johnson faced off against Labour party leader, Jeremy Corbyn. Plenty of people will have been watching the debate live, but a good proportion were “watching” (er, “twitching”?) via Twitter. This is something I’ve done in the past for certain shows. In some cases I just can’t watch or listen, but I can read, and in other cases, the commentary is far more interesting and entertaining than the show itself will ever be. This, for me, is just such a case. But very quickly, all eyes turned upon a modestly sized account with the handle @CCHQPress. That’s short for Conservative Campaign Headquarters Press. According to their (current!) Twitter bio, they are based in Westminster and they provide “snippets of news and commentary from CCHQ” to their 75k followers.

That is, until a few minutes into the debate.

All at once, like a person throwing off their street clothes to reveal some sinister new identity underneath, @CCHQPress abruptly shed its name, blue Conservative logo, Boris Johnson banner, and bio description. Moments later, it had entirely reinvented itself.

The purple banner was emblazoned with white font that read “✓ factcheckUK [with a “FROM CCHQ” subheading]”.

The matching profile picture was a white tick in a purple circle. The bio was updated to: “Fact checking Labour from CCHQ”. And the name now read factcheckUK, with the customary Twitter blue (or white depending on your phone settings!) validation tick still after it


In this example an existing verified social media account on Twitter was repurposed to inauthentically present itself as a Fact Checking service (T0151.008: Microblogging Platform, T0150.003: Pre-Existing Asset, T0146.003: Verified Account Asset, T0097.203: Fact Checking Organisation Persona, T0143.002: Fabricated Persona). | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/incidents/I00122.md b/generated_pages/incidents/I00122.md index 6ec4185..1b1746d 100644 --- a/generated_pages/incidents/I00122.md +++ b/generated_pages/incidents/I00122.md @@ -22,12 +22,9 @@ | Technique | Description given for this incident | | --------- | ------------------------- | | [T0048 Harass](../../generated_pages/techniques/T0048.md) | Discord is an example of a T0151.004: Chat Platform, which allows users to create their own T0151.005: Chat Community Server. The Institute for Strategic Dialogue (ISD) conducted an investigation into the extreme right’s usage of Discord servers:

Discord is a free service accessible via phones and computers. It allows users to talk to each other in real time via voice, text or video chat and emerged in 2015 as a platform designed to assist gamers in communicating with each other while playing video games. The popularity of the platform has surged in recent years, and it is currently estimated to have 140 million monthly active users.

Chatrooms – known as servers - in the platform can be created by anyone, and they are used for a range of purposes that extend far beyond gaming. Such purposes include the discussion of extreme right-wing ideologies and the planning of offline extremist activity. Ahead of the far-right Unite the Right rally in Charlottesville, Virginia, in August 2017, organisers used Discord to plan and promote events and posted swastikas and praised Hitler in chat rooms with names like “National Socialist Army” and “Führer’s Gas Chamber”.


In this example a Discord server was used to organise the 2017 Charlottesville Unite the Right rally. Chat rooms in the server were used to discuss different topics related to the rally (T0057: Organise Events, T0126.002: Facilitate Logistics or Support for Attendance, T0151.004: Chat Platform, T0151.005: Chat Community Server, T0151.006: Chat Room).

Another primary activity engaged in the servers analysed are raids against other servers associated with political opponents, and in particular those that appear to be pro-LGBTQ. Raids are a phenomenon in which a small group of users will join a Discord server with the sole purpose of spamming the host with offensive or incendiary messages and content with the aim of upsetting local users or having the host server banned by Discord. On two servers examined here, raiding was their primary function.

Among servers devoted to this activity, specific channels were often created to host links to servers that users were then encouraged to raid. Users are encouraged to be as offensive as possible with the aim of upsetting or angering users on the raided server, and channels often had content banks of offensive memes and content to be shared on raided servers.

The use of raids demonstrates the gamified nature of extremist activity on Discord, where use of the platform and harassment of political opponents is itself turned into a type of real-life video game designed to strengthen in-group affiliation. This combined with the broader extremist activity identified in these channels suggests that the combative activity of raiding could provide a pathway for younger people to become more engaged with extremist activity.


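As an analytical aside: the raiding behaviour described above, a burst of hostile posts from accounts that have only just joined a server, lends itself to a simple moderation heuristic. A minimal sketch, with hypothetical data shapes and thresholds rather than any real platform API:

```python
# Flag a possible raid when several accounts post within minutes of joining.
from datetime import datetime, timedelta

def looks_like_raid(events, min_accounts=5, join_to_post=timedelta(minutes=10)):
    """events: iterable of (user_id, joined_at, posted_at) tuples."""
    fresh_posters = {user for user, joined, posted in events
                     if posted - joined <= join_to_post}
    return len(fresh_posters) >= min_accounts

joined = datetime(2017, 8, 1, 20, 0)
events = [(f"user{i}", joined, joined + timedelta(minutes=2)) for i in range(6)]
print(looks_like_raid(events))  # True: six accounts posting minutes after joining
```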
Discord servers were used by members of the extreme right to coordinate harassment of targeted communities (T0048: Harass, T0049.005: Conduct Swarming, T0151.004: Chat Platform, T0151.005: Chat Community Server). | -| [T0049.005 Conduct Swarming](../../generated_pages/techniques/T0049.005.md) | Discord is an example of a T0151.004: Chat Platform, which allows users to create their own T0151.005: Chat Community Server. The Institute for Strategic Dialogue (ISD) conducted an investigation into the extreme right’s usage of Discord servers:

Discord is a free service accessible via phones and computers. It allows users to talk to each other in real time via voice, text or video chat and emerged in 2015 as a platform designed to assist gamers in communicating with each other while playing video games. The popularity of the platform has surged in recent years, and it is currently estimated to have 140 million monthly active users.

Chatrooms – known as servers - in the platform can be created by anyone, and they are used for a range of purposes that extend far beyond gaming. Such purposes include the discussion of extreme right-wing ideologies and the planning of offline extremist activity. Ahead of the far-right Unite the Right rally in Charlottesville, Virginia, in August 2017, organisers used Discord to plan and promote events and posted swastikas and praised Hitler in chat rooms with names like “National Socialist Army” and “Führer’s Gas Chamber”.


In this example a Discord server was used to organise the 2017 Charlottesville Unite the Right rally. Chat rooms in the server were used to discuss different topics related to the rally (T0057: Organise Events, T0126.002: Facilitate Logistics or Support for Attendance, T0151.004: Chat Platform, T0151.005: Chat Community Server, T0151.006: Chat Room).

Another primary activity engaged in the servers analysed are raids against other servers associated with political opponents, and in particular those that appear to be pro-LGBTQ. Raids are a phenomenon in which a small group of users will join a Discord server with the sole purpose of spamming the host with offensive or incendiary messages and content with the aim of upsetting local users or having the host server banned by Discord. On two servers examined here, raiding was their primary function.

Among servers devoted to this activity, specific channels were often created to host links to servers that users were then encouraged to raid. Users are encouraged to be as offensive as possible with the aim of upsetting or angering users on the raided server, and channels often had content banks of offensive memes and content to be shared on raided servers.

The use of raids demonstrates the gamified nature of extremist activity on Discord, where use of the platform and harassment of political opponents is itself turned into a type of real-life video game designed to strengthen in-group affiliation. This combined with the broader extremist activity identified in these channels suggests that the combative activity of raiding could provide a pathway for younger people to become more engaged with extremist activity.


Discord servers were used by members of the extreme right to coordinate harassment of targeted communities (T0048: Harass, T0049.005: Conduct Swarming, T0151.004: Chat Platform, T0151.005: Chat Community Server). | -| [T0057 Organise Events](../../generated_pages/techniques/T0057.md) | Discord is an example of a T0151.004: Chat Platform, which allows users to create their own T0151.005: Chat Community Server. The Institute for Strategic Dialogue (ISD) conducted an investigation into the extreme right’s usage of Discord servers:

Discord is a free service accessible via phones and computers. It allows users to talk to each other in real time via voice, text or video chat and emerged in 2015 as a platform designed to assist gamers in communicating with each other while playing video games. The popularity of the platform has surged in recent years, and it is currently estimated to have 140 million monthly active users.

Chatrooms – known as servers - in the platform can be created by anyone, and they are used for a range of purposes that extend far beyond gaming. Such purposes include the discussion of extreme right-wing ideologies and the planning of offline extremist activity. Ahead of the far-right Unite the Right rally in Charlottesville, Virginia, in August 2017, organisers used Discord to plan and promote events and posted swastikas and praised Hitler in chat rooms with names like “National Socialist Army” and “Führer’s Gas Chamber”.


In this example a Discord server was used to organise the 2017 Charlottesville Unite the Right rally. Chat rooms in the server were used to discuss different topics related to the rally (T0057: Organise Events, T0126.002: Facilitate Logistics or Support for Attendance, T0151.004: Chat Platform, T0151.005: Chat Community Server, T0151.006: Chat Room).

Another primary activity engaged in the servers analysed are raids against other servers associated with political opponents, and in particular those that appear to be pro-LGBTQ. Raids are a phenomenon in which a small group of users will join a Discord server with the sole purpose of spamming the host with offensive or incendiary messages and content with the aim of upsetting local users or having the host server banned by Discord. On two servers examined here, raiding was their primary function.

Among servers devoted to this activity, specific channels were often created to host links to servers that users were then encouraged to raid. Users are encouraged to be as offensive as possible with the aim of upsetting or angering users on the raided server, and channels often had content banks of offensive memes and content to be shared on raided servers.

The use of raids demonstrates the gamified nature of extremist activity on Discord, where use of the platform and harassment of political opponents is itself turned into a type of real-life video game designed to strengthen in-group affiliation. This combined with the broader extremist activity identified in these channels suggests that the combative activity of raiding could provide a pathway for younger people to become more engaged with extremist activity.


Discord servers were used by members of the extreme right to coordinate harassment of targeted communities (T0048: Harass, T0049.005: Conduct Swarming, T0151.004: Chat Platform, T0151.005: Chat Community Server). | | [T0126.002 Facilitate Logistics or Support for Attendance](../../generated_pages/techniques/T0126.002.md) | Discord is an example of a T0151.004: Chat Platform, which allows users to create their own T0151.005: Chat Community Server. The Institute for Strategic Dialogue (ISD) conducted an investigation into the extreme right’s usage of Discord servers:

Discord is a free service accessible via phones and computers. It allows users to talk to each other in real time via voice, text or video chat and emerged in 2015 as a platform designed to assist gamers in communicating with each other while playing video games. The popularity of the platform has surged in recent years, and it is currently estimated to have 140 million monthly active users.

Chatrooms – known as servers - in the platform can be created by anyone, and they are used for a range of purposes that extend far beyond gaming. Such purposes include the discussion of extreme right-wing ideologies and the planning of offline extremist activity. Ahead of the far-right Unite the Right rally in Charlottesville, Virginia, in August 2017, organisers used Discord to plan and promote events and posted swastikas and praised Hitler in chat rooms with names like “National Socialist Army” and “Führer’s Gas Chamber”.


In this example a Discord server was used to organise the 2017 Charlottesville Unite the Right rally. Chat rooms in the server were used to discuss different topics related to the rally (T0057: Organise Events, T0126.002: Facilitate Logistics or Support for Attendance, T0151.004: Chat Platform, T0151.005: Chat Community Server, T0151.006: Chat Room).

Another primary activity engaged in the servers analysed are raids against other servers associated with political opponents, and in particular those that appear to be pro-LGBTQ. Raids are a phenomenon in which a small group of users will join a Discord server with the sole purpose of spamming the host with offensive or incendiary messages and content with the aim of upsetting local users or having the host server banned by Discord. On two servers examined here, raiding was their primary function.

Among servers devoted to this activity, specific channels were often created to host links to servers that users were then encouraged to raid. Users are encouraged to be as offensive as possible with the aim of upsetting or angering users on the raided server, and channels often had content banks of offensive memes and content to be shared on raided servers.

The use of raids demonstrates the gamified nature of extremist activity on Discord, where use of the platform and harassment of political opponents is itself turned into a type of real-life video game designed to strengthen in-group affiliation. This combined with the broader extremist activity identified in these channels suggests that the combative activity of raiding could provide a pathway for younger people to become more engaged with extremist activity.


Discord servers were used by members of the extreme right to coordinate harassment of targeted communities (T0048: Harass, T0049.005: Conduct Swarming, T0151.004: Chat Platform, T0151.005: Chat Community Server). | | [T0151.004 Chat Platform](../../generated_pages/techniques/T0151.004.md) | Discord is an example of a T0151.004: Chat Platform, which allows users to create their own T0151.005: Chat Community Server. The Institute for Strategic Dialogue (ISD) conducted an investigation into the extreme right’s usage of Discord servers:

Discord is a free service accessible via phones and computers. It allows users to talk to each other in real time via voice, text or video chat and emerged in 2015 as a platform designed to assist gamers in communicating with each other while playing video games. The popularity of the platform has surged in recent years, and it is currently estimated to have 140 million monthly active users.

Chatrooms – known as servers - in the platform can be created by anyone, and they are used for a range of purposes that extend far beyond gaming. Such purposes include the discussion of extreme right-wing ideologies and the planning of offline extremist activity. Ahead of the far-right Unite the Right rally in Charlottesville, Virginia, in August 2017, organisers used Discord to plan and promote events and posted swastikas and praised Hitler in chat rooms with names like “National Socialist Army” and “Führer’s Gas Chamber”.


In this example a Discord server was used to organise the 2017 Charlottesville Unite the Right rally. Chat rooms in the server were used to discuss different topics related to the rally (T0057: Organise Events, T0126.002: Facilitate Logistics or Support for Attendance, T0151.004: Chat Platform, T0151.005: Chat Community Server, T0151.006: Chat Room).

Another primary activity engaged in the servers analysed are raids against other servers associated with political opponents, and in particular those that appear to be pro-LGBTQ. Raids are a phenomenon in which a small group of users will join a Discord server with the sole purpose of spamming the host with offensive or incendiary messages and content with the aim of upsetting local users or having the host server banned by Discord. On two servers examined here, raiding was their primary function.

Among servers devoted to this activity, specific channels were often created to host links to servers that users were then encouraged to raid. Users are encouraged to be as offensive as possible with the aim of upsetting or angering users on the raided server, and channels often had content banks of offensive memes and content to be shared on raided servers.

The use of raids demonstrates the gamified nature of extremist activity on Discord, where use of the platform and harassment of political opponents is itself turned into a type of real-life video game designed to strengthen in-group affiliation. This combined with the broader extremist activity identified in these channels suggests that the combative activity of raiding could provide a pathway for younger people to become more engaged with extremist activity.


Discord servers were used by members of the extreme right to coordinate harassment of targeted communities (T0048: Harass, T0049.005: Conduct Swarming, T0151.004: Chat Platform, T0151.005: Chat Community Server). | | [T0151.005 Chat Community Server](../../generated_pages/techniques/T0151.005.md) | Discord is an example of a T0151.004: Chat Platform, which allows users to create their own T0151.005: Chat Community Server. The Institute for Strategic Dialogue (ISD) conducted an investigation into the extreme right’s usage of Discord servers:

Discord is a free service accessible via phones and computers. It allows users to talk to each other in real time via voice, text or video chat and emerged in 2015 as a platform designed to assist gamers in communicating with each other while playing video games. The popularity of the platform has surged in recent years, and it is currently estimated to have 140 million monthly active users.

Chatrooms – known as servers - in the platform can be created by anyone, and they are used for a range of purposes that extend far beyond gaming. Such purposes include the discussion of extreme right-wing ideologies and the planning of offline extremist activity. Ahead of the far-right Unite the Right rally in Charlottesville, Virginia, in August 2017, organisers used Discord to plan and promote events and posted swastikas and praised Hitler in chat rooms with names like “National Socialist Army” and “Führer’s Gas Chamber”.


In this example a Discord server was used to organise the 2017 Charlottesville Unite the Right rally. Chat rooms in the server were used to discuss different topics related to the rally (T0057: Organise Events, T0126.002: Facilitate Logistics or Support for Attendance, T0151.004: Chat Platform, T0151.005: Chat Community Server, T0151.006: Chat Room).

Another primary activity engaged in the servers analysed are raids against other servers associated with political opponents, and in particular those that appear to be pro-LGBTQ. Raids are a phenomenon in which a small group of users will join a Discord server with the sole purpose of spamming the host with offensive or incendiary messages and content with the aim of upsetting local users or having the host server banned by Discord. On two servers examined here, raiding was their primary function.

Among servers devoted to this activity, specific channels were often created to host links to servers that users were then encouraged to raid. Users are encouraged to be as offensive as possible with the aim of upsetting or angering users on the raided server, and channels often had content banks of offensive memes and content to be shared on raided servers.

The use of raids demonstrates the gamified nature of extremist activity on Discord, where use of the platform and harassment of political opponents is itself turned into a type of real-life video game designed to strengthen in-group affiliation. This combined with the broader extremist activity identified in these channels suggests that the combative activity of raiding could provide a pathway for younger people to become more engaged with extremist activity.


Discord servers were used by members of the extreme right to coordinate harassment of targeted communities (T0048: Harass, T0049.005: Conduct Swarming, T0151.004: Chat Platform, T0151.005: Chat Community Server). | | [T0151.006 Chat Room](../../generated_pages/techniques/T0151.006.md) | Discord is an example of a T0151.004: Chat Platform, which allows users to create their own T0151.005: Chat Community Server. The Institute for Strategic Dialogue (ISD) conducted an investigation into the extreme right’s usage of Discord servers:

Discord is a free service accessible via phones and computers. It allows users to talk to each other in real time via voice, text or video chat and emerged in 2015 as a platform designed to assist gamers in communicating with each other while playing video games. The popularity of the platform has surged in recent years, and it is currently estimated to have 140 million monthly active users.

Chatrooms – known as servers - in the platform can be created by anyone, and they are used for a range of purposes that extend far beyond gaming. Such purposes include the discussion of extreme right-wing ideologies and the planning of offline extremist activity. Ahead of the far-right Unite the Right rally in Charlottesville, Virginia, in August 2017, organisers used Discord to plan and promote events and posted swastikas and praised Hitler in chat rooms with names like “National Socialist Army” and “Führer’s Gas Chamber”.


In this example a Discord server was used to organise the 2017 Charlottesville Unite the Right rally. Chat rooms in the server were used to discuss different topics related to the rally (T0057: Organise Events, T0126.002: Facilitate Logistics or Support for Attendance, T0151.004: Chat Platform, T0151.005: Chat Community Server, T0151.006: Chat Room).

Another primary activity engaged in the servers analysed are raids against other servers associated with political opponents, and in particular those that appear to be pro-LGBTQ. Raids are a phenomenon in which a small group of users will join a Discord server with the sole purpose of spamming the host with offensive or incendiary messages and content with the aim of upsetting local users or having the host server banned by Discord. On two servers examined here, raiding was their primary function.

Among servers devoted to this activity, specific channels were often created to host links to servers that users were then encouraged to raid. Users are encouraged to be as offensive as possible with the aim of upsetting or angering users on the raided server, and channels often had content banks of offensive memes and content to be shared on raided servers.

The use of raids demonstrates the gamified nature of extremist activity on Discord, where use of the platform and harassment of political opponents is itself turned into a type of real-life video game designed to strengthen in-group affiliation. This combined with the broader extremist activity identified in these channels suggests that the combative activity of raiding could provide a pathway for younger people to become more engaged with extremist activity.


Discord servers were used by members of the extreme right to coordinate harassment of targeted communities (T0048: Harass, T0049.005: Conduct Swarming, T0151.004: Chat Platform, T0151.005: Chat Community Server). | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/incidents/I00123.md b/generated_pages/incidents/I00123.md index 43d5b05..7f6a4d5 100644 --- a/generated_pages/incidents/I00123.md +++ b/generated_pages/incidents/I00123.md @@ -21,7 +21,6 @@ | Technique | Description given for this incident | | --------- | ------------------------- | -| [T0048 Harass](../../generated_pages/techniques/T0048.md) | ISD conducted an investigation into the usage of social groups on Steam. Steam is an online platform used to buy and sell digital games, and includes the Steam community feature, which “allows users to find friends and join groups and discussion forums, while also offering in-game voice and text chat”. Actors have used Steam’s social capabilities to enable online harm campaigns:

One function of these Steam groups is the organisation of ‘raids’ – coordinated trolling activity against their political opponents. An example of this can be seen in a white power music group sharing a link to an Israeli Steam group, encouraging other members to “help me raid this juden [German word for Jew] group”. The comments section of said target group show that neo-Nazi and antisemitic comments were consistently posted in the group just two minutes after the instruction had been posted in the extremist group, highlighting the swiftness with which racially motivated harassment can be directed online.

Threat actors used social groups on Steam to organise harassment of targets (T0152.009: Software Delivery Platform, T0151.002: Online Community Group, T0049.005: Conduct Swarming, T0048: Harass). | | [T0049.005 Conduct Swarming](../../generated_pages/techniques/T0049.005.md) | ISD conducted an investigation into the usage of social groups on Steam. Steam is an online platform used to buy and sell digital games, and includes the Steam community feature, which “allows users to find friends and join groups and discussion forums, while also offering in-game voice and text chat”. Actors have used Steam’s social capabilities to enable online harm campaigns:

One function of these Steam groups is the organisation of ‘raids’ – coordinated trolling activity against their political opponents. An example of this can be seen in a white power music group sharing a link to an Israeli Steam group, encouraging other members to “help me raid this juden [German word for Jew] group”. The comments section of said target group show that neo-Nazi and antisemitic comments were consistently posted in the group just two minutes after the instruction had been posted in the extremist group, highlighting the swiftness with which racially motivated harassment can be directed online.

Threat actors used social groups on Steam to organise harassment of targets (T0152.009: Software Delivery Platform, T0151.002: Online Community Group, T0049.005: Conduct Swarming, T0048: Harass). | | [T0122 Direct Users to Alternative Platforms](../../generated_pages/techniques/T0122.md) | ISD conducted an investigation into the usage of social groups on Steam. Steam is an online platform used to buy and sell digital games, and includes the Steam community feature, which “allows users to find friends and join groups and discussion forums, while also offering in-game voice and text chat”. Actors have used Steam’s social capabilities to enable online harm campaigns:

A number of groups were observed encouraging members to join conversations on outside platforms. These include links to Telegram channels connected to white supremacist marches, and media outlets, forums and Discord servers run by neo-Nazis.

[...]

This off-ramping activity demonstrates how rather than sitting in isolation, Steam fits into the wider extreme right wing online ecosystem, with Steam groups acting as hubs for communities and organizations which span multiple platforms. Accordingly, although the platform appears to fill a specific role in the building and strengthening of communities with similar hobbies and interests, it is suggested that analysis seeking to determine the risk of these communities should focus on their activity across platforms


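As an analytical aside: the cross-platform “off-ramping” described above can be mapped by extracting outbound invite links from group descriptions. A minimal sketch; the URL patterns and example text are illustrative placeholders, not an exhaustive ruleset.

```python
# Extract Telegram/Discord "off-ramp" links from group description text.
import re

OFFRAMP = re.compile(
    r"https?://(?:t\.me|telegram\.me|discord\.gg|discord\.com/invite)/\S+",
    re.IGNORECASE)

def extract_offramps(descriptions):
    """Return the unique off-platform links found across descriptions."""
    return sorted({match.group(0)
                   for text in descriptions
                   for match in OFFRAMP.finditer(text)})

print(extract_offramps(["Real discussion happens at https://t.me/example_channel"]))
# ['https://t.me/example_channel']  (placeholder link)
```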
Social groups on Steam were used to drive new people to other neo-Nazi-controlled community assets (T0122: Direct Users to Alternative Platforms, T0152.009: Software Delivery Platform, T0151.002: Online Community Group). | | [T0151.002 Online Community Group](../../generated_pages/techniques/T0151.002.md) | Analysis of communities on the gaming platform Steam showed that groups that are known to have engaged in acts of terrorism used Steam to host social communities (T0152.009: Software Delivery Platform, T0151.002: Online Community Group):

The first is a Finnish-language group which was set up to promote the Nordic Resistance Movement (NRM). NRM are the only group in the sample examined by ISD known to have engaged in terrorist attacks. Swedish members of the group conducted a series of bombings in Gothenburg in 2016 and 2017, and several Finnish members are under investigation in relation to both violent attacks and murder.

The NRM Steam group does not host content related to gaming, and instead seems to act as a hub for the movement. The group’s overview section contains a link to the official NRM website, and users are encouraged to find like-minded people to join the group. The group is relatively small, with 87 members, but at the time of writing, it appeared to be active and in use. Interestingly, although the group is in Finnish language, it has members in common with the English language channels identified in this analysis. This suggests that Steam may help facilitate international exchange between right-wing extremists.
| @@ -32,4 +31,3 @@ | [T0152.009 Software Delivery Platform](../../generated_pages/techniques/T0152.009.md) | ISD conducted an investigation into the usage of social groups on Steam. Steam is an online platform used to buy and sell digital games, and includes the Steam community feature, which “allows users to find friends and join groups and discussion forums, while also offering in-game voice and text chat”. Actors have used Steam’s social capabilities to enable online harm campaigns:

A number of groups were observed encouraging members to join conversations on outside platforms. These include links to Telegram channels connected to white supremacist marches, and media outlets, forums and Discord servers run by neo-Nazis.

[...]

This off-ramping activity demonstrates how rather than sitting in isolation, Steam fits into the wider extreme right wing online ecosystem, with Steam groups acting as hubs for communities and organizations which span multiple platforms. Accordingly, although the platform appears to fill a specific role in the building and strengthening of communities with similar hobbies and interests, it is suggested that analysis seeking to determine the risk of these communities should focus on their activity across platforms


Social groups on Steam were used to drive new people to other neo-Nazi-controlled community assets (T0122: Direct Users to Alternative Platforms, T0152.009: Software Delivery Platform, T0151.002: Online Community Group). | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/incidents/I00125.md b/generated_pages/incidents/I00125.md index 4bd752b..7f6dd71 100644 --- a/generated_pages/incidents/I00125.md +++ b/generated_pages/incidents/I00125.md @@ -24,8 +24,6 @@ | [T0087 Develop Video-Based Content](../../generated_pages/techniques/T0087.md) | In 2014 threat actors attributed to Russia spread the false narrative that a local chemical plant had leaked toxic fumes. This report discusses aspects of the operation:

[The chemical plant leak] hoax was just one in a wave of similar attacks during the second half of last year. On Dec. 13, two months after a handful of Ebola cases in the United States touched off a minor media panic, many of the same Twitter accounts used to spread the Columbian Chemicals hoax began to post about an outbreak of Ebola in Atlanta. [...] Again, the attention to detail was remarkable, suggesting a tremendous amount of effort. A YouTube video showed a team of hazmat-suited medical workers transporting a victim from the airport. Beyoncé’s recent single “7/11” played in the background, an apparent attempt to establish the video’s contemporaneity. A truck in the parking lot sported the logo of the Hartsfield-Jackson Atlanta International Airport.

Accounts which previously presented as Louisiana locals were repurposed for use in a different campaign, this time presenting as locals of Atlanta, a city over 500 miles from Louisiana and in a different time zone (T0146: Account Asset, T0097.101: Local Persona, T0143.002: Fabricated Persona, T0151.008: Microblogging Platform, T0150.004: Repurposed Asset).

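As an analytical aside: one weak signal of a repurposed account is a persona that claims one locale but keeps another region’s posting rhythm. A minimal sketch of a posting-hours consistency check; the waking-hours assumption and thresholds are hypothetical, and this is a heuristic, not proof.

```python
# Compare an account's modal posting hours (UTC) with the waking hours
# expected for its claimed time zone. Low overlap merits a closer look.
from collections import Counter

def modal_posting_hours(timestamps_utc, top_n=8):
    """Return the account's most common posting hours (0-23, UTC)."""
    counts = Counter(ts.hour for ts in timestamps_utc)
    return {hour for hour, _ in counts.most_common(top_n)}

def locale_consistency(timestamps_utc, utc_offset, local_waking=range(8, 24)):
    """Fraction of modal posting hours falling in the claimed locale's
    waking hours; utc_offset is the claimed locale's offset from UTC."""
    expected_utc = {(hour - utc_offset) % 24 for hour in local_waking}
    observed = modal_posting_hours(timestamps_utc)
    return len(observed & expected_utc) / len(observed) if observed else 0.0
```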
A video was created which appeared to support the campaign’s narrative (T0087: Develop Video-Based Content), with great attention given to small details which made the video appear more legitimate. | | [T0143.002 Fabricated Persona](../../generated_pages/techniques/T0143.002.md) | In 2014 threat actors attributed to Russia spread the false narrative that a local chemical plant had leaked toxic fumes. This report discusses aspects of the operation:

[The chemical plant leak] hoax was just one in a wave of similar attacks during the second half of last year. On Dec. 13, two months after a handful of Ebola cases in the United States touched off a minor media panic, many of the same Twitter accounts used to spread the Columbian Chemicals hoax began to post about an outbreak of Ebola in Atlanta. [...] Again, the attention to detail was remarkable, suggesting a tremendous amount of effort. A YouTube video showed a team of hazmat-suited medical workers transporting a victim from the airport. Beyoncé’s recent single “7/11” played in the background, an apparent attempt to establish the video’s contemporaneity. A truck in the parking lot sported the logo of the Hartsfield-Jackson Atlanta International Airport.

Accounts which previously presented as Louisiana locals were repurposed for use in a different campaign, this time presenting as locals of Atlanta, a city over 500 miles from Louisiana and in a different time zone (T0146: Account Asset, T0097.101: Local Persona, T0143.002: Fabricated Persona, T0151.008: Microblogging Platform, T0150.004: Repurposed Asset).

A video was created which appeared to support the campaign’s narrative (T0087: Develop Video-Based Content), with great attention given to small details which made the video appear more legitimate. | | [T0146 Account Asset](../../generated_pages/techniques/T0146.md) | In 2014 threat actors attributed to Russia spread the false narrative that a local chemical plant had leaked toxic fumes. This report discusses aspects of the operation:

[The chemical plant leak] hoax was just one in a wave of similar attacks during the second half of last year. On Dec. 13, two months after a handful of Ebola cases in the United States touched off a minor media panic, many of the same Twitter accounts used to spread the Columbian Chemicals hoax began to post about an outbreak of Ebola in Atlanta. [...] Again, the attention to detail was remarkable, suggesting a tremendous amount of effort. A YouTube video showed a team of hazmat-suited medical workers transporting a victim from the airport. Beyoncé’s recent single “7/11” played in the background, an apparent attempt to establish the video’s contemporaneity. A truck in the parking lot sported the logo of the Hartsfield-Jackson Atlanta International Airport.

Accounts which previously presented as Louisiana locals were repurposed for use in a different campaign, this time presenting as locals of Atlanta, a city over 500 miles from Louisiana and in a different time zone (T0146: Account Asset, T0097.101: Local Persona, T0143.002: Fabricated Persona, T0151.008: Microblogging Platform, T0150.004: Repurposed Asset).

A video was created which appeared to support the campaign’s narrative (T0087: Develop Video-Based Content), with great attention given to small details which made the video appear more legitimate. | -| [T0150.004 Repurposed Asset](../../generated_pages/techniques/T0150.004.md) | In 2014 threat actors attributed to Russia spread the false narrative that a local chemical plant had leaked toxic fumes. This report discusses aspects of the operation:

[The chemical plant leak] hoax was just one in a wave of similar attacks during the second half of last year. On Dec. 13, two months after a handful of Ebola cases in the United States touched off a minor media panic, many of the same Twitter accounts used to spread the Columbian Chemicals hoax began to post about an outbreak of Ebola in Atlanta. [...] Again, the attention to detail was remarkable, suggesting a tremendous amount of effort. A YouTube video showed a team of hazmat-suited medical workers transporting a victim from the airport. Beyoncé’s recent single “7/11” played in the background, an apparent attempt to establish the video’s contemporaneity. A truck in the parking lot sported the logo of the Hartsfield-Jackson Atlanta International Airport.

Accounts which previously presented as Louisiana locals were repurposed for use in a different campaign, this time presenting as locals of Atlanta, a city over 500 miles from Louisiana and in a different time zone (T0146: Account Asset, T0097.101: Local Persona, T0143.002: Fabricated Persona, T0151.008: Microblogging Platform, T0150.004: Repurposed Asset).

A video was created which appeared to support the campaign’s narrative (T0087: Develop Video-Based Content), with great attention given to small details which made the video appear more legitimate. | | [T0151.008 Microblogging Platform](../../generated_pages/techniques/T0151.008.md) | In 2014 threat actors attributed to Russia spread the false narrative that a local chemical plant had leaked toxic fumes. This report discusses aspects of the operation:

[The chemical plant leak] hoax was just one in a wave of similar attacks during the second half of last year. On Dec. 13, two months after a handful of Ebola cases in the United States touched off a minor media panic, many of the same Twitter accounts used to spread the Columbian Chemicals hoax began to post about an outbreak of Ebola in Atlanta. [...] Again, the attention to detail was remarkable, suggesting a tremendous amount of effort. A YouTube video showed a team of hazmat-suited medical workers transporting a victim from the airport. Beyoncé’s recent single “7/11” played in the background, an apparent attempt to establish the video’s contemporaneity. A truck in the parking lot sported the logo of the Hartsfield-Jackson Atlanta International Airport.

Accounts which previously presented as Louisiana locals were repurposed for use in a different campaign, this time presenting as locals of Atlanta, a city over 500 miles from Louisiana and in a different time zone (T0146: Account Asset, T0097.101: Local Persona, T0143.002: Fabricated Persona, T0151.008: Microblogging Platform, T0150.004: Repurposed Asset).

A video was created which appeared to support the campaign’s narrative (T0087: Develop Video-Based Content), with great attention given to small details which made the video appear more legitimate. | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/incidents/I00126.md b/generated_pages/incidents/I00126.md index 0f13440..ea29a96 100644 --- a/generated_pages/incidents/I00126.md +++ b/generated_pages/incidents/I00126.md @@ -25,9 +25,6 @@ | [T0097.102 Journalist Persona](../../generated_pages/techniques/T0097.102.md) | The target of the recently observed [highly targeted spearphishing attack by “Charming Kitten”, a hacker group attributed to Iran] had published an article related to Iran. The publicity appears to have garnered the attention of Charming Kitten, who subsequently created an email address to impersonate a reporter of an Israeli media organization in order to send the target an email. Prior to sending malware to the target, the attacker simply asked if the target would be open to reviewing a document they had written related to US foreign policy. The target agreed to do so, since this was not an unusual request; they are frequently asked by journalists to review opinion pieces relating to their field of work.

In an effort to further gain the target’s confidence, Charming Kitten continued the interaction with another benign email containing a list of questions, to which the target then responded with answers. After multiple days of benign and seemingly legitimate interaction, Charming Kitten finally sent a “draft report”; this was the first time anything opaquely malicious occurred. The “draft report” was, in fact, a password-protected RAR file containing a malicious LNK file. The password for the RAR file was provided in a subsequent email.


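As an analytical aside: the lookalike-domain element of this attack can be surfaced with a cheap string-similarity screen on sender domains. A minimal sketch using Python’s standard-library difflib; the domains shown are hypothetical, and a production check would also cover homoglyphs and added or dropped labels.

```python
# Score how closely a sender's domain resembles a known legitimate domain;
# near-identical but not equal is the lookalike signature.
from difflib import SequenceMatcher

def is_lookalike(candidate: str, legitimate: str, threshold: float = 0.85) -> bool:
    candidate, legitimate = candidate.lower(), legitimate.lower()
    if candidate == legitimate:
        return False  # identical means the real domain, not a lookalike
    return SequenceMatcher(None, candidate, legitimate).ratio() >= threshold

print(is_lookalike("news-outlet.co.il", "newsoutlet.co.il"))  # True (hypothetical domains)
```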
In this example, threat actors created an email address on a lookalike domain impersonating an existing Israeli news organisation, and posed as a reporter who worked there (T0097.102: Journalist Persona, T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0149.003: Lookalike Domain, T0149.002: Email Domain Asset), in order to convince the target to download a document containing malware (T0085.004: Develop Document, T0147.003: Malware Asset). | | [T0097.202 News Outlet Persona](../../generated_pages/techniques/T0097.202.md) | The target of the recently observed [highly targeted spearphishing attack by “Charming Kitten”, a hacker group attributed to Iran] had published an article related to Iran. The publicity appears to have garnered the attention of Charming Kitten, who subsequently created an email address to impersonate a reporter of an Israeli media organization in order to send the target an email. Prior to sending malware to the target, the attacker simply asked if the target would be open to reviewing a document they had written related to US foreign policy. The target agreed to do so, since this was not an unusual request; they are frequently asked by journalists to review opinion pieces relating to their field of work.

In an effort to further gain the target’s confidence, Charming Kitten continued the interaction with another benign email containing a list of questions, to which the target then responded with answers. After multiple days of benign and seemingly legitimate interaction, Charming Kitten finally sent a “draft report”; this was the first time anything opaquely malicious occurred. The “draft report” was, in fact, a password-protected RAR file containing a malicious LNK file. The password for the RAR file was provided in a subsequent email.


In this example, threat actors created an email address on a lookalike domain impersonating an existing Israeli news organisation, and posed as a reporter who worked there (T0097.102: Journalist Persona, T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0149.003: Lookalike Domain, T0149.002: Email Domain Asset), in order to convince the target to download a document containing malware (T0085.004: Develop Document, T0147.003: Malware Asset). | | [T0143.003 Impersonated Persona](../../generated_pages/techniques/T0143.003.md) | The target of the recently observed [highly targeted spearphishing attack by “Charming Kitten”, a hacker group attributed to Iran] had published an article related to Iran. The publicity appears to have garnered the attention of Charming Kitten, who subsequently created an email address to impersonate a reporter of an Israeli media organization in order to send the target an email. Prior to sending malware to the target, the attacker simply asked if the target would be open to reviewing a document they had written related to US foreign policy. The target agreed to do so, since this was not an unusual request; they are frequently asked by journalists to review opinion pieces relating to their field of work.

In an effort to further gain the target’s confidence, Charming Kitten continued the interaction with another benign email containing a list of questions, to which the target then responded with answers. After multiple days of benign and seemingly legitimate interaction, Charming Kitten finally sent a “draft report”; this was the first time anything opaquely malicious occurred. The “draft report” was, in fact, a password-protected RAR file containing a malicious LNK file. The password for the RAR file was provided in a subsequent email.


In this example, threat actors created an email address on a lookalike domain impersonating an existing Israeli news organisation, and posed as a reporter who worked there (T0097.102: Journalist Persona, T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0149.003: Lookalike Domain, T0149.002: Email Domain Asset), in order to convince the target to download a document containing malware (T0085.004: Develop Document, T0147.003: Malware Asset). | -| [T0147.003 Malware Asset](../../generated_pages/techniques/T0147.003.md) | The target of the recently observed [highly targeted spearphishing attack by “Charming Kitten”, a hacker group attributed to Iran] had published an article related to Iran. The publicity appears to have garnered the attention of Charming Kitten, who subsequently created an email address to impersonate a reporter of an Israeli media organization in order to send the target an email. Prior to sending malware to the target, the attacker simply asked if the target would be open to reviewing a document they had written related to US foreign policy. The target agreed to do so, since this was not an unusual request; they are frequently asked by journalists to review opinion pieces relating to their field of work.

In an effort to further gain the target’s confidence, Charming Kitten continued the interaction with another benign email containing a list of questions, to which the target then responded with answers. After multiple days of benign and seemingly legitimate interaction, Charming Kitten finally sent a “draft report”; this was the first time anything opaquely malicious occurred. The “draft report” was, in fact, a password-protected RAR file containing a malicious LNK file. The password for the RAR file was provided in a subsequent email.


In this example, threat actors created an email address on a lookalike domain impersonating an existing Israeli news organisation, and posed as a reporter who worked there (T0097.102: Journalist Persona, T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0149.003: Lookalike Domain, T0149.002: Email Domain Asset), in order to convince the target to download a document containing malware (T0085.004: Develop Document, T0147.003: Malware Asset). | | [T0149.002 Email Domain Asset](../../generated_pages/techniques/T0149.002.md) | The target of the recently observed [highly targeted spearphishing attack by “Charming Kitten”, a hacker group attributed to Iran] had published an article related to Iran. The publicity appears to have garnered the attention of Charming Kitten, who subsequently created an email address to impersonate a reporter of an Israeli media organization in order to send the target an email. Prior to sending malware to the target, the attacker simply asked if the target would be open to reviewing a document they had written related to US foreign policy. The target agreed to do so, since this was not an unusual request; they are frequently asked by journalists to review opinion pieces relating to their field of work.

In an effort to further gain the target’s confidence, Charming Kitten continued the interaction with another benign email containing a list of questions, to which the target then responded with answers. After multiple days of benign and seemingly legitimate interaction, Charming Kitten finally sent a “draft report”; this was the first time anything opaquely malicious occurred. The “draft report” was, in fact, a password-protected RAR file containing a malicious LNK file. The password for the RAR file was provided in a subsequent email.


In this example, threat actors created an email address on a lookalike domain impersonating an existing Israeli news organisation, and posed as a reporter who worked there (T0097.102: Journalist Persona, T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0149.003: Lookalike Domain, T0149.002: Email Domain Asset), in order to convince the target to download a document containing malware (T0085.004: Develop Document, T0147.003: Malware Asset). | -| [T0149.003 Lookalike Domain](../../generated_pages/techniques/T0149.003.md) | The target of the recently observed [highly targeted spearphishing attack by “Charming Kitten”, a hacker group attributed to Iran] had published an article related to Iran. The publicity appears to have garnered the attention of Charming Kitten, who subsequently created an email address to impersonate a reporter of an Israeli media organization in order to send the target an email. Prior to sending malware to the target, the attacker simply asked if the target would be open to reviewing a document they had written related to US foreign policy. The target agreed to do so, since this was not an unusual request; they are frequently asked by journalists to review opinion pieces relating to their field of work.

In an effort to further gain the target’s confidence, Charming Kitten continued the interaction with another benign email containing a list of questions, to which the target then responded with answers. After multiple days of benign and seemingly legitimate interaction, Charming Kitten finally sent a “draft report”; this was the first time anything opaquely malicious occurred. The “draft report” was, in fact, a password-protected RAR file containing a malicious LNK file. The password for the RAR file was provided in a subsequent email.


In this example, threat actors created an email address on a lookalike domain impersonating an existing Israeli news organisation, and posed as a reporter who worked there (T0097.102: Journalist Persona, T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0149.003: Lookalike Domain, T0149.002: Email Domain Asset), in order to convince the target to download a document containing malware (T0085.004: Develop Document, T0147.003: Malware Asset). | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/incidents/I00128.md b/generated_pages/incidents/I00128.md index 3f6aee5..445ad42 100644 --- a/generated_pages/incidents/I00128.md +++ b/generated_pages/incidents/I00128.md @@ -22,11 +22,9 @@ | Technique | Description given for this incident | | --------- | ------------------------- | | [T0097.208 Social Cause Persona](../../generated_pages/techniques/T0097.208.md) | [Meta removed a network of assets for coordinated inauthentic behaviour. One page] in the network, @StopMEK, was promoting views against the People’s Mujahedin of Iran (MEK), the largest and most active political opposition group against the Islamic Republic of Iran Leadership.

The content on the page drew narratives showing parallels between the Islamic State of Iraq and Syria (ISIS) and the MEK.

Apart from images and memes, the @StopMEK page shared a link to an archived report on how the United States was monitoring the MEK’s movement in Iran in the mid-1990’s. The file was embedded as a QR code on one of the page’s images.


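As an analytical aside: where an image is suspected of carrying a QR code off-ramp, the embedded destination can be recovered programmatically. A minimal sketch assuming the third-party pyzbar and Pillow packages (pip install pyzbar pillow; pyzbar also needs the system zbar library); the file name and output shown are placeholders.

```python
# Decode any QR codes in an image and return their payloads (typically URLs).
from PIL import Image
from pyzbar.pyzbar import decode

def qr_payloads(image_path):
    return [symbol.data.decode("utf-8") for symbol in decode(Image.open(image_path))]

# e.g. qr_payloads("page_image.png") -> ["https://example.org/archived-report"]
```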
In this example a Facebook page presented itself as focusing on a political cause (T0097.208: Social Cause Persona, T0151.001: Social Media Platform, T0151.002: Online Community Group). The page embedded a QR code (T0122: Direct Users to Alternative Platforms, T0153.004: QR Code Asset), which took users to a document hosted on another website (T0152.004: Website Asset). | -| [T0122 Direct Users to Alternative Platforms](../../generated_pages/techniques/T0122.md) | [Meta removed a network of assets for coordinated inauthentic behaviour. One page] in the network, @StopMEK, was promoting views against the People’s Mujahedin of Iran (MEK), the largest and most active political opposition group against the Islamic Republic of Iran Leadership.

The content on the page drew narratives showing parallels between the Islamic State of Iraq and Syria (ISIS) and the MEK.

Apart from images and memes, the @StopMEK page shared a link to an archived report on how the United States was monitoring the MEK’s movement in Iran in the mid-1990’s. The file was embedded as a QR code on one of the page’s images.


In this example a Facebook page presented itself as focusing on a political cause (T0097.208: Social Cause Persona, T0151.001: Social Media Platform, T0151.002: Online Community Group). The page embedded a QR code (T0122: Direct Users to Alternative Platforms, T0153.004: QR Code Asset), which took users to a document hosted on another website (T0152.004: Website Asset). | | [T0151.001 Social Media Platform](../../generated_pages/techniques/T0151.001.md) | [Meta removed a network of assets for coordinated inauthentic behaviour. One page] in the network, @StopMEK, was promoting views against the People’s Mujahedin of Iran (MEK), the largest and most active political opposition group against the Islamic Republic of Iran Leadership.

The content on the page drew narratives showing parallels between the Islamic State of Iraq and Syria (ISIS) and the MEK.

Apart from images and memes, the @StopMEK page shared a link to an archived report on how the United States was monitoring the MEK’s movement in Iran in the mid-1990s. The file was embedded as a QR code on one of the page’s images.


In this example a Facebook page presented itself as focusing on a political cause (T0097.208: Social Cause Persona, T0151.001: Social Media Platform, T0151.002: Online Community Group). Within the page it embedded a QR code (T0122: Direct Users to Alternative Platforms, T0153.004: QR Code Asset), which took users to a document hosted on another website (T0152.004: Website Asset). | | [T0151.002 Online Community Group](../../generated_pages/techniques/T0151.002.md) | [Meta removed a network of assets for coordinated inauthentic behaviour. One page] in the network, @StopMEK, was promoting views against the People’s Mujahedin of Iran (MEK), the largest and most active political opposition group against the Islamic Republic of Iran Leadership.

The content on the page drew narratives showing parallels between the Islamic State of Iraq and Syria (ISIS) and the MEK.

Apart from images and memes, the @StopMEK page shared a link to an archived report on how the United States was monitoring the MEK’s movement in Iran in the mid-1990s. The file was embedded as a QR code on one of the page’s images.


In this example a Facebook page presented itself as focusing on a political cause (T0097.208: Social Cause Persona, T0151.001: Social Media Platform, T0151.002: Online Community Group). Within the page it embedded a QR code (T0122: Direct Users to Alternative Platforms, T0153.004: QR Code Asset), which took users to a document hosted on another website (T0152.004: Website Asset). | | [T0152.004 Website Asset](../../generated_pages/techniques/T0152.004.md) | [Meta removed a network of assets for coordinated inauthentic behaviour. One page] in the network, @StopMEK, was promoting views against the People’s Mujahedin of Iran (MEK), the largest and most active political opposition group against the Islamic Republic of Iran Leadership.

The content on the page drew narratives showing parallels between the Islamic State of Iraq and Syria (ISIS) and the MEK.

Apart from images and memes, the @StopMEK page shared a link to an archived report on how the United States was monitoring the MEK’s movement in Iran in the mid-1990s. The file was embedded as a QR code on one of the page’s images.


In this example a Facebook page presented itself as focusing on a political cause (T0097.208: Social Cause Persona, T0151.001: Social Media Platform, T0151.002: Online Community Group). Within the page it embedded a QR code (T0122: Direct Users to Alternative Platforms, T0153.004: QR Code Asset), which took users to a document hosted on another website (T0152.004: Website Asset). | | [T0153.004 QR Code Asset](../../generated_pages/techniques/T0153.004.md) | [Meta removed a network of assets for coordinated inauthentic behaviour. One page] in the network, @StopMEK, was promoting views against the People’s Mujahedin of Iran (MEK), the largest and most active political opposition group against the Islamic Republic of Iran Leadership.

The content on the page drew narratives showing parallels between the Islamic State of Iraq and Syria (ISIS) and the MEK.

Apart from images and memes, the @StopMEK page shared a link to an archived report on how the United States was monitoring the MEK’s movement in Iran in the mid-1990s. The file was embedded as a QR code on one of the page’s images.


In this example a Facebook page presented itself as focusing on a political cause (T0097.208: Social Cause Persona, T0151.001: Social Media Platform, T0151.002: Online Community Group). Within the page it embedded a QR code (T0122: Direct Users to Alternative Platforms, T0153.004: QR Code Asset), which took users to a document hosted on another website (T0152.004: Website Asset). | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/incidents/I00129.md b/generated_pages/incidents/I00129.md index 28f6326..1f806e4 100644 --- a/generated_pages/incidents/I00129.md +++ b/generated_pages/incidents/I00129.md @@ -21,7 +21,6 @@ | Technique | Description given for this incident | | --------- | ------------------------- | -| [T0143.003 Impersonated Persona](../../generated_pages/techniques/T0143.003.md) | An 18-year-old hacker who pulled off a huge breach in 2020, infiltrating several high profile Twitter accounts to solicit bitcoin transactions, has agreed to serve three years in prison for his actions.

Graham Ivan Clark, of Florida, was 17 years old at the time of the hack in July, during which he took over a number of major accounts including those of Joe Biden, Bill Gates and Kim Kardashian West.

Once he accessed them, Clark tweeted a link to a bitcoin address and wrote “all bitcoin sent to our address below will be sent back to you doubled!” According to court documents, Clark made more than $100,000 from the scheme, which his lawyers say he has since returned.

Clark was able to access the accounts after convincing an employee at Twitter he worked in the company’s information technology department, according to the Tampa Bay Times.


In this example a threat actor gained access to Twitter’s customer service portal through social engineering (T0146.004: Administrator Account Asset, T0150.005: Compromised Asset, T0151.008: Microblogging Platform), which they used to take over accounts of public figures (T0146.003: Verified Account Asset, T0143.003: Impersonated Persona, T0150.005: Compromised Asset, T0151.008: Microblogging Platform).

The threat actor used these compromised accounts to trick their followers into sending bitcoin to their wallet (T0148.009: Cryptocurrency Wallet). | | [T0146.004 Administrator Account Asset](../../generated_pages/techniques/T0146.004.md) | An 18-year-old hacker who pulled off a huge breach in 2020, infiltrating several high profile Twitter accounts to solicit bitcoin transactions, has agreed to serve three years in prison for his actions.

Graham Ivan Clark, of Florida, was 17 years old at the time of the hack in July, during which he took over a number of major accounts including those of Joe Biden, Bill Gates and Kim Kardashian West.

Once he accessed them, Clark tweeted a link to a bitcoin address and wrote “all bitcoin sent to our address below will be sent back to you doubled!” According to court documents, Clark made more than $100,000 from the scheme, which his lawyers say he has since returned.

Clark was able to access the accounts after convincing an employee at Twitter he worked in the company’s information technology department, according to the Tampa Bay Times.


In this example a threat actor gained access to Twitter’s customer service portal through social engineering (T0146.004: Administrator Account Asset, T0150.005: Compromised Asset, T0151.008: Microblogging Platform), which they used to take over accounts of public figures (T0146.003: Verified Account Asset, T0143.003: Impersonated Persona, T0150.005: Compromised Asset, T0151.008: Microblogging Platform).

The threat actor used these compromised accounts to trick their followers into sending bitcoin to their wallet (T0148.009: Cryptocurrency Wallet). | | [T0148.009 Cryptocurrency Wallet](../../generated_pages/techniques/T0148.009.md) | An 18-year-old hacker who pulled off a huge breach in 2020, infiltrating several high profile Twitter accounts to solicit bitcoin transactions, has agreed to serve three years in prison for his actions.

Graham Ivan Clark, of Florida, was 17 years old at the time of the hack in July, during which he took over a number of major accounts including those of Joe Biden, Bill Gates and Kim Kardashian West.

Once he accessed them, Clark tweeted a link to a bitcoin address and wrote “all bitcoin sent to our address below will be sent back to you doubled!” According to court documents, Clark made more than $100,000 from the scheme, which his lawyers say he has since returned.

Clark was able to access the accounts after convincing an employee at Twitter he worked in the company’s information technology department, according to the Tampa Bay Times.


In this example a threat actor gained access to Twitter’s customer service portal through social engineering (T0146.004: Administrator Account Asset, T0150.005: Compromised Asset, T0151.008: Microblogging Platform), which they used to take over accounts of public figures (T0146.003: Verified Account Asset, T0143.003: Impersonated Persona, T0150.005: Compromised Asset, T0151.008: Microblogging Platform).

The threat actor used these compromised accounts to trick their followers into sending bitcoin to their wallet (T0148.009: Cryptocurrency Wallet). | | [T0150.005 Compromised Asset](../../generated_pages/techniques/T0150.005.md) | An 18-year-old hacker who pulled off a huge breach in 2020, infiltrating several high profile Twitter accounts to solicit bitcoin transactions, has agreed to serve three years in prison for his actions.

Graham Ivan Clark, of Florida, was 17 years old at the time of the hack in July, during which he took over a number of major accounts including those of Joe Biden, Bill Gates and Kim Kardashian West.

Once he accessed them, Clark tweeted a link to a bitcoin address and wrote “all bitcoin sent to our address below will be sent back to you doubled!” According to court documents, Clark made more than $100,000 from the scheme, which his lawyers say he has since returned.

Clark was able to access the accounts after convincing an employee at Twitter he worked in the company’s information technology department, according to the Tampa Bay Times.


In this example a threat actor gained access to Twitter’s customer service portal through social engineering (T0146.004: Administrator Account Asset, T0150.005: Compromised Asset, T0151.008: Microblogging Platform), which they used to take over accounts of public figures (T0146.003: Verified Account Asset, T0143.003: Impersonated Persona, T0150.005: Compromised Asset, T0151.008: Microblogging Platform).

The threat actor used these compromised accounts to trick their followers into sending bitcoin to their wallet (T0148.009: Cryptocurrency Wallet). | @@ -29,4 +28,3 @@ | [T0151.008 Microblogging Platform](../../generated_pages/techniques/T0151.008.md) | An 18-year-old hacker who pulled off a huge breach in 2020, infiltrating several high profile Twitter accounts to solicit bitcoin transactions, has agreed to serve three years in prison for his actions.

Graham Ivan Clark, of Florida, was 17 years old at the time of the hack in July, during which he took over a number of major accounts including those of Joe Biden, Bill Gates and Kim Kardashian West.

Once he accessed them, Clark tweeted a link to a bitcoin address and wrote “all bitcoin sent to our address below will be sent back to you doubled!” According to court documents, Clark made more than $100,000 from the scheme, which his lawyers say he has since returned.

Clark was able to access the accounts after convincing an employee at Twitter he worked in the company’s information technology department, according to the Tampa Bay Times.


In this example a threat actor gained access to Twitter’s customer service portal through social engineering (T0146.004: Administrator Account Asset, T0150.005: Compromised Asset, T0151.008: Microblogging Platform), which they used to take over accounts of public figures (T0146.003: Verified Account Asset, T0143.003: Impersonated Persona, T0150.005: Compromised Asset, T0151.008: Microblogging Platform).

The threat actor used these compromised accounts to trick their followers into sending bitcoin to their wallet (T0148.009: Cryptocurrency Wallet). | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0002.md b/generated_pages/techniques/T0002.md index f296f0b..6dfa6db 100644 --- a/generated_pages/techniques/T0002.md +++ b/generated_pages/techniques/T0002.md @@ -2,6 +2,58 @@ **Summary**: Organise citizens around pro-state messaging. Coordinate paid or volunteer groups to push state propaganda. +**Tactic**: TA02 Plan Objectives + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | +| [C00029 Create fake website to issue counter narrative and counter narrative through physical merchandise](../../generated_pages/counters/C00029.md) | D03 | +| [C00030 Develop a compelling counter narrative (truth based)](../../generated_pages/counters/C00030.md) | D03 | +| [C00031 Dilute the core narrative - create multiple permutations, target / amplify](../../generated_pages/counters/C00031.md) | D03 | +| [C00082 Ground truthing as automated response to pollution](../../generated_pages/counters/C00082.md) | D03 | +| [C00084 Modify disinformation narratives, and rebroadcast them](../../generated_pages/counters/C00084.md) | D03 | + + +# Technique T0002: Facilitate State Propaganda + +**Summary**: Organise citizens around pro-state messaging. Coordinate paid or volunteer groups to push state propaganda. + +**Tactic**: TA02 Plan Objectives + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | +| [C00029 Create fake website to issue counter narrative and counter narrative through physical merchandise](../../generated_pages/counters/C00029.md) | D03 | +| [C00030 Develop a compelling counter narrative (truth based)](../../generated_pages/counters/C00030.md) | D03 | +| [C00031 Dilute the core narrative - create multiple permutations, target / amplify](../../generated_pages/counters/C00031.md) | D03 | +| [C00082 Ground truthing as automated response to pollution](../../generated_pages/counters/C00082.md) | D03 | +| [C00084 Modify disinformation narratives, and rebroadcast them](../../generated_pages/counters/C00084.md) | D03 | + + +# Technique T0002: Facilitate State Propaganda + +**Summary**: Organise citizens around pro-state messaging. Coordinate paid or volunteer groups to push state propaganda. + **Tactic**: TA02 Plan Objectives @@ -24,4 +76,3 @@ | [C00084 Modify disinformation narratives, and rebroadcast them](../../generated_pages/counters/C00084.md) | D03 | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0003.md b/generated_pages/techniques/T0003.md index 4e39874..6259131 100644 --- a/generated_pages/techniques/T0003.md +++ b/generated_pages/techniques/T0003.md @@ -2,6 +2,52 @@ **Summary**: Use or adapt existing narrative themes, where narratives are the baseline stories of a target audience. Narratives form the bedrock of our worldviews. New information is understood through a process firmly grounded in this bedrock. 
If new information is not consistent with the prevailing narratives of an audience, it will be ignored. Effective campaigns will frame their misinformation in the context of these narratives. Highly effective campaigns will make extensive use of audience-appropriate archetypes and meta-narratives throughout their content creation and amplification practices. +**Tactic**: TA14 Develop Narratives + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | +| [C00080 Create competing narrative](../../generated_pages/counters/C00080.md) | D03 | +| [C00081 Highlight flooding and noise, and explain motivations](../../generated_pages/counters/C00081.md) | D03 | + + +# Technique T0003: Leverage Existing Narratives + +**Summary**: Use or adapt existing narrative themes, where narratives are the baseline stories of a target audience. Narratives form the bedrock of our worldviews. New information is understood through a process firmly grounded in this bedrock. If new information is not consistent with the prevailing narratives of an audience, it will be ignored. Effective campaigns will frame their misinformation in the context of these narratives. Highly effective campaigns will make extensive use of audience-appropriate archetypes and meta-narratives throughout their content creation and amplification practices. + +**Tactic**: TA14 Develop Narratives + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | +| [C00080 Create competing narrative](../../generated_pages/counters/C00080.md) | D03 | +| [C00081 Highlight flooding and noise, and explain motivations](../../generated_pages/counters/C00081.md) | D03 | + + +# Technique T0003: Leverage Existing Narratives + +**Summary**: Use or adapt existing narrative themes, where narratives are the baseline stories of a target audience. Narratives form the bedrock of our worldviews. New information is understood through a process firmly grounded in this bedrock. If new information is not consistent with the prevailing narratives of an audience, it will be ignored. Effective campaigns will frame their misinformation in the context of these narratives. Highly effective campaigns will make extensive use of audience-appropriate archetypes and meta-narratives throughout their content creation and amplification practices. + **Tactic**: TA14 Develop Narratives @@ -21,4 +67,3 @@ | [C00081 Highlight flooding and noise, and explain motivations](../../generated_pages/counters/C00081.md) | D03 | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0004.md index 371001a..d157bcf 100644 --- a/generated_pages/techniques/T0004.md +++ b/generated_pages/techniques/T0004.md @@ -2,6 +2,50 @@ **Summary**: Advance competing narratives connected to the same issue, i.e. on the one hand denying an incident while at the same time dismissing it. Suppressing or discouraging narratives already spreading requires an alternative.
The simplest set of narrative techniques in response would be the construction and promotion of contradictory alternatives centred on denial, deflection, dismissal, counter-charges, excessive standards of proof, bias in prohibition or enforcement, and so on. These competing narratives allow loyalists cover, but are less compelling to opponents and fence-sitters than campaigns built around existing narratives or highly explanatory master narratives. Competing narratives, as such, are especially useful in the "firehose of misinformation" approach. +**Tactic**: TA14 Develop Narratives + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | +| [C00042 Address truth contained in narratives](../../generated_pages/counters/C00042.md) | D04 | + + +# Technique T0004: Develop Competing Narratives + +**Summary**: Advance competing narratives connected to the same issue, i.e. on the one hand denying an incident while at the same time dismissing it. Suppressing or discouraging narratives already spreading requires an alternative. The simplest set of narrative techniques in response would be the construction and promotion of contradictory alternatives centred on denial, deflection, dismissal, counter-charges, excessive standards of proof, bias in prohibition or enforcement, and so on. These competing narratives allow loyalists cover, but are less compelling to opponents and fence-sitters than campaigns built around existing narratives or highly explanatory master narratives. Competing narratives, as such, are especially useful in the "firehose of misinformation" approach. + +**Tactic**: TA14 Develop Narratives + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | +| [C00042 Address truth contained in narratives](../../generated_pages/counters/C00042.md) | D04 | + + +# Technique T0004: Develop Competing Narratives + +**Summary**: Advance competing narratives connected to the same issue, i.e. on the one hand denying an incident while at the same time dismissing it. Suppressing or discouraging narratives already spreading requires an alternative. The simplest set of narrative techniques in response would be the construction and promotion of contradictory alternatives centred on denial, deflection, dismissal, counter-charges, excessive standards of proof, bias in prohibition or enforcement, and so on. These competing narratives allow loyalists cover, but are less compelling to opponents and fence-sitters than campaigns built around existing narratives or highly explanatory master narratives. Competing narratives, as such, are especially useful in the "firehose of misinformation" approach.
+ **Tactic**: TA14 Develop Narratives @@ -20,4 +64,3 @@ | [C00042 Address truth contained in narratives](../../generated_pages/counters/C00042.md) | D04 | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0010.md index 5157450..7ce763f 100644 --- a/generated_pages/techniques/T0010.md +++ b/generated_pages/techniques/T0010.md @@ -2,6 +2,70 @@ **Summary**: Cultivate propagandists for a cause, the goals of which are not fully comprehended, and who are used cynically by the leaders of the cause. Independent actors use social media and specialised web sites to strategically reinforce and spread messages compatible with their own. Their networks are infiltrated and used by state media disinformation organisations to amplify the state’s own disinformation strategies against target populations. Many are traffickers in conspiracy theories or hoaxes, unified by a suspicion of Western governments and mainstream media. Their narratives, which appeal to leftists hostile to globalism and military intervention and nationalists against immigration, are frequently infiltrated and shaped by state-controlled trolls and altered news items from agencies such as RT and Sputnik. Also known as "useful idiots" or "unwitting agents". +**Tactic**: TA15 Establish Assets + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | +| [C00009 Educate high profile influencers on best practices](../../generated_pages/counters/C00009.md) | D02 | +| [C00046 Marginalise and discredit extremist groups](../../generated_pages/counters/C00046.md) | D04 | +| [C00048 Name and Shame Influencers](../../generated_pages/counters/C00048.md) | D07 | +| [C00051 Counter social engineering training](../../generated_pages/counters/C00051.md) | D02 | +| [C00111 Reduce polarisation by connecting and presenting sympathetic renditions of opposite views](../../generated_pages/counters/C00111.md) | D04 | +| [C00130 Mentorship: elders, youth, credit. Learn vicariously.](../../generated_pages/counters/C00130.md) | D07 | +| [C00162 Unravel/target the Potemkin villages](../../generated_pages/counters/C00162.md) | D03 | +| [C00169 develop a creative content hub](../../generated_pages/counters/C00169.md) | D03 | +| [C00195 Redirect searches away from disinformation or extremist content](../../generated_pages/counters/C00195.md) | D02 | +| [C00200 Respected figure (influencer) disavows misinfo](../../generated_pages/counters/C00200.md) | D03 | +| [C00203 Stop offering press credentials to propaganda outlets](../../generated_pages/counters/C00203.md) | D03 | + + +# Technique T0010: Cultivate Ignorant Agents + +**Summary**: Cultivate propagandists for a cause, the goals of which are not fully comprehended, and who are used cynically by the leaders of the cause. Independent actors use social media and specialised web sites to strategically reinforce and spread messages compatible with their own. Their networks are infiltrated and used by state media disinformation organisations to amplify the state’s own disinformation strategies against target populations. Many are traffickers in conspiracy theories or hoaxes, unified by a suspicion of Western governments and mainstream media.
Their narratives, which appeal to leftists hostile to globalism and military intervention and nationalists against immigration, are frequently infiltrated and shaped by state-controlled trolls and altered news items from agencies such as RT and Sputnik. Also known as "useful idiots" or "unwitting agents". + +**Tactic**: TA15 Establish Assets + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | +| [C00009 Educate high profile influencers on best practices](../../generated_pages/counters/C00009.md) | D02 | +| [C00046 Marginalise and discredit extremist groups](../../generated_pages/counters/C00046.md) | D04 | +| [C00048 Name and Shame Influencers](../../generated_pages/counters/C00048.md) | D07 | +| [C00051 Counter social engineering training](../../generated_pages/counters/C00051.md) | D02 | +| [C00111 Reduce polarisation by connecting and presenting sympathetic renditions of opposite views](../../generated_pages/counters/C00111.md) | D04 | +| [C00130 Mentorship: elders, youth, credit. Learn vicariously.](../../generated_pages/counters/C00130.md) | D07 | +| [C00162 Unravel/target the Potemkin villages](../../generated_pages/counters/C00162.md) | D03 | +| [C00169 develop a creative content hub](../../generated_pages/counters/C00169.md) | D03 | +| [C00195 Redirect searches away from disinformation or extremist content](../../generated_pages/counters/C00195.md) | D02 | +| [C00200 Respected figure (influencer) disavows misinfo](../../generated_pages/counters/C00200.md) | D03 | +| [C00203 Stop offering press credentials to propaganda outlets](../../generated_pages/counters/C00203.md) | D03 | + + +# Technique T0010: Cultivate Ignorant Agents + +**Summary**: Cultivate propagandists for a cause, the goals of which are not fully comprehended, and who are used cynically by the leaders of the cause. Independent actors use social media and specialised web sites to strategically reinforce and spread messages compatible with their own. Their networks are infiltrated and used by state media disinformation organisations to amplify the state’s own disinformation strategies against target populations. Many are traffickers in conspiracy theories or hoaxes, unified by a suspicion of Western governments and mainstream media. Their narratives, which appeal to leftists hostile to globalism and military intervention and nationalists against immigration, are frequently infiltrated and shaped by state-controlled trolls and altered news items from agencies such as RT and Sputnik. Also known as "useful idiots" or "unwitting agents". + **Tactic**: TA15 Establish Assets @@ -30,4 +94,3 @@ | [C00203 Stop offering press credentials to propaganda outlets](../../generated_pages/counters/C00203.md) | D03 | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0014.001.md index 820d9fc..b927819 100644 --- a/generated_pages/techniques/T0014.001.md +++ b/generated_pages/techniques/T0014.001.md @@ -2,6 +2,48 @@ **Summary**: Raising funds from malign actors may include contributions from foreign agents, cutouts or proxies, shell companies, dark money groups, etc.
+**Tactic**: TA15 Establish Assets **Parent Technique:** T0014 Prepare Fundraising Campaigns + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0014.001: Raise Funds from Malign Actors + +**Summary**: Raising funds from malign actors may include contributions from foreign agents, cutouts or proxies, shell companies, dark money groups, etc. + +**Tactic**: TA15 Establish Assets **Parent Technique:** T0014 Prepare Fundraising Campaigns + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0014.001: Raise Funds from Malign Actors + +**Summary**: Raising funds from malign actors may include contributions from foreign agents, cutouts or proxies, shell companies, dark money groups, etc. + **Tactic**: TA15 Establish Assets @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0014.002.md index 9ed8f0b..4acadd1 100644 --- a/generated_pages/techniques/T0014.002.md +++ b/generated_pages/techniques/T0014.002.md @@ -2,6 +2,48 @@ **Summary**: Raising funds from ignorant agents may include scams, donations intended for one stated purpose but then used for another, etc. +**Tactic**: TA15 Establish Assets **Parent Technique:** T0014 Prepare Fundraising Campaigns + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0014.002: Raise Funds from Ignorant Agents + +**Summary**: Raising funds from ignorant agents may include scams, donations intended for one stated purpose but then used for another, etc. + +**Tactic**: TA15 Establish Assets **Parent Technique:** T0014 Prepare Fundraising Campaigns + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0014.002: Raise Funds from Ignorant Agents + +**Summary**: Raising funds from ignorant agents may include scams, donations intended for one stated purpose but then used for another, etc. + **Tactic**: TA15 Establish Assets @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0014.md index b11182a..59fb482 100644 --- a/generated_pages/techniques/T0014.md +++ b/generated_pages/techniques/T0014.md @@ -2,6 +2,54 @@ **Summary**: Fundraising campaigns refer to an influence operation’s systematic effort to seek financial support for a charity, cause, or other enterprise using online activities that further promote operation information pathways while raising a profit. Many influence operations have engaged in crowdfunding services on platforms including Tipeee, Patreon, and GoFundMe.
An operation may use its previously prepared fundraising campaigns (see: Develop Information Pathways) to promote operation messaging while raising money to support its activities. +**Tactic**: TA15 Establish Assets + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | +| [C00059 Verification of project before posting fund requests](../../generated_pages/counters/C00059.md) | D02 | +| [C00155 Ban incident actors from funding sites](../../generated_pages/counters/C00155.md) | D02 | +| [C00216 Use advertiser controls to stem flow of funds to bad actors](../../generated_pages/counters/C00216.md) | D02 | + + +# Technique T0014: Prepare Fundraising Campaigns + +**Summary**: Fundraising campaigns refer to an influence operation’s systematic effort to seek financial support for a charity, cause, or other enterprise using online activities that further promote operation information pathways while raising a profit. Many influence operations have engaged in crowdfunding services on platforms including Tipeee, Patreon, and GoFundMe. An operation may use its previously prepared fundraising campaigns (see: Develop Information Pathways) to promote operation messaging while raising money to support its activities. + +**Tactic**: TA15 Establish Assets + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | +| [C00059 Verification of project before posting fund requests](../../generated_pages/counters/C00059.md) | D02 | +| [C00155 Ban incident actors from funding sites](../../generated_pages/counters/C00155.md) | D02 | +| [C00216 Use advertiser controls to stem flow of funds to bad actors](../../generated_pages/counters/C00216.md) | D02 | + + +# Technique T0014: Prepare Fundraising Campaigns + +**Summary**: Fundraising campaigns refer to an influence operation’s systematic effort to seek financial support for a charity, cause, or other enterprise using online activities that further promote operation information pathways while raising a profit. Many influence operations have engaged in crowdfunding services on platforms including Tipeee, Patreon, and GoFundMe. An operation may use its previously prepared fundraising campaigns (see: Develop Information Pathways) to promote operation messaging while raising money to support its activities. + **Tactic**: TA15 Establish Assets @@ -22,4 +70,3 @@ | [C00216 Use advertiser controls to stem flow of funds to bad actors](../../generated_pages/counters/C00216.md) | D02 | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0015.001.md index bdec23e..1c2f12e 100644 --- a/generated_pages/techniques/T0015.001.md +++ b/generated_pages/techniques/T0015.001.md @@ -2,6 +2,48 @@ **Summary**: Use a dedicated, existing hashtag for the campaign/incident. This Technique covers behaviours previously documented by T0104.005: Use Hashtags, which has since been deprecated.
+**Tactic**: TA06 Develop Content **Parent Technique:** T0015 Create Hashtags and Search Artefacts + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0015.001: Use Existing Hashtag + +**Summary**: Use a dedicated, existing hashtag for the campaign/incident. This Technique covers behaviours previously documented by T0104.005: Use Hashtags, which has since been deprecated. + +**Tactic**: TA06 Develop Content **Parent Technique:** T0015 Create Hashtags and Search Artefacts + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0015.001: Use Existing Hashtag + +**Summary**: Use a dedicated, existing hashtag for the campaign/incident. This Technique covers behaviours previously documented by T0104.005: Use Hashtags, which has since been deprecated. + **Tactic**: TA06 Develop Content @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0015.002.md b/generated_pages/techniques/T0015.002.md index 6f96f7c..9b38697 100644 --- a/generated_pages/techniques/T0015.002.md +++ b/generated_pages/techniques/T0015.002.md @@ -2,6 +2,48 @@ **Summary**: Create a campaign/incident specific hashtag. This Technique covers behaviours previously documented by T0104.006: Create Dedicated Hashtag, which has since been deprecated. +**Tactic**: TA06 Develop Content **Parent Technique:** T0015 Create Hashtags and Search Artefacts + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0015.002: Create New Hashtag + +**Summary**: Create a campaign/incident specific hashtag. This Technique covers behaviours previously documented by T0104.006: Create Dedicated Hashtag, which has since been deprecated. + +**Tactic**: TA06 Develop Content **Parent Technique:** T0015 Create Hashtags and Search Artefacts + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0015.002: Create New Hashtag + +**Summary**: Create a campaign/incident specific hashtag. This Technique covers behaviours previously documented by T0104.006: Create Dedicated Hashtag, which has since been deprecated. + **Tactic**: TA06 Develop Content @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0015.md b/generated_pages/techniques/T0015.md index 725c115..67c8e2f 100644 --- a/generated_pages/techniques/T0015.md +++ b/generated_pages/techniques/T0015.md @@ -2,6 +2,52 @@ **Summary**: Create one or more hashtags and/or hashtag groups. Many incident-based campaigns will create hashtags to promote their fabricated event. Creating a hashtag for an incident can have two important effects: 1. 
Create a perception of reality around an event (certainly only "real" events would be discussed in a hashtag; after all, the event has a name!), and 2. Publicise the story more widely through trending lists and search behaviour. An asset is needed to direct/control/manage the "conversation" connected to launching a new incident/campaign with a new hashtag for applicable social media sites. +**Tactic**: TA06 Develop Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00086 #WeAreNotSafe – Exposing How a Post-October 7th Disinformation Network Operates on Israeli Social Media](../../generated_pages/incidents/I00086.md) | In this report accounts were identified as part of “a sophisticated and extensive coordinated network orchestrating a disinformation campaign targeting Israeli digital spaces since October 7th, 2023”, which posted hashtags alongside campaign content (T0015: Create Hashtags and Search Artefacts):

“The accounts post generic images to fill their account feed to make the account seem real. They then employ a hidden hashtag in their posts, consisting of a seemingly random string of numbers and letters.

“The hypothesis regarding this tactic is that the group orchestrating these accounts utilizes these hashtags as a means of indexing them. This system likely serves a dual purpose: firstly, to keep track of the network’s expansive network of accounts and unique posts, and secondly, to streamline the process of boosting engagement among these accounts. By searching for these specific, unique hashtags, the group can quickly locate posts from their network and engage with them using other fake accounts, thereby artificially inflating the visibility and perceived authenticity of the fake account.”
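The indexing behaviour quoted above suggests a simple analyst-side detection heuristic (an assumption of ours, not code from the report): flag hashtags that look like random alphanumeric strings and recur across several accounts. Thresholds and sample data below are illustrative.

```python
# Heuristic sketch: surface candidate "hidden hashtags" - random-looking
# alphanumeric tags shared by multiple accounts. Sample data is invented.
import re
from collections import defaultdict

RANDOMISH = re.compile(r"^(?=.*[0-9])(?=.*[a-z])[a-z0-9]{8,}$")  # mixed letters+digits, 8+ chars

def hidden_hashtag_candidates(posts, min_accounts=3):
    """posts: iterable of (account_id, list_of_hashtags) pairs."""
    accounts_per_tag = defaultdict(set)
    for account, tags in posts:
        for tag in tags:
            if RANDOMISH.match(tag.lower().lstrip("#")):
                accounts_per_tag[tag].add(account)
    return {tag: accs for tag, accs in accounts_per_tag.items()
            if len(accs) >= min_accounts}

sample = [("acct1", ["#x7k93fqa2"]), ("acct2", ["#x7k93fqa2"]),
          ("acct3", ["#x7k93fqa2", "#news"])]
print(hidden_hashtag_candidates(sample))  # flags '#x7k93fqa2' only
```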
| + + +| Counters | Response types | +| -------- | -------------- | +| [C00066 Co-opt a hashtag and drown it out (hijack it back)](../../generated_pages/counters/C00066.md) | D03 | + + +# Technique T0015: Create Hashtags and Search Artefacts + +**Summary**: Create one or more hashtags and/or hashtag groups. Many incident-based campaigns will create hashtags to promote their fabricated event. Creating a hashtag for an incident can have two important effects: 1. Create a perception of reality around an event (certainly only "real" events would be discussed in a hashtag; after all, the event has a name!), and 2. Publicise the story more widely through trending lists and search behaviour. An asset is needed to direct/control/manage the "conversation" connected to launching a new incident/campaign with a new hashtag for applicable social media sites. + +**Tactic**: TA06 Develop Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00086 #WeAreNotSafe – Exposing How a Post-October 7th Disinformation Network Operates on Israeli Social Media](../../generated_pages/incidents/I00086.md) | In this report accounts were identified as part of “a sophisticated and extensive coordinated network orchestrating a disinformation campaign targeting Israeli digital spaces since October 7th, 2023”, which posted hashtags alongside campaign content (T0015: Create Hashtags and Search Artefacts):

“The accounts post generic images to fill their account feed to make the account seem real. They then employ a hidden hashtag in their posts, consisting of a seemingly random string of numbers and letters.

“The hypothesis regarding this tactic is that the group orchestrating these accounts utilizes these hashtags as a means of indexing them. This system likely serves a dual purpose: firstly, to keep track of the network’s expansive network of accounts and unique posts, and secondly, to streamline the process of boosting engagement among these accounts. By searching for these specific, unique hashtags, the group can quickly locate posts from their network and engage with them using other fake accounts, thereby artificially inflating the visibility and perceived authenticity of the fake account.”
| + + +| Counters | Response types | +| -------- | -------------- | +| [C00066 Co-opt a hashtag and drown it out (hijack it back)](../../generated_pages/counters/C00066.md) | D03 | + + +# Technique T0015: Create Hashtags and Search Artefacts + +**Summary**: Create one or more hashtags and/or hashtag groups. Many incident-based campaigns will create hashtags to promote their fabricated event. Creating a hashtag for an incident can have two important effects: 1. Create a perception of reality around an event (certainly only "real" events would be discussed in a hashtag; after all, the event has a name!), and 2. Publicise the story more widely through trending lists and search behaviour. An asset is needed to direct/control/manage the "conversation" connected to launching a new incident/campaign with a new hashtag for applicable social media sites. + **Tactic**: TA06 Develop Content @@ -21,4 +67,3 @@ | [C00066 Co-opt a hashtag and drown it out (hijack it back)](../../generated_pages/counters/C00066.md) | D03 | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0016.md index a3612fc..9019fb9 100644 --- a/generated_pages/techniques/T0016.md +++ b/generated_pages/techniques/T0016.md @@ -2,6 +2,60 @@ **Summary**: Create attention grabbing headlines (outrage, doubt, humour) required to drive traffic & engagement. This is a key asset. +**Tactic**: TA05 Microtarget + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00079 Three thousand fake tanks](../../generated_pages/incidents/I00079.md) | “On January 4 [2017], however, the Donbas News International (DNI) agency, based in Donetsk, Ukraine, and (since September 2016) an official state media outlet of the unrecognized separatist Donetsk People’s Republic, ran an article under the sensational headline, “US sends 3,600 tanks against Russia — massive NATO deployment under way.” DNI is run by Finnish exile Janus Putkonen, described by the Finnish national broadcaster, YLE, as a “Finnish info warrior”, and the first foreigner to be granted a Donetsk passport.

“The equally sensational opening paragraph ran, “The NATO war preparation against Russia, ‘Operation Atlantic Resolve’, is in full swing. 2,000 US tanks will be sent in coming days from Germany to Eastern Europe, and 1,600 US tanks is deployed to storage facilities in the Netherlands. At the same time, NATO countries are sending thousands of soldiers in to Russian borders.”

“The report is based around an obvious factual error, conflating the total number of vehicles with the actual number of tanks, and therefore multiplying the actual tank force 20 times over. For context, military website globalfirepower.com puts the total US tank force at 8,848. If the DNI story had been true, it would have meant sending 40% of all the US’ main battle tanks to Europe in one go.

“Could this have been an innocent mistake? The simple answer is “no”. The journalist who penned the story had a sufficient command of the details to be able to write, later in the same article, “In January, 26 tanks, 100 other vehicles and 120 containers will be transported by train to Lithuania. Germany will send the 122nd Infantry Battalion.” Yet the same author apparently believed, in the headline and first paragraph, that every single vehicle in Atlantic Resolve is a tank. To call this an innocent mistake is simply not plausible.

“The DNI story can only realistically be considered a deliberate fake designed to caricaturize and demonize NATO, the United States and Germany (tactfully referred to in the report as having “rolled over Eastern Europe in its war of extermination 75 years ago”) by grossly overstating the number of MBTs involved.”
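The arithmetic in the quoted passage is easy to reproduce (the figures are the ones cited in the text, not new data):

```python
# Sanity-check the DFRLab figures quoted above: 3,600 claimed tanks against
# the cited 8,848-tank total US force is roughly the 40% the article mentions.
claimed_tanks = 3600    # headline figure in the DNI story
us_tank_force = 8848    # total US tank force per globalfirepower.com
print(f"{claimed_tanks / us_tank_force:.0%}")  # -> 41%
```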


This behaviour matches T0016: Create Clickbait because the person who wrote the story is shown to be aware of the fact that there were non-tank vehicles later in their story, but still chose to give the article a sensationalist headline claiming that all vehicles being sent were tanks. | + + + +| Counters | Response types | +| -------- | -------------- | +| [C00073 Inoculate populations through media literacy training](../../generated_pages/counters/C00073.md) | D02 | +| [C00076 Prohibit images in political discourse channels](../../generated_pages/counters/C00076.md) | D02 | +| [C00105 Buy more advertising than misinformation creators](../../generated_pages/counters/C00105.md) | D03 | +| [C00106 Click-bait centrist content](../../generated_pages/counters/C00106.md) | D03 | +| [C00178 Fill information voids with non-disinformation content](../../generated_pages/counters/C00178.md) | D04 | + + +# Technique T0016: Create Clickbait + +**Summary**: Create attention grabbing headlines (outrage, doubt, humour) required to drive traffic & engagement. This is a key asset. + +**Tactic**: TA05 Microtarget + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00079 Three thousand fake tanks](../../generated_pages/incidents/I00079.md) | “On January 4 [2017], however, the Donbas News International (DNI) agency, based in Donetsk, Ukraine, and (since September 2016) an official state media outlet of the unrecognized separatist Donetsk People’s Republic, ran an article under the sensational headline, “US sends 3,600 tanks against Russia — massive NATO deployment under way.” DNI is run by Finnish exile Janus Putkonen, described by the Finnish national broadcaster, YLE, as a “Finnish info warrior”, and the first foreigner to be granted a Donetsk passport.

“The equally sensational opening paragraph ran, “The NATO war preparation against Russia, ‘Operation Atlantic Resolve’, is in full swing. 2,000 US tanks will be sent in coming days from Germany to Eastern Europe, and 1,600 US tanks is deployed to storage facilities in the Netherlands. At the same time, NATO countries are sending thousands of soldiers in to Russian borders.”

“The report is based around an obvious factual error, conflating the total number of vehicles with the actual number of tanks, and therefore multiplying the actual tank force 20 times over. For context, military website globalfirepower.com puts the total US tank force at 8,848. If the DNI story had been true, it would have meant sending 40% of all the US’ main battle tanks to Europe in one go.

“Could this have been an innocent mistake? The simple answer is “no”. The journalist who penned the story had a sufficient command of the details to be able to write, later in the same article, “In January, 26 tanks, 100 other vehicles and 120 containers will be transported by train to Lithuania. Germany will send the 122nd Infantry Battalion.” Yet the same author apparently believed, in the headline and first paragraph, that every single vehicle in Atlantic Resolve is a tank. To call this an innocent mistake is simply not plausible.

“The DNI story can only realistically be considered a deliberate fake designed to caricaturize and demonize NATO, the United States and Germany (tactfully referred to in the report as having “rolled over Eastern Europe in its war of extermination 75 years ago”) by grossly overstating the number of MBTs involved.”


This behaviour matches T0016: Create Clickbait because the person who wrote the story is shown to be aware of the fact that there were non-tank vehicles later in their story, but still chose to give the article a sensationalist headline claiming that all vehicles being sent were tanks. | + + +| Counters | Response types | +| -------- | -------------- | +| [C00073 Inoculate populations through media literacy training](../../generated_pages/counters/C00073.md) | D02 | +| [C00076 Prohibit images in political discourse channels](../../generated_pages/counters/C00076.md) | D02 | +| [C00105 Buy more advertising than misinformation creators](../../generated_pages/counters/C00105.md) | D03 | +| [C00106 Click-bait centrist content](../../generated_pages/counters/C00106.md) | D03 | +| [C00178 Fill information voids with non-disinformation content](../../generated_pages/counters/C00178.md) | D04 | + + +# Technique T0016: Create Clickbait + +**Summary**: Create attention grabbing headlines (outrage, doubt, humour) required to drive traffic & engagement. This is a key asset. + **Tactic**: TA05 Microtarget @@ -25,4 +79,3 @@ | [C00178 Fill information voids with non-disinformation content](../../generated_pages/counters/C00178.md) | D04 | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0017.001.md index b5e7ce1..0707ac1 100644 --- a/generated_pages/techniques/T0017.001.md +++ b/generated_pages/techniques/T0017.001.md @@ -2,6 +2,48 @@ **Summary**: An influence operation may Conduct Crowdfunding Campaigns on platforms such as GoFundMe, GiveSendGo, Tipeee, Patreon, etc. +**Tactic**: TA10 Drive Offline Activity **Parent Technique:** T0017 Conduct Fundraising + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0017.001: Conduct Crowdfunding Campaigns + +**Summary**: An influence operation may Conduct Crowdfunding Campaigns on platforms such as GoFundMe, GiveSendGo, Tipeee, Patreon, etc. + +**Tactic**: TA10 Drive Offline Activity **Parent Technique:** T0017 Conduct Fundraising + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0017.001: Conduct Crowdfunding Campaigns + +**Summary**: An influence operation may Conduct Crowdfunding Campaigns on platforms such as GoFundMe, GiveSendGo, Tipeee, Patreon, etc. + **Tactic**: TA10 Drive Offline Activity @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0017.md index 062a439..c23c043 100644 --- a/generated_pages/techniques/T0017.md +++ b/generated_pages/techniques/T0017.md @@ -2,6 +2,54 @@ **Summary**: Fundraising campaigns refer to an influence operation’s systematic effort to seek financial support for a charity, cause, or other enterprise using online activities that further promote operation information pathways while raising a profit. Many influence operations have engaged in crowdfunding services on platforms including Tipeee, Patreon, and GoFundMe.
An operation may use its previously prepared fundraising campaigns to promote operation messaging while raising money to support its activities. +**Tactic**: TA10 Drive Offline Activity + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00110 How COVID-19 conspiracists and extremists use crowdfunding platforms to fund their activities](../../generated_pages/incidents/I00110.md) | The EU Disinfo Lab produced a report into disinformation published on crowdfunding platforms:

More worrisome is the direct monetisation of disinformation happening on crowdfunding platforms: on Kickstarter, we found a user openly raising money for a documentary project suggesting that COVID-19 is a conspiracy.

A Kickstarter user attempted to use the platform to fund production of a documentary (T0017: Conduct Fundraising, T0087: Develop Video-Based Content, T0146: Account Asset, T0148.006: Crowdfunding Platform).

On Patreon, we found several instances of direct monetisation of COVID-19 disinformation, including posts promoting a device allegedly protecting against COVID-19 and 5G, as well as posts related to the “Plandemic” conspiracy video, which gained attention on YouTube before being removed by the platform.

We also found an account called “Stranger than fiction” entirely dedicated to disinformation, which openly states that their content was “Banned by screwtube and fakebook, our videos have been viewed over a billion times.”


The “Stranger than fiction” user presented itself as an alternative news source which had been banned from other platforms (T0146: Account Asset, T0097.202: News Outlet Persona, T0121.001: Bypass Content Blocking, T0152.012: Subscription Service Platform).

On the US-based crowdfunding platform IndieGogo, EU DisinfoLab found a successful crowdfunding campaign of €133.903 for a book called Revolution Q. This book, now also available on Amazon, claims to be “Written for both newcomers and long-time QAnon fans alike, this book is a treasure-trove of information designed to help everyone weather The Storm.”

An IndieGogo account was used to gather funds to produce a book on QAnon (T0017: Conduct Fundraising, T0085.005: Develop Book, T0146: Account Asset, T0148.006: Crowdfunding Platform), with the book later sold on Amazon marketplace (T0148.007: eCommerce Platform). | + + +| Counters | Response types | +| -------- | -------------- | +| [C00058 Report crowdfunder as violator](../../generated_pages/counters/C00058.md) | D02 | +| [C00067 Denigrate the recipient/ project (of online funding)](../../generated_pages/counters/C00067.md) | D03 | + + +# Technique T0017: Conduct Fundraising + +**Summary**: Fundraising campaigns refer to an influence operation’s systematic effort to seek financial support for a charity, cause, or other enterprise using online activities that further promote operation information pathways while raising a profit. Many influence operations have engaged in crowdfunding services on platforms including Tipeee, Patreon, and GoFundMe. An operation may use its previously prepared fundraising campaigns to promote operation messaging while raising money to support its activities. + +**Tactic**: TA10 Drive Offline Activity + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00110 How COVID-19 conspiracists and extremists use crowdfunding platforms to fund their activities](../../generated_pages/incidents/I00110.md) | The EU Disinfo Lab produced a report into disinformation published on crowdfunding platforms:

More worrisome is the direct monetisation of disinformation happening on crowdfunding platforms: on Kickstarter, we found a user openly raising money for a documentary project suggesting that COVID-19 is a conspiracy.

A Kickstarter user attempted to use the platform to fund production of a documentary (T0017: Conduct Fundraising, T0087: Develop Video-Based Content, T0146: Account Asset, T0148.006: Crowdfunding Platform).

On Patreon, we found several instances of direct monetisation of COVID-19 disinformation, including posts promoting a device allegedly protecting against COVID-19 and 5G, as well as posts related to the “Plandemic” conspiracy video, which gained attention on YouTube before being removed by the platform.

We also found an account called “Stranger than fiction” entirely dedicated to disinformation, which openly states that their content was “Banned by screwtube and fakebook, our videos have been viewed over a billion times.”


The “Stranger than fiction” user presented itself as an alternative news source which had been banned from other platforms (T0146: Account Asset, T0097.202: News Outlet Persona, T0121.001: Bypass Content Blocking, T0152.012: Subscription Service Platform).<br>

On the US-based crowdfunding platform IndieGogo, EU DisinfoLab found a successful crowdfunding campaign of €133.903 for a book called Revolution Q. This book, now also available on Amazon, claims to be “Written for both newcomers and long-time QAnon fans alike, this book is a treasure-trove of information designed to help everyone weather The Storm.”

An IndieGogo account was used to gather funds to produce a book on QAnon (T0017: Conduct Fundraising, T0085.005: Develop Book, T0146: Account Asset, T0148.006: Crowdfunding Platform), with the book later sold on Amazon marketplace (T0148.007: eCommerce Platform). |
+
+
+
+| Counters | Response types |
+| -------- | -------------- |
+| [C00058 Report crowdfunder as violator](../../generated_pages/counters/C00058.md) | D02 |
+| [C00067 Denigrate the recipient/ project (of online funding)](../../generated_pages/counters/C00067.md) | D03 |
+
+
+# Technique T0017: Conduct Fundraising
+
+**Summary**: Fundraising campaigns refer to an influence operation’s systematic effort to seek financial support for a charity, cause, or other enterprise using online activities that further promote operation information pathways while raising a profit. Many influence operations have engaged in crowdfunding services on platforms including Tipeee, Patreon, and GoFundMe. An operation may use its previously prepared fundraising campaigns to promote operation messaging while raising money to support its activities.
+
 **Tactic**: TA10 Drive Offline Activity
 
 
@@ -22,4 +70,3 @@
 | [C00067 Denigrate the recipient/ project (of online funding)](../../generated_pages/counters/C00067.md) | D03 |
 
 
-DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW
\ No newline at end of file
diff --git a/generated_pages/techniques/T0018.md b/generated_pages/techniques/T0018.md
index 2830dbb..013d185 100644
--- a/generated_pages/techniques/T0018.md
+++ b/generated_pages/techniques/T0018.md
@@ -2,6 +2,52 @@
 
 **Summary**: Create or fund advertisements targeted at specific populations
 
+**Tactic**: TA05 Microtarget
+
+
+| Associated Technique | Description |
+| --------- | ------------------------- |
+
+
+
+| Incident | Descriptions given for this incident |
+| -------- | -------------------- |
+| [I00097 Report: Not Just Algorithms](../../generated_pages/incidents/I00097.md) | This report explores the role of four systems (recommender systems, content moderation systems, ad approval systems and ad management systems) in creating risks around eating disorders.<br>

[...]

Ad approval systems can create risks. We created 12 ‘fake’ ads that promoted dangerous weight loss techniques and behaviours. We tested to see if these ads would be approved to run, and they were. This means dangerous behaviours can be promoted in paid-for advertising. (Requests to run ads were withdrawn after approval or rejection, so no dangerous advertising was published as a result of this experiment.)

Specifically: On TikTok, 100% of the ads were approved to run; On Facebook, 83% of the ads were approved to run; On Google, 75% of the ads were approved to run.

Ad management systems can create risks. We investigated how platforms allow advertisers to target users, and found that it is possible to target people who may be interested in pro-eating disorder content.

Specifically: On TikTok: End-users who interact with pro-eating disorder content on TikTok, download advertisers’ eating disorder apps or visit their websites can be targeted; On Meta: End-users who interact with pro-eating disorder content on Meta, download advertisers’ eating disorder apps or visit their websites can be targeted; On X: End-users who follow pro-eating disorder accounts, or ‘look’ like them, can be targeted; On Google: End-users who search specific words or combinations of words (including pro-eating disorder words), watch pro-eating disorder YouTube channels and probably those who download eating disorder and mental health apps can be targeted.<br>


Advertising platforms managed by TikTok, Facebook, and Google approved adverts to be displayed on their platforms. These platforms enabled users to deliver targeted advertising to potentially vulnerable platform users (T0018: Purchase Targeted Advertisements, T0153.005: Online Advertising Platform). | + + + +| Counters | Response types | +| -------- | -------------- | +| [C00065 Reduce political targeting](../../generated_pages/counters/C00065.md) | D03 | + + +# Technique T0018: Purchase Targeted Advertisements + +**Summary**: Create or fund advertisements targeted at specific populations + +**Tactic**: TA05 Microtarget + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00097 Report: Not Just Algorithms](../../generated_pages/incidents/I00097.md) | This report explores the role of four systems (recommender systems, content moderation systems, ad approval systems and ad management systems) in creating risks around eating disorders.

[...]

Ad approval systems can create risks. We created 12 ‘fake’ ads that promoted dangerous weight loss techniques and behaviours. We tested to see if these ads would be approved to run, and they were. This means dangerous behaviours can be promoted in paid-for advertising. (Requests to run ads were withdrawn after approval or rejection, so no dangerous advertising was published as a result of this experiment.)

Specifically: On TikTok, 100% of the ads were approved to run; On Facebook, 83% of the ads were approved to run; On Google, 75% of the ads were approved to run.

Ad management systems can create risks. We investigated how platforms allow advertisers to target users, and found that it is possible to target people who may be interested in pro-eating disorder content.

Specifically: On TikTok: End-users who interact with pro-eating disorder content on TikTok, download advertisers’ eating disorder apps or visit their websites can be targeted; On Meta: End-users who interact with pro-eating disorder content on Meta, download advertisers’ eating disorder apps or visit their websites can be targeted; On X: End-users who follow pro-eating disorder accounts, or ‘look’ like them, can be targeted; On Google: End-users who search specific words or combinations of words (including pro-eating disorder words), watch pro-eating disorder YouTube channels and probably those who download eating disorder and mental health apps can be targeted.<br>


Advertising platforms managed by TikTok, Facebook, and Google approved adverts to be displayed on their platforms. These platforms enabled users to deliver targeted advertising to potentially vulnerable platform users (T0018: Purchase Targeted Advertisements, T0153.005: Online Advertising Platform). |
+
+
+
+| Counters | Response types |
+| -------- | -------------- |
+| [C00065 Reduce political targeting](../../generated_pages/counters/C00065.md) | D03 |
+
+
+# Technique T0018: Purchase Targeted Advertisements
+
+**Summary**: Create or fund advertisements targeted at specific populations
+
 **Tactic**: TA05 Microtarget
 
 
@@ -21,4 +67,3 @@
 | [C00065 Reduce political targeting](../../generated_pages/counters/C00065.md) | D03 |
 
 
-DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW
\ No newline at end of file
diff --git a/generated_pages/techniques/T0020.md b/generated_pages/techniques/T0020.md
index 2d33115..ac81e8d 100644
--- a/generated_pages/techniques/T0020.md
+++ b/generated_pages/techniques/T0020.md
@@ -2,6 +2,50 @@
 
 **Summary**: Iteratively test incident performance (messages, content etc), e.g. A/B test headline/content engagement metrics; website and/or funding campaign conversion rates
 
+**Tactic**: TA08 Conduct Pump Priming
+
+
+| Associated Technique | Description |
+| --------- | ------------------------- |
+
+
+
+| Incident | Descriptions given for this incident |
+| -------- | -------------------- |
+
+
+
+| Counters | Response types |
+| -------- | -------------- |
+| [C00090 Fake engagement system](../../generated_pages/counters/C00090.md) | D05 |
+
+
+# Technique T0020: Trial Content
+
+**Summary**: Iteratively test incident performance (messages, content etc), e.g. A/B test headline/content engagement metrics; website and/or funding campaign conversion rates
+
+**Tactic**: TA08 Conduct Pump Priming
+
+
+| Associated Technique | Description |
+| --------- | ------------------------- |
+
+
+
+| Incident | Descriptions given for this incident |
+| -------- | -------------------- |
+
+
+
+| Counters | Response types |
+| -------- | -------------- |
+| [C00090 Fake engagement system](../../generated_pages/counters/C00090.md) | D05 |
+
+
+# Technique T0020: Trial Content
+
+**Summary**: Iteratively test incident performance (messages, content etc), e.g. A/B test headline/content engagement metrics; website and/or funding campaign conversion rates
+
 **Tactic**: TA08 Conduct Pump Priming
 
 
@@ -20,4 +64,3 @@
 | [C00090 Fake engagement system](../../generated_pages/counters/C00090.md) | D05 |
 
 
-DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW
\ No newline at end of file
diff --git a/generated_pages/techniques/T0022.001.md b/generated_pages/techniques/T0022.001.md
index 1495435..0a49efe 100644
--- a/generated_pages/techniques/T0022.001.md
+++ b/generated_pages/techniques/T0022.001.md
@@ -2,6 +2,48 @@
 
 **Summary**: An influence operation may amplify an existing conspiracy theory narrative that aligns with its incident or campaign goals. By amplifying existing conspiracy theory narratives, operators can leverage the power of the existing communities that support and propagate those theories without needing to expend resources creating new narratives or building momentum and buy-in around new narratives.
+**Tactic**: TA14 Develop Narratives **Parent Technique:** T0022 Leverage Conspiracy Theory Narratives
+
+
+| Associated Technique | Description |
+| --------- | ------------------------- |
+
+
+
+| Incident | Descriptions given for this incident |
+| -------- | -------------------- |
+
+
+
+| Counters | Response types |
+| -------- | -------------- |
+
+
+# Technique T0022.001: Amplify Existing Conspiracy Theory Narratives
+
+**Summary**: An influence operation may amplify an existing conspiracy theory narrative that aligns with its incident or campaign goals. By amplifying existing conspiracy theory narratives, operators can leverage the power of the existing communities that support and propagate those theories without needing to expend resources creating new narratives or building momentum and buy-in around new narratives.
+
+**Tactic**: TA14 Develop Narratives **Parent Technique:** T0022 Leverage Conspiracy Theory Narratives
+
+
+| Associated Technique | Description |
+| --------- | ------------------------- |
+
+
+
+| Incident | Descriptions given for this incident |
+| -------- | -------------------- |
+
+
+
+| Counters | Response types |
+| -------- | -------------- |
+
+
+# Technique T0022.001: Amplify Existing Conspiracy Theory Narratives
+
+**Summary**: An influence operation may amplify an existing conspiracy theory narrative that aligns with its incident or campaign goals. By amplifying existing conspiracy theory narratives, operators can leverage the power of the existing communities that support and propagate those theories without needing to expend resources creating new narratives or building momentum and buy-in around new narratives.
+
 **Tactic**: TA14 Develop Narratives
 
 
@@ -19,4 +61,3 @@
 | -------- | -------------- |
 
 
-DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW
\ No newline at end of file
diff --git a/generated_pages/techniques/T0022.002.md b/generated_pages/techniques/T0022.002.md
index 40aa295..c0c69a9 100644
--- a/generated_pages/techniques/T0022.002.md
+++ b/generated_pages/techniques/T0022.002.md
@@ -2,6 +2,48 @@
 
 **Summary**: While this requires more resources than amplifying existing conspiracy theory narratives, an influence operation may develop original conspiracy theory narratives in order to achieve greater control and alignment over the narrative and their campaign goals. Prominent examples include the USSR's Operation INFEKTION disinformation campaign run by the KGB in the 1980s to plant the idea that the United States had invented HIV/AIDS as part of a biological weapons research project at Fort Detrick, Maryland. More recently, Fort Detrick featured prominently in new conspiracy theory narratives around the origins of the COVID-19 outbreak and pandemic.
 
+**Tactic**: TA14 Develop Narratives **Parent Technique:** T0022 Leverage Conspiracy Theory Narratives
+
+
+| Associated Technique | Description |
+| --------- | ------------------------- |
+
+
+
+| Incident | Descriptions given for this incident |
+| -------- | -------------------- |
+
+
+
+| Counters | Response types |
+| -------- | -------------- |
+
+
+# Technique T0022.002: Develop Original Conspiracy Theory Narratives
+
+**Summary**: While this requires more resources than amplifying existing conspiracy theory narratives, an influence operation may develop original conspiracy theory narratives in order to achieve greater control and alignment over the narrative and their campaign goals.
Prominent examples include the USSR's Operation INFEKTION disinformation campaign run by the KGB in the 1980s to plant the idea that the United States had invented HIV/AIDS as part of a biological weapons research project at Fort Detrick, Maryland. More recently, Fort Detrick featured prominently in new conspiracy theory narratives around the origins of the COVID-19 outbreak and pandemic.
+
+**Tactic**: TA14 Develop Narratives **Parent Technique:** T0022 Leverage Conspiracy Theory Narratives
+
+
+| Associated Technique | Description |
+| --------- | ------------------------- |
+
+
+
+| Incident | Descriptions given for this incident |
+| -------- | -------------------- |
+
+
+
+| Counters | Response types |
+| -------- | -------------- |
+
+
+# Technique T0022.002: Develop Original Conspiracy Theory Narratives
+
+**Summary**: While this requires more resources than amplifying existing conspiracy theory narratives, an influence operation may develop original conspiracy theory narratives in order to achieve greater control and alignment over the narrative and their campaign goals. Prominent examples include the USSR's Operation INFEKTION disinformation campaign run by the KGB in the 1980s to plant the idea that the United States had invented HIV/AIDS as part of a biological weapons research project at Fort Detrick, Maryland. More recently, Fort Detrick featured prominently in new conspiracy theory narratives around the origins of the COVID-19 outbreak and pandemic.
+
 **Tactic**: TA14 Develop Narratives
 
 
@@ -19,4 +61,3 @@
 | -------- | -------------- |
 
 
-DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW
\ No newline at end of file
diff --git a/generated_pages/techniques/T0022.md b/generated_pages/techniques/T0022.md
index 5176ee4..f5cacc5 100644
--- a/generated_pages/techniques/T0022.md
+++ b/generated_pages/techniques/T0022.md
@@ -2,6 +2,58 @@
 
 **Summary**: "Conspiracy narratives" appeal to the human desire for explanatory order, by invoking the participation of powerful (often sinister) actors in pursuit of their own political goals. These narratives are especially appealing when an audience is low-information, marginalised or otherwise inclined to reject the prevailing explanation. Conspiracy narratives are an important component of the "firehose of falsehoods" model.
 
+**Tactic**: TA14 Develop Narratives
+
+
+| Associated Technique | Description |
+| --------- | ------------------------- |
+
+
+
+| Incident | Descriptions given for this incident |
+| -------- | -------------------- |
+
+
+
+| Counters | Response types |
+| -------- | -------------- |
+| [C00096 Strengthen institutions that are always truth tellers](../../generated_pages/counters/C00096.md) | D07 |
+| [C00119 Engage payload and debunk.](../../generated_pages/counters/C00119.md) | D07 |
+| [C00156 Better tell your country or organisation story](../../generated_pages/counters/C00156.md) | D03 |
+| [C00161 Coalition Building with stakeholders and Third-Party Inducements](../../generated_pages/counters/C00161.md) | D07 |
+| [C00164 compatriot policy](../../generated_pages/counters/C00164.md) | D03 |
+
+
+# Technique T0022: Leverage Conspiracy Theory Narratives
+
+**Summary**: "Conspiracy narratives" appeal to the human desire for explanatory order, by invoking the participation of powerful (often sinister) actors in pursuit of their own political goals. These narratives are especially appealing when an audience is low-information, marginalised or otherwise inclined to reject the prevailing explanation.
Conspiracy narratives are an important component of the "firehose of falsehoods" model.
+
+**Tactic**: TA14 Develop Narratives
+
+
+| Associated Technique | Description |
+| --------- | ------------------------- |
+
+
+
+| Incident | Descriptions given for this incident |
+| -------- | -------------------- |
+
+
+
+| Counters | Response types |
+| -------- | -------------- |
+| [C00096 Strengthen institutions that are always truth tellers](../../generated_pages/counters/C00096.md) | D07 |
+| [C00119 Engage payload and debunk.](../../generated_pages/counters/C00119.md) | D07 |
+| [C00156 Better tell your country or organisation story](../../generated_pages/counters/C00156.md) | D03 |
+| [C00161 Coalition Building with stakeholders and Third-Party Inducements](../../generated_pages/counters/C00161.md) | D07 |
+| [C00164 compatriot policy](../../generated_pages/counters/C00164.md) | D03 |
+
+
+# Technique T0022: Leverage Conspiracy Theory Narratives
+
+**Summary**: "Conspiracy narratives" appeal to the human desire for explanatory order, by invoking the participation of powerful (often sinister) actors in pursuit of their own political goals. These narratives are especially appealing when an audience is low-information, marginalised or otherwise inclined to reject the prevailing explanation. Conspiracy narratives are an important component of the "firehose of falsehoods" model.
+
 **Tactic**: TA14 Develop Narratives
 
 
@@ -24,4 +76,3 @@
 | [C00164 compatriot policy](../../generated_pages/counters/C00164.md) | D03 |
 
 
-DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW
\ No newline at end of file
diff --git a/generated_pages/techniques/T0023.001.md b/generated_pages/techniques/T0023.001.md
index 38187b5..b6ec4c7 100644
--- a/generated_pages/techniques/T0023.001.md
+++ b/generated_pages/techniques/T0023.001.md
@@ -2,6 +2,48 @@
 
 **Summary**: Reframing context refers to removing an event from its surrounding context to distort its intended meaning. Rather than deny that an event occurred, reframing context frames an event in a manner that may lead the target audience to draw a different conclusion about its intentions.
 
+**Tactic**: TA06 Develop Content **Parent Technique:** T0023 Distort Facts
+
+
+| Associated Technique | Description |
+| --------- | ------------------------- |
+
+
+
+| Incident | Descriptions given for this incident |
+| -------- | -------------------- |
+
+
+
+| Counters | Response types |
+| -------- | -------------- |
+
+
+# Technique T0023.001: Reframe Context
+
+**Summary**: Reframing context refers to removing an event from its surrounding context to distort its intended meaning. Rather than deny that an event occurred, reframing context frames an event in a manner that may lead the target audience to draw a different conclusion about its intentions.
+
+**Tactic**: TA06 Develop Content **Parent Technique:** T0023 Distort Facts
+
+
+| Associated Technique | Description |
+| --------- | ------------------------- |
+
+
+
+| Incident | Descriptions given for this incident |
+| -------- | -------------------- |
+
+
+
+| Counters | Response types |
+| -------- | -------------- |
+
+
+# Technique T0023.001: Reframe Context
+
+**Summary**: Reframing context refers to removing an event from its surrounding context to distort its intended meaning. Rather than deny that an event occurred, reframing context frames an event in a manner that may lead the target audience to draw a different conclusion about its intentions.
+ **Tactic**: TA06 Develop Content @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0023.002.md b/generated_pages/techniques/T0023.002.md index a16ab48..c7b629e 100644 --- a/generated_pages/techniques/T0023.002.md +++ b/generated_pages/techniques/T0023.002.md @@ -2,6 +2,48 @@ **Summary**: An influence operation may edit open-source content, such as collaborative blogs or encyclopaedias, to promote its narratives on outlets with existing credibility and audiences. Editing open-source content may allow an operation to post content on platforms without dedicating resources to the creation and maintenance of its own assets. +**Tactic**: TA06 Develop Content **Parent Technique:** T0023 Distort Facts + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0023.002: Edit Open-Source Content + +**Summary**: An influence operation may edit open-source content, such as collaborative blogs or encyclopaedias, to promote its narratives on outlets with existing credibility and audiences. Editing open-source content may allow an operation to post content on platforms without dedicating resources to the creation and maintenance of its own assets. + +**Tactic**: TA06 Develop Content **Parent Technique:** T0023 Distort Facts + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0023.002: Edit Open-Source Content + +**Summary**: An influence operation may edit open-source content, such as collaborative blogs or encyclopaedias, to promote its narratives on outlets with existing credibility and audiences. Editing open-source content may allow an operation to post content on platforms without dedicating resources to the creation and maintenance of its own assets. + **Tactic**: TA06 Develop Content @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0023.md b/generated_pages/techniques/T0023.md index de4aec2..3a72984 100644 --- a/generated_pages/techniques/T0023.md +++ b/generated_pages/techniques/T0023.md @@ -2,6 +2,50 @@ **Summary**: Change, twist, or exaggerate existing facts to construct a narrative that differs from reality. Examples: images and ideas can be distorted by being placed in an improper content +**Tactic**: TA06 Develop Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00079 Three thousand fake tanks](../../generated_pages/incidents/I00079.md) | “On January 4 [2017], a little-known news site based in Donetsk, Ukraine published an article claiming that the United States was sending 3,600 tanks to Europe as part of “the NATO war preparation against Russia”.

“Like much fake news, this story started with a grain of truth: the US was about to reinforce its armored units in Europe. However, the article converted literally thousands of other vehicles — including hundreds of Humvees and trailers — into tanks, building the US force into something 20 times more powerful than it actually was.

“The story caught on online. Within three days it had been repeated by a dozen websites in the United States, Canada and Europe, and shared some 40,000 times. It was translated into Norwegian; quoted, unchallenged, by Russian state news agency RIA Novosti; and spread among Russian-language websites.

“It was also an obvious fake, as any Google news search would have revealed. Yet despite its evident falsehood, it spread widely, and not just in directly Kremlin-run media. Tracking the spread of this fake therefore shines a light on the wider question of how fake stories are dispersed.”


Russian state news agency RIA Novosti presents itself as a news outlet (T0097.202: News Outlet Persona). RIA Novosti is a real news outlet (T0143.001: Authentic Persona), but it did not carry out the basic investigation into the veracity of the narrative it published that is implicitly expected of institutions presenting themselves as news outlets.<br>

We can’t know how or why this narrative ended up being published by RIA Novosti, but we know that it presented a distorted reality as authentic information (T0023: Distort Facts), claiming that the US was sending 3,600 tanks, instead of 3,600 vehicles which included ~180 tanks. |
+
+
+
+| Counters | Response types |
+| -------- | -------------- |
+
+
+# Technique T0023: Distort Facts
+
+**Summary**: Change, twist, or exaggerate existing facts to construct a narrative that differs from reality. Examples: images and ideas can be distorted by being placed in an improper context
+
+**Tactic**: TA06 Develop Content
+
+
+| Associated Technique | Description |
+| --------- | ------------------------- |
+
+
+
+| Incident | Descriptions given for this incident |
+| -------- | -------------------- |
+| [I00079 Three thousand fake tanks](../../generated_pages/incidents/I00079.md) | “On January 4 [2017], a little-known news site based in Donetsk, Ukraine published an article claiming that the United States was sending 3,600 tanks to Europe as part of “the NATO war preparation against Russia”.<br>

“Like much fake news, this story started with a grain of truth: the US was about to reinforce its armored units in Europe. However, the article converted literally thousands of other vehicles — including hundreds of Humvees and trailers — into tanks, building the US force into something 20 times more powerful than it actually was.

“The story caught on online. Within three days it had been repeated by a dozen websites in the United States, Canada and Europe, and shared some 40,000 times. It was translated into Norwegian; quoted, unchallenged, by Russian state news agency RIA Novosti; and spread among Russian-language websites.

“It was also an obvious fake, as any Google news search would have revealed. Yet despite its evident falsehood, it spread widely, and not just in directly Kremlin-run media. Tracking the spread of this fake therefore shines a light on the wider question of how fake stories are dispersed.”


Russian state news agency RIA Novosti presents itself as a news outlet (T0097.202: News Outlet Persona). RIA Novosti is a real news outlet (T0143.001: Authentic Persona), but it did not carry out the basic investigation into the veracity of the narrative it published that is implicitly expected of institutions presenting themselves as news outlets.<br>

We can’t know how or why this narrative ended up being published by RIA Novosti, but we know that it presented a distorted reality as authentic information (T0023: Distort Facts), claiming that the US was sending 3,600 tanks, instead of 3,600 vehicles which included ~180 tanks. |
+
+
+
+| Counters | Response types |
+| -------- | -------------- |
+
+
+# Technique T0023: Distort Facts
+
+**Summary**: Change, twist, or exaggerate existing facts to construct a narrative that differs from reality. Examples: images and ideas can be distorted by being placed in an improper context
+
 **Tactic**: TA06 Develop Content
 
 
@@ -20,4 +64,3 @@
 | -------- | -------------- |
 
 
-DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW
\ No newline at end of file
diff --git a/generated_pages/techniques/T0029.md b/generated_pages/techniques/T0029.md
index d72b343..95fd674 100644
--- a/generated_pages/techniques/T0029.md
+++ b/generated_pages/techniques/T0029.md
@@ -2,6 +2,58 @@
 
 **Summary**: Create fake online polls, or manipulate existing online polls. Data gathering tactic to target those who engage, and potentially their networks of friends/followers as well
 
+**Tactic**: TA07 Select Channels and Affordances
+
+
+| Associated Technique | Description |
+| --------- | ------------------------- |
+
+
+
+| Incident | Descriptions given for this incident |
+| -------- | -------------------- |
+
+
+
+| Counters | Response types |
+| -------- | -------------- |
+| [C00044 Keep people from posting to social media immediately](../../generated_pages/counters/C00044.md) | D03 |
+| [C00097 Require use of verified identities to contribute to poll or comment](../../generated_pages/counters/C00097.md) | D02 |
+| [C00101 Create friction by rate-limiting engagement](../../generated_pages/counters/C00101.md) | D04 |
+| [C00103 Create a bot that engages / distract trolls](../../generated_pages/counters/C00103.md) | D05 |
+| [C00123 Remove or rate limit botnets](../../generated_pages/counters/C00123.md) | D03 |
+
+
+# Technique T0029: Online Polls
+
+**Summary**: Create fake online polls, or manipulate existing online polls. Data gathering tactic to target those who engage, and potentially their networks of friends/followers as well
+
+**Tactic**: TA07 Select Channels and Affordances
+
+
+| Associated Technique | Description |
+| --------- | ------------------------- |
+
+
+
+| Incident | Descriptions given for this incident |
+| -------- | -------------------- |
+
+
+
+| Counters | Response types |
+| -------- | -------------- |
+| [C00044 Keep people from posting to social media immediately](../../generated_pages/counters/C00044.md) | D03 |
+| [C00097 Require use of verified identities to contribute to poll or comment](../../generated_pages/counters/C00097.md) | D02 |
+| [C00101 Create friction by rate-limiting engagement](../../generated_pages/counters/C00101.md) | D04 |
+| [C00103 Create a bot that engages / distract trolls](../../generated_pages/counters/C00103.md) | D05 |
+| [C00123 Remove or rate limit botnets](../../generated_pages/counters/C00123.md) | D03 |
+
+
+# Technique T0029: Online Polls
+
+**Summary**: Create fake online polls, or manipulate existing online polls. 
Data gathering tactic to target those who engage, and potentially their networks of friends/followers as well + **Tactic**: TA07 Select Channels and Affordances @@ -24,4 +76,3 @@ | [C00123 Remove or rate limit botnets](../../generated_pages/counters/C00123.md) | D03 | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0039.md b/generated_pages/techniques/T0039.md index 2ebf287..0311597 100644 --- a/generated_pages/techniques/T0039.md +++ b/generated_pages/techniques/T0039.md @@ -2,6 +2,56 @@ **Summary**: Influencers are people on social media platforms who have large audiences.

Threat Actors can try to trick Influencers such as celebrities, journalists, or local leaders who aren’t associated with their campaign into amplifying campaign content. This gives them access to the Influencer’s audience without having to go through the effort of building it themselves, and it helps legitimise their message by associating it with the Influencer, benefitting from their audience’s trust in them. +**Tactic**: TA17 Maximise Exposure + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | +| [C00087 Make more noise than the disinformation](../../generated_pages/counters/C00087.md) | D04 | +| [C00114 Don't engage with payloads](../../generated_pages/counters/C00114.md) | D02 | +| [C00154 Ask media not to report false information](../../generated_pages/counters/C00154.md) | D02 | +| [C00160 find and train influencers](../../generated_pages/counters/C00160.md) | D02 | + + +# Technique T0039: Bait Influencer + +**Summary**: Influencers are people on social media platforms who have large audiences.

Threat Actors can try to trick Influencers such as celebrities, journalists, or local leaders who aren’t associated with their campaign into amplifying campaign content. This gives them access to the Influencer’s audience without having to go through the effort of building it themselves, and it helps legitimise their message by associating it with the Influencer, benefitting from their audience’s trust in them. + +**Tactic**: TA17 Maximise Exposure + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | +| [C00087 Make more noise than the disinformation](../../generated_pages/counters/C00087.md) | D04 | +| [C00114 Don't engage with payloads](../../generated_pages/counters/C00114.md) | D02 | +| [C00154 Ask media not to report false information](../../generated_pages/counters/C00154.md) | D02 | +| [C00160 find and train influencers](../../generated_pages/counters/C00160.md) | D02 | + + +# Technique T0039: Bait Influencer + +**Summary**: Influencers are people on social media platforms who have large audiences.

Threat Actors can try to trick Influencers such as celebrities, journalists, or local leaders who aren’t associated with their campaign into amplifying campaign content. This gives them access to the Influencer’s audience without having to go through the effort of building it themselves, and it helps legitimise their message by associating it with the Influencer, benefitting from their audience’s trust in them. + **Tactic**: TA17 Maximise Exposure @@ -23,4 +73,3 @@ | [C00160 find and train influencers](../../generated_pages/counters/C00160.md) | D02 | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0040.md b/generated_pages/techniques/T0040.md index 20b67c2..52fb5ad 100644 --- a/generated_pages/techniques/T0040.md +++ b/generated_pages/techniques/T0040.md @@ -2,6 +2,50 @@ **Summary**: Campaigns often leverage tactical and informational asymmetries on the threat surface, as seen in the Distort and Deny strategies, and the "firehose of misinformation". Specifically, conspiracy theorists can be repeatedly wrong, but advocates of the truth need to be perfect. By constantly escalating demands for proof, propagandists can effectively leverage this asymmetry while also priming its future use, often with an even greater asymmetric advantage. The conspiracist is offered freer rein for a broader range of "questions" while the truth teller is burdened with higher and higher standards of proof. +**Tactic**: TA14 Develop Narratives + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | +| [C00112 "Prove they are not an op!"](../../generated_pages/counters/C00112.md) | D02 | + + +# Technique T0040: Demand Insurmountable Proof + +**Summary**: Campaigns often leverage tactical and informational asymmetries on the threat surface, as seen in the Distort and Deny strategies, and the "firehose of misinformation". Specifically, conspiracy theorists can be repeatedly wrong, but advocates of the truth need to be perfect. By constantly escalating demands for proof, propagandists can effectively leverage this asymmetry while also priming its future use, often with an even greater asymmetric advantage. The conspiracist is offered freer rein for a broader range of "questions" while the truth teller is burdened with higher and higher standards of proof. + +**Tactic**: TA14 Develop Narratives + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | +| [C00112 "Prove they are not an op!"](../../generated_pages/counters/C00112.md) | D02 | + + +# Technique T0040: Demand Insurmountable Proof + +**Summary**: Campaigns often leverage tactical and informational asymmetries on the threat surface, as seen in the Distort and Deny strategies, and the "firehose of misinformation". Specifically, conspiracy theorists can be repeatedly wrong, but advocates of the truth need to be perfect. By constantly escalating demands for proof, propagandists can effectively leverage this asymmetry while also priming its future use, often with an even greater asymmetric advantage. 
The conspiracist is offered freer rein for a broader range of "questions" while the truth teller is burdened with higher and higher standards of proof. + **Tactic**: TA14 Develop Narratives @@ -20,4 +64,3 @@ | [C00112 "Prove they are not an op!"](../../generated_pages/counters/C00112.md) | D02 | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0042.md b/generated_pages/techniques/T0042.md index 7e09d9e..8e75b92 100644 --- a/generated_pages/techniques/T0042.md +++ b/generated_pages/techniques/T0042.md @@ -2,6 +2,48 @@ **Summary**: Wrap lies or altered context/facts around truths. Influence campaigns pursue a variety of objectives with respect to target audiences, prominent among them: 1. undermine a narrative commonly referenced in the target audience; or 2. promote a narrative less common in the target audience, but preferred by the attacker. In both cases, the attacker is presented with a heavy lift. They must change the relative importance of various narratives in the interpretation of events, despite contrary tendencies. When messaging makes use of factual reporting to promote these adjustments in the narrative space, they are less likely to be dismissed out of hand; when messaging can juxtapose a (factual) truth about current affairs with the (abstract) truth explicated in these narratives, propagandists can undermine or promote them selectively. Context matters. +**Tactic**: TA08 Conduct Pump Priming + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0042: Seed Kernel of Truth + +**Summary**: Wrap lies or altered context/facts around truths. Influence campaigns pursue a variety of objectives with respect to target audiences, prominent among them: 1. undermine a narrative commonly referenced in the target audience; or 2. promote a narrative less common in the target audience, but preferred by the attacker. In both cases, the attacker is presented with a heavy lift. They must change the relative importance of various narratives in the interpretation of events, despite contrary tendencies. When messaging makes use of factual reporting to promote these adjustments in the narrative space, they are less likely to be dismissed out of hand; when messaging can juxtapose a (factual) truth about current affairs with the (abstract) truth explicated in these narratives, propagandists can undermine or promote them selectively. Context matters. + +**Tactic**: TA08 Conduct Pump Priming + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0042: Seed Kernel of Truth + +**Summary**: Wrap lies or altered context/facts around truths. Influence campaigns pursue a variety of objectives with respect to target audiences, prominent among them: 1. undermine a narrative commonly referenced in the target audience; or 2. promote a narrative less common in the target audience, but preferred by the attacker. In both cases, the attacker is presented with a heavy lift. They must change the relative importance of various narratives in the interpretation of events, despite contrary tendencies. 
When messaging makes use of factual reporting to promote these adjustments in the narrative space, they are less likely to be dismissed out of hand; when messaging can juxtapose a (factual) truth about current affairs with the (abstract) truth explicated in these narratives, propagandists can undermine or promote them selectively. Context matters.
+
 **Tactic**: TA08 Conduct Pump Priming
 
 
@@ -19,4 +61,3 @@
 | -------- | -------------- |
 
 
-DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW
\ No newline at end of file
diff --git a/generated_pages/techniques/T0044.md b/generated_pages/techniques/T0044.md
index 1b1154c..33ee262 100644
--- a/generated_pages/techniques/T0044.md
+++ b/generated_pages/techniques/T0044.md
@@ -2,6 +2,52 @@
 
 **Summary**: Try a wide variety of messages in the early hours surrounding an incident or event, to give a misleading account or impression.
 
+**Tactic**: TA08 Conduct Pump Priming
+
+
+| Associated Technique | Description |
+| --------- | ------------------------- |
+
+
+
+| Incident | Descriptions given for this incident |
+| -------- | -------------------- |
+
+
+
+| Counters | Response types |
+| -------- | -------------- |
+| [C00086 Distract from noise with addictive content](../../generated_pages/counters/C00086.md) | D04 |
+| [C00118 Repurpose images with new text](../../generated_pages/counters/C00118.md) | D04 |
+
+
+# Technique T0044: Seed Distortions
+
+**Summary**: Try a wide variety of messages in the early hours surrounding an incident or event, to give a misleading account or impression.
+
+**Tactic**: TA08 Conduct Pump Priming
+
+
+| Associated Technique | Description |
+| --------- | ------------------------- |
+
+
+
+| Incident | Descriptions given for this incident |
+| -------- | -------------------- |
+
+
+
+| Counters | Response types |
+| -------- | -------------- |
+| [C00086 Distract from noise with addictive content](../../generated_pages/counters/C00086.md) | D04 |
+| [C00118 Repurpose images with new text](../../generated_pages/counters/C00118.md) | D04 |
+
+
+# Technique T0044: Seed Distortions
+
+**Summary**: Try a wide variety of messages in the early hours surrounding an incident or event, to give a misleading account or impression.
+
 **Tactic**: TA08 Conduct Pump Priming
 
 
@@ -21,4 +67,3 @@
 | [C00118 Repurpose images with new text](../../generated_pages/counters/C00118.md) | D04 |
 
 
-DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW
\ No newline at end of file
diff --git a/generated_pages/techniques/T0045.md b/generated_pages/techniques/T0045.md
index ad1c4f8..1e07b5f 100644
--- a/generated_pages/techniques/T0045.md
+++ b/generated_pages/techniques/T0045.md
@@ -2,6 +2,52 @@
 
 **Summary**: Use the fake experts that were set up during Establish Legitimacy. Pseudo-experts are disposable assets that often appear once and then disappear. Give "credibility" to misinformation. Take advantage of credential bias
 
+**Tactic**: TA08 Conduct Pump Priming
+
+
+| Associated Technique | Description |
+| --------- | ------------------------- |
+
+
+
+| Incident | Descriptions given for this incident |
+| -------- | -------------------- |
+
+
+
+| Counters | Response types |
+| -------- | -------------- |
+| [C00113 Debunk and defuse a fake expert / credentials.](../../generated_pages/counters/C00113.md) | D02 |
+| [C00184 Media exposure](../../generated_pages/counters/C00184.md) | D04 |
+
+
+# Technique T0045: Use Fake Experts
+
+**Summary**: Use the fake experts that were set up during Establish Legitimacy. 
Pseudo-experts are disposable assets that often appear once and then disappear. Give "credibility" to misinformation. Take advantage of credential bias
+
+**Tactic**: TA08 Conduct Pump Priming
+
+
+| Associated Technique | Description |
+| --------- | ------------------------- |
+
+
+
+| Incident | Descriptions given for this incident |
+| -------- | -------------------- |
+
+
+
+| Counters | Response types |
+| -------- | -------------- |
+| [C00113 Debunk and defuse a fake expert / credentials.](../../generated_pages/counters/C00113.md) | D02 |
+| [C00184 Media exposure](../../generated_pages/counters/C00184.md) | D04 |
+
+
+# Technique T0045: Use Fake Experts
+
+**Summary**: Use the fake experts that were set up during Establish Legitimacy. Pseudo-experts are disposable assets that often appear once and then disappear. Give "credibility" to misinformation. Take advantage of credential bias
+
 **Tactic**: TA08 Conduct Pump Priming
 
 
@@ -21,4 +67,3 @@
 | [C00184 Media exposure](../../generated_pages/counters/C00184.md) | D04 |
 
 
-DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW
\ No newline at end of file
diff --git a/generated_pages/techniques/T0046.md b/generated_pages/techniques/T0046.md
index 3146331..ad045ac 100644
--- a/generated_pages/techniques/T0046.md
+++ b/generated_pages/techniques/T0046.md
@@ -2,6 +2,50 @@
 
 **Summary**: Manipulate content engagement metrics (i.e. Reddit & Twitter) to influence/impact news search results (e.g. Google), also elevates RT & Sputnik headlines into Google news alert emails. aka "Black-hat SEO"
 
+**Tactic**: TA08 Conduct Pump Priming
+
+
+| Associated Technique | Description |
+| --------- | ------------------------- |
+
+
+
+| Incident | Descriptions given for this incident |
+| -------- | -------------------- |
+
+
+
+| Counters | Response types |
+| -------- | -------------- |
+| [C00117 Downgrade / de-amplify so message is seen by fewer people](../../generated_pages/counters/C00117.md) | D04 |
+
+
+# Technique T0046: Use Search Engine Optimisation
+
+**Summary**: Manipulate content engagement metrics (i.e. Reddit & Twitter) to influence/impact news search results (e.g. Google), also elevates RT & Sputnik headlines into Google news alert emails. aka "Black-hat SEO"
+
+**Tactic**: TA08 Conduct Pump Priming
+
+
+| Associated Technique | Description |
+| --------- | ------------------------- |
+
+
+
+| Incident | Descriptions given for this incident |
+| -------- | -------------------- |
+
+
+
+| Counters | Response types |
+| -------- | -------------- |
+| [C00117 Downgrade / de-amplify so message is seen by fewer people](../../generated_pages/counters/C00117.md) | D04 |
+
+
+# Technique T0046: Use Search Engine Optimisation
+
+**Summary**: Manipulate content engagement metrics (i.e. Reddit & Twitter) to influence/impact news search results (e.g. Google), also elevates RT & Sputnik headlines into Google news alert emails. aka "Black-hat SEO"
+
 **Tactic**: TA08 Conduct Pump Priming
 
 
@@ -20,4 +64,3 @@
 | [C00117 Downgrade / de-amplify so message is seen by fewer people](../../generated_pages/counters/C00117.md) | D04 |
 
 
-DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW
\ No newline at end of file
diff --git a/generated_pages/techniques/T0047.md b/generated_pages/techniques/T0047.md
index 86b6e20..10181ef 100644
--- a/generated_pages/techniques/T0047.md
+++ b/generated_pages/techniques/T0047.md
@@ -2,6 +2,50 @@
 
 **Summary**: Use political influence or the power of state to stop critical social media comments. 
Government requested/driven content takedowns (see Google Transparency reports).
+
+**Tactic**: TA18 Drive Online Harms
+
+
+| Associated Technique | Description |
+| --------- | ------------------------- |
+
+
+
+| Incident | Descriptions given for this incident |
+| -------- | -------------------- |
+
+
+
+| Counters | Response types |
+| -------- | -------------- |
+| [C00120 Open dialogue about design of platforms to produce different outcomes](../../generated_pages/counters/C00120.md) | D07 |
+
+
+# Technique T0047: Censor Social Media as a Political Force
+
+**Summary**: Use political influence or the power of state to stop critical social media comments. Government requested/driven content takedowns (see Google Transparency reports).
+
+**Tactic**: TA18 Drive Online Harms
+
+
+| Associated Technique | Description |
+| --------- | ------------------------- |
+
+
+
+| Incident | Descriptions given for this incident |
+| -------- | -------------------- |
+
+
+
+| Counters | Response types |
+| -------- | -------------- |
+| [C00120 Open dialogue about design of platforms to produce different outcomes](../../generated_pages/counters/C00120.md) | D07 |
+
+
+# Technique T0047: Censor Social Media as a Political Force
+
+**Summary**: Use political influence or the power of state to stop critical social media comments. Government requested/driven content takedowns (see Google Transparency reports).
+
 **Tactic**: TA18 Drive Online Harms
 
 
@@ -20,4 +64,3 @@
 | [C00120 Open dialogue about design of platforms to produce different outcomes](../../generated_pages/counters/C00120.md) | D07 |
 
 
-DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW
\ No newline at end of file
diff --git a/generated_pages/techniques/T0048.001.md b/generated_pages/techniques/T0048.001.md
index a6d0892..2c73268 100644
--- a/generated_pages/techniques/T0048.001.md
+++ b/generated_pages/techniques/T0048.001.md
@@ -2,6 +2,48 @@
 
 **Summary**: Cancel culture refers to the phenomenon in which individuals collectively refrain from supporting an individual, organisation, business, or other entity, usually following a real or falsified controversy. An influence operation may exploit cancel culture by emphasising an adversary’s problematic or disputed behaviour and presenting its own content as an alternative.
+ +**Tactic**: TA18 Drive Online Harms **Parent Technique:** T0048 Harass + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0048.001: Boycott/"Cancel" Opponents + +**Summary**: Cancel culture refers to the phenomenon in which individuals collectively refrain from supporting an individual, organisation, business, or other entity, usually following a real or falsified controversy. An influence operation may exploit cancel culture by emphasising an adversary’s problematic or disputed behaviour and presenting its own content as an alternative. + **Tactic**: TA18 Drive Online Harms @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0048.002.md b/generated_pages/techniques/T0048.002.md index 5266a07..d1422d5 100644 --- a/generated_pages/techniques/T0048.002.md +++ b/generated_pages/techniques/T0048.002.md @@ -2,6 +2,48 @@ **Summary**: Examples include social identities like gender, sexuality, race, ethnicity, religion, ability, nationality, etc. as well as roles and occupations like journalist or activist. +**Tactic**: TA18 Drive Online Harms **Parent Technique:** T0048 Harass + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0048.002: Harass People Based on Identities + +**Summary**: Examples include social identities like gender, sexuality, race, ethnicity, religion, ability, nationality, etc. as well as roles and occupations like journalist or activist. + +**Tactic**: TA18 Drive Online Harms **Parent Technique:** T0048 Harass + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0048.002: Harass People Based on Identities + +**Summary**: Examples include social identities like gender, sexuality, race, ethnicity, religion, ability, nationality, etc. as well as roles and occupations like journalist or activist. + **Tactic**: TA18 Drive Online Harms @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0048.003.md b/generated_pages/techniques/T0048.003.md index 38ea81f..5354aaf 100644 --- a/generated_pages/techniques/T0048.003.md +++ b/generated_pages/techniques/T0048.003.md @@ -2,6 +2,48 @@ **Summary**: Doxing refers to online harassment in which individuals publicly release private information about another individual, including names, addresses, employment information, pictures, family members, and other sensitive information. An influence operation may dox its opposition to encourage individuals aligned with operation narratives to harass the doxed individuals themselves or otherwise discourage the doxed individuals from posting or proliferating conflicting content. 
+**Tactic**: TA18 Drive Online Harms **Parent Technique:** T0048 Harass + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0048.003: Threaten to Dox + +**Summary**: Doxing refers to online harassment in which individuals publicly release private information about another individual, including names, addresses, employment information, pictures, family members, and other sensitive information. An influence operation may dox its opposition to encourage individuals aligned with operation narratives to harass the doxed individuals themselves or otherwise discourage the doxed individuals from posting or proliferating conflicting content. + +**Tactic**: TA18 Drive Online Harms **Parent Technique:** T0048 Harass + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0048.003: Threaten to Dox + +**Summary**: Doxing refers to online harassment in which individuals publicly release private information about another individual, including names, addresses, employment information, pictures, family members, and other sensitive information. An influence operation may dox its opposition to encourage individuals aligned with operation narratives to harass the doxed individuals themselves or otherwise discourage the doxed individuals from posting or proliferating conflicting content. + **Tactic**: TA18 Drive Online Harms @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0048.004.md b/generated_pages/techniques/T0048.004.md index f588cc3..a9a136e 100644 --- a/generated_pages/techniques/T0048.004.md +++ b/generated_pages/techniques/T0048.004.md @@ -2,6 +2,48 @@ **Summary**: Doxing refers to online harassment in which individuals publicly release private information about another individual, including names, addresses, employment information, pictures, family members, and other sensitive information. An influence operation may dox its opposition to encourage individuals aligned with operation narratives to harass the doxed individuals themselves or otherwise discourage the doxed individuals from posting or proliferating conflicting content. +**Tactic**: TA18 Drive Online Harms **Parent Technique:** T0048 Harass + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0048.004: Dox + +**Summary**: Doxing refers to online harassment in which individuals publicly release private information about another individual, including names, addresses, employment information, pictures, family members, and other sensitive information. An influence operation may dox its opposition to encourage individuals aligned with operation narratives to harass the doxed individuals themselves or otherwise discourage the doxed individuals from posting or proliferating conflicting content. 
+ +**Tactic**: TA18 Drive Online Harms **Parent Technique:** T0048 Harass + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0048.004: Dox + +**Summary**: Doxing refers to online harassment in which individuals publicly release private information about another individual, including names, addresses, employment information, pictures, family members, and other sensitive information. An influence operation may dox its opposition to encourage individuals aligned with operation narratives to harass the doxed individuals themselves or otherwise discourage the doxed individuals from posting or proliferating conflicting content. + **Tactic**: TA18 Drive Online Harms @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0048.md b/generated_pages/techniques/T0048.md index dda2f8f..9986d31 100644 --- a/generated_pages/techniques/T0048.md +++ b/generated_pages/techniques/T0048.md @@ -2,6 +2,51 @@ **Summary**: Threatening or harassing believers of opposing narratives refers to the use of intimidation techniques, including cyberbullying and doxing, to discourage opponents from voicing their dissent. An influence operation may threaten or harass believers of the opposing narratives to deter individuals from posting or proliferating conflicting content. +**Tactic**: TA18 Drive Online Harms + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00122 The Extreme Right on Discord](../../generated_pages/incidents/I00122.md) | Discord is an example of a T0151.004: Chat Platform, which allows users to create their own T0151.005: Chat Community Server. The Institute for Strategic Dialog (ISD) conducted an investigation into the extreme right’s usage of Discord servers:

Discord is a free service accessible via phones and computers. It allows users to talk to each other in real time via voice, text or video chat and emerged in 2015 as a platform designed to assist gamers in communicating with each other while playing video games. The popularity of the platform has surged in recent years, and it is currently estimated to have 140 million monthly active users.

Chatrooms – known as servers - in the platform can be created by anyone, and they are used for a range of purposes that extend far beyond gaming. Such purposes include the discussion of extreme right-wing ideologies and the planning of offline extremist activity. Ahead of the far-right Unite the Right rally in Charlottesville, Virginia, in August 2017, organisers used Discord to plan and promote events and posted swastikas and praised Hitler in chat rooms with names like “National Socialist Army” and “Führer’s Gas Chamber”.


In this example, a Discord server was used to organise the 2017 Charlottesville Unite the Right rally. Chat rooms in the server were used to discuss different topics related to the rally (T0057: Organise Events, T0126.002: Facilitate Logistics or Support for Attendance, T0151.004: Chat Platform, T0151.005: Chat Community Server, T0151.006: Chat Room).

Another primary activity engaged in the servers analysed are raids against other servers associated with political opponents, and in particular those that appear to be pro-LGBTQ. Raids are a phenomenon in which a small group of users will join a Discord server with the sole purpose of spamming the host with offensive or incendiary messages and content with the aim of upsetting local users or having the host server banned by Discord. On two servers examined here, raiding was their primary function.

Among servers devoted to this activity, specific channels were often created to host links to servers that users were then encouraged to raid. Users are encouraged to be as offensive as possible with the aim of upsetting or angering users on the raided server, and channels often had content banks of offensive memes and content to be shared on raided servers.

The use of raids demonstrates the gamified nature of extremist activity on Discord, where use of the platform and harassment of political opponents is itself turned into a type of real-life video game designed to strengthen in-group affiliation. This combined with the broader extremist activity identified in these channels suggests that the combative activity of raiding could provide a pathway for younger people to become more engaged with extremist activity.


Discord servers were used by members of the extreme right to coordinate harassment of targeted communities (T0048: Harass, T0049.005: Conduct Swarming, T0151.004: Chat Platform, T0151.005: Chat Community Server). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0048: Harass + +**Summary**: Threatening or harassing believers of opposing narratives refers to the use of intimidation techniques, including cyberbullying and doxing, to discourage opponents from voicing their dissent. An influence operation may threaten or harass believers of the opposing narratives to deter individuals from posting or proliferating conflicting content. + +**Tactic**: TA18 Drive Online Harms + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00122 The Extreme Right on Discord](../../generated_pages/incidents/I00122.md) | Discord is an example of a T0151.004: Chat Platform, which allows users to create their own T0151.005: Chat Community Server. The Institute for Strategic Dialog (ISD) conducted an investigation into the extreme right’s usage of Discord servers:

Discord is a free service accessible via phones and computers. It allows users to talk to each other in real time via voice, text or video chat and emerged in 2015 as a platform designed to assist gamers in communicating with each other while playing video games. The popularity of the platform has surged in recent years, and it is currently estimated to have 140 million monthly active users.

Chatrooms – known as servers - in the platform can be created by anyone, and they are used for a range of purposes that extend far beyond gaming. Such purposes include the discussion of extreme right-wing ideologies and the planning of offline extremist activity. Ahead of the far-right Unite the Right rally in Charlottesville, Virginia, in August 2017, organisers used Discord to plan and promote events and posted swastikas and praised Hitler in chat rooms with names like “National Socialist Army” and “Führer’s Gas Chamber”.


In this example, a Discord server was used to organise the 2017 Charlottesville Unite the Right rally. Chat rooms in the server were used to discuss different topics related to the rally (T0057: Organise Events, T0126.002: Facilitate Logistics or Support for Attendance, T0151.004: Chat Platform, T0151.005: Chat Community Server, T0151.006: Chat Room).

Another primary activity engaged in the servers analysed are raids against other servers associated with political opponents, and in particular those that appear to be pro-LGBTQ. Raids are a phenomenon in which a small group of users will join a Discord server with the sole purpose of spamming the host with offensive or incendiary messages and content with the aim of upsetting local users or having the host server banned by Discord. On two servers examined here, raiding was their primary function.

Among servers devoted to this activity, specific channels were often created to host links to servers that users were then encouraged to raid. Users are encouraged to be as offensive as possible with the aim of upsetting or angering users on the raided server, and channels often had content banks of offensive memes and content to be shared on raided servers.

The use of raids demonstrates the gamified nature of extremist activity on Discord, where use of the platform and harassment of political opponents is itself turned into a type of real-life video game designed to strengthen in-group affiliation. This combined with the broader extremist activity identified in these channels suggests that the combative activity of raiding could provide a pathway for younger people to become more engaged with extremist activity.


Discord servers were used by members of the extreme right to coordinate harassment of targeted communities (T0048: Harass, T0049.005: Conduct Swarming, T0151.004: Chat Platform, T0151.005: Chat Community Server). | +| [I00123 The Extreme Right on Steam](../../generated_pages/incidents/I00123.md) | ISD conducted an investigation into the usage of social groups on Steam. Steam is an online platform used to buy and sell digital games, and includes the Steam community feature, which “allows users to find friends and join groups and discussion forums, while also offering in-game voice and text chat”. Actors have used Steam’s social capabilities to enable online harm campaigns:

One function of these Steam groups is the organisation of ‘raids’ – coordinated trolling activity against their political opponents. An example of this can be seen in a white power music group sharing a link to an Israeli Steam group, encouraging other members to “help me raid this juden [German word for Jew] group”. The comments section of said target group show that neo-Nazi and antisemitic comments were consistently posted in the group just two minutes after the instruction had been posted in the extremist group, highlighting the swiftness with which racially motivated harassment can be directed online.

Threat actors used social groups on Steam to organise harassment of targets (T0152.009: Software Delivery Platform, T0151.002: Online Community Group, T0049.005: Conduct Swarming, T0048: Harass). | + + + | Counters | Response types | +| -------- | -------------- | + + +# Technique T0048: Harass + +**Summary**: Threatening or harassing believers of opposing narratives refers to the use of intimidation techniques, including cyberbullying and doxing, to discourage opponents from voicing their dissent. An influence operation may threaten or harass believers of the opposing narratives to deter individuals from posting or proliferating conflicting content. + **Tactic**: TA18 Drive Online Harms @@ -21,4 +66,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0049.001.md index 0c561c8..e723887 100644 --- a/generated_pages/techniques/T0049.001.md +++ b/generated_pages/techniques/T0049.001.md @@ -2,6 +2,48 @@ **Summary**: Use trolls to amplify and/or manipulate narratives. Fake profiles/sockpuppets operating to support individuals/narratives from the entire political spectrum (left/right binary). Operating with increased emphasis on promoting local content and promoting real Twitter users generating their own, often divisive political content, as it's easier to amplify existing content than create new/original content. Trolls operate wherever there's a socially divisive issue (issues that can be or are politicized). +**Tactic**: TA17 Maximise Exposure **Parent Technique:** T0049 Flood Information Space + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0049.001: Trolls Amplify and Manipulate + +**Summary**: Use trolls to amplify and/or manipulate narratives. Fake profiles/sockpuppets operating to support individuals/narratives from the entire political spectrum (left/right binary). Operating with increased emphasis on promoting local content and promoting real Twitter users generating their own, often divisive political content, as it's easier to amplify existing content than create new/original content. Trolls operate wherever there's a socially divisive issue (issues that can be or are politicized). + +**Tactic**: TA17 Maximise Exposure **Parent Technique:** T0049 Flood Information Space + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0049.001: Trolls Amplify and Manipulate + +**Summary**: Use trolls to amplify and/or manipulate narratives. Fake profiles/sockpuppets operating to support individuals/narratives from the entire political spectrum (left/right binary). Operating with increased emphasis on promoting local content and promoting real Twitter users generating their own, often divisive political content, as it's easier to amplify existing content than create new/original content. Trolls operate wherever there's a socially divisive issue (issues that can be or are politicized). 
+ **Tactic**: TA17 Maximise Exposure @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0049.002.md b/generated_pages/techniques/T0049.002.md index 09e5ff1..318a057 100644 --- a/generated_pages/techniques/T0049.002.md +++ b/generated_pages/techniques/T0049.002.md @@ -2,6 +2,48 @@ **Summary**: Hashtags can be used by communities to collate information they post about particular topics (such as their interests, or current events) and users can find communities to join by exploring hashtags they’re interested in.

Threat actors can flood an existing hashtag to try to ruin hashtag functionality, posting content unrelated to the hashtag alongside it, making it a less reliable source of relevant information. They may also try to flood existing hashtags with campaign content, with the intent of maximising exposure to users.

This Technique covers cases where threat actors flood existing hashtags with campaign content.

This Technique covers behaviours previously documented by T0019.002: Hijack Hashtags, which has since been deprecated. This Technique was previously called Hijack Existing Hashtag. +**Tactic**: TA17 Maximise Exposure **Parent Technique:** T0049 Flood Information Space + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0049.002: Flood Existing Hashtag + +**Summary**: Hashtags can be used by communities to collate information they post about particular topics (such as their interests, or current events) and users can find communities to join by exploring hashtags they’re interested in.

Threat actors can flood an existing hashtag to try to ruin hashtag functionality, posting content unrelated to the hashtag alongside it, making it a less reliable source of relevant information. They may also try to flood existing hashtags with campaign content, with the intent of maximising exposure to users.

This Technique covers cases where threat actors flood existing hashtags with campaign content.

This Technique covers behaviours previously documented by T0019.002: Hijack Hashtags, which has since been deprecated. This Technique was previously called Hijack Existing Hashtag. + +**Tactic**: TA17 Maximise Exposure **Parent Technique:** T0049 Flood Information Space + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0049.002: Flood Existing Hashtag + +**Summary**: Hashtags can be used by communities to collate information they post about particular topics (such as their interests, or current events) and users can find communities to join by exploring hashtags they’re interested in.

Threat actors can flood an existing hashtag to try to ruin hashtag functionality, posting content unrelated to the hashtag alongside it, making it a less reliable source of relevant information. They may also try to flood existing hashtags with campaign content, with the intent of maximising exposure to users.

This Technique covers cases where threat actors flood existing hashtags with campaign content.

This Technique covers behaviours previously documented by T0019.002: Hijack Hashtags, which has since been deprecated. This Technique was previously called Hijack Existing Hashtag. + **Tactic**: TA17 Maximise Exposure @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0049.003.md b/generated_pages/techniques/T0049.003.md index 844fafc..a9357de 100644 --- a/generated_pages/techniques/T0049.003.md +++ b/generated_pages/techniques/T0049.003.md @@ -2,6 +2,48 @@ **Summary**: Automated forwarding and reposting refer to the proliferation of operation content using automated means, such as artificial intelligence or social media bots. An influence operation may use automated activity to increase content exposure without dedicating the resources, including personnel and time, traditionally required to forward and repost content. Use bots to amplify narratives above algorithm thresholds. Bots are automated/programmed profiles designed to amplify content (ie: automatically retweet or like) and give appearance it's more "popular" than it is. They can operate as a network, to function in a coordinated/orchestrated manner. In some cases (more so now) they are an inexpensive/disposable assets used for minimal deployment as bot detection tools improve and platforms are more responsive. +**Tactic**: TA17 Maximise Exposure **Parent Technique:** T0049 Flood Information Space + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0049.003: Bots Amplify via Automated Forwarding and Reposting + +**Summary**: Automated forwarding and reposting refer to the proliferation of operation content using automated means, such as artificial intelligence or social media bots. An influence operation may use automated activity to increase content exposure without dedicating the resources, including personnel and time, traditionally required to forward and repost content. Use bots to amplify narratives above algorithm thresholds. Bots are automated/programmed profiles designed to amplify content (ie: automatically retweet or like) and give appearance it's more "popular" than it is. They can operate as a network, to function in a coordinated/orchestrated manner. In some cases (more so now) they are an inexpensive/disposable assets used for minimal deployment as bot detection tools improve and platforms are more responsive. + +**Tactic**: TA17 Maximise Exposure **Parent Technique:** T0049 Flood Information Space + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0049.003: Bots Amplify via Automated Forwarding and Reposting + +**Summary**: Automated forwarding and reposting refer to the proliferation of operation content using automated means, such as artificial intelligence or social media bots. An influence operation may use automated activity to increase content exposure without dedicating the resources, including personnel and time, traditionally required to forward and repost content. Use bots to amplify narratives above algorithm thresholds. 
Bots are automated/programmed profiles designed to amplify content (ie: automatically retweet or like) and give appearance it's more "popular" than it is. They can operate as a network, to function in a coordinated/orchestrated manner. In some cases (more so now) they are an inexpensive/disposable assets used for minimal deployment as bot detection tools improve and platforms are more responsive. + **Tactic**: TA17 Maximise Exposure @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0049.004.md b/generated_pages/techniques/T0049.004.md index 43bb061..605936c 100644 --- a/generated_pages/techniques/T0049.004.md +++ b/generated_pages/techniques/T0049.004.md @@ -2,6 +2,48 @@ **Summary**: Spamoflauge refers to the practice of disguising spam messages as legitimate. Spam refers to the use of electronic messaging systems to send out unrequested or unwanted messages in bulk. Simple methods of spamoflauge include replacing letters with numbers to fool keyword-based email spam filters, for example, "you've w0n our jackp0t!". Spamoflauge may extend to more complex techniques such as modifying the grammar or word choice of the language, casting messages as images which spam detectors cannot automatically read, or encapsulating messages in password protected attachments, such as .pdf or .zip files. Influence operations may use spamoflauge to avoid spam filtering systems and increase the likelihood of the target audience receiving operation messaging. +**Tactic**: TA17 Maximise Exposure **Parent Technique:** T0049 Flood Information Space + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0049.004: Utilise Spamoflauge + +**Summary**: Spamoflauge refers to the practice of disguising spam messages as legitimate. Spam refers to the use of electronic messaging systems to send out unrequested or unwanted messages in bulk. Simple methods of spamoflauge include replacing letters with numbers to fool keyword-based email spam filters, for example, "you've w0n our jackp0t!". Spamoflauge may extend to more complex techniques such as modifying the grammar or word choice of the language, casting messages as images which spam detectors cannot automatically read, or encapsulating messages in password protected attachments, such as .pdf or .zip files. Influence operations may use spamoflauge to avoid spam filtering systems and increase the likelihood of the target audience receiving operation messaging. + +**Tactic**: TA17 Maximise Exposure **Parent Technique:** T0049 Flood Information Space + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0049.004: Utilise Spamoflauge + +**Summary**: Spamoflauge refers to the practice of disguising spam messages as legitimate. Spam refers to the use of electronic messaging systems to send out unrequested or unwanted messages in bulk. Simple methods of spamoflauge include replacing letters with numbers to fool keyword-based email spam filters, for example, "you've w0n our jackp0t!". 
Spamoflauge may extend to more complex techniques such as modifying the grammar or word choice of the language, casting messages as images which spam detectors cannot automatically read, or encapsulating messages in password protected attachments, such as .pdf or .zip files. Influence operations may use spamoflauge to avoid spam filtering systems and increase the likelihood of the target audience receiving operation messaging. + **Tactic**: TA17 Maximise Exposure @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0049.005.md b/generated_pages/techniques/T0049.005.md index 9ad4d07..6901563 100644 --- a/generated_pages/techniques/T0049.005.md +++ b/generated_pages/techniques/T0049.005.md @@ -2,6 +2,51 @@ **Summary**: Swarming refers to the coordinated use of accounts to overwhelm the information space with operation content. Unlike information flooding, swarming centres exclusively around a specific event or actor rather than a general narrative. Swarming relies on “horizontal communication” between information assets rather than a top-down, vertical command-and-control approach. +**Tactic**: TA17 Maximise Exposure **Parent Technique:** T0049 Flood Information Space + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00123 The Extreme Right on Steam](../../generated_pages/incidents/I00123.md) | ISD conducted an investigation into the usage of social groups on Steam. Steam is an online platform used to buy and sell digital games, and includes the Steam community feature, which “allows users to find friends and join groups and discussion forums, while also offering in-game voice and text chat”. Actors have used Steam’s social capabilities to enable online harm campaigns:

One function of these Steam groups is the organisation of ‘raids’ – coordinated trolling activity against their political opponents. An example of this can be seen in a white power music group sharing a link to an Israeli Steam group, encouraging other members to “help me raid this juden [German word for Jew] group”. The comments section of said target group show that neo-Nazi and antisemitic comments were consistently posted in the group just two minutes after the instruction had been posted in the extremist group, highlighting the swiftness with which racially motivated harassment can be directed online.

Threat actors used social groups on Steam to organise harassment of targets (T0152.009: Software Delivery Platform, T0151.002: Online Community Group, T0049.005: Conduct Swarming, T0048: Harass). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0049.005: Conduct Swarming + +**Summary**: Swarming refers to the coordinated use of accounts to overwhelm the information space with operation content. Unlike information flooding, swarming centres exclusively around a specific event or actor rather than a general narrative. Swarming relies on “horizontal communication” between information assets rather than a top-down, vertical command-and-control approach. + +**Tactic**: TA17 Maximise Exposure **Parent Technique:** T0049 Flood Information Space + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00122 The Extreme Right on Discord](../../generated_pages/incidents/I00122.md) | Discord is an example of a T0151.004: Chat Platform, which allows users to create their own T0151.005: Chat Community Server. The Institute for Strategic Dialog (ISD) conducted an investigation into the extreme right’s usage of Discord servers:

Discord is a free service accessible via phones and computers. It allows users to talk to each other in real time via voice, text or video chat and emerged in 2015 as a platform designed to assist gamers in communicating with each other while playing video games. The popularity of the platform has surged in recent years, and it is currently estimated to have 140 million monthly active users.

Chatrooms – known as servers - in the platform can be created by anyone, and they are used for a range of purposes that extend far beyond gaming. Such purposes include the discussion of extreme right-wing ideologies and the planning of offline extremist activity. Ahead of the far-right Unite the Right rally in Charlottesville, Virginia, in August 2017, organisers used Discord to plan and promote events and posted swastikas and praised Hitler in chat rooms with names like “National Socialist Army” and “Führer’s Gas Chamber”.


In this example, a Discord server was used to organise the 2017 Charlottesville Unite the Right rally. Chat rooms in the server were used to discuss different topics related to the rally (T0057: Organise Events, T0126.002: Facilitate Logistics or Support for Attendance, T0151.004: Chat Platform, T0151.005: Chat Community Server, T0151.006: Chat Room).

Another primary activity engaged in the servers analysed are raids against other servers associated with political opponents, and in particular those that appear to be pro-LGBTQ. Raids are a phenomenon in which a small group of users will join a Discord server with the sole purpose of spamming the host with offensive or incendiary messages and content with the aim of upsetting local users or having the host server banned by Discord. On two servers examined here, raiding was their primary function.

Among servers devoted to this activity, specific channels were often created to host links to servers that users were then encouraged to raid. Users are encouraged to be as offensive as possible with the aim of upsetting or angering users on the raided server, and channels often had content banks of offensive memes and content to be shared on raided servers.

The use of raids demonstrates the gamified nature of extremist activity on Discord, where use of the platform and harassment of political opponents is itself turned into a type of real-life video game designed to strengthen in-group affiliation. This combined with the broader extremist activity identified in these channels suggests that the combative activity of raiding could provide a pathway for younger people to become more engaged with extremist activity.


Discord servers were used by members of the extreme right to coordinate harassment of targeted communities (T0048: Harass, T0049.005: Conduct Swarming, T0151.004: Chat Platform, T0151.005: Chat Community Server). | +| [I00123 The Extreme Right on Steam](../../generated_pages/incidents/I00123.md) | ISD conducted an investigation into the usage of social groups on Steam. Steam is an online platform used to buy and sell digital games, and includes the Steam community feature, which “allows users to find friends and join groups and discussion forums, while also offering in-game voice and text chat”. Actors have used Steam’s social capabilities to enable online harm campaigns:

One function of these Steam groups is the organisation of ‘raids’ – coordinated trolling activity against their political opponents. An example of this can be seen in a white power music group sharing a link to an Israeli Steam group, encouraging other members to “help me raid this juden [German word for Jew] group”. The comments section of said target group show that neo-Nazi and antisemitic comments were consistently posted in the group just two minutes after the instruction had been posted in the extremist group, highlighting the swiftness with which racially motivated harassment can be directed online.

Threat actors used social groups on Steam to organise harassment of targets (T0152.009: Software Delivery Platform, T0151.002: Online Community Group, T0049.005: Conduct Swarming, T0048: Harass). | + + + | Counters | Response types | +| -------- | -------------- | + + +# Technique T0049.005: Conduct Swarming + +**Summary**: Swarming refers to the coordinated use of accounts to overwhelm the information space with operation content. Unlike information flooding, swarming centres exclusively around a specific event or actor rather than a general narrative. Swarming relies on “horizontal communication” between information assets rather than a top-down, vertical command-and-control approach. + **Tactic**: TA17 Maximise Exposure @@ -21,4 +66,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0049.006.md index cded68a..6d74e4d 100644 --- a/generated_pages/techniques/T0049.006.md +++ b/generated_pages/techniques/T0049.006.md @@ -2,6 +2,48 @@ **Summary**: Keyword squatting refers to the creation of online content, such as websites, articles, or social media accounts, around a specific search engine-optimized term to overwhelm the search results of that term. An influence operation may keyword squat to increase content exposure to target audience members who query the exploited term in a search engine and manipulate the narrative around the term. +**Tactic**: TA17 Maximise Exposure **Parent Technique:** T0049 Flood Information Space + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0049.006: Conduct Keyword Squatting + +**Summary**: Keyword squatting refers to the creation of online content, such as websites, articles, or social media accounts, around a specific search engine-optimized term to overwhelm the search results of that term. An influence operation may keyword squat to increase content exposure to target audience members who query the exploited term in a search engine and manipulate the narrative around the term. + +**Tactic**: TA17 Maximise Exposure **Parent Technique:** T0049 Flood Information Space + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0049.006: Conduct Keyword Squatting + +**Summary**: Keyword squatting refers to the creation of online content, such as websites, articles, or social media accounts, around a specific search engine-optimized term to overwhelm the search results of that term. An influence operation may keyword squat to increase content exposure to target audience members who query the exploited term in a search engine and manipulate the narrative around the term. 
+ **Tactic**: TA17 Maximise Exposure @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0049.007.md b/generated_pages/techniques/T0049.007.md index 01568b9..98c8ce6 100644 --- a/generated_pages/techniques/T0049.007.md +++ b/generated_pages/techniques/T0049.007.md @@ -2,6 +2,48 @@ **Summary**: Inauthentic sites circulate cross-post stories and amplify narratives. Often these sites have no masthead, bylines or attribution. +**Tactic**: TA17 Maximise Exposure **Parent Technique:** T0049 Flood Information Space + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0049.007: Inauthentic Sites Amplify News and Narratives + +**Summary**: Inauthentic sites circulate cross-post stories and amplify narratives. Often these sites have no masthead, bylines or attribution. + +**Tactic**: TA17 Maximise Exposure **Parent Technique:** T0049 Flood Information Space + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0049.007: Inauthentic Sites Amplify News and Narratives + +**Summary**: Inauthentic sites circulate cross-post stories and amplify narratives. Often these sites have no masthead, bylines or attribution. + **Tactic**: TA17 Maximise Exposure @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0049.008.md b/generated_pages/techniques/T0049.008.md index d6dd30c..3ee5b6a 100644 --- a/generated_pages/techniques/T0049.008.md +++ b/generated_pages/techniques/T0049.008.md @@ -2,6 +2,48 @@ **Summary**: Information Pollution occurs when threat actors attempt to ruin a source of information by flooding it with lots of inauthentic or unreliable content, intending to make it harder for legitimate users to find the information they’re looking for.

This sub-technique’s objective is to reduce exposure to target information, rather than promoting exposure to campaign content, for which the parent Technique T0049 can be used.

Analysts will need to infer the motive behind the flooding when deciding whether to tag a flooded information space with T0049 or T0049.008. If such inference is not possible, default to T0049.

This Technique previously used the ID T0019. +**Tactic**: TA17 Maximise Exposure **Parent Technique:** T0049 Flood Information Space + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0049.008: Generate Information Pollution + +**Summary**: Information Pollution occurs when threat actors attempt to ruin a source of information by flooding it with lots of inauthentic or unreliable content, intending to make it harder for legitimate users to find the information they’re looking for.

This sub-technique’s objective is to reduce exposure to target information, rather than promoting exposure to campaign content, for which the parent Technique T0049 can be used.

Analysts will need to infer the motive behind the flooding when deciding whether to tag a flooded information space with T0049 or T0049.008. If such inference is not possible, default to T0049.

This Technique previously used the ID T0019. + +**Tactic**: TA17 Maximise Exposure **Parent Technique:** T0049 Flood Information Space + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0049.008: Generate Information Pollution + +**Summary**: Information Pollution occurs when threat actors attempt to ruin a source of information by flooding it with lots of inauthentic or unreliable content, intending to make it harder for legitimate users to find the information they’re looking for.

This sub-technique’s objective is to reduce exposure to target information, rather than promoting exposure to campaign content, for which the parent Technique T0049 can be used.

Analysts will need to infer the motive behind the flooding when deciding whether to tag a flooded information space with T0049 or T0049.008. If such inference is not possible, default to T0049.

This Technique previously used the ID T0019. + **Tactic**: TA17 Maximise Exposure @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0049.md b/generated_pages/techniques/T0049.md index 0a9f98c..9a3f20a 100644 --- a/generated_pages/techniques/T0049.md +++ b/generated_pages/techniques/T0049.md @@ -2,6 +2,50 @@ **Summary**: Flooding sources of information (e.g. Social Media feeds) with a high volume of inauthentic content.

This can be done to control/shape online conversations, drown out opposing points of view, or make it harder to find legitimate information.

Bots and/or patriotic trolls are effective tools to achieve this effect.

This Technique previously used the name Flooding the Information Space. +**Tactic**: TA17 Maximise Exposure + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | +| [C00131 Seize and analyse botnet servers](../../generated_pages/counters/C00131.md) | D02 | + + +# Technique T0049: Flood Information Space + +**Summary**: Flooding sources of information (e.g. Social Media feeds) with a high volume of inauthentic content.

This can be done to control/shape online conversations, drown out opposing points of view, or make it harder to find legitimate information.

Bots and/or patriotic trolls are effective tools to achieve this effect.

This Technique previously used the name Flooding the Information Space. + +**Tactic**: TA17 Maximise Exposure + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | +| [C00131 Seize and analyse botnet servers](../../generated_pages/counters/C00131.md) | D02 | + + +# Technique T0049: Flood Information Space + +**Summary**: Flooding sources of information (e.g. Social Media feeds) with a high volume of inauthentic content.

This can be done to control/shape online conversations, drown out opposing points of view, or make it harder to find legitimate information.

Bots and/or patriotic trolls are effective tools to achieve this effect.

This Technique previously used the name Flooding the Information Space. + **Tactic**: TA17 Maximise Exposure @@ -20,4 +64,3 @@ | [C00131 Seize and analyse botnet servers](../../generated_pages/counters/C00131.md) | D02 | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0057.001.md b/generated_pages/techniques/T0057.001.md index 6fdf11d..4d12cd3 100644 --- a/generated_pages/techniques/T0057.001.md +++ b/generated_pages/techniques/T0057.001.md @@ -2,6 +2,48 @@ **Summary**: Paying for physical action occurs when an influence operation pays individuals to act in the physical realm. An influence operation may pay for physical action to create specific situations and frame them in a way that supports operation narratives, for example, paying a group of people to burn a car to later post an image of the burning car and frame it as an act of protest. +**Tactic**: TA10 Drive Offline Activity **Parent Technique:** T0057 Organise Events + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0057.001: Pay for Physical Action + +**Summary**: Paying for physical action occurs when an influence operation pays individuals to act in the physical realm. An influence operation may pay for physical action to create specific situations and frame them in a way that supports operation narratives, for example, paying a group of people to burn a car to later post an image of the burning car and frame it as an act of protest. + +**Tactic**: TA10 Drive Offline Activity **Parent Technique:** T0057 Organise Events + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0057.001: Pay for Physical Action + +**Summary**: Paying for physical action occurs when an influence operation pays individuals to act in the physical realm. An influence operation may pay for physical action to create specific situations and frame them in a way that supports operation narratives, for example, paying a group of people to burn a car to later post an image of the burning car and frame it as an act of protest. + **Tactic**: TA10 Drive Offline Activity @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0057.002.md b/generated_pages/techniques/T0057.002.md index e3d388b..52f19fb 100644 --- a/generated_pages/techniques/T0057.002.md +++ b/generated_pages/techniques/T0057.002.md @@ -2,6 +2,48 @@ **Summary**: Symbolic action refers to activities specifically intended to advance an operation’s narrative by signalling something to the audience, for example, a military parade supporting a state’s narrative of military superiority. An influence operation may use symbolic action to create falsified evidence supporting operation narratives in the physical information space. 
+**Tactic**: TA10 Drive Offline Activity **Parent Technique:** T0057 Organise Events + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0057.002: Conduct Symbolic Action + +**Summary**: Symbolic action refers to activities specifically intended to advance an operation’s narrative by signalling something to the audience, for example, a military parade supporting a state’s narrative of military superiority. An influence operation may use symbolic action to create falsified evidence supporting operation narratives in the physical information space. + +**Tactic**: TA10 Drive Offline Activity **Parent Technique:** T0057 Organise Events + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0057.002: Conduct Symbolic Action + +**Summary**: Symbolic action refers to activities specifically intended to advance an operation’s narrative by signalling something to the audience, for example, a military parade supporting a state’s narrative of military superiority. An influence operation may use symbolic action to create falsified evidence supporting operation narratives in the physical information space. + **Tactic**: TA10 Drive Offline Activity @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0057.md b/generated_pages/techniques/T0057.md index 6f9bee8..f0aea70 100644 --- a/generated_pages/techniques/T0057.md +++ b/generated_pages/techniques/T0057.md @@ -2,6 +2,51 @@ **Summary**: Coordinate and promote real-world events across media platforms, e.g. rallies, protests, gatherings in support of incident narratives. +**Tactic**: TA10 Drive Offline Activity + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | +| [C00129 Use banking to cut off access](../../generated_pages/counters/C00129.md) | D02 | + + +# Technique T0057: Organise Events + +**Summary**: Coordinate and promote real-world events across media platforms, e.g. rallies, protests, gatherings in support of incident narratives. + +**Tactic**: TA10 Drive Offline Activity + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00122 The Extreme Right on Discord](../../generated_pages/incidents/I00122.md) | Discord is an example of a T0151.004: Chat Platform, which allows users to create their own T0151.005: Chat Community Server. The Institute for Strategic Dialog (ISD) conducted an investigation into the extreme right’s usage of Discord servers:

Discord is a free service accessible via phones and computers. It allows users to talk to each other in real time via voice, text or video chat and emerged in 2015 as a platform designed to assist gamers in communicating with each other while playing video games. The popularity of the platform has surged in recent years, and it is currently estimated to have 140 million monthly active users.

Chatrooms – known as servers - in the platform can be created by anyone, and they are used for a range of purposes that extend far beyond gaming. Such purposes include the discussion of extreme right-wing ideologies and the planning of offline extremist activity. Ahead of the far-right Unite the Right rally in Charlottesville, Virginia, in August 2017, organisers used Discord to plan and promote events and posted swastikas and praised Hitler in chat rooms with names like “National Socialist Army” and “Führer’s Gas Chamber”.


In this example, a Discord server was used to organise the 2017 Charlottesville Unite the Right rally. Chat rooms in the server were used to discuss different topics related to the rally (T0057: Organise Events, T0126.002: Facilitate Logistics or Support for Attendance, T0151.004: Chat Platform, T0151.005: Chat Community Server, T0151.006: Chat Room).

Another primary activity engaged in the servers analysed are raids against other servers associated with political opponents, and in particular those that appear to be pro-LGBTQ. Raids are a phenomenon in which a small group of users will join a Discord server with the sole purpose of spamming the host with offensive or incendiary messages and content with the aim of upsetting local users or having the host server banned by Discord. On two servers examined here, raiding was their primary function.

Among servers devoted to this activity, specific channels were often created to host links to servers that users were then encouraged to raid. Users are encouraged to be as offensive as possible with the aim of upsetting or angering users on the raided server, and channels often had content banks of offensive memes and content to be shared on raided servers.

The use of raids demonstrates the gamified nature of extremist activity on Discord, where use of the platform and harassment of political opponents is itself turned into a type of real-life video game designed to strengthen in-group affiliation. This combined with the broader extremist activity identified in these channels suggests that the combative activity of raiding could provide a pathway for younger people to become more engaged with extremist activity.


Discord servers were used by members of the extreme right to coordinate harassment of targeted communities (T0048: Harass, T0049.005: Conduct Swarming, T0151.004: Chat Platform, T0151.005: Chat Community Server). | + + + +| Counters | Response types | +| -------- | -------------- | +| [C00129 Use banking to cut off access](../../generated_pages/counters/C00129.md) | D02 | + + +# Technique T0057: Organise Events + +**Summary**: Coordinate and promote real-world events across media platforms, e.g. rallies, protests, gatherings in support of incident narratives. + **Tactic**: TA10 Drive Offline Activity @@ -21,4 +66,3 @@ | [C00129 Use banking to cut off access](../../generated_pages/counters/C00129.md) | D02 | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0059.md b/generated_pages/techniques/T0059.md index d13fc25..7cd0e3d 100644 --- a/generated_pages/techniques/T0059.md +++ b/generated_pages/techniques/T0059.md @@ -2,6 +2,48 @@ **Summary**: Play the long game refers to two phenomena: 1. To plan messaging and allow it to grow organically without conducting your own amplification. This is methodical and slow and requires years for the message to take hold 2. To develop a series of seemingly disconnected messaging narratives that eventually combine into a new narrative. +**Tactic**: TA11 Persist in the Information Environment + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0059: Play the Long Game + +**Summary**: Play the long game refers to two phenomena: 1. To plan messaging and allow it to grow organically without conducting your own amplification. This is methodical and slow and requires years for the message to take hold 2. To develop a series of seemingly disconnected messaging narratives that eventually combine into a new narrative. + +**Tactic**: TA11 Persist in the Information Environment + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0059: Play the Long Game + +**Summary**: Play the long game refers to two phenomena: 1. To plan messaging and allow it to grow organically without conducting your own amplification. This is methodical and slow and requires years for the message to take hold 2. To develop a series of seemingly disconnected messaging narratives that eventually combine into a new narrative. 
+ **Tactic**: TA11 Persist in the Information Environment @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0060.md index 6ca16d8..b01d60e 100644 --- a/generated_pages/techniques/T0060.md +++ b/generated_pages/techniques/T0060.md @@ -2,6 +2,54 @@ **Summary**: Continue narrative or message amplification after the main incident work has finished. +**Tactic**: TA11 Persist in the Information Environment + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | +| [C00138 Spam domestic actors with lawsuits](../../generated_pages/counters/C00138.md) | D03 | +| [C00143 (botnet) DMCA takedown requests to waste group time](../../generated_pages/counters/C00143.md) | D04 | +| [C00147 Make amplification of social media posts expire (e.g. can't like/ retweet after n days)](../../generated_pages/counters/C00147.md) | D03 | + + +# Technique T0060: Continue to Amplify + +**Summary**: Continue narrative or message amplification after the main incident work has finished. + +**Tactic**: TA11 Persist in the Information Environment + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | +| [C00138 Spam domestic actors with lawsuits](../../generated_pages/counters/C00138.md) | D03 | +| [C00143 (botnet) DMCA takedown requests to waste group time](../../generated_pages/counters/C00143.md) | D04 | +| [C00147 Make amplification of social media posts expire (e.g. can't like/ retweet after n days)](../../generated_pages/counters/C00147.md) | D03 | + + +# Technique T0060: Continue to Amplify + +**Summary**: Continue narrative or message amplification after the main incident work has finished. + **Tactic**: TA11 Persist in the Information Environment @@ -22,4 +70,3 @@ | [C00147 Make amplification of social media posts expire (e.g. can't like/ retweet after n days)](../../generated_pages/counters/C00147.md) | D03 | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0061.md index 038e1e7..ba0406b 100644 --- a/generated_pages/techniques/T0061.md +++ b/generated_pages/techniques/T0061.md @@ -2,6 +2,50 @@ **Summary**: Sell merchandise refers to getting the message or narrative into physical space in the offline world while making money. +**Tactic**: TA10 Drive Offline Activity + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00109 Coordinated Facebook Pages Designed to Fund a White Supremacist Agenda](../../generated_pages/incidents/I00109.md) | This report examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at technical indicators associated with the Suavelos website, and attributions which can be made as a consequence:

[The Google AdSense tag set up on Suavelos.eu was also found on the following domains, indicating that they are controlled by the same actor;] Alabastro.eu: an online shop to buy “white nationalists” t-shirts [and] ARPAC.eu: the website of a registered non-profit organisation advocating to lift regulation on gun control in France.

Other domains attributed to Suavelos (T0149.001: Domain Asset) reveal a website set up to sell merchandise (T0152.004: Website Asset, T0148.004: Payment Processing Capability, T0061: Sell Merchandise), and a website hosting a registered French non-profit (T0152.004: Website Asset, T0097.207: NGO Persona).

To learn more about the suavelos.eu domain, we collected the following data: The domain is hosted on OVH; The owner’s identity is protected; The IP Address of the server is 94.23.253.173, which is shared with 20 other domains.

The relatively low number of websites hosted on this IP address could indicate that they all belong to the same people and are hosted on the same private server.
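The attribution signals quoted above (a shared Google AdSense tag and a shared hosting IP) are mechanical enough to sketch in code. The snippet below is a minimal illustration, not the report's actual tooling: the domain list comes from the quote, while the regex and helper names are assumptions.

```python
# Minimal sketch of the two attribution signals described above: a shared
# Google AdSense publisher ID and a shared hosting IP. Domain list comes
# from the report; regex and helper names are illustrative assumptions.
import re
import socket

import requests

DOMAINS = ["suavelos.eu", "alabastro.eu", "arpac.eu"]

def adsense_ids(domain: str) -> set:
    """Pull AdSense publisher IDs (ca-pub-...) out of a page's raw HTML."""
    html = requests.get(f"https://{domain}", timeout=10).text
    return set(re.findall(r"ca-pub-\d{10,16}", html))

def host_ip(domain: str) -> str:
    """Resolve the domain; identical IPs across domains hint at shared hosting."""
    return socket.gethostbyname(domain)

for domain in DOMAINS:
    print(domain, host_ip(domain), adsense_ids(domain))
```

Matching publisher IDs across domains, combined with a common IP, is the same reasoning chain the report applies to suavelos.eu, Alabastro.eu, and ARPAC.eu.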


Suavelos registered a domain using the web hosting provider OVH (T0149.001: Domain Asset, T0152.003: Website Hosting Platform, T0150.006: Purchased). The site’s IP address reveals a server hosting other domains potentially owned by the actors (T0149.005: Server Asset, T0149.006: IP Address Asset). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0061: Sell Merchandise + +**Summary**: Sell mechandise refers to getting the message or narrative into physical space in the offline world while making money + +**Tactic**: TA10 Drive Offline Activity + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00109 Coordinated Facebook Pages Designed to Fund a White Supremacist Agenda](../../generated_pages/incidents/I00109.md) | This report examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at technical indicators associated with the Suavelos website, and attributions which can be made as a consequence:

[The Google AdSense tag set up on Suavelos.eu was also found on the following domains, indicating that they are controlled by the same actor;] Alabastro.eu: an online shop to buy “white nationalists” t-shirts [and] ARPAC.eu: the website of a registered non-profit organisation advocating to lift regulation on gun control in France.

Other domains attributed to Suavelos (T0149.001: Domain Asset) reveal a website set up to sell merchandise (T0152.004: Website Asset, T0148.004: Payment Processing Capability, T0061: Sell Merchandise), and a website hosting a registered French non-profit (T0152.004: Website Asset, T0097.207: NGO Persona).

To learn more about the suavelos.eu domain, we collected the following data: The domain is hosted on OVH; The owner’s identity is protected; The IP Address of the server is 94.23.253.173, which is shared with 20 other domains.

The relatively low number of websites hosted on this IP address could indicate that they all belong to the same people and are hosted on the same private server.


Suavelos registered a domain using the web hosting provider OVH (T0149.001: Domain Asset, T0152.003: Website Hosting Platform, T0150.006: Purchased). The site’s IP address reveals a server hosting other domains potentially owned by the actors (T0149.005: Server Asset, T0149.006: IP Address Asset). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0061: Sell Merchandise + +**Summary**: Sell mechandise refers to getting the message or narrative into physical space in the offline world while making money + **Tactic**: TA10 Drive Offline Activity @@ -20,4 +64,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0065.md b/generated_pages/techniques/T0065.md index f95a57b..db01488 100644 --- a/generated_pages/techniques/T0065.md +++ b/generated_pages/techniques/T0065.md @@ -2,6 +2,48 @@ **Summary**: Create or coopt broadcast capabilities (e.g. TV, radio etc). +**Tactic**: TA15 Establish Assets + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0065: Prepare Physical Broadcast Capabilities + +**Summary**: Create or coopt broadcast capabilities (e.g. TV, radio etc). + +**Tactic**: TA15 Establish Assets + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0065: Prepare Physical Broadcast Capabilities + +**Summary**: Create or coopt broadcast capabilities (e.g. TV, radio etc). + **Tactic**: TA15 Establish Assets @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0066.md b/generated_pages/techniques/T0066.md index eb9fb4c..d931907 100644 --- a/generated_pages/techniques/T0066.md +++ b/generated_pages/techniques/T0066.md @@ -2,6 +2,48 @@ **Summary**: Plan to degrade an adversary’s image or ability to act. This could include preparation and use of harmful information about the adversary’s actions or reputation. +**Tactic**: TA02 Plan Objectives + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0066: Degrade Adversary + +**Summary**: Plan to degrade an adversary’s image or ability to act. This could include preparation and use of harmful information about the adversary’s actions or reputation. + +**Tactic**: TA02 Plan Objectives + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0066: Degrade Adversary + +**Summary**: Plan to degrade an adversary’s image or ability to act. This could include preparation and use of harmful information about the adversary’s actions or reputation. 
+ **Tactic**: TA02 Plan Objectives @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0068.md b/generated_pages/techniques/T0068.md index 192eb85..192eb44 100644 --- a/generated_pages/techniques/T0068.md +++ b/generated_pages/techniques/T0068.md @@ -2,6 +2,49 @@ **Summary**: Media attention on a story or event is heightened during a breaking news event, where unclear facts and incomplete information increase speculation, rumours, and conspiracy theories, which are all vulnerable to manipulation. +**Tactic**: TA14 Develop Narratives + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0068: Respond to Breaking News Event or Active Crisis + +**Summary**: Media attention on a story or event is heightened during a breaking news event, where unclear facts and incomplete information increase speculation, rumours, and conspiracy theories, which are all vulnerable to manipulation. + +**Tactic**: TA14 Develop Narratives + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00106 Facebook Is Being Flooded With Gross AI-Generated Images of Hurricane Helene Devastation](../../generated_pages/incidents/I00106.md) | As families desperately seek to find missing loved ones and communities grapple with immeasurable losses of both life and property in the wake of [2024’s] Hurricane Helene, AI slop scammers appear to be capitalizing on the moment for personal gain.

A Facebook account called "Coastal Views" usually shares calmer AI imagery of nature-filled beachside scenes. The account's banner image showcases a signpost reading "OBX Live," OBX being shorthand for North Carolina's Outer Banks islands.

But starting this weekend, the account shifted its approach dramatically, as first flagged by a social media user on X.

Instead of posting "photos" of leaping dolphins and sandy beaches, the account suddenly started publishing images of flooded mountain neighborhoods, submerged houses, and dogs sitting on top of roofs.

But instead of spreading vital information to those affected by the natural disaster, or at the very least sharing real photos of the destruction, the account is seemingly trying to use AI to cash in on all the attention the hurricane has been getting.

The account links to an Etsy page for a business called "OuterBanks2023," where somebody who goes by "Alexandr" sells AI-generated prints of horses touching snouts with sea turtles, Santa running down the shoreline with a reindeer, and sunsets over ocean waves.


A Facebook page which presented itself as being associated with North Carolina, and which had previously posted AI-generated images, changed to posting AI-generated images of hurricane damage after Hurricane Helene hit North Carolina (T0151.003: Online Community Page, T0151.001: Social Media Platform, T0115: Post Content, T0086.002: Develop AI-Generated Images (Deepfakes), T0068: Respond to Breaking News Event or Active Crisis).

The account included links (T0122: Direct Users to Alternative Platforms) to an account on Etsy, which sold prints of AI generated images (T0146: Account Asset, T0148.007: eCommerce Platform). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0068: Respond to Breaking News Event or Active Crisis + +**Summary**: Media attention on a story or event is heightened during a breaking news event, where unclear facts and incomplete information increase speculation, rumours, and conspiracy theories, which are all vulnerable to manipulation. + **Tactic**: TA14 Develop Narratives @@ -20,4 +63,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0072.001.md b/generated_pages/techniques/T0072.001.md index e6b7fca..8d5bcda 100644 --- a/generated_pages/techniques/T0072.001.md +++ b/generated_pages/techniques/T0072.001.md @@ -2,6 +2,48 @@ **Summary**: An influence operation may target populations in a specific geographic location, such as a region, state, or city. An influence operation may use geographic segmentation to Create Localised Content (see: Establish Legitimacy). +**Tactic**: TA13 Target Audience Analysis **Parent Technique:** T0072 Segment Audiences + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0072.001: Geographic Segmentation + +**Summary**: An influence operation may target populations in a specific geographic location, such as a region, state, or city. An influence operation may use geographic segmentation to Create Localised Content (see: Establish Legitimacy). + +**Tactic**: TA13 Target Audience Analysis **Parent Technique:** T0072 Segment Audiences + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0072.001: Geographic Segmentation + +**Summary**: An influence operation may target populations in a specific geographic location, such as a region, state, or city. An influence operation may use geographic segmentation to Create Localised Content (see: Establish Legitimacy). + **Tactic**: TA13 Target Audience Analysis @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0072.002.md b/generated_pages/techniques/T0072.002.md index 62bbceb..b4a4b86 100644 --- a/generated_pages/techniques/T0072.002.md +++ b/generated_pages/techniques/T0072.002.md @@ -2,6 +2,48 @@ **Summary**: An influence operation may target populations based on demographic segmentation, including age, gender, and income. Demographic segmentation may be useful for influence operations aiming to change state policies that affect a specific population sector. For example, an influence operation attempting to influence Medicare funding in the United States would likely target U.S. voters over 65 years of age. 
+**Tactic**: TA13 Target Audience Analysis **Parent Technique:** T0072 Segment Audiences + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0072.002: Demographic Segmentation + +**Summary**: An influence operation may target populations based on demographic segmentation, including age, gender, and income. Demographic segmentation may be useful for influence operations aiming to change state policies that affect a specific population sector. For example, an influence operation attempting to influence Medicare funding in the United States would likely target U.S. voters over 65 years of age. + +**Tactic**: TA13 Target Audience Analysis **Parent Technique:** T0072 Segment Audiences + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0072.002: Demographic Segmentation + +**Summary**: An influence operation may target populations based on demographic segmentation, including age, gender, and income. Demographic segmentation may be useful for influence operations aiming to change state policies that affect a specific population sector. For example, an influence operation attempting to influence Medicare funding in the United States would likely target U.S. voters over 65 years of age. + **Tactic**: TA13 Target Audience Analysis @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0072.003.md b/generated_pages/techniques/T0072.003.md index bccbd67..b9310a8 100644 --- a/generated_pages/techniques/T0072.003.md +++ b/generated_pages/techniques/T0072.003.md @@ -2,6 +2,48 @@ **Summary**: An influence operation may target populations based on their income bracket, wealth, or other financial or economic division. +**Tactic**: TA13 Target Audience Analysis **Parent Technique:** T0072 Segment Audiences + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0072.003: Economic Segmentation + +**Summary**: An influence operation may target populations based on their income bracket, wealth, or other financial or economic division. + +**Tactic**: TA13 Target Audience Analysis **Parent Technique:** T0072 Segment Audiences + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0072.003: Economic Segmentation + +**Summary**: An influence operation may target populations based on their income bracket, wealth, or other financial or economic division. 
+ **Tactic**: TA13 Target Audience Analysis @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0072.004.md b/generated_pages/techniques/T0072.004.md index 495f98c..b21fd94 100644 --- a/generated_pages/techniques/T0072.004.md +++ b/generated_pages/techniques/T0072.004.md @@ -2,6 +2,48 @@ **Summary**: An influence operation may target populations based on psychographic segmentation, which uses audience values and decision-making processes. An operation may individually gather psychographic data with its own surveys or collection tools or externally purchase data from social media companies or online surveys, such as personality quizzes. +**Tactic**: TA13 Target Audience Analysis **Parent Technique:** T0072 Segment Audiences + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0072.004: Psychographic Segmentation + +**Summary**: An influence operation may target populations based on psychographic segmentation, which uses audience values and decision-making processes. An operation may individually gather psychographic data with its own surveys or collection tools or externally purchase data from social media companies or online surveys, such as personality quizzes. + +**Tactic**: TA13 Target Audience Analysis **Parent Technique:** T0072 Segment Audiences + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0072.004: Psychographic Segmentation + +**Summary**: An influence operation may target populations based on psychographic segmentation, which uses audience values and decision-making processes. An operation may individually gather psychographic data with its own surveys or collection tools or externally purchase data from social media companies or online surveys, such as personality quizzes. + **Tactic**: TA13 Target Audience Analysis @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0072.005.md b/generated_pages/techniques/T0072.005.md index b37f78d..603bc64 100644 --- a/generated_pages/techniques/T0072.005.md +++ b/generated_pages/techniques/T0072.005.md @@ -2,6 +2,48 @@ **Summary**: An influence operation may target populations based on their political affiliations, especially when aiming to manipulate voting or change policy. +**Tactic**: TA13 Target Audience Analysis **Parent Technique:** T0072 Segment Audiences + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0072.005: Political Segmentation + +**Summary**: An influence operation may target populations based on their political affiliations, especially when aiming to manipulate voting or change policy. 
+ +**Tactic**: TA13 Target Audience Analysis **Parent Technique:** T0072 Segment Audiences + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0072.005: Political Segmentation + +**Summary**: An influence operation may target populations based on their political affiliations, especially when aiming to manipulate voting or change policy. + **Tactic**: TA13 Target Audience Analysis @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0072.md b/generated_pages/techniques/T0072.md index c919e59..f23f6f7 100644 --- a/generated_pages/techniques/T0072.md +++ b/generated_pages/techniques/T0072.md @@ -2,6 +2,48 @@ **Summary**: Create audience segmentations by features of interest to the influence campaign, including political affiliation, geographic location, income, demographics, and psychographics. +**Tactic**: TA13 Target Audience Analysis + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0072: Segment Audiences + +**Summary**: Create audience segmentations by features of interest to the influence campaign, including political affiliation, geographic location, income, demographics, and psychographics. + +**Tactic**: TA13 Target Audience Analysis + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0072: Segment Audiences + +**Summary**: Create audience segmentations by features of interest to the influence campaign, including political affiliation, geographic location, income, demographics, and psychographics. + **Tactic**: TA13 Target Audience Analysis @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0073.md b/generated_pages/techniques/T0073.md index 393ebae..88d5538 100644 --- a/generated_pages/techniques/T0073.md +++ b/generated_pages/techniques/T0073.md @@ -2,6 +2,48 @@ **Summary**: Determining the target audiences (segments of the population) who will receive campaign narratives and artefacts intended to achieve the strategic ends. +**Tactic**: TA01 Plan Strategy + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0073: Determine Target Audiences + +**Summary**: Determining the target audiences (segments of the population) who will receive campaign narratives and artefacts intended to achieve the strategic ends. 
+ +**Tactic**: TA01 Plan Strategy + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0073: Determine Target Audiences + +**Summary**: Determining the target audiences (segments of the population) who will receive campaign narratives and artefacts intended to achieve the strategic ends. + **Tactic**: TA01 Plan Strategy @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0074.001.md b/generated_pages/techniques/T0074.001.md index b9fd24f..a5cb39b 100644 --- a/generated_pages/techniques/T0074.001.md +++ b/generated_pages/techniques/T0074.001.md @@ -2,6 +2,48 @@ **Summary**: Favourable position on the international stage in terms of great power politics or regional rivalry. Geopolitics plays out in the realms of foreign policy, national security, diplomacy, and intelligence. It involves nation-state governments, heads of state, foreign ministers, intergovernmental organisations, and regional security alliances. +**Tactic**: TA01 Plan Strategy **Parent Technique:** T0074 Determine Strategic Ends + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0074.001: Geopolitical Advantage + +**Summary**: Favourable position on the international stage in terms of great power politics or regional rivalry. Geopolitics plays out in the realms of foreign policy, national security, diplomacy, and intelligence. It involves nation-state governments, heads of state, foreign ministers, intergovernmental organisations, and regional security alliances. + +**Tactic**: TA01 Plan Strategy **Parent Technique:** T0074 Determine Strategic Ends + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0074.001: Geopolitical Advantage + +**Summary**: Favourable position on the international stage in terms of great power politics or regional rivalry. Geopolitics plays out in the realms of foreign policy, national security, diplomacy, and intelligence. It involves nation-state governments, heads of state, foreign ministers, intergovernmental organisations, and regional security alliances. + **Tactic**: TA01 Plan Strategy @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0074.002.md b/generated_pages/techniques/T0074.002.md index b9c7cb4..cf4da59 100644 --- a/generated_pages/techniques/T0074.002.md +++ b/generated_pages/techniques/T0074.002.md @@ -2,6 +2,48 @@ **Summary**: Favourable position vis-à-vis national or sub-national political opponents such as political parties, interest groups, politicians, candidates. 
+**Tactic**: TA01 Plan Strategy **Parent Technique:** T0074 Determine Strategic Ends + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0074.002: Domestic Political Advantage + +**Summary**: Favourable position vis-à-vis national or sub-national political opponents such as political parties, interest groups, politicians, candidates. + +**Tactic**: TA01 Plan Strategy **Parent Technique:** T0074 Determine Strategic Ends + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0074.002: Domestic Political Advantage + +**Summary**: Favourable position vis-à-vis national or sub-national political opponents such as political parties, interest groups, politicians, candidates. + **Tactic**: TA01 Plan Strategy @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0074.003.md b/generated_pages/techniques/T0074.003.md index f5b9b42..b5ae92a 100644 --- a/generated_pages/techniques/T0074.003.md +++ b/generated_pages/techniques/T0074.003.md @@ -2,6 +2,48 @@ **Summary**: Favourable position domestically or internationally in the realms of commerce, trade, finance, industry. Economics involves nation-states, corporations, banks, trade blocs, industry associations, cartels. +**Tactic**: TA01 Plan Strategy **Parent Technique:** T0074 Determine Strategic Ends + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0074.003: Economic Advantage + +**Summary**: Favourable position domestically or internationally in the realms of commerce, trade, finance, industry. Economics involves nation-states, corporations, banks, trade blocs, industry associations, cartels. + +**Tactic**: TA01 Plan Strategy **Parent Technique:** T0074 Determine Strategic Ends + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0074.003: Economic Advantage + +**Summary**: Favourable position domestically or internationally in the realms of commerce, trade, finance, industry. Economics involves nation-states, corporations, banks, trade blocs, industry associations, cartels. + **Tactic**: TA01 Plan Strategy @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0074.004.md b/generated_pages/techniques/T0074.004.md index 6c3c859..459daea 100644 --- a/generated_pages/techniques/T0074.004.md +++ b/generated_pages/techniques/T0074.004.md @@ -2,6 +2,48 @@ **Summary**: Favourable position domestically or internationally in the market for ideas, beliefs, and world views. Competition plays out among faith systems, political systems, and value systems. 
It can involve sub-national, national or supra-national movements. +**Tactic**: TA01 Plan Strategy **Parent Technique:** T0074 Determine Strategic Ends + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0074.004: Ideological Advantage + +**Summary**: Favourable position domestically or internationally in the market for ideas, beliefs, and world views. Competition plays out among faith systems, political systems, and value systems. It can involve sub-national, national or supra-national movements. + +**Tactic**: TA01 Plan Strategy **Parent Technique:** T0074 Determine Strategic Ends + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0074.004: Ideological Advantage + +**Summary**: Favourable position domestically or internationally in the market for ideas, beliefs, and world views. Competition plays out among faith systems, political systems, and value systems. It can involve sub-national, national or supra-national movements. + **Tactic**: TA01 Plan Strategy @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0074.md b/generated_pages/techniques/T0074.md index 38ec84e..6e3fd08 100644 --- a/generated_pages/techniques/T0074.md +++ b/generated_pages/techniques/T0074.md @@ -2,6 +2,48 @@ **Summary**: These are the long-term end-states the campaign aims to bring about. They typically involve an advantageous position vis-a-vis competitors in terms of power or influence. The strategic goal may be to improve or simply to hold one’s position. Competition occurs in the public sphere in the domains of war, diplomacy, politics, economics, and ideology, and can play out between armed groups, nation-states, political parties, corporations, interest groups, or individuals. +**Tactic**: TA01 Plan Strategy + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0074: Determine Strategic Ends + +**Summary**: These are the long-term end-states the campaign aims to bring about. They typically involve an advantageous position vis-a-vis competitors in terms of power or influence. The strategic goal may be to improve or simply to hold one’s position. Competition occurs in the public sphere in the domains of war, diplomacy, politics, economics, and ideology, and can play out between armed groups, nation-states, political parties, corporations, interest groups, or individuals. + +**Tactic**: TA01 Plan Strategy + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0074: Determine Strategic Ends + +**Summary**: These are the long-term end-states the campaign aims to bring about. 
They typically involve an advantageous position vis-a-vis competitors in terms of power or influence. The strategic goal may be to improve or simply to hold one’s position. Competition occurs in the public sphere in the domains of war, diplomacy, politics, economics, and ideology, and can play out between armed groups, nation-states, political parties, corporations, interest groups, or individuals. + **Tactic**: TA01 Plan Strategy @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0075.001.md b/generated_pages/techniques/T0075.001.md index 1b591ea..944895a 100644 --- a/generated_pages/techniques/T0075.001.md +++ b/generated_pages/techniques/T0075.001.md @@ -2,6 +2,48 @@ **Summary**: Plan to delegitimize the media landscape and degrade public trust in reporting, by discrediting credible sources. This makes it easier to promote influence operation content. +**Tactic**: TA02 Plan Objectives **Parent Technique:** T0075 Dismiss + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0075.001: Discredit Credible Sources + +**Summary**: Plan to delegitimize the media landscape and degrade public trust in reporting, by discrediting credible sources. This makes it easier to promote influence operation content. + +**Tactic**: TA02 Plan Objectives **Parent Technique:** T0075 Dismiss + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0075.001: Discredit Credible Sources + +**Summary**: Plan to delegitimize the media landscape and degrade public trust in reporting, by discrediting credible sources. This makes it easier to promote influence operation content. + **Tactic**: TA02 Plan Objectives @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0075.md b/generated_pages/techniques/T0075.md index 588fdb6..643014f 100644 --- a/generated_pages/techniques/T0075.md +++ b/generated_pages/techniques/T0075.md @@ -2,6 +2,48 @@ **Summary**: Push back against criticism by dismissing your critics. This might be arguing that the critics use a different standard for you than with other actors or themselves; or arguing that their criticism is biassed. +**Tactic**: TA02 Plan Objectives + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0075: Dismiss + +**Summary**: Push back against criticism by dismissing your critics. This might be arguing that the critics use a different standard for you than with other actors or themselves; or arguing that their criticism is biassed. 
+ +**Tactic**: TA02 Plan Objectives + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0075: Dismiss + +**Summary**: Push back against criticism by dismissing your critics. This might be arguing that the critics use a different standard for you than with other actors or themselves; or arguing that their criticism is biassed. + **Tactic**: TA02 Plan Objectives @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0076.md b/generated_pages/techniques/T0076.md index 2ef15ba..18a3445 100644 --- a/generated_pages/techniques/T0076.md +++ b/generated_pages/techniques/T0076.md @@ -2,6 +2,48 @@ **Summary**: Twist the narrative. Take information, or artefacts like images, and change the framing around them. +**Tactic**: TA02 Plan Objectives + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0076: Distort + +**Summary**: Twist the narrative. Take information, or artefacts like images, and change the framing around them. + +**Tactic**: TA02 Plan Objectives + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0076: Distort + +**Summary**: Twist the narrative. Take information, or artefacts like images, and change the framing around them. + **Tactic**: TA02 Plan Objectives @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0077.md b/generated_pages/techniques/T0077.md index d22059e..9d4e7f1 100644 --- a/generated_pages/techniques/T0077.md +++ b/generated_pages/techniques/T0077.md @@ -2,6 +2,48 @@ **Summary**: Shift attention to a different narrative or actor, for instance by accusing critics of the same activity that they’ve accused you of (e.g. police brutality). +**Tactic**: TA02 Plan Objectives + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0077: Distract + +**Summary**: Shift attention to a different narrative or actor, for instance by accusing critics of the same activity that they’ve accused you of (e.g. police brutality). + +**Tactic**: TA02 Plan Objectives + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0077: Distract + +**Summary**: Shift attention to a different narrative or actor, for instance by accusing critics of the same activity that they’ve accused you of (e.g. police brutality). 
+ **Tactic**: TA02 Plan Objectives @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0078.md b/generated_pages/techniques/T0078.md index 024c26b..5dc04aa 100644 --- a/generated_pages/techniques/T0078.md +++ b/generated_pages/techniques/T0078.md @@ -2,6 +2,48 @@ **Summary**: Threaten the critic or narrator of events. For instance, threaten journalists or news outlets reporting on a story. +**Tactic**: TA02 Plan Objectives + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0078: Dismay + +**Summary**: Threaten the critic or narrator of events. For instance, threaten journalists or news outlets reporting on a story. + +**Tactic**: TA02 Plan Objectives + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0078: Dismay + +**Summary**: Threaten the critic or narrator of events. For instance, threaten journalists or news outlets reporting on a story. + **Tactic**: TA02 Plan Objectives @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0079.md b/generated_pages/techniques/T0079.md index 171ae57..dda0119 100644 --- a/generated_pages/techniques/T0079.md +++ b/generated_pages/techniques/T0079.md @@ -2,6 +2,48 @@ **Summary**: Create conflict between subgroups, to widen divisions in a community +**Tactic**: TA02 Plan Objectives + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0079: Divide + +**Summary**: Create conflict between subgroups, to widen divisions in a community + +**Tactic**: TA02 Plan Objectives + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0079: Divide + +**Summary**: Create conflict between subgroups, to widen divisions in a community + **Tactic**: TA02 Plan Objectives @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0080.001.md b/generated_pages/techniques/T0080.001.md index 79227b7..d880646 100644 --- a/generated_pages/techniques/T0080.001.md +++ b/generated_pages/techniques/T0080.001.md @@ -2,6 +2,48 @@ **Summary**: An influence operation may use social media analytics to determine which factors will increase the operation content’s exposure to its target audience on social media platforms, including views, interactions, and sentiment relating to topics and content types. The social media platform itself or a third-party tool may collect the metrics. 
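As a concrete, if toy, illustration of the monitoring this summary describes, the sketch below computes per-topic engagement rates from a handful of invented post records. The data shape is an assumption, not any platform's API.

```python
# Toy illustration of monitoring engagement metrics per topic. The post
# records are invented; real metrics would come from a platform or
# third-party analytics tool.
from statistics import mean

posts = [
    {"topic": "beaches",   "views": 3000,  "likes": 120, "shares": 15},
    {"topic": "hurricane", "views": 12000, "likes": 900, "shares": 410},
]

def engagement_rate(post: dict) -> float:
    """Interactions per view, a crude proxy for what resonates."""
    return (post["likes"] + post["shares"]) / post["views"]

rates_by_topic: dict = {}
for post in posts:
    rates_by_topic.setdefault(post["topic"], []).append(engagement_rate(post))

for topic, rates in sorted(rates_by_topic.items()):
    print(topic, round(mean(rates), 4))
```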
+**Tactic**: TA13 Target Audience Analysis **Parent Technique:** T0080 Map Target Audience Information Environment + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0080.001: Monitor Social Media Analytics + +**Summary**: An influence operation may use social media analytics to determine which factors will increase the operation content’s exposure to its target audience on social media platforms, including views, interactions, and sentiment relating to topics and content types. The social media platform itself or a third-party tool may collect the metrics. + +**Tactic**: TA13 Target Audience Analysis **Parent Technique:** T0080 Map Target Audience Information Environment + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0080.001: Monitor Social Media Analytics + +**Summary**: An influence operation may use social media analytics to determine which factors will increase the operation content’s exposure to its target audience on social media platforms, including views, interactions, and sentiment relating to topics and content types. The social media platform itself or a third-party tool may collect the metrics. + **Tactic**: TA13 Target Audience Analysis @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0080.002.md b/generated_pages/techniques/T0080.002.md index af36364..2578f90 100644 --- a/generated_pages/techniques/T0080.002.md +++ b/generated_pages/techniques/T0080.002.md @@ -2,6 +2,48 @@ **Summary**: An influence operation may evaluate its own or third-party media surveys to determine what type of content appeals to its target audience. Media surveys may provide insight into an audience’s political views, social class, general interests, or other indicators used to tailor operation messaging to its target audience. +**Tactic**: TA13 Target Audience Analysis **Parent Technique:** T0080 Map Target Audience Information Environment + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0080.002: Evaluate Media Surveys + +**Summary**: An influence operation may evaluate its own or third-party media surveys to determine what type of content appeals to its target audience. Media surveys may provide insight into an audience’s political views, social class, general interests, or other indicators used to tailor operation messaging to its target audience. 
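The survey evaluation described above amounts to cross-tabulating audience attributes against content preferences. A minimal sketch with invented survey rows, assuming pandas is available; the column names are made up for the example.

```python
# Illustrative cross-tabulation of invented survey rows: which content
# formats appeal to which age brackets. Column names are assumptions.
import pandas as pd

survey = pd.DataFrame({
    "age_bracket": ["18-29", "18-29", "30-49", "50+", "50+"],
    "preferred_format": ["video", "meme", "article", "article", "radio"],
})

# Count respondents per (age bracket, format) pair.
print(survey.groupby(["age_bracket", "preferred_format"]).size())
```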
+ +**Tactic**: TA13 Target Audience Analysis **Parent Technique:** T0080 Map Target Audience Information Environment + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0080.002: Evaluate Media Surveys + +**Summary**: An influence operation may evaluate its own or third-party media surveys to determine what type of content appeals to its target audience. Media surveys may provide insight into an audience’s political views, social class, general interests, or other indicators used to tailor operation messaging to its target audience. + **Tactic**: TA13 Target Audience Analysis @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0080.003.md b/generated_pages/techniques/T0080.003.md index 517b824..d2deed2 100644 --- a/generated_pages/techniques/T0080.003.md +++ b/generated_pages/techniques/T0080.003.md @@ -2,6 +2,48 @@ **Summary**: An influence operation may identify trending hashtags on social media platforms for later use in boosting operation content. A hashtag40 refers to a word or phrase preceded by the hash symbol (#) on social media used to identify messages and posts relating to a specific topic. All public posts that use the same hashtag are aggregated onto a centralised page dedicated to the word or phrase and sorted either chronologically or by popularity. +**Tactic**: TA13 Target Audience Analysis **Parent Technique:** T0080 Map Target Audience Information Environment + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0080.003: Identify Trending Topics/Hashtags + +**Summary**: An influence operation may identify trending hashtags on social media platforms for later use in boosting operation content. A hashtag40 refers to a word or phrase preceded by the hash symbol (#) on social media used to identify messages and posts relating to a specific topic. All public posts that use the same hashtag are aggregated onto a centralised page dedicated to the word or phrase and sorted either chronologically or by popularity. + +**Tactic**: TA13 Target Audience Analysis **Parent Technique:** T0080 Map Target Audience Information Environment + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0080.003: Identify Trending Topics/Hashtags + +**Summary**: An influence operation may identify trending hashtags on social media platforms for later use in boosting operation content. A hashtag40 refers to a word or phrase preceded by the hash symbol (#) on social media used to identify messages and posts relating to a specific topic. All public posts that use the same hashtag are aggregated onto a centralised page dedicated to the word or phrase and sorted either chronologically or by popularity. 
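The aggregation this summary describes can be approximated with a simple frequency count over post text. A minimal sketch, using invented sample posts:

```python
# Frequency count over a sample of post texts to surface candidate
# trending hashtags. Sample posts are invented.
import re
from collections import Counter

posts = [
    "Road closures downtown this weekend #CityFest #traffic",
    "Great turnout today #CityFest",
    "Morning run by the river #running",
]

tag_counts = Counter(tag.lower() for text in posts
                     for tag in re.findall(r"#\w+", text))
print(tag_counts.most_common(3))
```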
+ **Tactic**: TA13 Target Audience Analysis @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0080.004.md b/generated_pages/techniques/T0080.004.md index 70c2ea9..2b091ee 100644 --- a/generated_pages/techniques/T0080.004.md +++ b/generated_pages/techniques/T0080.004.md @@ -2,6 +2,48 @@ **Summary**: An influence operation may conduct web traffic analysis to determine which search engines, keywords, websites, and advertisements gain the most traction with its target audience. +**Tactic**: TA13 Target Audience Analysis **Parent Technique:** T0080 Map Target Audience Information Environment + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0080.004: Conduct Web Traffic Analysis + +**Summary**: An influence operation may conduct web traffic analysis to determine which search engines, keywords, websites, and advertisements gain the most traction with its target audience. + +**Tactic**: TA13 Target Audience Analysis **Parent Technique:** T0080 Map Target Audience Information Environment + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0080.004: Conduct Web Traffic Analysis + +**Summary**: An influence operation may conduct web traffic analysis to determine which search engines, keywords, websites, and advertisements gain the most traction with its target audience. + **Tactic**: TA13 Target Audience Analysis @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0080.005.md b/generated_pages/techniques/T0080.005.md index b5643bf..165dcef 100644 --- a/generated_pages/techniques/T0080.005.md +++ b/generated_pages/techniques/T0080.005.md @@ -2,6 +2,48 @@ **Summary**: An influence operation may survey a target audience’s Internet availability and degree of media freedom to determine which target audience members will have access to operation content and on which platforms. An operation may face more difficulty targeting an information environment with heavy restrictions and media control than an environment with independent media, freedom of speech and of the press, and individual liberties. +**Tactic**: TA13 Target Audience Analysis **Parent Technique:** T0080 Map Target Audience Information Environment + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0080.005: Assess Degree/Type of Media Access + +**Summary**: An influence operation may survey a target audience’s Internet availability and degree of media freedom to determine which target audience members will have access to operation content and on which platforms. 
An operation may face more difficulty targeting an information environment with heavy restrictions and media control than an environment with independent media, freedom of speech and of the press, and individual liberties. + +**Tactic**: TA13 Target Audience Analysis **Parent Technique:** T0080 Map Target Audience Information Environment + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0080.005: Assess Degree/Type of Media Access + +**Summary**: An influence operation may survey a target audience’s Internet availability and degree of media freedom to determine which target audience members will have access to operation content and on which platforms. An operation may face more difficulty targeting an information environment with heavy restrictions and media control than an environment with independent media, freedom of speech and of the press, and individual liberties. + **Tactic**: TA13 Target Audience Analysis @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0080.md b/generated_pages/techniques/T0080.md index b6b8e49..82fb142 100644 --- a/generated_pages/techniques/T0080.md +++ b/generated_pages/techniques/T0080.md @@ -2,6 +2,48 @@ **Summary**: Mapping the target audience information environment analyses the information space itself, including social media analytics, web traffic, and media surveys. Mapping the information environment may help the influence operation determine the most realistic and popular information channels to reach its target audience. Mapping the target audience information environment aids influence operations in determining the most vulnerable areas of the information space to target with messaging. +**Tactic**: TA13 Target Audience Analysis + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0080: Map Target Audience Information Environment + +**Summary**: Mapping the target audience information environment analyses the information space itself, including social media analytics, web traffic, and media surveys. Mapping the information environment may help the influence operation determine the most realistic and popular information channels to reach its target audience. Mapping the target audience information environment aids influence operations in determining the most vulnerable areas of the information space to target with messaging. + +**Tactic**: TA13 Target Audience Analysis + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0080: Map Target Audience Information Environment + +**Summary**: Mapping the target audience information environment analyses the information space itself, including social media analytics, web traffic, and media surveys. 
Mapping the information environment may help the influence operation determine the most realistic and popular information channels to reach its target audience. Mapping the target audience information environment aids influence operations in determining the most vulnerable areas of the information space to target with messaging. + **Tactic**: TA13 Target Audience Analysis @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0081.001.md b/generated_pages/techniques/T0081.001.md index 3d692a0..389beee 100644 --- a/generated_pages/techniques/T0081.001.md +++ b/generated_pages/techniques/T0081.001.md @@ -2,6 +2,48 @@ **Summary**: Find or plan to create areas (social media groups, search term groups, hashtag groups etc) where individuals only engage with people they agree with. +**Tactic**: TA13 Target Audience Analysis **Parent Technique:** T0081 Identify Social and Technical Vulnerabilities + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0081.001: Find Echo Chambers + +**Summary**: Find or plan to create areas (social media groups, search term groups, hashtag groups etc) where individuals only engage with people they agree with. + +**Tactic**: TA13 Target Audience Analysis **Parent Technique:** T0081 Identify Social and Technical Vulnerabilities + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0081.001: Find Echo Chambers + +**Summary**: Find or plan to create areas (social media groups, search term groups, hashtag groups etc) where individuals only engage with people they agree with. + **Tactic**: TA13 Target Audience Analysis @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0081.002.md b/generated_pages/techniques/T0081.002.md index 1e29acb..b1124c7 100644 --- a/generated_pages/techniques/T0081.002.md +++ b/generated_pages/techniques/T0081.002.md @@ -2,6 +2,48 @@ **Summary**: A data void refers to a word or phrase that results in little, manipulative, or low-quality search engine data. Data voids are hard to detect and relatively harmless until exploited by an entity aiming to quickly proliferate false or misleading information during a phenomenon that causes a high number of individuals to query the term or phrase. In the Plan phase, an influence operation may identify data voids for later exploitation in the operation. A 2019 report by Michael Golebiewski identifies five types of data voids. (1) “Breaking news” data voids occur when a keyword gains popularity during a short period of time, allowing an influence operation to publish false content before legitimate news outlets have an opportunity to publish relevant information. (2) An influence operation may create a “strategic new terms” data void by creating their own terms and publishing information online before promoting their keyword to the target audience. 
(3) An influence operation may publish content on “outdated terms” that have decreased in popularity, capitalising on most search engines’ preferences for recency. (4) “Fragmented concepts” data voids separate connections between similar ideas, isolating segment queries to distinct search engine results. (5) An influence operation may use “problematic queries” that previously resulted in disturbing or inappropriate content to promote messaging until mainstream media recontextualizes the term. +**Tactic**: TA13 Target Audience Analysis **Parent Technique:** T0081 Identify Social and Technical Vulnerabilities + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0081.002: Identify Data Voids + +**Summary**: A data void refers to a word or phrase that results in little, manipulative, or low-quality search engine data. Data voids are hard to detect and relatively harmless until exploited by an entity aiming to quickly proliferate false or misleading information during a phenomenon that causes a high number of individuals to query the term or phrase. In the Plan phase, an influence operation may identify data voids for later exploitation in the operation. A 2019 report by Michael Golebiewski identifies five types of data voids. (1) “Breaking news” data voids occur when a keyword gains popularity during a short period of time, allowing an influence operation to publish false content before legitimate news outlets have an opportunity to publish relevant information. (2) An influence operation may create a “strategic new terms” data void by creating their own terms and publishing information online before promoting their keyword to the target audience. (3) An influence operation may publish content on “outdated terms” that have decreased in popularity, capitalising on most search engines’ preferences for recency. (4) “Fragmented concepts” data voids separate connections between similar ideas, isolating segment queries to distinct search engine results. (5) An influence operation may use “problematic queries” that previously resulted in disturbing or inappropriate content to promote messaging until mainstream media recontextualizes the term. + +**Tactic**: TA13 Target Audience Analysis **Parent Technique:** T0081 Identify Social and Technical Vulnerabilities + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0081.002: Identify Data Voids + +**Summary**: A data void refers to a word or phrase that results in little, manipulative, or low-quality search engine data. Data voids are hard to detect and relatively harmless until exploited by an entity aiming to quickly proliferate false or misleading information during a phenomenon that causes a high number of individuals to query the term or phrase. In the Plan phase, an influence operation may identify data voids for later exploitation in the operation. A 2019 report by Michael Golebiewski identifies five types of data voids. 
(1) “Breaking news” data voids occur when a keyword gains popularity during a short period of time, allowing an influence operation to publish false content before legitimate news outlets have an opportunity to publish relevant information. (2) An influence operation may create a “strategic new terms” data void by creating their own terms and publishing information online before promoting their keyword to the target audience. (3) An influence operation may publish content on “outdated terms” that have decreased in popularity, capitalising on most search engines’ preferences for recency. (4) “Fragmented concepts” data voids separate connections between similar ideas, isolating segment queries to distinct search engine results. (5) An influence operation may use “problematic queries” that previously resulted in disturbing or inappropriate content to promote messaging until mainstream media recontextualizes the term. + **Tactic**: TA13 Target Audience Analysis @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0081.003.md b/generated_pages/techniques/T0081.003.md index 811769c..a229ee3 100644 --- a/generated_pages/techniques/T0081.003.md +++ b/generated_pages/techniques/T0081.003.md @@ -2,6 +2,48 @@ **Summary**: An influence operation may exploit existing racial, religious, demographic, or social prejudices to further polarise its target audience from the rest of the public. +**Tactic**: TA13 Target Audience Analysis **Parent Technique:** T0081 Identify Social and Technical Vulnerabilities + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0081.003: Identify Existing Prejudices + +**Summary**: An influence operation may exploit existing racial, religious, demographic, or social prejudices to further polarise its target audience from the rest of the public. + +**Tactic**: TA13 Target Audience Analysis **Parent Technique:** T0081 Identify Social and Technical Vulnerabilities + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0081.003: Identify Existing Prejudices + +**Summary**: An influence operation may exploit existing racial, religious, demographic, or social prejudices to further polarise its target audience from the rest of the public. + **Tactic**: TA13 Target Audience Analysis @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0081.004.md b/generated_pages/techniques/T0081.004.md index 429f82c..dac74fc 100644 --- a/generated_pages/techniques/T0081.004.md +++ b/generated_pages/techniques/T0081.004.md @@ -2,6 +2,48 @@ **Summary**: An influence operation may identify existing fissures to pit target populations against one another or facilitate a “divide-and-conquer” approach to tailor operation narratives along the divides. &#13;
+**Tactic**: TA13 Target Audience Analysis **Parent Technique:** T0081 Identify Social and Technical Vulnerabilities + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0081.004: Identify Existing Fissures + +**Summary**: An influence operation may identify existing fissures to pit target populations against one another or facilitate a “divide-and-conquer” approach to tailor operation narratives along the divides. + +**Tactic**: TA13 Target Audience Analysis **Parent Technique:** T0081 Identify Social and Technical Vulnerabilities + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0081.004: Identify Existing Fissures + +**Summary**: An influence operation may identify existing fissures to pit target populations against one another or facilitate a “divide-and-conquer” approach to tailor operation narratives along the divides. + **Tactic**: TA13 Target Audience Analysis @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0081.005.md b/generated_pages/techniques/T0081.005.md index 90e0486..53e5223 100644 --- a/generated_pages/techniques/T0081.005.md +++ b/generated_pages/techniques/T0081.005.md @@ -2,6 +2,48 @@ **Summary**: An influence operation may assess preexisting conspiracy theories or suspicions in a population to identify existing narratives that support operational objectives. +**Tactic**: TA13 Target Audience Analysis **Parent Technique:** T0081 Identify Social and Technical Vulnerabilities + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0081.005: Identify Existing Conspiracy Narratives/Suspicions + +**Summary**: An influence operation may assess preexisting conspiracy theories or suspicions in a population to identify existing narratives that support operational objectives. + +**Tactic**: TA13 Target Audience Analysis **Parent Technique:** T0081 Identify Social and Technical Vulnerabilities + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0081.005: Identify Existing Conspiracy Narratives/Suspicions + +**Summary**: An influence operation may assess preexisting conspiracy theories or suspicions in a population to identify existing narratives that support operational objectives. &#13;
+ **Tactic**: TA13 Target Audience Analysis @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0081.006.md b/generated_pages/techniques/T0081.006.md index 20d20a2..1f691d8 100644 --- a/generated_pages/techniques/T0081.006.md +++ b/generated_pages/techniques/T0081.006.md @@ -2,6 +2,48 @@ **Summary**: A wedge issue is a divisive political issue, usually concerning a social phenomenon, that divides individuals along a defined line. An influence operation may exploit wedge issues by intentionally polarising the public along the wedge issue line and encouraging opposition between factions. +**Tactic**: TA13 Target Audience Analysis **Parent Technique:** T0081 Identify Social and Technical Vulnerabilities + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0081.006: Identify Wedge Issues + +**Summary**: A wedge issue is a divisive political issue, usually concerning a social phenomenon, that divides individuals along a defined line. An influence operation may exploit wedge issues by intentionally polarising the public along the wedge issue line and encouraging opposition between factions. + +**Tactic**: TA13 Target Audience Analysis **Parent Technique:** T0081 Identify Social and Technical Vulnerabilities + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0081.006: Identify Wedge Issues + +**Summary**: A wedge issue is a divisive political issue, usually concerning a social phenomenon, that divides individuals along a defined line. An influence operation may exploit wedge issues by intentionally polarising the public along the wedge issue line and encouraging opposition between factions. + **Tactic**: TA13 Target Audience Analysis @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0081.007.md b/generated_pages/techniques/T0081.007.md index b61c722..98ff433 100644 --- a/generated_pages/techniques/T0081.007.md +++ b/generated_pages/techniques/T0081.007.md @@ -2,6 +2,48 @@ **Summary**: An influence operation may identify or create a real or imaginary adversary to centre operation narratives against. A real adversary may include certain politicians or political parties while imaginary adversaries may include falsified “deep state” actors that, according to conspiracies, run the state behind public view. +**Tactic**: TA13 Target Audience Analysis **Parent Technique:** T0081 Identify Social and Technical Vulnerabilities + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0081.007: Identify Target Audience Adversaries + +**Summary**: An influence operation may identify or create a real or imaginary adversary to centre operation narratives against. &#13;
A real adversary may include certain politicians or political parties while imaginary adversaries may include falsified “deep state” actors that, according to conspiracies, run the state behind public view. + +**Tactic**: TA13 Target Audience Analysis **Parent Technique:** T0081 Identify Social and Technical Vulnerabilities + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0081.007: Identify Target Audience Adversaries + +**Summary**: An influence operation may identify or create a real or imaginary adversary to centre operation narratives against. A real adversary may include certain politicians or political parties while imaginary adversaries may include falsified “deep state” actors that, according to conspiracies, run the state behind public view. + **Tactic**: TA13 Target Audience Analysis @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0081.008.md b/generated_pages/techniques/T0081.008.md index 259b1e1..44152fd 100644 --- a/generated_pages/techniques/T0081.008.md +++ b/generated_pages/techniques/T0081.008.md @@ -2,6 +2,48 @@ **Summary**: An influence operation may exploit existing weaknesses in a target’s media system. These weaknesses may include existing biases among media agencies, vulnerability to false news agencies on social media, or existing distrust of traditional media sources. An existing distrust among the public in the media system’s credibility holds high potential for exploitation by an influence operation when establishing alternative news agencies to spread operation content. +**Tactic**: TA13 Target Audience Analysis **Parent Technique:** T0081 Identify Social and Technical Vulnerabilities + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0081.008: Identify Media System Vulnerabilities + +**Summary**: An influence operation may exploit existing weaknesses in a target’s media system. These weaknesses may include existing biases among media agencies, vulnerability to false news agencies on social media, or existing distrust of traditional media sources. An existing distrust among the public in the media system’s credibility holds high potential for exploitation by an influence operation when establishing alternative news agencies to spread operation content. + +**Tactic**: TA13 Target Audience Analysis **Parent Technique:** T0081 Identify Social and Technical Vulnerabilities + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0081.008: Identify Media System Vulnerabilities + +**Summary**: An influence operation may exploit existing weaknesses in a target’s media system. These weaknesses may include existing biases among media agencies, vulnerability to false news agencies on social media, or existing distrust of traditional media sources. &#13;
An existing distrust among the public in the media system’s credibility holds high potential for exploitation by an influence operation when establishing alternative news agencies to spread operation content. + **Tactic**: TA13 Target Audience Analysis @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0081.md b/generated_pages/techniques/T0081.md index 1bc5f97..86f6b47 100644 --- a/generated_pages/techniques/T0081.md +++ b/generated_pages/techniques/T0081.md @@ -2,6 +2,48 @@ **Summary**: Identifying social and technical vulnerabilities determines weaknesses within the target audience information environment for later exploitation. Vulnerabilities include decisive political issues, weak cybersecurity infrastructure, search engine data voids, and other technical and non-technical weaknesses in the target information environment. Identifying social and technical vulnerabilities facilitates the later exploitation of the identified weaknesses to advance operation objectives. +**Tactic**: TA13 Target Audience Analysis + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0081: Identify Social and Technical Vulnerabilities + +**Summary**: Identifying social and technical vulnerabilities determines weaknesses within the target audience information environment for later exploitation. Vulnerabilities include decisive political issues, weak cybersecurity infrastructure, search engine data voids, and other technical and non-technical weaknesses in the target information environment. Identifying social and technical vulnerabilities facilitates the later exploitation of the identified weaknesses to advance operation objectives. + +**Tactic**: TA13 Target Audience Analysis + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0081: Identify Social and Technical Vulnerabilities + +**Summary**: Identifying social and technical vulnerabilities determines weaknesses within the target audience information environment for later exploitation. Vulnerabilities include decisive political issues, weak cybersecurity infrastructure, search engine data voids, and other technical and non-technical weaknesses in the target information environment. Identifying social and technical vulnerabilities facilitates the later exploitation of the identified weaknesses to advance operation objectives. + **Tactic**: TA13 Target Audience Analysis @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0082.md b/generated_pages/techniques/T0082.md index 2d64fb4..2641574 100644 --- a/generated_pages/techniques/T0082.md +++ b/generated_pages/techniques/T0082.md @@ -2,6 +2,48 @@ **Summary**: Actors may develop new narratives to further strategic or tactical goals, especially when existing narratives do not adequately align with the campaign goals. New narratives provide more control in terms of crafting the message to achieve specific goals. &#13;
However, new narratives may require more effort to disseminate than adapting or adopting existing narratives. +**Tactic**: TA14 Develop Narratives + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0082: Develop New Narratives + +**Summary**: Actors may develop new narratives to further strategic or tactical goals, especially when existing narratives do not adequately align with the campaign goals. New narratives provide more control in terms of crafting the message to achieve specific goals. However, new narratives may require more effort to disseminate than adapting or adopting existing narratives. + +**Tactic**: TA14 Develop Narratives + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0082: Develop New Narratives + +**Summary**: Actors may develop new narratives to further strategic or tactical goals, especially when existing narratives do not adequately align with the campaign goals. New narratives provide more control in terms of crafting the message to achieve specific goals. However, new narratives may require more effort to disseminate than adapting or adopting existing narratives. + **Tactic**: TA14 Develop Narratives @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0083.md b/generated_pages/techniques/T0083.md index 3adb187..a8b3972 100644 --- a/generated_pages/techniques/T0083.md +++ b/generated_pages/techniques/T0083.md @@ -2,6 +2,48 @@ **Summary**: An influence operation may seek to exploit the preexisting weaknesses, fears, and enemies of the target audience for integration into the operation’s narratives and overall strategy. Integrating existing vulnerabilities into the operational approach conserves resources by exploiting already weak areas of the target information environment instead of forcing the operation to create new vulnerabilities in the environment. &#13;
+ +**Tactic**: TA14 Develop Narratives + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0083: Integrate Target Audience Vulnerabilities into Narrative + +**Summary**: An influence operation may seek to exploit the preexisting weaknesses, fears, and enemies of the target audience for integration into the operation’s narratives and overall strategy. Integrating existing vulnerabilities into the operational approach conserves resources by exploiting already weak areas of the target information environment instead of forcing the operation to create new vulnerabilities in the environment. + **Tactic**: TA14 Develop Narratives @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0084.001.md b/generated_pages/techniques/T0084.001.md index 5582be2..65711ee 100644 --- a/generated_pages/techniques/T0084.001.md +++ b/generated_pages/techniques/T0084.001.md @@ -2,6 +2,48 @@ **Summary**: Copypasta refers to a piece of text that has been copied and pasted multiple times across various online platforms. A copypasta’s final form may differ from its original source text as users add, delete, or otherwise edit the content as they repost the text. +**Tactic**: TA06 Develop Content **Parent Technique:** T0084 Reuse Existing Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0084.001: Use Copypasta + +**Summary**: Copypasta refers to a piece of text that has been copied and pasted multiple times across various online platforms. A copypasta’s final form may differ from its original source text as users add, delete, or otherwise edit the content as they repost the text. + +**Tactic**: TA06 Develop Content **Parent Technique:** T0084 Reuse Existing Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0084.001: Use Copypasta + +**Summary**: Copypasta refers to a piece of text that has been copied and pasted multiple times across various online platforms. A copypasta’s final form may differ from its original source text as users add, delete, or otherwise edit the content as they repost the text. + **Tactic**: TA06 Develop Content @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0084.002.md b/generated_pages/techniques/T0084.002.md index d481780..302d5e1 100644 --- a/generated_pages/techniques/T0084.002.md +++ b/generated_pages/techniques/T0084.002.md @@ -2,6 +2,51 @@ **Summary**: An influence operation may take content from other sources without proper attribution. This content may be either misinformation content shared by others without malicious intent but now leveraged by the campaign as disinformation or disinformation content from other sources. 
+**Tactic**: TA06 Develop Content **Parent Technique:** T0084 Reuse Existing Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00079 Three thousand fake tanks](../../generated_pages/incidents/I00079.md) | “The sixth [website to repost a confirmed false narrative investigated in this report] is an apparent think tank, the Center for Global Strategic Monitoring. This website describes itself, in English apparently written by a non-native speaker, as a “nonprofit and nonpartisan research and analysis institution dedicated to providing insights of the think tank community publications”. It does, indeed, publish think-tank reports on issues such as Turkey and US-China relations; however, the reports are the work of other think tanks, often unattributed (the two mentioned in this sentence were actually produced by the Brookings Institution, although the website makes no mention of the fact). It also fails to provide an address, or any other contact details other than an email, and its (long) list of experts includes entries apparently copied and pasted from other institutions. Thus, the “think tank” website which shared the fake story appears to be a fake itself.” In this example a website which amplified a false narrative presented itself as a think tank (T0097.204: Think Tank Persona).

This is an entirely fabricated persona (T0143.002: Fabricated Persona); it republished content from other think tanks without attribution (T0084.002: Plagiarise Content) and fabricated experts (T0097.108: Expert Persona, T0143.002: Fabricated Persona) to make it more believable that they were a real think tank. | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0084.002: Plagiarise Content + +**Summary**: An influence operation may take content from other sources without proper attribution. This content may be either misinformation content shared by others without malicious intent but now leveraged by the campaign as disinformation or disinformation content from other sources. + +**Tactic**: TA06 Develop Content **Parent Technique:** T0084 Reuse Existing Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00076 Network of Social Media Accounts Impersonates U.S. Political Candidates, Leverages U.S. and Israeli Media in Support of Iranian Interests](../../generated_pages/incidents/I00076.md) | “Some Twitter accounts in the network [of inauthentic accounts attributed to Iran] impersonated Republican political candidates that ran for House of Representatives seats in the 2018 U.S. congressional midterms. These accounts appropriated the candidates’ photographs and, in some cases, plagiarized tweets from the real individuals’ accounts. Aside from impersonating real U.S. political candidates, the behavior and activity of these accounts resembled that of the others in the network.

“For example, the account @livengood_marla impersonated Marla Livengood, a 2018 candidate for California’s 9th Congressional District, using a photograph of Livengood and a campaign banner for its profile and background pictures. The account began tweeting on Sept. 24, 2018, with its first tweet plagiarizing one from Livengood’s official account earlier that month”

[...]

“In another example, the account @ButlerJineea impersonated Jineea Butler, a 2018 candidate for New York’s 13th Congressional District, using a photograph of Butler for its profile picture and incorporating her campaign slogans into its background picture, as well as claiming in its Twitter bio to be a “US House candidate, NY-13” and linking to Butler’s website, jineeabutlerforcongress[.]com.”


In this example actors impersonated existing political candidates (T0097.110: Member of Political Party Persona, T0143.003: Impersonated Persona), strengthening the impersonation by copying legitimate accounts’ imagery (T0145.001: Copy Account Imagery), and copying its previous posts (T0084.002: Plagiarise Content). | +| [I00079 Three thousand fake tanks](../../generated_pages/incidents/I00079.md) | “The sixth [website to repost a confirmed false narrative investigated in this report] is an apparent think tank, the Center for Global Strategic Monitoring. This website describes itself, in English apparently written by a non-native speaker, as a “nonprofit and nonpartisan research and analysis institution dedicated to providing insights of the think tank community publications”. It does, indeed, publish think-tank reports on issues such as Turkey and US-China relations; however, the reports are the work of other think tanks, often unattributed (the two mentioned in this sentence were actually produced by the Brookings Institution, although the website makes no mention of the fact). It also fails to provide an address, or any other contact details other than an email, and its (long) list of experts includes entries apparently copied and pasted from other institutions. Thus, the “think tank” website which shared the fake story appears to be a fake itself.” In this example a website which amplified a false narrative presented itself as a think tank (T0097.204: Think Tank Persona).

This is an entirely fabricated persona (T0143.002: Fabricated Persona); it republished content from other think tanks without attribution (T0084.002: Plagiarise Content) and fabricated experts (T0097.108: Expert Persona, T0143.002: Fabricated Persona) to make it more believable that they were a real think tank. | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0084.002: Plagiarise Content + +**Summary**: An influence operation may take content from other sources without proper attribution. This content may be either misinformation content shared by others without malicious intent but now leveraged by the campaign as disinformation or disinformation content from other sources. + **Tactic**: TA06 Develop Content @@ -21,4 +66,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0084.003.md b/generated_pages/techniques/T0084.003.md index ff64fa5..c1af625 100644 --- a/generated_pages/techniques/T0084.003.md +++ b/generated_pages/techniques/T0084.003.md @@ -2,6 +2,48 @@ **Summary**: An influence operation may take authentic content from other sources and add deceptive labels or deceptively translate the content into other languages. +**Tactic**: TA06 Develop Content **Parent Technique:** T0084 Reuse Existing Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0084.003: Deceptively Labelled or Translated + +**Summary**: An influence operation may take authentic content from other sources and add deceptive labels or deceptively translate the content into other languages. + +**Tactic**: TA06 Develop Content **Parent Technique:** T0084 Reuse Existing Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0084.003: Deceptively Labelled or Translated + +**Summary**: An influence operation may take authentic content from other sources and add deceptive labels or deceptively translate the content into other languages. + **Tactic**: TA06 Develop Content @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0084.004.md b/generated_pages/techniques/T0084.004.md index 41585a3..3aaa56c 100644 --- a/generated_pages/techniques/T0084.004.md +++ b/generated_pages/techniques/T0084.004.md @@ -2,6 +2,49 @@ **Summary**: An influence operation may take content from other sources with proper attribution. This content may be either misinformation content shared by others without malicious intent but now leveraged by the campaign as disinformation or disinformation content from other sources. Examples include the appropriation of content from one inauthentic news site to another inauthentic news site or network in ways that align with the originator’s licencing or terms of service. &#13;
+**Tactic**: TA06 Develop Content **Parent Technique:** T0084 Reuse Existing Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0084.004: Appropriate Content + +**Summary**: An influence operation may take content from other sources with proper attribution. This content may be either misinformation content shared by others without malicious intent but now leveraged by the campaign as disinformation or disinformation content from other sources. Examples include the appropriation of content from one inauthentic news site to another inauthentic news site or network in ways that align with the originator’s licencing or terms of service. + +**Tactic**: TA06 Develop Content **Parent Technique:** T0084 Reuse Existing Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00102 Ignore The Poway Synagogue Shooter’s Manifesto: Pay Attention To 8chan’s /pol/ Board](../../generated_pages/incidents/I00102.md) | On April 27, 2019, at around 11:30 a.m. local time, a young man with a semi-automatic rifle walked into the Chabad of Poway Synagogue in Poway, California. He opened fire, killing one worshipper and wounding three others. In the hours since the shooting, a manifesto, believed to be written by the shooter, began circulating online. Evidence has also surfaced that, like the Christchurch Mosque shooter, this killer began his rampage with a post on 8chan’s /pol/ board.&#13;

Although both of these attacks may seem different, since they targeted worshippers of different faiths, both shooters were united by the same fascist ideology. They were also both radicalized in the same place: 8chan’s /pol/ board.

This has been corroborated by posts on the board itself, where “anons,” as the posters call themselves, recirculated the shooter’s since-deleted post. In it, the alleged shooter claims to have been “lurking” on the site for a year and a half. He includes a link to a livestream of his rampage — which thankfully does not appear to have worked — and he also includes a pastebin link to his manifesto.

The very first response to his announcement was another anon cheering him on and telling him to “get the high score,” AKA, kill a huge number of people.


Before carrying out a mass shooting, the shooter posted a thread to 8chan’s /pol/ board. The post directed users to a variety of different platforms (T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0122: Direct Users to Alternative Platforms); a Facebook account on which the shooter attempted to livestream the shooting (T0146: Account Asset, T0151.001: Social Media Platform); and a manifesto they had written hosted on pastebin (T0146.006: Open Access Platform, T0152.005: Paste Platform, T0115: Post Content) and uploaded to the file sharing platform Mediafire (T0152.010: File Hosting Platform, T0085.004: Develop Document).

The report looks deeper into 8chan’s /pol/ board:

8chan is a large website, which includes a number of different discussion groups about everything from anime to left-wing politics. /pol/ is one particularly active board on the website, and it is best described as a gathering place for extremely online neo-Nazis.

[...]

I’ve browsed /pol/ on an almost daily basis since the Christchurch shooting. It has not been difficult to find calls for violence. On Monday, March 25 of this year, I ran across evidence of anons translating the Christchurch shooter’s manifesto into other languages in an attempt to inspire more shootings across the globe.

This tactic can work, and today’s shooting is proof. The Poway Synagogue shooter directly cited the Christchurch shooter as his inspiration, saying he decided to carry out his attack roughly two weeks after that shooting. On /pol/, many anons refer to the Christchurch shooter, Brenton Tarrant, as “Saint Tarrant,” complete with medieval-inspired iconography.


Manifestos posted to 8chan are translated and reshared by other platform users (T0101: Create Localised Content, T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0084.004: Appropriate Content).

When I began looking through /pol/ right after the Poway Synagogue shooting, I came across several claims that the shootings had been a “false flag” aimed at making the message board look bad.

When Bellingcat tweeted out a warning about shitposting and the shooter’s manifesto, in the immediate wake of the attack, probable anons even commented on the Tweet in an attempt to deny that a channer had been behind the attack.

This is a recognizable pattern that occurs in the wake of any crimes committed by members of the board. While the initial response to the Christchurch shooter’s massacre thread was riotous glee, in the days after the shooting many anons began to claim the attack had been a false flag. This actually sparked significant division and debate between the members of /pol/. In the below image, a user mocks other anons for being unable to “believe something in your favor is real.” Another anon responds, “As the evidence comes out, its [sic] quite clear that this was a false flag.”

In his manifesto, the Poway Synagogue shooter even weighed in on this debate, accusing other anons who called the Christchurch and Tree of Life Synagogue shootings “false flags” to merely have been scared: “They can’t fathom that there are brave White men alive who have the willpower and courage it takes to say, ‘Fuck my life—I’m willing to sacrifice everything for the benefit of my race.’”


Platform users deny that their platform has been used by mass shooters to publish their manifestos (T0129.006: Deny Involvement). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0084.004: Appropriate Content + +**Summary**: An influence operation may take content from other sources with proper attribution. This content may be either misinformation content shared by others without malicious intent but now leveraged by the campaign as disinformation or disinformation content from other sources. Examples include the appropriation of content from one inauthentic news site to another inauthentic news site or network in ways that align with the originator’s licencing or terms of service. + **Tactic**: TA06 Develop Content @@ -20,4 +63,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0084.md b/generated_pages/techniques/T0084.md index 36f949b..caddd33 100644 --- a/generated_pages/techniques/T0084.md +++ b/generated_pages/techniques/T0084.md @@ -2,6 +2,48 @@ **Summary**: When an operation recycles content from its own previous operations or plagiarises from external operations. An operation may launder information to conserve resources that would have otherwise been utilised to develop new content. +**Tactic**: TA06 Develop Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0084: Reuse Existing Content + +**Summary**: When an operation recycles content from its own previous operations or plagiarises from external operations. An operation may launder information to conserve resources that would have otherwise been utilised to develop new content. + +**Tactic**: TA06 Develop Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0084: Reuse Existing Content + +**Summary**: When an operation recycles content from its own previous operations or plagiarises from external operations. An operation may launder information to conserve resources that would have otherwise been utilised to develop new content. + **Tactic**: TA06 Develop Content @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0085.001.md b/generated_pages/techniques/T0085.001.md index 2e8f47f..96e2eb3 100644 --- a/generated_pages/techniques/T0085.001.md +++ b/generated_pages/techniques/T0085.001.md @@ -2,6 +2,50 @@ **Summary**: AI-generated text refers to synthetic text composed by computers using text-generating AI technology. Autonomous generation refers to content created by a bot without human input, also known as bot-created content generation. Autonomous generation represents the next step in automation after language generation and may lead to automated journalism. An influence operation may use read fakes or autonomous generation to quickly develop and distribute content to the target audience. &#13;
+**Tactic**: TA06 Develop Content **Parent Technique:** T0085 Develop Text-Based Content + + +| Associated Technique | Description | +| --------- | ------------------------- | +| [T0085.008 Machine Translated Text](../../generated_pages/techniques/T0085.008.md) | Use this sub-technique when AI has been used to generate a translation of a piece of text. | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0085.001: Develop AI-Generated Text + +**Summary**: AI-generated text refers to synthetic text composed by computers using text-generating AI technology. Autonomous generation refers to content created by a bot without human input, also known as bot-created content generation. Autonomous generation represents the next step in automation after language generation and may lead to automated journalism. An influence operation may use read fakes or autonomous generation to quickly develop and distribute content to the target audience. + +**Tactic**: TA06 Develop Content **Parent Technique:** T0085 Develop Text-Based Content + + +| Associated Technique | Description | +| --------- | ------------------------- | +| [T0085.008 Machine Translated Text](../../generated_pages/techniques/T0085.008.md) | Use this sub-technique when AI has been used to generate a translation of a piece of text. | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0085.001: Develop AI-Generated Text + +**Summary**: AI-generated text refers to synthetic text composed by computers using text-generating AI technology. Autonomous generation refers to content created by a bot without human input, also known as bot-created content generation. Autonomous generation represents the next step in automation after language generation and may lead to automated journalism. An influence operation may use read fakes or autonomous generation to quickly develop and distribute content to the target audience. + **Tactic**: TA06 Develop Content @@ -20,4 +64,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0085.003.md b/generated_pages/techniques/T0085.003.md index 1a436ab..c31f80e 100644 --- a/generated_pages/techniques/T0085.003.md +++ b/generated_pages/techniques/T0085.003.md @@ -2,6 +2,48 @@ **Summary**: An influence operation may develop false or misleading news articles aligned to their campaign goals or narratives. +**Tactic**: TA06 Develop Content **Parent Technique:** T0085 Develop Text-Based Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0085.003: Develop Inauthentic News Articles + +**Summary**: An influence operation may develop false or misleading news articles aligned to their campaign goals or narratives. &#13;
+ +**Tactic**: TA06 Develop Content **Parent Technique:** T0085 Develop Text-Based Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0085.003: Develop Inauthentic News Articles + +**Summary**: An influence operation may develop false or misleading news articles aligned to their campaign goals or narratives. + **Tactic**: TA06 Develop Content @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0085.004.md b/generated_pages/techniques/T0085.004.md index afb1f23..f0c0fb0 100644 --- a/generated_pages/techniques/T0085.004.md +++ b/generated_pages/techniques/T0085.004.md @@ -2,6 +2,53 @@ **Summary**: Produce text in the form of a document. +**Tactic**: TA06 Develop Content **Parent Technique:** T0085 Develop Text-Based Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00071 Russia-aligned hacktivists stir up anti-Ukrainian sentiments in Poland](../../generated_pages/incidents/I00071.md) | “On August 16, 2022, pro-Kremlin Telegram channel Joker DPR (Джокер ДНР) published a forged letter allegedly written by Ukrainian Foreign Minister Dmytro Kuleba. In the letter, Kuleba supposedly asked relevant Polish authorities to rename Belwederska Street in Warsaw — the location of the Russian embassy building — as Stepan Bandera Street, in honor of the far-right nationalist who led the Ukrainian Insurgent Army during WWII.

[...]

The letter is not dated, and Dmytro Kuleba’s signature seems to be copied from a publicly available letter signed by him in 2021.”


In this example the Telegram channel Joker DPR published a forged letter (T0085.004: Develop Document) in which they impersonated the Ukrainian Minister of Foreign Affairs (T0097.111: Government Official Persona, T0143.003: Impersonated Persona), using Ministry letterhead (T0097.206: Government Institution Persona, T0143.003: Impersonated Persona). | +| [I00126 Charming Kitten Updates POWERSTAR with an InterPlanetary Twist](../../generated_pages/incidents/I00126.md) | The target of the recently observed [highly targeted spearphishing attack by “Charming Kitten”, a hacker group attributed to Iran] had published an article related to Iran. The publicity appears to have garnered the attention of Charming Kitten, who subsequently created an email address to impersonate a reporter of an Israeli media organization in order to send the target an email. Prior to sending malware to the target, the attacker simply asked if the target would be open to reviewing a document they had written related to US foreign policy. The target agreed to do so, since this was not an unusual request; they are frequently asked by journalists to review opinion pieces relating to their field of work.

In an effort to further gain the target’s confidence, Charming Kitten continued the interaction with another benign email containing a list of questions, to which the target then responded with answers. After multiple days of benign and seemingly legitimate interaction, Charming Kitten finally sent a “draft report”; this was the first time anything opaquely malicious occurred. The “draft report” was, in fact, a password-protected RAR file containing a malicious LNK file. The password for the RAR file was provided in a subsequent email.


In this example, threat actors created an email address on a domain which impersonated an existing Israeli news organisation, posing as a reporter who worked there (T0097.102: Journalist Persona, T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0149.003: Lookalike Domain, T0149.002: Email Domain Asset) in order to convince the target to download a document containing malware (T0085.004: Develop Document, T0147.003: Malware Asset). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0085.004: Develop Document + +**Summary**: Produce text in the form of a document. + +**Tactic**: TA06 Develop Content **Parent Technique:** T0085 Develop Text-Based Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00071 Russia-aligned hacktivists stir up anti-Ukrainian sentiments in Poland](../../generated_pages/incidents/I00071.md) | “On August 16, 2022, pro-Kremlin Telegram channel Joker DPR (Джокер ДНР) published a forged letter allegedly written by Ukrainian Foreign Minister Dmytro Kuleba. In the letter, Kuleba supposedly asked relevant Polish authorities to rename Belwederska Street in Warsaw — the location of the Russian embassy building — as Stepan Bandera Street, in honor of the far-right nationalist who led the Ukrainian Insurgent Army during WWII.&#13;

[...]

The letter is not dated, and Dmytro Kuleba’s signature seems to be copied from a publicly available letter signed by him in 2021.”


In this example the Telegram channel Joker DPR published a forged letter (T0085.004: Develop Document) in which they impersonated the Ukrainian Minister of Foreign Affairs (T0097.111: Government Official Persona, T0143.003: Impersonated Persona), using Ministry letterhead (T0097.206: Government Institution Persona, T0143.003: Impersonated Persona). | +| [I00102 Ignore The Poway Synagogue Shooter’s Manifesto: Pay Attention To 8chan’s /pol/ Board](../../generated_pages/incidents/I00102.md) | On April 27, 2019, at around 11:30 a.m. local time, a young man with a semi-automatic rifle walked into the Chabad of Poway Synagogue in Poway, California. He opened fire, killing one worshipper and wounding three others. In the hours since the shooting, a manifesto, believed to be written by the shooter, began circulating online. Evidence has also surfaced that, like the Christchurch Mosque shooter, this killer began his rampage with a post on 8chan’s /pol/ board.

Although both of these attacks may seem different, since they targeted worshippers of different faiths, both shooters were united by the same fascist ideology. They were also both radicalized in the same place: 8chan’s /pol/ board.

This has been corroborated by posts on the board itself, where “anons,” as the posters call themselves, recirculated the shooter’s since-deleted post. In it, the alleged shooter claims to have been “lurking” on the site for a year and a half. He includes a link to a livestream of his rampage — which thankfully does not appear to have worked — and he also includes a pastebin link to his manifesto.

The very first response to his announcement was another anon cheering him on and telling him to “get the high score,” AKA, kill a huge number of people.


Before carrying out a mass shooting, the shooter posted a thread to 8chan’s /pol/ board. The post directed users to a variety of platforms (T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0122: Direct Users to Alternative Platforms): a Facebook account on which the shooter attempted to livestream the shooting (T0146: Account Asset, T0151.001: Social Media Platform); and a manifesto they had written, hosted on pastebin (T0146.006: Open Access Platform, T0152.005: Paste Platform, T0115: Post Content) and uploaded to the file-sharing platform Mediafire (T0152.010: File Hosting Platform, T0085.004: Develop Document).

The report looks deeper into 8chan’s /pol/ board:

8chan is a large website, which includes a number of different discussion groups about everything from anime to left-wing politics. /pol/ is one particularly active board on the website, and it is best described as a gathering place for extremely online neo-Nazis.

[...]

I’ve browsed /pol/ on an almost daily basis since the Christchurch shooting. It has not been difficult to find calls for violence. On Monday, March 25 of this year, I ran across evidence of anons translating the Christchurch shooter’s manifesto into other languages in an attempt to inspire more shootings across the globe.

This tactic can work, and today’s shooting is proof. The Poway Synagogue shooter directly cited the Christchurch shooter as his inspiration, saying he decided to carry out his attack roughly two weeks after that shooting. On /pol/, many anons refer to the Christchurch shooter, Brenton Tarrant, as “Saint Tarrant,” complete with medieval-inspired iconography.


Manifestos posted to 8chan are translated and reshared by other platform users (T0101: Create Localised Content, T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0084.004: Appropriate Content).

When I began looking through /pol/ right after the Poway Synagogue shooting, I came across several claims that the shootings had been a “false flag” aimed at making the message board look bad.

When Bellingcat tweeted out a warning about shitposting and the shooter’s manifesto, in the immediate wake of the attack, probable anons even commented on the Tweet in an attempt to deny that a channer had been behind the attack.

This is a recognizable pattern that occurs in the wake of any crimes committed by members of the board. While the initial response to the Christchurch shooter’s massacre thread was riotous glee, in the days after the shooting many anons began to claim the attack had been a false flag. This actually sparked significant division and debate between the members of /pol/. In the below image, a user mocks other anons for being unable to “believe something in your favor is real.” Another anon responds, “As the evidence comes out, its [sic] quite clear that this was a false flag.”

In his manifesto, the Poway Synagogue shooter even weighed in on this debate, accusing other anons who called the Christchurch and Tree of Life Synagogue shootings “false flags” to merely have been scared: “They can’t fathom that there are brave White men alive who have the willpower and courage it takes to say, ‘Fuck my life—I’m willing to sacrifice everything for the benefit of my race.’”


Platform users deny that their platform has been used by mass shooters to publish their manifestos (T0129.006: Deny Involvement). | +| [I00126 Charming Kitten Updates POWERSTAR with an InterPlanetary Twist](../../generated_pages/incidents/I00126.md) | The target of the recently observed [highly targeted spearphishing attack by “Charming Kitten”, a hacker group attributed to Iran] had published an article related to Iran. The publicity appears to have garnered the attention of Charming Kitten, who subsequently created an email address to impersonate a reporter of an Israeli media organization in order to send the target an email. Prior to sending malware to the target, the attacker simply asked if the target would be open to reviewing a document they had written related to US foreign policy. The target agreed to do so, since this was not an unusual request; they are frequently asked by journalists to review opinion pieces relating to their field of work.

In an effort to further gain the target’s confidence, Charming Kitten continued the interaction with another benign email containing a list of questions, to which the target then responded with answers. After multiple days of benign and seemingly legitimate interaction, Charming Kitten finally sent a “draft report”; this was the first time anything opaquely malicious occurred. The “draft report” was, in fact, a password-protected RAR file containing a malicious LNK file. The password for the RAR file was provided in a subsequent email.


In this example, threat actors created an email address on a lookalike domain impersonating an existing Israeli news organisation, and used it to pose as a reporter who worked there (T0097.102: Journalist Persona, T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0149.003: Lookalike Domain, T0149.002: Email Domain Asset) in order to convince the target to download a document containing malware (T0085.004: Develop Document, T0147.003: Malware Asset). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0085.004: Develop Document + +**Summary**: Produce text in the form of a document. + **Tactic**: TA06 Develop Content @@ -22,4 +69,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0085.005.md index 1b1a13b..907b7f5 100644 --- a/generated_pages/techniques/T0085.005.md +++ b/generated_pages/techniques/T0085.005.md @@ -2,6 +2,49 @@ **Summary**: Produce text content in the form of a book.

This technique covers both e-books and physical books; however, the former are more easily deployed by threat actors given the lower cost to develop. +**Tactic**: TA06 Develop Content **Parent Technique:** T0085 Develop Text-Based Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0085.005: Develop Book + +**Summary**: Produce text content in the form of a book.

This technique covers both e-books and physical books; however, the former are more easily deployed by threat actors given the lower cost to develop. + +**Tactic**: TA06 Develop Content **Parent Technique:** T0085 Develop Text-Based Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00110 How COVID-19 conspiracists and extremists use crowdfunding platforms to fund their activities](../../generated_pages/incidents/I00110.md) | The EU Disinfo Lab produced a report into disinformation published on crowdfunding platforms:

More worrisome is the direct monetisation of disinformation happening on crowdfunding platforms: on Kickstarter, we found a user openly raising money for a documentary project suggesting that COVID-19 is a conspiracy.

A Kickstarter user attempted to use the platform to fund production of a documentary (T0017: Conduct Fundraising, T0087: Develop Video-Based Content, T0146: Account Asset, T0148.006: Crowdfunding Platform).

On Patreon, we found several instances of direct monetisation of COVID-19 disinformation, including posts promoting a device allegedly protecting against COVID-19 and 5G, as well as posts related to the “Plandemic” conspiracy video, which gained attention on YouTube before being removed by the platform.

We also found an account called “Stranger than fiction” entirely dedicated to disinformation, which openly states that their content was “Banned by screwtube and fakebook, our videos have been viewed over a billion times.”


The “Stranger than fiction” user presented itself as an alternative news source which had been banned from other platforms (T0146: Account Asset, T0097.202: News Outlet Persona, T0121.001: Bypass Content Blocking, T0152.012: Subscription Service Platform).

On the US-based crowdfunding platform IndieGogo, EU DisinfoLab found a successful crowdfunding campaign of €133.903 for a book called Revolution Q. This book, now also available on Amazon, claims to be “Written for both newcomers and long-time QAnon fans alike, this book is a treasure-trove of information designed to help everyone weather The Storm.”

An IndieGogo account was used to gather funds to produce a book on QAnon (T0017: Conduct Fundraising, T0085.005: Develop Book, T0146: Account Asset, T0148.006: Crowdfunding Platform), with the book later sold on Amazon marketplace (T0148.007: eCommerce Platform). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0085.005: Develop Book + +**Summary**: Produce text content in the form of a book. 

This technique covers both e-books and physical books; however, the former are more easily deployed by threat actors given the lower cost to develop. + **Tactic**: TA06 Develop Content @@ -20,4 +63,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0085.006.md index e2a86a0..9d61921 100644 --- a/generated_pages/techniques/T0085.006.md +++ b/generated_pages/techniques/T0085.006.md @@ -2,6 +2,48 @@ **Summary**: Opinion articles (aka “Op-Eds” or “Editorials”) are articles or regular columns flagged as “opinion” posted to news sources, and can be contributed by people outside the organisation.

Flagging articles as opinions allows news organisations to distinguish them from the typical expectations of objective news reporting while distancing the presented opinion from the organisation or its employees.

The use of this technique is not by itself an indication of malicious or inauthentic content; op-eds are a common format in media. However, threat actors can exploit op-eds by, for example, submitting opinion articles to local media to promote their narratives.

Examples from the perspective of a news site involve publishing op-eds from perceived prestigious voices to give legitimacy to an inauthentic publication, or supporting causes by hosting op-eds from actors aligned with the organisation’s goals. +**Tactic**: TA06 Develop Content **Parent Technique:** T0085 Develop Text-Based Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0085.006: Develop Opinion Article + +**Summary**: Opinion articles (aka “Op-Eds” or “Editorials”) are articles or regular columns flagged as “opinion” posted to news sources, and can be contributed by people outside the organisation. 

Flagging articles as opinions allows news organisations to distinguish them from the typical expectations of objective news reporting while distancing the presented opinion from the organisation or its employees.

The use of this technique is not by itself an indication of malicious or inauthentic content; op-eds are a common format in media. However, threat actors can exploit op-eds by, for example, submitting opinion articles to local media to promote their narratives.

Examples from the perspective of a news site involve publishing op-eds from perceived prestigious voices to give legitimacy to an inauthentic publication, or supporting causes by hosting op-eds from actors aligned with the organisation’s goals. + +**Tactic**: TA06 Develop Content **Parent Technique:** T0085 Develop Text-Based Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0085.006: Develop Opinion Article + +**Summary**: Opinion articles (aka “Op-Eds” or “Editorials”) are articles or regular columns flagged as “opinion” posted to news sources, and can be contributed by people outside the organisation. 

Flagging articles as opinions allows news organisations to distinguish them from the typical expectations of objective news reporting while distancing the presented opinion from the organisation or its employees.

The use of this technique is not by itself an indication of malicious or inauthentic content; op-eds are a common format in media. However, threat actors can exploit op-eds by, for example, submitting opinion articles to local media to promote their narratives.

Examples from the perspective of a news site involve publishing op-eds from perceived prestigious voices to give legitimacy to an inauthentic publication, or supporting causes by hosting op-eds from actors aligned with the organisation’s goals. + **Tactic**: TA06 Develop Content @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0085.007.md b/generated_pages/techniques/T0085.007.md index 26c554d..adf8299 100644 --- a/generated_pages/techniques/T0085.007.md +++ b/generated_pages/techniques/T0085.007.md @@ -2,6 +2,48 @@ **Summary**: Create fake academic research. Example: fake social science research is often aimed at hot-button social issues such as gender, race and sexuality. Fake science research can target Climate Science debate or pseudoscience like anti-vaxx.

This Technique previously used the ID T0019.001. +**Tactic**: TA06 Develop Content **Parent Technique:** T0085 Develop Text-Based Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0085.007: Create Fake Research + +**Summary**: Create fake academic research. Example: fake social science research is often aimed at hot-button social issues such as gender, race and sexuality. Fake science research can target Climate Science debate or pseudoscience like anti-vaxx.

This Technique previously used the ID T0019.001. + +**Tactic**: TA06 Develop Content **Parent Technique:** T0085 Develop Text-Based Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0085.007: Create Fake Research + +**Summary**: Create fake academic research. Example: fake social science research is often aimed at hot-button social issues such as gender, race and sexuality. Fake science research can target Climate Science debate or pseudoscience like anti-vaxx.

This Technique previously used the ID T0019.001. + **Tactic**: TA06 Develop Content @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0085.008.md b/generated_pages/techniques/T0085.008.md index c96a267..26f593b 100644 --- a/generated_pages/techniques/T0085.008.md +++ b/generated_pages/techniques/T0085.008.md @@ -2,6 +2,52 @@ **Summary**: Text which has been translated into another language using machine translation tools, such as AI. +**Tactic**: TA06 Develop Content **Parent Technique:** T0085 Develop Text-Based Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00086 #WeAreNotSafe – Exposing How a Post-October 7th Disinformation Network Operates on Israeli Social Media](../../generated_pages/incidents/I00086.md) | In this report accounts were identified as part of “a sophisticated and extensive coordinated network orchestrating a disinformation campaign targeting Israeli digital spaces since October 7th, 2023”.

“A conspicuous aspect of these accounts is the likely usage of machine-translated Hebrew. The disjointed and linguistically strange comments imply that the CIB’s architects are not Hebrew-speaking and likely translate to Hebrew using online tools. There’s no official way to confirm that a text is translated, but it is evident when the gender for nouns is incorrect, very unusual words or illogical grammar being used usually lead to the conclusion that the comment was not written by a native speaker that is aware of the nuances of the language.”

In this example, analysts asserted that accounts were posting machine-translated content (T0085.008: Machine Translated Text), based on indicators such as issues with grammar and gender. | +| [I00088 Much Ado About ‘Somethings’ - China-Linked Influence Operation Endures Despite Takedown](../../generated_pages/incidents/I00088.md) | “The broader War of Somethings (WoS) network, so dubbed because all the Facebook pages and user accounts in the network are connected to “The War of Somethings” page,  behaves very similarly to previous Spamouflage campaigns. [Spamouflage is a coordinated inauthentic behaviour network attributed to the Chinese state.]

“Like other components of Spamouflage, the WoS network sometimes intersperses apolitical content with its more agenda-driven material. Many members post nearly identical comments at almost the same time. The text includes markers of automatic translation while error messages included as profile photos indicate the automated pulling of stock images.”
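
The coordination signal quoted above — many accounts posting nearly identical comments at almost the same time — lends itself to simple pairwise screening. A minimal sketch using only Python's standard library (not part of the DISARM tooling; the Comment structure is assumed for illustration):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from difflib import SequenceMatcher
from itertools import combinations

@dataclass
class Comment:
    account: str
    text: str
    posted_at: datetime

def coordinated_pairs(comments, min_similarity=0.9, window=timedelta(minutes=5)):
    """Yield pairs of comments from different accounts whose text is
    near-identical and which were posted within a short time window."""
    for a, b in combinations(comments, 2):
        if a.account == b.account:
            continue
        if abs(a.posted_at - b.posted_at) > window:
            continue
        if SequenceMatcher(None, a.text, b.text).ratio() >= min_similarity:
            yield a, b
```

Flagged pairs are a starting point for manual review rather than proof of coordination; ordinary users also copy and repost text.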


In this example analysts found an indicator of automated use of stock images in Facebook accounts; some of the accounts in the network appeared to have mistakenly uploaded error messages as profile pictures (T0145.007: Stock Image Account Imagery). The text posted by the accounts also appeared to have been translated using automation (T0085.008: Machine Translated Text). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0085.008: Machine Translated Text + +**Summary**: Text which has been translated into another language using machine translation tools, such as AI. + +**Tactic**: TA06 Develop Content **Parent Technique:** T0085 Develop Text-Based Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00086 #WeAreNotSafe – Exposing How a Post-October 7th Disinformation Network Operates on Israeli Social Media](../../generated_pages/incidents/I00086.md) | In this report accounts were identified as part of “a sophisticated and extensive coordinated network orchestrating a disinformation campaign targeting Israeli digital spaces since October 7th, 2023”.

“A conspicuous aspect of these accounts is the likely usage of machine-translated Hebrew. The disjointed and linguistically strange comments imply that the CIB’s architects are not Hebrew-speaking and likely translate to Hebrew using online tools. There’s no official way to confirm that a text is translated, but it is evident when the gender for nouns is incorrect, very unusual words or illogical grammar being used usually lead to the conclusion that the comment was not written by a native speaker that is aware of the nuances of the language.”

In this example, analysts asserted that accounts were posting machine-translated content (T0085.008: Machine Translated Text), based on indicators such as issues with grammar and gender. | +| [I00088 Much Ado About ‘Somethings’ - China-Linked Influence Operation Endures Despite Takedown](../../generated_pages/incidents/I00088.md) | “The broader War of Somethings (WoS) network, so dubbed because all the Facebook pages and user accounts in the network are connected to “The War of Somethings” page,  behaves very similarly to previous Spamouflage campaigns. [Spamouflage is a coordinated inauthentic behaviour network attributed to the Chinese state.]

“Like other components of Spamouflage, the WoS network sometimes intersperses apolitical content with its more agenda-driven material. Many members post nearly identical comments at almost the same time. The text includes markers of automatic translation while error messages included as profile photos indicate the automated pulling of stock images.”


In this example analysts found an indicator of automated use of stock images in Facebook accounts; some of the accounts in the network appeared to have mistakenly uploaded error messages as profile pictures (T0145.007: Stock Image Account Imagery). The text posted by the accounts also appeared to have been translated using automation (T0085.008: Machine Translated Text). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0085.008: Machine Translated Text + +**Summary**: Text which has been translated into another language using machine translation tools, such as AI. + **Tactic**: TA06 Develop Content @@ -21,4 +67,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0085.md b/generated_pages/techniques/T0085.md index 1167f1f..db667a9 100644 --- a/generated_pages/techniques/T0085.md +++ b/generated_pages/techniques/T0085.md @@ -2,6 +2,50 @@ **Summary**: Creating and editing false or misleading text-based artefacts, often aligned with one or more specific narratives, for use in a disinformation campaign. +**Tactic**: TA06 Develop Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00111 Patreon Is Bankrolling Climate Change Deniers While We All Burn](../../generated_pages/incidents/I00111.md) | In this article VICE News discusses a report produced by Advance Democracy on people who use Patreon to spread the false claim that an impending ice age will reverse the harms of the ongoing climate crisis:

“The spread of climate misinformation is prolific on social media, as well as on sites like Patreon, where users are actually financially compensated through the platform for spreading falsehoods,” Daniel Jones, president of Advance Democracy, told VICE News.

“Companies hosting and promoting climate misinformation have a responsibility to take action to reduce dangerous misinformation, as falsehoods about climate science are every bit as dangerous as lies about vaccinations and disinformation about our elections.”

Patreon did not respond to VICE News’ request for comment on the report’s findings.

One of the biggest accounts spreading climate conspiracies is ADAPT 2030, which is run by David DuByne, who has 1,100 followers on Patreon. He is currently making over $3,500 every month from his subscribers.

[The science DuByne relies on does not support his hypothesis. However,] this has not prevented DuByne and many others from preying on people’s fears about climate change to spread conspiracies about an impending ice age, which they say will miraculously fix all of earth’s climate problems.

DuByne offers seven different membership levels for supporters, beginning at just $1 per month.

The most expensive costs $100 a month, and gives patrons “a private 20-minute call with David DuByne once per month, to discuss your particular preparedness issues or concerns.” So far just two people are paying this amount.

The researchers also found at least eight other accounts on Patreon that have spread climate change conspiracy theories as part of wider conspiracy sharing, including baseless claims about COVID-19 and the legitimacy of Joe Biden’s presidency. Some of these accounts are earning over $600 per month.


David DuByne created an account on Patreon, which he uses to post text, videos, and podcasts for his subscribers to discuss (T0085: Develop Text-Based Content, T0087: Develop Video-Based Content, T0088: Develop Audio-Based Content, T0146: Account Asset, T0115: Post Content, T0152.012: Subscription Service Platform, T0151.014: Comments Section, T0155.006: Subscription Access Asset). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0085: Develop Text-Based Content + +**Summary**: Creating and editing false or misleading text-based artefacts, often aligned with one or more specific narratives, for use in a disinformation campaign. + +**Tactic**: TA06 Develop Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00111 Patreon Is Bankrolling Climate Change Deniers While We All Burn](../../generated_pages/incidents/I00111.md) | In this article VICE News discusses a report produced by Advance Democracy on people who use Patreon to spread the false claim that an impending ice age will reverse the harms of the ongoing climate crisis:

“The spread of climate misinformation is prolific on social media, as well as on sites like Patreon, where users are actually financially compensated through the platform for spreading falsehoods,” Daniel Jones, president of Advance Democracy, told VICE News.

“Companies hosting and promoting climate misinformation have a responsibility to take action to reduce dangerous misinformation, as falsehoods about climate science are every bit as dangerous as lies about vaccinations and disinformation about our elections.”

Patreon did not respond to VICE News’ request for comment on the report’s findings.

One of the biggest accounts spreading climate conspiracies is ADAPT 2030, which is run by David DuByne, who has 1,100 followers on Patreon. He is currently making over $3,500 every month from his subscribers.

[The science DuByne relies on does not support his hypothesis. However,] this has not prevented DuByne and many others from preying on people’s fears about climate change to spread conspiracies about an impending ice age, which they say will miraculously fix all of earth’s climate problems.

DuByne offers seven different membership levels for supporters, beginning at just $1 per month.

The most expensive costs $100 a month, and gives patrons “a private 20-minute call with David DuByne once per month, to discuss your particular preparedness issues or concerns.” So far just two people are paying this amount.

The researchers also found at least eight other accounts on Patreon that have spread climate change conspiracy theories as part of wider conspiracy sharing, including baseless claims about COVID-19 and the legitimacy of Joe Biden’s presidency. Some of these accounts are earning over $600 per month.


David DuByne created an account on Patreon, which he uses to post text, videos, and podcasts for his subscribers to discuss (T0085: Develop Text-Based Content, T0087: Develop Video-Based Content, T0088: Develop Audio-Based Content, T0146: Account Asset, T0115: Post Content, T0152.012: Subscription Service Platform, T0151.014: Comments Section, T0155.006: Subscription Access Asset). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0085: Develop Text-Based Content + +**Summary**: Creating and editing false or misleading text-based artefacts, often aligned with one or more specific narratives, for use in a disinformation campaign. + **Tactic**: TA06 Develop Content @@ -20,4 +64,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0086.001.md b/generated_pages/techniques/T0086.001.md index 2276ea3..46b83c2 100644 --- a/generated_pages/techniques/T0086.001.md +++ b/generated_pages/techniques/T0086.001.md @@ -2,6 +2,48 @@ **Summary**: Memes are one of the most important single artefact types in all of computational propaganda. Memes in this framework denotes the narrow image-based definition. But that naming is no accident, as these items have most of the important properties of Dawkins' original conception as a self-replicating unit of culture. Memes pull together reference and commentary; image and narrative; emotion and message. Memes are a powerful tool and the heart of modern influence campaigns. +**Tactic**: TA06 Develop Content **Parent Technique:** T0086 Develop Image-Based Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0086.001: Develop Memes + +**Summary**: Memes are one of the most important single artefact types in all of computational propaganda. Memes in this framework denotes the narrow image-based definition. But that naming is no accident, as these items have most of the important properties of Dawkins' original conception as a self-replicating unit of culture. Memes pull together reference and commentary; image and narrative; emotion and message. Memes are a powerful tool and the heart of modern influence campaigns. + +**Tactic**: TA06 Develop Content **Parent Technique:** T0086 Develop Image-Based Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0086.001: Develop Memes + +**Summary**: Memes are one of the most important single artefact types in all of computational propaganda. Memes in this framework denotes the narrow image-based definition. But that naming is no accident, as these items have most of the important properties of Dawkins' original conception as a self-replicating unit of culture. Memes pull together reference and commentary; image and narrative; emotion and message. Memes are a powerful tool and the heart of modern influence campaigns. 
+ **Tactic**: TA06 Develop Content @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0086.002.md b/generated_pages/techniques/T0086.002.md index fb1b0c0..10071c8 100644 --- a/generated_pages/techniques/T0086.002.md +++ b/generated_pages/techniques/T0086.002.md @@ -2,6 +2,56 @@ **Summary**: Deepfakes refer to AI-generated falsified photos, videos, or soundbites. An influence operation may use deepfakes to depict an inauthentic situation by synthetically recreating an individual’s face, body, voice, and physical gestures. +**Tactic**: TA06 Develop Content **Parent Technique:** T0086 Develop Image-Based Content + + +| Associated Technique | Description | +| --------- | ------------------------- | +| [T0145.002 AI-Generated Account Imagery](../../generated_pages/techniques/T0145.002.md) | Analysts should use this sub-technique to document use of AI generated imagery in accounts’ profile pictures or other account imagery. | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00099 More Women Are Facing The Reality Of Deepfakes, And They’re Ruining Lives](../../generated_pages/incidents/I00099.md) | Last October, British writer Helen was alerted to a series of deepfakes on a porn site that appear to show her engaging in extreme acts of sexual violence. That night, the images replayed themselves over and over in horrific nightmares and she was gripped by an all-consuming feeling of dread. “It’s like you’re in a tunnel, going further and further into this enclosed space, where there’s no light,” she tells Vogue. This feeling pervaded Helen’s life. Whenever she left the house, she felt exposed. On runs, she experienced panic attacks. Helen still has no idea who did this to her.

[...]

Amnesty International has been investigating the effects of abuse against women on Twitter, specifically in relation to how they act online thereafter. According to the charity, abuse creates what they’ve called “the silencing effect” whereby women feel discouraged from participating online. The same can be said for victims of deepfakes.

Helen has never been afraid to use her voice, writing deeply personal accounts of postnatal depression. But the deepfakes created a feeling of shame so strong she thought she’d be carrying this “dirty secret” forever, and so she stopped writing.

[...]

Meanwhile, deepfake ‘communities’ are thriving. There are now dedicated sites, user-friendly apps and organised ‘request’ procedures. Some sites allow you to commission custom deepfakes for £25, while on others you can upload a woman’s image and a bot will strip her naked.

“This violation is not something that should be normalised,” says Gibi, an ASMR artist with 3.13 million YouTube subscribers. Gibi has given up trying to keep tabs on the deepfakes of her. For Gibi, the most egregious part of all of this is the fact that people are “profiting off my face, doing something that I didn’t consent to, like my suffering is your livelihood.” She’s even been approached by a company offering to remove the deepfakes — for £500 a video. This has to end. But how?


A website hosting pornographic content provided users with the ability to create deepfake content (T0154.002: AI Media Platform, T0086.002: Develop AI-Generated Images (Deepfakes)).

Another website enabled users to commission custom deepfakes (T0152.004: Website Asset, T0148.004: Payment Processing Capability, T0086.002: Develop AI-Generated Images (Deepfakes), T0155.005: Paid Access Asset). | +| [I00100 Why ThisPersonDoesNotExist (and its copycats) need to be restricted](../../generated_pages/incidents/I00100.md) | You might have heard about the recent viral sensation, ThisPersonDoesNotExist.com, a website, launched two weeks ago, that uses Nvidia’s publicly available artificial intelligence technology to draw an invented, photo-realistic human being with each refresh. The tech is impressive and artistically evocative. It’s also irresponsible and needs to be restricted immediately.

[...]

Prior to this technology, scammers faced three major risks when using fake photos. Each of these risks had the potential to put them out of business, or in jail.

Risk #1: Someone recognizes the photo. While the odds of this are long-shot, it does happen.

Risk #2: Someone reverse image searches the photo with a service like TinEye or Google Image Search and finds that it’s been posted elsewhere. Reverse image search is one of the top anti-fraud measures recommended by consumer protection advocates.

Risk #3: If the crime is successful, law enforcement uses the fake photo to figure out the scammer’s identity after the fact. Perhaps the scammer used an old classmate’s photo. Perhaps their personal account follows the Instagram member they pilfered. And so on: people make mistakes.

The problem with AI-generated photos is that they carry none of these risks. No one will recognize a human who’s never existed before. Google Image Search will return 0 results, possibly instilling a false sense of security in the searcher. And AI-generated photos don’t give law enforcement much to work with.
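
The reverse-image-search check described in Risk #2 can be approximated locally with perceptual hashing. A minimal sketch using the third-party Pillow and ImageHash packages; the corpus of previously seen images is assumed for illustration:

```python
from PIL import Image  # pip install Pillow
import imagehash       # pip install ImageHash

def find_reused_images(profile_pic_path, known_image_paths, max_distance=5):
    """Return paths of known images perceptually close to a profile picture:
    a rough local stand-in for a reverse image search."""
    target = imagehash.phash(Image.open(profile_pic_path))
    return [path for path in known_image_paths
            if imagehash.phash(Image.open(path)) - target <= max_distance]
```

As the article notes, this is exactly the check AI-generated faces defeat: a wholly novel face matches nothing, whatever the threshold.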


ThisPersonDoesNotExist is an online platform which, when visited, produces AI-generated images of people’s faces (T0146.006: Open Access Platform, T0154.002: AI Media Platform, T0086.002: Develop AI-Generated Images (Deepfakes)). | +| [I00106 Facebook Is Being Flooded With Gross AI-Generated Images of Hurricane Helene Devastation](../../generated_pages/incidents/I00106.md) | As families desperately seek to find missing loved ones and communities grapple with immeasurable losses of both life and property in the wake of [2024’s] Hurricane Helene, AI slop scammers appear to be capitalizing on the moment for personal gain.

A Facebook account called "Coastal Views" usually shares calmer AI imagery of nature-filled beachside scenes. The account's banner image showcases a signpost reading "OBX Live," OBX being shorthand for North Carolina's Outer Banks islands.

But starting this weekend, the account shifted its approach dramatically, as first flagged by a social media user on X.

Instead of posting "photos" of leaping dolphins and sandy beaches, the account suddenly started publishing images of flooded mountain neighborhoods, submerged houses, and dogs sitting on top of roofs.

But instead of spreading vital information to those affected by the natural disaster, or at the very least sharing real photos of the destruction, the account is seemingly trying to use AI to cash in on all the attention the hurricane has been getting.

The account links to an Etsy page for a business called "OuterBanks2023," where somebody who goes by "Alexandr" sells AI-generated prints of horses touching snouts with sea turtles, Santa running down the shoreline with a reindeer, and sunsets over ocean waves.


A Facebook page which presented itself as being associated with North Carolina, and which had previously posted AI-generated images, changed to posting AI-generated images of hurricane damage after Hurricane Helene hit North Carolina (T0151.003: Online Community Page, T0151.001: Social Media Platform, T0115: Post Content, T0086.002: Develop AI-Generated Images (Deepfakes), T0068: Respond to Breaking News Event or Active Crisis).

The account included links (T0122: Direct Users to Alternative Platforms) to an account on Etsy, which sold prints of AI generated images (T0146: Account Asset, T0148.007: eCommerce Platform). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0086.002: Develop AI-Generated Images (Deepfakes) + +**Summary**: Deepfakes refer to AI-generated falsified photos, videos, or soundbites. An influence operation may use deepfakes to depict an inauthentic situation by synthetically recreating an individual’s face, body, voice, and physical gestures. + +**Tactic**: TA06 Develop Content **Parent Technique:** T0086 Develop Image-Based Content + + +| Associated Technique | Description | +| --------- | ------------------------- | +| [T0145.002 AI-Generated Account Imagery](../../generated_pages/techniques/T0145.002.md) | Analysts should use this sub-technique to document use of AI generated imagery in accounts’ profile pictures or other account imagery. | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00099 More Women Are Facing The Reality Of Deepfakes, And They’re Ruining Lives](../../generated_pages/incidents/I00099.md) | Last October, British writer Helen was alerted to a series of deepfakes on a porn site that appear to show her engaging in extreme acts of sexual violence. That night, the images replayed themselves over and over in horrific nightmares and she was gripped by an all-consuming feeling of dread. “It’s like you’re in a tunnel, going further and further into this enclosed space, where there’s no light,” she tells Vogue. This feeling pervaded Helen’s life. Whenever she left the house, she felt exposed. On runs, she experienced panic attacks. Helen still has no idea who did this to her.

[...]

Amnesty International has been investigating the effects of abuse against women on Twitter, specifically in relation to how they act online thereafter. According to the charity, abuse creates what they’ve called “the silencing effect” whereby women feel discouraged from participating online. The same can be said for victims of deepfakes.

Helen has never been afraid to use her voice, writing deeply personal accounts of postnatal depression. But the deepfakes created a feeling of shame so strong she thought she’d be carrying this “dirty secret” forever, and so she stopped writing.

[...]

Meanwhile, deepfake ‘communities’ are thriving. There are now dedicated sites, user-friendly apps and organised ‘request’ procedures. Some sites allow you to commission custom deepfakes for £25, while on others you can upload a woman’s image and a bot will strip her naked.

“This violation is not something that should be normalised,” says Gibi, an ASMR artist with 3.13 million YouTube subscribers. Gibi has given up trying to keep tabs on the deepfakes of her. For Gibi, the most egregious part of all of this is the fact that people are “profiting off my face, doing something that I didn’t consent to, like my suffering is your livelihood.” She’s even been approached by a company offering to remove the deepfakes — for £500 a video. This has to end. But how?


A website hosting pornographic content provided users with the ability to create deepfake content (T0154.002: AI Media Platform, T0086.002: Develop AI-Generated Images (Deepfakes)).

Another website enabled users to commission custom deepfakes (T0152.004: Website Asset, T0148.004: Payment Processing Capability, T0086.002: Develop AI-Generated Images (Deepfakes), T0155.005: Paid Access Asset). | +| [I00100 Why ThisPersonDoesNotExist (and its copycats) need to be restricted](../../generated_pages/incidents/I00100.md) | You might have heard about the recent viral sensation, ThisPersonDoesNotExist.com, a website, launched two weeks ago, that uses Nvidia’s publicly available artificial intelligence technology to draw an invented, photo-realistic human being with each refresh. The tech is impressive and artistically evocative. It’s also irresponsible and needs to be restricted immediately.

[...]

Prior to this technology, scammers faced three major risks when using fake photos. Each of these risks had the potential to put them out of business, or in jail.

Risk #1: Someone recognizes the photo. While the odds of this are long-shot, it does happen.

Risk #2: Someone reverse image searches the photo with a service like TinEye or Google Image Search and finds that it’s been posted elsewhere. Reverse image search is one of the top anti-fraud measures recommended by consumer protection advocates.

Risk #3: If the crime is successful, law enforcement uses the fake photo to figure out the scammer’s identity after the fact. Perhaps the scammer used an old classmate’s photo. Perhaps their personal account follows the Instagram member they pilfered. And so on: people make mistakes.

The problem with AI-generated photos is that they carry none of these risks. No one will recognize a human who’s never existed before. Google Image Search will return 0 results, possibly instilling a false sense of security in the searcher. And AI-generated photos don’t give law enforcement much to work with.


ThisPersonDoesNotExist is an online platform which, when visited, produces AI-generated images of people’s faces (T0146.006: Open Access Platform, T0154.002: AI Media Platform, T0086.002: Develop AI-Generated Images (Deepfakes)). | +| [I00106 Facebook Is Being Flooded With Gross AI-Generated Images of Hurricane Helene Devastation](../../generated_pages/incidents/I00106.md) | As families desperately seek to find missing loved ones and communities grapple with immeasurable losses of both life and property in the wake of [2024’s] Hurricane Helene, AI slop scammers appear to be capitalizing on the moment for personal gain.

A Facebook account called "Coastal Views" usually shares calmer AI imagery of nature-filled beachside scenes. The account's banner image showcases a signpost reading "OBX Live," OBX being shorthand for North Carolina's Outer Banks islands.

But starting this weekend, the account shifted its approach dramatically, as first flagged by a social media user on X.

Instead of posting "photos" of leaping dolphins and sandy beaches, the account suddenly started publishing images of flooded mountain neighborhoods, submerged houses, and dogs sitting on top of roofs.

But instead of spreading vital information to those affected by the natural disaster, or at the very least sharing real photos of the destruction, the account is seemingly trying to use AI to cash in on all the attention the hurricane has been getting.

The account links to an Etsy page for a business called "OuterBanks2023," where somebody who goes by "Alexandr" sells AI-generated prints of horses touching snouts with sea turtles, Santa running down the shoreline with a reindeer, and sunsets over ocean waves.


A Facebook page which presented itself as being associated with North Carolina, and which had previously posted AI-generated images, changed to posting AI-generated images of hurricane damage after Hurricane Helene hit North Carolina (T0151.003: Online Community Page, T0151.001: Social Media Platform, T0115: Post Content, T0086.002: Develop AI-Generated Images (Deepfakes), T0068: Respond to Breaking News Event or Active Crisis).

The account included links (T0122: Direct Users to Alternative Platforms) to an account on Etsy, which sold prints of AI generated images (T0146: Account Asset, T0148.007: eCommerce Platform). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0086.002: Develop AI-Generated Images (Deepfakes) + +**Summary**: Deepfakes refer to AI-generated falsified photos, videos, or soundbites. An influence operation may use deepfakes to depict an inauthentic situation by synthetically recreating an individual’s face, body, voice, and physical gestures. + **Tactic**: TA06 Develop Content @@ -23,4 +73,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0086.003.md b/generated_pages/techniques/T0086.003.md index 6753b81..be825d6 100644 --- a/generated_pages/techniques/T0086.003.md +++ b/generated_pages/techniques/T0086.003.md @@ -2,6 +2,48 @@ **Summary**: Cheap fakes utilise less sophisticated measures of altering an image, video, or audio for example, slowing, speeding, or cutting footage to create a false context surrounding an image or event. +**Tactic**: TA06 Develop Content **Parent Technique:** T0086 Develop Image-Based Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0086.003: Deceptively Edit Images (Cheap Fakes) + +**Summary**: Cheap fakes utilise less sophisticated measures of altering an image, video, or audio for example, slowing, speeding, or cutting footage to create a false context surrounding an image or event. + +**Tactic**: TA06 Develop Content **Parent Technique:** T0086 Develop Image-Based Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0086.003: Deceptively Edit Images (Cheap Fakes) + +**Summary**: Cheap fakes utilise less sophisticated measures of altering an image, video, or audio for example, slowing, speeding, or cutting footage to create a false context surrounding an image or event. 
+ **Tactic**: TA06 Develop Content @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0086.004.md b/generated_pages/techniques/T0086.004.md index b1466fc..a7cfa6b 100644 --- a/generated_pages/techniques/T0086.004.md +++ b/generated_pages/techniques/T0086.004.md @@ -2,6 +2,48 @@ **Summary**: Image files that aggregate positive evidence (Joan Donovan) +**Tactic**: TA06 Develop Content **Parent Technique:** T0086 Develop Image-Based Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0086.004: Aggregate Information into Evidence Collages + +**Summary**: Image files that aggregate positive evidence (Joan Donovan) + +**Tactic**: TA06 Develop Content **Parent Technique:** T0086 Develop Image-Based Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0086.004: Aggregate Information into Evidence Collages + +**Summary**: Image files that aggregate positive evidence (Joan Donovan) + **Tactic**: TA06 Develop Content @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0086.md b/generated_pages/techniques/T0086.md index 3b1c1e2..4a02837 100644 --- a/generated_pages/techniques/T0086.md +++ b/generated_pages/techniques/T0086.md @@ -2,6 +2,48 @@ **Summary**: Creating and editing false or misleading visual artefacts, often aligned with one or more specific narratives, for use in a disinformation campaign. This may include photographing staged real-life situations, repurposing existing digital images, or using image creation and editing technologies. +**Tactic**: TA06 Develop Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0086: Develop Image-Based Content + +**Summary**: Creating and editing false or misleading visual artefacts, often aligned with one or more specific narratives, for use in a disinformation campaign. This may include photographing staged real-life situations, repurposing existing digital images, or using image creation and editing technologies. + +**Tactic**: TA06 Develop Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0086: Develop Image-Based Content + +**Summary**: Creating and editing false or misleading visual artefacts, often aligned with one or more specific narratives, for use in a disinformation campaign. This may include photographing staged real-life situations, repurposing existing digital images, or using image creation and editing technologies. 
+ **Tactic**: TA06 Develop Content @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0087.001.md b/generated_pages/techniques/T0087.001.md index 2f1e680..4294d2c 100644 --- a/generated_pages/techniques/T0087.001.md +++ b/generated_pages/techniques/T0087.001.md @@ -2,6 +2,49 @@ **Summary**: Deepfakes refer to AI-generated falsified photos, videos, or soundbites. An influence operation may use deepfakes to depict an inauthentic situation by synthetically recreating an individual’s face, body, voice, and physical gestures. +**Tactic**: TA06 Develop Content **Parent Technique:** T0087 Develop Video-Based Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0087.001: Develop AI-Generated Videos (Deepfakes) + +**Summary**: Deepfakes refer to AI-generated falsified photos, videos, or soundbites. An influence operation may use deepfakes to depict an inauthentic situation by synthetically recreating an individual’s face, body, voice, and physical gestures. + +**Tactic**: TA06 Develop Content **Parent Technique:** T0087 Develop Video-Based Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00096 China ramps up use of AI misinformation](../../generated_pages/incidents/I00096.md) | The Microsoft Threat Analysis Centre (MTAC) published a report documenting the use of AI by pro-Chinese threat actors:

On 13 January, Spamouflage [(a Pro-Chinese Communist Party actor)] posted audio clips to YouTube of independent candidate [for Taiwan’s Jan 2024 presidential election] Terry Gou – who also founded electronics giant Foxconn – in which Gou endorsed another candidate in the race. This clip was almost certainly AI-generated, and it was swiftly removed by YouTube. A fake letter purporting to be from Gou, endorsing the same candidate, had already circulated – Gou had of course made no such endorsement.

Here Spamouflage used an account on YouTube to post AI-generated audio impersonating an electoral candidate (T0146: Account Asset, T0152.006: Video Platform, T0115: Post Content, T0088.001: Develop AI-Generated Audio (Deepfakes), T0143.003: Impersonated Persona, T0097.110: Party Official Persona).

Spamouflage also exploited AI-powered video platform CapCut – which is owned by TikTok backers ByteDance – to generate fake news anchors which were used in a variety of campaigns targeting the various presidential candidates in Taiwan.

Spamouflage created accounts on CapCut, which it used to create AI-generated videos of fabricated news anchors (T0146: Account Asset, T0154.002: AI Media Platform, T0087.001: Develop AI-Generated Video (Deepfakes), T0143.002: Fabricated Persona, T0097.102: Journalist Persona). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0087.001: Develop AI-Generated Videos (Deepfakes) + +**Summary**: Deepfakes refer to AI-generated falsified photos, videos, or soundbites. An influence operation may use deepfakes to depict an inauthentic situation by synthetically recreating an individual’s face, body, voice, and physical gestures. + **Tactic**: TA06 Develop Content @@ -20,4 +63,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0087.002.md index 6e80e27..b4ac80d 100644 --- a/generated_pages/techniques/T0087.002.md +++ b/generated_pages/techniques/T0087.002.md @@ -2,6 +2,48 @@ **Summary**: Cheap fakes utilise less sophisticated measures of altering an image, video, or audio for example, slowing, speeding, or cutting footage to create a false context surrounding an image or event. +**Tactic**: TA06 Develop Content **Parent Technique:** T0087 Develop Video-Based Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0087.002: Deceptively Edit Video (Cheap Fakes) + +**Summary**: Cheap fakes utilise less sophisticated measures of altering an image, video, or audio for example, slowing, speeding, or cutting footage to create a false context surrounding an image or event. + +**Tactic**: TA06 Develop Content **Parent Technique:** T0087 Develop Video-Based Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0087.002: Deceptively Edit Video (Cheap Fakes) + +**Summary**: Cheap fakes utilise less sophisticated measures of altering an image, video, or audio for example, slowing, speeding, or cutting footage to create a false context surrounding an image or event. + **Tactic**: TA06 Develop Content @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0087.md index 23d71c0..b8ced2c 100644 --- a/generated_pages/techniques/T0087.md +++ b/generated_pages/techniques/T0087.md @@ -2,6 +2,54 @@ **Summary**: Creating and editing false or misleading video artefacts, often aligned with one or more specific narratives, for use in a disinformation campaign. This may include staging videos of purportedly real situations, repurposing existing video artefacts, or using AI-generated video creation and editing technologies (including deepfakes).
+**Tactic**: TA06 Develop Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00110 How COVID-19 conspiracists and extremists use crowdfunding platforms to fund their activities](../../generated_pages/incidents/I00110.md) | The EU DisinfoLab produced a report into disinformation published on crowdfunding platforms:<br>

More worrisome is the direct monetisation of disinformation happening on crowdfunding platforms: on Kickstarter, we found a user openly raising money for a documentary project suggesting that COVID-19 is a conspiracy.

A Kickstarter user attempted to use the platform to fund production of a documentary (T0017: Conduct Fundraising, T0087: Develop Video-Based Content, T0146: Account Asset, T0148.006: Crowdfunding Platform).

On Patreon, we found several instances of direct monetisation of COVID-19 disinformation, including posts promoting a device allegedly protecting against COVID-19 and 5G, as well as posts related to the “Plandemic” conspiracy video, which gained attention on YouTube before being removed by the platform.

We also found an account called “Stranger than fiction” entirely dedicated to disinformation, which openly states that their content was “Banned by screwtube and fakebook, our videos have been viewed over a billion times.”


The “Stranger than fiction” user presented itself as an alternative news source which had been banned from other platforms (T0146: Account Asset, T0097.202: News Outlet Persona, T0121.001: Bypass Content Blocking, T0152.012: Subscription Service Platform).<br>

On the US-based crowdfunding platform IndieGogo, EU DisinfoLab found a successful crowdfunding campaign of €133.903 for a book called Revolution Q. This book, now also available on Amazon, claims to be “Written for both newcomers and long-time QAnon fans alike, this book is a treasure-trove of information designed to help everyone weather The Storm.”

An IndieGogo account was used to gather funds to produce a book on QAnon (T0017: Conduct Fundraising, T0085.005: Develop Book, T0146: Account Asset, T0148.006: Crowdfunding Platform), with the book later sold on the Amazon marketplace (T0148.007: eCommerce Platform). | +| [I00111 Patreon Is Bankrolling Climate Change Deniers While We All Burn](../../generated_pages/incidents/I00111.md) | In this article VICE News discusses a report produced by Advance Democracy on people who use Patreon to spread the false claim that an impending ice age will reverse the harms of the ongoing climate crisis:<br>

“The spread of climate misinformation is prolific on social media, as well as on sites like Patreon, where users are actually financially compensated through the platform for spreading falsehoods,” Daniel Jones, president of Advance Democracy, told VICE News.

“Companies hosting and promoting climate misinformation have a responsibility to take action to reduce dangerous misinformation, as falsehoods about climate science are every bit as dangerous as lies about vaccinations and disinformation about our elections.”

Patreon did not respond to VICE News’ request for comment on the report’s findings.

One of the biggest accounts spreading climate conspiracies is ADAPT 2030, which is run by David DuByne, who has 1,100 followers on Patreon. He is currently making over $3,500 every month from his subscribers.

[The science DuByne relies on does not support his hypothesis. However,] this has not prevented DuByne and many others from preying on people’s fears about climate change to spread conspiracies about an impending ice age, which they say will miraculously fix all of earth’s climate problems.

DuByne offers seven different membership levels for supporters, beginning at just $1 per month.

The most expensive costs $100 a month, and gives patrons “a private 20-minute call with David DuByne once per month, to discuss your particular preparedness issues or concerns.” So far just two people are paying this amount.

The researchers also found at least eight other accounts on Patreon that have spread climate change conspiracy theories as part of wider conspiracy sharing, including baseless claims about COVID-19 and the legitimacy of Joe Biden’s presidency. Some of these accounts are earning over $600 per month.


David DuByne created an account on Patreon, which he uses to post text, videos, and podcasts for his subscribers to discuss (T0085: Develop Text-Based Content, T0087: Develop Video-Based Content, T0088: Develop Audio-Based Content, T0146: Account Asset, T0115: Post Content, T0152.012: Subscription Service Platform, T0151.014: Comments Section, T0155.006: Subscription Access Asset). | +| [I00125 The Agency](../../generated_pages/incidents/I00125.md) | In 2014 threat actors attributed to Russia spread the false narrative that a local chemical plant had leaked toxic fumes. This report discusses aspects of the operation:

[The chemical plant leak] hoax was just one in a wave of similar attacks during the second half of last year. On Dec. 13, two months after a handful of Ebola cases in the United States touched off a minor media panic, many of the same Twitter accounts used to spread the Columbian Chemicals hoax began to post about an outbreak of Ebola in Atlanta. [...] Again, the attention to detail was remarkable, suggesting a tremendous amount of effort. A YouTube video showed a team of hazmat-suited medical workers transporting a victim from the airport. Beyoncé’s recent single “7/11” played in the background, an apparent attempt to establish the video’s contemporaneity. A truck in the parking lot sported the logo of the Hartsfield-Jackson Atlanta International Airport.

Accounts which previously presented as Louisiana locals were repurposed for use in a different campaign, this time presenting as locals of Atlanta, a city over 500 miles away from Louisiana and in a different time zone (T0146: Account Asset, T0097.101: Local Persona, T0143.002: Fabricated Persona, T0151.008: Microblogging Platform, T0150.004: Repurposed Asset).<br>

A video was created which appeared to support the campaign’s narrative (T0087: Develop Video-Based Content), with great attention given to small details which made the video appear more legitimate. | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0087: Develop Video-Based Content + +**Summary**: Creating and editing false or misleading video artefacts, often aligned with one or more specific narratives, for use in a disinformation campaign. This may include staging videos of purportedly real situations, repurposing existing video artefacts, or using AI-generated video creation and editing technologies (including deepfakes). + +**Tactic**: TA06 Develop Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00110 How COVID-19 conspiracists and extremists use crowdfunding platforms to fund their activities](../../generated_pages/incidents/I00110.md) | The EU DisinfoLab produced a report into disinformation published on crowdfunding platforms:<br>

More worrisome is the direct monetisation of disinformation happening on crowdfunding platforms: on Kickstarter, we found a user openly raising money for a documentary project suggesting that COVID-19 is a conspiracy.

A Kickstarter user attempted to use the platform to fund production of a documentary (T0017: Conduct Fundraising, T0087: Develop Video-Based Content, T0146: Account Asset, T0148.006: Crowdfunding Platform).

On Patreon, we found several instances of direct monetisation of COVID-19 disinformation, including posts promoting a device allegedly protecting against COVID-19 and 5G, as well as posts related to the “Plandemic” conspiracy video, which gained attention on YouTube before being removed by the platform.

We also found an account called “Stranger than fiction” entirely dedicated to disinformation, which openly states that their content was “Banned by screwtube and fakebook, our videos have been viewed over a billion times.”


The “Stranger than fiction” user presented itself as an alternative news source which had been banned from other platforms (T0146: Account Asset, T0097.202: News Outlet Persona, T0121.001: Bypass Content Blocking, T0152.012: Subscription Service Platform).<br>

On the US-based crowdfunding platform IndieGogo, EU DisinfoLab found a successful crowdfunding campaign of €133.903 for a book called Revolution Q. This book, now also available on Amazon, claims to be “Written for both newcomers and long-time QAnon fans alike, this book is a treasure-trove of information designed to help everyone weather The Storm.”

An IndieGogo account was used to gather funds to produce a book on QAnon (T0017: Conduct Fundraising, T0085.005: Develop Book, T0146: Account Asset, T0148.006: Crowdfunding Platform), with the book later sold on the Amazon marketplace (T0148.007: eCommerce Platform). | +| [I00111 Patreon Is Bankrolling Climate Change Deniers While We All Burn](../../generated_pages/incidents/I00111.md) | In this article VICE News discusses a report produced by Advance Democracy on people who use Patreon to spread the false claim that an impending ice age will reverse the harms of the ongoing climate crisis:<br>

“The spread of climate misinformation is prolific on social media, as well as on sites like Patreon, where users are actually financially compensated through the platform for spreading falsehoods,” Daniel Jones, president of Advance Democracy, told VICE News.

“Companies hosting and promoting climate misinformation have a responsibility to take action to reduce dangerous misinformation, as falsehoods about climate science are every bit as dangerous as lies about vaccinations and disinformation about our elections.”

Patreon did not respond to VICE News’ request for comment on the report’s findings.

One of the biggest accounts spreading climate conspiracies is ADAPT 2030, which is run by David DuByne, who has 1,100 followers on Patreon. He is currently making over $3,500 every month from his subscribers.

[The science DuByne relies on does not support his hypothesis. However,] this has not prevented DuByne and many others from preying on people’s fears about climate change to spread conspiracies about an impending ice age, which they say will miraculously fix all of earth’s climate problems.

DuByne offers seven different membership levels for supporters, beginning at just $1 per month.

The most expensive costs $100 a month, and gives patrons “a private 20-minute call with David DuByne once per month, to discuss your particular preparedness issues or concerns.” So far just two people are paying this amount.

The researchers also found at least eight other accounts on Patreon that have spread climate change conspiracy theories as part of wider conspiracy sharing, including baseless claims about COVID-19 and the legitimacy of Joe Biden’s presidency. Some of these accounts are earning over $600 per month.


David DuByne created an account on Patreon, which he uses to post text, videos, and podcasts for his subscribers to discuss (T0085: Develop Text-Based Content, T0087: Develop Video-Based Content, T0088: Develop Audio-Based Content, T0146: Account Asset, T0115: Post Content, T0152.012: Subscription Service Platform, T0151.014: Comments Section, T0155.006: Subscription Access Asset). | +| [I00125 The Agency](../../generated_pages/incidents/I00125.md) | In 2014 threat actors attributed to Russia spread the false narrative that a local chemical plant had leaked toxic fumes. This report discusses aspects of the operation:

[The chemical plant leak] hoax was just one in a wave of similar attacks during the second half of last year. On Dec. 13, two months after a handful of Ebola cases in the United States touched off a minor media panic, many of the same Twitter accounts used to spread the Columbian Chemicals hoax began to post about an outbreak of Ebola in Atlanta. [...] Again, the attention to detail was remarkable, suggesting a tremendous amount of effort. A YouTube video showed a team of hazmat-suited medical workers transporting a victim from the airport. Beyoncé’s recent single “7/11” played in the background, an apparent attempt to establish the video’s contemporaneity. A truck in the parking lot sported the logo of the Hartsfield-Jackson Atlanta International Airport.

Accounts which previously presented as Louisiana locals were repurposed for use in a different campaign, this time presenting as locals of Atlanta, a city over 500 miles away from Louisiana and in a different time zone (T0146: Account Asset, T0097.101: Local Persona, T0143.002: Fabricated Persona, T0151.008: Microblogging Platform, T0150.004: Repurposed Asset).<br>

A video was created which appeared to support the campaign’s narrative (T0087: Develop Video-Based Content), with great attention given to small details which made the video appear more legitimate. | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0087: Develop Video-Based Content + +**Summary**: Creating and editing false or misleading video artefacts, often aligned with one or more specific narratives, for use in a disinformation campaign. This may include staging videos of purportedly real situations, repurposing existing video artefacts, or using AI-generated video creation and editing technologies (including deepfakes). + **Tactic**: TA06 Develop Content @@ -22,4 +70,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0088.001.md b/generated_pages/techniques/T0088.001.md index dc3cc72..9d2a028 100644 --- a/generated_pages/techniques/T0088.001.md +++ b/generated_pages/techniques/T0088.001.md @@ -2,6 +2,54 @@ **Summary**: Deepfakes refer to AI-generated falsified photos, videos, or soundbites. An influence operation may use deepfakes to depict an inauthentic situation by synthetically recreating an individual’s face, body, voice, and physical gestures. +**Tactic**: TA06 Develop Content **Parent Technique:** T0088 Develop Audio-Based Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00068 Attempted Audio Deepfake Call Targets LastPass Employee](../../generated_pages/incidents/I00068.md) | “While reports of [...] deepfake calls targeting private companies are luckily still rare, LastPass itself experienced a deepfake attempt earlier today that we are sharing with the larger community to help raise awareness that this tactic is spreading and all companies should be on the alert. In our case, an employee received a series of calls, texts, and at least one voicemail featuring an audio deepfake from a threat actor impersonating our CEO via WhatsApp. As the attempted communication was outside of normal business communication channels and due to the employee’s suspicion regarding the presence of many of the hallmarks of a social engineering attempt (such as forced urgency), our employee rightly ignored the messages and reported the incident to our internal security team so that we could take steps to both mitigate the threat and raise awareness of the tactic both internally and externally.”

In this example attackers impersonated the CEO of LastPass (T0097.100: Individual Persona, T0143.003: Impersonated Persona), targeting one of its employees over WhatsApp (T0151.004: Chat Platform, T0155.007: Encrypted Communication Channel) using deepfaked audio (T0088.001: Develop AI-Generated Audio (Deepfakes)). | +| [I00096 China ramps up use of AI misinformation](../../generated_pages/incidents/I00096.md) | The Microsoft Threat Analysis Centre (MTAC) published a report documenting the use of AI by pro-Chinese threat actors:<br>

On 13 January, Spamouflage [(a Pro-Chinese Communist Party actor)] posted audio clips to YouTube of independent candidate [for Taiwan’s Jan 2024 presidential election] Terry Gou – who also founded electronics giant Foxconn – in which Gou endorsed another candidate in the race. This clip was almost certainly AI-generated, and it was swiftly removed by YouTube. A fake letter purporting to be from Gou, endorsing the same candidate, had already circulated – Gou had of course made no such endorsement.

Here Spamouflage used an account on YouTube to post AI-generated audio impersonating an electoral candidate (T0146: Account Asset, T0152.006: Video Platform, T0115: Post Content, T0088.001: Develop AI-Generated Audio (Deepfakes), T0143.003: Impersonated Persona, T0097.110: Party Official Persona).<br>

Spamouflage also exploited AI-powered video platform CapCut – which is owned by TikTok backers ByteDance – to generate fake news anchors which were used in a variety of campaigns targeting the various presidential candidates in Taiwan.

Spamouflage created accounts on CapCut, which it used to create AI-generated videos of fabricated news anchors (T0146: Account Asset, T0154.002: AI Media Platform, T0087.001: Develop AI-Generated Videos (Deepfakes), T0143.002: Fabricated Persona, T0097.102: Journalist Persona). | +| [I00103 The racist AI deepfake that fooled and divided a community](../../generated_pages/incidents/I00103.md) | “I seriously don't understand why I have to constantly put up with these dumbasses here every day.”<br>

So began what appeared to be a long tirade from the principal of Pikesville High School, punctuated with racist, antisemitic and offensive tropes. It sounded like it had been secretly recorded.

The speaker went on to bemoan “ungrateful black kids” and Jewish people in the community.

The clip, first posted in [January 2024], went viral nationally. But it really struck a nerve in the peaceful, leafy suburb of Pikesville, which has large black and Jewish populations, and in the nearby city of Baltimore, Maryland. Principal Eric Eiswert was put on paid administrative leave pending an investigation.

[...]

But what those sharing the clip didn’t realise at the time was that another bombshell was about to drop: the clip was an AI-generated fake.

[...]

[In April 2024], Baltimore Police Chief Robert McCullough confirmed they now had “conclusive evidence that the recording was not authentic”.

And they believed they knew who made the fake.

Police charged 31-year-old Dazhon Darien, the school’s athletics director, with several counts related to the fake video. Charges included theft, retaliating against a witness and stalking.

He was arrested at the airport, where police say he was planning to fly to Houston, Texas.

Police say that Mr Darien had been under investigation by Principal Eiswert over an alleged theft of $1,916 (£1,460) from the school. They also allege there had been “work performance challenges” and his contract was likely not to be renewed.

Their theory was that by creating the deepfake recording, he hoped to discredit the principal before he could be fired.

Investigators say they traced an email used to send the original video to a server connected to Mr Darien, and allege that he used Baltimore County Public Schools' computer network to access AI tools. He is due to stand trial in December 2024.


By associating Mr Darien with the server used to email the original AI-generated audio, investigators link Darien to the fabricated content (T0149.005: Server Asset, T0088.001: Develop AI-Generated Audio (Deepfakes)). They also assert that Darien used computers owned by the school to access platforms used to generate the audio (T0146: Account Asset, T0154.002: AI Media Platform). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0088.001: Develop AI-Generated Audio (Deepfakes) + +**Summary**: Deepfakes refer to AI-generated falsified photos, videos, or soundbites. An influence operation may use deepfakes to depict an inauthentic situation by synthetically recreating an individual’s face, body, voice, and physical gestures. + +**Tactic**: TA06 Develop Content **Parent Technique:** T0088 Develop Audio-Based Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00068 Attempted Audio Deepfake Call Targets LastPass Employee](../../generated_pages/incidents/I00068.md) | “While reports of [...] deepfake calls targeting private companies are luckily still rare, LastPass itself experienced a deepfake attempt earlier today that we are sharing with the larger community to help raise awareness that this tactic is spreading and all companies should be on the alert. In our case, an employee received a series of calls, texts, and at least one voicemail featuring an audio deepfake from a threat actor impersonating our CEO via WhatsApp. As the attempted communication was outside of normal business communication channels and due to the employee’s suspicion regarding the presence of many of the hallmarks of a social engineering attempt (such as forced urgency), our employee rightly ignored the messages and reported the incident to our internal security team so that we could take steps to both mitigate the threat and raise awareness of the tactic both internally and externally.”<br>

In this example attackers impersonated the CEO of LastPass (T0097.100: Individual Persona, T0143.003: Impersonated Persona), targeting one of its employees over WhatsApp (T0151.004: Chat Platform, T0155.007: Encrypted Communication Channel) using deepfaked audio (T0088.001: Develop AI-Generated Audio (Deepfakes)). | +| [I00096 China ramps up use of AI misinformation](../../generated_pages/incidents/I00096.md) | The Microsoft Threat Analysis Centre (MTAC) published a report documenting the use of AI by pro-Chinese threat actors:<br>

On 13 January, Spamouflage [(a Pro-Chinese Communist Party actor)] posted audio clips to YouTube of independent candidate [for Taiwan’s Jan 2024 presidential election] Terry Gou – who also founded electronics giant Foxconn – in which Gou endorsed another candidate in the race. This clip was almost certainly AI-generated, and it was swiftly removed by YouTube. A fake letter purporting to be from Gou, endorsing the same candidate, had already circulated – Gou had of course made no such endorsement.

Here Spamouflage used an account on YouTube to post AI-generated audio impersonating an electoral candidate (T0146: Account Asset, T0152.006: Video Platform, T0115: Post Content, T0088.001: Develop AI-Generated Audio (Deepfakes), T0143.003: Impersonated Persona, T0097.110: Party Official Persona).<br>

Spamouflage also exploited AI-powered video platform CapCut – which is owned by TikTok backers ByteDance – to generate fake news anchors which were used in a variety of campaigns targeting the various presidential candidates in Taiwan.

Spamouflage created accounts on CapCut, which it used to create AI-generated videos of fabricated news anchors (T0146: Account Asset, T0154.002: AI Media Platform, T0087.001: Develop AI-Generated Videos (Deepfakes), T0143.002: Fabricated Persona, T0097.102: Journalist Persona). | +| [I00103 The racist AI deepfake that fooled and divided a community](../../generated_pages/incidents/I00103.md) | “I seriously don't understand why I have to constantly put up with these dumbasses here every day.”<br>

So began what appeared to be a long tirade from the principal of Pikesville High School, punctuated with racist, antisemitic and offensive tropes. It sounded like it had been secretly recorded.

The speaker went on to bemoan “ungrateful black kids” and Jewish people in the community.

The clip, first posted in [January 2024], went viral nationally. But it really struck a nerve in the peaceful, leafy suburb of Pikesville, which has large black and Jewish populations, and in the nearby city of Baltimore, Maryland. Principal Eric Eiswert was put on paid administrative leave pending an investigation.

[...]

But what those sharing the clip didn’t realise at the time was that another bombshell was about to drop: the clip was an AI-generated fake.

[...]

[In April 2024], Baltimore Police Chief Robert McCullough confirmed they now had “conclusive evidence that the recording was not authentic”.

And they believed they knew who made the fake.

Police charged 31-year-old Dazhon Darien, the school’s athletics director, with several counts related to the fake video. Charges included theft, retaliating against a witness and stalking.

He was arrested at the airport, where police say he was planning to fly to Houston, Texas.

Police say that Mr Darien had been under investigation by Principal Eiswert over an alleged theft of $1,916 (£1,460) from the school. They also allege there had been “work performance challenges” and his contract was likely not to be renewed.

Their theory was that by creating the deepfake recording, he hoped to discredit the principal before he could be fired.

Investigators say they traced an email used to send the original video to a server connected to Mr Darien, and allege that he used Baltimore County Public Schools' computer network to access AI tools. He is due to stand trial in December 2024.


By associating Mr Darien with the server used to email the original AI-generated audio, investigators link Darien to the fabricated content (T0149.005: Server Asset, T0088.001: Develop AI-Generated Audio (Deepfakes)). They also assert that Darien used computers owned by the school to access platforms used to generate the audio (T0146: Account Asset, T0154.002: AI Media Platform). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0088.001: Develop AI-Generated Audio (Deepfakes) + +**Summary**: Deepfakes refer to AI-generated falsified photos, videos, or soundbites. An influence operation may use deepfakes to depict an inauthentic situation by synthetically recreating an individual’s face, body, voice, and physical gestures. + **Tactic**: TA06 Develop Content @@ -22,4 +70,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0088.002.md b/generated_pages/techniques/T0088.002.md index f093185..3184d83 100644 --- a/generated_pages/techniques/T0088.002.md +++ b/generated_pages/techniques/T0088.002.md @@ -2,6 +2,48 @@ **Summary**: Cheap fakes utilise less sophisticated measures of altering an image, video, or audio for example, slowing, speeding, or cutting footage to create a false context surrounding an image or event. +**Tactic**: TA06 Develop Content **Parent Technique:** T0088 Develop Audio-Based Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0088.002: Deceptively Edit Audio (Cheap Fakes) + +**Summary**: Cheap fakes utilise less sophisticated measures of altering an image, video, or audio for example, slowing, speeding, or cutting footage to create a false context surrounding an image or event. + +**Tactic**: TA06 Develop Content **Parent Technique:** T0088 Develop Audio-Based Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0088.002: Deceptively Edit Audio (Cheap Fakes) + +**Summary**: Cheap fakes utilise less sophisticated measures of altering an image, video, or audio for example, slowing, speeding, or cutting footage to create a false context surrounding an image or event. + +**Tactic**: TA06 Develop Content @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0088.md b/generated_pages/techniques/T0088.md index db66269..e30810f 100644 --- a/generated_pages/techniques/T0088.md +++ b/generated_pages/techniques/T0088.md @@ -2,6 +2,50 @@ **Summary**: Creating and editing false or misleading audio artefacts, often aligned with one or more specific narratives, for use in a disinformation campaign. This may include creating completely new audio content, repurposing existing audio artefacts (including cheap fakes), or using AI-generated audio creation and editing technologies (including deepfakes). 
+**Tactic**: TA06 Develop Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00111 Patreon Is Bankrolling Climate Change Deniers While We All Burn](../../generated_pages/incidents/I00111.md) | In this article VICE News discusses a report produced by Advance Democracy on people who use Patreon to spread the false claim that an impending ice age will reverse the harms of the ongoing climate crisis:

“The spread of climate misinformation is prolific on social media, as well as on sites like Patreon, where users are actually financially compensated through the platform for spreading falsehoods,” Daniel Jones, president of Advance Democracy, told VICE News.

“Companies hosting and promoting climate misinformation have a responsibility to take action to reduce dangerous misinformation, as falsehoods about climate science are every bit as dangerous as lies about vaccinations and disinformation about our elections.”

Patreon did not respond to VICE News’ request for comment on the report’s findings.

One of the biggest accounts spreading climate conspiracies is ADAPT 2030, which is run by David DuByne, who has 1,100 followers on Patreon. He is currently making over $3,500 every month from his subscribers.

[The science DuByne relies on does not support his hypothesis. However,] this has not prevented DuByne and many others from preying on people’s fears about climate change to spread conspiracies about an impending ice age, which they say will miraculously fix all of earth’s climate problems.

DuByne offers seven different membership levels for supporters, beginning at just $1 per month.

The most expensive costs $100 a month, and gives patrons “a private 20-minute call with David DuByne once per month, to discuss your particular preparedness issues or concerns.” So far just two people are paying this amount.

The researchers also found at least eight other accounts on Patreon that have spread climate change conspiracy theories as part of wider conspiracy sharing, including baseless claims about COVID-19 and the legitimacy of Joe Biden’s presidency. Some of these accounts are earning over $600 per month.


David DuByne created an account on Patreon, which he uses to post text, videos, and podcasts for his subscribers to discuss (T0085: Develop Text-Based Content, T0087: Develop Video-Based Content, T0088: Develop Audio-Based Content, T0146: Account Asset, T0115: Post Content, T0152.012: Subscription Service Platform, T0151.014: Comments Section, T0155.006: Subscription Access Asset). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0088: Develop Audio-Based Content + +**Summary**: Creating and editing false or misleading audio artefacts, often aligned with one or more specific narratives, for use in a disinformation campaign. This may include creating completely new audio content, repurposing existing audio artefacts (including cheap fakes), or using AI-generated audio creation and editing technologies (including deepfakes). + +**Tactic**: TA06 Develop Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00111 Patreon Is Bankrolling Climate Change Deniers While We All Burn](../../generated_pages/incidents/I00111.md) | In this article VICE News discusses a report produced by Advance Democracy on people who use Patreon to spread the false claim that an impending ice age will reverse the harms of the ongoing climate crisis:

“The spread of climate misinformation is prolific on social media, as well as on sites like Patreon, where users are actually financially compensated through the platform for spreading falsehoods,” Daniel Jones, president of Advance Democracy, told VICE News.

“Companies hosting and promoting climate misinformation have a responsibility to take action to reduce dangerous misinformation, as falsehoods about climate science are every bit as dangerous as lies about vaccinations and disinformation about our elections.”

Patreon did not respond to VICE News’ request for comment on the report’s findings.

One of the biggest accounts spreading climate conspiracies is ADAPT 2030, which is run by David DuByne, who has 1,100 followers on Patreon. He is currently making over $3,500 every month from his subscribers.

[The science DuByne relies on does not support his hypothesis. However,] this has not prevented DuByne and many others from preying on people’s fears about climate change to spread conspiracies about an impending ice age, which they say will miraculously fix all of earth’s climate problems.

DuByne offers seven different membership levels for supporters, beginning at just $1 per month.

The most expensive costs $100 a month, and gives patrons “a private 20-minute call with David DuByne once per month, to discuss your particular preparedness issues or concerns.” So far just two people are paying this amount.

The researchers also found at least eight other accounts on Patreon that have spread climate change conspiracy theories as part of wider conspiracy sharing, including baseless claims about COVID-19 and the legitimacy of Joe Biden’s presidency. Some of these accounts are earning over $600 per month.


David DuByne created an account on Patreon, which he uses to post text, videos, and podcasts for his subscribers to discuss (T0085: Develop Text-Based Content, T0087: Develop Video-Based Content, T0088: Develop Audio-Based Content, T0146: Account Asset, T0115: Post Content, T0152.012: Subscription Service Platform, T0151.014: Comments Section, T0155.006: Subscription Access Asset). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0088: Develop Audio-Based Content + +**Summary**: Creating and editing false or misleading audio artefacts, often aligned with one or more specific narratives, for use in a disinformation campaign. This may include creating completely new audio content, repurposing existing audio artefacts (including cheap fakes), or using AI-generated audio creation and editing technologies (including deepfakes). + **Tactic**: TA06 Develop Content @@ -20,4 +64,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0089.001.md b/generated_pages/techniques/T0089.001.md index 95ff7a2..5a4d42b 100644 --- a/generated_pages/techniques/T0089.001.md +++ b/generated_pages/techniques/T0089.001.md @@ -2,6 +2,50 @@ **Summary**: Procure authentic documents that are not publicly available, by whatever means -- whether legal or illegal, highly-resourced or less so. These documents can be "leaked" during later stages in the operation. +**Tactic**: TA06 Develop Content **Parent Technique:** T0089 Obtain Private Documents + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00118 ‘War Thunder’ players are once again leaking sensitive military technology information on a video game forum](../../generated_pages/incidents/I00118.md) | In an effort to prove that the developers behind a popular multiplayer vehicle combat game had made a mistake, a player went ahead and published classified British military documents about one of the real-life tanks featured in the game.

This truly bizarre turn of events recently occurred in the public forum for War Thunder, a free-to-play multiplayer combat sim featuring modern land, air, and sea craft. Getting a small detail wrong on a piece of equipment might not be a big deal for the average gamer, but for the War Thunder crowd it sure as hell is. With 25,000 devoted players, the game very much bills itself as the military vehicle combat simulator.<br>

A player, who identified himself as a British tank commander, claimed that the game’s developers at Gaijin Entertainment had inaccurately represented the Challenger 2 main battle tank used by the British military.

The self-described tank commander’s bio listed his location as Tidworth Camp in Wiltshire, England, according to the UK Defense Journal, which reported that the base is home to the Royal Tank Regiment, which fields Challenger 2 tanks.

The player, who went by the handle Pyrophoric, reportedly shared an image on the War Thunder forum of the tank’s specs that were pulled from the Challenger 2’s Army Equipment Support Publication, which is essentially a technical manual.

[...]

A moderator for the forum, whose handle is “Templar_”, explained that the developer had removed the material after they received confirmation from the Ministry of Defense that the document is still in fact classified.<br>


A user of War Thunder’s forums posted confidential documents to win an argument (T0089.001: Obtain Authentic Documents, T0146: Account Asset, T0097.105: Military Personnel Persona, T0115: Post Content, T0143.001: Authentic Persona, T0151.009: Legacy Online Forum Platform). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0089.001: Obtain Authentic Documents + +**Summary**: Procure authentic documents that are not publicly available, by whatever means -- whether legal or illegal, highly-resourced or less so. These documents can be "leaked" during later stages in the operation. + +**Tactic**: TA06 Develop Content **Parent Technique:** T0089 Obtain Private Documents + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00118 ‘War Thunder’ players are once again leaking sensitive military technology information on a video game forum](../../generated_pages/incidents/I00118.md) | In an effort to prove that the developers behind a popular multiplayer vehicle combat game had made a mistake, a player went ahead and published classified British military documents about one of the real-life tanks featured in the game.

This truly bizarre turn of events recently occurred in the public forum for War Thunder, a free-to-play multiplayer combat sim featuring modern land, air, and sea craft. Getting a small detail wrong on a piece of equipment might not be a big deal for the average gamer, but for the War Thunder crowd it sure as hell is. With 25,000 devoted players, the game very much bills itself as the military vehicle combat simulator.<br>

A player, who identified himself as a British tank commander, claimed that the game’s developers at Gaijin Entertainment had inaccurately represented the Challenger 2 main battle tank used by the British military.

The self-described tank commander’s bio listed his location as Tidworth Camp in Wiltshire, England, according to the UK Defense Journal, which reported that the base is home to the Royal Tank Regiment, which fields Challenger 2 tanks.

The player, who went by the handle Pyrophoric, reportedly shared an image on the War Thunder forum of the tank’s specs that were pulled from the Challenger 2’s Army Equipment Support Publication, which is essentially a technical manual.

[...]

A moderator for the forum, whose handle is “Templar_”, explained that the developer had removed the material after they received confirmation from the Ministry of Defense that the document is still in fact classified.<br>


A user of War Thunder’s forums posted confidential documents to win an argument (T0089.001: Obtain Authentic Documents, T0146: Account Asset, T0097.105: Military Personnel Persona, T0115: Post Content, T0143.001: Authentic Persona, T0151.009: Legacy Online Forum Platform). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0089.001: Obtain Authentic Documents + +**Summary**: Procure authentic documents that are not publicly available, by whatever means -- whether legal or illegal, highly-resourced or less so. These documents can be "leaked" during later stages in the operation. + **Tactic**: TA06 Develop Content @@ -20,4 +64,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0089.003.md b/generated_pages/techniques/T0089.003.md index 3d6b4b5..2bce961 100644 --- a/generated_pages/techniques/T0089.003.md +++ b/generated_pages/techniques/T0089.003.md @@ -2,6 +2,48 @@ **Summary**: Alter authentic documents (public or non-public) to achieve campaign goals. The altered documents are intended to appear as if they are authentic and can be "leaked" during later stages in the operation. +**Tactic**: TA06 Develop Content **Parent Technique:** T0089 Obtain Private Documents + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0089.003: Alter Authentic Documents + +**Summary**: Alter authentic documents (public or non-public) to achieve campaign goals. The altered documents are intended to appear as if they are authentic and can be "leaked" during later stages in the operation. + +**Tactic**: TA06 Develop Content **Parent Technique:** T0089 Obtain Private Documents + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0089.003: Alter Authentic Documents + +**Summary**: Alter authentic documents (public or non-public) to achieve campaign goals. The altered documents are intended to appear as if they are authentic and can be "leaked" during later stages in the operation. + +**Tactic**: TA06 Develop Content @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0089.md b/generated_pages/techniques/T0089.md index 213e77d..2f542a3 100644 --- a/generated_pages/techniques/T0089.md +++ b/generated_pages/techniques/T0089.md @@ -2,6 +2,50 @@ **Summary**: Procuring documents that are not publicly available, by whatever means -- whether legal or illegal, highly-resourced or less so. These documents can include authentic non-public documents, authentic non-public documents that have been altered, or inauthentic documents intended to appear as if they are authentic non-public documents. All of these types of documents can be "leaked" during later stages in the operation. 
+**Tactic**: TA06 Develop Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00119 Independent journalist publishes Trump campaign document hacked by Iran despite election interference concerns](../../generated_pages/incidents/I00119.md) | An American journalist who runs an independent newsletter published a document [on 26 Sep 2024] that appears to have been stolen from Donald Trump’s presidential campaign — the first public posting of a file that is believed to be part of a dossier that federal officials say is part of an Iranian effort to manipulate the [2024] U.S. election.

The PDF document is a 271-page opposition research file on former President Donald Trump’s running mate, Sen. JD Vance, R-Ohio.

For more than two months, hackers who the U.S. says are tied to Iran have tried to persuade the American media to cover files they stole. No outlets took the bait.

But on Thursday, reporter Ken Klippenstein, who self-publishes on Substack after he left The Intercept this year, published one of the files.

[...]

Reporters who have received the documents describe the same pattern: An AOL account emails them files, signed by a person using the name “Robert,” who is reluctant to speak to their identity or reasons for wanting the documents to receive coverage.

NBC News was not part of the Robert persona’s direct outreach, but it has viewed its correspondence with a reporter at another publication.

One of the emails from the Robert persona previously viewed by NBC News included three large PDF files, each corresponding to Trump’s three reported finalists for vice president. The Vance file appears to be the one Klippenstein hosts on his site.


In this example hackers attributed to Iran used the Robert persona to email hacked documents to journalists (T0146: Account Asset, T0097.100: Individual Persona, T0153.001: Email Platform).<br>

The journalist Ken Klippenstein used his existing blog on Substack to host a link to download the document (T0089: Obtain Private Documents, T0097.102: Journalist Persona, T0115: Post Content, T0143.001: Authentic Persona, T0152.001: Blogging Platform, T0152.002: Blog Asset, T0150.003: Pre-Existing Asset). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0089: Obtain Private Documents + +**Summary**: Procuring documents that are not publicly available, by whatever means -- whether legal or illegal, highly-resourced or less so. These documents can include authentic non-public documents, authentic non-public documents that have been altered, or inauthentic documents intended to appear as if they are authentic non-public documents. All of these types of documents can be "leaked" during later stages in the operation. + +**Tactic**: TA06 Develop Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00119 Independent journalist publishes Trump campaign document hacked by Iran despite election interference concerns](../../generated_pages/incidents/I00119.md) | An American journalist who runs an independent newsletter published a document [on 26 Sep 2024] that appears to have been stolen from Donald Trump’s presidential campaign — the first public posting of a file that is believed to be part of a dossier that federal officials say is part of an Iranian effort to manipulate the [2024] U.S. election.<br>

The PDF document is a 271-page opposition research file on former President Donald Trump’s running mate, Sen. JD Vance, R-Ohio.

For more than two months, hackers who the U.S. says are tied to Iran have tried to persuade the American media to cover files they stole. No outlets took the bait.

But on Thursday, reporter Ken Klippenstein, who self-publishes on Substack after he left The Intercept this year, published one of the files.

[...]

Reporters who have received the documents describe the same pattern: An AOL account emails them files, signed by a person using the name “Robert,” who is reluctant to speak to their identity or reasons for wanting the documents to receive coverage.

NBC News was not part of the Robert persona’s direct outreach, but it has viewed its correspondence with a reporter at another publication.

One of the emails from the Robert persona previously viewed by NBC News included three large PDF files, each corresponding to Trump’s three reported finalists for vice president. The Vance file appears to be the one Klippenstein hosts on his site.


In this example hackers attributed to Iran used the Robert persona to email hacked documents to journalists (T0146: Account Asset, T0097.100: Individual Persona, T0153.001: Email Platform).<br>

The journalist Ken Klippenstein used his existing blog on Substack to host a link to download the document (T0089: Obtain Private Documents, T0097.102: Journalist Persona, T0115: Post Content, T0143.001: Authentic Persona, T0152.001: Blogging Platform, T0152.002: Blog Asset, T0150.003: Pre-Existing Asset). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0089: Obtain Private Documents + +**Summary**: Procuring documents that are not publicly available, by whatever means -- whether legal or illegal, highly-resourced or less so. These documents can include authentic non-public documents, authentic non-public documents that have been altered, or inauthentic documents intended to appear as if they are authentic non-public documents. All of these types of documents can be "leaked" during later stages in the operation. + **Tactic**: TA06 Develop Content @@ -20,4 +64,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0091.001.md b/generated_pages/techniques/T0091.001.md index f325fb5..fd6d655 100644 --- a/generated_pages/techniques/T0091.001.md +++ b/generated_pages/techniques/T0091.001.md @@ -2,6 +2,48 @@ **Summary**: Operators recruit paid contractors to support the campaign. +**Tactic**: TA15 Establish Assets **Parent Technique:** T0091 Recruit Malign Actors + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0091.001: Recruit Contractors + +**Summary**: Operators recruit paid contractors to support the campaign. + +**Tactic**: TA15 Establish Assets **Parent Technique:** T0091 Recruit Malign Actors + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0091.001: Recruit Contractors + +**Summary**: Operators recruit paid contractors to support the campaign. + **Tactic**: TA15 Establish Assets @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0091.002.md b/generated_pages/techniques/T0091.002.md index 4124eef..56f9859 100644 --- a/generated_pages/techniques/T0091.002.md +++ b/generated_pages/techniques/T0091.002.md @@ -2,6 +2,48 @@ **Summary**: Operators recruit partisans (ideologically-aligned individuals) to support the campaign. +**Tactic**: TA15 Establish Assets **Parent Technique:** T0091 Recruit Malign Actors + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0091.002: Recruit Partisans + +**Summary**: Operators recruit partisans (ideologically-aligned individuals) to support the campaign. 
+ +**Tactic**: TA15 Establish Assets **Parent Technique:** T0091 Recruit Malign Actors + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0091.002: Recruit Partisans + +**Summary**: Operators recruit partisans (ideologically-aligned individuals) to support the campaign. + **Tactic**: TA15 Establish Assets @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0091.003.md b/generated_pages/techniques/T0091.003.md index 055ee15..e6587a4 100644 --- a/generated_pages/techniques/T0091.003.md +++ b/generated_pages/techniques/T0091.003.md @@ -2,6 +2,48 @@ **Summary**: An influence operation may hire trolls, or human operators of fake accounts that aim to provoke others by posting and amplifying content about controversial issues. Trolls can serve to discredit an influence operation’s opposition or bring attention to the operation’s cause through debate. Classic trolls refer to regular people who troll for personal reasons, such as attention-seeking or boredom. Classic trolls may advance operation narratives by coincidence but are not directly affiliated with any larger operation. Conversely, hybrid trolls act on behalf of another institution, such as a state or financial organisation, and post content with a specific ideological goal. Hybrid trolls may be highly advanced and institutionalised or less organised and work for a single individual. +**Tactic**: TA15 Establish Assets **Parent Technique:** T0091 Recruit Malign Actors + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0091.003: Enlist Troll Accounts + +**Summary**: An influence operation may hire trolls, or human operators of fake accounts that aim to provoke others by posting and amplifying content about controversial issues. Trolls can serve to discredit an influence operation’s opposition or bring attention to the operation’s cause through debate. Classic trolls refer to regular people who troll for personal reasons, such as attention-seeking or boredom. Classic trolls may advance operation narratives by coincidence but are not directly affiliated with any larger operation. Conversely, hybrid trolls act on behalf of another institution, such as a state or financial organisation, and post content with a specific ideological goal. Hybrid trolls may be highly advanced and institutionalised or less organised and work for a single individual. + +**Tactic**: TA15 Establish Assets **Parent Technique:** T0091 Recruit Malign Actors + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0091.003: Enlist Troll Accounts + +**Summary**: An influence operation may hire trolls, or human operators of fake accounts that aim to provoke others by posting and amplifying content about controversial issues. 
Trolls can serve to discredit an influence operation’s opposition or bring attention to the operation’s cause through debate. Classic trolls refer to regular people who troll for personal reasons, such as attention-seeking or boredom. Classic trolls may advance operation narratives by coincidence but are not directly affiliated with any larger operation. Conversely, hybrid trolls act on behalf of another institution, such as a state or financial organisation, and post content with a specific ideological goal. Hybrid trolls may be highly advanced and institutionalised or less organised and work for a single individual. + **Tactic**: TA15 Establish Assets @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0091.md b/generated_pages/techniques/T0091.md index b2e14f2..79309f1 100644 --- a/generated_pages/techniques/T0091.md +++ b/generated_pages/techniques/T0091.md @@ -2,6 +2,48 @@ **Summary**: Operators recruit bad actors by paying, recruiting, or exerting control over individuals; this includes trolls, partisans, and contractors. +**Tactic**: TA15 Establish Assets + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0091: Recruit Malign Actors + +**Summary**: Operators recruit bad actors by paying, recruiting, or exerting control over individuals; this includes trolls, partisans, and contractors. + +**Tactic**: TA15 Establish Assets + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0091: Recruit Malign Actors + +**Summary**: Operators recruit bad actors by paying, recruiting, or exerting control over individuals; this includes trolls, partisans, and contractors. + **Tactic**: TA15 Establish Assets @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0092.001.md b/generated_pages/techniques/T0092.001.md index 8a80e62..1f26fe0 100644 --- a/generated_pages/techniques/T0092.001.md +++ b/generated_pages/techniques/T0092.001.md @@ -2,6 +2,48 @@ **Summary**: Influence operations may establish organisations with legitimate or falsified hierarchies, staff, and content to structure operation assets, provide a sense of legitimacy to the operation, or provide institutional backing to operation activities. 
+ +**Tactic**: TA15 Establish Assets **Parent Technique:** T0092 Build Network + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0092.001: Create Organisations + +**Summary**: Influence operations may establish organisations with legitimate or falsified hierarchies, staff, and content to structure operation assets, provide a sense of legitimacy to the operation, or provide institutional backing to operation activities. + **Tactic**: TA15 Establish Assets @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0092.002.md b/generated_pages/techniques/T0092.002.md index b0515e5..05145a2 100644 --- a/generated_pages/techniques/T0092.002.md +++ b/generated_pages/techniques/T0092.002.md @@ -2,6 +2,48 @@ **Summary**: A follow train is a group of people who follow each other on a social media platform, often as a way for an individual or campaign to grow its social media following. Follow trains may be a violation of platform Terms of Service. They are also known as follow-for-follow groups. +**Tactic**: TA15 Establish Assets **Parent Technique:** T0092 Build Network + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0092.002: Use Follow Trains + +**Summary**: A follow train is a group of people who follow each other on a social media platform, often as a way for an individual or campaign to grow its social media following. Follow trains may be a violation of platform Terms of Service. They are also known as follow-for-follow groups. + +**Tactic**: TA15 Establish Assets **Parent Technique:** T0092 Build Network + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0092.002: Use Follow Trains + +**Summary**: A follow train is a group of people who follow each other on a social media platform, often as a way for an individual or campaign to grow its social media following. Follow trains may be a violation of platform Terms of Service. They are also known as follow-for-follow groups. + **Tactic**: TA15 Establish Assets @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0092.003.md b/generated_pages/techniques/T0092.003.md index a2fafae..b4adc90 100644 --- a/generated_pages/techniques/T0092.003.md +++ b/generated_pages/techniques/T0092.003.md @@ -2,6 +2,48 @@ **Summary**: When there is not an existing community or sub-group that meets a campaign's goals, an influence operation may seek to create a community or sub-group. 
+**Tactic**: TA15 Establish Assets **Parent Technique:** T0092 Build Network + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0092.003: Create Community or Sub-Group + +**Summary**: When there is not an existing community or sub-group that meets a campaign's goals, an influence operation may seek to create a community or sub-group. + +**Tactic**: TA15 Establish Assets **Parent Technique:** T0092 Build Network + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0092.003: Create Community or Sub-Group + +**Summary**: When there is not an existing community or sub-group that meets a campaign's goals, an influence operation may seek to create a community or sub-group. + **Tactic**: TA15 Establish Assets @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0092.md b/generated_pages/techniques/T0092.md index ea13a5c..b683aea 100644 --- a/generated_pages/techniques/T0092.md +++ b/generated_pages/techniques/T0092.md @@ -2,6 +2,50 @@ **Summary**: Operators build their own network, creating links between accounts -- whether authentic or inauthentic -- in order to amplify and promote narratives and artefacts, and encourage further growth of their network, as well as the ongoing sharing and engagement with operational content. +**Tactic**: TA15 Establish Assets + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00108 How you thought you support the animals and you ended up funding white supremacists](../../generated_pages/incidents/I00108.md) | This article examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology:

Suavelos uses Facebook and other platforms to amplify its message. In order to bypass the platforms’ community standards and keep their public pages active, Facebook pages such as “I support the police” are a good vehicle to spread a specific agenda without claiming to be racist. In looking back at this Facebook page, we followed Facebook’s algorithm for related pages and found suggested Facebook pages

[...]

This amplification strategy on Facebook is successful, as according to SimilarWeb figures, it attracts around 111,000 visits every month on the Suavelos.eu website.

[...]

Revenue through online advertisements can be achieved by different platforms through targeted advertisements, like Google Adsense or Doubleclick, or related and similar sponsored content, such as Taboola. Accordingly, Suavelos.eu uses both of these websites to display advertisements and consequently receives funding from such advertisements.

Once visitors are on the website supporting its advertisement revenue, Suavelos’ goal is to then turn these visitors into regular members of Suavelos network through donations or fees, or have them continue to support Suavelos.


Suavelos created a variety of pages on Facebook which presented as centring on prosocial causes. Facebook’s algorithm helped direct users to these pages (T0092: Build Network, T0151.001: Social Media Platform, T0153.006: Content Recommendation Algorithm, T0151.003: Online Community Page, T0143.208: Social Cause Persona).

Suavelos used these pages to generate traffic for their WordPress site (T0122: Direct Users to Alternative Platforms, T0152.003: Website Hosting Platform, T0152.004: Website Asset), which used accounts on a variety of online advertising platforms to host adverts (T0146: Account Asset, T0153.005: Online Advertising Platform). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0092: Build Network + +**Summary**: Operators build their own network, creating links between accounts -- whether authentic or inauthentic -- in order to amplify and promote narratives and artefacts, and encourage further growth of their network, as well as the ongoing sharing and engagement with operational content. + +**Tactic**: TA15 Establish Assets + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00108 How you thought you support the animals and you ended up funding white supremacists](../../generated_pages/incidents/I00108.md) | This article examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology:

Suavelos uses Facebook and other platforms to amplify its message. In order to bypass the platforms’ community standards and keep their public pages active, Facebook pages such as “I support the police” are a good vehicle to spread a specific agenda without claiming to be racist. In looking back at this Facebook page, we followed Facebook’s algorithm for related pages and found suggested Facebook pages

[...]

This amplification strategy on Facebook is successful, as according to SimilarWeb figures, it attracts around 111,000 visits every month on the Suavelos.eu website.

[...]

Revenue through online advertisements can be achieved by different platforms through targeted advertisements, like Google Adsense or Doubleclick, or related and similar sponsored content, such as Taboola. Accordingly, Suavelos.eu uses both of these websites to display advertisements and consequently receives funding from such advertisements.

Once visitors are on the website supporting its advertisement revenue, Suavelos’ goal is to then turn these visitors into regular members of Suavelos network through donations or fees, or have them continue to support Suavelos.


Suavelos created a variety of pages on Facebook which presented as centring on prosocial causes. Facebook’s algorithm helped direct users to these pages (T0092: Build Network, T0151.001: Social Media Platform, T0153.006: Content Recommendation Algorithm, T0151.003: Online Community Page, T0143.208: Social Cause Persona).

Suavelos used these pages to generate traffic for their WordPress site (T0122: Direct Users to Alternative Platforms, T0152.003: Website Hosting Platform, T0152.004: Website Asset), which used accounts on a variety of online advertising platforms to host adverts (T0146: Account Asset, T0153.005: Online Advertising Platform). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0092: Build Network + +**Summary**: Operators build their own network, creating links between accounts -- whether authentic or inauthentic -- in order to amplify and promote narratives and artefacts, and encourage further growth of their network, as well as the ongoing sharing and engagement with operational content. + **Tactic**: TA15 Establish Assets @@ -20,4 +64,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0093.001.md b/generated_pages/techniques/T0093.001.md index 549bace..884750c 100644 --- a/generated_pages/techniques/T0093.001.md +++ b/generated_pages/techniques/T0093.001.md @@ -2,6 +2,48 @@ **Summary**: An influence operation may fund proxies, or external entities that work for the operation. An operation may recruit/train users with existing sympathies towards the operation’s narratives and/or goals as proxies. Funding proxies serves various purposes including: - Diversifying operation locations to complicate attribution - Reducing the workload for direct operation assets +**Tactic**: TA15 Establish Assets **Parent Technique:** T0093 Acquire/Recruit Network + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0093.001: Fund Proxies + +**Summary**: An influence operation may fund proxies, or external entities that work for the operation. An operation may recruit/train users with existing sympathies towards the operation’s narratives and/or goals as proxies. Funding proxies serves various purposes including: - Diversifying operation locations to complicate attribution - Reducing the workload for direct operation assets + +**Tactic**: TA15 Establish Assets **Parent Technique:** T0093 Acquire/Recruit Network + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0093.001: Fund Proxies + +**Summary**: An influence operation may fund proxies, or external entities that work for the operation. An operation may recruit/train users with existing sympathies towards the operation’s narratives and/or goals as proxies. 
Funding proxies serves various purposes including: - Diversifying operation locations to complicate attribution - Reducing the workload for direct operation assets + **Tactic**: TA15 Establish Assets @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0093.002.md b/generated_pages/techniques/T0093.002.md index 6bc510f..6f33cc4 100644 --- a/generated_pages/techniques/T0093.002.md +++ b/generated_pages/techniques/T0093.002.md @@ -2,6 +2,48 @@ **Summary**: A botnet is a group of bots that can function in coordination with each other. +**Tactic**: TA15 Establish Assets **Parent Technique:** T0093 Acquire/Recruit Network + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0093.002: Acquire Botnets + +**Summary**: A botnet is a group of bots that can function in coordination with each other. + +**Tactic**: TA15 Establish Assets **Parent Technique:** T0093 Acquire/Recruit Network + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0093.002: Acquire Botnets + +**Summary**: A botnet is a group of bots that can function in coordination with each other. + **Tactic**: TA15 Establish Assets @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0093.md b/generated_pages/techniques/T0093.md index eac143c..7b41564 100644 --- a/generated_pages/techniques/T0093.md +++ b/generated_pages/techniques/T0093.md @@ -2,6 +2,48 @@ **Summary**: Operators acquire an existing network by paying, recruiting, or exerting control over the leaders of the existing network. +**Tactic**: TA15 Establish Assets + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0093: Acquire/Recruit Network + +**Summary**: Operators acquire an existing network by paying, recruiting, or exerting control over the leaders of the existing network. + +**Tactic**: TA15 Establish Assets + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0093: Acquire/Recruit Network + +**Summary**: Operators acquire an existing network by paying, recruiting, or exerting control over the leaders of the existing network. 
+ **Tactic**: TA15 Establish Assets @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0094.001.md b/generated_pages/techniques/T0094.001.md index 4152562..4657691 100644 --- a/generated_pages/techniques/T0094.001.md +++ b/generated_pages/techniques/T0094.001.md @@ -2,6 +2,48 @@ **Summary**: When seeking to infiltrate an existing network, an influence operation may identify individuals and groups that might be susceptible to being co-opted or influenced. +**Tactic**: TA15 Establish Assets **Parent Technique:** T0094 Infiltrate Existing Networks + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0094.001: Identify Susceptible Targets in Networks + +**Summary**: When seeking to infiltrate an existing network, an influence operation may identify individuals and groups that might be susceptible to being co-opted or influenced. + +**Tactic**: TA15 Establish Assets **Parent Technique:** T0094 Infiltrate Existing Networks + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0094.001: Identify Susceptible Targets in Networks + +**Summary**: When seeking to infiltrate an existing network, an influence operation may identify individuals and groups that might be susceptible to being co-opted or influenced. + **Tactic**: TA15 Establish Assets @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0094.002.md b/generated_pages/techniques/T0094.002.md index d69ea5a..bc79c47 100644 --- a/generated_pages/techniques/T0094.002.md +++ b/generated_pages/techniques/T0094.002.md @@ -2,6 +2,48 @@ **Summary**: Butterfly attacks occur when operators pretend to be members of a certain social group, usually a group that struggles for representation. An influence operation may mimic a group to insert controversial statements into the discourse, encourage the spread of operation content, or promote harassment among group members. Unlike astroturfing, butterfly attacks aim to infiltrate and discredit existing grassroots movements, organisations, and media campaigns. +**Tactic**: TA15 Establish Assets **Parent Technique:** T0094 Infiltrate Existing Networks + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0094.002: Utilise Butterfly Attacks + +**Summary**: Butterfly attacks occur when operators pretend to be members of a certain social group, usually a group that struggles for representation. An influence operation may mimic a group to insert controversial statements into the discourse, encourage the spread of operation content, or promote harassment among group members. Unlike astroturfing, butterfly attacks aim to infiltrate and discredit existing grassroots movements, organisations, and media campaigns. 
+ +**Tactic**: TA15 Establish Assets **Parent Technique:** T0094 Infiltrate Existing Networks + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0094.002: Utilise Butterfly Attacks + +**Summary**: Butterfly attacks occur when operators pretend to be members of a certain social group, usually a group that struggles for representation. An influence operation may mimic a group to insert controversial statements into the discourse, encourage the spread of operation content, or promote harassment among group members. Unlike astroturfing, butterfly attacks aim to infiltrate and discredit existing grassroots movements, organisations, and media campaigns. + **Tactic**: TA15 Establish Assets @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0094.md b/generated_pages/techniques/T0094.md index 6be4a56..2f4ae70 100644 --- a/generated_pages/techniques/T0094.md +++ b/generated_pages/techniques/T0094.md @@ -2,6 +2,48 @@ **Summary**: Operators deceptively insert social assets into existing networks as group members in order to influence the members of the network and the wider information environment that the network impacts. +**Tactic**: TA15 Establish Assets + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0094: Infiltrate Existing Networks + +**Summary**: Operators deceptively insert social assets into existing networks as group members in order to influence the members of the network and the wider information environment that the network impacts. + +**Tactic**: TA15 Establish Assets + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0094: Infiltrate Existing Networks + +**Summary**: Operators deceptively insert social assets into existing networks as group members in order to influence the members of the network and the wider information environment that the network impacts. + **Tactic**: TA15 Establish Assets @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0095.md b/generated_pages/techniques/T0095.md index 84e7035..594f734 100644 --- a/generated_pages/techniques/T0095.md +++ b/generated_pages/techniques/T0095.md @@ -2,6 +2,48 @@ **Summary**: An owned media asset refers to an agency or organisation through which an influence operation may create, develop, and host content and narratives. Owned media assets include websites, blogs, social media pages, forums, and other platforms that facilitate the creation and organisation of content. 
+**Tactic**: TA15 Establish Assets + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0095: Develop Owned Media Assets + +**Summary**: An owned media asset refers to an agency or organisation through which an influence operation may create, develop, and host content and narratives. Owned media assets include websites, blogs, social media pages, forums, and other platforms that facilitate the creation and organisation of content. + +**Tactic**: TA15 Establish Assets + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0095: Develop Owned Media Assets + +**Summary**: An owned media asset refers to an agency or organisation through which an influence operation may create, develop, and host content and narratives. Owned media assets include websites, blogs, social media pages, forums, and other platforms that facilitate the creation and organisation of content. + **Tactic**: TA15 Establish Assets @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0096.001.md b/generated_pages/techniques/T0096.001.md index 40ceecd..ab0e483 100644 --- a/generated_pages/techniques/T0096.001.md +++ b/generated_pages/techniques/T0096.001.md @@ -2,6 +2,48 @@ **Summary**: An influence operation may create an organisation for creating and amplifying campaign artefacts at scale. +**Tactic**: TA15 Establish Assets **Parent Technique:** T0096 Leverage Content Farms + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0096.001: Create Content Farms + +**Summary**: An influence operation may create an organisation for creating and amplifying campaign artefacts at scale. + +**Tactic**: TA15 Establish Assets **Parent Technique:** T0096 Leverage Content Farms + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0096.001: Create Content Farms + +**Summary**: An influence operation may create an organisation for creating and amplifying campaign artefacts at scale. + **Tactic**: TA15 Establish Assets @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0096.002.md b/generated_pages/techniques/T0096.002.md index eb83c8f..aba86f0 100644 --- a/generated_pages/techniques/T0096.002.md +++ b/generated_pages/techniques/T0096.002.md @@ -2,6 +2,48 @@ **Summary**: An influence operation may outsource content creation to external companies to avoid attribution, increase the rate of content creation, or improve content quality, i.e., by employing an organisation that can create content in the target audience’s native language. 
Employed organisations may include marketing companies for tailored advertisements or external content farms for high volumes of targeted media. +**Tactic**: TA15 Establish Assets **Parent Technique:** T0096 Leverage Content Farms + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0096.002: Outsource Content Creation to External Organisations + +**Summary**: An influence operation may outsource content creation to external companies to avoid attribution, increase the rate of content creation, or improve content quality, i.e., by employing an organisation that can create content in the target audience’s native language. Employed organisations may include marketing companies for tailored advertisements or external content farms for high volumes of targeted media. + +**Tactic**: TA15 Establish Assets **Parent Technique:** T0096 Leverage Content Farms + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0096.002: Outsource Content Creation to External Organisations + +**Summary**: An influence operation may outsource content creation to external companies to avoid attribution, increase the rate of content creation, or improve content quality, i.e., by employing an organisation that can create content in the target audience’s native language. Employed organisations may include marketing companies for tailored advertisements or external content farms for high volumes of targeted media. + **Tactic**: TA15 Establish Assets @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0096.md b/generated_pages/techniques/T0096.md index 6ae2c45..f98df79 100644 --- a/generated_pages/techniques/T0096.md +++ b/generated_pages/techniques/T0096.md @@ -2,6 +2,48 @@ **Summary**: Using the services of large-scale content providers for creating and amplifying campaign artefacts at scale. +**Tactic**: TA15 Establish Assets + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0096: Leverage Content Farms + +**Summary**: Using the services of large-scale content providers for creating and amplifying campaign artefacts at scale. + +**Tactic**: TA15 Establish Assets + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0096: Leverage Content Farms + +**Summary**: Using the services of large-scale content providers for creating and amplifying campaign artefacts at scale. 
+ **Tactic**: TA15 Establish Assets @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0097.100.md b/generated_pages/techniques/T0097.100.md index 1e71fbb..f83aa65 100644 --- a/generated_pages/techniques/T0097.100.md +++ b/generated_pages/techniques/T0097.100.md @@ -2,6 +2,54 @@ **Summary**: This sub-technique can be used to indicate that an entity is presenting itself as an individual. If the person is presenting themselves as having one of the personas listed below then these sub-techniques should be used instead, as they indicate both the type of persona they presented and that the entity presented itself as an individual:

T0097.101: Local Persona
T0097.102: Journalist Persona
T0097.103: Activist Persona
T0097.104: Hacktivist Persona
T0097.105: Military Personnel Persona
T0097.106: Recruiter Persona
T0097.107: Researcher Persona
T0097.108: Expert Persona
T0097.109: Romantic Suitor Persona
T0097.110: Party Official Persona
T0097.111: Government Official Persona
T0097.112: Government Employee Persona +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0097 Present Persona + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00068 Attempted Audio Deepfake Call Targets LastPass Employee](../../generated_pages/incidents/I00068.md) | “While reports of [...] deepfake calls targeting private companies are luckily still rare, LastPass itself experienced a deepfake attempt earlier today that we are sharing with the larger community to help raise awareness that this tactic is spreading and all companies should be on the alert. In our case, an employee received a series of calls, texts, and at least one voicemail featuring an audio deepfake from a threat actor impersonating our CEO via WhatsApp. As the attempted communication was outside of normal business communication channels and due to the employee’s suspicion regarding the presence of many of the hallmarks of a social engineering attempt (such as forced urgency), our employee rightly ignored the messages and reported the incident to our internal security team so that we could take steps to both mitigate the threat and raise awareness of the tactic both internally and externally.”

In this example attackers impersonated the CEO of LastPass (T0097.100: Individual Persona, T0143.003: Impersonated Persona), targeting one of its employees over WhatsApp (T0151.004: Chat Platform, T0155.007: Encrypted Communication Channel) using deepfaked audio (T0088.001: Develop AI-Generated Audio (Deepfakes)). | +| [I00069 Uncharmed: Untangling Iran's APT42 Operations](../../generated_pages/incidents/I00069.md) | “[Iranian state-sponsored cyber espionage actor] APT42 cloud operations attack lifecycle can be described in details as follows:

- “Social engineering schemes involving decoys and trust building, which includes masquerading as legitimate NGOs and conducting ongoing correspondence with the target, sometimes lasting several weeks.
- The threat actor masqueraded as well-known international organizations in the legal and NGO fields and sent emails from domains typosquatting the original NGO domains, for example aspenlnstitute[.]org.
- The Aspen Institute became aware of this spoofed domain and collaborated with industry partners, including blocking it in SafeBrowsing, thus protecting users of Google Chrome and additional browsers.
- To increase their credibility, APT42 impersonated high-ranking personnel working at the aforementioned organizations when creating the email personas.
- APT42 enhanced their campaign credibility by using decoy material inviting targets to legitimate and relevant events and conferences. In one instance, the decoy material was hosted on an attacker-controlled SharePoint folder, accessible only after the victim entered their credentials. Mandiant did not identify malicious elements in the files, suggesting they were used solely to gain the victim’s trust.”


In this example APT42, an Iranian state-sponsored cyber espionage actor, created a domain impersonating the existing NGO The Aspen Institute (T0143.003: Impersonated Persona, T0097.207: NGO Persona). They increased the perceived legitimacy of the impersonation by also impersonating high-ranking employees of the NGO (T0097.100: Individual Persona, T0143.003: Impersonated Persona). | +| [I00119 Independent journalist publishes Trump campaign document hacked by Iran despite election interference concerns](../../generated_pages/incidents/I00119.md) | An American journalist who runs an independent newsletter published a document [on 26 Sep 2024] that appears to have been stolen from Donald Trump’s presidential campaign — the first public posting of a file that is believed to be part of a dossier that federal officials say is part of an Iranian effort to manipulate the [2024] U.S. election.

The PDF document is a 271-page opposition research file on former President Donald Trump’s running mate, Sen. JD Vance, R-Ohio.

For more than two months, hackers who the U.S. says are tied to Iran have tried to persuade the American media to cover files they stole. No outlets took the bait.

But on Thursday, reporter Ken Klippenstein, who self-publishes on Substack after he left The Intercept this year, published one of the files.

[...]

Reporters who have received the documents describe the same pattern: An AOL account emails them files, signed by a person using the name “Robert,” who is reluctant to speak to their identity or reasons for wanting the documents to receive coverage.

NBC News was not part of the Robert persona’s direct outreach, but it has viewed its correspondence with a reporter at another publication.

One of the emails from the Robert persona previously viewed by NBC News included three large PDF files, each corresponding to Trump’s three reported finalists for vice president. The Vance file appears to be the one Klippenstein hosts on his site.


In this example hackers attributed to Iran used the Robert persona to email journalists hacked documents (T0146: Account Asset, T0097.100: Individual Persona, T0153.001: Email Platform).

The journalist Ken Klippenstein used his existing blog on Substack to host a link to download the document (T0089: Obtain Private Documents, T0097.102: Journalist Persona, T0115: Post Content, T0143.001: Authentic Persona, T0152.001: Blogging Platform, T0152.002: Blog Asset, T0150.003: Pre-Existing Asset). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0097.100: Individual Persona + +**Summary**: This sub-technique can be used to indicate that an entity is presenting itself as an individual. If the person is presenting themselves as having one of the personas listed below then these sub-techniques should be used instead, as they indicate both the type of persona they presented and that the entity presented itself as an individual:

T0097.101: Local Persona
T0097.102: Journalist Persona
T0097.103: Activist Persona
T0097.104: Hacktivist Persona
T0097.105: Military Personnel Persona
T0097.106: Recruiter Persona
T0097.107: Researcher Persona
T0097.108: Expert Persona
T0097.109: Romantic Suitor Persona
T0097.110: Party Official Persona
T0097.111: Government Official Persona
T0097.112: Government Employee Persona + +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0097 Present Persona + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00068 Attempted Audio Deepfake Call Targets LastPass Employee](../../generated_pages/incidents/I00068.md) | “While reports of [...] deepfake calls targeting private companies are luckily still rare, LastPass itself experienced a deepfake attempt earlier today that we are sharing with the larger community to help raise awareness that this tactic is spreading and all companies should be on the alert. In our case, an employee received a series of calls, texts, and at least one voicemail featuring an audio deepfake from a threat actor impersonating our CEO via WhatsApp. As the attempted communication was outside of normal business communication channels and due to the employee’s suspicion regarding the presence of many of the hallmarks of a social engineering attempt (such as forced urgency), our employee rightly ignored the messages and reported the incident to our internal security team so that we could take steps to both mitigate the threat and raise awareness of the tactic both internally and externally.”

In this example attackers impersonated the CEO of LastPass (T0097.100: Individual Persona, T0143.003: Impersonated Persona), targeting one of its employees over WhatsApp (T0151.004: Chat Platform, T0155.007: Encrypted Communication Channel) using deepfaked audio (T0088.001: Develop AI-Generated Audio (Deepfakes)). | +| [I00069 Uncharmed: Untangling Iran's APT42 Operations](../../generated_pages/incidents/I00069.md) | “[Iranian state-sponsored cyber espionage actor] APT42 cloud operations attack lifecycle can be described in details as follows:

- “Social engineering schemes involving decoys and trust building, which includes masquerading as legitimate NGOs and conducting ongoing correspondence with the target, sometimes lasting several weeks.
- The threat actor masqueraded as well-known international organizations in the legal and NGO fields and sent emails from domains typosquatting the original NGO domains, for example aspenlnstitute[.]org.
- The Aspen Institute became aware of this spoofed domain and collaborated with industry partners, including blocking it in SafeBrowsing, thus protecting users of Google Chrome and additional browsers.
- To increase their credibility, APT42 impersonated high-ranking personnel working at the aforementioned organizations when creating the email personas.
- APT42 enhanced their campaign credibility by using decoy material inviting targets to legitimate and relevant events and conferences. In one instance, the decoy material was hosted on an attacker-controlled SharePoint folder, accessible only after the victim entered their credentials. Mandiant did not identify malicious elements in the files, suggesting they were used solely to gain the victim’s trust.”


In this example APT42, an Iranian state-sponsored cyber espionage actor, created a domain impersonating the existing NGO The Aspen Institute (T0143.003: Impersonated Persona, T0097.207: NGO Persona). They increased the perceived legitimacy of the impersonation by also impersonating high-ranking employees of the NGO (T0097.100: Individual Persona, T0143.003: Impersonated Persona). | +| [I00119 Independent journalist publishes Trump campaign document hacked by Iran despite election interference concerns](../../generated_pages/incidents/I00119.md) | An American journalist who runs an independent newsletter published a document [on 26 Sep 2024] that appears to have been stolen from Donald Trump’s presidential campaign — the first public posting of a file that is believed to be part of a dossier that federal officials say is part of an Iranian effort to manipulate the [2024] U.S. election.

The PDF document is a 271-page opposition research file on former President Donald Trump’s running mate, Sen. JD Vance, R-Ohio.

For more than two months, hackers who the U.S. says are tied to Iran have tried to persuade the American media to cover files they stole. No outlets took the bait.

But on Thursday, reporter Ken Klippenstein, who self-publishes on Substack after he left The Intercept this year, published one of the files.

[...]

Reporters who have received the documents describe the same pattern: An AOL account emails them files, signed by a person using the name “Robert,” who is reluctant to speak to their identity or reasons for wanting the documents to receive coverage.

NBC News was not part of the Robert persona’s direct outreach, but it has viewed its correspondence with a reporter at another publication.

One of the emails from the Robert persona previously viewed by NBC News included three large PDF files, each corresponding to Trump’s three reported finalists for vice president. The Vance file appears to be the one Klippenstein hosts on his site.


In this example hackers attributed to Iran used the Robert persona to email journalists hacked documents (T0146: Account Asset, T0097.100: Individual Persona, T0153.001: Email Platform).

The journalist Ken Klippenstein used his existing blog on Substack to host a link to download the document (T0089: Obtain Private Documents, T0097.102: Journalist Persona, T0115: Post Content, T0143.001: Authentic Persona, T0152.001: Blogging Platform, T0152.002: Blog Asset, T0150.003: Pre-Existing Asset). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0097.100: Individual Persona + +**Summary**: This sub-technique can be used to indicate that an entity is presenting itself as an individual. If the person is presenting themselves as having one of the personas listed below then these sub-techniques should be used instead, as they indicate both the type of persona they presented and that the entity presented itself as an individual:

T0097.101: Local Persona
T0097.102: Journalist Persona
T0097.103: Activist Persona
T0097.104: Hacktivist Persona
T0097.105: Military Personnel Persona
T0097.106: Recruiter Persona
T0097.107: Researcher Persona
T0097.108: Expert Persona
T0097.109: Romantic Suitor Persona
T0097.110: Party Official Persona
T0097.111: Government Official Persona
T0097.112: Government Employee Persona + **Tactic**: TA16 Establish Legitimacy @@ -22,4 +70,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0097.101.md b/generated_pages/techniques/T0097.101.md index 954db41..270dbd2 100644 --- a/generated_pages/techniques/T0097.101.md +++ b/generated_pages/techniques/T0097.101.md @@ -2,6 +2,61 @@ **Summary**: A person with a local persona presents themselves as living in a particular geography or having local knowledge relevant to a narrative.

While presenting as a local is not an indication of inauthentic behaviour, an influence operation may have its narratives amplified by people presenting as local to a target area. Threat actors can fabricate locals (T0143.002: Fabricated Persona, T0097.101: Local Persona) to add credibility to their narratives, or to misrepresent the real opinions of locals in the area.

People who are legitimate locals (T0143.001: Authentic Persona, T0097.101: Local Persona) can use their persona for malicious purposes, or be exploited by threat actors. For example, someone could take money for using their position as a local to provide legitimacy to a false narrative or be tricked into doing so without their knowledge. +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0097 Present Persona + + +| Associated Technique | Description | +| --------- | ------------------------- | +| [T0097.201 Local Institution Persona](../../generated_pages/techniques/T0097.201.md) | Analysts should use this sub-technique to catalogue cases where an institution is presenting as a local, such as a local news organisation or local business. | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00071 Russia-aligned hacktivists stir up anti-Ukrainian sentiments in Poland](../../generated_pages/incidents/I00071.md) | “The August 17 [2022] Telegram post [which contained a falsified letter from the Ukrainian Minister of Foreign Affairs asking Poland to rename Belwederska Street in Warsaw — the location of the Russian embassy building — as Stepan Bandera Street, in honor of the far-right nationalist who led the Ukrainian Insurgent Army during WWII] also contained screenshots of Facebook posts that appeared on two Facebook accounts belonging to Polish nationals Piotr Górka, an expert in the history of the Polish Air Force, and Dariusz Walusiak, a Polish historian and documentary maker. The Górka post suggested that he fully supported the Polish government’s decision to change Belwederska Street to Stepan Bandera Street.

“In a statement to the DFRLab, Górka said his account was accessed without his consent. “This is not my post loaded to my Facebook page,” he explained. “My site was hacked, some days ago.” At the time of publishing, Piotr Górka’s post and his Facebook account were no longer accessible.

“The post on Górka’s Facebook page was shared by Dariusz Walusiak’s Facebook account; the account also reposted it on the Facebook walls of more than twenty other Facebook users, including Adam Kalita, currently working at Krakow branch of the Institute of National Remembrance; Jan Kasprzyk, head of the Office for War Veterans and Victims of Oppression; and Alicja Kondraciuk, a Polish public figure living in Krakow.

“Walusiak’s Facebook account is also no longer accessible. Given his work on Polish history and identity, it seems highly unlikely he would support the Bandera measure; the DFRLab has also reached out to him for comment.

“The fact that Joker DPR’s Telegram post included screenshots of their Facebook posts raises the strong possibility that both Facebook accounts were compromised, and that hackers planted false statements on their pages that would seem out of character for them in order to gain further attention to the forged documents.”


In this example, threat actors used compromised accounts of Polish historians who have enough relevant knowledge to plausibly weigh in on the forged letter’s narrative (T0143.003: Impersonated Persona, T0097.101: Local Persona, T0097.108: Expert Persona, T0146: Account Asset, T0150.005: Compromised Asset, T0151.001: Social Media Platform).

This matches T0097.108: Expert Persona because the impersonation exploited Górka and Walusiak’s existing personas as experts in Polish history. | +| [I00076 Network of Social Media Accounts Impersonates U.S. Political Candidates, Leverages U.S. and Israeli Media in Support of Iranian Interests](../../generated_pages/incidents/I00076.md) | “In addition to directly posting material on social media, we observed some personas in the network [of inauthentic accounts attributed to Iran] leverage legitimate print and online media outlets in the U.S. and Israel to promote Iranian interests via the submission of letters, guest columns, and blog posts that were then published. We also identified personas that we suspect were fabricated for the sole purpose of submitting such letters, but that do not appear to maintain accounts on social media. The personas claimed to be based in varying locations depending on the news outlets they were targeting for submission; for example, a persona that listed their location as Seattle, WA in a letter submitted to the Seattle Times subsequently claimed to be located in Baytown, TX in a letter submitted to The Baytown Sun. Other accounts in the network then posted links to some of these letters on social media.”

In this example actors fabricated individuals who lived in areas which were being targeted for influence through the use of letters to local papers (T0097.101: Local Persona, T0143.002: Fabricated Persona). | +| [I00081 Belarus KGB created fake accounts to criticize Poland during border crisis, Facebook parent company says](../../generated_pages/incidents/I00081.md) | “Meta said it also removed 31 Facebook accounts, four groups, two events and four Instagram accounts that it believes originated in Poland and targeted Belarus and Iraq. Those allegedly fake accounts posed as Middle Eastern migrants posting about the border crisis. Meta did not link the accounts to a specific group.

““These fake personas claimed to be sharing their own negative experiences of trying to get from Belarus to Poland and posted about migrants’ difficult lives in Europe,” Meta said. “They also posted about Poland’s strict anti-migrant policies and anti-migrant neo-Nazi activity in Poland. They also shared links to news articles criticizing the Belarusian government’s handling of the border crisis and off-platform videos alleging migrant abuse in Europe.””


In this example accounts falsely presented themselves as having local insight into the border crisis narrative (T0097.101: Local Persona, T0143.002: Fabricated Persona). | +| [I00086 #WeAreNotSafe – Exposing How a Post-October 7th Disinformation Network Operates on Israeli Social Media](../../generated_pages/incidents/I00086.md) | Accounts which were identified as part of “a sophisticated and extensive coordinated network orchestrating a disinformation campaign targeting Israeli digital spaces since October 7th, 2023” were presenting themselves as locals to Israel (T0097.101: Local Persona):

“Unlike usual low-effort fake accounts, these accounts meticulously mimic young Israelis. They stand out due to the extraordinary lengths taken to ensure their authenticity, from unique narratives to the content they produce to their seemingly authentic interactions.” | +| [I00087 Challenging Truth and Trust: A Global Inventory of Organized Social Media Manipulation](../../generated_pages/incidents/I00087.md) | “Another actor operating in China is the American-based company Devumi. Most of the Twitter accounts managed by Devumi resemble real people, and some are even associated with a kind of large-scale social identity theft. At least 55,000 of the accounts use the names, profile pictures, hometowns and other personal details of real Twitter users, including minors, according to The New York Times (Confessore et al., 2018).”

In this example accounts impersonated real locals while spreading operation narratives (T0143.003: Impersonated Persona, T0097.101: Local Persona). The impersonation included stealing the legitimate accounts’ profile pictures (T0145.001: Copy Account Imagery). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0097.101: Local Persona + +**Summary**: A person with a local persona presents themselves as living in a particular geography or having local knowledge relevant to a narrative.

While presenting as a local is not an indication of inauthentic behaviour, an influence operation may have its narratives amplified by people presenting as local to a target area. Threat actors can fabricate locals (T0143.002: Fabricated Persona, T0097.101: Local Persona) to add credibility to their narratives, or to misrepresent the real opinions of locals in the area.

People who are legitimate locals (T0143.001: Authentic Persona, T0097.101: Local Persona) can use their persona for malicious purposes, or be exploited by threat actors. For example, someone could take money for using their position as a local to provide legitimacy to a false narrative or be tricked into doing so without their knowledge. + +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0097 Present Persona + + +| Associated Technique | Description | +| --------- | ------------------------- | +| [T0097.201 Local Institution Persona](../../generated_pages/techniques/T0097.201.md) | Analysts should use this sub-technique to catalogue cases where an institution is presenting as a local, such as a local news organisation or local business. | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00071 Russia-aligned hacktivists stir up anti-Ukrainian sentiments in Poland](../../generated_pages/incidents/I00071.md) | “The August 17 [2022] Telegram post [which contained a falsified letter from the Ukrainian Minister of Foreign Affairs asking Poland to rename Belwederska Street in Warsaw — the location of the Russian embassy building — as Stepan Bandera Street, in honor of the far-right nationalist who led the Ukrainian Insurgent Army during WWII] also contained screenshots of Facebook posts that appeared on two Facebook accounts belonging to Polish nationals Piotr Górka, an expert in the history of the Polish Air Force, and Dariusz Walusiak, a Polish historian and documentary maker. The Górka post suggested that he fully supported the Polish government’s decision to change Belwederska Street to Stepan Bandera Street.

“In a statement to the DFRLab, Górka said his account was accessed without his consent. “This is not my post loaded to my Facebook page,” he explained. “My site was hacked, some days ago.” At the time of publishing, Piotr Górka’s post and his Facebook account were no longer accessible.

“The post on Górka’s Facebook page was shared by Dariusz Walusiak’s Facebook account; the account also reposted it on the Facebook walls of more than twenty other Facebook users, including Adam Kalita, currently working at Krakow branch of the Institute of National Remembrance; Jan Kasprzyk, head of the Office for War Veterans and Victims of Oppression; and Alicja Kondraciuk, a Polish public figure living in Krakow.

“Walusiak’s Facebook account is also no longer accessible. Given his work on Polish history and identity, it seems highly unlikely he would support the Bandera measure; the DFRLab has also reached out to him for comment.

“The fact that Joker DPR’s Telegram post included screenshots of their Facebook posts raises the strong possibility that both Facebook accounts were compromised, and that hackers planted false statements on their pages that would seem out of character for them in order to gain further attention to the forged documents.”


In this example, threat actors used compromised accounts of Polish historians who have enough relevant knowledge to plausibly weigh in on the forged letter’s narrative (T0143.003: Impersonated Persona, T0097.101: Local Persona, T0097.108: Expert Persona, T0146: Account Asset, T0150.005: Compromised Asset, T0151.001: Social Media Platform).

This matches T0097.108: Expert Persona because the impersonation exploited Górka and Walusiak’s existing personas as experts in Polish history. | +| [I00076 Network of Social Media Accounts Impersonates U.S. Political Candidates, Leverages U.S. and Israeli Media in Support of Iranian Interests](../../generated_pages/incidents/I00076.md) | “In addition to directly posting material on social media, we observed some personas in the network [of inauthentic accounts attributed to Iran] leverage legitimate print and online media outlets in the U.S. and Israel to promote Iranian interests via the submission of letters, guest columns, and blog posts that were then published. We also identified personas that we suspect were fabricated for the sole purpose of submitting such letters, but that do not appear to maintain accounts on social media. The personas claimed to be based in varying locations depending on the news outlets they were targeting for submission; for example, a persona that listed their location as Seattle, WA in a letter submitted to the Seattle Times subsequently claimed to be located in Baytown, TX in a letter submitted to The Baytown Sun. Other accounts in the network then posted links to some of these letters on social media.”

In this example actors fabricated individuals who lived in areas which were being targeted for influence through the use of letters to local papers (T0097.101: Local Persona, T0143.002: Fabricated Persona). | +| [I00078 Meta’s September 2020 Removal of Coordinated Inauthentic Behavior](../../generated_pages/incidents/I00078.md) | “[Meta has] removed one Page, five Facebook accounts, one Group and three Instagram accounts for foreign or government interference which is coordinated inauthentic behavior on behalf of a foreign or government entity. This small network originated in Russia and focused primarily on Turkey and Europe, and also on the United States.

“This operation relied on fake accounts — some of which had been already detected and removed by our automated systems — to manage their Page and their Group, and to drive people to their site purporting to be an independent think-tank based primarily in Turkey. These accounts posed as locals based in Turkey, Canada and the US. They also recruited people to write for their website. This network had almost no following on our platforms when we removed it.”


Meta identified that a network of accounts originating in Russia was driving people off-platform to a site which presented itself as a think tank (T0097.204: Think Tank Persona). Meta did not make an attribution about the authenticity of this off-site think tank, so neither T0143.001: Authentic Persona nor T0143.002: Fabricated Persona is used here.

Meta had access to technical data for accounts on its platform, and asserted that they were fabricated individuals posing as locals who recruited targets to write content for their website (T0097.101: Local Persona, T0097.106: Recruiter Persona, T0143.002: Fabricated Persona). | +| [I00081 Belarus KGB created fake accounts to criticize Poland during border crisis, Facebook parent company says](../../generated_pages/incidents/I00081.md) | “Meta said it also removed 31 Facebook accounts, four groups, two events and four Instagram accounts that it believes originated in Poland and targeted Belarus and Iraq. Those allegedly fake accounts posed as Middle Eastern migrants posting about the border crisis. Meta did not link the accounts to a specific group.

““These fake personas claimed to be sharing their own negative experiences of trying to get from Belarus to Poland and posted about migrants’ difficult lives in Europe,” Meta said. “They also posted about Poland’s strict anti-migrant policies and anti-migrant neo-Nazi activity in Poland. They also shared links to news articles criticizing the Belarusian government’s handling of the border crisis and off-platform videos alleging migrant abuse in Europe.””


In this example, accounts falsely presented themselves as having local insight into the border crisis narrative (T0097.101: Local Persona, T0143.002: Fabricated Persona). | +| [I00086 #WeAreNotSafe – Exposing How a Post-October 7th Disinformation Network Operates on Israeli Social Media](../../generated_pages/incidents/I00086.md) | Accounts which were identified as part of “a sophisticated and extensive coordinated network orchestrating a disinformation campaign targeting Israeli digital spaces since October 7th, 2023” were presenting themselves as Israeli locals (T0097.101: Local Persona):

“Unlike usual low-effort fake accounts, these accounts meticulously mimic young Israelis. They stand out due to the extraordinary lengths taken to ensure their authenticity, from unique narratives to the content they produce to their seemingly authentic interactions.” | +| [I00087 Challenging Truth and Trust: A Global Inventory of Organized Social Media Manipulation](../../generated_pages/incidents/I00087.md) | “Another actor operating in China is the American-based company Devumi. Most of the Twitter accounts managed by Devumi resemble real people, and some are even associated with a kind of large-scale social identity theft. At least 55,000 of the accounts use the names, profile pictures, hometowns and other personal details of real Twitter users, including minors, according to The New York Times (Confessore et al., 2018)).”

In this example accounts impersonated real locals while spreading operation narratives (T0143.003: Impersonated Persona, T0097.101: Local Persona). The impersonation included stealing the legitimate accounts’ profile pictures (T0145.001: Copy Account Imagery). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0097.101: Local Persona + +**Summary**: A person with a local persona presents themselves as living in a particular geography or having local knowledge relevant to a narrative.

While presenting as a local is not an indication of inauthentic behaviour, an influence operation may have its narratives amplified by people presenting as local to a target area. Threat actors can fabricate locals (T0143.002: Fabricated Persona, T0097.101: Local Persona) to add credibility to their narratives, or to misrepresent the real opinions of locals in the area.

People who are legitimate locals (T0143.001: Authentic Persona, T0097.101: Local Persona) can use their persona for malicious purposes, or be exploited by threat actors. For example, someone could take money for using their position as a local to provide legitimacy to a false narrative or be tricked into doing so without their knowledge. + **Tactic**: TA16 Establish Legitimacy @@ -26,4 +81,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0097.102.md b/generated_pages/techniques/T0097.102.md index 2a64d41..bfb9cf4 100644 --- a/generated_pages/techniques/T0097.102.md +++ b/generated_pages/techniques/T0097.102.md @@ -2,6 +2,62 @@ **Summary**: A person with a journalist persona presents themselves as a reporter or journalist delivering news, conducting interviews, investigations etc.

While presenting as a journalist is not an indication of inauthentic behaviour, an influence operation may have its narratives amplified by people presenting as journalists. Threat actors can fabricate journalists to give the appearance of legitimacy, justifying the actor’s requests for interviews, etc. (T0143.002: Fabricated Persona, T0097.102: Journalist Persona).

People who have legitimately developed a persona as a journalist (T0143.001: Authentic Persona, T0097.102: Journalist Persona) can use it for malicious purposes, or be exploited by threat actors. For example, someone could take money for using their position as a trusted journalist to provide legitimacy to a false narrative or be tricked into doing so without the journalist’s knowledge. +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0097 Present Persona + + +| Associated Technique | Description | +| --------- | ------------------------- | +| [T0097.101 Local Persona](../../generated_pages/techniques/T0097.101.md) | People with a journalist persona may present themselves as local reporters. | +| [T0097.202 News Outlet Persona](../../generated_pages/techniques/T0097.202.md) | People with a journalist persona may present as being part of a news organisation. | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00076 Network of Social Media Accounts Impersonates U.S. Political Candidates, Leverages U.S. and Israeli Media in Support of Iranian Interests](../../generated_pages/incidents/I00076.md) | “Accounts in the network [of inauthentic accounts attributed to Iran], under the guise of journalist personas, also solicited various individuals over Twitter for interviews and chats, including real journalists and politicians. The personas appear to have successfully conducted remote video and audio interviews with U.S. and UK-based individuals, including a prominent activist, a radio talk show host, and a former U.S. Government official, and subsequently posted the interviews on social media, showing only the individual being interviewed and not the interviewer. The interviewees expressed views that Iran would likely find favorable, discussing topics such as the February 2019 Warsaw summit, an attack on a military parade in the Iranian city of Ahvaz, and the killing of Jamal Khashoggi.

“The provenance of these interviews appear to have been misrepresented on at least one occasion, with one persona appearing to have falsely claimed to be operating on behalf of a mainstream news outlet; a remote video interview with a US-based activist about the Jamal Khashoggi killing was posted by an account adopting the persona of a journalist from the outlet Newsday, with the Newsday logo also appearing in the video. We did not identify any Newsday interview with the activist in question on this topic. In another instance, a persona posing as a journalist directed tweets containing audio of an interview conducted with a former U.S. Government official at real media personalities, calling on them to post about the interview.”


In this example actors fabricated journalists (T0097.102: Journalist Persona, T0143.002: Fabricated Persona) who worked at existing news outlets (T0097.202: News Outlet Persona, T0143.003: Impersonated Persona) in order to conduct interviews with targeted individuals. | +| [I00080 Hundreds Of Propaganda Accounts Targeting Iran And Qatar Have Been Removed From Facebook](../../generated_pages/incidents/I00080.md) | “One example of a fake reporter account targeting Americans is “Jenny Powell,” a self-described Washington-based journalist, volunteer, and environmental activist. At first glance, Powell’s Twitter timeline looks like it belongs to a young and eager reporter amplifying her interests. But her profile photo is a stock image, and many of her links go to the propaganda sites.

“Powell, who joined the platform just last month, shares links to stories from major US news media outlets, retweets local news about Washington, DC, and regularly promotes content from The Foreign Code and The Economy Club. Other fake journalist accounts behaved similarly to Powell and had generic descriptions. One of the accounts, for a fake Bruce Lopez in Louisiana, has a bio that describes him as a “Correspondent Traveler (noun) (linking verb) (noun/verb/adjective),” which appears to reveal the formula used to write Twitter bios for the accounts.”


The Jenny Powell account used in this influence operation presents as both a journalist and an activist (T0097.102: Journalist Persona, T0097.103: Activist Persona, T0143.002: Fabricated Persona). This example shows how threat actors can easily follow a template to present a fabricated persona to their target audience (T0144.002: Persona Template). | +| [I00082 Meta’s November 2021 Adversarial Threat Report ](../../generated_pages/incidents/I00082.md) | “[Meta] removed 41 Facebook accounts, five Groups, and four Instagram accounts for violating our policy against coordinated inauthentic behavior. This activity originated in Belarus and primarily targeted audiences in the Middle East and Europe.

“The core of this activity began in October 2021, with some accounts created as recently as mid-November. The people behind it used newly-created fake accounts — many of which were detected and disabled by our automated systems soon after creation — to pose as journalists and activists from the European Union, particularly Poland and Lithuania. Some of the accounts used profile photos likely generated using artificial intelligence techniques like generative adversarial networks (GAN). These fictitious personas posted criticism of Poland in English, Polish, and Kurdish, including pictures and videos about Polish border guards allegedly violating migrants’ rights, and compared Poland’s treatment of migrants against other countries’. They also posted to Groups focused on the welfare of migrants in Europe. A few accounts posted in Russian about relations between Belarus and the Baltic States.”


This example shows how accounts identified as participating in coordinated inauthentic behaviour were presenting themselves as journalists and activists while spreading operation narratives (T0097.102: Journalist Persona, T0097.103: Activist Persona).

Additionally, analysts at Meta identified accounts participating in coordinated inauthentic behaviour which had likely used AI-generated images as their profile pictures (T0145.002: AI-Generated Account Imagery). | +| [I00126 Charming Kitten Updates POWERSTAR with an InterPlanetary Twist](../../generated_pages/incidents/I00126.md) | The target of the recently observed [highly targeted spearphishing attack by “Charming Kitten”, a hacker group attributed to Iran] had published an article related to Iran. The publicity appears to have garnered the attention of Charming Kitten, who subsequently created an email address to impersonate a reporter of an Israeli media organization in order to send the target an email. Prior to sending malware to the target, the attacker simply asked if the target would be open to reviewing a document they had written related to US foreign policy. The target agreed to do so, since this was not an unusual request; they are frequently asked by journalists to review opinion pieces relating to their field of work.

In an effort to further gain the target’s confidence, Charming Kitten continued the interaction with another benign email containing a list of questions, to which the target then responded with answers. After multiple days of benign and seemingly legitimate interaction, Charming Kitten finally sent a “draft report”; this was the first time anything opaquely malicious occurred. The “draft report” was, in fact, a password-protected RAR file containing a malicious LNK file. The password for the RAR file was provided in a subsequent email.


In this example, threat actors created an email address on a domain which impersonated an existing Israeli news organisation, and posed as a reporter who worked there (T0097.102: Journalist Persona, T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0149.003: Lookalike Domain, T0149.002: Email Domain Asset) in order to convince the target to download a document containing malware (T0085.004: Develop Document, T0147.003: Malware Asset). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0097.102: Journalist Persona + +**Summary**: A person with a journalist persona presents themselves as a reporter or journalist delivering news, conducting interviews, investigations etc.

While presenting as a journalist is not an indication of inauthentic behaviour, an influence operation may have its narratives amplified by people presenting as journalists. Threat actors can fabricate journalists to give the appearance of legitimacy, justifying the actor’s requests for interviews, etc. (T0143.002: Fabricated Persona, T0097.102: Journalist Persona).

People who have legitimately developed a persona as a journalist (T0143.001: Authentic Persona, T0097.102: Journalist Persona) can use it for malicious purposes, or be exploited by threat actors. For example, someone could take money for using their position as a trusted journalist to provide legitimacy to a false narrative or be tricked into doing so without the journalist’s knowledge. + +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0097 Present Persona + + +| Associated Technique | Description | +| --------- | ------------------------- | +| [T0097.101 Local Persona](../../generated_pages/techniques/T0097.101.md) | People with a journalist persona may present themselves as local reporters. | +| [T0097.202 News Outlet Persona](../../generated_pages/techniques/T0097.202.md) | People with a journalist persona may present as being part of a news organisation. | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00076 Network of Social Media Accounts Impersonates U.S. Political Candidates, Leverages U.S. and Israeli Media in Support of Iranian Interests](../../generated_pages/incidents/I00076.md) | “Accounts in the network [of inauthentic accounts attributed to Iran], under the guise of journalist personas, also solicited various individuals over Twitter for interviews and chats, including real journalists and politicians. The personas appear to have successfully conducted remote video and audio interviews with U.S. and UK-based individuals, including a prominent activist, a radio talk show host, and a former U.S. Government official, and subsequently posted the interviews on social media, showing only the individual being interviewed and not the interviewer. The interviewees expressed views that Iran would likely find favorable, discussing topics such as the February 2019 Warsaw summit, an attack on a military parade in the Iranian city of Ahvaz, and the killing of Jamal Khashoggi.

“The provenance of these interviews appear to have been misrepresented on at least one occasion, with one persona appearing to have falsely claimed to be operating on behalf of a mainstream news outlet; a remote video interview with a US-based activist about the Jamal Khashoggi killing was posted by an account adopting the persona of a journalist from the outlet Newsday, with the Newsday logo also appearing in the video. We did not identify any Newsday interview with the activist in question on this topic. In another instance, a persona posing as a journalist directed tweets containing audio of an interview conducted with a former U.S. Government official at real media personalities, calling on them to post about the interview.”


In this example actors fabricated journalists (T0097.102: Journalist Persona, T0143.002: Fabricated Persona) who worked at existing news outlets (T0097.202: News Outlet Persona, T0143.003: Impersonated Persona) in order to conduct interviews with targeted individuals. | +| [I00080 Hundreds Of Propaganda Accounts Targeting Iran And Qatar Have Been Removed From Facebook](../../generated_pages/incidents/I00080.md) | “One example of a fake reporter account targeting Americans is “Jenny Powell,” a self-described Washington-based journalist, volunteer, and environmental activist. At first glance, Powell’s Twitter timeline looks like it belongs to a young and eager reporter amplifying her interests. But her profile photo is a stock image, and many of her links go to the propaganda sites.

“Powell, who joined the platform just last month, shares links to stories from major US news media outlets, retweets local news about Washington, DC, and regularly promotes content from The Foreign Code and The Economy Club. Other fake journalist accounts behaved similarly to Powell and had generic descriptions. One of the accounts, for a fake Bruce Lopez in Louisiana, has a bio that describes him as a “Correspondent Traveler (noun) (linking verb) (noun/verb/adjective),” which appears to reveal the formula used to write Twitter bios for the accounts.”


The Jenny Powell account used in this influence operation presents as both a journalist and an activist (T0097.102: Journalist Persona, T0097.103: Activist Persona, T0143.002: Fabricated Persona). This example shows how threat actors can easily follow a template to present a fabricated persona to their target audience (T0144.002: Persona Template). | +| [I00082 Meta’s November 2021 Adversarial Threat Report ](../../generated_pages/incidents/I00082.md) | “[Meta] removed 41 Facebook accounts, five Groups, and four Instagram accounts for violating our policy against coordinated inauthentic behavior. This activity originated in Belarus and primarily targeted audiences in the Middle East and Europe.

“The core of this activity began in October 2021, with some accounts created as recently as mid-November. The people behind it used newly-created fake accounts — many of which were detected and disabled by our automated systems soon after creation — to pose as journalists and activists from the European Union, particularly Poland and Lithuania. Some of the accounts used profile photos likely generated using artificial intelligence techniques like generative adversarial networks (GAN). These fictitious personas posted criticism of Poland in English, Polish, and Kurdish, including pictures and videos about Polish border guards allegedly violating migrants’ rights, and compared Poland’s treatment of migrants against other countries’. They also posted to Groups focused on the welfare of migrants in Europe. A few accounts posted in Russian about relations between Belarus and the Baltic States.”


This example shows how accounts identified as participating in coordinated inauthentic behaviour were presenting themselves as journalists and activists while spreading operation narratives (T0097.102: Journalist Persona, T0097.103: Activist Persona).

Additionally, analysts at Meta identified accounts participating in coordinated inauthentic behaviour which had likely used AI-generated images as their profile pictures (T0145.002: AI-Generated Account Imagery). | +| [I00096 China ramps up use of AI misinformation](../../generated_pages/incidents/I00096.md) | The Microsoft Threat Analysis Centre (MTAC) published a report documenting the use of AI by pro-Chinese threat actors:

On 13 January, Spamouflage [(a Pro-Chinese Communist Party actor)] posted audio clips to YouTube of independent candidate [for Taiwan’s Jan 2024 presidential election] Terry Gou – who also founded electronics giant Foxconn – in which Gou endorsed another candidate in the race. This clip was almost certainly AI-generated, and it was swiftly removed by YouTube. A fake letter purporting to be from Gou, endorsing the same candidate, had already circulated – Gou had of course made no such endorsement.

Here Spamouflage used an account on YouTube to post AI-generated audio impersonating an electoral candidate (T0146: Account Asset, T0152.006: Video Platform, T0115: Post Content, T0088.001: Develop AI-Generated Audio (Deepfakes), T0143.003: Impersonated Persona, T0097.110: Party Official Persona).

Spamouflage also exploited AI-powered video platform CapCut – which is owned by TikTok backers ByteDance – to generate fake news anchors which were used in a variety of campaigns targeting the various presidential candidates in Taiwan.

Spamouflage created accounts on CapCut, which it used to create AI-generated videos of fabricated news anchors (T0146: Account Asset, T0154.002: AI Media Platform, T0087.001: Develop AI-Generated Video (Deepfakes), T0143.002: Fabricated Persona, T0097.102: Journalist Persona). | +| [I00119 Independent journalist publishes Trump campaign document hacked by Iran despite election interference concerns](../../generated_pages/incidents/I00119.md) | An American journalist who runs an independent newsletter published a document [on 26 Sep 2024] that appears to have been stolen from Donald Trump’s presidential campaign — the first public posting of a file that is believed to be part of a dossier that federal officials say is part of an Iranian effort to manipulate the [2024] U.S. election.

The PDF document is a 271-page opposition research file on former President Donald Trump’s running mate, Sen. JD Vance, R-Ohio.

For more than two months, hackers who the U.S. says are tied to Iran have tried to persuade the American media to cover files they stole. No outlets took the bait.

But on Thursday, reporter Ken Klippenstein, who self-publishes on Substack after he left The Intercept this year, published one of the files.

[...]

Reporters who have received the documents describe the same pattern: An AOL account emails them files, signed by a person using the name “Robert,” who is reluctant to speak to their identity or reasons for wanting the documents to receive coverage.

NBC News was not part of the Robert persona’s direct outreach, but it has viewed its correspondence with a reporter at another publication.

One of the emails from the Robert persona previously viewed by NBC News included three large PDF files, each corresponding to Trump’s three reported finalists for vice president. The Vance file appears to be the one Klippenstein hosts on his site.


In this example, hackers attributed to Iran used the Robert persona to email hacked documents to journalists (T0146: Account Asset, T0097.100: Individual Persona, T0153.001: Email Platform).

The journalist Ken Klippenstein used his existing blog on Substack to host a link to download the document (T0089: Obtain Private Documents, T0097.102: Journalist Persona, T0115: Post Content, T0143.001: Authentic Persona, T0152.001: Blogging Platform, T0152.002: Blog Asset, T0150.003: Pre-Existing Asset). | +| [I00126 Charming Kitten Updates POWERSTAR with an InterPlanetary Twist](../../generated_pages/incidents/I00126.md) | The target of the recently observed [highly targeted spearphishing attack by “Charming Kitten”, a hacker group attributed to Iran] had published an article related to Iran. The publicity appears to have garnered the attention of Charming Kitten, who subsequently created an email address to impersonate a reporter of an Israeli media organization in order to send the target an email. Prior to sending malware to the target, the attacker simply asked if the target would be open to reviewing a document they had written related to US foreign policy. The target agreed to do so, since this was not an unusual request; they are frequently asked by journalists to review opinion pieces relating to their field of work.

In an effort to further gain the target’s confidence, Charming Kitten continued the interaction with another benign email containing a list of questions, to which the target then responded with answers. After multiple days of benign and seemingly legitimate interaction, Charming Kitten finally sent a “draft report”; this was the first time anything opaquely malicious occurred. The “draft report” was, in fact, a password-protected RAR file containing a malicious LNK file. The password for the RAR file was provided in a subsequent email.


In this example, threat actors created an email address on a domain which impersonated an existing Israeli news organisation, and posed as a reporter who worked there (T0097.102: Journalist Persona, T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0149.003: Lookalike Domain, T0149.002: Email Domain Asset) in order to convince the target to download a document containing malware (T0085.004: Develop Document, T0147.003: Malware Asset). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0097.102: Journalist Persona + +**Summary**: A person with a journalist persona presents themselves as a reporter or journalist delivering news, conducting interviews, investigations etc.

While presenting as a journalist is not an indication of inauthentic behaviour, an influence operation may have its narratives amplified by people presenting as journalists. Threat actors can fabricate journalists to give the appearance of legitimacy, justifying the actor’s requests for interviews, etc. (T0143.002: Fabricated Persona, T0097.102: Journalist Persona).

People who have legitimately developed a persona as a journalist (T0143.001: Authentic Persona, T0097.102: Journalist Persona) can use it for malicious purposes, or be exploited by threat actors. For example, someone could take money for using their position as a trusted journalist to provide legitimacy to a false narrative or be tricked into doing so without the journalist’s knowledge. + **Tactic**: TA16 Establish Legitimacy @@ -27,4 +83,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0097.103.md b/generated_pages/techniques/T0097.103.md index acfa543..9e2903e 100644 --- a/generated_pages/techniques/T0097.103.md +++ b/generated_pages/techniques/T0097.103.md @@ -2,6 +2,62 @@ **Summary**: A person with an activist persona presents themselves as an activist; an individual who campaigns for a political cause, organises related events, etc.

While presenting as an activist is not an indication of inauthentic behaviour, an influence operation may have its narratives amplified by people presenting as activists. Threat actors can fabricate activists to give the appearance of popular support for an evolving grassroots movement (see T0143.002: Fabricated Persona, T0097.103: Activist Persona).

People who are legitimate activists can use this persona for malicious purposes, or be exploited by threat actors. For example, someone could take money for using their position as an activist to provide visibility to a false narrative or be tricked into doing so without their knowledge (T0143.001: Authentic Persona, T0097.103: Activist Persona). +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0097 Present Persona + + +| Associated Technique | Description | +| --------- | ------------------------- | +| [T0097.104 Hacktivist Persona](../../generated_pages/techniques/T0097.104.md) | Analysts should use this sub-technique to catalogue cases where an individual is presenting themselves as someone engaged in activism who uses technical tools and methods, including building technical infrastructure and conducting offensive cyber operations, to achieve their goals. | +| [T0097.207 NGO Persona](../../generated_pages/techniques/T0097.207.md) | People with an activist persona may present as being part of an NGO. | +| [T0097.208 Social Cause Persona](../../generated_pages/techniques/T0097.208.md) | Analysts should use this sub-technique to catalogue cases where an online account is presenting as posting content related to a particular social cause, while not presenting as an individual. | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00069 Uncharmed: Untangling Iran's APT42 Operations](../../generated_pages/incidents/I00069.md) | “In March 2023, [Iranian state-sponsored cyber espionage actor] APT42 sent a spear-phishing email with a fake Google Meet invitation, allegedly sent on behalf of Mona Louri, a likely fake persona leveraged by APT42, claiming to be a human rights activist and researcher. Upon entry, the user was presented with a fake Google Meet page and asked to enter their credentials, which were subsequently sent to the attackers.”

In this example APT42, an Iranian state-sponsored cyber espionage actor, created an account which presented as a human rights activist (T0097.103: Activist Persona) and researcher (T0097.107: Researcher Persona). The analysts assert that the persona was likely fabricated (T0143.002: Fabricated Persona). | +| [I00077 Fronts & Friends: An Investigation into Two Twitter Networks Linked to Russian Actors](../../generated_pages/incidents/I00077.md) | “The Syria portion of the network [of inauthentic accounts attributed to Russia] included additional sockpuppet accounts. One of these claimed to be a gay rights defender in Syria. Several said they were Syrian journalists. Another account, @SophiaHammer3, said she was born in Syria but currently lives in London. “I’m fond of history and politics. I struggle for justice.” Twitter users had previously observed that Sophia was likely a sockpuppet.”

This behaviour matches T0097.103: Activist Persona because the account presents itself as defending a political cause, in this case gay rights.

Twitter’s technical indicators allowed their analysts to assert that these accounts were “reliably tied to Russian state actors”, meaning the presented personas were entirely fabricated (T0143.002: Fabricated Persona); these accounts are not legitimate gay rights defenders or journalists, but assets controlled by Russia publishing narratives beneficial to its agenda. | +| [I00080 Hundreds Of Propaganda Accounts Targeting Iran And Qatar Have Been Removed From Facebook](../../generated_pages/incidents/I00080.md) | “One example of a fake reporter account targeting Americans is “Jenny Powell,” a self-described Washington-based journalist, volunteer, and environmental activist. At first glance, Powell’s Twitter timeline looks like it belongs to a young and eager reporter amplifying her interests. But her profile photo is a stock image, and many of her links go to the propaganda sites.

“Powell, who joined the platform just last month, shares links to stories from major US news media outlets, retweets local news about Washington, DC, and regularly promotes content from The Foreign Code and The Economy Club. Other fake journalist accounts behaved similarly to Powell and had generic descriptions. One of the accounts, for a fake Bruce Lopez in Louisiana, has a bio that describes him as a “Correspondent Traveler (noun) (linking verb) (noun/verb/adjective),” which appears to reveal the formula used to write Twitter bios for the accounts.”


The Jenny Powell account used in this influence operation presents as both a journalist and an activist (T0097.102: Journalist Persona, T0097.103: Activist Persona, T0143.002: Fabricated Persona). This example shows how threat actors can easily follow a template to present a fabricated persona to their target audience (T0144.002: Persona Template). | +| [I00082 Meta’s November 2021 Adversarial Threat Report ](../../generated_pages/incidents/I00082.md) | “[Meta] removed a network of accounts in Vietnam for violating our Inauthentic Behavior policy against mass reporting. They coordinated the targeting of activists and other people who publicly criticized the Vietnamese government and used false reports of various violations in an attempt to have these users removed from our platform. The people behind this activity relied primarily on authentic and duplicate accounts to submit hundreds — in some cases, thousands — of complaints against their targets through our abuse reporting flows.

“Many operators also maintained fake accounts — some of which were detected and disabled by our automated systems — to pose as their targets so they could then report the legitimate accounts as fake. They would frequently change the gender and name of their fake accounts to resemble the target individual. Among the most common claims in this misleading reporting activity were complaints of impersonation, and to a much lesser extent inauthenticity. The network also advertised abusive services in their bios and constantly evolved their tactics in an attempt to evade detection.”


In this example actors repurposed their accounts to impersonate targeted activists (T0097.103: Activist Persona, T0143.003: Impersonated Persona) in order to falsely report the activists’ legitimate accounts as impersonations (T0124.001: Report Non-Violative Opposing Content). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0097.103: Activist Persona + +**Summary**: A person with an activist persona presents themselves as an activist; an individual who campaigns for a political cause, organises related events, etc.

While presenting as an activist is not an indication of inauthentic behaviour, an influence operation may have its narratives amplified by people presenting as activists. Threat actors can fabricate activists to give the appearance of popular support for an evolving grassroots movement (see T0143.002: Fabricated Persona, T0097.103: Activist Persona).

People who are legitimate activists can use this persona for malicious purposes, or be exploited by threat actors. For example, someone could take money for using their position as an activist to provide visibility to a false narrative or be tricked into doing so without their knowledge (T0143.001: Authentic Persona, T0097.103: Activist Persona). + +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0097 Present Persona + + +| Associated Technique | Description | +| --------- | ------------------------- | +| [T0097.104 Hacktivist Persona](../../generated_pages/techniques/T0097.104.md) | Analysts should use this sub-technique to catalogue cases where an individual is presenting themselves as someone engaged in activism who uses technical tools and methods, including building technical infrastructure and conducting offensive cyber operations, to achieve their goals. | +| [T0097.207 NGO Persona](../../generated_pages/techniques/T0097.207.md) | People with an activist persona may present as being part of an NGO. | +| [T0097.208 Social Cause Persona](../../generated_pages/techniques/T0097.208.md) | Analysts should use this sub-technique to catalogue cases where an online account is presenting as posting content related to a particular social cause, while not presenting as an individual. | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00069 Uncharmed: Untangling Iran's APT42 Operations](../../generated_pages/incidents/I00069.md) | “In March 2023, [Iranian state-sponsored cyber espionage actor] APT42 sent a spear-phishing email with a fake Google Meet invitation, allegedly sent on behalf of Mona Louri, a likely fake persona leveraged by APT42, claiming to be a human rights activist and researcher. Upon entry, the user was presented with a fake Google Meet page and asked to enter their credentials, which were subsequently sent to the attackers.”

In this example APT42, an Iranian state-sponsored cyber espionage actor, created an account which presented as a human rights activist (T0097.103: Activist Persona) and researcher (T0097.107: Researcher Persona). The analysts assert that the persona was likely fabricated (T0143.002: Fabricated Persona). | +| [I00077 Fronts & Friends: An Investigation into Two Twitter Networks Linked to Russian Actors](../../generated_pages/incidents/I00077.md) | “The Syria portion of the network [of inauthentic accounts attributed to Russia] included additional sockpuppet accounts. One of these claimed to be a gay rights defender in Syria. Several said they were Syrian journalists. Another account, @SophiaHammer3, said she was born in Syria but currently lives in London. “I’m fond of history and politics. I struggle for justice.” Twitter users had previously observed that Sophia was likely a sockpuppet.”

This behaviour matches T0097.103: Activist Persona because the account presents itself as defending a political cause, in this case gay rights.

Twitter’s technical indicators allowed their analysts to assert that these accounts were “reliably tied to Russian state actors”, meaning the presented personas were entirely fabricated (T0143.002: Fabricated Persona); these accounts are not legitimate gay rights defenders or journalists, but assets controlled by Russia publishing narratives beneficial to its agenda. | +| [I00080 Hundreds Of Propaganda Accounts Targeting Iran And Qatar Have Been Removed From Facebook](../../generated_pages/incidents/I00080.md) | “One example of a fake reporter account targeting Americans is “Jenny Powell,” a self-described Washington-based journalist, volunteer, and environmental activist. At first glance, Powell’s Twitter timeline looks like it belongs to a young and eager reporter amplifying her interests. But her profile photo is a stock image, and many of her links go to the propaganda sites.

“Powell, who joined the platform just last month, shares links to stories from major US news media outlets, retweets local news about Washington, DC, and regularly promotes content from The Foreign Code and The Economy Club. Other fake journalist accounts behaved similarly to Powell and had generic descriptions. One of the accounts, for a fake Bruce Lopez in Louisiana, has a bio that describes him as a “Correspondent Traveler (noun) (linking verb) (noun/verb/adjective),” which appears to reveal the formula used to write Twitter bios for the accounts.”


The Jenny Powell account used in this influence operation presents as both a journalist and an activist (T0097.102: Journalist Persona, T0097.103: Activist Persona, T0143.002: Fabricated Persona). This example shows how threat actors can easily follow a template to present a fabricated persona to their target audience (T0144.002: Persona Template). | +| [I00082 Meta’s November 2021 Adversarial Threat Report ](../../generated_pages/incidents/I00082.md) | “[Meta] removed a network of accounts in Vietnam for violating our Inauthentic Behavior policy against mass reporting. They coordinated the targeting of activists and other people who publicly criticized the Vietnamese government and used false reports of various violations in an attempt to have these users removed from our platform. The people behind this activity relied primarily on authentic and duplicate accounts to submit hundreds — in some cases, thousands — of complaints against their targets through our abuse reporting flows.

“Many operators also maintained fake accounts — some of which were detected and disabled by our automated systems — to pose as their targets so they could then report the legitimate accounts as fake. They would frequently change the gender and name of their fake accounts to resemble the target individual. Among the most common claims in this misleading reporting activity were complaints of impersonation, and to a much lesser extent inauthenticity. The network also advertised abusive services in their bios and constantly evolved their tactics in an attempt to evade detection.”


In this example actors repurposed their accounts to impersonate targeted activists (T0097.103: Activist Persona, T0143.003: Impersonated Persona) in order to falsely report the activists’ legitimate accounts as impersonations (T0124.001: Report Non-Violative Opposing Content). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0097.103: Activist Persona + +**Summary**: A person with an activist persona presents themselves as an activist; an individual who campaigns for a political cause, organises related events, etc.

While presenting as an activist is not an indication of inauthentic behaviour, an influence operation may have its narratives amplified by people presenting as activists. Threat actors can fabricate activists to give the appearance of popular support for an evolving grassroots movement (see T0143.002: Fabricated Persona, T0097.103: Activist Persona).

People who are legitimate activists can use this persona for malicious purposes, or be exploited by threat actors. For example, someone could take money for using their position as an activist to provide visibility to a false narrative or be tricked into doing so without their knowledge (T0143.001: Authentic Persona, T0097.103: Activist Persona). + **Tactic**: TA16 Establish Legitimacy @@ -26,4 +82,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0097.104.md b/generated_pages/techniques/T0097.104.md index 7d8ff43..0676759 100644 --- a/generated_pages/techniques/T0097.104.md +++ b/generated_pages/techniques/T0097.104.md @@ -2,6 +2,52 @@ **Summary**: A person with a hacktivist persona presents themselves as an activist who conducts offensive cyber operations or builds technical infrastructure for political purposes, rather than the financial motivations commonly attributed to hackers; hacktivists are hacker activists who use their technical knowledge to take political action.

Hacktivists can build technical infrastructure to support other activists, including secure communication channels and tools for circumventing surveillance and censorship. They can also conduct DDoS attacks and other offensive cyber operations, aiming to take down digital assets or gain access to proprietary information. An influence operation may use hacktivist personas to support its operational narratives and legitimise its operational activities.

Fabricated Hacktivists are sometimes referred to as “Faketivists”. +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0097 Present Persona + + +| Associated Technique | Description | +| --------- | ------------------------- | +| [T0097.103 Activist Persona](../../generated_pages/techniques/T0097.103.md) | Analysts should use this sub-technique to catalogue cases where an individual is presenting themselves as someone engaged in activism but doesn’t present themselves as using technical tools and methods to achieve their goals. | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00127 Iranian APTs Dress Up as Hacktivists for Disruption, Influence Ops](../../generated_pages/incidents/I00127.md) | Iranian state-backed advanced persistent threat (APT) groups have been masquerading as hacktivists, claiming attacks against Israeli critical infrastructure and air defense systems.

[...]

What's clearer are the benefits of the model itself: creating a layer of plausible deniability for the state, and the impression among the public that their attacks are grassroots-inspired. While this deniability has always been a key driver with state-sponsored cyberattacks, researchers characterized this instance as noteworthy for the effort behind the charade.

"We've seen a lot of hacktivist activity that seems to be nation-states trying to have that 'deniable' capability," Adam Meyers, CrowdStrike senior vice president for counter adversary operations said in a press conference this week. "And so these groups continue to maintain activity, moving from what was traditionally website defacements and DDoS attacks, into a lot of hack and leak operations."

To sell the persona, faketivists like to adopt the aesthetic, rhetoric, tactics, techniques, and procedures (TTPs), and sometimes the actual names and iconography associated with legitimate hacktivist outfits. Keen eyes will spot that they typically arise just after major geopolitical events, without an established history of activity, in alignment with the interests of their government sponsors.

Oftentimes, it's difficult to separate the faketivists from the hacktivists, as each might promote and support the activities of the other.


In this example, analysts from CrowdStrike assert that hacker groups took on the persona of hacktivists to disguise the state-backed nature of their cyberattack campaign (T0097.104: Hacktivist Persona). At times state-backed actors will impersonate existing hacktivist organisations (T0097.104: Hacktivist Persona, T0143.003: Impersonated Persona). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0097.104: Hacktivist Persona + +**Summary**: A person with a hacktivist persona presents themselves as an activist who conducts offensive cyber operations or builds technical infrastructure for political purposes, rather than the financial motivations commonly attributed to hackers; hacktivists are hacker activists who use their technical knowledge to take political action.

Hacktivists can build technical infrastructure to support other activists, including secure communication channels and tools for circumventing surveillance and censorship. They can also conduct DDoS attacks and other offensive cyber operations, aiming to take down digital assets or gain access to proprietary information. An influence operation may use hacktivist personas to support its operational narratives and legitimise its operational activities.

Fabricated Hacktivists are sometimes referred to as “Faketivists”. + +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0097 Present Persona + + +| Associated Technique | Description | +| --------- | ------------------------- | +| [T0097.103 Activist Persona](../../generated_pages/techniques/T0097.103.md) | Analysts should use this sub-technique to catalogue cases where an individual is presenting themselves as someone engaged in activism but doesn’t present themselves as using technical tools and methods to achieve their goals. | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00127 Iranian APTs Dress Up as Hacktivists for Disruption, Influence Ops](../../generated_pages/incidents/I00127.md) | Iranian state-backed advanced persistent threat (APT) groups have been masquerading as hacktivists, claiming attacks against Israeli critical infrastructure and air defense systems.

[...]

What's clearer are the benefits of the model itself: creating a layer of plausible deniability for the state, and the impression among the public that their attacks are grassroots-inspired. While this deniability has always been a key driver with state-sponsored cyberattacks, researchers characterized this instance as noteworthy for the effort behind the charade.

"We've seen a lot of hacktivist activity that seems to be nation-states trying to have that 'deniable' capability," Adam Meyers, CrowdStrike senior vice president for counter adversary operations said in a press conference this week. "And so these groups continue to maintain activity, moving from what was traditionally website defacements and DDoS attacks, into a lot of hack and leak operations."

To sell the persona, faketivists like to adopt the aesthetic, rhetoric, tactics, techniques, and procedures (TTPs), and sometimes the actual names and iconography associated with legitimate hacktivist outfits. Keen eyes will spot that they typically arise just after major geopolitical events, without an established history of activity, in alignment with the interests of their government sponsors.

Oftentimes, it's difficult to separate the faketivists from the hacktivists, as each might promote and support the activities of the other.


In this example, analysts from CrowdStrike assert that hacker groups took on the persona of hacktivists to disguise the state-backed nature of their cyberattack campaign (T0097.104: Hacktivist Persona). At times state-backed actors will impersonate existing hacktivist organisations (T0097.104: Hacktivist Persona, T0143.003: Impersonated Persona). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0097.104: Hacktivist Persona + +**Summary**: A person with a hacktivist persona presents themselves as an activist who conducts offensive cyber operations or builds technical infrastructure for political purposes, rather than the financial motivations commonly attributed to hackers; hacktivists are hacker activists who use their technical knowledge to take political action.

Hacktivists can build technical infrastructure to support other activists, including secure communication channels and tools for circumventing surveillance and censorship. They can also conduct DDoS attacks and other offensive cyber operations, aiming to take down digital assets or gain access to proprietary information. An influence operation may use hacktivist personas to support its operational narratives and legitimise its operational activities.

Fabricated Hacktivists are sometimes referred to as “Faketivists”. + **Tactic**: TA16 Establish Legitimacy @@ -21,4 +67,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0097.105.md b/generated_pages/techniques/T0097.105.md index 18ef424..8b234dc 100644 --- a/generated_pages/techniques/T0097.105.md +++ b/generated_pages/techniques/T0097.105.md @@ -2,6 +2,50 @@ **Summary**: A person with a military personnel persona presents themselves as a serving member or veteran of a military organisation operating in an official capacity on behalf of a government.

While presenting as military personnel is not an indication of inauthentic behaviour, an influence operation may have its narratives amplified by people presenting as military personnel. Threat actors can fabricate military personnel (T0143.002: Fabricated Persona, T0097.105: Military Personnel Persona) to pose as experts on military topics, or to discredit geopolitical adversaries by pretending to be one of their military personnel and spreading discontent.

People who have legitimately developed a military persona (T0143.001: Authentic Persona, T0097.105: Military Personnel Persona) can use it for malicious purposes, or be exploited by threat actors. For example, someone could take money for using their position as a member of the military to provide legitimacy to a false narrative or be tricked into doing so without their knowledge. +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0097 Present Persona + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00118 ‘War Thunder’ players are once again leaking sensitive military technology information on a video game forum](../../generated_pages/incidents/I00118.md) | In an effort to prove that the developers behind a popular multiplayer vehicle combat game had made a mistake, a player went ahead and published classified British military documents about one of the real-life tanks featured in the game.

This truly bizarre turn of events recently occurred in the public forum for War Thunder, a free-to-play multiplayer combat sim featuring modern land, air, and sea craft. Getting a small detail wrong on a piece of equipment might not be a big deal for the average gamer, but for the War Thunder crowd it sure as hell is. With 25,000 devoted players, the game very much bills itself as the military vehicle combat simulator.

A player, who identified himself as a British tank commander, claimed that the game’s developers at Gaijin Entertainment had inaccurately represented the Challenger 2 main battle tank used by the British military.

The self-described tank commander’s bio listed his location as Tidworth Camp in Wiltshire, England, according to the UK Defense Journal, which reported that the base is home to the Royal Tank Regiment, which fields Challenger 2 tanks.

The player, who went by the handle Pyrophoric, reportedly shared an image on the War Thunder forum of the tank’s specs that were pulled from the Challenger 2’s Army Equipment Support Publication, which is essentially a technical manual.

[...]

A moderator for the forum, whose handle is “Templar_”, explained that the developer had removed the material after they received confirmation from the Ministry of Defense that the document is still in fact classified.


A user of War Thunder’s forums posted confidential documents to win an argument (T0089.001: Obtain Authentic Documents, T0146: Account Asset, T0097.105: Military Personnel Persona, T0115: Post Content, T0143.001: Authentic Persona, T0151.009: Legacy Online Forum Platform). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0097.105: Military Personnel Persona + +**Summary**: A person with a military personnel persona presents themselves as a serving member or veteran of a military organisation operating in an official capacity on behalf of a government.

While presenting as military personnel is not an indication of inauthentic behaviour, an influence operation may have its narratives amplified by people presenting as military personnel. Threat actors can fabricate military personnel (T0143.002: Fabricated Persona, T0097.105: Military Personnel Persona) to pose as experts on military topics, or to discredit geopolitical adversaries by pretending to be one of their military personnel and spreading discontent.

People who have legitimately developed a military persona (T0143.001: Authentic Persona, T0097.105: Military Personnel Persona) can use it for malicious purposes, or be exploited by threat actors. For example, someone could take money for using their position as a member of the military to provide legitimacy to a false narrative or be tricked into doing so without their knowledge. + +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0097 Present Persona + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00118 ‘War Thunder’ players are once again leaking sensitive military technology information on a video game forum](../../generated_pages/incidents/I00118.md) | In an effort to prove that the developers behind a popular multiplayer vehicle combat game had made a mistake, a player went ahead and published classified British military documents about one of the real-life tanks featured in the game.

This truly bizarre turn of events recently occurred in the public forum for War Thunder, a free-to-play multiplayer combat sim featuring modern land, air, and sea craft. Getting a small detail wrong on a piece of equipment might not be a big deal for the average gamer, but for the War Thunder crowd it sure as hell is. With 25,000 devoted players, the game very much bills itself as the military vehicle combat simulator.

A player, who identified himself as a British tank commander, claimed that the game’s developers at Gaijin Entertainment had inaccurately represented the Challenger 2 main battle tank used by the British military.

The self-described tank commander’s bio listed his location as Tidworth Camp in Wiltshire, England, according to the UK Defense Journal, which reported that the base is home to the Royal Tank Regiment, which fields Challenger 2 tanks.

The player, who went by the handle Pyrophoric, reportedly shared an image on the War Thunder forum of the tank’s specs that were pulled from the Challenger 2’s Army Equipment Support Publication, which is essentially a technical manual.

[...]

A moderator for the forum, whose handle is “Templar_”, explained that the developer had removed the material after they received confirmation from the Ministry of Defense that the document was still in fact classified.


A user of War Thunder’s forums posted confidential documents to win an argument (T0089.001: Obtain Authentic Documents, T0146: Account Asset, T0097.105: Military Personnel Persona, T0115: Post Content, T0143.001: Authentic Persona, T0151.009: Legacy Online Forum Platform). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0097.105: Military Personnel Persona + +**Summary**: A person with a military personnel persona presents themselves as a serving member or veteran of a military organisation operating in an official capacity on behalf of a government.

While presenting as military personnel is not an indication of inauthentic behaviour, an influence operation may have its narratives amplified by people presenting as military personnel. Threat actors can fabricate military personnel (T0143.002: Fabricated Persona, T0097.105: Military Personnel Persona) to pose as experts on military topics, or to discredit geopolitical adversaries by pretending to be one of their military personnel and spreading discontent.

People who have legitimately developed a military persona (T0143.001: Authentic Persona, T0097.105: Military Personnel Persona) can use it for malicious purposes, or be exploited by threat actors. For example, someone could take money for using their position as a member of the military to provide legitimacy to a false narrative or be tricked into doing so without their knowledge. + **Tactic**: TA16 Establish Legitimacy @@ -20,4 +64,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0097.106.md b/generated_pages/techniques/T0097.106.md index 91cc651..a9babb5 100644 --- a/generated_pages/techniques/T0097.106.md +++ b/generated_pages/techniques/T0097.106.md @@ -2,6 +2,57 @@ **Summary**: A person with a recruiter persona presents themselves as a potential employer or provider of freelance work.

While presenting as a recruiter is not an indication of inauthentic behaviour, threat actors fabricate recruiters (T0143.002: Fabricated Persona, T0097.106: Recruiter Persona) to justify asking for personal information from their targets or to trick targets into working for the threat actors (without revealing who they are). +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0097 Present Persona + + +| Associated Technique | Description | +| --------- | ------------------------- | +| [T0097.205 Business Persona](../../generated_pages/techniques/T0097.205.md) | People with a recruiter persona may present as being part of a business which they are recruiting for. | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00074 The Tactics & Tropes of the Internet Research Agency](../../generated_pages/incidents/I00074.md) | “A few press investigations have alluded to the [Russia’s Internet Research Agency]’s job ads. The extent of the human asset recruitment strategy is revealed in the organic data set. It is expansive, and was clearly a priority. Posts encouraging Americans to perform various types of tasks for IRA handlers appeared in Black, Left, and Right-targeted groups, though they were most numerous in the Black community. They included:

- Requests for contact with preachers from Black churches (Black_Baptist_Church)
- Offers of free counseling to people with sexual addiction (Army of Jesus)
- Soliciting volunteers to hand out fliers
- Soliciting volunteers to teach self-defense classes
- Offering free self-defense classes (Black Fist/Fit Black)
- Requests for followers to attend political rallies
- Requests for photographers to document protests
- Requests for speakers at protests
- Requests to protest the Westborough Baptist Church (LGBT United)
- Job offers for designers to help design fliers, sites, Facebook sticker packs
- Requests for female followers to send photos for a calendar
- Requests for followers to send photos to be shared to the Page (Back the Badge)
- Soliciting videos for a YouTube contest called “Pee on Hillary”
- Encouraging people to apply to be part of a Black reality TV show
- Posting a wide variety of job ads (write for BlackMattersUS and others)
- Requests for lawyers to volunteer to assist with immigration cases”


This behaviour matches T0097.106: Recruiter Persona because the threat actors are presenting tasks for their target audience to complete in the style of a job posting (even though some of the tasks were presented as voluntary / unpaid efforts), including calls for people to attend political rallies (T0126.001: Call to Action to Attend). | +| [I00091 Facebook uncovers Chinese network behind fake expert](../../generated_pages/incidents/I00091.md) | “Earlier in July [2021], an account posing as a Swiss biologist called Wilson Edwards had made statements on Facebook and Twitter that the United States was applying pressure on the World Health Organization scientists who were studying the origins of Covid-19 in an attempt to blame the virus on China.

“State media outlets, including CGTN, Shanghai Daily and Global Times, had cited the so-called biologist based on his Facebook profile.

“However, the Swiss embassy said in August that the person likely did not exist, as the Facebook account was opened only two weeks prior to its first post and only had three friends.

“It added "there was no registry of a Swiss citizen with the name "Wilson Edwards" and no academic articles under the name", and urged Chinese media outlets to take down any mention of him.

[...]

“It also said that his profile photo also appeared to have been generated using machine-learning capabilities.”


In this example an account created on Facebook presented itself as a Swiss biologist to present a narrative related to COVID-19 (T0143.002: Fabricated Persona, T0097.107: Researcher Persona). It used an AI-generated profile picture to disguise itself (T0145.002: AI-Generated Account Imagery). | +| [I00095 Meta: Chinese disinformation network was behind London front company recruiting content creators](../../generated_pages/incidents/I00095.md) | “A Chinese disinformation network operating fictitious employee personas across the internet used a front company in London to recruit content creators and translators around the world, according to Meta.

“The operation used a company called London New Europe Media, registered to an address on the upmarket Kensington High Street, that attempted to recruit real people to help it produce content. It is not clear how many people it ultimately recruited.

“London New Europe Media also “tried to engage individuals to record English-language videos scripted by the network,” in one case leading to a recording criticizing the United States being posted on YouTube, said Meta”.


In this example a front company was used (T0097.205: Business Persona) to enable actors to recruit targets for producing content (T0097.106: Recruiter Persona, T0143.002: Fabricated Persona). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0097.106: Recruiter Persona + +**Summary**: A person with a recruiter persona presents themselves as a potential employer or provider of freelance work.

While presenting as a recruiter is not an indication of inauthentic behaviour, threat actors fabricate recruiters (T0143.002: Fabricated Persona, T0097.106: Recruiter Persona) to justify asking for personal information from their targets or to trick targets into working for the threat actors (without revealing who they are). + +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0097 Present Persona + + +| Associated Technique | Description | +| --------- | ------------------------- | +| [T0097.205 Business Persona](../../generated_pages/techniques/T0097.205.md) | People with a recruiter persona may present as being part of a business which they are recruiting for. | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00074 The Tactics & Tropes of the Internet Research Agency](../../generated_pages/incidents/I00074.md) | “A few press investigations have alluded to the [Russia’s Internet Research Agency]’s job ads. The extent of the human asset recruitment strategy is revealed in the organic data set. It is expansive, and was clearly a priority. Posts encouraging Americans to perform various types of tasks for IRA handlers appeared in Black, Left, and Right-targeted groups, though they were most numerous in the Black community. They included:

- Requests for contact with preachers from Black churches (Black_Baptist_Church)
- Offers of free counseling to people with sexual addiction (Army of Jesus)
- Soliciting volunteers to hand out fliers
- Soliciting volunteers to teach self-defense classes
- Offering free self-defense classes (Black Fist/Fit Black)
- Requests for followers to attend political rallies
- Requests for photographers to document protests
- Requests for speakers at protests
- Requests to protest the Westborough Baptist Church (LGBT United)
- Job offers for designers to help design fliers, sites, Facebook sticker packs
- Requests for female followers to send photos for a calendar
- Requests for followers to send photos to be shared to the Page (Back the Badge)
- Soliciting videos for a YouTube contest called “Pee on Hillary”
- Encouraging people to apply to be part of a Black reality TV show
- Posting a wide variety of job ads (write for BlackMattersUS and others)
- Requests for lawyers to volunteer to assist with immigration cases”


This behaviour matches T0097.106: Recruiter Persona because the threat actors are presenting tasks for their target audience to complete in the style of a job posting (even though some of the tasks were presented as voluntary / unpaid efforts), including calls for people to attend political rallies (T0126.001: Call to Action to Attend). | +| [I00078 Meta’s September 2020 Removal of Coordinated Inauthentic Behavior](../../generated_pages/incidents/I00078.md) | “[Meta has] removed one Page, five Facebook accounts, one Group and three Instagram accounts for foreign or government interference which is coordinated inauthentic behavior on behalf of a foreign or government entity. This small network originated in Russia and focused primarily on Turkey and Europe, and also on the United States.

“This operation relied on fake accounts — some of which had been already detected and removed by our automated systems — to manage their Page and their Group, and to drive people to their site purporting to be an independent think-tank based primarily in Turkey. These accounts posed as locals based in Turkey, Canada and the US. They also recruited people to write for their website. This network had almost no following on our platforms when we removed it.”


Meta identified that a network of accounts originating in Russia was driving people off-platform to a site which presented itself as a think tank (T0097.204: Think Tank Persona). Meta did not make an attribution about the authenticity of this off-site think tank, so neither T0143.001: Authentic Persona nor T0143.002: Fabricated Persona is used here.

Meta had access to technical data for accounts on its platform, and asserted that the accounts were fabricated individuals posing as locals who recruited targets to write content for their website (T0097.101: Local Persona, T0097.106: Recruiter Persona, T0143.002: Fabricated Persona). | +| [I00091 Facebook uncovers Chinese network behind fake expert](../../generated_pages/incidents/I00091.md) | “Earlier in July [2021], an account posing as a Swiss biologist called Wilson Edwards had made statements on Facebook and Twitter that the United States was applying pressure on the World Health Organization scientists who were studying the origins of Covid-19 in an attempt to blame the virus on China.

“State media outlets, including CGTN, Shanghai Daily and Global Times, had cited the so-called biologist based on his Facebook profile.

“However, the Swiss embassy said in August that the person likely did not exist, as the Facebook account was opened only two weeks prior to its first post and only had three friends.

“It added "there was no registry of a Swiss citizen with the name "Wilson Edwards" and no academic articles under the name", and urged Chinese media outlets to take down any mention of him.

[...]

“It also said that his profile photo also appeared to have been generated using machine-learning capabilities.”


In this example an account created on Facebook presented itself as a Swiss biologist to present a narrative related to COVID-19 (T0143.002: Fabricated Persona, T0097.107: Researcher Persona). It used an AI-generated profile picture to disguise itself (T0145.002: AI-Generated Account Imagery). | +| [I00095 Meta: Chinese disinformation network was behind London front company recruiting content creators](../../generated_pages/incidents/I00095.md) | “A Chinese disinformation network operating fictitious employee personas across the internet used a front company in London to recruit content creators and translators around the world, according to Meta.

“The operation used a company called London New Europe Media, registered to an address on the upmarket Kensington High Street, that attempted to recruit real people to help it produce content. It is not clear how many people it ultimately recruited.

“London New Europe Media also “tried to engage individuals to record English-language videos scripted by the network,” in one case leading to a recording criticizing the United States being posted on YouTube, said Meta”.


In this example a front company was used (T0097.205: Business Persona) to enable actors to recruit targets for producing content (T0097.106: Recruiter Persona, T0143.002: Fabricated Persona). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0097.106: Recruiter Persona + +**Summary**: A person with a recruiter persona presents themselves as a potential employer or provider of freelance work.

While presenting as a recruiter is not an indication of inauthentic behaviour, threat actors fabricate recruiters (T0143.002: Fabricated Persona, T0097.106: Recruiter Persona) to justify asking for personal information from their targets or to trick targets into working for the threat actors (without revealing who they are). + **Tactic**: TA16 Establish Legitimacy @@ -24,4 +75,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0097.107.md b/generated_pages/techniques/T0097.107.md index 19a2c76..0d8da91 100644 --- a/generated_pages/techniques/T0097.107.md +++ b/generated_pages/techniques/T0097.107.md @@ -2,6 +2,54 @@ **Summary**: A person with a researcher persona presents themselves as conducting research (e.g. for academic institutions, or think tanks), or having previously conducted research.

While presenting as a researcher is not an indication of inauthentic behaviour, an influence operation may have its narratives amplified by people presenting as researchers. Threat actors can fabricate researchers (T0143.002: Fabricated Persona, T0097.107: Researcher Persona) to add credibility to their narratives.

People who are legitimate researchers (T0143.001: Authentic Persona, T0097.107: Researcher Persona) can use their persona for malicious purposes, or be exploited by threat actors. For example, someone could take money for using their position as a Researcher to provide legitimacy to a false narrative or be tricked into doing so without their knowledge. +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0097 Present Persona + + +| Associated Technique | Description | +| --------- | ------------------------- | +| [T0097.108 Expert Persona](../../generated_pages/techniques/T0097.108.md) | People who present as researching a given topic are likely to also present as having expertise in the area. | +| [T0097.204 Think Tank Persona](../../generated_pages/techniques/T0097.204.md) | People with a researcher persona may present as being part of a think tank. | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00069 Uncharmed: Untangling Iran's APT42 Operations](../../generated_pages/incidents/I00069.md) | “In March 2023, [Iranian state-sponsored cyber espionage actor] APT42 sent a spear-phishing email with a fake Google Meet invitation, allegedly sent on behalf of Mona Louri, a likely fake persona leveraged by APT42, claiming to be a human rights activist and researcher. Upon entry, the user was presented with a fake Google Meet page and asked to enter their credentials, which were subsequently sent to the attackers.”

In this example APT42, an Iranian state-sponsored cyber espionage actor, created an account which presented as a human rights activist (T0097.103: Activist Persona) and researcher (T0097.107: Researcher Persona). The analysts assert that the persona was likely fabricated (T0143.002: Fabricated Persona). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0097.107: Researcher Persona + +**Summary**: A person with a researcher persona presents themselves as conducting research (e.g. for academic institutions, or think tanks), or having previously conducted research.

While presenting as a researcher is not an indication of inauthentic behaviour, an influence operation may have its narratives amplified by people presenting as researchers. Threat actors can fabricate researchers (T0143.002: Fabricated Persona, T0097.107: Researcher Persona) to add credibility to their narratives.

People who are legitimate researchers (T0143.001: Authentic Persona, T0097.107: Researcher Persona) can use their persona for malicious purposes, or be exploited by threat actors. For example, someone could take money for using their position as a Researcher to provide legitimacy to a false narrative or be tricked into doing so without their knowledge. + +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0097 Present Persona + + +| Associated Technique | Description | +| --------- | ------------------------- | +| [T0097.108 Expert Persona](../../generated_pages/techniques/T0097.108.md) | People who present as researching a given topic are likely to also present as having expertise in the area. | +| [T0097.204 Think Tank Persona](../../generated_pages/techniques/T0097.204.md) | People with a researcher persona may present as being part of a think tank. | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00069 Uncharmed: Untangling Iran's APT42 Operations](../../generated_pages/incidents/I00069.md) | “In March 2023, [Iranian state-sponsored cyber espionage actor] APT42 sent a spear-phishing email with a fake Google Meet invitation, allegedly sent on behalf of Mona Louri, a likely fake persona leveraged by APT42, claiming to be a human rights activist and researcher. Upon entry, the user was presented with a fake Google Meet page and asked to enter their credentials, which were subsequently sent to the attackers.”

In this example APT42, an Iranian state-sponsored cyber espionage actor, created an account which presented as a human rights activist (T0097.103: Activist Persona) and researcher (T0097.107: Researcher Persona). The analysts assert that the persona was likely fabricated (T0143.002: Fabricated Persona). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0097.107: Researcher Persona + +**Summary**: A person with a researcher persona presents themselves as conducting research (e.g. for academic institutions, or think tanks), or having previously conducted research.

While presenting as a researcher is not an indication of inauthentic behaviour, an influence operation may have its narratives amplified by people presenting as researchers. Threat actors can fabricate researchers (T0143.002: Fabricated Persona, T0097.107: Researcher Persona) to add credibility to their narratives.

People who are legitimate researchers (T0143.001: Authentic Persona, T0097.107: Researcher Persona) can use their persona for malicious purposes, or be exploited by threat actors. For example, someone could take money for using their position as a Researcher to provide legitimacy to a false narrative or be tricked into doing so without their knowledge. + **Tactic**: TA16 Establish Legitimacy @@ -22,4 +70,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0097.108.md b/generated_pages/techniques/T0097.108.md index 307ed69..c7e79ad 100644 --- a/generated_pages/techniques/T0097.108.md +++ b/generated_pages/techniques/T0097.108.md @@ -2,6 +2,55 @@ **Summary**: A person with an expert persona presents themselves as having expertise or experience in a field. Commonly the persona’s expertise will be called upon to add credibility to a given narrative.

While presenting as an expert is not an indication of inauthentic behaviour, an influence operation may have its narratives amplified by people presenting as experts. Threat actors can fabricate experts (T0143.002: Fabricated Persona, T0097.108: Expert Persona) to add credibility to their narratives.

People who are legitimate experts (T0143.001: Authentic Persona, T0097.108: Expert Persona) can make mistakes, use their persona for malicious purposes, or be exploited by threat actors. For example, someone could take money for using their position as an expert to provide legitimacy to a false narrative or be tricked into doing so without their knowledge. +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0097 Present Persona + + +| Associated Technique | Description | +| --------- | ------------------------- | +| [T0097.107 Researcher Persona](../../generated_pages/techniques/T0097.107.md) | People who present as experts may also present as conducting or having conducted research into their specialist subject. | +| [T0097.204 Think Tank Persona](../../generated_pages/techniques/T0097.204.md) | People with an expert persona may present as being part of a think tank. | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00079 Three thousand fake tanks](../../generated_pages/incidents/I00079.md) | “The sixth [website to repost a confirmed false narrative investigated in this report] is an apparent think tank, the Center for Global Strategic Monitoring. This website describes itself, in English apparently written by a non-native speaker, as a “nonprofit and nonpartisan research and analysis institution dedicated to providing insights of the think tank community publications”. It does, indeed, publish think-tank reports on issues such as Turkey and US-China relations; however, the reports are the work of other think tanks, often unattributed (the two mentioned in this sentence were actually produced by the Brookings Institution, although the website makes no mention of the fact). It also fails to provide an address, or any other contact details other than an email, and its (long) list of experts includes entries apparently copied and pasted from other institutions. Thus, the “think tank” website which shared the fake story appears to be a fake itself.” In this example a website which amplified a false narrative presented itself as a think tank (T0097.204: Think Tank Persona).

This is an entirely fabricated persona (T0143.002: Fabricated Persona); it republished content from other think tanks without attribution (T0084.002: Plagiarise Content) and fabricated experts (T0097.108: Expert Persona, T0143.002: Fabricated Persona) to make it more believable that they were a real think tank. | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0097.108: Expert Persona + +**Summary**: A person with an expert persona presents themselves as having expertise or experience in a field. Commonly the persona’s expertise will be called upon to add credibility to a given narrative.

While presenting as an expert is not an indication of inauthentic behaviour, an influence operation may have its narratives amplified by people presenting as experts. Threat actors can fabricate experts (T0143.002: Fabricated Persona, T0097.108: Expert Persona) to add credibility to their narratives.

People who are legitimate experts (T0143.001: Authentic Persona, T0097.108: Expert Persona) can make mistakes, use their persona for malicious purposes, or be exploited by threat actors. For example, someone could take money for using their position as an expert to provide legitimacy to a false narrative or be tricked into doing so without their knowledge. + +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0097 Present Persona + + +| Associated Technique | Description | +| --------- | ------------------------- | +| [T0097.107 Researcher Persona](../../generated_pages/techniques/T0097.107.md) | People who present as experts may also present as conducting or having conducted research into their specialist subject. | +| [T0097.204 Think Tank Persona](../../generated_pages/techniques/T0097.204.md) | People with an expert persona may present as being part of a think tank. | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00071 Russia-aligned hacktivists stir up anti-Ukrainian sentiments in Poland](../../generated_pages/incidents/I00071.md) | “The August 17 [2022] Telegram post [which contained a falsified letter from the Ukrainian Minister of Foreign Affairs asking Poland to rename Belwederska Street in Warsaw — the location of the Russian embassy building — as Stepan Bandera Street, in honor of the far-right nationalist who led the Ukrainian Insurgent Army during WWII] also contained screenshots of Facebook posts that appeared on two Facebook accounts belonging to Polish nationals Piotr Górka, an expert in the history of the Polish Air Force, and Dariusz Walusiak, a Polish historian and documentary maker. The Górka post suggested that he fully supported the Polish government’s decision to change Belwederska Street to Stepan Bandera Street.

“In a statement to the DFRLab, Górka said his account was accessed without his consent. “This is not my post loaded to my Facebook page,” he explained. “My site was hacked, some days ago.” At the time of publishing, Piotr Górka’s post and his Facebook account were no longer accessible.

“The post on Górka’s Facebook page was shared by Dariusz Walusiak’s Facebook account; the account also reposted it on the Facebook walls of more than twenty other Facebook users, including Adam Kalita, currently working at Krakow branch of the Institute of National Remembrance; Jan Kasprzyk, head of the Office for War Veterans and Victims of Oppression; and Alicja Kondraciuk, a Polish public figure living in Krakow.

“Walusiak’s Facebook account is also no longer accessible. Given his work on Polish history and identity, it seems highly unlikely he would support the Bandera measure; the DFRLab has also reached out to him for comment.

“The fact that Joker DPR’s Telegram post included screenshots of their Facebook posts raises the strong possibility that both Facebook accounts were compromised, and that hackers planted false statements on their pages that would seem out of character for them in order to gain further attention to the forged documents.”


In this example, threat actors used compromised accounts of Polish historians who have enough relevant knowledge to plausibly weigh in on the forged letter’s narrative (T0143.003: Impersonated Persona, T0097.101: Local Persona, T0097.108: Expert Persona, T0146: Account Asset, T0150.005: Compromised Asset, T0151.001: Social Media Platform).

This matches T0097.108: Expert Persona because the impersonation exploited Górka and Walusiak’s existing personas as experts in Polish history. | +| [I00079 Three thousand fake tanks](../../generated_pages/incidents/I00079.md) | “The sixth [website to repost a confirmed false narrative investigated in this report] is an apparent think tank, the Center for Global Strategic Monitoring. This website describes itself, in English apparently written by a non-native speaker, as a “nonprofit and nonpartisan research and analysis institution dedicated to providing insights of the think tank community publications”. It does, indeed, publish think-tank reports on issues such as Turkey and US-China relations; however, the reports are the work of other think tanks, often unattributed (the two mentioned in this sentence were actually produced by the Brookings Institution, although the website makes no mention of the fact). It also fails to provide an address, or any other contact details other than an email, and its (long) list of experts includes entries apparently copied and pasted from other institutions. Thus, the “think tank” website which shared the fake story appears to be a fake itself.” In this example a website which amplified a false narrative presented itself as a think tank (T0097.204: Think Tank Persona).

This is an entirely fabricated persona (T0143.002: Fabricated Persona); it republished content from other think tanks without attribution (T0084.002: Plagiarise Content) and fabricated experts (T0097.108: Expert Persona, T0143.002: Fabricated Persona) to make it more believable that they were a real think tank. | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0097.108: Expert Persona + +**Summary**: A person with an expert persona presents themselves as having expertise or experience in a field. Commonly the persona’s expertise will be called upon to add credibility to a given narrative.

While presenting as an expert is not an indication of inauthentic behaviour, an influence operation may have its narratives amplified by people presenting as experts. Threat actors can fabricate experts (T0143.002: Fabricated Persona, T0097.108: Expert Persona) to add credibility to their narratives.

People who are legitimate experts (T0143.001: Authentic Persona, T0097.108: Expert Persona) can make mistakes, use their persona for malicious purposes, or be exploited by threat actors. For example, someone could take money for using their position as an expert to provide legitimacy to a false narrative or be tricked into doing so without their knowledge. + **Tactic**: TA16 Establish Legitimacy

While presenting as seeking a romantic or physical connection is not an indication of inauthentic behaviour, threat actors can use dating apps, social media channels or dating websites to fabricate romantic suitors to lure targets they can blackmail, extract information from, deceive or trick into giving them money (T0143.002: Fabricated Persona, T0097.109: Romantic Suitor Persona).

Honeypotting in espionage and Pig Butchering in scamming are commonly associated with romantic suitor personas. +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0097 Present Persona + + +| Associated Technique | Description | +| --------- | ------------------------- | +| [T0151.017 Dating Platform](../../generated_pages/techniques/T0151.017.md) | Analysts can use this sub-technique for tagging cases where an account has been identified as using a dating platform. | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00064 Tinder nightmares: the promise and peril of political bots](../../generated_pages/incidents/I00064.md) | “In the days leading up to the UK’s [2019] general election, youths looking for love online encountered a whole new kind of Tinder nightmare. A group of young activists built a Tinder chatbot to co-opt profiles and persuade swing voters to support Labour. The bot accounts sent 30,000-40,000 messages to targeted 18-25 year olds in battleground constituencies like Dudley North, which Labour ended up winning by only 22 votes. [...]

“The activists maintain that the project was meant to foster democratic engagement. But screenshots of the bots’ activity expose a harsher reality. Images of conversations between real users and these bots, posted on i-D, Mashable, as well as on Fowler and Goodman’s public Twitter accounts, show that the bots did not identify themselves as automated accounts, instead posing as the user whose profile they had taken over. While conducting research for this story, it turned out that a number of [the reporters’ friends] living in Oxford had interacted with the bot in the lead up to the election and had no idea that it was not a real person.”


In this example people offered up their real accounts for the automation of political messaging; the actors convinced the users to give up access to their accounts to use in the operation. The actors maintained the accounts’ existing persona, and presented themselves as potential romantic suitors for legitimate platform users (T0097.109: Romantic Suitor Persona, T0143.003: Impersonated Persona, T0146: Account Asset, T0150.007: Rented Asset, T0151.017: Dating Platform). | +| [I00089 Hackers Use Fake Facebook Profiles of Attractive Women to Spread Viruses, Steal Passwords](../../generated_pages/incidents/I00089.md) | “On Facebook, Rita, Alona and Christina appeared to be just like the millions of other U.S citizens sharing their lives with the world. They discussed family outings, shared emojis and commented on each other's photographs.

“In reality, the three accounts were part of a highly-targeted cybercrime operation, used to spread malware that was able to steal passwords and spy on victims.

“Hackers with links to Lebanon likely ran the covert scheme using a strain of malware dubbed "Tempting Cedar Spyware," according to researchers from Prague-based anti-virus company Avast, which detailed its findings in a report released on Wednesday.

“In a honey trap tactic as old as time, the culprits' targets were mostly male, and lured by fake attractive women. 

“In the attack, hackers would send flirtatious messages using Facebook to the chosen victims, encouraging them to download a second, booby-trapped, chat application known as Kik Messenger to have "more secure" conversations. Upon analysis, Avast experts found that "many fell for the trap.””


In this example threat actors took on the persona of a romantic suitor on Facebook, directing their targets to another platform (T0097.109: Romantic Suitor Persona, T0145.006: Attractive Person Account Imagery, T0143.002: Fabricated Persona). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0097.109: Romantic Suitor Persona + +**Summary**: A person with a romantic suitor persona presents themselves as seeking a romantic or physical connection with another person.

While presenting as seeking a romantic or physical connection is not an indication of inauthentic behaviour, threat actors can use dating apps, social media channels or dating websites to fabricate romantic suitors to lure targets they can blackmail, extract information from, deceive or trick into giving them money (T0143.002: Fabricated Persona, T0097.109: Romantic Suitor Persona).

Honeypotting in espionage and Pig Butchering in scamming are commonly associated with romantic suitor personas. + +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0097 Present Persona + + +| Associated Technique | Description | +| --------- | ------------------------- | +| [T0151.017 Dating Platform](../../generated_pages/techniques/T0151.017.md) | Analysts can use this sub-technique for tagging cases where an account has been identified as using a dating platform. | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00064 Tinder nightmares: the promise and peril of political bots](../../generated_pages/incidents/I00064.md) | “In the days leading up to the UK’s [2019] general election, youths looking for love online encountered a whole new kind of Tinder nightmare. A group of young activists built a Tinder chatbot to co-opt profiles and persuade swing voters to support Labour. The bot accounts sent 30,000-40,000 messages to targeted 18-25 year olds in battleground constituencies like Dudley North, which Labour ended up winning by only 22 votes. [...]

“The activists maintain that the project was meant to foster democratic engagement. But screenshots of the bots’ activity expose a harsher reality. Images of conversations between real users and these bots, posted on i-D, Mashable, as well as on Fowler and Goodman’s public Twitter accounts, show that the bots did not identify themselves as automated accounts, instead posing as the user whose profile they had taken over. While conducting research for this story, it turned out that a number of [the reporters’ friends] living in Oxford had interacted with the bot in the lead up to the election and had no idea that it was not a real person.”


In this example people offered up their real accounts for the automation of political messaging; the actors convinced the users to give up access to their accounts to use in the operation. The actors maintained the accounts’ existing persona, and presented themselves as potential romantic suitors for legitimate platform users (T0097.109: Romantic Suitor Persona, T0143.003: Impersonated Persona, T0146: Account Asset, T0150.007: Rented Asset, T0151.017: Dating Platform). | +| [I00089 Hackers Use Fake Facebook Profiles of Attractive Women to Spread Viruses, Steal Passwords](../../generated_pages/incidents/I00089.md) | “On Facebook, Rita, Alona and Christina appeared to be just like the millions of other U.S citizens sharing their lives with the world. They discussed family outings, shared emojis and commented on each other's photographs.

“In reality, the three accounts were part of a highly-targeted cybercrime operation, used to spread malware that was able to steal passwords and spy on victims.

“Hackers with links to Lebanon likely ran the covert scheme using a strain of malware dubbed "Tempting Cedar Spyware," according to researchers from Prague-based anti-virus company Avast, which detailed its findings in a report released on Wednesday.

“In a honey trap tactic as old as time, the culprits' targets were mostly male, and lured by fake attractive women. 

“In the attack, hackers would send flirtatious messages using Facebook to the chosen victims, encouraging them to download a second, booby-trapped, chat application known as Kik Messenger to have "more secure" conversations. Upon analysis, Avast experts found that "many fell for the trap.””


In this example threat actors took on the persona of a romantic suitor on Facebook, directing their targets to another platform (T0097.109: Romantic Suitor Persona, T0145.006: Attractive Person Account Imagery, T0143.002: Fabricated Persona). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0097.109: Romantic Suitor Persona + +**Summary**: A person with a romantic suitor persona presents themselves as seeking a romantic or physical connection with another person.

While presenting as seeking a romantic or physical connection is not an indication of inauthentic behaviour, threat actors can use dating apps, social media channels or dating websites to fabricate romantic suitors to lure targets they can blackmail, extract information from, deceive or trick into giving them money (T0143.002: Fabricated Persona, T0097.109: Romantic Suitor Persona).

Honeypotting in espionage and Pig Butchering in scamming are commonly associated with romantic suitor personas. + **Tactic**: TA16 Establish Legitimacy

Presenting as an official of a political party is not an indication of inauthentic behaviour; however, threat actors may fabricate individuals who work in political parties to add credibility to their narratives (T0143.002: Fabricated Persona, T0097.110: Party Official Persona). They may also impersonate existing officials of political parties (T0143.003: Impersonated Persona, T0097.110: Party Official Persona).

Legitimate members of political parties could use their persona for malicious purposes, or be exploited by threat actors (T0143.001: Authentic Persona, T0097.110: Party Official Persona). For example, an electoral candidate could take money for using their position to provide legitimacy to a false narrative, or be tricked into doing so without their knowledge. +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0097 Present Persona + + +| Associated Technique | Description | +| --------- | ------------------------- | +| [T0097.111 Government Official Persona](../../generated_pages/techniques/T0097.111.md) | Analysts should use this sub-technique to catalogue cases where an individual is presenting as a member of a government. 

Some party officials will also be government officials. For example, in the United Kingdom the head of government is commonly also the head of their political party.

Some party officials won’t be government officials. For example, members of a party standing in an election, or party officials who work outside of government (e.g. campaign staff). | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00065 'Ghostwriter' Influence Campaign: Unknown Actors Leverage Website Compromises and Fabricated Content to Push Narratives Aligned With Russian Security Interests](../../generated_pages/incidents/I00065.md) | _“Overall, narratives promoted in the five operations appear to represent a concerted effort to discredit the ruling political coalition, widen existing domestic political divisions and project an image of coalition disunity in Poland. In each incident, content was primarily disseminated via Twitter, Facebook, and/or Instagram accounts belonging to Polish politicians, all of whom have publicly claimed their accounts were compromised at the times the posts were made.”_

This example demonstrates how threat actors can use compromised accounts to distribute inauthentic content while exploiting the legitimate account holder’s persona (T0097.110: Party Official Persona, T0143.003: Impersonated Persona, T0146: Account Asset, T0150.005: Compromised Asset). | +| [I00075 How Russia Meddles Abroad for Profit: Cash, Trolls and a Cult Leader](../../generated_pages/incidents/I00075.md) | “In the campaign’s final weeks, Pastor Mailhol said, the team of Russians made a request: Drop out of the race and support Mr. Rajoelina. He refused.

“The Russians made the same proposal to the history professor running for president, saying, “If you accept this deal you will have money” according to Ms. Rasamimanana, the professor’s campaign manager.

When the professor refused, she said, the Russians created a fake Facebook page that mimicked his official page and posted an announcement on it that he was supporting Mr. Rajoelina.”


In this example actors created online accounts styled to look like official pages to trick targets into thinking that the presidential candidate announced that they had dropped out of the election (T0097.110: Party Official Persona, T0143.003: Impersonated Persona). | +| [I00076 Network of Social Media Accounts Impersonates U.S. Political Candidates, Leverages U.S. and Israeli Media in Support of Iranian Interests](../../generated_pages/incidents/I00076.md) | “Some Twitter accounts in the network [of inauthentic accounts attributed to Iran] impersonated Republican political candidates that ran for House of Representatives seats in the 2018 U.S. congressional midterms. These accounts appropriated the candidates’ photographs and, in some cases, plagiarized tweets from the real individuals’ accounts. Aside from impersonating real U.S. political candidates, the behavior and activity of these accounts resembled that of the others in the network.

“For example, the account @livengood_marla impersonated Marla Livengood, a 2018 candidate for California’s 9th Congressional District, using a photograph of Livengood and a campaign banner for its profile and background pictures. The account began tweeting on Sept. 24, 2018, with its first tweet plagiarizing one from Livengood’s official account earlier that month”

[...]

“In another example, the account @ButlerJineea impersonated Jineea Butler, a 2018 candidate for New York’s 13th Congressional District, using a photograph of Butler for its profile picture and incorporating her campaign slogans into its background picture, as well as claiming in its Twitter bio to be a “US House candidate, NY-13” and linking to Butler’s website, jineeabutlerforcongress[.]com.”


In this example actors impersonated existing political candidates (T0097.110: Party Official Persona, T0143.003: Impersonated Persona), strengthening the impersonation by copying legitimate accounts’ imagery (T0145.001: Copy Account Imagery), and copying their previous posts (T0084.002: Plagiarise Content). | +| [I00096 China ramps up use of AI misinformation](../../generated_pages/incidents/I00096.md) | The Microsoft Threat Analysis Centre (MTAC) published a report documenting the use of AI by pro-Chinese threat actors:

On 13 January, Spamouflage [(a Pro-Chinese Communist Party actor)] posted audio clips to YouTube of independent candidate [for Taiwan’s Jan 2024 presidential election] Terry Gou – who also founded electronics giant Foxconn – in which Gou endorsed another candidate in the race. This clip was almost certainly AI-generated, and it was swiftly removed by YouTube. A fake letter purporting to be from Gou, endorsing the same candidate, had already circulated – Gou had of course made no such endorsement.

Here Spamouflage used an account on YouTube to post AI-generated audio impersonating an electoral candidate (T0146: Account Asset, T0152.006: Video Platform, T0115: Post Content, T0088.001: Develop AI-Generated Audio (Deepfakes), T0143.003: Impersonated Persona, T0097.110: Party Official Persona).

Spamouflage also exploited AI-powered video platform CapCut – which is owned by TikTok backers ByteDance – to generate fake news anchors which were used in a variety of campaigns targeting the various presidential candidates in Taiwan.

Spamouflage created accounts on CapCut, which it used to create AI-generated videos of fabricated news anchors (T0146: Account Asset, T0154.002: AI Media Platform, T0087.001: Develop AI-Generated Video (Deepfakes), T0143.002: Fabricated Persona, T0097.102: Journalist Persona). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0097.110: Party Official Persona + +**Summary**: A person who presents as an official member of a political party, such as leaders of political parties, candidates standing to represent constituents, and campaign staff.

Presenting as an official of a political party is not an indication of inauthentic behaviour; however, threat actors may fabricate individuals who work in political parties to add credibility to their narratives (T0143.002: Fabricated Persona, T0097.110: Party Official Persona). They may also impersonate existing officials of political parties (T0143.003: Impersonated Persona, T0097.110: Party Official Persona).

Legitimate members of political parties could use their persona for malicious purposes, or be exploited by threat actors (T0143.001: Authentic Persona, T0097.110: Party Official Persona). For example, an electoral candidate could take money for using their position to provide legitimacy to a false narrative, or be tricked into doing so without their knowledge. + +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0097 Present Persona + + +| Associated Technique | Description | +| --------- | ------------------------- | +| [T0097.111 Government Official Persona](../../generated_pages/techniques/T0097.111.md) | Analysts should use this sub-technique to catalogue cases where an individual is presenting as a member of a government. 

Some party officials will also be government officials. For example, in the United Kingdom the head of government is commonly also the head of their political party.

Some party officials won’t be government officials. For example, members of a party standing in an election, or party officials who work outside of government (e.g. campaign staff). | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00065 'Ghostwriter' Influence Campaign: Unknown Actors Leverage Website Compromises and Fabricated Content to Push Narratives Aligned With Russian Security Interests](../../generated_pages/incidents/I00065.md) | _“Overall, narratives promoted in the five operations appear to represent a concerted effort to discredit the ruling political coalition, widen existing domestic political divisions and project an image of coalition disunity in Poland. In each incident, content was primarily disseminated via Twitter, Facebook, and/or Instagram accounts belonging to Polish politicians, all of whom have publicly claimed their accounts were compromised at the times the posts were made.”_

This example demonstrates how threat actors can use compromised accounts to distribute inauthentic content while exploiting the legitimate account holder’s persona (T0097.110: Party Official Persona, T0143.003: Impersonated Persona, T0146: Account Asset, T0150.005: Compromised Asset). | +| [I00075 How Russia Meddles Abroad for Profit: Cash, Trolls and a Cult Leader](../../generated_pages/incidents/I00075.md) | “In the campaign’s final weeks, Pastor Mailhol said, the team of Russians made a request: Drop out of the race and support Mr. Rajoelina. He refused.

“The Russians made the same proposal to the history professor running for president, saying, “If you accept this deal you will have money” according to Ms. Rasamimanana, the professor’s campaign manager.

When the professor refused, she said, the Russians created a fake Facebook page that mimicked his official page and posted an announcement on it that he was supporting Mr. Rajoelina.”


In this example actors created online accounts styled to look like official pages to trick targets into thinking that the presidential candidate announced that they had dropped out of the election (T0097.110: Party Official Persona, T0143.003: Impersonated Persona). | +| [I00076 Network of Social Media Accounts Impersonates U.S. Political Candidates, Leverages U.S. and Israeli Media in Support of Iranian Interests](../../generated_pages/incidents/I00076.md) | “Some Twitter accounts in the network [of inauthentic accounts attributed to Iran] impersonated Republican political candidates that ran for House of Representatives seats in the 2018 U.S. congressional midterms. These accounts appropriated the candidates’ photographs and, in some cases, plagiarized tweets from the real individuals’ accounts. Aside from impersonating real U.S. political candidates, the behavior and activity of these accounts resembled that of the others in the network.

“For example, the account @livengood_marla impersonated Marla Livengood, a 2018 candidate for California’s 9th Congressional District, using a photograph of Livengood and a campaign banner for its profile and background pictures. The account began tweeting on Sept. 24, 2018, with its first tweet plagiarizing one from Livengood’s official account earlier that month”

[...]

“In another example, the account @ButlerJineea impersonated Jineea Butler, a 2018 candidate for New York’s 13th Congressional District, using a photograph of Butler for its profile picture and incorporating her campaign slogans into its background picture, as well as claiming in its Twitter bio to be a “US House candidate, NY-13” and linking to Butler’s website, jineeabutlerforcongress[.]com.”


In this example actors impersonated existing political candidates (T0097.110: Party Official Persona, T0143.003: Impersonated Persona), strengthening the impersonation by copying legitimate accounts’ imagery (T0145.001: Copy Account Imagery), and copying their previous posts (T0084.002: Plagiarise Content). | +| [I00096 China ramps up use of AI misinformation](../../generated_pages/incidents/I00096.md) | The Microsoft Threat Analysis Centre (MTAC) published a report documenting the use of AI by pro-Chinese threat actors:

On 13 January, Spamouflage [(a Pro-Chinese Communist Party actor)] posted audio clips to YouTube of independent candidate [for Taiwan’s Jan 2024 presidential election] Terry Gou – who also founded electronics giant Foxconn – in which Gou endorsed another candidate in the race. This clip was almost certainly AI-generated, and it was swiftly removed by YouTube. A fake letter purporting to be from Gou, endorsing the same candidate, had already circulated – Gou had of course made no such endorsement.

Here Spamouflage used an account on YouTube to post AI-generated audio impersonating an electoral candidate (T0146: Account Asset, T0152.006: Video Platform, T0115: Post Content, T0088.001: Develop AI-Generated Audio (Deepfakes), T0143.003: Impersonated Persona, T0097.110: Party Official Persona).

Spamouflage also exploited AI-powered video platform CapCut – which is owned by TikTok backers ByteDance – to generate fake news anchors which were used in a variety of campaigns targeting the various presidential candidates in Taiwan.

Spamouflage created accounts on CapCut, which it used to produce AI-generated videos of fabricated news anchors (T0146: Account Asset, T0154.002: AI Media Platform, T0087.001: Develop AI-Generated Video (Deepfakes), T0143.002: Fabricated Persona, T0097.102: Journalist Persona). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0097.110: Party Official Persona + +**Summary**: A person who presents as an official member of a political party, such as leaders of political parties, candidates standing to represent constituents, and campaign staff.

Presenting as an official of a political party is not an indication of inauthentic behaviour; however, threat actors may fabricate individuals who work in political parties to add credibility to their narratives (T0143.002: Fabricated Persona, T0097.110: Party Official Persona). They may also impersonate existing officials of political parties (T0143.003: Impersonated Persona, T0097.110: Party Official Persona).

Legitimate members of political parties could use their persona for malicious purposes, or be exploited by threat actors (T0143.001: Authentic Persona, T0097.110: Party Official Persona). For example, an electoral candidate could take money for using their position to provide legitimacy to a false narrative, or be tricked into doing so without their knowledge. + **Tactic**: TA16 Establish Legitimacy @@ -24,4 +76,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0097.111.md b/generated_pages/techniques/T0097.111.md index 746811f..1b92454 100644 --- a/generated_pages/techniques/T0097.111.md +++ b/generated_pages/techniques/T0097.111.md @@ -2,6 +2,62 @@ **Summary**: A person who presents as an active or previous government official has the government official persona. These are officials serving in government, such as heads of government departments, leaders of countries, and members of government selected to represent constituents.

Presenting as a government official is not an indication of inauthentic behaviour; however, threat actors may fabricate individuals who work in government to add credibility to their narratives (T0143.002: Fabricated Persona, T0097.111: Government Official Persona). They may also impersonate existing members of government (T0143.003: Impersonated Persona, T0097.111: Government Official Persona).

Legitimate government officials could use their persona for malicious purposes, or be exploited by threat actors (T0143.001: Authentic Persona, T0097.111: Government Official Persona). For example, a government official could take money for using their position to provide legitimacy to a false narrative, or be tricked into doing so without their knowledge. +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0097 Present Persona + + +| Associated Technique | Description | +| --------- | ------------------------- | +| [T0097.110 Party Official Persona](../../generated_pages/techniques/T0097.110.md) | Analysts should use this sub-technique to catalogue cases where an individual is presenting as a member of a political party. 

Not all government officials are political party officials (such as outside experts brought into government) and not all political party officials are government officials (such as people standing for office who are not yet working in government). | +| [T0097.112 Government Employee Persona](../../generated_pages/techniques/T0097.112.md) | Analysts should use this sub-technique to document people presenting as professionals hired to serve in government institutions and departments, not officials selected to represent constituents, or assigned official roles in government (such as heads of departments). | +| [T0097.206 Government Institution Persona](../../generated_pages/techniques/T0097.206.md) | People presenting as members of a government may also represent a government institution which they are associated with. | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00071 Russia-aligned hacktivists stir up anti-Ukrainian sentiments in Poland](../../generated_pages/incidents/I00071.md) | “On August 16, 2022, pro-Kremlin Telegram channel Joker DPR (Джокер ДНР) published a forged letter allegedly written by Ukrainian Foreign Minister Dmytro Kuleba. In the letter, Kuleba supposedly asked relevant Polish authorities to rename Belwederska Street in Warsaw — the location of the Russian embassy building — as Stepan Bandera Street, in honor of the far-right nationalist who led the Ukrainian Insurgent Army during WWII.

[...]

The letter is not dated, and Dmytro Kuleba’s signature seems to be copied from a publicly available letter signed by him in 2021.”


In this example the Telegram channel Joker DPR published a forged letter (T0085.004: Develop Document) in which they impersonated the Ukrainian Minister of Foreign Affairs (T0097.111: Government Official Persona, T0143.003: Impersonated Persona), using Ministry letterhead (T0097.206: Government Institution Persona, T0143.003: Impersonated Persona). | +| [I00084 Russia turns its diplomats into disinformation warriors](../../generated_pages/incidents/I00084.md) | “After the European Union banned Kremlin-backed media outlets and social media giants demoted their posts for peddling falsehoods about the war in Ukraine, Moscow has turned to its cadre of diplomats, government spokespeople and ministers — many of whom have extensive followings on social media — to promote disinformation about the conflict in Eastern Europe, according to four EU and United States officials.”

In this example authentic Russian government officials used their own accounts to promote false narratives (T0143.001: Authentic Persona, T0097.111: Government Official Persona).

The use of accounts managed by authentic government officials and diplomats to spread false narratives makes it harder for platforms to enforce content moderation, because of the political ramifications they may face for censoring elected officials (T0131: Exploit TOS/Content Moderation). For example, Twitter previously argued that official channels of world leaders are not removed due to the high public interest associated with their activities. | +| [I00085 China’s large-scale media push: Attempts to influence Swedish media](../../generated_pages/incidents/I00085.md) | “Four media companies – Svenska Dagbladet, Expressen, Sveriges Radio, and Sveriges Television – stated that they had been contacted by the Chinese embassy on several occasions, and that they, for instance, had been criticized on their publications, both by letters and e-mails.

The media company Svenska Dagbladet, had been contacted on several occasions in the past two years, including via e-mails directly from the Chinese ambassador to Sweden. Several times, China and the Chinese ambassador had criticized the media company’s publications regarding the conditions in China. Individual reporters also reported having been subjected to criticism.

The tabloid Expressen had received several letters and e-mails from the embassy, e-mails containing criticism and threatening formulations regarding the coverage of the Swedish book publisher Gui Minhai, who has been imprisoned in China since 2015. Formulations such as “media tyranny” could be found in the e-mails.”


In this case, the Chinese ambassador is using their official role (T0143.001: Authentic Persona, T0097.111: Government Official Persona) to try to influence the Swedish press. A government official trying to interfere in other countries' media activities could be a violation of press freedom. In this specific case, the Chinese diplomats are trying to silence criticism against China (T0139.002: Silence). | +| [I00093 China Falsely Denies Disinformation Campaign Targeting Canada’s Prime Minister](../../generated_pages/incidents/I00093.md) | “On October 23, Canada’s Foreign Ministry said it had discovered a disinformation campaign, likely tied to China, aimed at discrediting dozens of Canadian politicians, including Prime Minister Justin Trudeau.

“The ministry said the campaign took place in August and September. It used new and hijacked social media accounts to bulk-post messages targeting Canadian politicians (T0146: Account Asset, T0150.001: Newly Created Asset, T0150.005: Compromised Asset).

“A Chinese Embassy in Canada spokesperson dismissed Canada’s accusation as baseless.

““Canada was a downright liar and disseminator of false information… Beijing has never meddled in another nation’s domestic affairs.”

“That is false.

“The Canadian government's report is based on an investigation conducted by its Rapid Response Mechanism cyber intelligence unit in cooperation with the social media platforms.

“The investigation exposed China’s disinformation campaign dubbed “Spamouflage” -- for its tactic of using “a network of new or hijacked social media accounts that posts and increases the number of propaganda messages across multiple social media platforms – including Facebook, X/Twitter, Instagram, YouTube, Medium, Reddit, TikTok, and LinkedIn.””


In this case a network of accounts attributed to China was identified operating on multiple platforms. The report was dismissed as false information by an official in the Chinese Embassy in Canada (T0143.001: Authentic Persona, T0097.111: Government Official Persona, T0129.006: Deny Involvement). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0097.111: Government Official Persona + +**Summary**: A person who presents as an active or previous government official has the government official persona. These are officials serving in government, such as heads of government departments, leaders of countries, and members of government selected to represent constituents.

Presenting as a government official is not an indication of inauthentic behaviour; however, threat actors may fabricate individuals who work in government to add credibility to their narratives (T0143.002: Fabricated Persona, T0097.111: Government Official Persona). They may also impersonate existing members of government (T0143.003: Impersonated Persona, T0097.111: Government Official Persona).

Legitimate government officials could use their persona for malicious purposes, or be exploited by threat actors (T0143.001: Authentic Persona, T0097.111: Government Official Persona). For example, a government official could take money for using their position to provide legitimacy to a false narrative, or be tricked into doing so without their knowledge. + +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0097 Present Persona + + +| Associated Technique | Description | +| --------- | ------------------------- | +| [T0097.110 Party Official Persona](../../generated_pages/techniques/T0097.110.md) | Analysts should use this sub-technique to catalogue cases where an individual is presenting as a member of a political party. 

Not all government officials are political party officials (such as outside experts brought into government) and not all political party officials are government officials (such as people standing for office who are not yet working in government). | +| [T0097.112 Government Employee Persona](../../generated_pages/techniques/T0097.112.md) | Analysts should use this sub-technique to document people presenting as professionals hired to serve in government institutions and departments, not officials selected to represent constituents, or assigned official roles in government (such as heads of departments). | +| [T0097.206 Government Institution Persona](../../generated_pages/techniques/T0097.206.md) | People presenting as members of a government may also represent a government institution which they are associated with. | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00071 Russia-aligned hacktivists stir up anti-Ukrainian sentiments in Poland](../../generated_pages/incidents/I00071.md) | “On August 16, 2022, pro-Kremlin Telegram channel Joker DPR (Джокер ДНР) published a forged letter allegedly written by Ukrainian Foreign Minister Dmytro Kuleba. In the letter, Kuleba supposedly asked relevant Polish authorities to rename Belwederska Street in Warsaw — the location of the Russian embassy building — as Stepan Bandera Street, in honor of the far-right nationalist who led the Ukrainian Insurgent Army during WWII.

[...]

The letter is not dated, and Dmytro Kuleba’s signature seems to be copied from a publicly available letter signed by him in 2021.”


In this example the Telegram channel Joker DPR published a forged letter (T0085.004: Develop Document) in which they impersonated the Ukrainian Minister of Foreign Affairs (T0097.111: Government Official Persona, T0143.003: Impersonated Persona), using Ministry letterhead (T0097.206: Government Institution Persona, T0143.003: Impersonated Persona). | +| [I00084 Russia turns its diplomats into disinformation warriors](../../generated_pages/incidents/I00084.md) | “After the European Union banned Kremlin-backed media outlets and social media giants demoted their posts for peddling falsehoods about the war in Ukraine, Moscow has turned to its cadre of diplomats, government spokespeople and ministers — many of whom have extensive followings on social media — to promote disinformation about the conflict in Eastern Europe, according to four EU and United States officials.”

In this example authentic Russian government officials used their own accounts to promote false narratives (T0143.001: Authentic Persona, T0097.111: Government Official Persona).

The use of accounts managed by authentic government officials and diplomats to spread false narratives makes it harder for platforms to enforce content moderation, because of the political ramifications they may face for censoring elected officials (T0131: Exploit TOS/Content Moderation). For example, Twitter previously argued that official channels of world leaders are not removed due to the high public interest associated with their activities. | +| [I00085 China’s large-scale media push: Attempts to influence Swedish media](../../generated_pages/incidents/I00085.md) | “Four media companies – Svenska Dagbladet, Expressen, Sveriges Radio, and Sveriges Television – stated that they had been contacted by the Chinese embassy on several occasions, and that they, for instance, had been criticized on their publications, both by letters and e-mails.

The media company Svenska Dagbladet, had been contacted on several occasions in the past two years, including via e-mails directly from the Chinese ambassador to Sweden. Several times, China and the Chinese ambassador had criticized the media company’s publications regarding the conditions in China. Individual reporters also reported having been subjected to criticism.

The tabloid Expressen had received several letters and e-mails from the embassy, e-mails containing criticism and threatening formulations regarding the coverage of the Swedish book publisher Gui Minhai, who has been imprisoned in China since 2015. Formulations such as “media tyranny” could be found in the e-mails.”


In this case, the Chinese ambassador is using their official role (T0143.001: Authentic Persona, T0097.111: Government Official Persona) to try to influence the Swedish press. A government official trying to interfere in other countries' media activities could be a violation of press freedom. In this specific case, the Chinese diplomats are trying to silence criticism against China (T0139.002: Silence). | +| [I00093 China Falsely Denies Disinformation Campaign Targeting Canada’s Prime Minister](../../generated_pages/incidents/I00093.md) | “On October 23, Canada’s Foreign Ministry said it had discovered a disinformation campaign, likely tied to China, aimed at discrediting dozens of Canadian politicians, including Prime Minister Justin Trudeau.

“The ministry said the campaign took place in August and September. It used new and hijacked social media accounts to bulk-post messages targeting Canadian politicians (T0146: Account Asset, T0150.001: Newly Created Asset, T0150.005: Compromised Asset).

“A Chinese Embassy in Canada spokesperson dismissed Canada’s accusation as baseless.

““Canada was a downright liar and disseminator of false information… Beijing has never meddled in another nation’s domestic affairs.”

“That is false.

“The Canadian government's report is based on an investigation conducted by its Rapid Response Mechanism cyber intelligence unit in cooperation with the social media platforms.

“The investigation exposed China’s disinformation campaign dubbed “Spamouflage” -- for its tactic of using “a network of new or hijacked social media accounts that posts and increases the number of propaganda messages across multiple social media platforms – including Facebook, X/Twitter, Instagram, YouTube, Medium, Reddit, TikTok, and LinkedIn.””


In this case a network of accounts attributed to China was identified operating on multiple platforms. The report was dismissed as false information by an official in the Chinese Embassy in Canada (T0143.001: Authentic Persona, T0097.111: Government Official Persona, T0129.006: Deny Involvement). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0097.111: Government Official Persona + +**Summary**: A person who presents as an active or previous government official has the government official persona. These are officials serving in government, such as heads of government departments, leaders of countries, and members of government selected to represent constituents.

Presenting as a government official is not an indication of inauthentic behaviour; however, threat actors may fabricate individuals who work in government to add credibility to their narratives (T0143.002: Fabricated Persona, T0097.111: Government Official Persona). They may also impersonate existing members of government (T0143.003: Impersonated Persona, T0097.111: Government Official Persona).

Legitimate government officials could use their persona for malicious purposes, or be exploited by threat actors (T0143.001: Authentic Persona, T0097.111: Government Official Persona). For example, a government official could take money for using their position to provide legitimacy to a false narrative, or be tricked into doing so without their knowledge. + **Tactic**: TA16 Establish Legitimacy @@ -26,4 +82,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0097.112.md b/generated_pages/techniques/T0097.112.md index 0fd30f2..4fb6ffb 100644 --- a/generated_pages/techniques/T0097.112.md +++ b/generated_pages/techniques/T0097.112.md @@ -2,6 +2,52 @@ **Summary**: A person who presents as an active or previous civil servant has the government employee persona. These are professionals hired to serve in government institutions and departments, not officials selected to represent constituents, or assigned official roles in government (such as heads of departments).

Presenting as a government employee is not an indication of inauthentic behaviour; however, threat actors may fabricate individuals who work in government to add credibility to their narratives (T0143.002: Fabricated Persona, T0097.112: Government Employee Persona). They may also impersonate existing government employees (T0143.003: Impersonated Persona, T0097.112: Government Employee Persona).

Legitimate government employees could use their persona for malicious purposes, or be exploited by threat actors (T0143.001: Authentic Persona, T0097.112: Government Employee Persona). For example, a government employee could take money for using their position to provide legitimacy to a false narrative, or be tricked into doing so without their knowledge. +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0097 Present Persona + + +| Associated Technique | Description | +| --------- | ------------------------- | +| [T0097.111 Government Official Persona](../../generated_pages/techniques/T0097.111.md) | Analysts should use this technique to document people who present as an active or previous government official, such as heads of government departments, leaders of countries, and members of government selected to represent constituents. | +| [T0097.206 Government Institution Persona](../../generated_pages/techniques/T0097.206.md) | People presenting as members of a government may also present a government institution which they are associated with. | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0097.112: Government Employee Persona + +**Summary**: A person who presents as an active or previous civil servant has the government employee persona. These are professionals hired to serve in government institutions and departments, not officials selected to represent constituents, or assigned official roles in government (such as heads of departments).

Presenting as a government employee is not an indication of inauthentic behaviour; however, threat actors may fabricate individuals who work in government to add credibility to their narratives (T0143.002: Fabricated Persona, T0097.112: Government Employee Persona). They may also impersonate existing government employees (T0143.003: Impersonated Persona, T0097.112: Government Employee Persona).

Legitimate government employees could use their persona for malicious purposes, or be exploited by threat actors (T0143.001: Authentic Persona, T0097.112: Government Employee Persona). For example, a government employee could take money for using their position to provide legitimacy to a false narrative, or be tricked into doing so without their knowledge. + +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0097 Present Persona + + +| Associated Technique | Description | +| --------- | ------------------------- | +| [T0097.111 Government Official Persona](../../generated_pages/techniques/T0097.111.md) | Analysts should use this technique to document people who present as an active or previous government official, such as heads of government departments, leaders of countries, and members of government selected to represent constituents. | +| [T0097.206 Government Institution Persona](../../generated_pages/techniques/T0097.206.md) | People presenting as members of a government may also present a government institution which they are associated with. | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0097.112: Government Employee Persona + +**Summary**: A person who presents as an active or previous civil servant has the government employee persona. These are professionals hired to serve in government institutions and departments, not officials selected to represent constituents, or assigned official roles in government (such as heads of departments).

Presenting as a government employee is not an indication of inauthentic behaviour; however, threat actors may fabricate individuals who work in government to add credibility to their narratives (T0143.002: Fabricated Persona, T0097.112: Government Employee Persona). They may also impersonate existing government employees (T0143.003: Impersonated Persona, T0097.112: Government Employee Persona).

Legitimate government employees could use their persona for malicious purposes, or be exploited by threat actors (T0143.001: Authentic Persona, T0097.112: Government Employee Persona). For example, a government employee could take money for using their position to provide legitimacy to a false narrative, or be tricked into doing so without their knowledge. + **Tactic**: TA16 Establish Legitimacy @@ -21,4 +67,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0097.200.md b/generated_pages/techniques/T0097.200.md index 64cdebe..bd2ade4 100644 --- a/generated_pages/techniques/T0097.200.md +++ b/generated_pages/techniques/T0097.200.md @@ -2,6 +2,48 @@ **Summary**: This Technique can be used to indicate that an entity is presenting itself as an institution. If the organisation is presenting itself as having one of the personas listed below then these Techniques should be used instead, as they indicate both that the entity presented itself as an institution, and the type of persona they presented:

T0097.201: Local Institution Persona
T0097.202: News Outlet Persona
T0097.203: Fact Checking Organisation Persona
T0097.204: Think Tank Persona
T0097.205: Business Persona
T0097.206: Government Institution Persona
T0097.207: NGO Persona
T0097.208: Social Cause Persona +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0097 Present Persona + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0097.200: Institutional Persona + +**Summary**: This Technique can be used to indicate that an entity is presenting itself as an institution. If the organisation is presenting itself as having one of the personas listed below then these Techniques should be used instead, as they indicate both that the entity presented itself as an institution, and the type of persona they presented:

T0097.201: Local Institution Persona
T0097.202: News Outlet Persona
T0097.203: Fact Checking Organisation Persona
T0097.204: Think Tank Persona
T0097.205: Business Persona
T0097.206: Government Institution Persona
T0097.207: NGO Persona
T0097.208: Social Cause Persona + +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0097 Present Persona + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0097.200: Institutional Persona + +**Summary**: This Technique can be used to indicate that an entity is presenting itself as an institution. If the organisation is presenting itself as having one of the personas listed below then these Techniques should be used instead, as they indicate both that the entity presented itself as an institution, and the type of persona they presented:

T0097.201: Local Institution Persona
T0097.202: News Outlet Persona
T0097.203: Fact Checking Organisation Persona
T0097.204: Think Tank Persona
T0097.205: Business Persona
T0097.206: Government Institution Persona
T0097.207: NGO Persona
T0097.208: Social Cause Persona + **Tactic**: TA16 Establish Legitimacy @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0097.201.md b/generated_pages/techniques/T0097.201.md index 2630bdb..de39b9b 100644 --- a/generated_pages/techniques/T0097.201.md +++ b/generated_pages/techniques/T0097.201.md @@ -2,6 +2,52 @@ **Summary**: Institutions which present themselves as operating in a particular geography, or as having local knowledge relevant to a narrative, are presenting a local institution persona.

While presenting as a local institution is not an indication of inauthentic behaviour, threat actors may present themselves as such (T0143.002: Fabricated Persona, T0097.201: Local Institution Persona) to add credibility to their narratives, or misrepresent the real opinions of locals in the area.

Legitimate local institutions could use their persona for malicious purposes, or be exploited by threat actors (T0143.001: Authentic Persona, T0097.201: Local Institution Persona). For example, a local institution could take money for using their position to provide legitimacy to a false narrative, or be tricked into doing so without their knowledge. +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0097 Present Persona + + +| Associated Technique | Description | +| --------- | ------------------------- | +| [T0097.101 Local Persona](../../generated_pages/techniques/T0097.101.md) | Institutions presenting as local may also present locals working within the organisation. | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00094 A glimpse inside a Chinese influence campaign: How bogus news websites blur the line between true and false](../../generated_pages/incidents/I00094.md) | Researchers identified websites managed by a Chinese marketing firm which presented themselves as news organisations.

“On its official website, the Chinese marketing firm boasted that they were in contact with news organizations across the globe, including one in South Korea called the “Chungcheng Times.” According to the joint team, this outlet is a fictional news organization created by the offending company. The Chinese company sought to disguise the site’s true identity and purpose by altering the name attached to it by one character—making it very closely resemble the name of a legitimate outlet operating out of Chungchengbuk-do.

“The marketing firm also established a news organization under the Korean name “Gyeonggido Daily,” which closely resembles legitimate news outlets operating out of Gyeonggi province such as “Gyeonggi Daily,” “Daily Gyeonggi Newspaper,” and “Gyeonggi N Daily.” One of the fake news sites was named “Incheon Focus,” a title that could be easily mistaken for the legitimate local news outlet, “Focus Incheon.” Furthermore, the Chinese marketing company operated two fake news sites with names identical to two separate local news organizations, one of which ceased operations in December 2022.

“In total, fifteen out of eighteen Chinese fake news sites incorporated the correct names of real regions in their fake company names. “If the operators had created fake news sites similar to major news organizations based in Seoul, however, the intended deception would have easily been uncovered,” explained Song Tae-eun, an assistant professor in the Department of National Security & Unification Studies at the Korea National Diplomatic Academy, to The Readable. “There is also the possibility that they are using the regional areas as an attempt to form ties with the local community; that being the government, the private sector, and religious communities.””


The firm styled their news sites to resemble existing local news outlets in their target region (T0097.201: Local Institution Persona, T0097.202: News Outlet Persona, T0143.003: Impersonated Persona). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0097.201: Local Institution Persona + +**Summary**: Institutions which present themselves as operating in a particular geography, or as having local knowledge relevant to a narrative, are presenting a local institution persona.

While presenting as a local institution is not an indication of inauthentic behaviour, threat actors may present themselves as such (T0143.002: Fabricated Persona, T0097.201: Local Institution Persona) to add credibility to their narratives, or misrepresent the real opinions of locals in the area.

Legitimate local institutions could use their persona for malicious purposes, or be exploited by threat actors (T0143.001: Authentic Persona, T0097.201: Local Institution Persona). For example, a local institution could take money for using their position to provide legitimacy to a false narrative, or be tricked into doing so without their knowledge. + +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0097 Present Persona + + +| Associated Technique | Description | +| --------- | ------------------------- | +| [T0097.101 Local Persona](../../generated_pages/techniques/T0097.101.md) | Institutions presenting as local may also present locals working within the organisation. | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00094 A glimpse inside a Chinese influence campaign: How bogus news websites blur the line between true and false](../../generated_pages/incidents/I00094.md) | Researchers identified websites managed by a Chinese marketing firm which presented themselves as news organisations.

“On its official website, the Chinese marketing firm boasted that they were in contact with news organizations across the globe, including one in South Korea called the “Chungcheng Times.” According to the joint team, this outlet is a fictional news organization created by the offending company. The Chinese company sought to disguise the site’s true identity and purpose by altering the name attached to it by one character—making it very closely resemble the name of a legitimate outlet operating out of Chungchengbuk-do.

“The marketing firm also established a news organization under the Korean name “Gyeonggido Daily,” which closely resembles legitimate news outlets operating out of Gyeonggi province such as “Gyeonggi Daily,” “Daily Gyeonggi Newspaper,” and “Gyeonggi N Daily.” One of the fake news sites was named “Incheon Focus,” a title that could be easily mistaken for the legitimate local news outlet, “Focus Incheon.” Furthermore, the Chinese marketing company operated two fake news sites with names identical to two separate local news organizations, one of which ceased operations in December 2022.

“In total, fifteen out of eighteen Chinese fake news sites incorporated the correct names of real regions in their fake company names. “If the operators had created fake news sites similar to major news organizations based in Seoul, however, the intended deception would have easily been uncovered,” explained Song Tae-eun, an assistant professor in the Department of National Security & Unification Studies at the Korea National Diplomatic Academy, to The Readable. “There is also the possibility that they are using the regional areas as an attempt to form ties with the local community; that being the government, the private sector, and religious communities.””


The firm styled their news sites to resemble existing local news outlets in their target region (T0097.201: Local Institution Persona, T0097.202: News Outlet Persona, T0143.003: Impersonated Persona). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0097.201: Local Institution Persona + +**Summary**: Institutions which present themselves as operating in a particular geography, or as having local knowledge relevant to a narrative, are presenting a local institution persona.

While presenting as a local institution is not an indication of inauthentic behaviour, threat actors may present themselves as such (T0143.002: Fabricated Persona, T0097.201: Local Institution Persona) to add credibility to their narratives, or misrepresent the real opinions of locals in the area.

Legitimate local institutions could use their persona for malicious purposes, or be exploited by threat actors (T0143.001: Authentic Persona, T0097.201: Local Institution Persona). For example, a local institution could take money for using their position to provide legitimacy to a false narrative, or be tricked into doing so without their knowledge. + **Tactic**: TA16 Establish Legitimacy @@ -21,4 +67,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0097.202.md b/generated_pages/techniques/T0097.202.md index 3f2eef7..9e2780f 100644 --- a/generated_pages/techniques/T0097.202.md +++ b/generated_pages/techniques/T0097.202.md @@ -2,6 +2,68 @@ **Summary**: An institution with a news outlet persona presents itself as an organisation which delivers new information to its target audience.

While presenting as a news outlet is not an indication of inauthentic behaviour, an influence operation may have its narratives amplified by news organisations. Threat actors can fabricate news organisations (T0143.002: Fabricated Persona, T0097.202: News Outlet Persona), or they can impersonate existing news outlets (T0143.003: Impersonated Persona, T0097.202: News Outlet Persona).

Legitimate news organisations could use their persona for malicious purposes, or be exploited by threat actors (T0143.001: Authentic Persona, T0097.202: News Outlet Persona). +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0097 Present Persona + + +| Associated Technique | Description | +| --------- | ------------------------- | +| [T0097.102 Journalist Persona](../../generated_pages/techniques/T0097.102.md) | Institutions presenting as news outlets may also present journalists working within the organisation. | +| [T0097.201 Local Institution Persona](../../generated_pages/techniques/T0097.201.md) | Institutions presenting as news outlets may present as being a local news outlet. | +| [T0097.203 Fact Checking Organisation Persona](../../generated_pages/techniques/T0097.203.md) | Institutions presenting as news outlets may also deliver a fact checking service (e.g. The UK’s BBC News has the fact checking service BBC Verify). When an actor presents as the fact checking arm of a news outlet, they are presenting both a News Outlet Persona and a Fact Checking Organisation Persona. | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00074 The Tactics & Tropes of the Internet Research Agency](../../generated_pages/incidents/I00074.md) | “The Black Matters Facebook Page [operated by Russia’s Internet Research Agency] explored several visual brand identities, moving from a plain logo to a gothic typeface on Jan 19th, 2016. On February 4th, 2016, the person who ran the Facebook Page announced the launch of the website, blackmattersus[.]com, emphasizing media distrust and a desire to build Black independent media; [“I DIDN’T BELIEVE THE MEDIA / SO I BECAME ONE”]”

In this example an asset controlled by Russia’s Internet Research Agency began to present itself as a source of “Black independent media”, claiming that the media could not be trusted (T0097.208: Social Cause Persona, T0097.202: News Outlet Persona, T0143.002: Fabricated Persona). | +| [I00076 Network of Social Media Accounts Impersonates U.S. Political Candidates, Leverages U.S. and Israeli Media in Support of Iranian Interests](../../generated_pages/incidents/I00076.md) | “Accounts in the network [of inauthentic accounts attributed to Iran], under the guise of journalist personas, also solicited various individuals over Twitter for interviews and chats, including real journalists and politicians. The personas appear to have successfully conducted remote video and audio interviews with U.S. and UK-based individuals, including a prominent activist, a radio talk show host, and a former U.S. Government official, and subsequently posted the interviews on social media, showing only the individual being interviewed and not the interviewer. The interviewees expressed views that Iran would likely find favorable, discussing topics such as the February 2019 Warsaw summit, an attack on a military parade in the Iranian city of Ahvaz, and the killing of Jamal Khashoggi.

“The provenance of these interviews appear to have been misrepresented on at least one occasion, with one persona appearing to have falsely claimed to be operating on behalf of a mainstream news outlet; a remote video interview with a US-based activist about the Jamal Khashoggi killing was posted by an account adopting the persona of a journalist from the outlet Newsday, with the Newsday logo also appearing in the video. We did not identify any Newsday interview with the activist in question on this topic. In another instance, a persona posing as a journalist directed tweets containing audio of an interview conducted with a former U.S. Government official at real media personalities, calling on them to post about the interview.”


In this example actors fabricated journalists (T0097.102: Journalist Persona, T0143.002: Fabricated Persona) who claimed to work at existing news outlets (T0097.202: News Outlet Persona, T0143.003: Impersonated Persona) in order to conduct interviews with targeted individuals. | +| [I00077 Fronts & Friends: An Investigation into Two Twitter Networks Linked to Russian Actors](../../generated_pages/incidents/I00077.md) | “Two accounts [in the second network of accounts taken down by Twitter] appear to have been operated by Oriental Review and the Strategic Culture Foundation, respectively. Oriental Review bills itself as an “open source site for free thinking”, though it trades in outlandish conspiracy theories and posts content bylined by fake people. Stanford Internet Observatory researchers and investigative journalists have previously noted the presence of content bylined by fake “reporter” personas tied to the GRU-linked front Inside Syria Media Center, posted on Oriental Review.”

In an effort to make the Oriental Review’s stories appear more credible, the threat actors created fake journalists and pretended they wrote the articles on their website (i.e. “bylined” them).

In DISARM terms, they fabricated journalists (T0143.002: Fabricated Persona, T0097.102: Journalist Persona), and then used these fabricated journalists to increase the perceived legitimacy of their stories (T0097.202: News Outlet Persona, T0143.002: Fabricated Persona). | +| [I00079 Three thousand fake tanks](../../generated_pages/incidents/I00079.md) | “On January 4 [2017], a little-known news site based in Donetsk, Ukraine published an article claiming that the United States was sending 3,600 tanks to Europe as part of “the NATO war preparation against Russia”.

“Like much fake news, this story started with a grain of truth: the US was about to reinforce its armored units in Europe. However, the article converted literally thousands of other vehicles — including hundreds of Humvees and trailers — into tanks, building the US force into something 20 times more powerful than it actually was.

“The story caught on online. Within three days it had been repeated by a dozen websites in the United States, Canada and Europe, and shared some 40,000 times. It was translated into Norwegian; quoted, unchallenged, by Russian state news agency RIA Novosti; and spread among Russian-language websites.

“It was also an obvious fake, as any Google news search would have revealed. Yet despite its evident falsehood, it spread widely, and not just in directly Kremlin-run media. Tracking the spread of this fake therefore shines a light on the wider question of how fake stories are dispersed.”


Russian state news agency RIA Novosti presents itself as a news outlet (T0097.202: News Outlet Persona). RIA Novosti is a real news outlet (T0143.001: Authentic Persona), but it did not carry out the basic verification of the narrative it published that is implicitly expected of institutions presenting themselves as news outlets.

We can’t know how or why this narrative ended up being published by RIA Novosti, but we know that it presented a distorted reality as authentic information (T0023: Distort Facts), claiming that the US was sending 3,600 tanks, instead of 3,600 vehicles which included ~180 tanks. | +| [I00094 A glimpse inside a Chinese influence campaign: How bogus news websites blur the line between true and false](../../generated_pages/incidents/I00094.md) | Researchers identified websites managed by a Chinese marketing firm which presented themselves as news organisations.

“On its official website, the Chinese marketing firm boasted that they were in contact with news organizations across the globe, including one in South Korea called the “Chungcheng Times.” According to the joint team, this outlet is a fictional news organization created by the offending company. The Chinese company sought to disguise the site’s true identity and purpose by altering the name attached to it by one character—making it very closely resemble the name of a legitimate outlet operating out of Chungchengbuk-do.

“The marketing firm also established a news organization under the Korean name “Gyeonggido Daily,” which closely resembles legitimate news outlets operating out of Gyeonggi province such as “Gyeonggi Daily,” “Daily Gyeonggi Newspaper,” and “Gyeonggi N Daily.” One of the fake news sites was named “Incheon Focus,” a title that could be easily mistaken for the legitimate local news outlet, “Focus Incheon.” Furthermore, the Chinese marketing company operated two fake news sites with names identical to two separate local news organizations, one of which ceased operations in December 2022.

“In total, fifteen out of eighteen Chinese fake news sites incorporated the correct names of real regions in their fake company names. “If the operators had created fake news sites similar to major news organizations based in Seoul, however, the intended deception would have easily been uncovered,” explained Song Tae-eun, an assistant professor in the Department of National Security & Unification Studies at the Korea National Diplomatic Academy, to The Readable. “There is also the possibility that they are using the regional areas as an attempt to form ties with the local community; that being the government, the private sector, and religious communities.””


The firm styled their news sites to resemble existing local news outlets in their target region (T0097.201: Local Institution Persona, T0097.202: News Outlet Persona, T0143.003: Impersonated Persona). | +| [I00126 Charming Kitten Updates POWERSTAR with an InterPlanetary Twist](../../generated_pages/incidents/I00126.md) | The target of the recently observed [highly targeted spearphishing attack by “Charming Kitten”, a hacker group attributed to Iran] had published an article related to Iran. The publicity appears to have garnered the attention of Charming Kitten, who subsequently created an email address to impersonate a reporter of an Israeli media organization in order to send the target an email. Prior to sending malware to the target, the attacker simply asked if the target would be open to reviewing a document they had written related to US foreign policy. The target agreed to do so, since this was not an unusual request; they are frequently asked by journalists to review opinion pieces relating to their field of work.

In an effort to further gain the target’s confidence, Charming Kitten continued the interaction with another benign email containing a list of questions, to which the target then responded with answers. After multiple days of benign and seemingly legitimate interaction, Charming Kitten finally sent a “draft report”; this was the first time anything opaquely malicious occurred. The “draft report” was, in fact, a password-protected RAR file containing a malicious LNK file. The password for the RAR file was provided in a subsequent email.


In this example, threat actors created an email address on a lookalike domain impersonating an existing Israeli news organisation, and posed as a reporter who worked there (T0097.102: Journalist Persona, T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0149.003: Lookalike Domain, T0149.002: Email Domain Asset) in order to convince the target to download a document containing malware (T0085.004: Develop Document, T0147.003: Malware Asset). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0097.202: News Outlet Persona + +**Summary**: An institution with a news outlet persona presents itself as an organisation which delivers new information to its target audience.

While presenting as a news outlet is not an indication of inauthentic behaviour, an influence operation may have its narratives amplified by news organisations. Threat actors can fabricate news organisations (T0143.002: Fabricated Persona, T0097.202: News Outlet Persona), or they can impersonate existing news outlets (T0143.003: Impersonated Persona, T0097.202: News Outlet Persona).

Legitimate news organisations could use their persona for malicious purposes, or be exploited by threat actors (T0143.001: Authentic Persona, T0097.202: News Outlet Persona). + +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0097 Present Persona + + +| Associated Technique | Description | +| --------- | ------------------------- | +| [T0097.102 Journalist Persona](../../generated_pages/techniques/T0097.102.md) | Institutions presenting as news outlets may also present journalists working within the organisation. | +| [T0097.201 Local Institution Persona](../../generated_pages/techniques/T0097.201.md) | Institutions presenting as news outlets may present as being a local news outlet. | +| [T0097.203 Fact Checking Organisation Persona](../../generated_pages/techniques/T0097.203.md) | Institutions presenting as news outlets may also deliver a fact checking service (e.g. The UK’s BBC News has the fact checking service BBC Verify). When an actor presents as the fact checking arm of a news outlet, they are presenting both a News Outlet Persona and a Fact Checking Organisation Persona. | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00074 The Tactics & Tropes of the Internet Research Agency](../../generated_pages/incidents/I00074.md) | “The Black Matters Facebook Page [operated by Russia’s Internet Research Agency] explored several visual brand identities, moving from a plain logo to a gothic typeface on Jan 19th, 2016. On February 4th, 2016, the person who ran the Facebook Page announced the launch of the website, blackmattersus[.]com, emphasizing media distrust and a desire to build Black independent media; [“I DIDN’T BELIEVE THE MEDIA / SO I BECAME ONE”]”

In this example an asset controlled by Russia’s Internet Research Agency began to present itself as a source of “Black independent media”, claiming that the media could not be trusted (T0097.208: Social Cause Persona, T0097.202: News Outlet Persona, T0143.002: Fabricated Persona). | +| [I00076 Network of Social Media Accounts Impersonates U.S. Political Candidates, Leverages U.S. and Israeli Media in Support of Iranian Interests](../../generated_pages/incidents/I00076.md) | “Accounts in the network [of inauthentic accounts attributed to Iran], under the guise of journalist personas, also solicited various individuals over Twitter for interviews and chats, including real journalists and politicians. The personas appear to have successfully conducted remote video and audio interviews with U.S. and UK-based individuals, including a prominent activist, a radio talk show host, and a former U.S. Government official, and subsequently posted the interviews on social media, showing only the individual being interviewed and not the interviewer. The interviewees expressed views that Iran would likely find favorable, discussing topics such as the February 2019 Warsaw summit, an attack on a military parade in the Iranian city of Ahvaz, and the killing of Jamal Khashoggi.

“The provenance of these interviews appear to have been misrepresented on at least one occasion, with one persona appearing to have falsely claimed to be operating on behalf of a mainstream news outlet; a remote video interview with a US-based activist about the Jamal Khashoggi killing was posted by an account adopting the persona of a journalist from the outlet Newsday, with the Newsday logo also appearing in the video. We did not identify any Newsday interview with the activist in question on this topic. In another instance, a persona posing as a journalist directed tweets containing audio of an interview conducted with a former U.S. Government official at real media personalities, calling on them to post about the interview.”


In this example actors fabricated journalists (T0097.102: Journalist Persona, T0143.002: Fabricated Persona) who claimed to work at existing news outlets (T0097.202: News Outlet Persona, T0143.003: Impersonated Persona) in order to conduct interviews with targeted individuals. | +| [I00077 Fronts & Friends: An Investigation into Two Twitter Networks Linked to Russian Actors](../../generated_pages/incidents/I00077.md) | “Two accounts [in the second network of accounts taken down by Twitter] appear to have been operated by Oriental Review and the Strategic Culture Foundation, respectively. Oriental Review bills itself as an “open source site for free thinking”, though it trades in outlandish conspiracy theories and posts content bylined by fake people. Stanford Internet Observatory researchers and investigative journalists have previously noted the presence of content bylined by fake “reporter” personas tied to the GRU-linked front Inside Syria Media Center, posted on Oriental Review.”

In an effort to make the Oriental Review’s stories appear more credible, the threat actors created fake journalists and pretended they wrote the articles on their website (i.e. “bylined” them).

In DISARM terms, they fabricated journalists (T0143.002: Fabricated Persona, T0097.102: Journalist Persona), and then used these fabricated journalists to increase the perceived legitimacy of their stories (T0097.202: News Outlet Persona, T0143.002: Fabricated Persona). | +| [I00079 Three thousand fake tanks](../../generated_pages/incidents/I00079.md) | “On January 4 [2017], a little-known news site based in Donetsk, Ukraine published an article claiming that the United States was sending 3,600 tanks to Europe as part of “the NATO war preparation against Russia”.

“Like much fake news, this story started with a grain of truth: the US was about to reinforce its armored units in Europe. However, the article converted literally thousands of other vehicles — including hundreds of Humvees and trailers — into tanks, building the US force into something 20 times more powerful than it actually was.

“The story caught on online. Within three days it had been repeated by a dozen websites in the United States, Canada and Europe, and shared some 40,000 times. It was translated into Norwegian; quoted, unchallenged, by Russian state news agency RIA Novosti; and spread among Russian-language websites.

“It was also an obvious fake, as any Google news search would have revealed. Yet despite its evident falsehood, it spread widely, and not just in directly Kremlin-run media. Tracking the spread of this fake therefore shines a light on the wider question of how fake stories are dispersed.”


Russian state news agency RIA Novosti presents itself as a news outlet (T0097.202: News Outlet Persona). RIA Novosti is a real news outlet (T0143.001: Authentic Persona), but it did not carry out the basic verification of the narrative it published that is implicitly expected of institutions presenting themselves as news outlets.

We can’t know how or why this narrative ended up being published by RIA Novosti, but we know that it presented a distorted reality as authentic information (T0023: Distort Facts), claiming that the US was sending 3,600 tanks, instead of 3,600 vehicles which included ~180 tanks. | +| [I00094 A glimpse inside a Chinese influence campaign: How bogus news websites blur the line between true and false](../../generated_pages/incidents/I00094.md) | Researchers identified websites managed by a Chinese marketing firm which presented themselves as news organisations.

“On its official website, the Chinese marketing firm boasted that they were in contact with news organizations across the globe, including one in South Korea called the “Chungcheng Times.” According to the joint team, this outlet is a fictional news organization created by the offending company. The Chinese company sought to disguise the site’s true identity and purpose by altering the name attached to it by one character—making it very closely resemble the name of a legitimate outlet operating out of Chungchengbuk-do.

“The marketing firm also established a news organization under the Korean name “Gyeonggido Daily,” which closely resembles legitimate news outlets operating out of Gyeonggi province such as “Gyeonggi Daily,” “Daily Gyeonggi Newspaper,” and “Gyeonggi N Daily.” One of the fake news sites was named “Incheon Focus,” a title that could be easily mistaken for the legitimate local news outlet, “Focus Incheon.” Furthermore, the Chinese marketing company operated two fake news sites with names identical to two separate local news organizations, one of which ceased operations in December 2022.

“In total, fifteen out of eighteen Chinese fake news sites incorporated the correct names of real regions in their fake company names. “If the operators had created fake news sites similar to major news organizations based in Seoul, however, the intended deception would have easily been uncovered,” explained Song Tae-eun, an assistant professor in the Department of National Security & Unification Studies at the Korea National Diplomatic Academy, to The Readable. “There is also the possibility that they are using the regional areas as an attempt to form ties with the local community; that being the government, the private sector, and religious communities.””


The firm styled its news sites to resemble existing local news outlets in their target regions (T0097.201: Local Institution Persona, T0097.202: News Outlet Persona, T0143.003: Impersonated Persona). | +| [I00107 The Lies Russia Tells Itself](../../generated_pages/incidents/I00107.md) | The Russian disinformation project known as Doppelganger has been attributed to the Moscow-based firm Social Design Agency (SDA):

The SDA’s deception work first surfaced in 2022, likely almost immediately after Doppelganger got off the ground. In April of that year, Meta, the parent company of Facebook and Instagram, disclosed in a quarterly report that it had removed from its platforms “a network of about 200 accounts operated from Russia.” By August 2022, German investigative journalists revealed that they had discovered forgeries of about 30 news sites, including many of the country’s biggest media outlets—Frankfurter Allgemeine, Der Spiegel, and Bild—but also Britain’s Daily Mail and France’s 20 Minutes. The sites had deceptive URLs such as www-dailymail-co-uk.dailymail.top.


As part of its work, the SDA created many websites which impersonated existing media outlets, using lookalike domains to increase the perceived legitimacy of the impersonations (T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0152.003: Website Hosting Platform, T0149.003: Lookalike Domain). | +| [I00110 How COVID-19 conspiracists and extremists use crowdfunding platforms to fund their activities](../../generated_pages/incidents/I00110.md) | EU DisinfoLab produced a report into disinformation published on crowdfunding platforms:

More worrisome is the direct monetisation of disinformation happening on crowdfunding platforms: on Kickstarter, we found a user openly raising money for a documentary project suggesting that COVID-19 is a conspiracy.

A Kickstarter user attempted to use the platform to fund production of a documentary (T0017: Conduct Fundraising, T0087: Develop Video-Based Content, T0146: Account Asset, T0148.006: Crowdfunding Platform).

On Patreon, we found several instances of direct monetisation of COVID-19 disinformation, including posts promoting a device allegedly protecting against COVID-19 and 5G, as well as posts related to the “Plandemic” conspiracy video, which gained attention on YouTube before being removed by the platform.

We also found an account called “Stranger than fiction” entirely dedicated to disinformation, which openly states that their content was “Banned by screwtube and fakebook, our videos have been viewed over a billion times.”


The “Stranger than fiction” account presented itself as an alternative news source which had been banned from other platforms (T0146: Account Asset, T0097.202: News Outlet Persona, T0121.001: Bypass Content Blocking, T0152.012: Subscription Service Platform).

On the US-based crowdfunding platform IndieGogo, EU DisinfoLab found a successful crowdfunding campaign of €133.903 for a book called Revolution Q. This book, now also available on Amazon, claims to be “Written for both newcomers and long-time QAnon fans alike, this book is a treasure-trove of information designed to help everyone weather The Storm.”

An IndieGogo account was used to gather funds to produce a book on QAnon (T0017: Conduct Fundraising, T0085.005: Develop Book, T0146: Account Asset, T0148.006: Crowdfunding Platform), with the book later sold on Amazon marketplace (T0148.007: eCommerce Platform). | +| [I00126 Charming Kitten Updates POWERSTAR with an InterPlanetary Twist](../../generated_pages/incidents/I00126.md) | The target of the recently observed [highly targeted spearphishing attack by “Charming Kitten”, a hacker group attributed to Iran] had published an article related to Iran. The publicity appears to have garnered the attention of Charming Kitten, who subsequently created an email address to impersonate a reporter of an Israeli media organization in order to send the target an email. Prior to sending malware to the target, the attacker simply asked if the target would be open to reviewing a document they had written related to US foreign policy. The target agreed to do so, since this was not an unusual request; they are frequently asked by journalists to review opinion pieces relating to their field of work.

In an effort to further gain the target’s confidence, Charming Kitten continued the interaction with another benign email containing a list of questions, to which the target then responded with answers. After multiple days of benign and seemingly legitimate interaction, Charming Kitten finally sent a “draft report”; this was the first time anything opaquely malicious occurred. The “draft report” was, in fact, a password-protected RAR file containing a malicious LNK file. The password for the RAR file was provided in a subsequent email.


In this example threat actors created an email address on a lookalike domain impersonating an existing Israeli news organisation, and used it to pose as a reporter who worked there (T0097.102: Journalist Persona, T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0149.003: Lookalike Domain, T0149.002: Email Domain Asset) in order to convince the target to download a document containing malware (T0085.004: Develop Document, T0147.003: Malware Asset). |
+
+
+
+| Counters | Response types |
+| -------- | -------------- |
+
+
+# Technique T0097.202: News Outlet Persona
+
+**Summary**: An institution with a news outlet persona presents itself as an organisation which delivers new information to its target audience.

While presenting as a news outlet is not an indication of inauthentic behaviour, an influence operation may have its narratives amplified by news organisations. Threat actors can fabricate news organisations (T0143.002: Fabricated Persona, T0097.202: News Outlet Persona), or they can impersonate existing news outlets (T0143.003: Impersonated Persona, T0097.202: News Outlet Persona).

Legitimate news organisations could use their persona for malicious purposes, or be exploited by threat actors (T0143.001: Authentic Persona, T0097.202: News Outlet Persona). + **Tactic**: TA16 Establish Legitimacy @@ -30,4 +92,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0097.203.md b/generated_pages/techniques/T0097.203.md index 053fe3e..d5672dd 100644 --- a/generated_pages/techniques/T0097.203.md +++ b/generated_pages/techniques/T0097.203.md @@ -2,6 +2,56 @@ **Summary**: An institution with a fact checking organisation persona presents itself as an organisation which produces reports which assess the validity of others’ reporting / statements.

While presenting as a fact checking organisation is not an indication of inauthentic behaviour, an influence operation may have its narratives amplified by fact checking organisations. Threat actors can fabricate fact checking organisations (T0143.002: Fabricated Persona, T0097.203: Fact Checking Organisation Persona), or they can impersonate existing fact checking outlets (T0143.003: Impersonated Persona, T0097.203: Fact Checking Organisation Persona).

Legitimate fact checking organisations could use their persona for malicious purposes, or be exploited by threat actors (T0143.001: Authentic Persona, T0097.203: Fact Checking Organisation Persona).
+**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0097 Present Persona
+
+
+| Associated Technique | Description |
+| --------- | ------------------------- |
+| [T0097.102 Journalist Persona](../../generated_pages/techniques/T0097.102.md) | Institutions presenting as fact checking organisations may also present journalists working within the organisation. |
+| [T0097.202 News Outlet Persona](../../generated_pages/techniques/T0097.202.md) | Fact checking organisations may present as operating as part of a larger news outlet (e.g. the UK’s BBC News has the fact checking service BBC Verify). When an actor presents as the fact checking arm of a news outlet, they are presenting both a News Outlet Persona and a Fact Checking Organisation Persona. |
+
+
+
+| Incident | Descriptions given for this incident |
+| -------- | -------------------- |
+| [I00084 Russia turns its diplomats into disinformation warriors](../../generated_pages/incidents/I00084.md) | “[Russia’s social media] reach isn't the same as Russian state media, but they are trying to recreate what RT and Sputnik had done," said one EU official involved in tracking Russian disinformation. "It's a coordinated effort that goes beyond social media and involves specific websites."

“Central to that wider online playbook is a Telegram channel called Warfakes and an affiliated website. Since the beginning of the conflict, that social media channel has garnered more than 725,000 members and repeatedly shares alleged fact-checks aimed at debunking Ukrainian narratives, using language similar to Western-style fact-checking outlets.”


In this example a Telegram channel (T0151.004: Chat Platform, T0155.007: Encrypted Communication Channel) was established which presented itself as a source of fact checks (T0097.203: Fact Checking Organisation Persona). | +| [I00120 factcheckUK or fakecheckUK? Reinventing the political faction as the impartial factchecker](../../generated_pages/incidents/I00120.md) | Ahead of the 2019 UK Election, during a leaders’ debate, the Conservative party rebranded their “Conservative Campaign Headquarters Press” account to “factcheckUK”:

The evening of the 19th November 2019 saw the first of three Leaders’ Debates on ITV, starting at 8pm and lasting for an hour. Current Prime Minister and leader of the Conservatives, Boris Johnson faced off against Labour party leader, Jeremy Corbyn. Plenty of people will have been watching the debate live, but a good proportion were “watching” (er, “twitching”?) via Twitter. This is something I’ve done in the past for certain shows. In some cases I just can’t watch or listen, but I can read, and in other cases, the commentary is far more interesting and entertaining than the show itself will ever be. This, for me, is just such a case. But very quickly, all eyes turned upon a modestly sized account with the handle @CCHQPress. That’s short for Conservative Campaign Headquarters Press. According to their (current!) Twitter bio, they are based in Westminster and they provide “snippets of news and commentary from CCHQ” to their 75k followers.

That is, until a few minutes into the debate.

All at once, like a person throwing off their street clothes to reveal some sinister new identity underneath, @CCHQPress abruptly shed its name, blue Conservative logo, Boris Johnson banner, and bio description. Moments later, it had entirely reinvented itself.

The purple banner was emblazoned with white font that read “✓ factcheckUK [with a “FROM CCQH” subheading]”.

The matching profile picture was a white tick in a purple circle. The bio was updated to: “Fact checking Labour from CCHQ”. And the name now read factcheckUK, with the customary Twitter blue (or white depending on your phone settings!) validation tick still after it


In this example an existing verified social media account on Twitter was repurposed to inauthentically present itself as a Fact Checking service (T0151.008: Microblogging Platform, T0150.003: Pre-Existing Asset, T0146.003: Verified Account Asset, T0097.203: Fact Checking Organisation Persona, T0143.002: Fabricated Persona). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0097.203: Fact Checking Organisation Persona + +**Summary**: An institution with a fact checking organisation persona presents itself as an organisation which produces reports which assess the validity of others’ reporting / statements.

While presenting as a fact checking organisation is not an indication of inauthentic behaviour, an influence operation may have its narratives amplified by fact checking organisations. Threat actors can fabricate fact checking organisations (T0143.002: Fabricated Persona, T0097.203: Fact Checking Organisation Persona), or they can impersonate existing fact checking outlets (T0143.003: Impersonated Persona, T0097.203: Fact Checking Organisation Persona).

Legitimate fact checking organisations could use their persona for malicious purposes, or be exploited by threat actors (T0143.001: Authentic Persona, T0097.203: Fact Checking Organisation Persona).
+
+**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0097 Present Persona
+
+
+| Associated Technique | Description |
+| --------- | ------------------------- |
+| [T0097.102 Journalist Persona](../../generated_pages/techniques/T0097.102.md) | Institutions presenting as fact checking organisations may also present journalists working within the organisation. |
+| [T0097.202 News Outlet Persona](../../generated_pages/techniques/T0097.202.md) | Fact checking organisations may present as operating as part of a larger news outlet (e.g. the UK’s BBC News has the fact checking service BBC Verify). When an actor presents as the fact checking arm of a news outlet, they are presenting both a News Outlet Persona and a Fact Checking Organisation Persona. |
+
+
+
+| Incident | Descriptions given for this incident |
+| -------- | -------------------- |
+| [I00084 Russia turns its diplomats into disinformation warriors](../../generated_pages/incidents/I00084.md) | “[Russia’s social media] reach isn't the same as Russian state media, but they are trying to recreate what RT and Sputnik had done," said one EU official involved in tracking Russian disinformation. "It's a coordinated effort that goes beyond social media and involves specific websites."

“Central to that wider online playbook is a Telegram channel called Warfakes and an affiliated website. Since the beginning of the conflict, that social media channel has garnered more than 725,000 members and repeatedly shares alleged fact-checks aimed at debunking Ukrainian narratives, using language similar to Western-style fact-checking outlets.”


In this example a Telegram channel (T0151.004: Chat Platform, T0155.007: Encrypted Communication Channel) was established which presented itself as a source of fact checks (T0097.203: Fact Checking Organisation Persona). | +| [I00120 factcheckUK or fakecheckUK? Reinventing the political faction as the impartial factchecker](../../generated_pages/incidents/I00120.md) | Ahead of the 2019 UK Election, during a leaders’ debate, the Conservative party rebranded their “Conservative Campaign Headquarters Press” account to “factcheckUK”:

The evening of the 19th November 2019 saw the first of three Leaders’ Debates on ITV, starting at 8pm and lasting for an hour. Current Prime Minister and leader of the Conservatives, Boris Johnson faced off against Labour party leader, Jeremy Corbyn. Plenty of people will have been watching the debate live, but a good proportion were “watching” (er, “twitching”?) via Twitter. This is something I’ve done in the past for certain shows. In some cases I just can’t watch or listen, but I can read, and in other cases, the commentary is far more interesting and entertaining than the show itself will ever be. This, for me, is just such a case. But very quickly, all eyes turned upon a modestly sized account with the handle @CCHQPress. That’s short for Conservative Campaign Headquarters Press. According to their (current!) Twitter bio, they are based in Westminster and they provide “snippets of news and commentary from CCHQ” to their 75k followers.

That is, until a few minutes into the debate.

All at once, like a person throwing off their street clothes to reveal some sinister new identity underneath, @CCHQPress abruptly shed its name, blue Conservative logo, Boris Johnson banner, and bio description. Moments later, it had entirely reinvented itself.

The purple banner was emblazoned with white font that read “✓ factcheckUK [with a “FROM CCQH” subheading]”.

The matching profile picture was a white tick in a purple circle. The bio was updated to: “Fact checking Labour from CCHQ”. And the name now read factcheckUK, with the customary Twitter blue (or white depending on your phone settings!) validation tick still after it


In this example an existing verified social media account on Twitter was repurposed to inauthentically present itself as a Fact Checking service (T0151.008: Microblogging Platform, T0150.003: Pre-Existing Asset, T0146.003: Verified Account Asset, T0097.203: Fact Checking Organisation Persona, T0143.002: Fabricated Persona). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0097.203: Fact Checking Organisation Persona + +**Summary**: An institution with a fact checking organisation persona presents itself as an organisation which produces reports which assess the validity of others’ reporting / statements.

While presenting as a fact checking organisation is not an indication of inauthentic behaviour, an influence operation may have its narratives amplified by fact checking organisations. Threat actors can fabricate fact checking organisations (T0143.002: Fabricated Persona, T0097.203: Fact Checking Organisation Persona), or they can impersonate existing fact checking outlets (T0143.003: Impersonated Persona, T0097.203: Fact Checking Organisation Persona).

Legitimate fact checking organisations could use their persona for malicious purposes, or be exploited by threat actors (T0143.001: Authentic Persona, T0097.203: Fact Checking Organisation Persona).
+
 **Tactic**: TA16 Establish Legitimacy
@@ -23,4 +73,3 @@
 | -------- | -------------- |
-DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW
\ No newline at end of file
diff --git a/generated_pages/techniques/T0097.204.md b/generated_pages/techniques/T0097.204.md
index 08527f1..ddb5c1b 100644
--- a/generated_pages/techniques/T0097.204.md
+++ b/generated_pages/techniques/T0097.204.md
@@ -2,6 +2,60 @@
 **Summary**: An institution with a think tank persona presents itself as a think tank; an organisation that aims to conduct original research and propose new policies or solutions, especially for social and scientific problems.

While presenting as a think tank is not an indication of inauthentic behaviour, think tank personas are commonly used by threat actors as a front for their operational activity (T0143.002: Fabricated Persona, T0097.204: Think Tank Persona). They may be created to give legitimacy to narratives and allow them to suggest politically beneficial solutions to societal issues.

Legitimate think tanks could have a political bias that they may not be transparent about, they could use their persona for malicious purposes, or they could be exploited by threat actors (T0143.001: Authentic Persona, T0097.204: Think Tank Persona). For example, a think tank could take money for using their position to provide legitimacy to a false narrative, or be tricked into doing so without their knowledge. +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0097 Present Persona + + +| Associated Technique | Description | +| --------- | ------------------------- | +| [T0097.107 Researcher Persona](../../generated_pages/techniques/T0097.107.md) | Institutions presenting as think tanks may also present researchers working within the organisation. | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00072 Behind the Dutch Terror Threat Video: The St. Petersburg "Troll Factory" Connection](../../generated_pages/incidents/I00072.md) | “The creator of Geopolitika[.]ru is Aleksandr Dugin, who was sanctioned by the United States Department of Treasury in 2015 for his role in the Eurasian Youth Union “for being responsible for or complicit in actions or policies that threaten the peace, security, stability, or sovereignty or territorial integrity of Ukraine.”

[...]

“Currently, the website geopolika[.]ru redirects directly to another partner website, Katehon.

“Katehon poses itself as a think tank focused on geopolitics in an English edition of its website. In contrast, in Russian, it states its aim to develop “ideological, political, diplomatic, economic and military strategy for Russia of the future” with a special role of religion. The president of Katehon’s supervisory board is Konstantin Malofeev, a Russian millionaire with connections to the Russian orthodox church and presidential administration, who founded Tsargrad TV, a known source of disinformation. Malofeev was sanctioned by the U.S. Department of Treasury and the European Union in 2014 for material support and financial backing of Russian-backed separatists in eastern Ukraine. Another known figure from the board is Sergei Glaziev, former advisor to Putin in 2012–2019. Dugin is also on the board in the Russian edition of the website, whereas he is omitted in English.”


In this example a domain managed by an actor previously sanctioned by the US Department of the Treasury has been reconfigured to redirect to another website, Katehon (T0149.004: Redirecting Domain Asset, T0150.004: Repurposed Asset).

Katehon presents itself as a geopolitical think tank in English, but does not maintain this persona when presenting itself to a Russian speaking audience (T0097.204: Think Tank Persona, T0152.004: Website Asset, T0155.004: Geoblocked Asset). | +| [I00074 The Tactics & Tropes of the Internet Research Agency](../../generated_pages/incidents/I00074.md) | “[Russia’s Internet Research Agency, the IRA] pushed narratives with longform blog content. They created media properties, websites designed to produce stories that would resonate with those targeted. It appears, based on the data set provided by Alphabet, that the IRA may have also expanded into think tank-style communiques. One such page, previously unattributed to the IRA but included in the Alphabet data, was GI Analytics, a geopolitics blog with an international masthead that included American authors. This page was promoted via AdWords and YouTube videos; it has strong ties to more traditional Russian propaganda networks, which will be discussed later in this analysis. GI Analytics wrote articles articulating nuanced academic positions on a variety of sophisticated topics. From the site’s About page:

““Our purpose and mission are to provide high-quality analysis at a time when we are faced with a multitude of crises, a collapsing global economy, imperialist wars, environmental disasters, corporate greed, terrorism, deceit, GMO food, a migration crisis and a crackdown on small farmers and ranchers.””


In this example Alphabet’s technical indicators allowed them to assert that GI Analytics, which presented itself as a think tank, was a fabricated institution associated with Russia’s Internet Research Agency (T0097.204: Think Tank Persona, T0143.002: Fabricated Persona). | +| [I00078 Meta’s September 2020 Removal of Coordinated Inauthentic Behavior](../../generated_pages/incidents/I00078.md) | “[Meta has] removed one Page, five Facebook accounts, one Group and three Instagram accounts for foreign or government interference which is coordinated inauthentic behavior on behalf of a foreign or government entity. This small network originated in Russia and focused primarily on Turkey and Europe, and also on the United States.

“This operation relied on fake accounts — some of which had been already detected and removed by our automated systems — to manage their Page and their Group, and to drive people to their site purporting to be an independent think-tank based primarily in Turkey. These accounts posed as locals based in Turkey, Canada and the US. They also recruited people to write for their website. This network had almost no following on our platforms when we removed it.”


Meta identified that a network of accounts originating in Russia was driving people off platform to a site which presented itself as a think-tank (T0097.204: Think Tank Persona). Meta did not make an attribution about the authenticity of this off-site think tank, so neither T0143.001: Authentic Persona nor T0143.002: Fabricated Persona is used here.

Meta had access to technical data for accounts on its platform, and asserted that they were fabricated individuals posing as locals who recruited targets to write content for their website (T0097.101: Local Persona, T0097.106: Recruiter Persona, T0143.002: Fabricated Persona). | +| [I00079 Three thousand fake tanks](../../generated_pages/incidents/I00079.md) | “The sixth [website to repost a confirmed false narrative investigated in this report] is an apparent think tank, the Center for Global Strategic Monitoring. This website describes itself, in English apparently written by a non-native speaker, as a “nonprofit and nonpartisan research and analysis institution dedicated to providing insights of the think tank community publications”. It does, indeed, publish think-tank reports on issues such as Turkey and US-China relations; however, the reports are the work of other think tanks, often unattributed (the two mentioned in this sentence were actually produced by the Brookings Institution, although the website makes no mention of the fact). It also fails to provide an address, or any other contact details other than an email, and its (long) list of experts includes entries apparently copied and pasted from other institutions. Thus, the “think tank” website which shared the fake story appears to be a fake itself.” In this example a website which amplified a false narrative presented itself as a think tank (T0097.204: Think Tank Persona).

This is an entirely fabricated persona (T0143.002: Fabricated Persona); the site republished content from other think tanks without attribution (T0084.002: Plagiarise Content) and fabricated experts (T0097.108: Expert Persona, T0143.002: Fabricated Persona) to make it more believable that it was a real think tank. | +| [I00083 Fake Think Tanks Fuel Fake News—And the President's Tweets](../../generated_pages/incidents/I00083.md) | “[This article discusses a] longstanding network of bogus "think tanks" raise disinformation to a pseudoscience, and their studies' pull quotes and flashy stats become the "evidence" driving viral, fact-free stories

[...]

“[These inauthentic Think Tanks] tend toward hate: There's the white supremacist National Policy Institute and Jared Taylor's New Century Foundation; the anti-LGBTQ work of the Family Research Council and American College of Pediatricians; and a whole slew of groups fixated on immigration. Three of the biggest---Federation for American Immigration Reform, the Center for Immigration Studies, and NumbersUSA---are intertwined, all connected in their origins to white nationalist John Tanton.

“The Southern Poverty Law Center designates most of these organizations as bona fide hate groups. And yet most---FRC, CIS and FAIR in particular---enjoy relationships with some powerful politicians. Trump himself has met with leaders of the anti-immigration groups, hired people from FAIR and the Family Research Council, and cited the anti-immigration groups' erroneous figures.

“That's because phony think tanks are professional mimics, from the innocuous-sounding names---the Employment Policies Institute practically steals its name from the Economic Policy Institute---to their online presences. "It used to be you could trust a dot-edu or a dot-org," says Heidi Beirich, director of the Southern Poverty Law Center's Intelligence Project. "Now some of the main hate sites are dot-orgs.””


In this example an organisation designated as a hate group is presenting itself as a think tank (T0097.204: Think Tank Persona) in order to boost the perceived legitimacy of narratives. | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0097.204: Think Tank Persona + +**Summary**: An institution with a think tank persona presents itself as a think tank; an organisation that aims to conduct original research and propose new policies or solutions, especially for social and scientific problems.

While presenting as a think tank is not an indication of inauthentic behaviour, think tank personas are commonly used by threat actors as a front for their operational activity (T0143.002: Fabricated Persona, T0097.204: Think Tank Persona). They may be created to give legitimacy to narratives and allow them to suggest politically beneficial solutions to societal issues.

Legitimate think tanks could have a political bias that they may not be transparent about, they could use their persona for malicious purposes, or they could be exploited by threat actors (T0143.001: Authentic Persona, T0097.204: Think Tank Persona). For example, a think tank could take money for using their position to provide legitimacy to a false narrative, or be tricked into doing so without their knowledge. + +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0097 Present Persona + + +| Associated Technique | Description | +| --------- | ------------------------- | +| [T0097.107 Researcher Persona](../../generated_pages/techniques/T0097.107.md) | Institutions presenting as think tanks may also present researchers working within the organisation. | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00072 Behind the Dutch Terror Threat Video: The St. Petersburg "Troll Factory" Connection](../../generated_pages/incidents/I00072.md) | “The creator of Geopolitika[.]ru is Aleksandr Dugin, who was sanctioned by the United States Department of Treasury in 2015 for his role in the Eurasian Youth Union “for being responsible for or complicit in actions or policies that threaten the peace, security, stability, or sovereignty or territorial integrity of Ukraine.”

[...]

“Currently, the website geopolika[.]ru redirects directly to another partner website, Katehon.

“Katehon poses itself as a think tank focused on geopolitics in an English edition of its website. In contrast, in Russian, it states its aim to develop “ideological, political, diplomatic, economic and military strategy for Russia of the future” with a special role of religion. The president of Katehon’s supervisory board is Konstantin Malofeev, a Russian millionaire with connections to the Russian orthodox church and presidential administration, who founded Tsargrad TV, a known source of disinformation. Malofeev was sanctioned by the U.S. Department of Treasury and the European Union in 2014 for material support and financial backing of Russian-backed separatists in eastern Ukraine. Another known figure from the board is Sergei Glaziev, former advisor to Putin in 2012–2019. Dugin is also on the board in the Russian edition of the website, whereas he is omitted in English.”


In this example a domain managed by an actor previously sanctioned by the US Department of the Treasury has been reconfigured to redirect to another website, Katehon (T0149.004: Redirecting Domain Asset, T0150.004: Repurposed Asset).

Katehon presents itself as a geopolitical think tank in English, but does not maintain this persona when presenting itself to a Russian speaking audience (T0097.204: Think Tank Persona, T0152.004: Website Asset, T0155.004: Geoblocked Asset). | +| [I00074 The Tactics & Tropes of the Internet Research Agency](../../generated_pages/incidents/I00074.md) | “[Russia’s Internet Research Agency, the IRA] pushed narratives with longform blog content. They created media properties, websites designed to produce stories that would resonate with those targeted. It appears, based on the data set provided by Alphabet, that the IRA may have also expanded into think tank-style communiques. One such page, previously unattributed to the IRA but included in the Alphabet data, was GI Analytics, a geopolitics blog with an international masthead that included American authors. This page was promoted via AdWords and YouTube videos; it has strong ties to more traditional Russian propaganda networks, which will be discussed later in this analysis. GI Analytics wrote articles articulating nuanced academic positions on a variety of sophisticated topics. From the site’s About page:

““Our purpose and mission are to provide high-quality analysis at a time when we are faced with a multitude of crises, a collapsing global economy, imperialist wars, environmental disasters, corporate greed, terrorism, deceit, GMO food, a migration crisis and a crackdown on small farmers and ranchers.””


In this example Alphabet’s technical indicators allowed them to assert that GI Analytics, which presented itself as a think tank, was a fabricated institution associated with Russia’s Internet Research Agency (T0097.204: Think Tank Persona, T0143.002: Fabricated Persona). | +| [I00078 Meta’s September 2020 Removal of Coordinated Inauthentic Behavior](../../generated_pages/incidents/I00078.md) | “[Meta has] removed one Page, five Facebook accounts, one Group and three Instagram accounts for foreign or government interference which is coordinated inauthentic behavior on behalf of a foreign or government entity. This small network originated in Russia and focused primarily on Turkey and Europe, and also on the United States.

“This operation relied on fake accounts — some of which had been already detected and removed by our automated systems — to manage their Page and their Group, and to drive people to their site purporting to be an independent think-tank based primarily in Turkey. These accounts posed as locals based in Turkey, Canada and the US. They also recruited people to write for their website. This network had almost no following on our platforms when we removed it.”


Meta identified that a network of accounts originating in Russia was driving people off platform to a site which presented itself as a think-tank (T0097.204: Think Tank Persona). Meta did not make an attribution about the authenticity of this off-site think tank, so neither T0143.001: Authentic Persona nor T0143.002: Fabricated Persona is used here.

Meta had access to technical data for accounts on its platform, and asserted that they were fabricated individuals posing as locals who recruited targets to write content for their website (T0097.101: Local Persona, T0097.106: Recruiter Persona, T0143.002: Fabricated Persona). | +| [I00079 Three thousand fake tanks](../../generated_pages/incidents/I00079.md) | “The sixth [website to repost a confirmed false narrative investigated in this report] is an apparent think tank, the Center for Global Strategic Monitoring. This website describes itself, in English apparently written by a non-native speaker, as a “nonprofit and nonpartisan research and analysis institution dedicated to providing insights of the think tank community publications”. It does, indeed, publish think-tank reports on issues such as Turkey and US-China relations; however, the reports are the work of other think tanks, often unattributed (the two mentioned in this sentence were actually produced by the Brookings Institution, although the website makes no mention of the fact). It also fails to provide an address, or any other contact details other than an email, and its (long) list of experts includes entries apparently copied and pasted from other institutions. Thus, the “think tank” website which shared the fake story appears to be a fake itself.” In this example a website which amplified a false narrative presented itself as a think tank (T0097.204: Think Tank Persona).

This is an entirely fabricated persona (T0143.002: Fabricated Persona); the site republished content from other think tanks without attribution (T0084.002: Plagiarise Content) and fabricated experts (T0097.108: Expert Persona, T0143.002: Fabricated Persona) to make it more believable that it was a real think tank. | +| [I00083 Fake Think Tanks Fuel Fake News—And the President's Tweets](../../generated_pages/incidents/I00083.md) | “[This article discusses a] longstanding network of bogus "think tanks" raise disinformation to a pseudoscience, and their studies' pull quotes and flashy stats become the "evidence" driving viral, fact-free stories

[...]

“[These inauthentic Think Tanks] tend toward hate: There's the white supremacist National Policy Institute and Jared Taylor's New Century Foundation; the anti-LGBTQ work of the Family Research Council and American College of Pediatricians; and a whole slew of groups fixated on immigration. Three of the biggest---Federation for American Immigration Reform, the Center for Immigration Studies, and NumbersUSA---are intertwined, all connected in their origins to white nationalist John Tanton.

“The Southern Poverty Law Center designates most of these organizations as bona fide hate groups. And yet most---FRC, CIS and FAIR in particular---enjoy relationships with some powerful politicians. Trump himself has met with leaders of the anti-immigration groups, hired people from FAIR and the Family Research Council, and cited the anti-immigration groups' erroneous figures.

“That's because phony think tanks are professional mimics, from the innocuous-sounding names---the Employment Policies Institute practically steals its name from the Economic Policy Institute---to their online presences. "It used to be you could trust a dot-edu or a dot-org," says Heidi Beirich, director of the Southern Poverty Law Center's Intelligence Project. "Now some of the main hate sites are dot-orgs.””


In this example an organisation designated as a hate group is presenting itself as a think tank (T0097.204: Think Tank Persona) in order to boost the perceived legitimacy of narratives. | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0097.204: Think Tank Persona + +**Summary**: An institution with a think tank persona presents itself as a think tank; an organisation that aims to conduct original research and propose new policies or solutions, especially for social and scientific problems.

While presenting as a think tank is not an indication of inauthentic behaviour, think tank personas are commonly used by threat actors as a front for their operational activity (T0143.002: Fabricated Persona, T0097.204: Think Tank Persona). They may be created to give legitimacy to narratives and allow them to suggest politically beneficial solutions to societal issues.

Legitimate think tanks could have a political bias that they may not be transparent about, they could use their persona for malicious purposes, or they could be exploited by threat actors (T0143.001: Authentic Persona, T0097.204: Think Tank Persona). For example, a think tank could take money for using their position to provide legitimacy to a false narrative, or be tricked into doing so without their knowledge. + **Tactic**: TA16 Establish Legitimacy @@ -25,4 +79,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0097.205.md b/generated_pages/techniques/T0097.205.md index 61be0fc..a318bc2 100644 --- a/generated_pages/techniques/T0097.205.md +++ b/generated_pages/techniques/T0097.205.md @@ -2,6 +2,53 @@ **Summary**: An institution with a business persona presents itself as a for-profit organisation which provides goods or services for a price.

While presenting as a business is not an indication of inauthentic behaviour, business personas may be used by threat actors as a front for their operational activity (T0143.002: Fabricated Persona, T0097.205: Business Persona).

Threat actors may also impersonate existing businesses (T0143.003: Impersonated Persona, T0097.205: Business Persona) to exploit their brand or cause reputational damage.

Legitimate businesses could use their persona for malicious purposes, or be exploited by threat actors (T0143.001: Authentic Persona, T0097.205: Business Persona). For example, a business could take money for using their position to provide legitimacy to a false narrative, or be tricked into doing so without their knowledge. +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0097 Present Persona + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00070 Eli Lilly Clarifies It’s Not Offering Free Insulin After Tweet From Fake Verified Account—As Chaos Unfolds On Twitter](../../generated_pages/incidents/I00070.md) | “Twitter Blue launched [November 2022], giving any users who pay $8 a month the ability to be verified on the site, a feature previously only available to public figures, government officials and journalists as a way to show they are who they claim to be.

“[A day after the launch], an account with the handle @EliLillyandCo labeled itself with the name “Eli Lilly and Company,” and by using the same logo as the company in its profile picture and with the verification checkmark, was indistinguishable from the real company (the picture has since been removed and the account has labeled itself as a parody profile).

The parody account tweeted “we are excited to announce insulin is free now.””


In this example an account impersonated the pharmaceutical company Eli Lilly (T0097.205: Business Persona, T0143.003: Impersonated Persona) by copying its name and profile picture (T0145.001: Copy Account Imagery) and by paying for verification. | +| [I00095 Meta: Chinese disinformation network was behind London front company recruiting content creators](../../generated_pages/incidents/I00095.md) | “A Chinese disinformation network operating fictitious employee personas across the internet used a front company in London to recruit content creators and translators around the world, according to Meta.

“The operation used a company called London New Europe Media, registered to an address on the upmarket Kensington High Street, that attempted to recruit real people to help it produce content. It is not clear how many people it ultimately recruited.

“London New Europe Media also “tried to engage individuals to record English-language videos scripted by the network,” in one case leading to a recording criticizing the United States being posted on YouTube, said Meta”.


In this example a front company was used (T0097.205: Business Persona) to enable actors to recruit targets for producing content (T0097.106: Recruiter Persona, T0143.002: Fabricated Persona). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0097.205: Business Persona + +**Summary**: An institution with a business persona presents itself as a for-profit organisation which provides goods or services for a price.

While presenting as a business is not an indication of inauthentic behaviour, business personas may be used by threat actors as a front for their operational activity (T0143.002: Fabricated Persona, T0097.205: Business Persona).

Threat actors may also impersonate existing businesses (T0143.003: Impersonated Persona, T0097.205: Business Persona) to exploit their brand or cause reputational damage.

Legitimate businesses could use their persona for malicious purposes, or be exploited by threat actors (T0143.001: Authentic Persona, T0097.205: Business Persona). For example, a business could take money for using their position to provide legitimacy to a false narrative, or be tricked into doing so without their knowledge. + +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0097 Present Persona + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00070 Eli Lilly Clarifies It’s Not Offering Free Insulin After Tweet From Fake Verified Account—As Chaos Unfolds On Twitter](../../generated_pages/incidents/I00070.md) | “Twitter Blue launched [November 2022], giving any users who pay $8 a month the ability to be verified on the site, a feature previously only available to public figures, government officials and journalists as a way to show they are who they claim to be.

“[A day after the launch], an account with the handle @EliLillyandCo labeled itself with the name “Eli Lilly and Company,” and by using the same logo as the company in its profile picture and with the verification checkmark, was indistinguishable from the real company (the picture has since been removed and the account has labeled itself as a parody profile).

The parody account tweeted “we are excited to announce insulin is free now.””


In this example an account impersonated the pharmaceutical company Eli Lilly (T0097.205: Business Persona, T0143.003: Impersonated Persona) by copying its name and profile picture (T0145.001: Copy Account Imagery) and by paying for verification. | +| [I00095 Meta: Chinese disinformation network was behind London front company recruiting content creators](../../generated_pages/incidents/I00095.md) | “A Chinese disinformation network operating fictitious employee personas across the internet used a front company in London to recruit content creators and translators around the world, according to Meta.

“The operation used a company called London New Europe Media, registered to an address on the upmarket Kensington High Street, that attempted to recruit real people to help it produce content. It is not clear how many people it ultimately recruited.

“London New Europe Media also “tried to engage individuals to record English-language videos scripted by the network,” in one case leading to a recording criticizing the United States being posted on YouTube, said Meta”.


In this example a front company was used (T0097.205: Business Persona) to enable actors to recruit targets for producing content (T0097.106: Recruiter Persona, T0143.002: Fabricated Persona). | +| [I00116 Blue-tick scammers target consumers who complain on X](../../generated_pages/incidents/I00116.md) | Consumers who complain of poor customer service on X are being targeted by scammers after the social media platform formerly known as Twitter changed its account verification process.

Bank customers and airline passengers are among those at risk of phishing scams when they complain to companies via X. Fraudsters, masquerading as customer service agents, respond under fake X handles and trick victims into disclosing their bank details to get a promised refund.

They typically win the trust of victims by displaying the blue checkmark icon, which until this year denoted accounts that had been officially verified by X.

Changes introduced this year allow the icon to be bought by anyone who pays an £11 monthly fee for the site’s subscription service, renamed this month from Twitter Blue to X Premium. Businesses that pay £950 a month receive a gold tick. X’s terms and conditions do not state whether subscriber accounts are pre-vetted.

Andrew Thomas was contacted by a scam account after posting a complaint to the travel platform Booking.com. “I’d been trying since April to get a refund after our holiday flights were cancelled and finally resorted to X,” he said.

“I received a response asking me to follow them, and DM [direct message] them with a contact number. They then called me via WhatsApp asking for my reference number so they could investigate. Later they called back to say that I would be refunded via their payment partner for which I’d need to download an app.”

Thomas became suspicious and checked the X profile. “It looked like the real thing, but I noticed that there was an unexpected hyphen in the Twitter handle and that it had only joined X in July 2023,” he said.


In this example a newly created paid account on X was used to direct users to other platforms (T0146.002: Paid Account Asset, T0146.003: Verified Account Asset, T0146.005: Lookalike Account ID, T0097.205: Business Persona, T0122: Direct Users to Alternative Platforms, T0143.003: Impersonated Persona, T0151.008: Microblogging Platform, T0150.001: Newly Created Asset). |
+
+
+
+| Counters | Response types |
+| -------- | -------------- |
+
+
+# Technique T0097.205: Business Persona
+
+**Summary**: An institution with a business persona presents itself as a for-profit organisation which provides goods or services for a price.

While presenting as a business is not an indication of inauthentic behaviour, business personas may be used by threat actors as a front for their operational activity (T0143.002: Fabricated Persona, T0097.205: Business Persona).

Threat actors may also impersonate existing businesses (T0143.003: Impersonated Persona, T0097.205: Business Persona) to exploit their brand or cause reputational damage.

Legitimate businesses could use their persona for malicious purposes, or be exploited by threat actors (T0143.001: Authentic Persona, T0097.205: Business Persona). For example, a business could take money for using their position to provide legitimacy to a false narrative, or be tricked into doing so without their knowledge. + **Tactic**: TA16 Establish Legitimacy @@ -22,4 +69,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0097.206.md b/generated_pages/techniques/T0097.206.md index 7c87f12..bae4dc2 100644 --- a/generated_pages/techniques/T0097.206.md +++ b/generated_pages/techniques/T0097.206.md @@ -2,6 +2,54 @@ **Summary**: Institutions which present themselves as governments, or government ministries, are presenting a government institution persona.

While presenting as a government institution is not an indication of inauthentic behaviour, threat actors may impersonate existing government institutions as part of their operation (T0143.003: Impersonated Persona, T0097.206: Government Institution Persona) to add legitimacy to their narratives or to discredit the government.

Legitimate government institutions could use their persona for malicious purposes, or be exploited by threat actors (T0143.001: Authentic Persona, T0097.206: Government Institution Persona). For example, a government institution could be used by elected officials to spread inauthentic narratives. +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0097 Present Persona + + +| Associated Technique | Description | +| --------- | ------------------------- | +| [T0097.111 Government Official Persona](../../generated_pages/techniques/T0097.111.md) | Institutions presenting as governments may also present officials working within the organisation. | +| [T0097.112 Government Employee Persona](../../generated_pages/techniques/T0097.112.md) | Institutions presenting as governments may also present employees working within the organisation. | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00071 Russia-aligned hacktivists stir up anti-Ukrainian sentiments in Poland](../../generated_pages/incidents/I00071.md) | “On August 16, 2022, pro-Kremlin Telegram channel Joker DPR (Джокер ДНР) published a forged letter allegedly written by Ukrainian Foreign Minister Dmytro Kuleba. In the letter, Kuleba supposedly asked relevant Polish authorities to rename Belwederska Street in Warsaw — the location of the Russian embassy building — as Stepan Bandera Street, in honor of the far-right nationalist who led the Ukrainian Insurgent Army during WWII.

[...]

The letter is not dated, and Dmytro Kuleba’s signature seems to be copied from a publicly available letter signed by him in 2021.”


In this example the Telegram channel Joker DPR published a forged letter (T0085.004: Develop Document) in which they impersonated the Ukrainian Minister of Foreign Affairs (T0097.111: Government Official Persona, T0143.003: Impersonated Persona), using Ministry letterhead (T0097.206: Government Institution Persona, T0143.003: Impersonated Persona). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0097.206: Government Institution Persona + +**Summary**: Institutions which present themselves as governments, or government ministries, are presenting a government institution persona.

While presenting as a government institution is not an indication of inauthentic behaviour, threat actors may impersonate existing government institutions as part of their operation (T0143.003: Impersonated Persona, T0097.206: Government Institution Persona) to add legitimacy to their narratives or to discredit the government.

Legitimate government institutions could use their persona for malicious purposes, or be exploited by threat actors (T0143.001: Authentic Persona, T0097.206: Government Institution Persona). For example, a government institution could be used by elected officials to spread inauthentic narratives. + +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0097 Present Persona + + +| Associated Technique | Description | +| --------- | ------------------------- | +| [T0097.111 Government Official Persona](../../generated_pages/techniques/T0097.111.md) | Institutions presenting as governments may also present officials working within the organisation. | +| [T0097.112 Government Employee Persona](../../generated_pages/techniques/T0097.112.md) | Institutions presenting as governments may also present employees working within the organisation. | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00071 Russia-aligned hacktivists stir up anti-Ukrainian sentiments in Poland](../../generated_pages/incidents/I00071.md) | “On August 16, 2022, pro-Kremlin Telegram channel Joker DPR (Джокер ДНР) published a forged letter allegedly written by Ukrainian Foreign Minister Dmytro Kuleba. In the letter, Kuleba supposedly asked relevant Polish authorities to rename Belwederska Street in Warsaw — the location of the Russian embassy building — as Stepan Bandera Street, in honor of the far-right nationalist who led the Ukrainian Insurgent Army during WWII.

[...]

The letter is not dated, and Dmytro Kuleba’s signature seems to be copied from a publicly available letter signed by him in 2021.”


In this example the Telegram channel Joker DPR published a forged letter (T0085.004: Develop Document) in which they impersonated the Ukrainian Minister of Foreign Affairs (T0097.111: Government Official Persona, T0143.003: Impersonated Persona), using Ministry letterhead (T0097.206: Government Institution Persona, T0143.003: Impersonated Persona). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0097.206: Government Institution Persona + +**Summary**: Institutions which present themselves as governments, or government ministries, are presenting a government institution persona.

While presenting as a government institution is not an indication of inauthentic behaviour, threat actors may impersonate existing government institutions as part of their operation (T0143.003: Impersonated Persona, T0097.206: Government Institution Persona) to add legitimacy to their narratives or to discredit the government.

Legitimate government institutions could use their persona for malicious purposes, or be exploited by threat actors (T0143.001: Authentic Persona, T0097.206: Government Institution Persona). For example, a government institution could be used by elected officials to spread inauthentic narratives. + **Tactic**: TA16 Establish Legitimacy @@ -22,4 +70,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0097.207.md b/generated_pages/techniques/T0097.207.md index 821cddf..1501761 100644 --- a/generated_pages/techniques/T0097.207.md +++ b/generated_pages/techniques/T0097.207.md @@ -2,6 +2,53 @@ **Summary**: Institutions which present themselves as an NGO (Non-Governmental Organisation), an organisation which provides services or advocates for public policy (while not being directly affiliated with any government), are presenting an NGO persona.

While presenting as an NGO is not an indication of inauthentic behaviour, NGO personas are commonly used by threat actors (such as intelligence services) as a front for their operational activity (T0143.002: Fabricated Persona, T0097.207: NGO Persona). They are created to give legitimacy to the influence operation and potentially to infiltrate grassroots movements.

Legitimate NGOs could use their persona for malicious purposes, or be exploited by threat actors (T0143.001: Authentic Persona, T0097.207: NGO Persona). For example, an NGO could take money for using their position to provide legitimacy to a false narrative, or be tricked into doing so without their knowledge. +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0097 Present Persona + + +| Associated Technique | Description | +| --------- | ------------------------- | +| [T0097.103 Activist Persona](../../generated_pages/techniques/T0097.103.md) | Institutions presenting as activist groups may also present activists working within the organisation. | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00069 Uncharmed: Untangling Iran's APT42 Operations](../../generated_pages/incidents/I00069.md) | “[Iranian state-sponsored cyber espionage actor] APT42 cloud operations attack lifecycle can be described in details as follows:

- “Social engineering schemes involving decoys and trust building, which includes masquerading as legitimate NGOs and conducting ongoing correspondence with the target, sometimes lasting several weeks.
- The threat actor masqueraded as well-known international organizations in the legal and NGO fields and sent emails from domains typosquatting the original NGO domains, for example aspenlnstitute[.]org.
- The Aspen Institute became aware of this spoofed domain and collaborated with industry partners, including blocking it in SafeBrowsing, thus protecting users of Google Chrome and additional browsers.
- To increase their credibility, APT42 impersonated high-ranking personnel working at the aforementioned organizations when creating the email personas.
- APT42 enhanced their campaign credibility by using decoy material inviting targets to legitimate and relevant events and conferences. In one instance, the decoy material was hosted on an attacker-controlled SharePoint folder, accessible only after the victim entered their credentials. Mandiant did not identify malicious elements in the files, suggesting they were used solely to gain the victim’s trust.”
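The typosquatted domain quoted above, aspenlnstitute[.]org, swaps a lowercase L for the i in the legitimate domain. Lookalikes of this kind can often be surfaced by normalising confusable characters before comparing candidates against a list of protected domains. The following is a minimal sketch of that idea, not the detection method used by any party in this incident; the homoglyph table and similarity threshold are illustrative assumptions:

```python
from difflib import SequenceMatcher

# Collapse commonly-confused characters into one canonical form
# (illustrative table, not exhaustive: l/1 -> i, 0 -> o).
CANON = str.maketrans({"l": "i", "1": "i", "0": "o"})

def normalise(domain: str) -> str:
    """Lower-case a domain and map lookalike characters to a canonical form."""
    return domain.lower().translate(CANON)

def is_suspicious(candidate: str, legitimate: str, threshold: float = 0.9) -> bool:
    """Flag candidate domains that are near-identical to a protected domain."""
    if candidate == legitimate:
        return False  # exact match is the real domain, not a lookalike
    ratio = SequenceMatcher(None, normalise(candidate), normalise(legitimate)).ratio()
    return ratio >= threshold

print(is_suspicious("aspenlnstitute.org", "aspeninstitute.org"))  # True
```

A production detector would also cover insertions, deletions, and alternate TLDs, but the normalise-then-compare step is the core of catching homoglyph squats like this one.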


In this example APT42, an Iranian state-sponsored cyber espionage actor, created a domain impersonating the existing NGO The Aspen Institute (T0143.003: Impersonated Persona, T0097.207: NGO Persona). They increased the perceived legitimacy of the impersonation by also impersonating high-ranking employees of the NGO (T0097.100: Individual Persona, T0143.003: Impersonated Persona). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0097.207: NGO Persona + +**Summary**: Institutions which present themselves as an NGO (Non-Governmental Organisation), an organisation which provides services or advocates for public policy (while not being directly affiliated with any government), are presenting an NGO persona.

While presenting as an NGO is not an indication of inauthentic behaviour, NGO personas are commonly used by threat actors (such as intelligence services) as a front for their operational activity (T0143.002: Fabricated Persona, T0097.207: NGO Persona). They are created to give legitimacy to the influence operation and to potentially infiltrate grassroots movements.<br>

Legitimate NGOs could use their persona for malicious purposes, or be exploited by threat actors (T0143.001: Authentic Persona, T0097.207: NGO Persona). For example, an NGO could take money for using their position to provide legitimacy to a false narrative, or be tricked into doing so without their knowledge. + +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0097 Present Persona + + +| Associated Technique | Description | +| --------- | ------------------------- | +| [T0097.103 Activist Persona](../../generated_pages/techniques/T0097.103.md) | Institutions presenting as activist groups may also present activists working within the organisation. | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00069 Uncharmed: Untangling Iran's APT42 Operations](../../generated_pages/incidents/I00069.md) | “[Iranian state-sponsored cyber espionage actor] APT42 cloud operations attack lifecycle can be described in details as follows:

- “Social engineering schemes involving decoys and trust building, which includes masquerading as legitimate NGOs and conducting ongoing correspondence with the target, sometimes lasting several weeks.
- The threat actor masqueraded as well-known international organizations in the legal and NGO fields and sent emails from domains typosquatting the original NGO domains, for example aspenlnstitute[.]org.
- The Aspen Institute became aware of this spoofed domain and collaborated with industry partners, including blocking it in SafeBrowsing, thus protecting users of Google Chrome and additional browsers.
- To increase their credibility, APT42 impersonated high-ranking personnel working at the aforementioned organizations when creating the email personas.
- APT42 enhanced their campaign credibility by using decoy material inviting targets to legitimate and relevant events and conferences. In one instance, the decoy material was hosted on an attacker-controlled SharePoint folder, accessible only after the victim entered their credentials. Mandiant did not identify malicious elements in the files, suggesting they were used solely to gain the victim’s trust.”


In this example APT42, an Iranian state-sponsored cyber espionage actor, created a domain impersonating the existing NGO The Aspen Institute (T0143.003: Impersonated Persona, T0097.207: NGO Persona). They increased the perceived legitimacy of the impersonation by also impersonating high-ranking employees of the NGO (T0097.100: Individual Persona, T0143.003: Impersonated Persona). | +| [I00109 Coordinated Facebook Pages Designed to Fund a White Supremacist Agenda](../../generated_pages/incidents/I00109.md) | This report examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at technical indicators associated with the Suavelos website, and attributions which can be made as a consequence:

[The Google AdSense tag set up on Suavelos.eu was also found on the following domains, indicating that they are controlled by the same actor;] Alabastro.eu: an online shop to buy “white nationalists” t-shirts [and] ARPAC.eu: the website of a registered non-profit organisation advocating to lift regulation on gun control in France.

Other domains attributed to Suavelos (T0149.001: Domain Asset) reveal a website set up to sell merchandise (T0152.004: Website Asset, T0148.004: Payment Processing Capability, T0061: Sell Merchandise), and a website hosting a registered French non-profit (T0152.004: Website Asset, T0097.207: NGO Persona).
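The AdSense pivot behind this attribution works because Google embeds a single publisher identifier (a "ca-pub-…" string) in the page source of every site monetised under the same account. Below is a minimal sketch of the grouping step, assuming the pages are live and serve the tag in static HTML; the domain list is drawn from the write-up above and the regex only approximates the identifier format:

```python
import re
from collections import defaultdict

import requests

# Illustrative domain list taken from the incident write-up above.
DOMAINS = ["suavelos.eu", "alabastro.eu", "arpac.eu"]

# AdSense publisher IDs appear in page source as e.g. "ca-pub-1234567890123456".
PUB_ID = re.compile(r"ca-pub-\d{6,}")

def publisher_ids(domain: str) -> set[str]:
    """Fetch a domain's homepage and extract any AdSense publisher IDs."""
    try:
        html = requests.get(f"http://{domain}", timeout=10).text
    except requests.RequestException:
        return set()  # site down or unreachable; skip it
    return set(PUB_ID.findall(html))

groups: dict[str, list[str]] = defaultdict(list)
for domain in DOMAINS:
    for pub in publisher_ids(domain):
        groups[pub].append(domain)

# Domains sharing one publisher ID are plausibly run by the same operator.
for pub, shared in groups.items():
    if len(shared) > 1:
        print(pub, "->", shared)
```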

To learn more about the suavelos.eu domain, we collected the following data: The domain is hosted on OVH; The owner’s identity is protected; The IP Address of the server is 94.23.253.173, which is shared with 20 other domains.

The relative low number of websites hosted on this IP address could indicate that they all belong to the same people, and are hosted on the same private server.
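Forward-resolving a set of candidate domains and grouping them by IP address is the simplest version of this hosting pivot; enumerating every domain behind 94.23.253.173 would additionally require a reverse-IP or passive-DNS service, which is out of scope here. A standard-library sketch, with an illustrative candidate list:

```python
import socket
from collections import defaultdict

# Candidate domains an analyst wants to test for shared hosting (illustrative).
CANDIDATES = ["suavelos.eu", "alabastro.eu", "arpac.eu"]

by_ip: dict[str, list[str]] = defaultdict(list)
for domain in CANDIDATES:
    try:
        by_ip[socket.gethostbyname(domain)].append(domain)
    except socket.gaierror:
        pass  # domain no longer resolves

# Domains resolving to one IP may sit on the same (possibly private) server.
for ip, domains in by_ip.items():
    if len(domains) > 1:
        print(ip, "hosts", domains)
```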


Suavelos registered a domain using the web hosting provider OVH (T0149.001: Domain Asset, T0152.003: Website Hosting Platform, T0150.006: Purchased). The site’s IP address reveals a server hosting other domains potentially owned by the actors (T0149.005: Server Asset, T0149.006: IP Address Asset). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0097.207: NGO Persona + +**Summary**: Institutions which present themselves as an NGO (Non-Governmental Organisation), an organisation which provides services or advocates for public policy (while not being directly affiliated with any government), are presenting an NGO persona.

While presenting as an NGO is not an indication of inauthentic behaviour, NGO personas are commonly used by threat actors (such as intelligence services) as a front for their operational activity (T0143.002: Fabricated Persona, T0097.207: NGO Persona). They are created to give legitimacy to the influence operation and to potentially infiltrate grassroots movements.<br>

Legitimate NGOs could use their persona for malicious purposes, or be exploited by threat actors (T0143.001: Authentic Persona, T0097.207: NGO Persona). For example, an NGO could take money for using their position to provide legitimacy to a false narrative, or be tricked into doing so without their knowledge. + **Tactic**: TA16 Establish Legitimacy @@ -22,4 +69,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0097.208.md b/generated_pages/techniques/T0097.208.md index 20d3261..fad5d9c 100644 --- a/generated_pages/techniques/T0097.208.md +++ b/generated_pages/techniques/T0097.208.md @@ -2,6 +2,55 @@ **Summary**: Online accounts which present themselves as focusing on a social cause are presenting the Social Cause Persona. Examples include accounts which post about current affairs, such as discrimination faced by minorities.

While presenting as an account invested in a social cause is not an indication of inauthentic behaviour, such personas have been used by threat actors to exploit people’s legitimate emotional investment in social causes that matter to them (T0143.002: Fabricated Persona, T0097.208: Social Cause Persona).<br>

Legitimate accounts focused on a social cause could use their persona for malicious purposes, or be exploited by threat actors (T0143.001: Authentic Persona, T0097.208: Social Cause Persona). For example, the account holders could take money for using their position to provide legitimacy to a false narrative, or be tricked into doing so without their knowledge. +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0097 Present Persona + + +| Associated Technique | Description | +| --------- | ------------------------- | +| [T0097.103 Activist Persona](../../generated_pages/techniques/T0097.103.md) | Analysts should use this sub-technique to catalogue cases where an individual is presenting themselves as an activist related to a social cause. Accounts with social cause personas do not present themselves as individuals, but may have activists controlling the accounts. | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00074 The Tactics & Tropes of the Internet Research Agency](../../generated_pages/incidents/I00074.md) | “The Black Matters Facebook Page [operated by Russia’s Internet Research Agency] explored several visual brand identities, moving from a plain logo to a gothic typeface on Jan 19th, 2016. On February 4th, 2016, the person who ran the Facebook Page announced the launch of the website, blackmattersus[.]com, emphasizing media distrust and a desire to build Black independent media; [“I DIDN’T BELIEVE THE MEDIA / SO I BECAME ONE”]”

In this example an asset controlled by Russia’s Internet Research Agency began to present itself as a source of “Black independent media”, claiming that the media could not be trusted (T0097.208: Social Cause Persona, T0097.202: News Outlet Persona, T0143.002: Fabricated Persona). | +| [I00128 #TrollTracker: Outward Influence Operation From Iran](../../generated_pages/incidents/I00128.md) | [Meta removed a network of assets for coordinated inauthentic behaviour. One page] in the network, @StopMEK, was promoting views against the People’s Mujahedin of Iran (MEK), the largest and most active political opposition group against the Islamic Republic of Iran Leadership.

The content on the page drew narratives showing parallels between the Islamic State of Iraq and Syria (ISIS) and the MEK.

Apart from images and memes, the @StopMEK page shared a link to an archived report on how the United States was monitoring the MEK’s movement in Iran in the mid-1990’s. The file was embedded as a QR code on one of the page’s images.
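Recovering the destination hidden in an image like the one described above is mechanical once the image has been saved, since QR codes are machine-readable by design. A sketch using OpenCV's built-in detector; the filename is a placeholder, not an artefact from this incident:

```python
import cv2  # pip install opencv-python

# Placeholder path to an image saved from the page under investigation.
image = cv2.imread("page_image.png")
if image is None:
    raise SystemExit("could not read image file")

data, points, _ = cv2.QRCodeDetector().detectAndDecode(image)
if points is not None and data:
    print("QR code resolves to:", data)  # e.g. a URL to an off-platform document
else:
    print("no decodable QR code found")
```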


In this example a Facebook page presented itself as focusing on a political cause (T0097.208: Social Cause Persona, T0151.001: Social Media Platform, T0151.002: Online Community Group). Within the page it embedded a QR code (T0122: Direct Users to Alternative Platforms, T0153.004: QR Code Asset), which took users to a document hosted on another website (T0152.004: Website Asset). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0097.208: Social Cause Persona + +**Summary**: Online accounts which present themselves as focusing on a social cause are presenting the Social Cause Persona. Examples include accounts which post about current affairs, such as discrimination faced by minorities.

While presenting as an account invested in a social cause is not an indication of inauthentic behaviour, such personas have been used by threat actors to exploit people’s legitimate emotional investment in social causes that matter to them (T0143.002: Fabricated Persona, T0097.208: Social Cause Persona).<br>

Legitimate accounts focused on a social cause could use their persona for malicious purposes, or be exploited by threat actors (T0143.001: Authentic Persona, T0097.208: Social Cause Persona). For example, the account holders could take money for using their position to provide legitimacy to a false narrative, or be tricked into doing so without their knowledge. + +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0097 Present Persona + + +| Associated Technique | Description | +| --------- | ------------------------- | +| [T0097.103 Activist Persona](../../generated_pages/techniques/T0097.103.md) | Analysts should use this sub-technique to catalogue cases where an individual is presenting themselves as an activist related to a social cause. Accounts with social cause personas do not present themselves as individuals, but may have activists controlling the accounts. | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00074 The Tactics & Tropes of the Internet Research Agency](../../generated_pages/incidents/I00074.md) | “The Black Matters Facebook Page [operated by Russia’s Internet Research Agency] explored several visual brand identities, moving from a plain logo to a gothic typeface on Jan 19th, 2016. On February 4th, 2016, the person who ran the Facebook Page announced the launch of the website, blackmattersus[.]com, emphasizing media distrust and a desire to build Black independent media; [“I DIDN’T BELIEVE THE MEDIA / SO I BECAME ONE”]”

In this example an asset controlled by Russia’s Internet Research Agency began to present itself as a source of “Black independent media”, claiming that the media could not be trusted (T0097.208: Social Cause Persona, T0097.202: News Outlet Persona, T0143.002: Fabricated Persona). | +| [I00114 ‘Carol’s Journey’: What Facebook knew about how it radicalized users](../../generated_pages/incidents/I00114.md) | This report examines internal Facebook communications which reveal employees’ concerns about how the platform’s algorithm was recommending users join extremist conspiracy groups.

In summer 2019, a new Facebook user named Carol Smith signed up for the platform, describing herself as a politically conservative mother from Wilmington, North Carolina. Smith’s account indicated an interest in politics, parenting and Christianity and followed a few of her favorite brands, including Fox News and then-President Donald Trump.

Though Smith had never expressed interest in conspiracy theories, in just two days Facebook was recommending she join groups dedicated to QAnon, a sprawling and baseless conspiracy theory and movement that claimed Trump was secretly saving the world from a cabal of pedophiles and Satanists.

Smith didn’t follow the recommended QAnon groups, but whatever algorithm Facebook was using to determine how she should engage with the platform pushed ahead just the same. Within one week, Smith’s feed was full of groups and pages that had violated Facebook’s own rules, including those against hate speech and disinformation.

Smith wasn’t a real person. A researcher employed by Facebook invented the account, along with those of other fictitious “test users” in 2019 and 2020, as part of an experiment in studying the platform’s role in misinforming and polarizing users through its recommendations systems.

That researcher said Smith’s Facebook experience was “a barrage of extreme, conspiratorial, and graphic content.”


Facebook’s recommendation algorithm suggested users join groups which supported the QAnon movement (T0151.001: Social Media Platform, T0151.002: Online Community Group, T0153.006: Content Recommendation Algorithm, T0097.208: Social Cause Persona).<br>

Further investigation by Facebook uncovered that its advertising platform had been used to promote QAnon narratives (T0146: Account Asset, T0114: Deliver Ads, T0153.005: Online Advertising Platform):

For years, company researchers had been running experiments like Carol Smith’s to gauge the platform’s hand in radicalizing users, according to the documents seen by NBC News.

This internal work repeatedly found that recommendation tools pushed users into extremist groups, findings that helped inform policy changes and tweaks to recommendations and news feed rankings. Those rankings are a tentacled, ever-evolving system widely known as “the algorithm” that pushes content to users. But the research at that time stopped well short of inspiring any movement to change the groups and pages themselves.

That reluctance was indicative of “a pattern at Facebook,” Haugen told reporters this month. “They want the shortest path between their current policies and any action.”

[...]

By summer 2020, Facebook was hosting thousands of private QAnon groups and pages, with millions of members and followers, according to an unreleased internal investigation.

A year after the FBI designated QAnon as a potential domestic terrorist threat in the wake of standoffs, alleged planned kidnappings, harassment campaigns and shootings, Facebook labeled QAnon a “Violence Inciting Conspiracy Network” and banned it from the platform, along with militias and other violent social movements. A small team working across several of Facebook’s departments found its platforms had hosted hundreds of ads on Facebook and Instagram worth thousands of dollars and millions of views, “praising, supporting, or representing” the conspiracy theory.

[...]

For many employees inside Facebook, the enforcement came too late, according to posts left on Workplace, the company’s internal message board.

“We’ve known for over a year now that our recommendation systems can very quickly lead users down the path to conspiracy theories and groups,” one integrity researcher, whose name had been redacted, wrote in a post announcing she was leaving the company. “This fringe group has grown to national prominence, with QAnon congressional candidates and QAnon hashtags and groups trending in the mainstream. We were willing to act only * after * things had spiraled into a dire state.”

While Facebook’s ban initially appeared effective, a problem remained: The removal of groups and pages didn’t wipe out QAnon’s most extreme followers, who continued to organize on the platform.

“There was enough evidence to raise red flags in the expert community that Facebook and other platforms failed to address QAnon’s violent extremist dimension,” said Marc-André Argentino, a research fellow at King’s College London’s International Centre for the Study of Radicalisation, who has extensively studied QAnon.

Believers simply rebranded as anti-child-trafficking groups or migrated to other communities, including those around the anti-vaccine movement.

[...]

These conspiracy groups had become the fastest-growing groups on Facebook, according to the report, but Facebook wasn’t able to control their “meteoric growth,” the researchers wrote, “because we were looking at each entity individually, rather than as a cohesive movement.” A Facebook spokesperson told BuzzFeed News it took many steps to limit election misinformation but that it was unable to catch everything.
| +| [I00128 #TrollTracker: Outward Influence Operation From Iran](../../generated_pages/incidents/I00128.md) | [Meta removed a network of assets for coordinated inauthentic behaviour. One page] in the network, @StopMEK, was promoting views against the People’s Mujahedin of Iran (MEK), the largest and most active political opposition group against the Islamic Republic of Iran Leadership.

The content on the page drew narratives showing parallels between the Islamic State of Iraq and Syria (ISIS) and the MEK.

Apart from images and memes, the @StopMEK page shared a link to an archived report on how the United States was monitoring the MEK’s movement in Iran in the mid-1990’s. The file was embedded as a QR code on one of the page’s images.


In this example a Facebook page presented itself as focusing on a political cause (T0097.208: Social Cause Persona, T0151.001: Social Media Platform, T0151.002: Online Community Group). Within the page it embedded a QR code (T0122: Direct Users to Alternative Platforms, T0153.004: QR Code Asset), which took users to a document hosted on another website (T0152.004: Website Asset). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0097.208: Social Cause Persona + +**Summary**: Online accounts which present themselves as focusing on a social cause are presenting the Social Cause Persona. Examples include accounts which post about current affairs, such as discrimination faced by minorities.

While presenting as an account invested in a social cause is not an indication of inauthentic behaviour, such personas have been used by threat actors to exploit people’s legitimate emotional investment in social causes that matter to them (T0143.002: Fabricated Persona, T0097.208: Social Cause Persona).<br>

Legitimate accounts focused on a social cause could use their persona for malicious purposes, or be exploited by threat actors (T0143.001: Authentic Persona, T0097.208: Social Cause Persona). For example, the account holders could take money for using their position to provide legitimacy to a false narrative, or be tricked into doing so without their knowledge. + **Tactic**: TA16 Establish Legitimacy @@ -23,4 +72,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0097.md b/generated_pages/techniques/T0097.md index 3e3d9d8..048740c 100644 --- a/generated_pages/techniques/T0097.md +++ b/generated_pages/techniques/T0097.md @@ -2,6 +2,48 @@ **Summary**: This Technique contains different types of personas commonly taken on by threat actors during influence operations.

Analysts should use T0097’s sub-techniques to document the type of persona which an account is presenting. For example, an account which describes itself as a journalist can be tagged with T0097.102: Journalist Persona.<br>

Personas presented by individuals include:

T0097.100: Individual Persona
T0097.101: Local Persona
T0097.102: Journalist Persona
T0097.103: Activist Persona
T0097.104: Hacktivist Persona
T0097.105: Military Personnel Persona
T0097.106: Recruiter Persona
T0097.107: Researcher Persona
T0097.108: Expert Persona
T0097.109: Romantic Suitor Persona
T0097.110: Party Official Persona
T0097.111: Government Official Persona
T0097.112: Government Employee Persona

This Technique also houses institutional personas commonly taken on by threat actors:

T0097.200: Institutional Persona
T0097.201: Local Institution Persona
T0097.202: News Outlet Persona
T0097.203: Fact Checking Organisation Persona
T0097.204: Think Tank Persona
T0097.205: Business Persona
T0097.206: Government Institution Persona
T0097.207: NGO Persona
T0097.208: Social Cause Persona

By using a persona, a threat actor is adding the perceived legitimacy of the persona to their narratives and activities. +**Tactic**: TA16 Establish Legitimacy + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0097: Present Persona + +**Summary**: This Technique contains different types of personas commonly taken on by threat actors during influence operations.

Analysts should use T0097’s sub-techniques to document the type of persona which an account is presenting. For example, an account which describes itself as a journalist can be tagged with T0097.102: Journalist Persona.<br>

Personas presented by individuals include:

T0097.100: Individual Persona
T0097.101: Local Persona
T0097.102: Journalist Persona
T0097.103: Activist Persona
T0097.104: Hacktivist Persona
T0097.105: Military Personnel Persona
T0097.106: Recruiter Persona
T0097.107: Researcher Persona
T0097.108: Expert Persona
T0097.109: Romantic Suitor Persona
T0097.110: Party Official Persona
T0097.111: Government Official Persona
T0097.112: Government Employee Persona

This Technique also houses institutional personas commonly taken on by threat actors:

T0097.200: Institutional Persona
T0097.201: Local Institution Persona
T0097.202: News Outlet Persona
T0097.203: Fact Checking Organisation Persona
T0097.204: Think Tank Persona
T0097.205: Business Persona
T0097.206: Government Institution Persona
T0097.207: NGO Persona
T0097.208: Social Cause Persona

By using a persona, a threat actor is adding the perceived legitimacy of the persona to their narratives and activities. + +**Tactic**: TA16 Establish Legitimacy + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0097: Present Persona + +**Summary**: This Technique contains different types of personas commonly taken on by threat actors during influence operations.

Analysts should use T0097’s sub-techniques to document the type of persona which an account is presenting. For example, an account which describes itself as a journalist can be tagged with T0097.102: Journalist Persona.<br>

Personas presented by individuals include:

T0097.100: Individual Persona
T0097.101: Local Persona
T0097.102: Journalist Persona
T0097.103: Activist Persona
T0097.104: Hacktivist Persona
T0097.105: Military Personnel Persona
T0097.106: Recruiter Persona
T0097.107: Researcher Persona
T0097.108: Expert Persona
T0097.109: Romantic Suitor Persona
T0097.110: Party Official Persona
T0097.111: Government Official Persona
T0097.112: Government Employee Persona

This Technique also houses institutional personas commonly taken on by threat actors:

T0097.200: Institutional Persona
T0097.201: Local Institution Persona
T0097.202: News Outlet Persona
T0097.203: Fact Checking Organisation Persona
T0097.204: Think Tank Persona
T0097.205: Business Persona
T0097.206: Government Institution Persona
T0097.207: NGO Persona
T0097.208: Social Cause Persona

By using a persona, a threat actor is adding the perceived legitimacy of the persona to their narratives and activities. + **Tactic**: TA16 Establish Legitimacy @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0098.001.md b/generated_pages/techniques/T0098.001.md index 807e47e..db372e4 100644 --- a/generated_pages/techniques/T0098.001.md +++ b/generated_pages/techniques/T0098.001.md @@ -2,6 +2,48 @@ **Summary**: Create Inauthentic News Sites +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0098 Establish Inauthentic News Sites + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0098.001: Create Inauthentic News Sites + +**Summary**: Create Inauthentic News Sites + +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0098 Establish Inauthentic News Sites + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0098.001: Create Inauthentic News Sites + +**Summary**: Create Inauthentic News Sites + **Tactic**: TA16 Establish Legitimacy @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0098.002.md b/generated_pages/techniques/T0098.002.md index 52deb74..17da449 100644 --- a/generated_pages/techniques/T0098.002.md +++ b/generated_pages/techniques/T0098.002.md @@ -2,6 +2,48 @@ **Summary**: Leverage Existing Inauthentic News Sites +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0098 Establish Inauthentic News Sites + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0098.002: Leverage Existing Inauthentic News Sites + +**Summary**: Leverage Existing Inauthentic News Sites + +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0098 Establish Inauthentic News Sites + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0098.002: Leverage Existing Inauthentic News Sites + +**Summary**: Leverage Existing Inauthentic News Sites + **Tactic**: TA16 Establish Legitimacy @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0098.md b/generated_pages/techniques/T0098.md index f6974d8..8c4f6e1 100644 --- a/generated_pages/techniques/T0098.md +++ b/generated_pages/techniques/T0098.md @@ -2,6 +2,48 @@ **Summary**: Modern computational propaganda makes use of a cadre of imposter news sites spreading globally. 
These sites, sometimes motivated by concerns other than propaganda--for instance, click-based revenue--often have some superficial markers of authenticity, such as naming and site-design. But many can be quickly exposed with reference to their ownership, reporting history and advertising details. +**Tactic**: TA16 Establish Legitimacy + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0098: Establish Inauthentic News Sites + +**Summary**: Modern computational propaganda makes use of a cadre of imposter news sites spreading globally. These sites, sometimes motivated by concerns other than propaganda--for instance, click-based revenue--often have some superficial markers of authenticity, such as naming and site-design. But many can be quickly exposed with reference to their ownership, reporting history and advertising details. + +**Tactic**: TA16 Establish Legitimacy + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0098: Establish Inauthentic News Sites + +**Summary**: Modern computational propaganda makes use of a cadre of imposter news sites spreading globally. These sites, sometimes motivated by concerns other than propaganda--for instance, click-based revenue--often have some superficial markers of authenticity, such as naming and site-design. But many can be quickly exposed with reference to their ownership, reporting history and advertising details. <br>
+ **Tactic**: TA16 Establish Legitimacy @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0100.001.md b/generated_pages/techniques/T0100.001.md index 3045317..e0b959e 100644 --- a/generated_pages/techniques/T0100.001.md +++ b/generated_pages/techniques/T0100.001.md @@ -2,6 +2,48 @@ **Summary**: Co-Opt Trusted Individuals +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0100 Co-Opt Trusted Sources + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0100.001: Co-Opt Trusted Individuals + +**Summary**: Co-Opt Trusted Individuals + +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0100 Co-Opt Trusted Sources + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0100.001: Co-Opt Trusted Individuals + +**Summary**: Co-Opt Trusted Individuals + **Tactic**: TA16 Establish Legitimacy @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0100.002.md b/generated_pages/techniques/T0100.002.md index d191a1a..f938648 100644 --- a/generated_pages/techniques/T0100.002.md +++ b/generated_pages/techniques/T0100.002.md @@ -2,6 +2,48 @@ **Summary**: Co-Opt Grassroots Groups +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0100 Co-Opt Trusted Sources + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0100.002: Co-Opt Grassroots Groups + +**Summary**: Co-Opt Grassroots Groups + +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0100 Co-Opt Trusted Sources + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0100.002: Co-Opt Grassroots Groups + +**Summary**: Co-Opt Grassroots Groups + **Tactic**: TA16 Establish Legitimacy @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0100.003.md b/generated_pages/techniques/T0100.003.md index ff9899e..a695877 100644 --- a/generated_pages/techniques/T0100.003.md +++ b/generated_pages/techniques/T0100.003.md @@ -2,6 +2,48 @@ **Summary**: Co-opt Influencers +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0100 Co-Opt Trusted Sources + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0100.003: Co-Opt Influencers + +**Summary**: Co-opt Influencers + +**Tactic**: TA16 Establish Legitimacy **Parent 
Technique:** T0100 Co-Opt Trusted Sources + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0100.003: Co-Opt Influencers + +**Summary**: Co-opt Influencers + **Tactic**: TA16 Establish Legitimacy @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0100.md b/generated_pages/techniques/T0100.md index ecffc5b..c88192e 100644 --- a/generated_pages/techniques/T0100.md +++ b/generated_pages/techniques/T0100.md @@ -2,6 +2,48 @@ **Summary**: An influence operation may co-opt trusted sources by infiltrating or repurposing a source to reach a target audience through existing, previously reliable networks. Co-opted trusted sources may include: - National or local new outlets - Research or academic publications - Online blogs or websites +**Tactic**: TA16 Establish Legitimacy + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0100: Co-Opt Trusted Sources + +**Summary**: An influence operation may co-opt trusted sources by infiltrating or repurposing a source to reach a target audience through existing, previously reliable networks. Co-opted trusted sources may include: - National or local new outlets - Research or academic publications - Online blogs or websites + +**Tactic**: TA16 Establish Legitimacy + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0100: Co-Opt Trusted Sources + +**Summary**: An influence operation may co-opt trusted sources by infiltrating or repurposing a source to reach a target audience through existing, previously reliable networks. Co-opted trusted sources may include: - National or local new outlets - Research or academic publications - Online blogs or websites + **Tactic**: TA16 Establish Legitimacy @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0101.md b/generated_pages/techniques/T0101.md index 9d20adf..a72f9f6 100644 --- a/generated_pages/techniques/T0101.md +++ b/generated_pages/techniques/T0101.md @@ -2,6 +2,50 @@ **Summary**: Localised content refers to content that appeals to a specific community of individuals, often in defined geographic areas. An operation may create localised content using local language and dialects to resonate with its target audience and blend in with other local news and social media. Localised content may help an operation increase legitimacy, avoid detection, and complicate external attribution. 
+**Tactic**: TA05 Microtarget + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00102 Ignore The Poway Synagogue Shooter’s Manifesto: Pay Attention To 8chan’s /pol/ Board](../../generated_pages/incidents/I00102.md) | On April 27, 2019, at around 11:30 a.m. local time, a young man with a semi-automatic rifle walked into the Chabad of Poway Synagogue in Poway, California. He opened fire, killing one worshipper and wounding three others. In the hours since the shooting, a manifesto, believed to be written by the shooter, began circulating online. Evidence has also surfaced that, like the Christchurch Mosque shooter, this killer began his rampage with a post on 8chan’s /pol/ board.

Although both of these attacks may seem different, since they targeted worshippers of different faiths, both shooters were united by the same fascist ideology. They were also both radicalized in the same place: 8chan’s /pol/ board.

This has been corroborated by posts on the board itself, where “anons,” as the posters call themselves, recirculated the shooter’s since-deleted post. In it, the alleged shooter claims to have been “lurking” on the site for a year and a half. He includes a link to a livestream of his rampage — which thankfully does not appear to have worked — and he also includes a pastebin link to his manifesto.

The very first response to his announcement was another anon cheering him on and telling him to “get the high score,” AKA, kill a huge number of people.


Before carrying out a mass shooting, the shooter posted a thread to 8chan’s /pol/ board. The post directed users to a variety of different platforms (T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0122: Direct Users to Alternative Platforms); a Facebook account on which the shooter attempted to livestream the shooting (T0146: Account Asset, T0151.001: Social Media Platform); and a manifesto they had written hosted on pastebin (T0146.006: Open Access Platform, T0152.005: Paste Platform, T0115: Post Content) and uploaded to the file sharing platform Mediafire (T0152.010: File Hosting Platform, T0085.004: Develop Document).

The report looks deeper into 8chan’s /pol/ board:

8chan is a large website, which includes a number of different discussion groups about everything from anime to left-wing politics. /pol/ is one particularly active board on the website, and it is best described as a gathering place for extremely online neo-Nazis.

[...]

I’ve browsed /pol/ on an almost daily basis since the Christchurch shooting. It has not been difficult to find calls for violence. On Monday, March 25 of this year, I ran across evidence of anons translating the Christchurch shooter’s manifesto into other languages in an attempt to inspire more shootings across the globe.

This tactic can work, and today’s shooting is proof. The Poway Synagogue shooter directly cited the Christchurch shooter as his inspiration, saying he decided to carry out his attack roughly two weeks after that shooting. On /pol/, many anons refer to the Christchurch shooter, Brenton Tarrant, as “Saint Tarrant,” complete with medieval-inspired iconography.


Manifestos posted to 8chan are translated and reshared by other platform users (T0101: Create Localised Content, T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0084.004: Appropriate Content).

When I began looking through /pol/ right after the Poway Synagogue shooting, I came across several claims that the shootings had been a “false flag” aimed at making the message board look bad.

When Bellingcat tweeted out a warning about shitposting and the shooter’s manifesto, in the immediate wake of the attack, probable anons even commented on the Tweet in an attempt to deny that a channer had been behind the attack.

This is a recognizable pattern that occurs in the wake of any crimes committed by members of the board. While the initial response to the Christchurch shooter’s massacre thread was riotous glee, in the days after the shooting many anons began to claim the attack had been a false flag. This actually sparked significant division and debate between the members of /pol/. In the below image, a user mocks other anons for being unable to “believe something in your favor is real.” Another anon responds, “As the evidence comes out, its [sic] quite clear that this was a false flag.”

In his manifesto, the Poway Synagogue shooter even weighed in on this debate, accusing other anons who called the Christchurch and Tree of Life Synagogue shootings “false flags” to merely have been scared: “They can’t fathom that there are brave White men alive who have the willpower and courage it takes to say, ‘Fuck my life—I’m willing to sacrifice everything for the benefit of my race.’”


Platform users deny that their platform has been used by mass shooters to publish their manifestos (T0129.006: Deny Involvement). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0101: Create Localised Content + +**Summary**: Localised content refers to content that appeals to a specific community of individuals, often in defined geographic areas. An operation may create localised content using local language and dialects to resonate with its target audience and blend in with other local news and social media. Localised content may help an operation increase legitimacy, avoid detection, and complicate external attribution. + +**Tactic**: TA05 Microtarget + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00102 Ignore The Poway Synagogue Shooter’s Manifesto: Pay Attention To 8chan’s /pol/ Board](../../generated_pages/incidents/I00102.md) | On April 27, 2019, at around 11:30 a.m. local time, a young man with a semi-automatic rifle walked into the Chabad of Poway Synagogue in Poway, California. He opened fire, killing one worshipper and wounding three others. In the hours since the shooting, a manifesto, believed to be written by the shooter, began circulating online. Evidence has also surfaced that, like the Christchurch Mosque shooter, this killer began his rampage with a post on 8chan’s /pol/ board.

Although both of these attacks may seem different, since they targeted worshippers of different faiths, both shooters were united by the same fascist ideology. They were also both radicalized in the same place: 8chan’s /pol/ board.

This has been corroborated by posts on the board itself, where “anons,” as the posters call themselves, recirculated the shooter’s since-deleted post. In it, the alleged shooter claims to have been “lurking” on the site for a year and a half. He includes a link to a livestream of his rampage — which thankfully does not appear to have worked — and he also includes a pastebin link to his manifesto.

The very first response to his announcement was another anon cheering him on and telling him to “get the high score,” AKA, kill a huge number of people.


Before carrying out a mass shooting, the shooter posted a thread to 8chan’s /pol/ board. The post directed users to a variety of different platforms (T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0122: Direct Users to Alternative Platforms); a Facebook account on which the shooter attempted to livestream the shooting (T0146: Account Asset, T0151.001: Social Media Platform); and a manifesto they had written hosted on pastebin (T0146.006: Open Access Platform, T0152.005: Paste Platform, T0115: Post Content) and uploaded to the file sharing platform Mediafire (T0152.010: File Hosting Platform, T0085.004: Develop Document).

The report looks deeper into 8chan’s /pol/ board:

8chan is a large website, which includes a number of different discussion groups about everything from anime to left-wing politics. /pol/ is one particularly active board on the website, and it is best described as a gathering place for extremely online neo-Nazis.

[...]

I’ve browsed /pol/ on an almost daily basis since the Christchurch shooting. It has not been difficult to find calls for violence. On Monday, March 25 of this year, I ran across evidence of anons translating the Christchurch shooter’s manifesto into other languages in an attempt to inspire more shootings across the globe.

This tactic can work, and today’s shooting is proof. The Poway Synagogue shooter directly cited the Christchurch shooter as his inspiration, saying he decided to carry out his attack roughly two weeks after that shooting. On /pol/, many anons refer to the Christchurch shooter, Brenton Tarrant, as “Saint Tarrant,” complete with medieval-inspired iconography.


Manifestos posted to 8chan are translated and reshared by other platform users (T0101: Create Localised Content, T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0084.004: Appropriate Content).

When I began looking through /pol/ right after the Poway Synagogue shooting, I came across several claims that the shootings had been a “false flag” aimed at making the message board look bad.

When Bellingcat tweeted out a warning about shitposting and the shooter’s manifesto, in the immediate wake of the attack, probable anons even commented on the Tweet in an attempt to deny that a channer had been behind the attack.

This is a recognizable pattern that occurs in the wake of any crimes committed by members of the board. While the initial response to the Christchurch shooter’s massacre thread was riotous glee, in the days after the shooting many anons began to claim the attack had been a false flag. This actually sparked significant division and debate between the members of /pol/. In the below image, a user mocks other anons for being unable to “believe something in your favor is real.” Another anon responds, “As the evidence comes out, its [sic] quite clear that this was a false flag.”

In his manifesto, the Poway Synagogue shooter even weighed in on this debate, accusing other anons who called the Christchurch and Tree of Life Synagogue shootings “false flags” to merely have been scared: “They can’t fathom that there are brave White men alive who have the willpower and courage it takes to say, ‘Fuck my life—I’m willing to sacrifice everything for the benefit of my race.’”


Platform users deny that their platform has been used by mass shooters to publish their manifestos (T0129.006: Deny Involvement). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0101: Create Localised Content + +**Summary**: Localised content refers to content that appeals to a specific community of individuals, often in defined geographic areas. An operation may create localised content using local language and dialects to resonate with its target audience and blend in with other local news and social media. Localised content may help an operation increase legitimacy, avoid detection, and complicate external attribution. + **Tactic**: TA05 Microtarget @@ -20,4 +64,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0102.001.md b/generated_pages/techniques/T0102.001.md index 17f504a..7775757 100644 --- a/generated_pages/techniques/T0102.001.md +++ b/generated_pages/techniques/T0102.001.md @@ -2,6 +2,48 @@ **Summary**: Use existing Echo Chambers/Filter Bubbles +**Tactic**: TA05 Microtarget **Parent Technique:** T0102 Leverage Echo Chambers/Filter Bubbles + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0102.001: Use Existing Echo Chambers/Filter Bubbles + +**Summary**: Use existing Echo Chambers/Filter Bubbles + +**Tactic**: TA05 Microtarget **Parent Technique:** T0102 Leverage Echo Chambers/Filter Bubbles + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0102.001: Use Existing Echo Chambers/Filter Bubbles + +**Summary**: Use existing Echo Chambers/Filter Bubbles + **Tactic**: TA05 Microtarget @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0102.002.md b/generated_pages/techniques/T0102.002.md index dc5537c..dd9f98a 100644 --- a/generated_pages/techniques/T0102.002.md +++ b/generated_pages/techniques/T0102.002.md @@ -2,6 +2,48 @@ **Summary**: Create Echo Chambers/Filter Bubbles +**Tactic**: TA05 Microtarget **Parent Technique:** T0102 Leverage Echo Chambers/Filter Bubbles + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0102.002: Create Echo Chambers/Filter Bubbles + +**Summary**: Create Echo Chambers/Filter Bubbles + +**Tactic**: TA05 Microtarget **Parent Technique:** T0102 Leverage Echo Chambers/Filter Bubbles + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0102.002: Create Echo Chambers/Filter Bubbles + +**Summary**: Create Echo Chambers/Filter Bubbles + **Tactic**: TA05 Microtarget @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - 
PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0102.003.md b/generated_pages/techniques/T0102.003.md index d224bb3..a7121a5 100644 --- a/generated_pages/techniques/T0102.003.md +++ b/generated_pages/techniques/T0102.003.md @@ -2,6 +2,48 @@ **Summary**: A data void refers to a word or phrase that results in little, manipulative, or low-quality search engine data. Data voids are hard to detect and relatively harmless until exploited by an entity aiming to quickly proliferate false or misleading information during a phenomenon that causes a high number of individuals to query the term or phrase. In the Plan phase, an influence operation may identify data voids for later exploitation in the operation. A 2019 report by Michael Golebiewski identifies five types of data voids. (1) “Breaking news” data voids occur when a keyword gains popularity during a short period of time, allowing an influence operation to publish false content before legitimate news outlets have an opportunity to publish relevant information. (2) An influence operation may create a “strategic new terms” data void by creating their own terms and publishing information online before promoting their keyword to the target audience. (3) An influence operation may publish content on “outdated terms” that have decreased in popularity, capitalising on most search engines’ preferences for recency. (4) “Fragmented concepts” data voids separate connections between similar ideas, isolating segment queries to distinct search engine results. (5) An influence operation may use “problematic queries” that previously resulted in disturbing or inappropriate content to promote messaging until mainstream media recontextualizes the term. +**Tactic**: TA05 Microtarget **Parent Technique:** T0102 Leverage Echo Chambers/Filter Bubbles + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0102.003: Exploit Data Voids + +**Summary**: A data void refers to a word or phrase that results in little, manipulative, or low-quality search engine data. Data voids are hard to detect and relatively harmless until exploited by an entity aiming to quickly proliferate false or misleading information during a phenomenon that causes a high number of individuals to query the term or phrase. In the Plan phase, an influence operation may identify data voids for later exploitation in the operation. A 2019 report by Michael Golebiewski identifies five types of data voids. (1) “Breaking news” data voids occur when a keyword gains popularity during a short period of time, allowing an influence operation to publish false content before legitimate news outlets have an opportunity to publish relevant information. (2) An influence operation may create a “strategic new terms” data void by creating their own terms and publishing information online before promoting their keyword to the target audience. (3) An influence operation may publish content on “outdated terms” that have decreased in popularity, capitalising on most search engines’ preferences for recency. (4) “Fragmented concepts” data voids separate connections between similar ideas, isolating segment queries to distinct search engine results. 
(5) An influence operation may use “problematic queries” that previously resulted in disturbing or inappropriate content to promote messaging until mainstream media recontextualizes the term. + +**Tactic**: TA05 Microtarget **Parent Technique:** T0102 Leverage Echo Chambers/Filter Bubbles + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0102.003: Exploit Data Voids + +**Summary**: A data void refers to a word or phrase that results in little, manipulative, or low-quality search engine data. Data voids are hard to detect and relatively harmless until exploited by an entity aiming to quickly proliferate false or misleading information during a phenomenon that causes a high number of individuals to query the term or phrase. In the Plan phase, an influence operation may identify data voids for later exploitation in the operation. A 2019 report by Michael Golebiewski identifies five types of data voids. (1) “Breaking news” data voids occur when a keyword gains popularity during a short period of time, allowing an influence operation to publish false content before legitimate news outlets have an opportunity to publish relevant information. (2) An influence operation may create a “strategic new terms” data void by creating their own terms and publishing information online before promoting their keyword to the target audience. (3) An influence operation may publish content on “outdated terms” that have decreased in popularity, capitalising on most search engines’ preferences for recency. (4) “Fragmented concepts” data voids separate connections between similar ideas, isolating segment queries to distinct search engine results. (5) An influence operation may use “problematic queries” that previously resulted in disturbing or inappropriate content to promote messaging until mainstream media recontextualizes the term. + **Tactic**: TA05 Microtarget @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0102.md index 2272982..f566821 100644 --- a/generated_pages/techniques/T0102.md +++ b/generated_pages/techniques/T0102.md @@ -2,6 +2,48 @@ **Summary**: An echo chamber refers to an internet subgroup, often along ideological lines, where individuals only engage with “others with which they are already in agreement.” A filter bubble refers to an algorithm's placement of an individual in content that they agree with or regularly engage with, possibly entrapping the user into a bubble of their own making. An operation may create these isolated areas of the internet by matching existing groups, or aggregating individuals into a single target audience based on shared interests, politics, values, demographics, and other characteristics. Echo chambers and filter bubbles help to reinforce similar biases and content to the same target audience members.
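The filter-bubble mechanism this summary describes, ranking driven by what a user already engages with, can be made concrete with a minimal Python sketch. This is an illustration only, not any platform's actual ranking system and not part of the DISARM codebase; all names and data are invented:

```python
# Minimal, hypothetical sketch of engagement-driven ranking (illustrative only).
# Items overlapping the user's engagement history are ranked first, so each
# interaction narrows future exposure: the "filter bubble" effect.

def rank_feed(candidates, engagement_history):
    """Order candidate items by topical overlap with past engagement."""
    seen_topics = {t for item in engagement_history for t in item["topics"]}
    return sorted(
        candidates,
        key=lambda item: len(seen_topics & set(item["topics"])),
        reverse=True,
    )

history = [{"id": 1, "topics": ["politics", "parenting"]}]
candidates = [
    {"id": 2, "topics": ["politics", "conspiracy"]},  # overlaps history -> ranked first
    {"id": 3, "topics": ["sports"]},                  # no overlap -> ranked last
]
print([item["id"] for item in rank_feed(candidates, history)])  # [2, 3]
```

Because overlap with past engagement is the only signal, every click shrinks the set of topics the user is likely to see next, which is the entrapment dynamic the summary describes.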
+**Tactic**: TA05 Microtarget + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0102: Leverage Echo Chambers/Filter Bubbles + +**Summary**: An echo chamber refers to an internet subgroup, often along ideological lines, where individuals only engage with “others with which they are already in agreement.” A filter bubble refers to an algorithm's placement of an individual in content that they agree with or regularly engage with, possibly entrapping the user into a bubble of their own making. An operation may create these isolated areas of the internet by matching existing groups, or aggregating individuals into a single target audience based on shared interests, politics, values, demographics, and other characteristics. Echo chambers and filter bubbles help to reinforce similar biases and content to the same target audience members. + +**Tactic**: TA05 Microtarget + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0102: Leverage Echo Chambers/Filter Bubbles + +**Summary**: An echo chamber refers to an internet subgroup, often along ideological lines, where individuals only engage with “others with which they are already in agreement.” A filter bubble refers to an algorithm's placement of an individual in content that they agree with or regularly engage with, possibly entrapping the user into a bubble of their own making. An operation may create these isolated areas of the internet by matching existing groups, or aggregating individuals into a single target audience based on shared interests, politics, values, demographics, and other characteristics. Echo chambers and filter bubbles help to reinforce similar biases and content to the same target audience members. + **Tactic**: TA05 Microtarget @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0107.md index 1cca96f..ae355a9 100644 --- a/generated_pages/techniques/T0107.md +++ b/generated_pages/techniques/T0107.md @@ -2,6 +2,48 @@ **Summary**: Platforms for searching, sharing, and curating content and media. +**Tactic**: TA07 Select Channels and Affordances + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0107: Bookmarking and Content Curation + +**Summary**: Platforms for searching, sharing, and curating content and media. Examples include Pinterest, Flipboard, etc. + +**Tactic**: TA07 Select Channels and Affordances + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0107: Bookmarking and Content Curation + +**Summary**: Platforms for searching, sharing, and curating content and media.
Examples include Pinterest, Flipboard, etc. + **Tactic**: TA07 Select Channels and Affordances @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0109.md b/generated_pages/techniques/T0109.md index 909982c..22de60f 100644 --- a/generated_pages/techniques/T0109.md +++ b/generated_pages/techniques/T0109.md @@ -2,6 +2,48 @@ **Summary**: Platforms for finding, reviewing, and sharing information about brands, products, services, restaurants, travel destinations, etc. Examples include Yelp, TripAdvisor, etc. +**Tactic**: TA07 Select Channels and Affordances + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0109: Consumer Review Networks + +**Summary**: Platforms for finding, reviewing, and sharing information about brands, products, services, restaurants, travel destinations, etc. Examples include Yelp, TripAdvisor, etc. + +**Tactic**: TA07 Select Channels and Affordances + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0109: Consumer Review Networks + +**Summary**: Platforms for finding, reviewing, and sharing information about brands, products, services, restaurants, travel destinations, etc. Examples include Yelp, TripAdvisor, etc. + **Tactic**: TA07 Select Channels and Affordances @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0110.md b/generated_pages/techniques/T0110.md index e661206..b7ab329 100644 --- a/generated_pages/techniques/T0110.md +++ b/generated_pages/techniques/T0110.md @@ -2,6 +2,48 @@ **Summary**: Leveraging formal, traditional, diplomatic channels to communicate with foreign governments (written documents, meetings, summits, diplomatic visits, etc). This type of diplomacy is conducted by diplomats of one nation with diplomats and other officials of another nation or international organisation. +**Tactic**: TA07 Select Channels and Affordances + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0110: Formal Diplomatic Channels + +**Summary**: Leveraging formal, traditional, diplomatic channels to communicate with foreign governments (written documents, meetings, summits, diplomatic visits, etc). This type of diplomacy is conducted by diplomats of one nation with diplomats and other officials of another nation or international organisation. 
+ +**Tactic**: TA07 Select Channels and Affordances + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0110: Formal Diplomatic Channels + +**Summary**: Leveraging formal, traditional, diplomatic channels to communicate with foreign governments (written documents, meetings, summits, diplomatic visits, etc). This type of diplomacy is conducted by diplomats of one nation with diplomats and other officials of another nation or international organisation. + **Tactic**: TA07 Select Channels and Affordances @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0111.001.md b/generated_pages/techniques/T0111.001.md index fec9f4c..73f81a3 100644 --- a/generated_pages/techniques/T0111.001.md +++ b/generated_pages/techniques/T0111.001.md @@ -2,6 +2,48 @@ **Summary**: TV +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0111 Traditional Media + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0111.001: TV + +**Summary**: TV + +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0111 Traditional Media + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0111.001: TV + +**Summary**: TV + **Tactic**: TA07 Select Channels and Affordances @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0111.002.md b/generated_pages/techniques/T0111.002.md index b5265c1..6b9531c 100644 --- a/generated_pages/techniques/T0111.002.md +++ b/generated_pages/techniques/T0111.002.md @@ -2,6 +2,48 @@ **Summary**: Newspaper +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0111 Traditional Media + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0111.002: Newspaper + +**Summary**: Newspaper + +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0111 Traditional Media + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0111.002: Newspaper + +**Summary**: Newspaper + **Tactic**: TA07 Select Channels and Affordances @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0111.003.md b/generated_pages/techniques/T0111.003.md index 3798be6..da462da 100644 --- a/generated_pages/techniques/T0111.003.md +++ 
b/generated_pages/techniques/T0111.003.md @@ -2,6 +2,48 @@ **Summary**: Radio +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0111 Traditional Media + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0111.003: Radio + +**Summary**: Radio + +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0111 Traditional Media + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0111.003: Radio + +**Summary**: Radio + **Tactic**: TA07 Select Channels and Affordances @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0111.md b/generated_pages/techniques/T0111.md index d979371..e2ece83 100644 --- a/generated_pages/techniques/T0111.md +++ b/generated_pages/techniques/T0111.md @@ -2,6 +2,48 @@ **Summary**: Examples include TV, Newspaper, Radio, etc. +**Tactic**: TA07 Select Channels and Affordances + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0111: Traditional Media + +**Summary**: Examples include TV, Newspaper, Radio, etc. + +**Tactic**: TA07 Select Channels and Affordances + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0111: Traditional Media + +**Summary**: Examples include TV, Newspaper, Radio, etc. + **Tactic**: TA07 Select Channels and Affordances @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0113.md b/generated_pages/techniques/T0113.md index 56a5032..c9ca17e 100644 --- a/generated_pages/techniques/T0113.md +++ b/generated_pages/techniques/T0113.md @@ -2,6 +2,48 @@ **Summary**: Commercial analytic firms collect data on target audience activities and evaluate the data to detect trends, such as content receiving high click-rates. An influence operation may employ commercial analytic firms to facilitate external collection on its target audience, complicating attribution efforts and better tailoring the content to audience preferences. +**Tactic**: TA15 Establish Assets + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0113: Employ Commercial Analytic Firms + +**Summary**: Commercial analytic firms collect data on target audience activities and evaluate the data to detect trends, such as content receiving high click-rates. 
An influence operation may employ commercial analytic firms to facilitate external collection on its target audience, complicating attribution efforts and better tailoring the content to audience preferences. + +**Tactic**: TA15 Establish Assets + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0113: Employ Commercial Analytic Firms + +**Summary**: Commercial analytic firms collect data on target audience activities and evaluate the data to detect trends, such as content receiving high click-rates. An influence operation may employ commercial analytic firms to facilitate external collection on its target audience, complicating attribution efforts and better tailoring the content to audience preferences. + **Tactic**: TA15 Establish Assets @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0114.001.md b/generated_pages/techniques/T0114.001.md index ca73142..dcb3aef 100644 --- a/generated_pages/techniques/T0114.001.md +++ b/generated_pages/techniques/T0114.001.md @@ -2,6 +2,48 @@ **Summary**: Social Media +**Tactic**: TA09 Deliver Content **Parent Technique:** T0114 Deliver Ads + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0114.001: Social Media + +**Summary**: Social Media + +**Tactic**: TA09 Deliver Content **Parent Technique:** T0114 Deliver Ads + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0114.001: Social Media + +**Summary**: Social Media + **Tactic**: TA09 Deliver Content @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0114.002.md b/generated_pages/techniques/T0114.002.md index 305121a..00c7da5 100644 --- a/generated_pages/techniques/T0114.002.md +++ b/generated_pages/techniques/T0114.002.md @@ -2,6 +2,48 @@ **Summary**: Examples include TV, Radio, Newspaper, billboards +**Tactic**: TA09 Deliver Content **Parent Technique:** T0114 Deliver Ads + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0114.002: Traditional Media + +**Summary**: Examples include TV, Radio, Newspaper, billboards + +**Tactic**: TA09 Deliver Content **Parent Technique:** T0114 Deliver Ads + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0114.002: Traditional Media + +**Summary**: Examples include TV, Radio, Newspaper, billboards + **Tactic**: TA09 Deliver Content @@ -19,4 +61,3 @@ | -------- | 
-------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0114.md b/generated_pages/techniques/T0114.md index b630bad..45ca073 100644 --- a/generated_pages/techniques/T0114.md +++ b/generated_pages/techniques/T0114.md @@ -2,6 +2,50 @@ **Summary**: Delivering content via any form of paid media or advertising. +**Tactic**: TA09 Deliver Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00114 ‘Carol’s Journey’: What Facebook knew about how it radicalized users](../../generated_pages/incidents/I00114.md) | This report examines internal Facebook communications which reveal employees’ concerns about how the platform’s algorithm was recommending users join extremist conspiracy groups.

In summer 2019, a new Facebook user named Carol Smith signed up for the platform, describing herself as a politically conservative mother from Wilmington, North Carolina. Smith’s account indicated an interest in politics, parenting and Christianity and followed a few of her favorite brands, including Fox News and then-President Donald Trump.

Though Smith had never expressed interest in conspiracy theories, in just two days Facebook was recommending she join groups dedicated to QAnon, a sprawling and baseless conspiracy theory and movement that claimed Trump was secretly saving the world from a cabal of pedophiles and Satanists.

Smith didn’t follow the recommended QAnon groups, but whatever algorithm Facebook was using to determine how she should engage with the platform pushed ahead just the same. Within one week, Smith’s feed was full of groups and pages that had violated Facebook’s own rules, including those against hate speech and disinformation.

Smith wasn’t a real person. A researcher employed by Facebook invented the account, along with those of other fictitious “test users” in 2019 and 2020, as part of an experiment in studying the platform’s role in misinforming and polarizing users through its recommendations systems.

That researcher said Smith’s Facebook experience was “a barrage of extreme, conspiratorial, and graphic content.”


Facebook’s algorithm suggested users join groups which supported the QAnon movement (T0151.001: Social Media Platform, T0151.002: Online Community Group, T0153.006: Content Recommendation Algorithm, T0097.208: Social Cause Persona).<br>

Further investigation by Facebook uncovered that its advertising platform had been used to promote QAnon narratives (T0146: Account Asset, T0114: Deliver Ads, T0153.005: Online Advertising Platform):

For years, company researchers had been running experiments like Carol Smith’s to gauge the platform’s hand in radicalizing users, according to the documents seen by NBC News.

This internal work repeatedly found that recommendation tools pushed users into extremist groups, findings that helped inform policy changes and tweaks to recommendations and news feed rankings. Those rankings are a tentacled, ever-evolving system widely known as “the algorithm” that pushes content to users. But the research at that time stopped well short of inspiring any movement to change the groups and pages themselves.

That reluctance was indicative of “a pattern at Facebook,” Haugen told reporters this month. “They want the shortest path between their current policies and any action.”

[...]

By summer 2020, Facebook was hosting thousands of private QAnon groups and pages, with millions of members and followers, according to an unreleased internal investigation.

A year after the FBI designated QAnon as a potential domestic terrorist threat in the wake of standoffs, alleged planned kidnappings, harassment campaigns and shootings, Facebook labeled QAnon a “Violence Inciting Conspiracy Network” and banned it from the platform, along with militias and other violent social movements. A small team working across several of Facebook’s departments found its platforms had hosted hundreds of ads on Facebook and Instagram worth thousands of dollars and millions of views, “praising, supporting, or representing” the conspiracy theory.

[...]

For many employees inside Facebook, the enforcement came too late, according to posts left on Workplace, the company’s internal message board.

“We’ve known for over a year now that our recommendation systems can very quickly lead users down the path to conspiracy theories and groups,” one integrity researcher, whose name had been redacted, wrote in a post announcing she was leaving the company. “This fringe group has grown to national prominence, with QAnon congressional candidates and QAnon hashtags and groups trending in the mainstream. We were willing to act only *after* things had spiraled into a dire state.”<br>

While Facebook’s ban initially appeared effective, a problem remained: The removal of groups and pages didn’t wipe out QAnon’s most extreme followers, who continued to organize on the platform.

“There was enough evidence to raise red flags in the expert community that Facebook and other platforms failed to address QAnon’s violent extremist dimension,” said Marc-André Argentino, a research fellow at King’s College London’s International Centre for the Study of Radicalisation, who has extensively studied QAnon.

Believers simply rebranded as anti-child-trafficking groups or migrated to other communities, including those around the anti-vaccine movement.

[...]

These conspiracy groups had become the fastest-growing groups on Facebook, according to the report, but Facebook wasn’t able to control their “meteoric growth,” the researchers wrote, “because we were looking at each entity individually, rather than as a cohesive movement.” A Facebook spokesperson told BuzzFeed News it took many steps to limit election misinformation but that it was unable to catch everything.
| + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0114: Deliver Ads + +**Summary**: Delivering content via any form of paid media or advertising. + +**Tactic**: TA09 Deliver Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00114 ‘Carol’s Journey’: What Facebook knew about how it radicalized users](../../generated_pages/incidents/I00114.md) | This report examines internal Facebook communications which reveal employees’ concerns about how the platform’s algorithm was recommending users join extremist conspiracy groups.

In summer 2019, a new Facebook user named Carol Smith signed up for the platform, describing herself as a politically conservative mother from Wilmington, North Carolina. Smith’s account indicated an interest in politics, parenting and Christianity and followed a few of her favorite brands, including Fox News and then-President Donald Trump.

Though Smith had never expressed interest in conspiracy theories, in just two days Facebook was recommending she join groups dedicated to QAnon, a sprawling and baseless conspiracy theory and movement that claimed Trump was secretly saving the world from a cabal of pedophiles and Satanists.

Smith didn’t follow the recommended QAnon groups, but whatever algorithm Facebook was using to determine how she should engage with the platform pushed ahead just the same. Within one week, Smith’s feed was full of groups and pages that had violated Facebook’s own rules, including those against hate speech and disinformation.

Smith wasn’t a real person. A researcher employed by Facebook invented the account, along with those of other fictitious “test users” in 2019 and 2020, as part of an experiment in studying the platform’s role in misinforming and polarizing users through its recommendations systems.

That researcher said Smith’s Facebook experience was “a barrage of extreme, conspiratorial, and graphic content.”


Facebook’s algorithm suggested users join groups which supported the QAnon movement (T0151.001: Social Media Platform, T0151.002: Online Community Group, T0153.006: Content Recommendation Algorithm, T0097.208: Social Cause Persona).<br>

Further investigation by Facebook uncovered that its advertising platform had been used to promote QAnon narratives (T0146: Account Asset, T0114: Deliver Ads, T0153.005: Online Advertising Platform):

For years, company researchers had been running experiments like Carol Smith’s to gauge the platform’s hand in radicalizing users, according to the documents seen by NBC News.

This internal work repeatedly found that recommendation tools pushed users into extremist groups, findings that helped inform policy changes and tweaks to recommendations and news feed rankings. Those rankings are a tentacled, ever-evolving system widely known as “the algorithm” that pushes content to users. But the research at that time stopped well short of inspiring any movement to change the groups and pages themselves.

That reluctance was indicative of “a pattern at Facebook,” Haugen told reporters this month. “They want the shortest path between their current policies and any action.”

[...]

By summer 2020, Facebook was hosting thousands of private QAnon groups and pages, with millions of members and followers, according to an unreleased internal investigation.

A year after the FBI designated QAnon as a potential domestic terrorist threat in the wake of standoffs, alleged planned kidnappings, harassment campaigns and shootings, Facebook labeled QAnon a “Violence Inciting Conspiracy Network” and banned it from the platform, along with militias and other violent social movements. A small team working across several of Facebook’s departments found its platforms had hosted hundreds of ads on Facebook and Instagram worth thousands of dollars and millions of views, “praising, supporting, or representing” the conspiracy theory.

[...]

For many employees inside Facebook, the enforcement came too late, according to posts left on Workplace, the company’s internal message board.

“We’ve known for over a year now that our recommendation systems can very quickly lead users down the path to conspiracy theories and groups,” one integrity researcher, whose name had been redacted, wrote in a post announcing she was leaving the company. “This fringe group has grown to national prominence, with QAnon congressional candidates and QAnon hashtags and groups trending in the mainstream. We were willing to act only *after* things had spiraled into a dire state.”<br>

While Facebook’s ban initially appeared effective, a problem remained: The removal of groups and pages didn’t wipe out QAnon’s most extreme followers, who continued to organize on the platform.

“There was enough evidence to raise red flags in the expert community that Facebook and other platforms failed to address QAnon’s violent extremist dimension,” said Marc-André Argentino, a research fellow at King’s College London’s International Centre for the Study of Radicalisation, who has extensively studied QAnon.

Believers simply rebranded as anti-child-trafficking groups or migrated to other communities, including those around the anti-vaccine movement.

[...]

These conspiracy groups had become the fastest-growing groups on Facebook, according to the report, but Facebook wasn’t able to control their “meteoric growth,” the researchers wrote, “because we were looking at each entity individually, rather than as a cohesive movement.” A Facebook spokesperson told BuzzFeed News it took many steps to limit election misinformation but that it was unable to catch everything.
| + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0114: Deliver Ads + +**Summary**: Delivering content via any form of paid media or advertising. + **Tactic**: TA09 Deliver Content @@ -20,4 +64,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0115.001.md b/generated_pages/techniques/T0115.001.md index 7d4e570..401649e 100644 --- a/generated_pages/techniques/T0115.001.md +++ b/generated_pages/techniques/T0115.001.md @@ -2,6 +2,48 @@ **Summary**: Memes are one of the most important single artefact types in all of computational propaganda. Memes in this framework denotes the narrow image-based definition. But that naming is no accident, as these items have most of the important properties of Dawkins' original conception as a self-replicating unit of culture. Memes pull together reference and commentary; image and narrative; emotion and message. Memes are a powerful tool and the heart of modern influence campaigns. +**Tactic**: TA09 Deliver Content **Parent Technique:** T0115 Post Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0115.001: Share Memes + +**Summary**: Memes are one of the most important single artefact types in all of computational propaganda. Memes in this framework denotes the narrow image-based definition. But that naming is no accident, as these items have most of the important properties of Dawkins' original conception as a self-replicating unit of culture. Memes pull together reference and commentary; image and narrative; emotion and message. Memes are a powerful tool and the heart of modern influence campaigns. + +**Tactic**: TA09 Deliver Content **Parent Technique:** T0115 Post Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0115.001: Share Memes + +**Summary**: Memes are one of the most important single artefact types in all of computational propaganda. Memes in this framework denotes the narrow image-based definition. But that naming is no accident, as these items have most of the important properties of Dawkins' original conception as a self-replicating unit of culture. Memes pull together reference and commentary; image and narrative; emotion and message. Memes are a powerful tool and the heart of modern influence campaigns. + **Tactic**: TA09 Deliver Content @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0115.002.md b/generated_pages/techniques/T0115.002.md index d804182..7189d8a 100644 --- a/generated_pages/techniques/T0115.002.md +++ b/generated_pages/techniques/T0115.002.md @@ -2,6 +2,48 @@ **Summary**: Post Violative Content to Provoke Takedown and Backlash. 
+**Tactic**: TA09 Deliver Content **Parent Technique:** T0115 Post Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0115.002: Post Violative Content to Provoke Takedown and Backlash + +**Summary**: Post Violative Content to Provoke Takedown and Backlash. + +**Tactic**: TA09 Deliver Content **Parent Technique:** T0115 Post Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0115.002: Post Violative Content to Provoke Takedown and Backlash + +**Summary**: Post Violative Content to Provoke Takedown and Backlash. + **Tactic**: TA09 Deliver Content @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0115.003.md b/generated_pages/techniques/T0115.003.md index 0bfe8c0..ec7a53c 100644 --- a/generated_pages/techniques/T0115.003.md +++ b/generated_pages/techniques/T0115.003.md @@ -2,6 +2,48 @@ **Summary**: Direct posting refers to a method of posting content via a one-way messaging service, where the recipient cannot directly respond to the poster’s messaging. An influence operation may post directly to promote operation narratives to the target audience without allowing opportunities for fact-checking or disagreement, creating a false sense of support for the narrative. +**Tactic**: TA09 Deliver Content **Parent Technique:** T0115 Post Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0115.003: One-Way Direct Posting + +**Summary**: Direct posting refers to a method of posting content via a one-way messaging service, where the recipient cannot directly respond to the poster’s messaging. An influence operation may post directly to promote operation narratives to the target audience without allowing opportunities for fact-checking or disagreement, creating a false sense of support for the narrative. + +**Tactic**: TA09 Deliver Content **Parent Technique:** T0115 Post Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0115.003: One-Way Direct Posting + +**Summary**: Direct posting refers to a method of posting content via a one-way messaging service, where the recipient cannot directly respond to the poster’s messaging. An influence operation may post directly to promote operation narratives to the target audience without allowing opportunities for fact-checking or disagreement, creating a false sense of support for the narrative. 
+ **Tactic**: TA09 Deliver Content @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0115.md b/generated_pages/techniques/T0115.md index fbad643..4f8cbaf 100644 --- a/generated_pages/techniques/T0115.md +++ b/generated_pages/techniques/T0115.md @@ -2,6 +2,51 @@ **Summary**: Delivering content by posting via owned media (assets that the operator controls). +**Tactic**: TA09 Deliver Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0115: Post Content + +**Summary**: Delivering content by posting via owned media (assets that the operator controls). + +**Tactic**: TA09 Deliver Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00096 China ramps up use of AI misinformation](../../generated_pages/incidents/I00096.md) | The Microsoft Threat Analysis Centre (MTAC) published a report documenting the use of AI by pro-Chinese threat actors:

On 13 January, Spamouflage [(a Pro-Chinese Communist Party actor)] posted audio clips to YouTube of independent candidate [for Taiwan’s Jan 2024 presidential election] Terry Gou – who also founded electronics giant Foxconn – in which Gou endorsed another candidate in the race. This clip was almost certainly AI-generated, and it was swiftly removed by YouTube. A fake letter purporting to be from Gou, endorsing the same candidate, had already circulated – Gou had of course made no such endorsement.

Here Spamouflage used an account on YouTube to post AI-generated audio impersonating an electoral candidate (T0146: Account Asset, T0152.006: Video Platform, T0115: Post Content, T0088.001: Develop AI-Generated Audio (Deepfakes), T0143.003: Impersonated Persona, T0097.110: Party Official Persona).<br>

Spamouflage also exploited AI-powered video platform CapCut – which is owned by TikTok backers ByteDance – to generate fake news anchors which were used in a variety of campaigns targeting the various presidential candidates in Taiwan.

Spamouflage created accounts on CapCut, which it used to create AI-generated videos of fabricated news anchors (T0146: Account Asset, T0154.002: AI Media Platform, T0087.001: Develop AI-Generated Video (Deepfakes), T0143.002: Fabricated Persona, T0097.102: Journalist Persona). | +| [I00104 Macron Campaign Hit With “Massive and Coordinated” Hacking Attack](../../generated_pages/incidents/I00104.md) | A massive trove of documents purporting to contain thousands of emails and other files from the [2017 presidential] campaign of Emmanuel Macron—the French centrist candidate squaring off against right-wing nationalist Marine Le Pen—was posted on the internet Friday afternoon. The Macron campaign says that at least some of the documents are fake. The document dump came just over a day before voting is set to begin in the final round of the election and mere hours before candidates are legally required to stop campaigning.<br>

At about 2:35 p.m. ET, a post appeared on the 4chan online message board announcing the leak. The documents appear to include emails, internal memos, and screenshots of purported banking records.

“In this pastebin are links to torrents of emails between Macron, his team and other officials, politicians as well as original documents and photos,” the anonymous 4chan poster wrote. “This was passed on to me today so now I am giving it to you, the people. The leak is massvie and released in the hopes that the human search engine here will be able to start sifting through the contents and figure out exactly what we have here.”

The Macron campaign issued a statement Friday night saying it was the victim of a “massive and coordinated” hacking attack. That campaign said the leak included some fake documents that were intended “to sow doubt and misinformation.”


Actors posted to 4chan a link (T0151.012: Image Board Platform, T0146.006: Open Access Platform, T0115: Post Content, T0122: Direct Users to Alternative Platforms) to text content hosted on pastebin (T0152.005: Paste Platform, T0146.006: Open Access Platform, T0115: Post Content), which contained links to download stolen and fabricated documents. | +| [I00118 ‘War Thunder’ players are once again leaking sensitive military technology information on a video game forum](../../generated_pages/incidents/I00118.md) | In an effort to prove that the developers behind a popular multiplayer vehicle combat game had made a mistake, a player went ahead and published classified British military documents about one of the real-life tanks featured in the game.<br>

This truly bizarre turn of events recently occurred in the public forum for War Thunder, a free-to-play multiplayer combat sim featuring modern land, air, and sea craft. Getting a small detail wrong on a piece of equipment might not be a big deal for the average gamer, but for the War Thunder crowd it sure as hell is. With 25,000 devoted players, the game very much bills itself as the military vehicle combat simulator.<br>

A player, who identified himself as a British tank commander, claimed that the game’s developers at Gaijin Entertainment had inaccurately represented the Challenger 2 main battle tank used by the British military.

The self-described tank commander’s bio listed his location as Tidworth Camp in Wiltshire, England, according to the UK Defense Journal, which reported that the base is home to the Royal Tank Regiment, which fields Challenger 2 tanks.

The player, who went by the handle Pyrophoric, reportedly shared an image on the War Thunder forum of the tank’s specs that were pulled from the Challenger 2’s Army Equipment Support Publication, which is essentially a technical manual.

[...]

A moderator for the forum, whose handle is “Templar_”, explained that the developer had removed the material after they received confirmation from the Ministry of Defense that the document is still in fact classified.<br>


A user of War Thunder’s forums posted confidential documents to win an argument (T0089.001: Obtain Authentic Documents, T0146: Account Asset, T0097.105: Military Personnel Persona, T0115: Post Content, T0143.001: Authentic Persona, T0151.009: Legacy Online Forum Platform). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0115: Post Content + +**Summary**: Delivering content by posting via owned media (assets that the operator controls). + **Tactic**: TA09 Deliver Content @@ -22,4 +67,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0116.001.md index 4e64fa5..7e7a73d 100644 --- a/generated_pages/techniques/T0116.001.md +++ b/generated_pages/techniques/T0116.001.md @@ -2,6 +2,48 @@ **Summary**: Use government-paid social media commenters, astroturfers, chat bots (programmed to reply to specific keywords/hashtags) to influence online conversations, product reviews, website comment forums. +**Tactic**: TA09 Deliver Content **Parent Technique:** T0116 Comment or Reply on Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0116.001: Post Inauthentic Social Media Comment + +**Summary**: Use government-paid social media commenters, astroturfers, chat bots (programmed to reply to specific keywords/hashtags) to influence online conversations, product reviews, website comment forums. + +**Tactic**: TA09 Deliver Content **Parent Technique:** T0116 Comment or Reply on Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0116.001: Post Inauthentic Social Media Comment + +**Summary**: Use government-paid social media commenters, astroturfers, chat bots (programmed to reply to specific keywords/hashtags) to influence online conversations, product reviews, website comment forums. + **Tactic**: TA09 Deliver Content @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0116.md index 00f2959..401bae3 100644 --- a/generated_pages/techniques/T0116.md +++ b/generated_pages/techniques/T0116.md @@ -2,6 +2,48 @@ **Summary**: Delivering content by replying or commenting via owned media (assets that the operator controls).
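The keyword-triggered chat bots described in the T0116.001 summary above reduce to a simple trigger-to-reply lookup. A minimal, hypothetical sketch in Python (triggers, replies, and function names are invented placeholders, not tooling from any real operation):

```python
# Hypothetical sketch of the keyword-trigger mechanism described in T0116.001
# (illustrative only; triggers and replies are invented placeholders).

CANNED_REPLIES = {
    "#examplehashtag": "Scripted talking point A",
    "example keyword": "Scripted talking point B",
}

def pick_reply(post_text):
    """Return the canned reply for the first trigger found in the post, if any."""
    text = post_text.lower()
    for trigger, reply in CANNED_REPLIES.items():
        if trigger in text:
            return reply
    return None  # no trigger matched; the bot stays silent

print(pick_reply("Watching the debate tonight #ExampleHashtag"))  # Scripted talking point A
```

The fixed trigger-to-reply mapping is also what makes such bots detectable in practice: identical canned replies recur across many accounts.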
+ +**Tactic**: TA09 Deliver Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0116: Comment or Reply on Content + +**Summary**: Delivering content by replying or commenting via owned media (assets that the operator controls). + **Tactic**: TA09 Deliver Content @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0117.md b/generated_pages/techniques/T0117.md index 3b7eaf9..14ff02a 100644 --- a/generated_pages/techniques/T0117.md +++ b/generated_pages/techniques/T0117.md @@ -2,6 +2,48 @@ **Summary**: Deliver content by attracting the attention of traditional media (earned media). +**Tactic**: TA09 Deliver Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0117: Attract Traditional Media + +**Summary**: Deliver content by attracting the attention of traditional media (earned media). + +**Tactic**: TA09 Deliver Content + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0117: Attract Traditional Media + +**Summary**: Deliver content by attracting the attention of traditional media (earned media). + **Tactic**: TA09 Deliver Content @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0118.md b/generated_pages/techniques/T0118.md index e75aa5c..b87ca39 100644 --- a/generated_pages/techniques/T0118.md +++ b/generated_pages/techniques/T0118.md @@ -2,6 +2,48 @@ **Summary**: An influence operation may amplify existing narratives that align with its narratives to support operation objectives. +**Tactic**: TA17 Maximise Exposure + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0118: Amplify Existing Narrative + +**Summary**: An influence operation may amplify existing narratives that align with its narratives to support operation objectives. + +**Tactic**: TA17 Maximise Exposure + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0118: Amplify Existing Narrative + +**Summary**: An influence operation may amplify existing narratives that align with its narratives to support operation objectives. 
+ **Tactic**: TA17 Maximise Exposure @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0119.001.md b/generated_pages/techniques/T0119.001.md index ccf365b..d3646c0 100644 --- a/generated_pages/techniques/T0119.001.md +++ b/generated_pages/techniques/T0119.001.md @@ -2,6 +2,48 @@ **Summary**: An influence operation may post content across groups to spread narratives and content to new communities within the target audiences or to new target audiences. +**Tactic**: TA17 Maximise Exposure **Parent Technique:** T0119 Cross-Posting + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0119.001: Post across Groups + +**Summary**: An influence operation may post content across groups to spread narratives and content to new communities within the target audiences or to new target audiences. + +**Tactic**: TA17 Maximise Exposure **Parent Technique:** T0119 Cross-Posting + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0119.001: Post across Groups + +**Summary**: An influence operation may post content across groups to spread narratives and content to new communities within the target audiences or to new target audiences. + **Tactic**: TA17 Maximise Exposure @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0119.002.md b/generated_pages/techniques/T0119.002.md index 24d39da..3098986 100644 --- a/generated_pages/techniques/T0119.002.md +++ b/generated_pages/techniques/T0119.002.md @@ -2,6 +2,48 @@ **Summary**: An influence operation may post content across platforms to spread narratives and content to new communities within the target audiences or to new target audiences. Posting across platforms can also remove opposition and context, helping the narrative spread with less opposition on the cross-posted platform. +**Tactic**: TA17 Maximise Exposure **Parent Technique:** T0119 Cross-Posting + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0119.002: Post across Platform + +**Summary**: An influence operation may post content across platforms to spread narratives and content to new communities within the target audiences or to new target audiences. Posting across platforms can also remove opposition and context, helping the narrative spread with less opposition on the cross-posted platform. 
+ +**Tactic**: TA17 Maximise Exposure **Parent Technique:** T0119 Cross-Posting + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0119.002: Post across Platform + +**Summary**: An influence operation may post content across platforms to spread narratives and content to new communities within the target audiences or to new target audiences. Posting across platforms can also remove opposition and context, helping the narrative spread with less opposition on the cross-posted platform. + **Tactic**: TA17 Maximise Exposure @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0119.003.md b/generated_pages/techniques/T0119.003.md index 5ac6c81..cd5e00f 100644 --- a/generated_pages/techniques/T0119.003.md +++ b/generated_pages/techniques/T0119.003.md @@ -2,6 +2,48 @@ **Summary**: Post Across Disciplines +**Tactic**: TA17 Maximise Exposure **Parent Technique:** T0119 Cross-Posting + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0119.003: Post across Disciplines + +**Summary**: Post Across Disciplines + +**Tactic**: TA17 Maximise Exposure **Parent Technique:** T0119 Cross-Posting + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0119.003: Post across Disciplines + +**Summary**: Post Across Disciplines + **Tactic**: TA17 Maximise Exposure @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0119.md b/generated_pages/techniques/T0119.md index c1a3d06..6a75ff2 100644 --- a/generated_pages/techniques/T0119.md +++ b/generated_pages/techniques/T0119.md @@ -2,6 +2,48 @@ **Summary**: Cross-posting refers to posting the same message to multiple internet discussions, social media platforms or accounts, or news groups at one time. An influence operation may post content online in multiple communities and platforms to increase the chances of content exposure to the target audience. +**Tactic**: TA17 Maximise Exposure + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0119: Cross-Posting + +**Summary**: Cross-posting refers to posting the same message to multiple internet discussions, social media platforms or accounts, or news groups at one time. An influence operation may post content online in multiple communities and platforms to increase the chances of content exposure to the target audience. 
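Because cross-posting repeats the same message verbatim across platforms, analysts sometimes surface it by fingerprinting normalised text. The sketch below is a hypothetical defender-side illustration under assumed data, not part of the DISARM framework or its codebase; platform names and messages are invented:

```python
# Hypothetical defender-side sketch: cluster posts by a normalised text hash so
# the same message cross-posted across platforms collapses to one fingerprint.
import hashlib
from collections import defaultdict

def fingerprint(text: str) -> str:
    normalised = " ".join(text.lower().split())  # ignore case/whitespace noise
    return hashlib.sha256(normalised.encode("utf-8")).hexdigest()

posts = [
    {"platform": "platform_a", "text": "The same scripted message"},
    {"platform": "platform_b", "text": "The  same scripted MESSAGE"},
]

clusters = defaultdict(list)
for post in posts:
    clusters[fingerprint(post["text"])].append(post["platform"])

for platforms in clusters.values():
    if len(platforms) > 1:
        print("possible cross-posting across:", platforms)  # ['platform_a', 'platform_b']
```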
+ +**Tactic**: TA17 Maximise Exposure + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0119: Cross-Posting + +**Summary**: Cross-posting refers to posting the same message to multiple internet discussions, social media platforms or accounts, or news groups at one time. An influence operation may post content online in multiple communities and platforms to increase the chances of content exposure to the target audience. + **Tactic**: TA17 Maximise Exposure @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0120.001.md b/generated_pages/techniques/T0120.001.md index d302a81..5d66b27 100644 --- a/generated_pages/techniques/T0120.001.md +++ b/generated_pages/techniques/T0120.001.md @@ -2,6 +2,48 @@ **Summary**: Use Affiliate Marketing Programmes +**Tactic**: TA17 Maximise Exposure **Parent Technique:** T0120 Incentivize Sharing + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0120.001: Use Affiliate Marketing Programmes + +**Summary**: Use Affiliate Marketing Programmes + +**Tactic**: TA17 Maximise Exposure **Parent Technique:** T0120 Incentivize Sharing + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0120.001: Use Affiliate Marketing Programmes + +**Summary**: Use Affiliate Marketing Programmes + **Tactic**: TA17 Maximise Exposure @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0120.002.md b/generated_pages/techniques/T0120.002.md index bfe276e..8ba684a 100644 --- a/generated_pages/techniques/T0120.002.md +++ b/generated_pages/techniques/T0120.002.md @@ -2,6 +2,48 @@ **Summary**: Use Contests and Prizes +**Tactic**: TA17 Maximise Exposure **Parent Technique:** T0120 Incentivize Sharing + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0120.002: Use Contests and Prizes + +**Summary**: Use Contests and Prizes + +**Tactic**: TA17 Maximise Exposure **Parent Technique:** T0120 Incentivize Sharing + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0120.002: Use Contests and Prizes + +**Summary**: Use Contests and Prizes + **Tactic**: TA17 Maximise Exposure @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0120.md b/generated_pages/techniques/T0120.md index 
ac3dfa6..37e5ef1 100644 --- a/generated_pages/techniques/T0120.md +++ b/generated_pages/techniques/T0120.md @@ -2,6 +2,48 @@ **Summary**: Incentivizing content sharing refers to actions that encourage users to share content themselves, reducing the need for the operation itself to post and promote its own content. +**Tactic**: TA17 Maximise Exposure + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0120: Incentivize Sharing + +**Summary**: Incentivizing content sharing refers to actions that encourage users to share content themselves, reducing the need for the operation itself to post and promote its own content. + +**Tactic**: TA17 Maximise Exposure + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0120: Incentivize Sharing + +**Summary**: Incentivizing content sharing refers to actions that encourage users to share content themselves, reducing the need for the operation itself to post and promote its own content. + **Tactic**: TA17 Maximise Exposure @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0121.001.md b/generated_pages/techniques/T0121.001.md index 1a743ae..a2aa45b 100644 --- a/generated_pages/techniques/T0121.001.md +++ b/generated_pages/techniques/T0121.001.md @@ -2,6 +2,52 @@ **Summary**: Bypassing content blocking refers to actions taken to circumvent network security measures that prevent users from accessing certain servers, resources, or other online spheres. An influence operation may bypass content blocking to proliferate its content on restricted areas of the internet. Common strategies for bypassing content blocking include: - Altering IP addresses to avoid IP filtering - Using a Virtual Private Network (VPN) to avoid IP filtering - Using a Content Delivery Network (CDN) to avoid IP filtering - Enabling encryption to bypass packet inspection blocking - Manipulating text to avoid filtering by keywords - Posting content on multiple platforms to avoid platform-specific removals - Using local facilities or modified DNS servers to avoid DNS filtering +**Tactic**: TA17 Maximise Exposure **Parent Technique:** T0121 Manipulate Platform Algorithm + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00110 How COVID-19 conspiracists and extremists use crowdfunding platforms to fund their activities](../../generated_pages/incidents/I00110.md) | The EU Disinfo Lab produced a report into disinformation published on crowdfunding platforms:

More worrisome is the direct monetisation of disinformation happening on crowdfunding platforms: on Kickstarter, we found a user openly raising money for a documentary project suggesting that COVID-19 is a conspiracy.

A Kickstarter user attempted to use the platform to fund production of a documentary (T0017: Conduct Fundraising, T0087: Develop Video-Based Content, T0146: Account Asset, T0148.006: Crowdfunding Platform).

On Patreon, we found several instances of direct monetisation of COVID-19 disinformation, including posts promoting a device allegedly protecting against COVID-19 and 5G, as well as posts related to the “Plandemic” conspiracy video, which gained attention on YouTube before being removed by the platform.

We also found an account called “Stranger than fiction” entirely dedicated to disinformation, which openly states that their content was “Banned by screwtube and fakebook, our videos have been viewed over a billion times.”


The “Stranger than fiction” user presented itself as an alternative news source which had been banned from other platforms (T0146: Account Asset, T0097.202: News Outlet Persona, T0121.001: Bypass Content Blocking, T0152.012: Subscription Service Platform).

On the US-based crowdfunding platform IndieGogo, EU DisinfoLab found a successful crowdfunding campaign of €133.903 for a book called Revolution Q. This book, now also available on Amazon, claims to be “Written for both newcomers and long-time QAnon fans alike, this book is a treasure-trove of information designed to help everyone weather The Storm.”

An IndieGogo account was used to gather funds to produce a book on QAnon (T0017: Conduct Fundraising, T0085.005: Develop Book, T0146: Account Asset, T0148.006: Crowdfunding Platform), with the book later sold on Amazon marketplace (T0148.007: eCommerce Platform). | +| [I00112 Patreon allows disinformation and conspiracies to be monetised in Spain](../../generated_pages/incidents/I00112.md) | In this report EU DisinfoLab identified 17 Spanish accounts that monetise and/or spread QAnon content or other conspiracy theories on Patreon. Content produced by these accounts goes against Patreon’s stated policy of removing creators that “advance disinformation promoting the QAnon conspiracy theory”. EU DisinfoLab found:

In most cases, the creators monetise the content directly on Patreon (posts are only accessible for people sponsoring the creators) but there are also cases of indirect monetization (monetization through links leading to other platforms), an aspect that was flagged and analysed by Eu DisinfoLab in the mentioned previous report.

Some creators display links that redirects users to other platforms such as YouTube or LBRY where they can monetise their content. Some even offer almost all of their videos for free by redirecting to their YouTube channel.

Another modus operandi is for the creators to advertise on Patreon that they are looking for financing through PayPal or provide the author’s email to explore other financing alternatives.

[...]

Sometimes the content offered for a fee on Patreon is freely accessible on other platforms. Creators openly explain that they seek voluntary donation on Patreon, but that their creations will be public on YouTube. This means that the model of these platforms does not always involve a direct monetisation of the content. Creators who have built a strong reputation previously on other platforms can use Patreon as a platform to get some sponsorship which is not related to the content and give them more freedom to create.

Some users explicitly claim to use Patreon as a secondary channel to evade “censorship” by other platforms such as YouTube. Patreon seems to be perceived as a safe haven for disinformation and fringe content that has been suppressed for violating the policies of other platforms. For example, Jorge Guerra points out how Patreon acts as a back-up channel to go to in case of censorship by YouTube. Meanwhile, Alfa Mind openly claims to use Patreon to publish content that is not allowed on YouTube. “Exclusive access to videos that are prohibited on YouTube. These videos are only accessible on Patreon, as their content is very explicit and shocking”, are offered to the patrons who pay €3 per month.


In spite of Patreon’s stated policy, actors use accounts on the platform to generate revenue or donations for their content, and to provide a space to host content which was removed from other platforms (T0146: Account Asset, T0152.012: Subscription Service Platform, T0121.001: Bypass Content Blocking).

Some actors were observed accepting donations via PayPal (T0146: Account Asset, T0148.003: Payment Processing Platform). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0121.001: Bypass Content Blocking + +**Summary**: Bypassing content blocking refers to actions taken to circumvent network security measures that prevent users from accessing certain servers, resources, or other online spheres. An influence operation may bypass content blocking to proliferate its content on restricted areas of the internet. Common strategies for bypassing content blocking include: - Altering IP addresses to avoid IP filtering - Using a Virtual Private Network (VPN) to avoid IP filtering - Using a Content Delivery Network (CDN) to avoid IP filtering - Enabling encryption to bypass packet inspection blocking - Manipulating text to avoid filtering by keywords - Posting content on multiple platforms to avoid platform-specific removals - Using local facilities or modified DNS servers to avoid DNS filtering + +**Tactic**: TA17 Maximise Exposure **Parent Technique:** T0121 Manipulate Platform Algorithm + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00110 How COVID-19 conspiracists and extremists use crowdfunding platforms to fund their activities](../../generated_pages/incidents/I00110.md) | The EU Disinfo Lab produced a report into disinformation published on crowdfunding platforms:

More worrisome is the direct monetisation of disinformation happening on crowdfunding platforms: on Kickstarter, we found a user openly raising money for a documentary project suggesting that COVID-19 is a conspiracy.

A Kickstarter user attempted to use the platform to fund production of a documentary (T0017: Conduct Fundraising, T0087: Develop Video-Based Content, T0146: Account Asset, T0148.006: Crowdfunding Platform).

On Patreon, we found several instances of direct monetisation of COVID-19 disinformation, including posts promoting a device allegedly protecting against COVID-19 and 5G, as well as posts related to the “Plandemic” conspiracy video, which gained attention on YouTube before being removed by the platform.

We also found an account called “Stranger than fiction” entirely dedicated to disinformation, which openly states that their content was “Banned by screwtube and fakebook, our videos have been viewed over a billion times.”


The “Stranger than fiction” user presented itself as an alternative news source which had been banned from other platforms (T0146: Account Asset, T0097.202: News Outlet Persona, T0121.001: Bypass Content Blocking, T0152.012: Subscription Service Platform).

On the US-based crowdfunding platform IndieGogo, EU DisinfoLab found a successful crowdfunding campaign of €133.903 for a book called Revolution Q. This book, now also available on Amazon, claims to be “Written for both newcomers and long-time QAnon fans alike, this book is a treasure-trove of information designed to help everyone weather The Storm.”

An IndieGogo account was used to gather funds to produce a book on QAnon (T0017: Conduct Fundraising, T0085.005: Develop Book, T0146: Account Asset, T0148.006: Crowdfunding Platform), with the book later sold on Amazon marketplace (T0148.007: eCommerce Platform). | +| [I00112 Patreon allows disinformation and conspiracies to be monetised in Spain](../../generated_pages/incidents/I00112.md) | In this report EU DisinfoLab identified 17 Spanish accounts that monetise and/or spread QAnon content or other conspiracy theories on Patreon. Content produced by these accounts goes against Patreon’s stated policy of removing creators that “advance disinformation promoting the QAnon conspiracy theory”. EU DisinfoLab found:

In most cases, the creators monetise the content directly on Patreon (posts are only accessible for people sponsoring the creators) but there are also cases of indirect monetization (monetization through links leading to other platforms), an aspect that was flagged and analysed by Eu DisinfoLab in the mentioned previous report.

Some creators display links that redirects users to other platforms such as YouTube or LBRY where they can monetise their content. Some even offer almost all of their videos for free by redirecting to their YouTube channel.

Another modus operandi is for the creators to advertise on Patreon that they are looking for financing through PayPal or provide the author’s email to explore other financing alternatives.

[...]

Sometimes the content offered for a fee on Patreon is freely accessible on other platforms. Creators openly explain that they seek voluntary donation on Patreon, but that their creations will be public on YouTube. This means that the model of these platforms does not always involve a direct monetisation of the content. Creators who have built a strong reputation previously on other platforms can use Patreon as a platform to get some sponsorship which is not related to the content and give them more freedom to create.

Some users explicitly claim to use Patreon as a secondary channel to evade “censorship” by other platforms such as YouTube. Patreon seems to be perceived as a safe haven for disinformation and fringe content that has been suppressed for violating the policies of other platforms. For example, Jorge Guerra points out how Patreon acts as a back-up channel to go to in case of censorship by YouTube. Meanwhile, Alfa Mind openly claims to use Patreon to publish content that is not allowed on YouTube. “Exclusive access to videos that are prohibited on YouTube. These videos are only accessible on Patreon, as their content is very explicit and shocking”, are offered to the patrons who pay €3 per month.


In spite of Patreon’s stated policy, actors use accounts on the platform to generate revenue or donations for their content, and to provide a space to host content which was removed from other platforms (T0146: Account Asset, T0152.012: Subscription Service Platform, T0121.001: Bypass Content Blocking).

Some actors were observed accepting donations via PayPal (T0146: Account Asset, T0148.003: Payment Processing Platform). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0121.001: Bypass Content Blocking + +**Summary**: Bypassing content blocking refers to actions taken to circumvent network security measures that prevent users from accessing certain servers, resources, or other online spheres. An influence operation may bypass content blocking to proliferate its content on restricted areas of the internet. Common strategies for bypassing content blocking include: - Altering IP addresses to avoid IP filtering - Using a Virtual Private Network (VPN) to avoid IP filtering - Using a Content Delivery Network (CDN) to avoid IP filtering - Enabling encryption to bypass packet inspection blocking - Manipulating text to avoid filtering by keywords - Posting content on multiple platforms to avoid platform-specific removals - Using local facilities or modified DNS servers to avoid DNS filtering + **Tactic**: TA17 Maximise Exposure @@ -21,4 +67,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0121.md b/generated_pages/techniques/T0121.md index f27ae1f..74d341f 100644 --- a/generated_pages/techniques/T0121.md +++ b/generated_pages/techniques/T0121.md @@ -2,6 +2,50 @@ **Summary**: Manipulating a platform algorithm refers to conducting activity on a platform in a way that intentionally targets its underlying algorithm. After analysing a platform’s algorithm (see: Select Platforms), an influence operation may use a platform in a way that increases its content exposure, avoids content removal, or otherwise benefits the operation’s strategy. For example, an influence operation may use bots to amplify its posts so that the platform’s algorithm recognises engagement with operation content and further promotes the content on user timelines. +**Tactic**: TA17 Maximise Exposure + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00113 Inside the Shadowy World of Disinformation for Hire in Kenya](../../generated_pages/incidents/I00113.md) | Researchers at Mozilla examined influence operations targeting Kenyan citizens on Twitter in 2021, providing “a grim window into the booming and shadowy industry of Twitter influencers for political hire here in Kenya”, and giving insight into how these operations were run:

In our interviews with one of the influencers, they informed us of the agile tactics they use to organize and avoid detection. For example, when it’s time to carry out the campaign the influencers would be added to a Whatsapp group. Here, they received direction about what to post, the hashtags to use, which tweets to engage with and who to target. Synchronizing the tweets was also incredibly important for them. It’s what enables them to achieve their goal of trending on Twitter and gain amplification.

[...]

They revealed to us that those participating in the exercise are paid roughly between $10 and $15 to participate in three campaigns per day. Each campaign execution involves tweeting about the hashtags of the day until it appears on the trending section of Twitter. Additionally, some individuals have managed to reach retainer level and get paid about $250 per month. Their job is to make sure the campaigns are executed on a day-by-day basis with different hashtags.


An M-PESA account (T0148.002: Bank Account Asset, T0148.001: Online Banking Platform) was used to pay campaign participants.

Participants were organised in WhatsApp groups (T0129.005: Coordinate on Encrypted/Closed Networks, T0151.007: Chat Broadcast Group, T0151.004: Chat Platform), in which they planned how to get campaign content trending on Twitter (T0121: Manipulate Platform Algorithm, T0151.008: Microblogging Platform). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0121: Manipulate Platform Algorithm + +**Summary**: Manipulating a platform algorithm refers to conducting activity on a platform in a way that intentionally targets its underlying algorithm. After analysing a platform’s algorithm (see: Select Platforms), an influence operation may use a platform in a way that increases its content exposure, avoids content removal, or otherwise benefits the operation’s strategy. For example, an influence operation may use bots to amplify its posts so that the platform’s algorithm recognises engagement with operation content and further promotes the content on user timelines. + +**Tactic**: TA17 Maximise Exposure + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00113 Inside the Shadowy World of Disinformation for Hire in Kenya](../../generated_pages/incidents/I00113.md) | Researchers at Mozilla examined influence operations targeting Kenyan citizens on Twitter in 2021, providing “a grim window into the booming and shadowy industry of Twitter influencers for political hire here in Kenya”, and giving insight into how these operations were run:

In our interviews with one of the influencers, they informed us of the agile tactics they use to organize and avoid detection. For example, when it’s time to carry out the campaign the influencers would be added to a Whatsapp group. Here, they received direction about what to post, the hashtags to use, which tweets to engage with and who to target. Synchronizing the tweets was also incredibly important for them. It’s what enables them to achieve their goal of trending on Twitter and gain amplification.

[...]

They revealed to us that those participating in the exercise are paid roughly between $10 and $15 to participate in three campaigns per day. Each campaign execution involves tweeting about the hashtags of the day until it appears on the trending section of Twitter. Additionally, some individuals have managed to reach retainer level and get paid about $250 per month. Their job is to make sure the campaigns are executed on a day-by-day basis with different hashtags.


An M-PESA account (T0148.002: Bank Account Asset, T0148.001: Online Banking Platform) was used to pay campaign participants.

Participants were organised in WhatsApp groups (T0129.005: Coordinate on Encrypted/Closed Networks, T0151.007: Chat Broadcast Group, T0151.004: Chat Platform), in which they planned how to get campaign content trending on Twitter (T0121: Manipulate Platform Algorithm, T0151.008: Microblogging Platform). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0121: Manipulate Platform Algorithm + +**Summary**: Manipulating a platform algorithm refers to conducting activity on a platform in a way that intentionally targets its underlying algorithm. After analysing a platform’s algorithm (see: Select Platforms), an influence operation may use a platform in a way that increases its content exposure, avoids content removal, or otherwise benefits the operation’s strategy. For example, an influence operation may use bots to amplify its posts so that the platform’s algorithm recognises engagement with operation content and further promotes the content on user timelines. + **Tactic**: TA17 Maximise Exposure @@ -20,4 +64,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0122.md b/generated_pages/techniques/T0122.md index 4b0d69b..2a5870f 100644 --- a/generated_pages/techniques/T0122.md +++ b/generated_pages/techniques/T0122.md @@ -2,6 +2,57 @@ **Summary**: Direct users to alternative platforms refers to encouraging users to move from the platform on which they initially viewed operation content and engage with content on alternate information channels, including separate social media channels and inauthentic websites. An operation may drive users to alternative platforms to diversify its information channels and ensure the target audience knows where to access operation content if the initial platform suspends, flags, or otherwise removes original operation assets and content. +**Tactic**: TA17 Maximise Exposure + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00116 Blue-tick scammers target consumers who complain on X](../../generated_pages/incidents/I00116.md) | Consumers who complain of poor customer service on X are being targeted by scammers after the social media platform formerly known as Twitter changed its account verification process.

Bank customers and airline passengers are among those at risk of phishing scams when they complain to companies via X. Fraudsters, masquerading as customer service agents, respond under fake X handles and trick victims into disclosing their bank details to get a promised refund.

They typically win the trust of victims by displaying the blue checkmark icon, which until this year denoted accounts that had been officially verified by X.

Changes introduced this year allow the icon to be bought by anyone who pays an £11 monthly fee for the site’s subscription service, renamed this month from Twitter Blue to X Premium. Businesses that pay £950 a month receive a gold tick. X’s terms and conditions do not state whether subscriber accounts are pre-vetted.

Andrew Thomas was contacted by a scam account after posting a complaint to the travel platform Booking.com. “I’d been trying since April to get a refund after our holiday flights were cancelled and finally resorted to X,” he said.

“I received a response asking me to follow them, and DM [direct message] them with a contact number. They then called me via WhatsApp asking for my reference number so they could investigate. Later they called back to say that I would be refunded via their payment partner for which I’d need to download an app.”

Thomas became suspicious and checked the X profile. “It looked like the real thing, but I noticed that there was an unexpected hyphen in the Twitter handle and that it had only joined X in July 2023,” he said.


In this example a newly created paid account on X was used to direct users to other platforms (T0146.002: Paid Account Asset, T0146.003: Verified Account Asset, T0146.005: Lookalike Account ID, T0097.205: Business Persona, T0122: Direct Users to Alternative Platforms, T0143.003: Impersonated Persona, T0151.008: Microblogging Platform, T0150.001: Newly Created Asset). | +| [I00123 The Extreme Right on Steam](../../generated_pages/incidents/I00123.md) | ISD conducted an investigation into the usage of social groups on Steam. Steam is an online platform used to buy and sell digital games, and includes the Steam community feature, which “allows users to find friends and join groups and discussion forums, while also offering in-game voice and text chat”. Actors have used Steam’s social capabilities to enable online harm campaigns:

A number of groups were observed encouraging members to join conversations on outside platforms. These include links to Telegram channels connected to white supremacist marches, and media outlets, forums and Discord servers run by neo-Nazis.

[...]

This off-ramping activity demonstrates how rather than sitting in isolation, Steam fits into the wider extreme right wing online ecosystem, with Steam groups acting as hubs for communities and organizations which span multiple platforms. Accordingly, although the platform appears to fill a specific role in the building and strengthening of communities with similar hobbies and interests, it is suggested that analysis seeking to determine the risk of these communities should focus on their activity across platforms


Social Groups on Steam were used to drive new people to other neo-Nazi controlled community assets (T0122: Direct Users to Alternative Platforms, T0152.009: Software Delivery Platform, T0151.002: Online Community Group). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0122: Direct Users to Alternative Platforms + +**Summary**: Direct users to alternative platforms refers to encouraging users to move from the platform on which they initially viewed operation content and engage with content on alternate information channels, including separate social media channels and inauthentic websites. An operation may drive users to alternative platforms to diversify its information channels and ensure the target audience knows where to access operation content if the initial platform suspends, flags, or otherwise removes original operation assets and content. + +**Tactic**: TA17 Maximise Exposure + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00102 Ignore The Poway Synagogue Shooter’s Manifesto: Pay Attention To 8chan’s /pol/ Board](../../generated_pages/incidents/I00102.md) | On April 27, 2019, at around 11:30 a.m. local time, a young man with a semi-automatic rifle walked into the Chabad of Poway Synagogue in Poway, California. He opened fire, killing one worshipper and wounding three others. In the hours since the shooting, a manifesto, believed to be written by the shooter, began circulating online. Evidence has also surfaced that, like the Christchurch Mosque shooter, this killer began his rampage with a post on 8chan’s /pol/ board.

Although both of these attacks may seem different, since they targeted worshippers of different faiths, both shooters were united by the same fascist ideology. They were also both radicalized in the same place: 8chan’s /pol/ board.

This has been corroborated by posts on the board itself, where “anons,” as the posters call themselves, recirculated the shooter’s since-deleted post. In it, the alleged shooter claims to have been “lurking” on the site for a year and a half. He includes a link to a livestream of his rampage — which thankfully does not appear to have worked — and he also includes a pastebin link to his manifesto.

The very first response to his announcement was another anon cheering him on and telling him to “get the high score,” AKA, kill a huge number of people.


Before carrying out a mass shooting, the shooter posted a thread to 8chan’s /pol/ board. The post directed users to a variety of different platforms (T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0122: Direct Users to Alternative Platforms): a Facebook account on which the shooter attempted to livestream the shooting (T0146: Account Asset, T0151.001: Social Media Platform); and a manifesto they had written hosted on pastebin (T0146.006: Open Access Platform, T0152.005: Paste Platform, T0115: Post Content) and uploaded to the file sharing platform Mediafire (T0152.010: File Hosting Platform, T0085.004: Develop Document).

The report looks deeper into 8chan’s /pol/ board:

8chan is a large website, which includes a number of different discussion groups about everything from anime to left-wing politics. /pol/ is one particularly active board on the website, and it is best described as a gathering place for extremely online neo-Nazis.

[...]

I’ve browsed /pol/ on an almost daily basis since the Christchurch shooting. It has not been difficult to find calls for violence. On Monday, March 25 of this year, I ran across evidence of anons translating the Christchurch shooter’s manifesto into other languages in an attempt to inspire more shootings across the globe.

This tactic can work, and today’s shooting is proof. The Poway Synagogue shooter directly cited the Christchurch shooter as his inspiration, saying he decided to carry out his attack roughly two weeks after that shooting. On /pol/, many anons refer to the Christchurch shooter, Brenton Tarrant, as “Saint Tarrant,” complete with medieval-inspired iconography.


Manifestos posted to 8chan are translated and reshared by other platform users (T0101: Create Localised Content, T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0084.004: Appropriate Content).

When I began looking through /pol/ right after the Poway Synagogue shooting, I came across several claims that the shootings had been a “false flag” aimed at making the message board look bad.

When Bellingcat tweeted out a warning about shitposting and the shooter’s manifesto, in the immediate wake of the attack, probable anons even commented on the Tweet in an attempt to deny that a channer had been behind the attack.

This is a recognizable pattern that occurs in the wake of any crimes committed by members of the board. While the initial response to the Christchurch shooter’s massacre thread was riotous glee, in the days after the shooting many anons began to claim the attack had been a false flag. This actually sparked significant division and debate between the members of /pol/. In the below image, a user mocks other anons for being unable to “believe something in your favor is real.” Another anon responds, “As the evidence comes out, its [sic] quite clear that this was a false flag.”

In his manifesto, the Poway Synagogue shooter even weighed in on this debate, accusing other anons who called the Christchurch and Tree of Life Synagogue shootings “false flags” to merely have been scared: “They can’t fathom that there are brave White men alive who have the willpower and courage it takes to say, ‘Fuck my life—I’m willing to sacrifice everything for the benefit of my race.’”


Platform users deny that their platform has been used by mass shooters to publish their manifestos (T0129.006: Deny Involvement). | +| [I00104 Macron Campaign Hit With “Massive and Coordinated” Hacking Attack](../../generated_pages/incidents/I00104.md) | A massive trove of documents purporting to contain thousands of emails and other files from the [2017 presidential] campaign of Emmanuel Macron—the French centrist candidate squaring off against right-wing nationalist Marine Le Pen—was posted on the internet Friday afternoon. The Macron campaign says that at least some of the documents are fake. The document dump came just over a day before voting is set to begin in the final round of the election and mere hours before candidates are legally required to stop campaigning.

At about 2:35 p.m. ET, a post appeared on the 4chan online message board announcing the leak. The documents appear to include emails, internal memos, and screenshots of purported banking records.

“In this pastebin are links to torrents of emails between Macron, his team and other officials, politicians as well as original documents and photos,” the anonymous 4chan poster wrote. “This was passed on to me today so now I am giving it to you, the people. The leak is massvie and released in the hopes that the human search engine here will be able to start sifting through the contents and figure out exactly what we have here.”

The Macron campaign issued a statement Friday night saying it was the victim of a “massive and coordinated” hacking attack. That campaign said the leak included some fake documents that were intended “to sow doubt and misinformation.”


Actors posted to 4chan a link (T0151.012: Image Board Platform, T0146.006: Open Access Platform, T0115: Post Content, T0122: Direct Users to Alternative Platforms) to text content hosted on pastebin (T0152.005: Paste Platform, T0146.006: Open Access Platform, T0115: Post Content), which contained links to download stolen and fabricated documents. | +| [I00106 Facebook Is Being Flooded With Gross AI-Generated Images of Hurricane Helene Devastation](../../generated_pages/incidents/I00106.md) | As families desperately seek to find missing loved ones and communities grapple with immeasurable losses of both life and property in the wake of [2024’s] Hurricane Helene, AI slop scammers appear to be capitalizing on the moment for personal gain.

A Facebook account called "Coastal Views" usually shares calmer AI imagery of nature-filled beachside scenes. The account's banner image showcases a signpost reading "OBX Live," OBX being shorthand for North Carolina's Outer Banks islands.

But starting this weekend, the account shifted its approach dramatically, as first flagged by a social media user on X.

Instead of posting "photos" of leaping dolphins and sandy beaches, the account suddenly started publishing images of flooded mountain neighborhoods, submerged houses, and dogs sitting on top of roofs.

But instead of spreading vital information to those affected by the natural disaster, or at the very least sharing real photos of the destruction, the account is seemingly trying to use AI to cash in on all the attention the hurricane has been getting.

The account links to an Etsy page for a business called "OuterBanks2023," where somebody who goes by "Alexandr" sells AI-generated prints of horses touching snouts with sea turtles, Santa running down the shoreline with a reindeer, and sunsets over ocean waves.


A Facebook page which presented itself as being associated with North Carolina, and which had previously posted AI-generated images, changed to posting AI-generated images of hurricane damage after Hurricane Helene hit North Carolina (T0151.003: Online Community Page, T0151.001: Social Media Platform, T0115: Post Content, T0086.002: Develop AI-Generated Images (Deepfakes), T0068: Respond to Breaking News Event or Active Crisis).

The account included links (T0122: Direct Users to Alternative Platforms) to an account on Etsy, which sold prints of AI-generated images (T0146: Account Asset, T0148.007: eCommerce Platform). | +| [I00108 How you thought you support the animals and you ended up funding white supremacists](../../generated_pages/incidents/I00108.md) | This article examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology:

Suavelos uses Facebook and other platforms to amplify its message. In order to bypass the platforms’ community standards and keep their public pages active, Facebook pages such as “I support the police” are a good vehicle to spread a specific agenda without claiming to be racist. In looking back at this Facebook page, we followed Facebook’s algorithm for related pages and found suggested Facebook pages

[...]

This amplification strategy on Facebook is successful, as according to SimilarWeb figures, it attracts around 111,000 visits every month on the Suavelos.eu website.

[...]

Revenue through online advertisements can be achieved by different platforms through targeted advertisements, like Google Adsense or Doubleclick, or related and similar sponsored content, such as Taboola. Accordingly, Suavelos.eu uses both of these websites to display advertisements and consequently receives funding from such advertisements.

Once visitors are on the website supporting its advertisement revenue, Suavelos’ goal is to then turn these visitors into regular members of Suavelos network through donations or fees, or have them continue to support Suavelos.


Suavelos created a variety of pages on Facebook which presented themselves as centring on prosocial causes. Facebook’s algorithm helped direct users to these pages (T0092: Build Network, T0151.001: Social Media Platform, T0153.006: Content Recommendation Algorithm, T0151.003: Online Community Page, T0143.208: Social Cause Persona).

Suavelos used these pages to generate traffic for their WordPress site (T0122: Direct Users to Alternative Platforms, T0152.003: Website Hosting Platform, T0152.004: Website Asset), which used accounts on a variety of online advertising platforms to host adverts (T0146: Account Asset, T0153.005: Online Advertising Platform). | +| [I00116 Blue-tick scammers target consumers who complain on X](../../generated_pages/incidents/I00116.md) | Consumers who complain of poor customer service on X are being targeted by scammers after the social media platform formerly known as Twitter changed its account verification process.

Bank customers and airline passengers are among those at risk of phishing scams when they complain to companies via X. Fraudsters, masquerading as customer service agents, respond under fake X handles and trick victims into disclosing their bank details to get a promised refund.

They typically win the trust of victims by displaying the blue checkmark icon, which until this year denoted accounts that had been officially verified by X.

Changes introduced this year allow the icon to be bought by anyone who pays an £11 monthly fee for the site’s subscription service, renamed this month from Twitter Blue to X Premium. Businesses that pay £950 a month receive a gold tick. X’s terms and conditions do not state whether subscriber accounts are pre-vetted.

Andrew Thomas was contacted by a scam account after posting a complaint to the travel platform Booking.com. “I’d been trying since April to get a refund after our holiday flights were cancelled and finally resorted to X,” he said.

“I received a response asking me to follow them, and DM [direct message] them with a contact number. They then called me via WhatsApp asking for my reference number so they could investigate. Later they called back to say that I would be refunded via their payment partner for which I’d need to download an app.”

Thomas became suspicious and checked the X profile. “It looked like the real thing, but I noticed that there was an unexpected hyphen in the Twitter handle and that it had only joined X in July 2023,” he said.


In this example a newly created paid account on X was used to direct users to other platforms (T0146.002: Paid Account Asset, T0146.003: Verified Account Asset, T0146.005: Lookalike Account ID, T0097.205: Business Persona, T0122: Direct Users to Alternative Platforms, T0143.003: Impersonated Persona, T0151.008: Microblogging Platform, T0150.001: Newly Created Asset). | +| [I00123 The Extreme Right on Steam](../../generated_pages/incidents/I00123.md) | ISD conducted an investigation into the usage of social groups on Steam. Steam is an online platform used to buy and sell digital games, and includes the Steam community feature, which “allows users to find friends and join groups and discussion forums, while also offering in-game voice and text chat”. Actors have used Steam’s social capabilities to enable online harm campaigns:

A number of groups were observed encouraging members to join conversations on outside platforms. These include links to Telegram channels connected to white supremacist marches, and media outlets, forums and Discord servers run by neo-Nazis.

[...]

This off-ramping activity demonstrates how rather than sitting in isolation, Steam fits into the wider extreme right wing online ecosystem, with Steam groups acting as hubs for communities and organizations which span multiple platforms. Accordingly, although the platform appears to fill a specific role in the building and strengthening of communities with similar hobbies and interests, it is suggested that analysis seeking to determine the risk of these communities should focus on their activity across platforms


Social Groups on Steam were used to drive new people to other neo-Nazi controlled community assets (T0122: Direct Users to Alternative Platforms, T0152.009: Software Delivery Platform, T0151.002: Online Community Group). | +| [I00128 #TrollTracker: Outward Influence Operation From Iran](../../generated_pages/incidents/I00128.md) | [Meta removed a network of assets for coordinated inauthentic behaviour. One page] in the network, @StopMEK, was promoting views against the People’s Mujahedin of Iran (MEK), the largest and most active political opposition group against the Islamic Republic of Iran Leadership.

The content on the page drew narratives showing parallels between the Islamic State of Iraq and Syria (ISIS) and the MEK.

Apart from images and memes, the @StopMEK page shared a link to an archived report on how the United States was monitoring the MEK’s movement in Iran in the mid-1990’s. The file was embedded as a QR code on one of the page’s images.


In this example a Facebook page presented itself as focusing on a political cause (T0097.208: Social Cause Persona, T0151.001: Social Media Platform, T0151.002: Online Community Group). Within the page it embedded a QR code (T0122: Direct Users to Alternative Platforms, T0153.004: QR Code Asset), which took users to a document hosted on another website (T0152.004: Website Asset). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0122: Direct Users to Alternative Platforms + +**Summary**: Direct users to alternative platforms refers to encouraging users to move from the platform on which they initially viewed operation content and engage with content on alternate information channels, including separate social media channels and inauthentic websites. An operation may drive users to alternative platforms to diversify its information channels and ensure the target audience knows where to access operation content if the initial platform suspends, flags, or otherwise removes original operation assets and content. + **Tactic**: TA17 Maximise Exposure @@ -26,4 +77,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0123.001.md b/generated_pages/techniques/T0123.001.md index 5c6e2da..a1d9a8b 100644 --- a/generated_pages/techniques/T0123.001.md +++ b/generated_pages/techniques/T0123.001.md @@ -2,6 +2,48 @@ **Summary**: Deleting opposing content refers to the removal of content that conflicts with operational narratives from selected platforms. An influence operation may delete opposing content to censor contradictory information from the target audience, allowing operation narratives to take priority in the information space. +**Tactic**: TA18 Drive Online Harms **Parent Technique:** T0123 Control Information Environment through Offensive Cyberspace Operations + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0123.001: Delete Opposing Content + +**Summary**: Deleting opposing content refers to the removal of content that conflicts with operational narratives from selected platforms. An influence operation may delete opposing content to censor contradictory information from the target audience, allowing operation narratives to take priority in the information space. + +**Tactic**: TA18 Drive Online Harms **Parent Technique:** T0123 Control Information Environment through Offensive Cyberspace Operations + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0123.001: Delete Opposing Content + +**Summary**: Deleting opposing content refers to the removal of content that conflicts with operational narratives from selected platforms. An influence operation may delete opposing content to censor contradictory information from the target audience, allowing operation narratives to take priority in the information space. 
+ **Tactic**: TA18 Drive Online Harms @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0123.002.md b/generated_pages/techniques/T0123.002.md index 6d11497..700e325 100644 --- a/generated_pages/techniques/T0123.002.md +++ b/generated_pages/techniques/T0123.002.md @@ -2,6 +2,48 @@ **Summary**: Content blocking refers to actions taken to restrict internet access or render certain areas of the internet inaccessible. An influence operation may restrict content based on both network and content attributes. +**Tactic**: TA18 Drive Online Harms **Parent Technique:** T0123 Control Information Environment through Offensive Cyberspace Operations + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0123.002: Block Content + +**Summary**: Content blocking refers to actions taken to restrict internet access or render certain areas of the internet inaccessible. An influence operation may restrict content based on both network and content attributes. + +**Tactic**: TA18 Drive Online Harms **Parent Technique:** T0123 Control Information Environment through Offensive Cyberspace Operations + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0123.002: Block Content + +**Summary**: Content blocking refers to actions taken to restrict internet access or render certain areas of the internet inaccessible. An influence operation may restrict content based on both network and content attributes. + **Tactic**: TA18 Drive Online Harms @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0123.003.md b/generated_pages/techniques/T0123.003.md index 1821b71..53661d3 100644 --- a/generated_pages/techniques/T0123.003.md +++ b/generated_pages/techniques/T0123.003.md @@ -2,6 +2,48 @@ **Summary**: Destroying information generation capabilities refers to actions taken to limit, degrade, or otherwise incapacitate an actor’s ability to generate conflicting information. An influence operation may destroy an actor’s information generation capabilities by physically dismantling the information infrastructure, disconnecting resources needed for information generation, or redirecting information generation personnel. An operation may destroy an adversary’s information generation capabilities to limit conflicting content exposure to the target audience and crowd the information space with its own narratives. 
+**Tactic**: TA18 Drive Online Harms **Parent Technique:** T0123 Control Information Environment through Offensive Cyberspace Operations + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0123.003: Destroy Information Generation Capabilities + +**Summary**: Destroying information generation capabilities refers to actions taken to limit, degrade, or otherwise incapacitate an actor’s ability to generate conflicting information. An influence operation may destroy an actor’s information generation capabilities by physically dismantling the information infrastructure, disconnecting resources needed for information generation, or redirecting information generation personnel. An operation may destroy an adversary’s information generation capabilities to limit conflicting content exposure to the target audience and crowd the information space with its own narratives. + +**Tactic**: TA18 Drive Online Harms **Parent Technique:** T0123 Control Information Environment through Offensive Cyberspace Operations + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0123.003: Destroy Information Generation Capabilities + +**Summary**: Destroying information generation capabilities refers to actions taken to limit, degrade, or otherwise incapacitate an actor’s ability to generate conflicting information. An influence operation may destroy an actor’s information generation capabilities by physically dismantling the information infrastructure, disconnecting resources needed for information generation, or redirecting information generation personnel. An operation may destroy an adversary’s information generation capabilities to limit conflicting content exposure to the target audience and crowd the information space with its own narratives. + **Tactic**: TA18 Drive Online Harms @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0123.004.md b/generated_pages/techniques/T0123.004.md index 89a2413..ef9da78 100644 --- a/generated_pages/techniques/T0123.004.md +++ b/generated_pages/techniques/T0123.004.md @@ -2,6 +2,48 @@ **Summary**: A server redirect, also known as a URL redirect, occurs when a server automatically forwards a user from one URL to another using server-side or client-side scripting languages. An influence operation may conduct a server redirect to divert target audience members from one website to another without their knowledge. The redirected website may pose as a legitimate source, host malware, or otherwise aid operation objectives. 
+**Tactic**: TA18 Drive Online Harms **Parent Technique:** T0123 Control Information Environment through Offensive Cyberspace Operations + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0123.004: Conduct Server Redirect + +**Summary**: A server redirect, also known as a URL redirect, occurs when a server automatically forwards a user from one URL to another using server-side or client-side scripting languages. An influence operation may conduct a server redirect to divert target audience members from one website to another without their knowledge. The redirected website may pose as a legitimate source, host malware, or otherwise aid operation objectives. + +**Tactic**: TA18 Drive Online Harms **Parent Technique:** T0123 Control Information Environment through Offensive Cyberspace Operations + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0123.004: Conduct Server Redirect + +**Summary**: A server redirect, also known as a URL redirect, occurs when a server automatically forwards a user from one URL to another using server-side or client-side scripting languages. An influence operation may conduct a server redirect to divert target audience members from one website to another without their knowledge. The redirected website may pose as a legitimate source, host malware, or otherwise aid operation objectives. + **Tactic**: TA18 Drive Online Harms @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0123.md b/generated_pages/techniques/T0123.md index 14729d4..9b9e8ab 100644 --- a/generated_pages/techniques/T0123.md +++ b/generated_pages/techniques/T0123.md @@ -2,6 +2,48 @@ **Summary**: Controlling the information environment through offensive cyberspace operations uses cyber tools and techniques to alter the trajectory of content in the information space to either prioritise operation messaging or block opposition messaging. +**Tactic**: TA18 Drive Online Harms + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0123: Control Information Environment through Offensive Cyberspace Operations + +**Summary**: Controlling the information environment through offensive cyberspace operations uses cyber tools and techniques to alter the trajectory of content in the information space to either prioritise operation messaging or block opposition messaging. 
+ +**Tactic**: TA18 Drive Online Harms + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0123: Control Information Environment through Offensive Cyberspace Operations + +**Summary**: Controlling the information environment through offensive cyberspace operations uses cyber tools and techniques to alter the trajectory of content in the information space to either prioritise operation messaging or block opposition messaging. + **Tactic**: TA18 Drive Online Harms @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0124.001.md b/generated_pages/techniques/T0124.001.md index 6aaac30..7a7804c 100644 --- a/generated_pages/techniques/T0124.001.md +++ b/generated_pages/techniques/T0124.001.md @@ -2,6 +2,50 @@ **Summary**: Reporting opposing content refers to notifying and providing an instance of a violation of a platform’s guidelines and policies for conduct on the platform. In addition to simply reporting the content, an operation may leverage copyright regulations to trick social media and web platforms into removing opposing content by manipulating the content to appear in violation of copyright laws. Reporting opposing content facilitates the suppression of contradictory information and allows operation narratives to take priority in the information space. +**Tactic**: TA18 Drive Online Harms **Parent Technique:** T0124 Suppress Opposition + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00082 Meta’s November 2021 Adversarial Threat Report ](../../generated_pages/incidents/I00082.md) | “[Meta] removed a network of accounts in Vietnam for violating our Inauthentic Behavior policy against mass reporting. They coordinated the targeting of activists and other people who publicly criticized the Vietnamese government and used false reports of various violations in an attempt to have these users removed from our platform. The people behind this activity relied primarily on authentic and duplicate accounts to submit hundreds — in some cases, thousands — of complaints against their targets through our abuse reporting flows.

“Many operators also maintained fake accounts — some of which were detected and disabled by our automated systems — to pose as their targets so they could then report the legitimate accounts as fake. They would frequently change the gender and name of their fake accounts to resemble the target individual. Among the most common claims in this misleading reporting activity were complaints of impersonation, and to a much lesser extent inauthenticity. The network also advertised abusive services in their bios and constantly evolved their tactics in an attempt to evade detection.”


In this example actors repurposed their accounts to impersonate targeted activists (T0097.103: Activist Persona, T0143.003: Impersonated Persona) in order to falsely report the activists’ legitimate accounts as impersonations (T0124.001: Report Non-Violative Opposing Content). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0124.001: Report Non-Violative Opposing Content + +**Summary**: Reporting opposing content refers to notifying and providing an instance of a violation of a platform’s guidelines and policies for conduct on the platform. In addition to simply reporting the content, an operation may leverage copyright regulations to trick social media and web platforms into removing opposing content by manipulating the content to appear in violation of copyright laws. Reporting opposing content facilitates the suppression of contradictory information and allows operation narratives to take priority in the information space. + +**Tactic**: TA18 Drive Online Harms **Parent Technique:** T0124 Suppress Opposition + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00082 Meta’s November 2021 Adversarial Threat Report ](../../generated_pages/incidents/I00082.md) | “[Meta] removed a network of accounts in Vietnam for violating our Inauthentic Behavior policy against mass reporting. They coordinated the targeting of activists and other people who publicly criticized the Vietnamese government and used false reports of various violations in an attempt to have these users removed from our platform. The people behind this activity relied primarily on authentic and duplicate accounts to submit hundreds — in some cases, thousands — of complaints against their targets through our abuse reporting flows.

“Many operators also maintained fake accounts — some of which were detected and disabled by our automated systems — to pose as their targets so they could then report the legitimate accounts as fake. They would frequently change the gender and name of their fake accounts to resemble the target individual. Among the most common claims in this misleading reporting activity were complaints of impersonation, and to a much lesser extent inauthenticity. The network also advertised abusive services in their bios and constantly evolved their tactics in an attempt to evade detection.”


In this example actors repurposed their accounts to impersonate targeted activists (T0097.103: Activist Persona, T0143.003: Impersonated Persona) in order to falsely report the activists’ legitimate accounts as impersonations (T0124.001: Report Non-Violative Opposing Content). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0124.001: Report Non-Violative Opposing Content + +**Summary**: Reporting opposing content refers to notifying and providing an instance of a violation of a platform’s guidelines and policies for conduct on the platform. In addition to simply reporting the content, an operation may leverage copyright regulations to trick social media and web platforms into removing opposing content by manipulating the content to appear in violation of copyright laws. Reporting opposing content facilitates the suppression of contradictory information and allows operation narratives to take priority in the information space. + **Tactic**: TA18 Drive Online Harms @@ -20,4 +64,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0124.002.md b/generated_pages/techniques/T0124.002.md index b639d4f..e450735 100644 --- a/generated_pages/techniques/T0124.002.md +++ b/generated_pages/techniques/T0124.002.md @@ -2,6 +2,48 @@ **Summary**: Goad people into actions that violate terms of service or will lead to having their content or accounts taken down. +**Tactic**: TA18 Drive Online Harms **Parent Technique:** T0124 Suppress Opposition + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0124.002: Goad People into Harmful Action (Stop Hitting Yourself) + +**Summary**: Goad people into actions that violate terms of service or will lead to having their content or accounts taken down. + +**Tactic**: TA18 Drive Online Harms **Parent Technique:** T0124 Suppress Opposition + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0124.002: Goad People into Harmful Action (Stop Hitting Yourself) + +**Summary**: Goad people into actions that violate terms of service or will lead to having their content or accounts taken down. 
+ **Tactic**: TA18 Drive Online Harms @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0124.003.md b/generated_pages/techniques/T0124.003.md index caf5065..182ea3e 100644 --- a/generated_pages/techniques/T0124.003.md +++ b/generated_pages/techniques/T0124.003.md @@ -2,6 +2,48 @@ **Summary**: Exploit Platform TOS/Content Moderation +**Tactic**: TA18 Drive Online Harms **Parent Technique:** T0124 Suppress Opposition + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0124.003: Exploit Platform TOS/Content Moderation + +**Summary**: Exploit Platform TOS/Content Moderation + +**Tactic**: TA18 Drive Online Harms **Parent Technique:** T0124 Suppress Opposition + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0124.003: Exploit Platform TOS/Content Moderation + +**Summary**: Exploit Platform TOS/Content Moderation + **Tactic**: TA18 Drive Online Harms @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0124.md b/generated_pages/techniques/T0124.md index 8755083..db6943f 100644 --- a/generated_pages/techniques/T0124.md +++ b/generated_pages/techniques/T0124.md @@ -2,6 +2,49 @@ **Summary**: Operators can suppress the opposition by exploiting platform content moderation tools and processes like reporting non-violative content to platforms for takedown and goading opposition actors into taking actions that result in platform action or target audience disapproval. +**Tactic**: TA18 Drive Online Harms + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0124: Suppress Opposition + +**Summary**: Operators can suppress the opposition by exploiting platform content moderation tools and processes like reporting non-violative content to platforms for takedown and goading opposition actors into taking actions that result in platform action or target audience disapproval. + +**Tactic**: TA18 Drive Online Harms + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00101 Pro-Putin Disinformation Warriors Take War of Aggression to Reddit](../../generated_pages/incidents/I00101.md) | This report looks at changes content posted to communities on Reddit (called Subreddits) after teams of voluntary moderators are replaced with what appear to be pro-Russian voices:

The r/antiwar subreddit appears to be a very recent takeover target. With 12,900 members it is not the largest community on Reddit, but this does place it squarely within the top 5% of all communities in terms of membership.

Three months ago a new moderator team was instated by subreddit head u/democracy101. Any posts documenting Russian aggression in Ukraine are now swiftly removed, while the board has been flooded with posts about how Ukraine is losing, or how American “neocons wrecked” the country.

The pinned post from moderator u/n0ahbody proclaims: “People who call for an end to Russian aggressions but not the Western aggressions Russia is reacting to don’t really want peace.” This user takes the view that any negative opinion about Russia is “shaped by what the fanatically Russophobic MSM wants you to think,” and that “the United States is not threatened by its neighbors. Russia is.”

When u/n0ahbody took over the sub, the user posted a triumphant and vitriolic diatribe in another pro-Russia subreddit with some 33,500 members, r/EndlessWar. “We are making progress. We are purging the sub of all NAFO and NAFO-adjacent elements. Hundreds of them have been banned over the last 24 hours for various rule infractions, for being NAFO or NAFO-adjacent,” the user said, referencing the grassroots, pro-Ukrainian North Atlantic Fella Organization (NAFO) meme movement.

Several former users have reported they have indeed been banned from r/antiwar since the change in moderators. “If this subreddit cannot be explicitly against the invasion of Ukraine it will never truly be anti-war,” wrote one user Halcyon_Rein, in the antiwar subreddit on September 6. They then edited the post to say, “Edit: btw, I got f**king banned for this 💀💀💀”


A community hosted on Reddit was taken over by new moderators (T0151.011: Community Sub-Forum, T0150.005: Compromised Asset). These moderators removed content posted to the community which favoured Ukraine over Russia (T0146.004: Administrator Account Asset, T0151.011: Community Sub-Forum, T0124: Suppress Opposition). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0124: Suppress Opposition + +**Summary**: Operators can suppress the opposition by exploiting platform content moderation tools and processes like reporting non-violative content to platforms for takedown and goading opposition actors into taking actions that result in platform action or target audience disapproval. + **Tactic**: TA18 Drive Online Harms @@ -20,4 +63,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0125.md b/generated_pages/techniques/T0125.md index 6582a65..792083a 100644 --- a/generated_pages/techniques/T0125.md +++ b/generated_pages/techniques/T0125.md @@ -2,6 +2,48 @@ **Summary**: Platform filtering refers to the decontextualization of information as claims cross platforms (from Joan Donovan https://www.hks.harvard.edu/publications/disinformation-design-use-evidence-collages-and-platform-filtering-media-manipulation) +**Tactic**: TA18 Drive Online Harms + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0125: Platform Filtering + +**Summary**: Platform filtering refers to the decontextualization of information as claims cross platforms (from Joan Donovan https://www.hks.harvard.edu/publications/disinformation-design-use-evidence-collages-and-platform-filtering-media-manipulation) + +**Tactic**: TA18 Drive Online Harms + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0125: Platform Filtering + +**Summary**: Platform filtering refers to the decontextualization of information as claims cross platforms (from Joan Donovan https://www.hks.harvard.edu/publications/disinformation-design-use-evidence-collages-and-platform-filtering-media-manipulation) + **Tactic**: TA18 Drive Online Harms @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0126.001.md b/generated_pages/techniques/T0126.001.md index 7dc7885..2e705ab 100644 --- a/generated_pages/techniques/T0126.001.md +++ b/generated_pages/techniques/T0126.001.md @@ -2,6 +2,50 @@ **Summary**: Call to action to attend an event +**Tactic**: TA10 Drive Offline Activity **Parent Technique:** T0126 Encourage Attendance at Events + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00074 The Tactics & Tropes of the Internet Research Agency](../../generated_pages/incidents/I00074.md) | “A few press investigations have alluded to the [Russia’s Internet Research Agency]’s job ads. 
The extent of the human asset recruitment strategy is revealed in the organic data set. It is expansive, and was clearly a priority. Posts encouraging Americans to perform various types of tasks for IRA handlers appeared in Black, Left, and Right-targeted groups, though they were most numerous in the Black community. They included:

- Requests for contact with preachers from Black churches (Black_Baptist_Church)
- Offers of free counseling to people with sexual addiction (Army of Jesus)
- Soliciting volunteers to hand out fliers
- Soliciting volunteers to teach self-defense classes
- Offering free self-defense classes (Black Fist/Fit Black)
- Requests for followers to attend political rallies
- Requests for photographers to document protests
- Requests for speakers at protests
- Requests to protest the Westborough Baptist Church (LGBT United)
- Job offers for designers to help design fliers, sites, Facebook sticker packs
- Requests for female followers to send photos for a calendar
- Requests for followers to send photos to be shared to the Page (Back the Badge)
- Soliciting videos for a YouTube contest called “Pee on Hillary”
- Encouraging people to apply to be part of a Black reality TV show
- Posting a wide variety of job ads (write for BlackMattersUS and others)
- Requests for lawyers to volunteer to assist with immigration cases”


This behaviour matches T0097.106: Recruiter Persona because the threat actors are presenting tasks for their target audience to complete in the style of a job posting (even though some of the tasks were presented as voluntary / unpaid efforts), including calls for people to attend political rallies (T0126.001: Call to Action to Attend). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0126.001: Call to Action to Attend + +**Summary**: Call to action to attend an event + +**Tactic**: TA10 Drive Offline Activity **Parent Technique:** T0126 Encourage Attendance at Events + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00074 The Tactics & Tropes of the Internet Research Agency](../../generated_pages/incidents/I00074.md) | “A few press investigations have alluded to the [Russia’s Internet Research Agency]’s job ads. The extent of the human asset recruitment strategy is revealed in the organic data set. It is expansive, and was clearly a priority. Posts encouraging Americans to perform various types of tasks for IRA handlers appeared in Black, Left, and Right-targeted groups, though they were most numerous in the Black community. They included:

- Requests for contact with preachers from Black churches (Black_Baptist_Church)
- Offers of free counseling to people with sexual addiction (Army of Jesus)
- Soliciting volunteers to hand out fliers
- Soliciting volunteers to teach self-defense classes
- Offering free self-defense classes (Black Fist/Fit Black)
- Requests for followers to attend political rallies
- Requests for photographers to document protests
- Requests for speakers at protests
- Requests to protest the Westborough Baptist Church (LGBT United)
- Job offers for designers to help design fliers, sites, Facebook sticker packs
- Requests for female followers to send photos for a calendar
- Requests for followers to send photos to be shared to the Page (Back the Badge)
- Soliciting videos for a YouTube contest called “Pee on Hillary”
- Encouraging people to apply to be part of a Black reality TV show
- Posting a wide variety of job ads (write for BlackMattersUS and others)
- Requests for lawyers to volunteer to assist with immigration cases”


This behaviour matches T0097.106: Recruiter Persona because the threat actors are presenting tasks for their target audience to complete in the style of a job posting (even though some of the tasks were presented as voluntary / unpaid efforts), including calls for people to attend political rallies (T0126.001: Call to Action to Attend). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0126.001: Call to Action to Attend + +**Summary**: Call to action to attend an event + **Tactic**: TA10 Drive Offline Activity @@ -20,4 +64,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0126.002.md b/generated_pages/techniques/T0126.002.md index d17f810..36dca06 100644 --- a/generated_pages/techniques/T0126.002.md +++ b/generated_pages/techniques/T0126.002.md @@ -2,6 +2,50 @@ **Summary**: Facilitate logistics or support for travel, food, housing, etc. +**Tactic**: TA10 Drive Offline Activity **Parent Technique:** T0126 Encourage Attendance at Events + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00122 The Extreme Right on Discord](../../generated_pages/incidents/I00122.md) | Discord is an example of a T0151.004: Chat Platform, which allows users to create their own T0151.005: Chat Community Server. The Institute for Strategic Dialog (ISD) conducted an investigation into the extreme right’s usage of Discord servers:

Discord is a free service accessible via phones and computers. It allows users to talk to each other in real time via voice, text or video chat and emerged in 2015 as a platform designed to assist gamers in communicating with each other while playing video games. The popularity of the platform has surged in recent years, and it is currently estimated to have 140 million monthly active users.

Chatrooms – known as servers – in the platform can be created by anyone, and they are used for a range of purposes that extend far beyond gaming. Such purposes include the discussion of extreme right-wing ideologies and the planning of offline extremist activity. Ahead of the far-right Unite the Right rally in Charlottesville, Virginia, in August 2017, organisers used Discord to plan and promote events and posted swastikas and praised Hitler in chat rooms with names like “National Socialist Army” and “Führer’s Gas Chamber”.


In this example a Discord server was used to organise the 2017 Charlottesville Unite the Right rally. Chat rooms in the server were used to discuss different topics related to the rally (T0057: Organise Events, T0126.002: Facilitate Logistics or Support for Attendance, T0151.004: Chat Platform, T0151.005: Chat Community Server, T0151.006: Chat Room).

Another primary activity engaged in the servers analysed are raids against other servers associated with political opponents, and in particular those that appear to be pro-LGBTQ. Raids are a phenomenon in which a small group of users will join a Discord server with the sole purpose of spamming the host with offensive or incendiary messages and content with the aim of upsetting local users or having the host server banned by Discord. On two servers examined here, raiding was their primary function.

Among servers devoted to this activity, specific channels were often created to host links to servers that users were then encouraged to raid. Users are encouraged to be as offensive as possible with the aim of upsetting or angering users on the raided server, and channels often had content banks of offensive memes and content to be shared on raided servers.

The use of raids demonstrates the gamified nature of extremist activity on Discord, where use of the platform and harassment of political opponents is itself turned into a type of real-life video game designed to strengthen in-group affiliation. This combined with the broader extremist activity identified in these channels suggests that the combative activity of raiding could provide a pathway for younger people to become more engaged with extremist activity.


Discord servers were used by members of the extreme right to coordinate harassment of targeted communities (T0048: Harass, T0049.005: Conduct Swarming, T0151.004: Chat Platform, T0151.005: Chat Community Server). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0126.002: Facilitate Logistics or Support for Attendance + +**Summary**: Facilitate logistics or support for travel, food, housing, etc. + +**Tactic**: TA10 Drive Offline Activity **Parent Technique:** T0126 Encourage Attendance at Events + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00122 The Extreme Right on Discord](../../generated_pages/incidents/I00122.md) | Discord is an example of a T0151.004: Chat Platform, which allows users to create their own T0151.005: Chat Community Server. The Institute for Strategic Dialog (ISD) conducted an investigation into the extreme right’s usage of Discord servers:

Discord is a free service accessible via phones and computers. It allows users to talk to each other in real time via voice, text or video chat and emerged in 2015 as a platform designed to assist gamers in communicating with each other while playing video games. The popularity of the platform has surged in recent years, and it is currently estimated to have 140 million monthly active users.

Chatrooms – known as servers – in the platform can be created by anyone, and they are used for a range of purposes that extend far beyond gaming. Such purposes include the discussion of extreme right-wing ideologies and the planning of offline extremist activity. Ahead of the far-right Unite the Right rally in Charlottesville, Virginia, in August 2017, organisers used Discord to plan and promote events and posted swastikas and praised Hitler in chat rooms with names like “National Socialist Army” and “Führer’s Gas Chamber”.


In this example a Discord server was used to organise the 2017 Charlottesville Unite the Right rally. Chat rooms in the server were used to discuss different topics related to the rally (T0057: Organise Events, T0126.002: Facilitate Logistics or Support for Attendance, T0151.004: Chat Platform, T0151.005: Chat Community Server, T0151.006: Chat Room).

Another primary activity engaged in the servers analysed are raids against other servers associated with political opponents, and in particular those that appear to be pro-LGBTQ. Raids are a phenomenon in which a small group of users will join a Discord server with the sole purpose of spamming the host with offensive or incendiary messages and content with the aim of upsetting local users or having the host server banned by Discord. On two servers examined here, raiding was their primary function.

Among servers devoted to this activity, specific channels were often created to host links to servers that users were then encouraged to raid. Users are encouraged to be as offensive as possible with the aim of upsetting or angering users on the raided server, and channels often had content banks of offensive memes and content to be shared on raided servers.

The use of raids demonstrates the gamified nature of extremist activity on Discord, where use of the platform and harassment of political opponents is itself turned into a type of real-life video game designed to strengthen in-group affiliation. This combined with the broader extremist activity identified in these channels suggests that the combative activity of raiding could provide a pathway for younger people to become more engaged with extremist activity.


Discord servers were used by members of the extreme right to coordinate harassment of targeted communities (T0048: Harass, T0049.005: Conduct Swarming, T0151.004: Chat Platform, T0151.005: Chat Community Server). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0126.002: Facilitate Logistics or Support for Attendance + +**Summary**: Facilitate logistics or support for travel, food, housing, etc. + **Tactic**: TA10 Drive Offline Activity @@ -20,4 +64,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0126.md b/generated_pages/techniques/T0126.md index 2d26a46..692cbd9 100644 --- a/generated_pages/techniques/T0126.md +++ b/generated_pages/techniques/T0126.md @@ -2,6 +2,48 @@ **Summary**: Operation encourages attendance at existing real world event. +**Tactic**: TA10 Drive Offline Activity + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0126: Encourage Attendance at Events + +**Summary**: Operation encourages attendance at existing real world event. + +**Tactic**: TA10 Drive Offline Activity + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0126: Encourage Attendance at Events + +**Summary**: Operation encourages attendance at existing real world event. + **Tactic**: TA10 Drive Offline Activity @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0127.001.md b/generated_pages/techniques/T0127.001.md index ede653f..8f15384 100644 --- a/generated_pages/techniques/T0127.001.md +++ b/generated_pages/techniques/T0127.001.md @@ -2,6 +2,48 @@ **Summary**: An influence operation may directly Conduct Physical Violence to achieve campaign goals. +**Tactic**: TA10 Drive Offline Activity **Parent Technique:** T0127 Physical Violence + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0127.001: Conduct Physical Violence + +**Summary**: An influence operation may directly Conduct Physical Violence to achieve campaign goals. + +**Tactic**: TA10 Drive Offline Activity **Parent Technique:** T0127 Physical Violence + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0127.001: Conduct Physical Violence + +**Summary**: An influence operation may directly Conduct Physical Violence to achieve campaign goals. 
+ **Tactic**: TA10 Drive Offline Activity @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0127.002.md b/generated_pages/techniques/T0127.002.md index 1e94ad8..abc4f67 100644 --- a/generated_pages/techniques/T0127.002.md +++ b/generated_pages/techniques/T0127.002.md @@ -2,6 +2,48 @@ **Summary**: An influence operation may Encourage others to engage in Physical Violence to achieve campaign goals. +**Tactic**: TA10 Drive Offline Activity **Parent Technique:** T0127 Physical Violence + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0127.002: Encourage Physical Violence + +**Summary**: An influence operation may Encourage others to engage in Physical Violence to achieve campaign goals. + +**Tactic**: TA10 Drive Offline Activity **Parent Technique:** T0127 Physical Violence + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0127.002: Encourage Physical Violence + +**Summary**: An influence operation may Encourage others to engage in Physical Violence to achieve campaign goals. + **Tactic**: TA10 Drive Offline Activity @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0127.md b/generated_pages/techniques/T0127.md index 3f777a4..9fb010a 100644 --- a/generated_pages/techniques/T0127.md +++ b/generated_pages/techniques/T0127.md @@ -2,6 +2,48 @@ **Summary**: Physical violence refers to the use of force to injure, abuse, damage, or destroy. An influence operation may conduct or encourage physical violence to discourage opponents from promoting conflicting content or draw attention to operation narratives using shock value. +**Tactic**: TA10 Drive Offline Activity + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0127: Physical Violence + +**Summary**: Physical violence refers to the use of force to injure, abuse, damage, or destroy. An influence operation may conduct or encourage physical violence to discourage opponents from promoting conflicting content or draw attention to operation narratives using shock value. + +**Tactic**: TA10 Drive Offline Activity + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0127: Physical Violence + +**Summary**: Physical violence refers to the use of force to injure, abuse, damage, or destroy. An influence operation may conduct or encourage physical violence to discourage opponents from promoting conflicting content or draw attention to operation narratives using shock value. 
+ **Tactic**: TA10 Drive Offline Activity @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0128.001.md b/generated_pages/techniques/T0128.001.md index 7c8d7dd..4fb85cb 100644 --- a/generated_pages/techniques/T0128.001.md +++ b/generated_pages/techniques/T0128.001.md @@ -2,6 +2,48 @@ **Summary**: An operation may use pseudonyms, or fake names, to mask the identity of operational accounts, channels, pages etc., publish anonymous content, or otherwise use falsified personas to conceal the identity of the operation. An operation may coordinate pseudonyms across multiple platforms, for example, by writing an article under a pseudonym and then posting a link to the article on social media on an account, channel, or page with the same falsified name. +**Tactic**: TA11 Persist in the Information Environment **Parent Technique:** T0128 Conceal Information Assets + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0128.001: Use Pseudonyms + +**Summary**: An operation may use pseudonyms, or fake names, to mask the identity of operational accounts, channels, pages etc., publish anonymous content, or otherwise use falsified personas to conceal the identity of the operation. An operation may coordinate pseudonyms across multiple platforms, for example, by writing an article under a pseudonym and then posting a link to the article on social media on an account, channel, or page with the same falsified name. + +**Tactic**: TA11 Persist in the Information Environment **Parent Technique:** T0128 Conceal Information Assets + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0128.001: Use Pseudonyms + +**Summary**: An operation may use pseudonyms, or fake names, to mask the identity of operational accounts, channels, pages etc., publish anonymous content, or otherwise use falsified personas to conceal the identity of the operation. An operation may coordinate pseudonyms across multiple platforms, for example, by writing an article under a pseudonym and then posting a link to the article on social media on an account, channel, or page with the same falsified name. + **Tactic**: TA11 Persist in the Information Environment @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0128.002.md b/generated_pages/techniques/T0128.002.md index e1f0d32..ef20f86 100644 --- a/generated_pages/techniques/T0128.002.md +++ b/generated_pages/techniques/T0128.002.md @@ -2,6 +2,48 @@ **Summary**: Concealing network identity aims to hide the existence an influence operation’s network completely. Unlike concealing sponsorship, concealing network identity denies the existence of any sort of organisation. 
+**Tactic**: TA11 Persist in the Information Environment **Parent Technique:** T0128 Conceal Information Assets + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0128.002: Conceal Network Identity + +**Summary**: Concealing network identity aims to hide the existence an influence operation’s network completely. Unlike concealing sponsorship, concealing network identity denies the existence of any sort of organisation. + +**Tactic**: TA11 Persist in the Information Environment **Parent Technique:** T0128 Conceal Information Assets + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0128.002: Conceal Network Identity + +**Summary**: Concealing network identity aims to hide the existence an influence operation’s network completely. Unlike concealing sponsorship, concealing network identity denies the existence of any sort of organisation. + **Tactic**: TA11 Persist in the Information Environment @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0128.003.md b/generated_pages/techniques/T0128.003.md index eb039b5..aa7f010 100644 --- a/generated_pages/techniques/T0128.003.md +++ b/generated_pages/techniques/T0128.003.md @@ -2,6 +2,48 @@ **Summary**: Distancing reputable individuals from the operation occurs when enlisted individuals, such as celebrities or subject matter experts, actively disengage themselves from operation activities and messaging. Individuals may distance themselves from the operation by deleting old posts or statements, unfollowing operation information assets, or otherwise detaching themselves from the operation’s timeline. An influence operation may want reputable individuals to distance themselves from the operation to reduce operation exposure, particularly if the operation aims to remove all evidence. +**Tactic**: TA11 Persist in the Information Environment **Parent Technique:** T0128 Conceal Information Assets + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0128.003: Distance Reputable Individuals from Operation + +**Summary**: Distancing reputable individuals from the operation occurs when enlisted individuals, such as celebrities or subject matter experts, actively disengage themselves from operation activities and messaging. Individuals may distance themselves from the operation by deleting old posts or statements, unfollowing operation information assets, or otherwise detaching themselves from the operation’s timeline. An influence operation may want reputable individuals to distance themselves from the operation to reduce operation exposure, particularly if the operation aims to remove all evidence. 
+ +**Tactic**: TA11 Persist in the Information Environment **Parent Technique:** T0128 Conceal Information Assets + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0128.003: Distance Reputable Individuals from Operation + +**Summary**: Distancing reputable individuals from the operation occurs when enlisted individuals, such as celebrities or subject matter experts, actively disengage themselves from operation activities and messaging. Individuals may distance themselves from the operation by deleting old posts or statements, unfollowing operation information assets, or otherwise detaching themselves from the operation’s timeline. An influence operation may want reputable individuals to distance themselves from the operation to reduce operation exposure, particularly if the operation aims to remove all evidence. + **Tactic**: TA11 Persist in the Information Environment @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0128.004.md b/generated_pages/techniques/T0128.004.md index c9ef33a..1d68d8a 100644 --- a/generated_pages/techniques/T0128.004.md +++ b/generated_pages/techniques/T0128.004.md @@ -2,6 +2,48 @@ **Summary**: Laundering occurs when an influence operation acquires control of previously legitimate information assets such as accounts, channels, pages etc. from third parties through sale or exchange and often in contravention of terms of use. Influence operations use laundered assets to reach target audience members from within an existing information community and to complicate attribution. +**Tactic**: TA11 Persist in the Information Environment **Parent Technique:** T0128 Conceal Information Assets + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0128.004: Launder Information Assets + +**Summary**: Laundering occurs when an influence operation acquires control of previously legitimate information assets such as accounts, channels, pages etc. from third parties through sale or exchange and often in contravention of terms of use. Influence operations use laundered assets to reach target audience members from within an existing information community and to complicate attribution. + +**Tactic**: TA11 Persist in the Information Environment **Parent Technique:** T0128 Conceal Information Assets + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0128.004: Launder Information Assets + +**Summary**: Laundering occurs when an influence operation acquires control of previously legitimate information assets such as accounts, channels, pages etc. from third parties through sale or exchange and often in contravention of terms of use. Influence operations use laundered assets to reach target audience members from within an existing information community and to complicate attribution. 
+ **Tactic**: TA11 Persist in the Information Environment @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0128.005.md b/generated_pages/techniques/T0128.005.md index ba55ffb..fc03288 100644 --- a/generated_pages/techniques/T0128.005.md +++ b/generated_pages/techniques/T0128.005.md @@ -2,6 +2,48 @@ **Summary**: Changing names or brand names of information assets such as accounts, channels, pages etc. An operation may change the names or brand names of its assets throughout an operation to avoid detection or alter the names of newly acquired or repurposed assets to fit operational narratives. +**Tactic**: TA11 Persist in the Information Environment **Parent Technique:** T0128 Conceal Information Assets + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0128.005: Change Names of Information Assets + +**Summary**: Changing names or brand names of information assets such as accounts, channels, pages etc. An operation may change the names or brand names of its assets throughout an operation to avoid detection or alter the names of newly acquired or repurposed assets to fit operational narratives. + +**Tactic**: TA11 Persist in the Information Environment **Parent Technique:** T0128 Conceal Information Assets + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0128.005: Change Names of Information Assets + +**Summary**: Changing names or brand names of information assets such as accounts, channels, pages etc. An operation may change the names or brand names of its assets throughout an operation to avoid detection or alter the names of newly acquired or repurposed assets to fit operational narratives. + **Tactic**: TA11 Persist in the Information Environment @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0128.md b/generated_pages/techniques/T0128.md index 3f4ca59..68f7b5d 100644 --- a/generated_pages/techniques/T0128.md +++ b/generated_pages/techniques/T0128.md @@ -2,6 +2,48 @@ **Summary**: Conceal the identity or provenance of campaign information assets such as accounts, channels, pages etc. to avoid takedown and attribution. +**Tactic**: TA11 Persist in the Information Environment + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0128: Conceal Information Assets + +**Summary**: Conceal the identity or provenance of campaign information assets such as accounts, channels, pages etc. to avoid takedown and attribution. 
+ +**Tactic**: TA11 Persist in the Information Environment + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0128: Conceal Information Assets + +**Summary**: Conceal the identity or provenance of campaign information assets such as accounts, channels, pages etc. to avoid takedown and attribution. + **Tactic**: TA11 Persist in the Information Environment @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0129.001.md b/generated_pages/techniques/T0129.001.md index a81cd9f..514c075 100644 --- a/generated_pages/techniques/T0129.001.md +++ b/generated_pages/techniques/T0129.001.md @@ -2,6 +2,48 @@ **Summary**: Concealing network identity aims to hide the existence an influence operation’s network completely. Unlike concealing sponsorship, concealing network identity denies the existence of any sort of organisation. +**Tactic**: TA11 Persist in the Information Environment **Parent Technique:** T0129 Conceal Operational Activity + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0129.001: Conceal Network Identity + +**Summary**: Concealing network identity aims to hide the existence an influence operation’s network completely. Unlike concealing sponsorship, concealing network identity denies the existence of any sort of organisation. + +**Tactic**: TA11 Persist in the Information Environment **Parent Technique:** T0129 Conceal Operational Activity + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0129.001: Conceal Network Identity + +**Summary**: Concealing network identity aims to hide the existence an influence operation’s network completely. Unlike concealing sponsorship, concealing network identity denies the existence of any sort of organisation. + **Tactic**: TA11 Persist in the Information Environment @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0129.002.md b/generated_pages/techniques/T0129.002.md index 98376a8..71dda1d 100644 --- a/generated_pages/techniques/T0129.002.md +++ b/generated_pages/techniques/T0129.002.md @@ -2,6 +2,48 @@ **Summary**: An influence operation may mix its own operation content with legitimate news or external unrelated content to disguise operational objectives, narratives, or existence. For example, an operation may generate "lifestyle" or "cuisine" content alongside regular operation content. 
+**Tactic**: TA11 Persist in the Information Environment **Parent Technique:** T0129 Conceal Operational Activity + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0129.002: Generate Content Unrelated to Narrative + +**Summary**: An influence operation may mix its own operation content with legitimate news or external unrelated content to disguise operational objectives, narratives, or existence. For example, an operation may generate "lifestyle" or "cuisine" content alongside regular operation content. + +**Tactic**: TA11 Persist in the Information Environment **Parent Technique:** T0129 Conceal Operational Activity + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0129.002: Generate Content Unrelated to Narrative + +**Summary**: An influence operation may mix its own operation content with legitimate news or external unrelated content to disguise operational objectives, narratives, or existence. For example, an operation may generate "lifestyle" or "cuisine" content alongside regular operation content. + **Tactic**: TA11 Persist in the Information Environment @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0129.003.md b/generated_pages/techniques/T0129.003.md index dc283e4..8154a3a 100644 --- a/generated_pages/techniques/T0129.003.md +++ b/generated_pages/techniques/T0129.003.md @@ -2,6 +2,48 @@ **Summary**: Breaking association with content occurs when an influence operation actively separates itself from its own content. An influence operation may break association with content by unfollowing, unliking, or unsharing its content, removing attribution from its content, or otherwise taking actions that distance the operation from its messaging. An influence operation may break association with its content to complicate attribution or regain credibility for a new operation. +**Tactic**: TA11 Persist in the Information Environment **Parent Technique:** T0129 Conceal Operational Activity + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0129.003: Break Association with Content + +**Summary**: Breaking association with content occurs when an influence operation actively separates itself from its own content. An influence operation may break association with content by unfollowing, unliking, or unsharing its content, removing attribution from its content, or otherwise taking actions that distance the operation from its messaging. An influence operation may break association with its content to complicate attribution or regain credibility for a new operation. 
+ +**Tactic**: TA11 Persist in the Information Environment **Parent Technique:** T0129 Conceal Operational Activity + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0129.003: Break Association with Content + +**Summary**: Breaking association with content occurs when an influence operation actively separates itself from its own content. An influence operation may break association with content by unfollowing, unliking, or unsharing its content, removing attribution from its content, or otherwise taking actions that distance the operation from its messaging. An influence operation may break association with its content to complicate attribution or regain credibility for a new operation. + **Tactic**: TA11 Persist in the Information Environment @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0129.004.md b/generated_pages/techniques/T0129.004.md index 5bba55d..c53b083 100644 --- a/generated_pages/techniques/T0129.004.md +++ b/generated_pages/techniques/T0129.004.md @@ -2,6 +2,48 @@ **Summary**: URL deletion occurs when an influence operation completely removes its website registration, rendering the URL inaccessible. An influence operation may delete its URLs to complicate attribution or remove online documentation that the operation ever occurred. +**Tactic**: TA11 Persist in the Information Environment **Parent Technique:** T0129 Conceal Operational Activity + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0129.004: Delete URLs + +**Summary**: URL deletion occurs when an influence operation completely removes its website registration, rendering the URL inaccessible. An influence operation may delete its URLs to complicate attribution or remove online documentation that the operation ever occurred. + +**Tactic**: TA11 Persist in the Information Environment **Parent Technique:** T0129 Conceal Operational Activity + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0129.004: Delete URLs + +**Summary**: URL deletion occurs when an influence operation completely removes its website registration, rendering the URL inaccessible. An influence operation may delete its URLs to complicate attribution or remove online documentation that the operation ever occurred. 
+ **Tactic**: TA11 Persist in the Information Environment @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0129.005.md b/generated_pages/techniques/T0129.005.md index 246eefd..377e538 100644 --- a/generated_pages/techniques/T0129.005.md +++ b/generated_pages/techniques/T0129.005.md @@ -2,6 +2,50 @@ **Summary**: Coordinate on encrypted/closed networks +**Tactic**: TA11 Persist in the Information Environment **Parent Technique:** T0129 Conceal Operational Activity + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00113 Inside the Shadowy World of Disinformation for Hire in Kenya](../../generated_pages/incidents/I00113.md) | Researchers at Mozilla examined influence operations targeting Kenyan citizens on Twitter in 2021, providing “a grim window into the booming and shadowy industry of Twitter influencers for political hire here in Kenya”, and giving insight into operations’ operationalisation:

In our interviews with one of the influencers, they informed us of the agile tactics they use to organize and avoid detection. For example, when it’s time to carry out the campaign the influencers would be added to a Whatsapp group. Here, they received direction about what to post, the hashtags to use, which tweets to engage with and who to target. Synchronizing the tweets was also incredibly important for them. It’s what enables them to achieve their goal of trending on Twitter and gain amplification.

[...]

They revealed to us that those participating in the exercise are paid roughly between $10 and $15 to participate in three campaigns per day. Each campaign execution involves tweeting about the hashtags of the day until it appears on the trending section of Twitter. Additionally, some individuals have managed to reach retainer level and get paid about $250 per month. Their job is to make sure the campaigns are executed on a day-by-day basis with different hashtags.


An M-PESA account (T0148.002: Bank Account Asset, T0148.001: Online Banking Platform) was used to pay campaign participants.

Participants were organised in WhatsApp groups (T0129.005: Coordinate on Encrypted/Closed Networks, T0151.007: Chat Broadcast Group, T0151.004: Chat Platform), in which they planned how to get campaign content trending on Twitter (T0121: Manipulate Platform Algorithm, T0151.008: Microblogging Platform). | + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0129.005: Coordinate on Encrypted/Closed Networks + +**Summary**: Coordinate on encrypted/closed networks + +**Tactic**: TA11 Persist in the Information Environment **Parent Technique:** T0129 Conceal Operational Activity + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00113 Inside the Shadowy World of Disinformation for Hire in Kenya](../../generated_pages/incidents/I00113.md) | Researchers at Mozilla examined influence operations targeting Kenyan citizens on Twitter in 2021, providing “a grim window into the booming and shadowy industry of Twitter influencers for political hire here in Kenya”, and giving insight into operations’ operationalisation:

In our interviews with one of the influencers, they informed us of the agile tactics they use to organize and avoid detection. For example, when it’s time to carry out the campaign the influencers would be added to a Whatsapp group. Here, they received direction about what to post, the hashtags to use, which tweets to engage with and who to target. Synchronizing the tweets was also incredibly important for them. It’s what enables them to achieve their goal of trending on Twitter and gain amplification.

[...]

They revealed to us that those participating in the exercise are paid roughly between $10 and $15 to participate in three campaigns per day. Each campaign execution involves tweeting about the hashtags of the day until it appears on the trending section of Twitter. Additionally, some individuals have managed to reach retainer level and get paid about $250 per month. Their job is to make sure the campaigns are executed on a day-by-day basis with different hashtags.


An M-PESA account (T0148.002: Bank Account Asset, T0148.001: Online Banking Platform) was used to pay campaign participants.

Participants were organised in WhatsApp groups (T0129.005: Coordinate on Encrypted/Closed Networks, T0151.007: Chat Broadcast Group, T0151.004: Chat Platform), in which they planned how to get campaign content trending on Twitter (T0121: Manipulate Platform Algorithm, T0151.008: Microblogging Platform). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0129.005: Coordinate on Encrypted/Closed Networks + +**Summary**: Coordinate on encrypted/closed networks + **Tactic**: TA11 Persist in the Information Environment @@ -20,4 +64,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0129.006.md b/generated_pages/techniques/T0129.006.md index 795c846..14e252a 100644 --- a/generated_pages/techniques/T0129.006.md +++ b/generated_pages/techniques/T0129.006.md @@ -2,6 +2,53 @@ **Summary**: Without "smoking gun" proof (and even with proof), the incident creator can or will deny involvement. This technique also leverages the attacker advantages outlined in "Demand insurmountable proof", specifically the asymmetric disadvantage for truth-tellers in a "firehose of misinformation" environment. +**Tactic**: TA11 Persist in the Information Environment **Parent Technique:** T0129 Conceal Operational Activity + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00075 How Russia Meddles Abroad for Profit: Cash, Trolls and a Cult Leader](../../generated_pages/incidents/I00075.md) | “Only three of the Russian operatives identified by local hires of the campaign responded to requests for comment. All acknowledged visiting Madagascar last year, but only one admitted working as a pollster on behalf of the president.

“The others said they were simply tourists. Pyotr Korolyov, described as a sociologist on one spreadsheet, spent much of the summer of 2018 and fall hunched over a computer, deep in polling data at La Résidence Ankerana, a hotel the Russians used as their headquarters, until he was hospitalized with the measles, according to one person who worked with him.

“In an email exchange, Mr. Korolyov confirmed that he had come down with the measles, but rejected playing a role in a Russian operation. He did defend the idea of one, though.

““Russia should influence elections around the world, the same way the United States influences elections,” he wrote. “Sooner or later Russia will return to global politics as a global player,” he added. “And the American establishment will just have to accept that.””


This behaviour matches T0129.006: Deny Involvement because the actors contacted by journalists denied that they had participated in election interference (in spite of the evidence to the contrary). | +| [I00093 China Falsely Denies Disinformation Campaign Targeting Canada’s Prime Minister](../../generated_pages/incidents/I00093.md) | “On October 23, Canada’s Foreign Ministry said it had discovered a disinformation campaign, likely tied to China, aimed at discrediting dozens of Canadian politicians, including Prime Minister Justin Trudeau.

“The ministry said the campaign took place in August and September. It used new and hijacked social media accounts to bulk-post messages targeting Canadian politicians (T0146: Account Asset, T0150.001: Newly Created Asset, T0150.005: Compromised Asset).

“A Chinese Embassy in Canada spokesperson dismissed Canada’s accusation as baseless.

““Canada was a downright liar and disseminator of false information… Beijing has never meddled in another nation’s domestic affairs.”

“A Chinese Embassy in Canada spokesperson dismissed Canada’s accusation as baseless.

“That is false.

“The Canadian government's report is based on an investigation conducted by its Rapid Response Mechanism cyber intelligence unit in cooperation with the social media platforms.

“The investigation exposed China’s disinformation campaign dubbed “Spamouflage” -- for its tactic of using “a network of new or hijacked social media accounts that posts and increases the number of propaganda messages across multiple social media platforms – including Facebook, X/Twitter, Instagram, YouTube, Medium, Reddit, TikTok, and LinkedIn.””


In this case a network of accounts attributed to China were identified operating on multiple platforms. The report was dismissed as false information by an official in the Chinese Embassy in Canada (T0143.001: Authentic Persona, T0097.111: Government Official Persona, T0129.006: Deny Involvement). | + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0129.006: Deny Involvement + +**Summary**: Without "smoking gun" proof (and even with proof), the incident creator can or will deny involvement. This technique also leverages the attacker advantages outlined in "Demand insurmountable proof", specifically the asymmetric disadvantage for truth-tellers in a "firehose of misinformation" environment. + +**Tactic**: TA11 Persist in the Information Environment **Parent Technique:** T0129 Conceal Operational Activity + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00075 How Russia Meddles Abroad for Profit: Cash, Trolls and a Cult Leader](../../generated_pages/incidents/I00075.md) | “Only three of the Russian operatives identified by local hires of the campaign responded to requests for comment. All acknowledged visiting Madagascar last year, but only one admitted working as a pollster on behalf of the president.

“The others said they were simply tourists. Pyotr Korolyov, described as a sociologist on one spreadsheet, spent much of the summer of 2018 and fall hunched over a computer, deep in polling data at La Résidence Ankerana, a hotel the Russians used as their headquarters, until he was hospitalized with the measles, according to one person who worked with him.

“In an email exchange, Mr. Korolyov confirmed that he had come down with the measles, but rejected playing a role in a Russian operation. He did defend the idea of one, though.

““Russia should influence elections around the world, the same way the United States influences elections,” he wrote. “Sooner or later Russia will return to global politics as a global player,” he added. “And the American establishment will just have to accept that.””


This behaviour matches T0129.006: Deny Involvement because the actors contacted by journalists denied that they had participated in election interference (in spite of the evidence to the contrary). | +| [I00093 China Falsely Denies Disinformation Campaign Targeting Canada’s Prime Minister](../../generated_pages/incidents/I00093.md) | “On October 23, Canada’s Foreign Ministry said it had discovered a disinformation campaign, likely tied to China, aimed at discrediting dozens of Canadian politicians, including Prime Minister Justin Trudeau.

“The ministry said the campaign took place in August and September. It used new and hijacked social media accounts to bulk-post messages targeting Canadian politicians (T0146: Account Asset, T0150.001: Newly Created Asset, T0150.005: Compromised Asset).

“A Chinese Embassy in Canada spokesperson dismissed Canada’s accusation as baseless.

““Canada was a downright liar and disseminator of false information… Beijing has never meddled in another nation’s domestic affairs.”

“A Chinese Embassy in Canada spokesperson dismissed Canada’s accusation as baseless.

“That is false.

“The Canadian government's report is based on an investigation conducted by its Rapid Response Mechanism cyber intelligence unit in cooperation with the social media platforms.

“The investigation exposed China’s disinformation campaign dubbed “Spamouflage” -- for its tactic of using “a network of new or hijacked social media accounts that posts and increases the number of propaganda messages across multiple social media platforms – including Facebook, X/Twitter, Instagram, YouTube, Medium, Reddit, TikTok, and LinkedIn.””


In this case a network of accounts attributed to China were identified operating on multiple platforms. The report was dismissed as false information by an official in the Chinese Embassy in Canada (T0143.001: Authentic Persona, T0097.111: Government Official Persona, T0129.006: Deny Involvement). | +| [I00102 Ignore The Poway Synagogue Shooter’s Manifesto: Pay Attention To 8chan’s /pol/ Board](../../generated_pages/incidents/I00102.md) | On April 27, 2019, at around 11:30 a.m. local time, a young man with a semi-automatic rifle walked into the Chabad of Poway Synagogue in Poway, California. He opened fire, killing one worshipper and wounding three others. In the hours since the shooting, a manifesto, believed to be written by the shooter, began circulating online. Evidence has also surfaced that, like the Christchurch Mosque shooter, this killer began his rampage with a post on 8chan’s /pol/ board.

Although both of these attacks may seem different, since they targeted worshippers of different faiths, both shooters were united by the same fascist ideology. They were also both radicalized in the same place: 8chan’s /pol/ board.

This has been corroborated by posts on the board itself, where “anons,” as the posters call themselves, recirculated the shooter’s since-deleted post. In it, the alleged shooter claims to have been “lurking” on the site for a year and a half. He includes a link to a livestream of his rampage — which thankfully does not appear to have worked — and he also includes a pastebin link to his manifesto.

The very first response to his announcement was another anon cheering him on and telling him to “get the high score,” AKA, kill a huge number of people.


Before carrying out a mass shooting, the shooter posted a thread to 8chan’s /pol/ board. The post directed users to a variety of different platforms (T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0122: Direct Users to Alternative Platforms); a Facebook account on which the shooter attempted to livestream the shooting (T0146: Account Asset, T0151.001: Social Media Platform); and a manifesto they had written hosted on pastebin (T0146.006: Open Access Platform, T0152.005: Paste Platform, T0115: Post Content) and uploaded to the file sharing platform Mediafire (T0152.010: File Hosting Platform, T0085.004: Develop Document).

The report looks deeper into 8chan’s /pol/ board:

8chan is a large website, which includes a number of different discussion groups about everything from anime to left-wing politics. /pol/ is one particularly active board on the website, and it is best described as a gathering place for extremely online neo-Nazis.

[...]

I’ve browsed /pol/ on an almost daily basis since the Christchurch shooting. It has not been difficult to find calls for violence. On Monday, March 25 of this year, I ran across evidence of anons translating the Christchurch shooter’s manifesto into other languages in an attempt to inspire more shootings across the globe.

This tactic can work, and today’s shooting is proof. The Poway Synagogue shooter directly cited the Christchurch shooter as his inspiration, saying he decided to carry out his attack roughly two weeks after that shooting. On /pol/, many anons refer to the Christchurch shooter, Brenton Tarrant, as “Saint Tarrant,” complete with medieval-inspired iconography.


Manifestos posted to 8chan are translated and reshared by other platform users (T0101: Create Localised Content, T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0084.004: Appropriate Content).

When I began looking through /pol/ right after the Poway Synagogue shooting, I came across several claims that the shootings had been a “false flag” aimed at making the message board look bad.

When Bellingcat tweeted out a warning about shitposting and the shooter’s manifesto, in the immediate wake of the attack, probable anons even commented on the Tweet in an attempt to deny that a channer had been behind the attack.

This is a recognizable pattern that occurs in the wake of any crimes committed by members of the board. While the initial response to the Christchurch shooter’s massacre thread was riotous glee, in the days after the shooting many anons began to claim the attack had been a false flag. This actually sparked significant division and debate between the members of /pol/. In the below image, a user mocks other anons for being unable to “believe something in your favor is real.” Another anon responds, “As the evidence comes out, its [sic] quite clear that this was a false flag.”

In his manifesto, the Poway Synagogue shooter even weighed in on this debate, accusing other anons who called the Christchurch and Tree of Life Synagogue shootings “false flags” to merely have been scared: “They can’t fathom that there are brave White men alive who have the willpower and courage it takes to say, ‘Fuck my life—I’m willing to sacrifice everything for the benefit of my race.’”


Platform users deny that their platform has been used by mass shooters to publish their manifestos (T0129.006: Deny Involvement). | + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0129.006: Deny Involvement + +**Summary**: Without "smoking gun" proof (and even with proof), the incident creator can or will deny involvement. This technique also leverages the attacker advantages outlined in "Demand insurmountable proof", specifically the asymmetric disadvantage for truth-tellers in a "firehose of misinformation" environment. + **Tactic**: TA11 Persist in the Information Environment @@ -22,4 +69,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0129.007.md b/generated_pages/techniques/T0129.007.md index 05ba34f..7660510 100644 --- a/generated_pages/techniques/T0129.007.md +++ b/generated_pages/techniques/T0129.007.md @@ -2,6 +2,48 @@ **Summary**: Deleting accounts and account activity occurs when an influence operation removes its online social media assets, including social media accounts, posts, likes, comments, and other online artefacts. An influence operation may delete its accounts and account activity to complicate attribution or remove online documentation that the operation ever occurred. +**Tactic**: TA11 Persist in the Information Environment **Parent Technique:** T0129 Conceal Operational Activity + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0129.007: Delete Accounts/Account Activity + +**Summary**: Deleting accounts and account activity occurs when an influence operation removes its online social media assets, including social media accounts, posts, likes, comments, and other online artefacts. An influence operation may delete its accounts and account activity to complicate attribution or remove online documentation that the operation ever occurred. + +**Tactic**: TA11 Persist in the Information Environment **Parent Technique:** T0129 Conceal Operational Activity + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0129.007: Delete Accounts/Account Activity + +**Summary**: Deleting accounts and account activity occurs when an influence operation removes its online social media assets, including social media accounts, posts, likes, comments, and other online artefacts. An influence operation may delete its accounts and account activity to complicate attribution or remove online documentation that the operation ever occurred. + **Tactic**: TA11 Persist in the Information Environment @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0129.009.md b/generated_pages/techniques/T0129.009.md index 3d9ab86..740a16f 100644 --- a/generated_pages/techniques/T0129.009.md +++ b/generated_pages/techniques/T0129.009.md @@ -2,6 +2,48 @@ **Summary**: Removing post origins refers to the elimination of evidence that indicates the initial source of operation content, often to complicate attribution. 
An influence operation may remove post origins by deleting watermarks, renaming files, or removing embedded links in its content. +**Tactic**: TA11 Persist in the Information Environment **Parent Technique:** T0129 Conceal Operational Activity + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0129.009: Remove Post Origins + +**Summary**: Removing post origins refers to the elimination of evidence that indicates the initial source of operation content, often to complicate attribution. An influence operation may remove post origins by deleting watermarks, renaming files, or removing embedded links in its content. + +**Tactic**: TA11 Persist in the Information Environment **Parent Technique:** T0129 Conceal Operational Activity + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0129.009: Remove Post Origins + +**Summary**: Removing post origins refers to the elimination of evidence that indicates the initial source of operation content, often to complicate attribution. An influence operation may remove post origins by deleting watermarks, renaming files, or removing embedded links in its content. + **Tactic**: TA11 Persist in the Information Environment @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0129.010.md b/generated_pages/techniques/T0129.010.md index 8e6cdb6..9e01f53 100644 --- a/generated_pages/techniques/T0129.010.md +++ b/generated_pages/techniques/T0129.010.md @@ -2,6 +2,48 @@ **Summary**: Misattributed activity refers to incorrectly attributed operation activity. For example, a state sponsored influence operation may conduct operation activity in a way that mimics another state so that external entities misattribute activity to the incorrect state. An operation may misattribute its activities to complicate attribution, avoid detection, or frame an adversary for negative behaviour. +**Tactic**: TA11 Persist in the Information Environment **Parent Technique:** T0129 Conceal Operational Activity + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0129.010: Misattribute Activity + +**Summary**: Misattributed activity refers to incorrectly attributed operation activity. For example, a state sponsored influence operation may conduct operation activity in a way that mimics another state so that external entities misattribute activity to the incorrect state. An operation may misattribute its activities to complicate attribution, avoid detection, or frame an adversary for negative behaviour. 
+ +**Tactic**: TA11 Persist in the Information Environment **Parent Technique:** T0129 Conceal Operational Activity + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0129.010: Misattribute Activity + +**Summary**: Misattributed activity refers to incorrectly attributed operation activity. For example, a state sponsored influence operation may conduct operation activity in a way that mimics another state so that external entities misattribute activity to the incorrect state. An operation may misattribute its activities to complicate attribution, avoid detection, or frame an adversary for negative behaviour. + **Tactic**: TA11 Persist in the Information Environment @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0130.001.md b/generated_pages/techniques/T0130.001.md index 26843ba..651fa54 100644 --- a/generated_pages/techniques/T0130.001.md +++ b/generated_pages/techniques/T0130.001.md @@ -2,6 +2,48 @@ **Summary**: Concealing sponsorship aims to mislead or obscure the identity of the hidden sponsor behind an operation rather than the entity publicly running the operation. Operations that conceal sponsorship may maintain visible falsified groups, news outlets, non-profits, or other organisations, but seek to mislead or obscure the identity sponsoring, funding, or otherwise supporting these entities. Influence operations may use a variety of techniques to mask the location of their social media accounts to complicate attribution and conceal evidence of foreign interference. 
Operation accounts may set their location to a false place, often the location of the operation’s target audience, and post in the region’s language +**Tactic**: TA11 Persist in the Information Environment **Parent Technique:** T0130 Conceal Infrastructure + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0130.001: Conceal Sponsorship + +**Summary**: Concealing sponsorship aims to mislead or obscure the identity of the hidden sponsor behind an operation rather than the entity publicly running the operation. Operations that conceal sponsorship may maintain visible falsified groups, news outlets, non-profits, or other organisations, but seek to mislead or obscure the identity sponsoring, funding, or otherwise supporting these entities. Influence operations may use a variety of techniques to mask the location of their social media accounts to complicate attribution and conceal evidence of foreign interference. Operation accounts may set their location to a false place, often the location of the operation’s target audience, and post in the region’s language + +**Tactic**: TA11 Persist in the Information Environment **Parent Technique:** T0130 Conceal Infrastructure + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0130.001: Conceal Sponsorship + +**Summary**: Concealing sponsorship aims to mislead or obscure the identity of the hidden sponsor behind an operation rather than the entity publicly running the operation. Operations that conceal sponsorship may maintain visible falsified groups, news outlets, non-profits, or other organisations, but seek to mislead or obscure the identity sponsoring, funding, or otherwise supporting these entities. Influence operations may use a variety of techniques to mask the location of their social media accounts to complicate attribution and conceal evidence of foreign interference. Operation accounts may set their location to a false place, often the location of the operation’s target audience, and post in the region’s language + **Tactic**: TA11 Persist in the Information Environment @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0130.002.md b/generated_pages/techniques/T0130.002.md index 9df2a43..5d2e135 100644 --- a/generated_pages/techniques/T0130.002.md +++ b/generated_pages/techniques/T0130.002.md @@ -2,6 +2,48 @@ **Summary**: Hosting refers to services through which storage and computing resources are provided to an individual or organisation for the accommodation and maintenance of one or more websites and related services. Services may include web hosting, file sharing, and email distribution. Bulletproof hosting refers to services provided by an entity, such as a domain hosting or web hosting firm, that allows its customer considerable leniency in use of the service. An influence operation may utilise bulletproof hosting to maintain continuity of service for suspicious, illegal, or disruptive operation activities that stricter hosting services would limit, report, or suspend. 
+**Tactic**: TA11 Persist in the Information Environment **Parent Technique:** T0130 Conceal Infrastructure + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0130.002: Utilise Bulletproof Hosting + +**Summary**: Hosting refers to services through which storage and computing resources are provided to an individual or organisation for the accommodation and maintenance of one or more websites and related services. Services may include web hosting, file sharing, and email distribution. Bulletproof hosting refers to services provided by an entity, such as a domain hosting or web hosting firm, that allows its customer considerable leniency in use of the service. An influence operation may utilise bulletproof hosting to maintain continuity of service for suspicious, illegal, or disruptive operation activities that stricter hosting services would limit, report, or suspend. + +**Tactic**: TA11 Persist in the Information Environment **Parent Technique:** T0130 Conceal Infrastructure + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0130.002: Utilise Bulletproof Hosting + +**Summary**: Hosting refers to services through which storage and computing resources are provided to an individual or organisation for the accommodation and maintenance of one or more websites and related services. Services may include web hosting, file sharing, and email distribution. Bulletproof hosting refers to services provided by an entity, such as a domain hosting or web hosting firm, that allows its customer considerable leniency in use of the service. An influence operation may utilise bulletproof hosting to maintain continuity of service for suspicious, illegal, or disruptive operation activities that stricter hosting services would limit, report, or suspend. + **Tactic**: TA11 Persist in the Information Environment @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0130.003.md b/generated_pages/techniques/T0130.003.md index ed3926d..5c7000b 100644 --- a/generated_pages/techniques/T0130.003.md +++ b/generated_pages/techniques/T0130.003.md @@ -2,6 +2,48 @@ **Summary**: Use Shell Organisations to conceal sponsorship. +**Tactic**: TA11 Persist in the Information Environment **Parent Technique:** T0130 Conceal Infrastructure + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0130.003: Use Shell Organisations + +**Summary**: Use Shell Organisations to conceal sponsorship. 
+ +**Tactic**: TA11 Persist in the Information Environment **Parent Technique:** T0130 Conceal Infrastructure + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0130.003: Use Shell Organisations + +**Summary**: Use Shell Organisations to conceal sponsorship. + **Tactic**: TA11 Persist in the Information Environment @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0130.004.md b/generated_pages/techniques/T0130.004.md index ea6f7ee..c179b42 100644 --- a/generated_pages/techniques/T0130.004.md +++ b/generated_pages/techniques/T0130.004.md @@ -2,6 +2,48 @@ **Summary**: Use Cryptocurrency to conceal sponsorship. Examples include Bitcoin, Monero, and Ethereum. +**Tactic**: TA11 Persist in the Information Environment **Parent Technique:** T0130 Conceal Infrastructure + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0130.004: Use Cryptocurrency + +**Summary**: Use Cryptocurrency to conceal sponsorship. Examples include Bitcoin, Monero, and Ethereum. + +**Tactic**: TA11 Persist in the Information Environment **Parent Technique:** T0130 Conceal Infrastructure + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0130.004: Use Cryptocurrency + +**Summary**: Use Cryptocurrency to conceal sponsorship. Examples include Bitcoin, Monero, and Ethereum. 
+ **Tactic**: TA11 Persist in the Information Environment @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0130.005.md b/generated_pages/techniques/T0130.005.md index 89198c5..62866b7 100644 --- a/generated_pages/techniques/T0130.005.md +++ b/generated_pages/techniques/T0130.005.md @@ -2,6 +2,48 @@ **Summary**: Obfuscate Payment +**Tactic**: TA11 Persist in the Information Environment **Parent Technique:** T0130 Conceal Infrastructure + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0130.005: Obfuscate Payment + +**Summary**: Obfuscate Payment + +**Tactic**: TA11 Persist in the Information Environment **Parent Technique:** T0130 Conceal Infrastructure + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0130.005: Obfuscate Payment + +**Summary**: Obfuscate Payment + **Tactic**: TA11 Persist in the Information Environment @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0130.md b/generated_pages/techniques/T0130.md index ff8d368..6d9fc60 100644 --- a/generated_pages/techniques/T0130.md +++ b/generated_pages/techniques/T0130.md @@ -2,6 +2,48 @@ **Summary**: Conceal the campaign's infrastructure to avoid takedown and attribution. +**Tactic**: TA11 Persist in the Information Environment + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0130: Conceal Infrastructure + +**Summary**: Conceal the campaign's infrastructure to avoid takedown and attribution. + +**Tactic**: TA11 Persist in the Information Environment + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0130: Conceal Infrastructure + +**Summary**: Conceal the campaign's infrastructure to avoid takedown and attribution. + **Tactic**: TA11 Persist in the Information Environment @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0131.001.md b/generated_pages/techniques/T0131.001.md index ec15e38..6b7ce85 100644 --- a/generated_pages/techniques/T0131.001.md +++ b/generated_pages/techniques/T0131.001.md @@ -2,6 +2,48 @@ **Summary**: Make incident content visible for a long time, e.g. by exploiting platform terms of service, or placing it where it's hard to remove or unlikely to be removed. 
+**Tactic**: TA11 Persist in the Information Environment **Parent Technique:** T0131 Exploit TOS/Content Moderation + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0131.001: Legacy Web Content + +**Summary**: Make incident content visible for a long time, e.g. by exploiting platform terms of service, or placing it where it's hard to remove or unlikely to be removed. + +**Tactic**: TA11 Persist in the Information Environment **Parent Technique:** T0131 Exploit TOS/Content Moderation + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0131.001: Legacy Web Content + +**Summary**: Make incident content visible for a long time, e.g. by exploiting platform terms of service, or placing it where it's hard to remove or unlikely to be removed. + **Tactic**: TA11 Persist in the Information Environment @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0131.002.md b/generated_pages/techniques/T0131.002.md index 3afbd3f..ab0ec02 100644 --- a/generated_pages/techniques/T0131.002.md +++ b/generated_pages/techniques/T0131.002.md @@ -2,6 +2,48 @@ **Summary**: Post Borderline Content +**Tactic**: TA11 Persist in the Information Environment **Parent Technique:** T0131 Exploit TOS/Content Moderation + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0131.002: Post Borderline Content + +**Summary**: Post Borderline Content + +**Tactic**: TA11 Persist in the Information Environment **Parent Technique:** T0131 Exploit TOS/Content Moderation + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0131.002: Post Borderline Content + +**Summary**: Post Borderline Content + **Tactic**: TA11 Persist in the Information Environment @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0131.md b/generated_pages/techniques/T0131.md index 2efff0d..ea422b4 100644 --- a/generated_pages/techniques/T0131.md +++ b/generated_pages/techniques/T0131.md @@ -2,6 +2,50 @@ **Summary**: Exploiting weaknesses in platforms' terms of service and content moderation policies to avoid takedowns and platform actions. 
+**Tactic**: TA11 Persist in the Information Environment + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00084 Russia turns its diplomats into disinformation warriors](../../generated_pages/incidents/I00084.md) | “After the European Union banned Kremlin-backed media outlets and social media giants demoted their posts for peddling falsehoods about the war in Ukraine, Moscow has turned to its cadre of diplomats, government spokespeople and ministers — many of whom have extensive followings on social media — to promote disinformation about the conflict in Eastern Europe, according to four EU and United States officials.”

In this example authentic Russian government officials used their own accounts to promote false narratives (T0143.001: Authentic Persona, T0097.111: Government Official Persona).

The use of accounts managed by authentic government officials and diplomats to spread false narratives makes it harder for platforms to enforce content moderation, because of the political ramifications they may face for censoring elected officials (T0131: Exploit TOS/Content Moderation). For example, Twitter previously argued that official channels of world leaders are not removed due to the high public interest associated with their activities. | + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0131: Exploit TOS/Content Moderation + +**Summary**: Exploiting weaknesses in platforms' terms of service and content moderation policies to avoid takedowns and platform actions. + +**Tactic**: TA11 Persist in the Information Environment + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00084 Russia turns its diplomats into disinformation warriors](../../generated_pages/incidents/I00084.md) | “After the European Union banned Kremlin-backed media outlets and social media giants demoted their posts for peddling falsehoods about the war in Ukraine, Moscow has turned to its cadre of diplomats, government spokespeople and ministers — many of whom have extensive followings on social media — to promote disinformation about the conflict in Eastern Europe, according to four EU and United States officials.”

In this example authentic Russian government officials used their own accounts to promote false narratives (T0143.001: Authentic Persona, T0097.111: Government Official Persona).

The use of accounts managed by authentic government officials and diplomats to spread false narratives makes it harder for platforms to enforce content moderation, because of the political ramifications they may face for censoring elected officials (T0131: Exploit TOS/Content Moderation). For example, Twitter previously argued that official channels of world leaders are not removed due to the high public interest associated with their activities. | + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0131: Exploit TOS/Content Moderation + +**Summary**: Exploiting weaknesses in platforms' terms of service and content moderation policies to avoid takedowns and platform actions. + **Tactic**: TA11 Persist in the Information Environment @@ -20,4 +64,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0132.001.md b/generated_pages/techniques/T0132.001.md index 60d2425..c9299a2 100644 --- a/generated_pages/techniques/T0132.001.md +++ b/generated_pages/techniques/T0132.001.md @@ -2,6 +2,48 @@ **Summary**: Measure the performance of individuals in achieving campaign goals +**Tactic**: TA12 Assess Effectiveness **Parent Technique:** T0132 Measure Performance + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0132.001: People Focused + +**Summary**: Measure the performance of individuals in achieving campaign goals + +**Tactic**: TA12 Assess Effectiveness **Parent Technique:** T0132 Measure Performance + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0132.001: People Focused + +**Summary**: Measure the performance of individuals in achieving campaign goals + **Tactic**: TA12 Assess Effectiveness @@ -19,4 +61,3 @@ | -------- | 
-------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0132.003.md b/generated_pages/techniques/T0132.003.md index ae8437a..557a251 100644 --- a/generated_pages/techniques/T0132.003.md +++ b/generated_pages/techniques/T0132.003.md @@ -2,6 +2,48 @@ **Summary**: View Focused +**Tactic**: TA12 Assess Effectiveness **Parent Technique:** T0132 Measure Performance + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0132.003: View Focused + +**Summary**: View Focused + +**Tactic**: TA12 Assess Effectiveness **Parent Technique:** T0132 Measure Performance + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0132.003: View Focused + +**Summary**: View Focused + **Tactic**: TA12 Assess Effectiveness @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0132.md b/generated_pages/techniques/T0132.md index b77d51f..21b00ed 100644 --- a/generated_pages/techniques/T0132.md +++ b/generated_pages/techniques/T0132.md @@ -2,6 +2,48 @@ **Summary**: A metric used to determine the accomplishment of actions. “Are the actions being executed as planned?” +**Tactic**: TA12 Assess Effectiveness + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0132: Measure Performance + +**Summary**: A metric used to determine the accomplishment of actions. “Are the actions being executed as planned?” + +**Tactic**: TA12 Assess Effectiveness + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0132: Measure Performance + +**Summary**: A metric used to determine the accomplishment of actions. “Are the actions being executed as planned?” + **Tactic**: TA12 Assess Effectiveness @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0133.001.md b/generated_pages/techniques/T0133.001.md index d5c7067..49cfe8f 100644 --- a/generated_pages/techniques/T0133.001.md +++ b/generated_pages/techniques/T0133.001.md @@ -2,6 +2,48 @@ **Summary**: Monitor and evaluate behaviour changes from misinformation incidents. +**Tactic**: TA12 Assess Effectiveness **Parent Technique:** T0133 Measure Effectiveness + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0133.001: Behaviour Changes + +**Summary**: Monitor and evaluate behaviour changes from misinformation incidents. 
+ +**Tactic**: TA12 Assess Effectiveness **Parent Technique:** T0133 Measure Effectiveness + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0133.001: Behaviour Changes + +**Summary**: Monitor and evaluate behaviour changes from misinformation incidents. + **Tactic**: TA12 Assess Effectiveness @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0133.002.md b/generated_pages/techniques/T0133.002.md index 9267183..6ff0e6e 100644 --- a/generated_pages/techniques/T0133.002.md +++ b/generated_pages/techniques/T0133.002.md @@ -2,6 +2,48 @@ **Summary**: Measure current system state with respect to the effectiveness of campaign content. +**Tactic**: TA12 Assess Effectiveness **Parent Technique:** T0133 Measure Effectiveness + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0133.002: Content + +**Summary**: Measure current system state with respect to the effectiveness of campaign content. + +**Tactic**: TA12 Assess Effectiveness **Parent Technique:** T0133 Measure Effectiveness + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0133.002: Content + +**Summary**: Measure current system state with respect to the effectiveness of campaign content. + **Tactic**: TA12 Assess Effectiveness @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0133.003.md b/generated_pages/techniques/T0133.003.md index 4743e1a..2561f4e 100644 --- a/generated_pages/techniques/T0133.003.md +++ b/generated_pages/techniques/T0133.003.md @@ -2,6 +2,48 @@ **Summary**: Measure current system state with respect to the effectiveness of influencing awareness. +**Tactic**: TA12 Assess Effectiveness **Parent Technique:** T0133 Measure Effectiveness + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0133.003: Awareness + +**Summary**: Measure current system state with respect to the effectiveness of influencing awareness. + +**Tactic**: TA12 Assess Effectiveness **Parent Technique:** T0133 Measure Effectiveness + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0133.003: Awareness + +**Summary**: Measure current system state with respect to the effectiveness of influencing awareness. 
+ **Tactic**: TA12 Assess Effectiveness @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0133.004.md b/generated_pages/techniques/T0133.004.md index 577d0ff..c3bf056 100644 --- a/generated_pages/techniques/T0133.004.md +++ b/generated_pages/techniques/T0133.004.md @@ -2,6 +2,48 @@ **Summary**: Measure current system state with respect to the effectiveness of influencing knowledge. +**Tactic**: TA12 Assess Effectiveness **Parent Technique:** T0133 Measure Effectiveness + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0133.004: Knowledge + +**Summary**: Measure current system state with respect to the effectiveness of influencing knowledge. + +**Tactic**: TA12 Assess Effectiveness **Parent Technique:** T0133 Measure Effectiveness + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0133.004: Knowledge + +**Summary**: Measure current system state with respect to the effectiveness of influencing knowledge. + **Tactic**: TA12 Assess Effectiveness @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0133.005.md b/generated_pages/techniques/T0133.005.md index 8a910c0..0dadca7 100644 --- a/generated_pages/techniques/T0133.005.md +++ b/generated_pages/techniques/T0133.005.md @@ -2,6 +2,48 @@ **Summary**: Measure current system state with respect to the effectiveness of influencing action/attitude. +**Tactic**: TA12 Assess Effectiveness **Parent Technique:** T0133 Measure Effectiveness + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0133.005: Action/Attitude + +**Summary**: Measure current system state with respect to the effectiveness of influencing action/attitude. + +**Tactic**: TA12 Assess Effectiveness **Parent Technique:** T0133 Measure Effectiveness + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0133.005: Action/Attitude + +**Summary**: Measure current system state with respect to the effectiveness of influencing action/attitude. + **Tactic**: TA12 Assess Effectiveness @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0133.md b/generated_pages/techniques/T0133.md index 8bffc3c..b555e32 100644 --- a/generated_pages/techniques/T0133.md +++ b/generated_pages/techniques/T0133.md @@ -2,6 +2,48 @@ **Summary**: A metric used to measure a current system state. 
“Are we on track to achieve the intended new system state within the planned timescale?” +**Tactic**: TA12 Assess Effectiveness + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0133: Measure Effectiveness + +**Summary**: A metric used to measure a current system state. “Are we on track to achieve the intended new system state within the planned timescale?” + +**Tactic**: TA12 Assess Effectiveness + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0133: Measure Effectiveness + +**Summary**: A metric used to measure a current system state. “Are we on track to achieve the intended new system state within the planned timescale?” + **Tactic**: TA12 Assess Effectiveness @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0134.001.md b/generated_pages/techniques/T0134.001.md index d9f42e7..e9d0a59 100644 --- a/generated_pages/techniques/T0134.001.md +++ b/generated_pages/techniques/T0134.001.md @@ -2,6 +2,48 @@ **Summary**: Monitor and evaluate message reach in misinformation incidents. +**Tactic**: TA12 Assess Effectiveness **Parent Technique:** T0134 Measure Effectiveness Indicators (or KPIs) + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0134.001: Message Reach + +**Summary**: Monitor and evaluate message reach in misinformation incidents. + +**Tactic**: TA12 Assess Effectiveness **Parent Technique:** T0134 Measure Effectiveness Indicators (or KPIs) + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0134.001: Message Reach + +**Summary**: Monitor and evaluate message reach in misinformation incidents. + **Tactic**: TA12 Assess Effectiveness @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0134.002.md b/generated_pages/techniques/T0134.002.md index cbeb022..2a6871b 100644 --- a/generated_pages/techniques/T0134.002.md +++ b/generated_pages/techniques/T0134.002.md @@ -2,6 +2,48 @@ **Summary**: Monitor and evaluate social media engagement in misinformation incidents. +**Tactic**: TA12 Assess Effectiveness **Parent Technique:** T0134 Measure Effectiveness Indicators (or KPIs) + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0134.002: Social Media Engagement + +**Summary**: Monitor and evaluate social media engagement in misinformation incidents. 
+ +**Tactic**: TA12 Assess Effectiveness **Parent Technique:** T0134 Measure Effectiveness Indicators (or KPIs) + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0134.002: Social Media Engagement + +**Summary**: Monitor and evaluate social media engagement in misinformation incidents. + **Tactic**: TA12 Assess Effectiveness @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0134.md b/generated_pages/techniques/T0134.md index 96c2e6d..013c5af 100644 --- a/generated_pages/techniques/T0134.md +++ b/generated_pages/techniques/T0134.md @@ -2,6 +2,48 @@ **Summary**: Ensuring that Key Performance Indicators are identified and tracked, so that the performance and effectiveness of campaigns, and elements of campaigns, can be measured, during and after their execution. +**Tactic**: TA12 Assess Effectiveness + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0134: Measure Effectiveness Indicators (or KPIs) + +**Summary**: Ensuring that Key Performance Indicators are identified and tracked, so that the performance and effectiveness of campaigns, and elements of campaigns, can be measured, during and after their execution. + +**Tactic**: TA12 Assess Effectiveness + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0134: Measure Effectiveness Indicators (or KPIs) + +**Summary**: Ensuring that Key Performance Indicators are identified and tracked, so that the performance and effectiveness of campaigns, and elements of campaigns, can be measured, during and after their execution. + **Tactic**: TA12 Assess Effectiveness @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0135.001.md b/generated_pages/techniques/T0135.001.md index 9d80b86..4d636b8 100644 --- a/generated_pages/techniques/T0135.001.md +++ b/generated_pages/techniques/T0135.001.md @@ -2,6 +2,48 @@ **Summary**: Denigrate, disparage, or discredit an opponent. This is a common tactical objective in political campaigns with a larger strategic goal. It differs from efforts to harm a target through defamation. If there is no ulterior motive and the sole aim is to cause harm to the target, then choose sub-technique “Defame” of technique “Cause Harm” instead. +**Tactic**: TA02 Plan Objectives **Parent Technique:** T0135 Undermine + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0135.001: Smear + +**Summary**: Denigrate, disparage, or discredit an opponent. This is a common tactical objective in political campaigns with a larger strategic goal. 
It differs from efforts to harm a target through defamation. If there is no ulterior motive and the sole aim is to cause harm to the target, then choose sub-technique “Defame” of technique “Cause Harm” instead. + +**Tactic**: TA02 Plan Objectives **Parent Technique:** T0135 Undermine + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0135.001: Smear + +**Summary**: Denigrate, disparage, or discredit an opponent. This is a common tactical objective in political campaigns with a larger strategic goal. It differs from efforts to harm a target through defamation. If there is no ulterior motive and the sole aim is to cause harm to the target, then choose sub-technique “Defame” of technique “Cause Harm” instead. + **Tactic**: TA02 Plan Objectives @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0135.002.md b/generated_pages/techniques/T0135.002.md index f6e8610..990c08e 100644 --- a/generated_pages/techniques/T0135.002.md +++ b/generated_pages/techniques/T0135.002.md @@ -2,6 +2,48 @@ **Summary**: Prevent the successful outcome of a policy, operation, or initiative. Actors conduct influence operations to stymie or foil proposals, plans, or courses of action which are not in their interest. +**Tactic**: TA02 Plan Objectives **Parent Technique:** T0135 Undermine + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0135.002: Thwart + +**Summary**: Prevent the successful outcome of a policy, operation, or initiative. Actors conduct influence operations to stymie or foil proposals, plans, or courses of action which are not in their interest. + +**Tactic**: TA02 Plan Objectives **Parent Technique:** T0135 Undermine + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0135.002: Thwart + +**Summary**: Prevent the successful outcome of a policy, operation, or initiative. Actors conduct influence operations to stymie or foil proposals, plans, or courses of action which are not in their interest. + **Tactic**: TA02 Plan Objectives @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0135.003.md b/generated_pages/techniques/T0135.003.md index 447c142..0e865a0 100644 --- a/generated_pages/techniques/T0135.003.md +++ b/generated_pages/techniques/T0135.003.md @@ -2,6 +2,48 @@ **Summary**: Sabotage, destroy, or damage a system, process, or relationship. The classic example is the Soviet strategy of “active measures” involving deniable covert activities such as political influence, the use of front organisations, the orchestration of domestic unrest, and the spread of disinformation. 
+**Tactic**: TA02 Plan Objectives **Parent Technique:** T0135 Undermine + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0135.003: Subvert + +**Summary**: Sabotage, destroy, or damage a system, process, or relationship. The classic example is the Soviet strategy of “active measures” involving deniable covert activities such as political influence, the use of front organisations, the orchestration of domestic unrest, and the spread of disinformation. + +**Tactic**: TA02 Plan Objectives **Parent Technique:** T0135 Undermine + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0135.003: Subvert + +**Summary**: Sabotage, destroy, or damage a system, process, or relationship. The classic example is the Soviet strategy of “active measures” involving deniable covert activities such as political influence, the use of front organisations, the orchestration of domestic unrest, and the spread of disinformation. + **Tactic**: TA02 Plan Objectives @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0135.004.md b/generated_pages/techniques/T0135.004.md index 9ba87fa..fc2f923 100644 --- a/generated_pages/techniques/T0135.004.md +++ b/generated_pages/techniques/T0135.004.md @@ -2,6 +2,48 @@ **Summary**: To cause a target audience to divide into two completely opposing groups. This is a special case of subversion. To divide and conquer is an age-old approach to subverting and overcoming an enemy. +**Tactic**: TA02 Plan Objectives **Parent Technique:** T0135 Undermine + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0135.004: Polarise + +**Summary**: To cause a target audience to divide into two completely opposing groups. This is a special case of subversion. To divide and conquer is an age-old approach to subverting and overcoming an enemy. + +**Tactic**: TA02 Plan Objectives **Parent Technique:** T0135 Undermine + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0135.004: Polarise + +**Summary**: To cause a target audience to divide into two completely opposing groups. This is a special case of subversion. To divide and conquer is an age-old approach to subverting and overcoming an enemy. 
+ **Tactic**: TA02 Plan Objectives @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0135.md b/generated_pages/techniques/T0135.md index d43338b..e570005 100644 --- a/generated_pages/techniques/T0135.md +++ b/generated_pages/techniques/T0135.md @@ -2,6 +2,48 @@ **Summary**: Weaken, debilitate, or subvert a target or their actions. An influence operation may be designed to disparage an opponent; sabotage an opponent’s systems or processes; compromise an opponent’s relationships or support system; impair an opponent’s capability; or thwart an opponent’s initiative. +**Tactic**: TA02 Plan Objectives + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0135: Undermine + +**Summary**: Weaken, debilitate, or subvert a target or their actions. An influence operation may be designed to disparage an opponent; sabotage an opponent’s systems or processes; compromise an opponent’s relationships or support system; impair an opponent’s capability; or thwart an opponent’s initiative. + +**Tactic**: TA02 Plan Objectives + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0135: Undermine + +**Summary**: Weaken, debilitate, or subvert a target or their actions. An influence operation may be designed to disparage an opponent; sabotage an opponent’s systems or processes; compromise an opponent’s relationships or support system; impair an opponent’s capability; or thwart an opponent’s initiative. + **Tactic**: TA02 Plan Objectives @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0136.001.md b/generated_pages/techniques/T0136.001.md index 431bcd8..82a284f 100644 --- a/generated_pages/techniques/T0136.001.md +++ b/generated_pages/techniques/T0136.001.md @@ -2,6 +2,48 @@ **Summary**: Preserve a positive perception in the public’s mind following an accusation or adverse event. When accused of a wrongful act, an actor may engage in denial, counter accusations, whataboutism, or conspiracy theories to distract public attention and attempt to maintain a positive image. +**Tactic**: TA02 Plan Objectives **Parent Technique:** T0136 Cultivate Support + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0136.001: Defend Reputation + +**Summary**: Preserve a positive perception in the public’s mind following an accusation or adverse event. When accused of a wrongful act, an actor may engage in denial, counter accusations, whataboutism, or conspiracy theories to distract public attention and attempt to maintain a positive image. 
+ +**Tactic**: TA02 Plan Objectives **Parent Technique:** T0136 Cultivate Support + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0136.001: Defend Reputation + +**Summary**: Preserve a positive perception in the public’s mind following an accusation or adverse event. When accused of a wrongful act, an actor may engage in denial, counter accusations, whataboutism, or conspiracy theories to distract public attention and attempt to maintain a positive image. + **Tactic**: TA02 Plan Objectives @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0136.002.md b/generated_pages/techniques/T0136.002.md index eb73cb1..f25a9f1 100644 --- a/generated_pages/techniques/T0136.002.md +++ b/generated_pages/techniques/T0136.002.md @@ -2,6 +2,48 @@ **Summary**: To convince others to exonerate you of a perceived wrongdoing. When an actor finds it untenable to deny doing something, they may attempt to exonerate themselves with disinformation which claims the action was reasonable. This is a special case of “Defend Reputation”. +**Tactic**: TA02 Plan Objectives **Parent Technique:** T0136 Cultivate Support + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0136.002: Justify Action + +**Summary**: To convince others to exonerate you of a perceived wrongdoing. When an actor finds it untenable to deny doing something, they may attempt to exonerate themselves with disinformation which claims the action was reasonable. This is a special case of “Defend Reputation”. + +**Tactic**: TA02 Plan Objectives **Parent Technique:** T0136 Cultivate Support + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0136.002: Justify Action + +**Summary**: To convince others to exonerate you of a perceived wrongdoing. When an actor finds it untenable to deny doing something, they may attempt to exonerate themselves with disinformation which claims the action was reasonable. This is a special case of “Defend Reputation”. + **Tactic**: TA02 Plan Objectives @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0136.003.md b/generated_pages/techniques/T0136.003.md index 8d48070..607c243 100644 --- a/generated_pages/techniques/T0136.003.md +++ b/generated_pages/techniques/T0136.003.md @@ -2,6 +2,48 @@ **Summary**: Raise the morale of those who support the organisation or group. Invigorate constituents with zeal for the mission or activity. Terrorist groups, political movements, and cults may indoctrinate their supporters with ideologies that are based on warped versions of religion or cause harm to others. 
+**Tactic**: TA02 Plan Objectives **Parent Technique:** T0136 Cultivate Support + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0136.003: Energise Supporters + +**Summary**: Raise the morale of those who support the organisation or group. Invigorate constituents with zeal for the mission or activity. Terrorist groups, political movements, and cults may indoctrinate their supporters with ideologies that are based on warped versions of religion or cause harm to others. + +**Tactic**: TA02 Plan Objectives **Parent Technique:** T0136 Cultivate Support + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0136.003: Energise Supporters + +**Summary**: Raise the morale of those who support the organisation or group. Invigorate constituents with zeal for the mission or activity. Terrorist groups, political movements, and cults may indoctrinate their supporters with ideologies that are based on warped versions of religion or cause harm to others. + **Tactic**: TA02 Plan Objectives @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0136.004.md b/generated_pages/techniques/T0136.004.md index 207890f..5bfdf8d 100644 --- a/generated_pages/techniques/T0136.004.md +++ b/generated_pages/techniques/T0136.004.md @@ -2,6 +2,48 @@ **Summary**: Elevate the estimation of the actor in the public’s mind. Improve their image or standing. Public relations professionals use persuasive overt communications to achieve this goal; manipulators use covert disinformation. +**Tactic**: TA02 Plan Objectives **Parent Technique:** T0136 Cultivate Support + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0136.004: Boost Reputation + +**Summary**: Elevate the estimation of the actor in the public’s mind. Improve their image or standing. Public relations professionals use persuasive overt communications to achieve this goal; manipulators use covert disinformation. + +**Tactic**: TA02 Plan Objectives **Parent Technique:** T0136 Cultivate Support + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0136.004: Boost Reputation + +**Summary**: Elevate the estimation of the actor in the public’s mind. Improve their image or standing. Public relations professionals use persuasive overt communications to achieve this goal; manipulators use covert disinformation. 
+ **Tactic**: TA02 Plan Objectives @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0136.005.md b/generated_pages/techniques/T0136.005.md index e55c57d..d6bdae4 100644 --- a/generated_pages/techniques/T0136.005.md +++ b/generated_pages/techniques/T0136.005.md @@ -2,6 +2,48 @@ **Summary**: Elevate or fortify the public backing for a policy, operation, or idea. Domestic and foreign actors can use artificial means to fabricate or amplify public support for a proposal or action. +**Tactic**: TA02 Plan Objectives **Parent Technique:** T0136 Cultivate Support + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0136.005: Cultivate Support for Initiative + +**Summary**: Elevate or fortify the public backing for a policy, operation, or idea. Domestic and foreign actors can use artificial means to fabricate or amplify public support for a proposal or action. + +**Tactic**: TA02 Plan Objectives **Parent Technique:** T0136 Cultivate Support + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0136.005: Cultivate Support for Initiative + +**Summary**: Elevate or fortify the public backing for a policy, operation, or idea. Domestic and foreign actors can use artificial means to fabricate or amplify public support for a proposal or action. + **Tactic**: TA02 Plan Objectives @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0136.006.md b/generated_pages/techniques/T0136.006.md index c4b7384..c458795 100644 --- a/generated_pages/techniques/T0136.006.md +++ b/generated_pages/techniques/T0136.006.md @@ -2,6 +2,48 @@ **Summary**: Elevate or fortify the public backing for a partner. Governments may interfere in other countries’ elections by covertly favouring a party or candidate aligned with their interests. They may also mount an influence operation to bolster the reputation of an ally under attack. +**Tactic**: TA02 Plan Objectives **Parent Technique:** T0136 Cultivate Support + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0136.006: Cultivate Support for Ally + +**Summary**: Elevate or fortify the public backing for a partner. Governments may interfere in other countries’ elections by covertly favouring a party or candidate aligned with their interests. They may also mount an influence operation to bolster the reputation of an ally under attack. 
+ +**Tactic**: TA02 Plan Objectives **Parent Technique:** T0136 Cultivate Support + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0136.006: Cultivate Support for Ally + +**Summary**: Elevate or fortify the public backing for a partner. Governments may interfere in other countries’ elections by covertly favouring a party or candidate aligned with their interests. They may also mount an influence operation to bolster the reputation of an ally under attack. + **Tactic**: TA02 Plan Objectives @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0136.007.md b/generated_pages/techniques/T0136.007.md index 59d542e..04123e3 100644 --- a/generated_pages/techniques/T0136.007.md +++ b/generated_pages/techniques/T0136.007.md @@ -2,6 +2,48 @@ **Summary**: Motivate followers to join or subscribe as members of the team. Organisations may mount recruitment drives that use propaganda to entice sympathisers to sign up. +**Tactic**: TA02 Plan Objectives **Parent Technique:** T0136 Cultivate Support + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0136.007: Recruit Members + +**Summary**: Motivate followers to join or subscribe as members of the team. Organisations may mount recruitment drives that use propaganda to entice sympathisers to sign up. + +**Tactic**: TA02 Plan Objectives **Parent Technique:** T0136 Cultivate Support + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0136.007: Recruit Members + +**Summary**: Motivate followers to join or subscribe as members of the team. Organisations may mount recruitment drives that use propaganda to entice sympathisers to sign up. + **Tactic**: TA02 Plan Objectives @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0136.008.md b/generated_pages/techniques/T0136.008.md index e49262e..4693b20 100644 --- a/generated_pages/techniques/T0136.008.md +++ b/generated_pages/techniques/T0136.008.md @@ -2,6 +2,48 @@ **Summary**: Improve personal standing within a community. Gain fame, approbation, or notoriety. Conspiracy theorists, those with special access, and ideologues can gain prominence in a community by propagating disinformation, leaking confidential documents, or spreading hate. +**Tactic**: TA02 Plan Objectives **Parent Technique:** T0136 Cultivate Support + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0136.008: Increase Prestige + +**Summary**: Improve personal standing within a community. Gain fame, approbation, or notoriety. 
Conspiracy theorists, those with special access, and ideologues can gain prominence in a community by propagating disinformation, leaking confidential documents, or spreading hate. + +**Tactic**: TA02 Plan Objectives **Parent Technique:** T0136 Cultivate Support + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0136.008: Increase Prestige + +**Summary**: Improve personal standing within a community. Gain fame, approbation, or notoriety. Conspiracy theorists, those with special access, and ideologues can gain prominence in a community by propagating disinformation, leaking confidential documents, or spreading hate. + **Tactic**: TA02 Plan Objectives @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0136.md b/generated_pages/techniques/T0136.md index 576dd09..603fd63 100644 --- a/generated_pages/techniques/T0136.md +++ b/generated_pages/techniques/T0136.md @@ -2,6 +2,48 @@ **Summary**: Grow or maintain the base of support for the actor, ally, or action. This includes hard core recruitment, managing alliances, and generating or maintaining sympathy among a wider audience, including reputation management and public relations. Sub-techniques assume support for actor (self) unless otherwise specified. +**Tactic**: TA02 Plan Objectives + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0136: Cultivate Support + +**Summary**: Grow or maintain the base of support for the actor, ally, or action. This includes hard core recruitment, managing alliances, and generating or maintaining sympathy among a wider audience, including reputation management and public relations. Sub-techniques assume support for actor (self) unless otherwise specified. + +**Tactic**: TA02 Plan Objectives + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0136: Cultivate Support + +**Summary**: Grow or maintain the base of support for the actor, ally, or action. This includes hard core recruitment, managing alliances, and generating or maintaining sympathy among a wider audience, including reputation management and public relations. Sub-techniques assume support for actor (self) unless otherwise specified. + **Tactic**: TA02 Plan Objectives @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0137.001.md b/generated_pages/techniques/T0137.001.md index 7a5bd74..1e88134 100644 --- a/generated_pages/techniques/T0137.001.md +++ b/generated_pages/techniques/T0137.001.md @@ -2,6 +2,48 @@ **Summary**: Earn income from digital advertisements published alongside inauthentic content. Conspiratorial, false, or provocative content drives internet traffic. 
Content owners earn money from impressions of, or clicks on, or conversions of ads published on their websites, social media profiles, or streaming services, or ads published when their content appears in search engine results. Fraudsters simulate impressions, clicks, and conversions, or they spin up inauthentic sites or social media profiles just to generate ad revenue. Conspiracy theorists and political operators generate ad revenue as a byproduct of their operation or as a means of sustaining their campaign. +**Tactic**: TA02 Plan Objectives **Parent Technique:** T0137 Make Money + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0137.001: Generate Ad Revenue + +**Summary**: Earn income from digital advertisements published alongside inauthentic content. Conspiratorial, false, or provocative content drives internet traffic. Content owners earn money from impressions of, or clicks on, or conversions of ads published on their websites, social media profiles, or streaming services, or ads published when their content appears in search engine results. Fraudsters simulate impressions, clicks, and conversions, or they spin up inauthentic sites or social media profiles just to generate ad revenue. Conspiracy theorists and political operators generate ad revenue as a byproduct of their operation or as a means of sustaining their campaign. + +**Tactic**: TA02 Plan Objectives **Parent Technique:** T0137 Make Money + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0137.001: Generate Ad Revenue + +**Summary**: Earn income from digital advertisements published alongside inauthentic content. Conspiratorial, false, or provocative content drives internet traffic. Content owners earn money from impressions of, or clicks on, or conversions of ads published on their websites, social media profiles, or streaming services, or ads published when their content appears in search engine results. Fraudsters simulate impressions, clicks, and conversions, or they spin up inauthentic sites or social media profiles just to generate ad revenue. Conspiracy theorists and political operators generate ad revenue as a byproduct of their operation or as a means of sustaining their campaign. + **Tactic**: TA02 Plan Objectives @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0137.002.md b/generated_pages/techniques/T0137.002.md index c145562..1e5a974 100644 --- a/generated_pages/techniques/T0137.002.md +++ b/generated_pages/techniques/T0137.002.md @@ -2,6 +2,48 @@ **Summary**: Defraud a target or trick a target into doing something that benefits the attacker. A typical scam is where a fraudster convinces a target to pay for something without the intention of ever delivering anything in return. Alternatively, the fraudster may promise benefits which never materialise, such as a fake cure. Criminals often exploit a fear or crisis or generate a sense of urgency. They may use deepfakes to impersonate authority figures or individuals in distress. 
+**Tactic**: TA02 Plan Objectives **Parent Technique:** T0137 Make Money + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0137.002: Scam + +**Summary**: Defraud a target or trick a target into doing something that benefits the attacker. A typical scam is where a fraudster convinces a target to pay for something without the intention of ever delivering anything in return. Alternatively, the fraudster may promise benefits which never materialise, such as a fake cure. Criminals often exploit a fear or crisis or generate a sense of urgency. They may use deepfakes to impersonate authority figures or individuals in distress. + +**Tactic**: TA02 Plan Objectives **Parent Technique:** T0137 Make Money + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0137.002: Scam + +**Summary**: Defraud a target or trick a target into doing something that benefits the attacker. A typical scam is where a fraudster convinces a target to pay for something without the intention of ever delivering anything in return. Alternatively, the fraudster may promise benefits which never materialise, such as a fake cure. Criminals often exploit a fear or crisis or generate a sense of urgency. They may use deepfakes to impersonate authority figures or individuals in distress. + **Tactic**: TA02 Plan Objectives @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0137.003.md b/generated_pages/techniques/T0137.003.md index b0db415..41997a4 100644 --- a/generated_pages/techniques/T0137.003.md +++ b/generated_pages/techniques/T0137.003.md @@ -2,6 +2,48 @@ **Summary**: Solicit donations for a cause. Popular conspiracy theorists can attract financial contributions from their followers. Fighting back against the establishment is a popular crowdfunding narrative. +**Tactic**: TA02 Plan Objectives **Parent Technique:** T0137 Make Money + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0137.003: Raise Funds + +**Summary**: Solicit donations for a cause. Popular conspiracy theorists can attract financial contributions from their followers. Fighting back against the establishment is a popular crowdfunding narrative. + +**Tactic**: TA02 Plan Objectives **Parent Technique:** T0137 Make Money + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0137.003: Raise Funds + +**Summary**: Solicit donations for a cause. Popular conspiracy theorists can attract financial contributions from their followers. Fighting back against the establishment is a popular crowdfunding narrative. 
+ **Tactic**: TA02 Plan Objectives @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0137.004.md b/generated_pages/techniques/T0137.004.md index 9b21c07..95e7018 100644 --- a/generated_pages/techniques/T0137.004.md +++ b/generated_pages/techniques/T0137.004.md @@ -2,6 +2,48 @@ **Summary**: Offer products for sale under false pretences. Campaigns may hijack or create causes built on disinformation to sell promotional merchandise. Or charlatans may amplify victims’ unfounded fears to sell them items of questionable utility such as supplements or survival gear. +**Tactic**: TA02 Plan Objectives **Parent Technique:** T0137 Make Money + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0137.004: Sell Items under False Pretences + +**Summary**: Offer products for sale under false pretences. Campaigns may hijack or create causes built on disinformation to sell promotional merchandise. Or charlatans may amplify victims’ unfounded fears to sell them items of questionable utility such as supplements or survival gear. + +**Tactic**: TA02 Plan Objectives **Parent Technique:** T0137 Make Money + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0137.004: Sell Items under False Pretences + +**Summary**: Offer products for sale under false pretences. Campaigns may hijack or create causes built on disinformation to sell promotional merchandise. Or charlatans may amplify victims’ unfounded fears to sell them items of questionable utility such as supplements or survival gear. + **Tactic**: TA02 Plan Objectives @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0137.005.md b/generated_pages/techniques/T0137.005.md index f48192e..dc31629 100644 --- a/generated_pages/techniques/T0137.005.md +++ b/generated_pages/techniques/T0137.005.md @@ -2,6 +2,48 @@ **Summary**: Coerce money or favours from a target by threatening to expose or corrupt information. Ransomware criminals typically demand money. Intelligence agencies demand national secrets. Sexual predators demand favours. The leverage may be critical, sensitive, or embarrassing information. +**Tactic**: TA02 Plan Objectives **Parent Technique:** T0137 Make Money + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0137.005: Extort + +**Summary**: Coerce money or favours from a target by threatening to expose or corrupt information. Ransomware criminals typically demand money. Intelligence agencies demand national secrets. Sexual predators demand favours. The leverage may be critical, sensitive, or embarrassing information. 
+ +**Tactic**: TA02 Plan Objectives **Parent Technique:** T0137 Make Money + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0137.005: Extort + +**Summary**: Coerce money or favours from a target by threatening to expose or corrupt information. Ransomware criminals typically demand money. Intelligence agencies demand national secrets. Sexual predators demand favours. The leverage may be critical, sensitive, or embarrassing information. + **Tactic**: TA02 Plan Objectives @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0137.006.md b/generated_pages/techniques/T0137.006.md index caef976..358b71e 100644 --- a/generated_pages/techniques/T0137.006.md +++ b/generated_pages/techniques/T0137.006.md @@ -2,6 +2,48 @@ **Summary**: Artificially inflate or deflate the price of stocks or other financial instruments and then trade on these to make profit. The most common securities fraud schemes are called “pump and dump” and “poop and scoop”. +**Tactic**: TA02 Plan Objectives **Parent Technique:** T0137 Make Money + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0137.006: Manipulate Stocks + +**Summary**: Artificially inflate or deflate the price of stocks or other financial instruments and then trade on these to make profit. The most common securities fraud schemes are called “pump and dump” and “poop and scoop”. + +**Tactic**: TA02 Plan Objectives **Parent Technique:** T0137 Make Money + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0137.006: Manipulate Stocks + +**Summary**: Artificially inflate or deflate the price of stocks or other financial instruments and then trade on these to make profit. The most common securities fraud schemes are called “pump and dump” and “poop and scoop”. + **Tactic**: TA02 Plan Objectives @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0137.md b/generated_pages/techniques/T0137.md index 944e104..a27eb5d 100644 --- a/generated_pages/techniques/T0137.md +++ b/generated_pages/techniques/T0137.md @@ -2,6 +2,50 @@ **Summary**: Profit from disinformation, conspiracy theories, or online harm. In some cases, the sole objective is financial gain, in other cases the objective is both financial and political. Making money may also be a way to sustain a political campaign. 
+**Tactic**: TA02 Plan Objectives + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00075 How Russia Meddles Abroad for Profit: Cash, Trolls and a Cult Leader](../../generated_pages/incidents/I00075.md) | “But while Russia’s efforts [at election interference] in the United States fit Moscow’s campaign to upend Western democracy and rattle Mr. Putin’s geopolitical rivals, the undertaking in Madagascar often seemed to have a much simpler objective: profit.

“Before the election, a Russian company that local officials and foreign diplomats say is controlled by Mr. Prigozhin acquired a major stake in a government-run company that mines chromium, a mineral valued for its use in stainless steel. The acquisition set off protests by workers complaining of unpaid wages, cancelled benefits and foreign intrusion into a sector that had been a source of national pride for Madagascar.

“It repeated a pattern in which Russia has swooped into African nations, hoping to reshape their politics for material gain. In the Central African Republic, a former Russian intelligence officer is the top security adviser to the country’s president, while companies linked to Mr. Prigozhin have spread across the nation, snapping up diamonds in both legal and illegal ways, according to government officials, warlords in the diamond trade and registration documents showing Mr. Prigozhin’s growing military and commercial footprint.

[...] “[The operation switched from supporting the incumbent candidate on realising he would lose the election.] After the Russians pirouetted to help Mr. Rajoelina — their former opponent — win the election, Mr. Prigozhin’s company was able to negotiate with the new government to keep control of the chromium mining operation, despite the worker protests, and Mr. Prigozhin’s political operatives remain stationed in the capital to this day.”


This behaviour matches T0137: Make Money because analysts have asserted that the identified influence operation was in part motivated by a goal to generate profit. | + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0137: Make Money + +**Summary**: Profit from disinformation, conspiracy theories, or online harm. In some cases, the sole objective is financial gain, in other cases the objective is both financial and political. Making money may also be a way to sustain a political campaign. + +**Tactic**: TA02 Plan Objectives + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00075 How Russia Meddles Abroad for Profit: Cash, Trolls and a Cult Leader](../../generated_pages/incidents/I00075.md) | “But while Russia’s efforts [at election interference] in the United States fit Moscow’s campaign to upend Western democracy and rattle Mr. Putin’s geopolitical rivals, the undertaking in Madagascar often seemed to have a much simpler objective: profit.

“Before the election, a Russian company that local officials and foreign diplomats say is controlled by Mr. Prigozhin acquired a major stake in a government-run company that mines chromium, a mineral valued for its use in stainless steel. The acquisition set off protests by workers complaining of unpaid wages, cancelled benefits and foreign intrusion into a sector that had been a source of national pride for Madagascar.

“It repeated a pattern in which Russia has swooped into African nations, hoping to reshape their politics for material gain. In the Central African Republic, a former Russian intelligence officer is the top security adviser to the country’s president, while companies linked to Mr. Prigozhin have spread across the nation, snapping up diamonds in both legal and illegal ways, according to government officials, warlords in the diamond trade and registration documents showing Mr. Prigozhin’s growing military and commercial footprint.

[...] “[The operation switched from supporting the incumbent candidate on realising he would lose the election.] After the Russians pirouetted to help Mr. Rajoelina — their former opponent — win the election, Mr. Prigozhin’s company was able to negotiate with the new government to keep control of the chromium mining operation, despite the worker protests, and Mr. Prigozhin’s political operatives remain stationed in the capital to this day.”


This behaviour matches T0137: Make Money because analysts have asserted that the identified influence operation was in part motivated by a goal to generate profit. | + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0137: Make Money + +**Summary**: Profit from disinformation, conspiracy theories, or online harm. In some cases, the sole objective is financial gain, in other cases the objective is both financial and political. Making money may also be a way to sustain a political campaign. + **Tactic**: TA02 Plan Objectives @@ -20,4 +64,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0138.001.md b/generated_pages/techniques/T0138.001.md index bc1c449..5b9a7b8 100644 --- a/generated_pages/techniques/T0138.001.md +++ b/generated_pages/techniques/T0138.001.md @@ -2,6 +2,48 @@ **Summary**: Inspire, animate, or exhort a target to act. An actor can use propaganda, disinformation, or conspiracy theories to stimulate a target to act in its interest. +**Tactic**: TA02 Plan Objectives **Parent Technique:** T0138 Motivate to Act + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0138.001: Encourage + +**Summary**: Inspire, animate, or exhort a target to act. An actor can use propaganda, disinformation, or conspiracy theories to stimulate a target to act in its interest. + +**Tactic**: TA02 Plan Objectives **Parent Technique:** T0138 Motivate to Act + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0138.001: Encourage + +**Summary**: Inspire, animate, or exhort a target to act. An actor can use propaganda, disinformation, or conspiracy theories to stimulate a target to act in its interest. + **Tactic**: TA02 Plan Objectives @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0138.002.md b/generated_pages/techniques/T0138.002.md index b5fb258..fbf55e3 100644 --- a/generated_pages/techniques/T0138.002.md +++ b/generated_pages/techniques/T0138.002.md @@ -2,6 +2,48 @@ **Summary**: Instigate, incite, or arouse a target to act. Social media manipulators exploit moral outrage to propel targets to spread hate, take to the streets to protest, or engage in acts of violence. 
+ +**Tactic**: TA02 Plan Objectives **Parent Technique:** T0138 Motivate to Act + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0138.002: Provoke + +**Summary**: Instigate, incite, or arouse a target to act. Social media manipulators exploit moral outrage to propel targets to spread hate, take to the streets to protest, or engage in acts of violence. + **Tactic**: TA02 Plan Objectives @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0138.003.md b/generated_pages/techniques/T0138.003.md index d7b7fde..1908f44 100644 --- a/generated_pages/techniques/T0138.003.md +++ b/generated_pages/techniques/T0138.003.md @@ -2,6 +2,48 @@ **Summary**: Force target to take an action or to stop taking an action it has already started. Actors can use the threat of reputational damage alongside military or economic threats to compel a target. +**Tactic**: TA02 Plan Objectives **Parent Technique:** T0138 Motivate to Act + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0138.003: Compel + +**Summary**: Force target to take an action or to stop taking an action it has already started. Actors can use the threat of reputational damage alongside military or economic threats to compel a target. + +**Tactic**: TA02 Plan Objectives **Parent Technique:** T0138 Motivate to Act + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0138.003: Compel + +**Summary**: Force target to take an action or to stop taking an action it has already started. Actors can use the threat of reputational damage alongside military or economic threats to compel a target. + **Tactic**: TA02 Plan Objectives @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0138.md b/generated_pages/techniques/T0138.md index d9b20f4..08dee43 100644 --- a/generated_pages/techniques/T0138.md +++ b/generated_pages/techniques/T0138.md @@ -2,6 +2,48 @@ **Summary**: Persuade, impel, or provoke the target to behave in a specific manner favourable to the attacker. Some common behaviours are joining, subscribing, voting, buying, demonstrating, fighting, retreating, resigning, boycotting. +**Tactic**: TA02 Plan Objectives + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0138: Motivate to Act + +**Summary**: Persuade, impel, or provoke the target to behave in a specific manner favourable to the attacker. Some common behaviours are joining, subscribing, voting, buying, demonstrating, fighting, retreating, resigning, boycotting. 
+ +**Tactic**: TA02 Plan Objectives + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0138: Motivate to Act + +**Summary**: Persuade, impel, or provoke the target to behave in a specific manner favourable to the attacker. Some common behaviours are joining, subscribing, voting, buying, demonstrating, fighting, retreating, resigning, boycotting. + **Tactic**: TA02 Plan Objectives @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0139.001.md b/generated_pages/techniques/T0139.001.md index 3eeb709..c0362d3 100644 --- a/generated_pages/techniques/T0139.001.md +++ b/generated_pages/techniques/T0139.001.md @@ -2,6 +2,48 @@ **Summary**: To make a target disinclined or reluctant to act. Manipulators use disinformation to cause targets to question the utility, legality, or morality of taking an action. +**Tactic**: TA02 Plan Objectives **Parent Technique:** T0139 Dissuade from Acting + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0139.001: Discourage + +**Summary**: To make a target disinclined or reluctant to act. Manipulators use disinformation to cause targets to question the utility, legality, or morality of taking an action. + +**Tactic**: TA02 Plan Objectives **Parent Technique:** T0139 Dissuade from Acting + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0139.001: Discourage + +**Summary**: To make a target disinclined or reluctant to act. Manipulators use disinformation to cause targets to question the utility, legality, or morality of taking an action. + **Tactic**: TA02 Plan Objectives @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0139.002.md b/generated_pages/techniques/T0139.002.md index ededcb3..d81481d 100644 --- a/generated_pages/techniques/T0139.002.md +++ b/generated_pages/techniques/T0139.002.md @@ -2,6 +2,50 @@ **Summary**: Intimidate or incentivise target into remaining silent or prevent target from speaking out. A threat actor may cow a target into silence as a special case of deterrence. Or they may buy the target’s silence. Or they may repress or restrict the target’s speech. 
+**Tactic**: TA02 Plan Objectives **Parent Technique:** T0139 Dissuade from Acting + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00085 China’s large-scale media push: Attempts to influence Swedish media](../../generated_pages/incidents/I00085.md) | “Four media companies – Svenska Dagbladet, Expressen, Sveriges Radio, and Sveriges Television – stated that they had been contacted by the Chinese embassy on several occasions, and that they, for instance, had been criticized on their publications, both by letters and e-mails.

The media company Svenska Dagbladet, had been contacted on several occasions in the past two years, including via e-mails directly from the Chinese ambassador to Sweden. Several times, China and the Chinese ambassador had criticized the media company’s publications regarding the conditions in China. Individual reporters also reported having been subjected to criticism.

The tabloid Expressen had received several letters and e-mails from the embassy, e-mails containing criticism and threatening formulations regarding the coverage of the Swedish book publisher Gui Minhai, who has been imprisoned in China since 2015. Formulations such as “media tyranny” could be found in the e-mails.”


In this case, the Chinese ambassador is using their official role (T0143.001: Authentic Persona, T0097.111: Government Official Persona) to try to influence the Swedish press. A government official trying to interfere in other countries' media activities could be a violation of press freedom. In this specific case, the Chinese diplomats are trying to silence criticism against China (T0139.002: Silence). |
+
+
+
+| Counters | Response types |
+| -------- | -------------- |
+
+
+# Technique T0139.002: Silence
+
+**Summary**: Intimidate or incentivise target into remaining silent or prevent target from speaking out. A threat actor may cow a target into silence as a special case of deterrence. Or they may buy the target’s silence. Or they may repress or restrict the target’s speech.
+
+**Tactic**: TA02 Plan Objectives **Parent Technique:** T0139 Dissuade from Acting
+
+
+| Associated Technique | Description |
+| --------- | ------------------------- |
+
+
+
+| Incident | Descriptions given for this incident |
+| -------- | -------------------- |
+| [I00085 China’s large-scale media push: Attempts to influence Swedish media](../../generated_pages/incidents/I00085.md) | “Four media companies – Svenska Dagbladet, Expressen, Sveriges Radio, and Sveriges Television – stated that they had been contacted by the Chinese embassy on several occasions, and that they, for instance, had been criticized on their publications, both by letters and e-mails.

The media company Svenska Dagbladet, had been contacted on several occasions in the past two years, including via e-mails directly from the Chinese ambassador to Sweden. Several times, China and the Chinese ambassador had criticized the media company’s publications regarding the conditions in China. Individual reporters also reported having been subjected to criticism.

The tabloid Expressen had received several letters and e-mails from the embassy, e-mails containing criticism and threatening formulations regarding the coverage of the Swedish book publisher Gui Minhai, who has been imprisoned in China since 2015. Formulations such as “media tyranny” could be found in the e-mails.”


In this case, the Chinese ambassador is using their official role (T0143.001: Authentic Persona, T0097.111: Government Official Persona) to try to influence the Swedish press. A government official trying to interfere in other countries' media activities could be a violation of press freedom. In this specific case, the Chinese diplomats are trying to silence criticism against China (T0139.002: Silence). |
+
+
+
+| Counters | Response types |
+| -------- | -------------- |
+
+
+# Technique T0139.002: Silence
+
+**Summary**: Intimidate or incentivise target into remaining silent or prevent target from speaking out. A threat actor may cow a target into silence as a special case of deterrence. Or they may buy the target’s silence. Or they may repress or restrict the target’s speech.
+
 **Tactic**: TA02 Plan Objectives
@@ -20,4 +64,3 @@
 | -------- | -------------- |
-DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW
\ No newline at end of file
diff --git a/generated_pages/techniques/T0139.003.md b/generated_pages/techniques/T0139.003.md
index 7a6494c..8bb32d7 100644
--- a/generated_pages/techniques/T0139.003.md
+++ b/generated_pages/techniques/T0139.003.md
@@ -2,6 +2,48 @@
 **Summary**: Prevent target from taking an action for fear of the consequences. Deterrence occurs in the mind of the target, who fears they will be worse off if they take an action than if they don’t. When making threats, aggressors may bluff, feign irrationality, or engage in brinksmanship.
+**Tactic**: TA02 Plan Objectives **Parent Technique:** T0139 Dissuade from Acting
+
+
+| Associated Technique | Description |
+| --------- | ------------------------- |
+
+
+
+| Incident | Descriptions given for this incident |
+| -------- | -------------------- |
+
+
+
+| Counters | Response types |
+| -------- | -------------- |
+
+
+# Technique T0139.003: Deter
+
+**Summary**: Prevent target from taking an action for fear of the consequences. Deterrence occurs in the mind of the target, who fears they will be worse off if they take an action than if they don’t. When making threats, aggressors may bluff, feign irrationality, or engage in brinksmanship.
+
+**Tactic**: TA02 Plan Objectives **Parent Technique:** T0139 Dissuade from Acting
+
+
+| Associated Technique | Description |
+| --------- | ------------------------- |
+
+
+
+| Incident | Descriptions given for this incident |
+| -------- | -------------------- |
+
+
+
+| Counters | Response types |
+| -------- | -------------- |
+
+
+# Technique T0139.003: Deter
+
+**Summary**: Prevent target from taking an action for fear of the consequences. Deterrence occurs in the mind of the target, who fears they will be worse off if they take an action than if they don’t. When making threats, aggressors may bluff, feign irrationality, or engage in brinksmanship.
+
 **Tactic**: TA02 Plan Objectives
@@ -19,4 +61,3 @@
 | -------- | -------------- |
-DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW
\ No newline at end of file
diff --git a/generated_pages/techniques/T0139.md b/generated_pages/techniques/T0139.md
index fae5188..cb920be 100644
--- a/generated_pages/techniques/T0139.md
+++ b/generated_pages/techniques/T0139.md
@@ -2,6 +2,48 @@
 **Summary**: Discourage, deter, or inhibit the target from actions which would be unfavourable to the attacker. The actor may want the target to refrain from voting, buying, fighting, or supplying. 
+**Tactic**: TA02 Plan Objectives + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0139: Dissuade from Acting + +**Summary**: Discourage, deter, or inhibit the target from actions which would be unfavourable to the attacker. The actor may want the target to refrain from voting, buying, fighting, or supplying. + +**Tactic**: TA02 Plan Objectives + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0139: Dissuade from Acting + +**Summary**: Discourage, deter, or inhibit the target from actions which would be unfavourable to the attacker. The actor may want the target to refrain from voting, buying, fighting, or supplying. + **Tactic**: TA02 Plan Objectives @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0140.001.md b/generated_pages/techniques/T0140.001.md index 5b1be51..b132293 100644 --- a/generated_pages/techniques/T0140.001.md +++ b/generated_pages/techniques/T0140.001.md @@ -2,6 +2,48 @@ **Summary**: Attempt to damage the target’s personal reputation by impugning their character. This can range from subtle attempts to misrepresent or insinuate, to obvious attempts to denigrate or disparage, to blatant attempts to malign or vilify. Slander applies to oral expression. Libel applies to written or pictorial material. Defamation is often carried out by online trolls. The sole aim here is to cause harm to the target. If the threat actor uses defamation as a means of undermining the target, then choose sub-technique “Smear” of technique “Undermine” instead. +**Tactic**: TA02 Plan Objectives **Parent Technique:** T0140 Cause Harm + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0140.001: Defame + +**Summary**: Attempt to damage the target’s personal reputation by impugning their character. This can range from subtle attempts to misrepresent or insinuate, to obvious attempts to denigrate or disparage, to blatant attempts to malign or vilify. Slander applies to oral expression. Libel applies to written or pictorial material. Defamation is often carried out by online trolls. The sole aim here is to cause harm to the target. If the threat actor uses defamation as a means of undermining the target, then choose sub-technique “Smear” of technique “Undermine” instead. + +**Tactic**: TA02 Plan Objectives **Parent Technique:** T0140 Cause Harm + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0140.001: Defame + +**Summary**: Attempt to damage the target’s personal reputation by impugning their character. 
This can range from subtle attempts to misrepresent or insinuate, to obvious attempts to denigrate or disparage, to blatant attempts to malign or vilify. Slander applies to oral expression. Libel applies to written or pictorial material. Defamation is often carried out by online trolls. The sole aim here is to cause harm to the target. If the threat actor uses defamation as a means of undermining the target, then choose sub-technique “Smear” of technique “Undermine” instead. + **Tactic**: TA02 Plan Objectives @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0140.002.md b/generated_pages/techniques/T0140.002.md index cf564c6..a65bdb3 100644 --- a/generated_pages/techniques/T0140.002.md +++ b/generated_pages/techniques/T0140.002.md @@ -2,6 +2,48 @@ **Summary**: Coerce, bully, or frighten the target. An influence operation may use intimidation to compel the target to act against their will. Or the goal may be to frighten or even terrify the target into silence or submission. In some cases, the goal is simply to make the victim suffer. +**Tactic**: TA02 Plan Objectives **Parent Technique:** T0140 Cause Harm + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0140.002: Intimidate + +**Summary**: Coerce, bully, or frighten the target. An influence operation may use intimidation to compel the target to act against their will. Or the goal may be to frighten or even terrify the target into silence or submission. In some cases, the goal is simply to make the victim suffer. + +**Tactic**: TA02 Plan Objectives **Parent Technique:** T0140 Cause Harm + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0140.002: Intimidate + +**Summary**: Coerce, bully, or frighten the target. An influence operation may use intimidation to compel the target to act against their will. Or the goal may be to frighten or even terrify the target into silence or submission. In some cases, the goal is simply to make the victim suffer. + **Tactic**: TA02 Plan Objectives @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0140.003.md b/generated_pages/techniques/T0140.003.md index 7c660d8..b8e0589 100644 --- a/generated_pages/techniques/T0140.003.md +++ b/generated_pages/techniques/T0140.003.md @@ -2,6 +2,48 @@ **Summary**: Publish and/or propagate demeaning, derisive, or humiliating content targeting an individual or group of individuals with the intent to cause emotional, psychological, or physical distress. Hate speech can cause harm directly or incite others to harm the target. It often aims to stigmatise the target by singling out immutable characteristics such as colour, race, religion, national or ethnic origin, gender, gender identity, sexual orientation, age, disease, or mental or physical disability. 
Thus, promoting hatred online may involve racism, antisemitism, Islamophobia, xenophobia, sexism, misogyny, homophobia, transphobia, ageism, ableism, or any combination thereof. Motivations for hate speech range from group preservation to ideological superiority to the unbridled infliction of suffering. +**Tactic**: TA02 Plan Objectives **Parent Technique:** T0140 Cause Harm + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0140.003: Spread Hate + +**Summary**: Publish and/or propagate demeaning, derisive, or humiliating content targeting an individual or group of individuals with the intent to cause emotional, psychological, or physical distress. Hate speech can cause harm directly or incite others to harm the target. It often aims to stigmatise the target by singling out immutable characteristics such as colour, race, religion, national or ethnic origin, gender, gender identity, sexual orientation, age, disease, or mental or physical disability. Thus, promoting hatred online may involve racism, antisemitism, Islamophobia, xenophobia, sexism, misogyny, homophobia, transphobia, ageism, ableism, or any combination thereof. Motivations for hate speech range from group preservation to ideological superiority to the unbridled infliction of suffering. + +**Tactic**: TA02 Plan Objectives **Parent Technique:** T0140 Cause Harm + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0140.003: Spread Hate + +**Summary**: Publish and/or propagate demeaning, derisive, or humiliating content targeting an individual or group of individuals with the intent to cause emotional, psychological, or physical distress. Hate speech can cause harm directly or incite others to harm the target. It often aims to stigmatise the target by singling out immutable characteristics such as colour, race, religion, national or ethnic origin, gender, gender identity, sexual orientation, age, disease, or mental or physical disability. Thus, promoting hatred online may involve racism, antisemitism, Islamophobia, xenophobia, sexism, misogyny, homophobia, transphobia, ageism, ableism, or any combination thereof. Motivations for hate speech range from group preservation to ideological superiority to the unbridled infliction of suffering. + **Tactic**: TA02 Plan Objectives @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0140.md b/generated_pages/techniques/T0140.md index 613fd71..513e51c 100644 --- a/generated_pages/techniques/T0140.md +++ b/generated_pages/techniques/T0140.md @@ -2,6 +2,48 @@ **Summary**: Persecute, malign, or inflict pain upon a target. The objective of a campaign may be to cause fear or emotional distress in a target. In some cases, harm is instrumental to achieving a primary objective, as in coercion, repression, or intimidation. In other cases, harm may be inflicted for the satisfaction of the perpetrator, as in revenge or sadistic cruelty. 
+**Tactic**: TA02 Plan Objectives + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0140: Cause Harm + +**Summary**: Persecute, malign, or inflict pain upon a target. The objective of a campaign may be to cause fear or emotional distress in a target. In some cases, harm is instrumental to achieving a primary objective, as in coercion, repression, or intimidation. In other cases, harm may be inflicted for the satisfaction of the perpetrator, as in revenge or sadistic cruelty. + +**Tactic**: TA02 Plan Objectives + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0140: Cause Harm + +**Summary**: Persecute, malign, or inflict pain upon a target. The objective of a campaign may be to cause fear or emotional distress in a target. In some cases, harm is instrumental to achieving a primary objective, as in coercion, repression, or intimidation. In other cases, harm may be inflicted for the satisfaction of the perpetrator, as in revenge or sadistic cruelty. + **Tactic**: TA02 Plan Objectives @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0143.001.md b/generated_pages/techniques/T0143.001.md index cc15a4e..96b02f5 100644 --- a/generated_pages/techniques/T0143.001.md +++ b/generated_pages/techniques/T0143.001.md @@ -2,6 +2,64 @@ **Summary**: An individual or institution presenting a persona that legitimately matches who or what they are is presenting an authentic persona.

For example, an account which presents as being managed by a member of a country’s military, and is legitimately managed by that person, would be presenting an authentic persona (T0143.001: Authentic Persona, T0097.105: Military Personnel).

Sometimes people can authentically present themselves as who they are while still participating in malicious/inauthentic activity; a legitimate journalist (T0143.001: Authentic Persona, T0097.102: Journalist Persona) may accept bribes to promote products, or they could be tricked by threat actors into sharing an operation’s narrative. +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0143 Persona Legitimacy + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00077 Fronts & Friends: An Investigation into Two Twitter Networks Linked to Russian Actors](../../generated_pages/incidents/I00077.md) | “The largest account [in the network of inauthentic accounts attributed to Russia] had 11,542 followers but only 8 had over 1,000 followers, and 11 had under ten. The accounts in aggregate had only 79,807 engagements across the entire tweet corpus, and appear to have been linked to the operations primarily via technical indicators rather than amplification or conversation between them. A few of the bios from accounts in the set claim to be journalists. Two profiles, belonging to an American activist and a Russian academic, were definitively real people; we do not have sufficient visibility into the technical indicators that led to their inclusion in the network and thus do not include them in our discussion.”

In this example the Stanford Internet Observatory was provided with data on two networks which, according to Twitter, showed signs of affiliation with Russia’s Internet Research Agency (IRA). Two of the accounts investigated by Stanford belonged to real people presenting their authentic personas, matching T0143.001: Authentic Persona.

Stanford didn’t have access to the technical indicators associating these accounts with the IRA, so they did not include data associated with these accounts for assessment. Analysts with access to platform logs may be able to uncover indicators of suspicious behaviour in accounts presenting authentic personas, using attribution methods unavailable to analysts working with open source data. | +| [I00078 Meta’s September 2020 Removal of Coordinated Inauthentic Behavior](../../generated_pages/incidents/I00078.md) | “[Meta has] removed one Page, five Facebook accounts, one Group and three Instagram accounts for foreign or government interference which is coordinated inauthentic behavior on behalf of a foreign or government entity. This small network originated in Russia and focused primarily on Turkey and Europe, and also on the United States.

“This operation relied on fake accounts — some of which had been already detected and removed by our automated systems — to manage their Page and their Group, and to drive people to their site purporting to be an independent think-tank based primarily in Turkey. These accounts posed as locals based in Turkey, Canada and the US. They also recruited people to write for their website. This network had almost no following on our platforms when we removed it.”


Meta identified that a network of accounts originating in Russia was driving people off-platform to a site which presented itself as a think-tank (T0097.204: Think Tank Persona). Meta did not make an attribution about the authenticity of this off-site think tank, so neither T0143.001: Authentic Persona nor T0143.002: Fabricated Persona is used here.

Meta had access to technical data for accounts on its platform, and asserted that they were fabricated individuals posing as locals who recruited targets to write content for their website (T0097.101: Local Persona, T0097.106: Recruiter Persona, T0143.002: Fabricated Persona). | +| [I00079 Three thousand fake tanks](../../generated_pages/incidents/I00079.md) | “On January 4 [2017], a little-known news site based in Donetsk, Ukraine published an article claiming that the United States was sending 3,600 tanks to Europe as part of “the NATO war preparation against Russia”.

“Like much fake news, this story started with a grain of truth: the US was about to reinforce its armored units in Europe. However, the article converted literally thousands of other vehicles — including hundreds of Humvees and trailers — into tanks, building the US force into something 20 times more powerful than it actually was.

“The story caught on online. Within three days it had been repeated by a dozen websites in the United States, Canada and Europe, and shared some 40,000 times. It was translated into Norwegian; quoted, unchallenged, by Russian state news agency RIA Novosti; and spread among Russian-language websites.

“It was also an obvious fake, as any Google news search would have revealed. Yet despite its evident falsehood, it spread widely, and not just in directly Kremlin-run media. Tracking the spread of this fake therefore shines a light on the wider question of how fake stories are dispersed.”


Russian state news agency RIA Novosti presents itself as a news outlet (T0097.202: News Outlet Persona). RIA Novosti is a real news outlet (T0143.001: Authentic Persona), but it did not carry out the basic check on the veracity of the narrative it published that is implicitly expected of institutions presenting themselves as news outlets.

We can’t know how or why this narrative ended up being published by RIA Novosti, but we know that it presented a distorted reality as authentic information (T0023: Distort Facts), claiming that the US was sending 3,600 tanks rather than 3,600 vehicles, of which only ~180 were tanks. |
| [I00084 Russia turns its diplomats into disinformation warriors](../../generated_pages/incidents/I00084.md) | “After the European Union banned Kremlin-backed media outlets and social media giants demoted their posts for peddling falsehoods about the war in Ukraine, Moscow has turned to its cadre of diplomats, government spokespeople and ministers — many of whom have extensive followings on social media — to promote disinformation about the conflict in Eastern Europe, according to four EU and United States officials.”

In this example authentic Russian government officials used their own accounts to promote false narratives (T0143.001: Authentic Persona, T0097.111: Government Official Persona).

The use of accounts managed by authentic government officials and diplomats to spread false narratives makes it harder for platforms to enforce content moderation, because of the political ramifications they may face for censoring elected officials (T0131: Exploit TOS/Content Moderation). For example, Twitter previously argued that official channels of world leaders are not removed due to the high public interest associated with their activities. |
| [I00085 China’s large-scale media push: Attempts to influence Swedish media](../../generated_pages/incidents/I00085.md) | “Four media companies – Svenska Dagbladet, Expressen, Sveriges Radio, and Sveriges Television – stated that they had been contacted by the Chinese embassy on several occasions, and that they, for instance, had been criticized on their publications, both by letters and e-mails.

The media company Svenska Dagbladet, had been contacted on several occasions in the past two years, including via e-mails directly from the Chinese ambassador to Sweden. Several times, China and the Chinese ambassador had criticized the media company’s publications regarding the conditions in China. Individual reporters also reported having been subjected to criticism.

The tabloid Expressen had received several letters and e-mails from the embassy, e-mails containing criticism and threatening formulations regarding the coverage of the Swedish book publisher Gui Minhai, who has been imprisoned in China since 2015. Formulations such as “media tyranny” could be found in the e-mails.”


In this case, the Chinese ambassador is using their official role (T0143.001: Authentic Persona, T0097.111: Government Official Persona) to try to influence the Swedish press. A government official trying to interfere in other countries' media activities could be a violation of press freedom. In this specific case, the Chinese diplomats are trying to silence criticism against China (T0139.002: Silence). |
| [I00093 China Falsely Denies Disinformation Campaign Targeting Canada’s Prime Minister](../../generated_pages/incidents/I00093.md) | “On October 23, Canada’s Foreign Ministry said it had discovered a disinformation campaign, likely tied to China, aimed at discrediting dozens of Canadian politicians, including Prime Minister Justin Trudeau.

“The ministry said the campaign took place in August and September. It used new and hijacked social media accounts to bulk-post messages targeting Canadian politicians (T0146: Account Asset, T0150.001: Newly Created Asset, T0150.005: Compromised Asset).

“A Chinese Embassy in Canada spokesperson dismissed Canada’s accusation as baseless.

““Canada was a downright liar and disseminator of false information… Beijing has never meddled in another nation’s domestic affairs.”

“A Chinese Embassy in Canada spokesperson dismissed Canada’s accusation as baseless.

“That is false.

“The Canadian government's report is based on an investigation conducted by its Rapid Response Mechanism cyber intelligence unit in cooperation with the social media platforms.

“The investigation exposed China’s disinformation campaign dubbed “Spamouflage” -- for its tactic of using “a network of new or hijacked social media accounts that posts and increases the number of propaganda messages across multiple social media platforms – including Facebook, X/Twitter, Instagram, YouTube, Medium, Reddit, TikTok, and LinkedIn.””


In this case a network of accounts attributed to China was identified operating on multiple platforms. The report was dismissed as false information by an official in the Chinese Embassy in Canada (T0143.001: Authentic Persona, T0097.111: Government Official Persona, T0129.006: Deny Involvement). |
| [I00118 ‘War Thunder’ players are once again leaking sensitive military technology information on a video game forum](../../generated_pages/incidents/I00118.md) | In an effort to prove that the developers behind a popular multiplayer vehicle combat game had made a mistake, a player went ahead and published classified British military documents about one of the real-life tanks featured in the game.

This truly bizarre turn of events recently occurred in the public forum for War Thunder, a free-to-play multiplayer combat sim featuring modern land, air, and sea craft. Getting a small detail wrong on a piece of equipment might not be a big deal for the average gamer, but for the War Thunder crowd it sure as hell is. With 25,000 devoted players, the game very much bills itself as the military vehicle combat simulator.

A player, who identified himself as a British tank commander, claimed that the game’s developers at Gaijin Entertainment had inaccurately represented the Challenger 2 main battle tank used by the British military.

The self-described tank commander’s bio listed his location as Tidworth Camp in Wiltshire, England, according to the UK Defence Journal, which reported that the base is home to the Royal Tank Regiment, which fields Challenger 2 tanks.

The player, who went by the handle Pyrophoric, reportedly shared an image on the War Thunder forum of the tank’s specs that were pulled from the Challenger 2’s Army Equipment Support Publication, which is essentially a technical manual.

[...]

A moderator for the forum, whose handle is “Templar_”, explained that the developer had removed the material after receiving confirmation from the Ministry of Defence that the document is still in fact classified.


A user of War Thunder’s forums posted confidential documents to win an argument (T0089.001: Obtain Authentic Documents, T0146: Account Asset, T0097.105: Military Personnel Persona, T0115: Post Content, T0143.001: Authentic Persona, T0151.009: Legacy Online Forum Platform). | +| [I00119 Independent journalist publishes Trump campaign document hacked by Iran despite election interference concerns](../../generated_pages/incidents/I00119.md) | An American journalist who runs an independent newsletter published a document [on 26 Sep 2024] that appears to have been stolen from Donald Trump’s presidential campaign — the first public posting of a file that is believed to be part of a dossier that federal officials say is part of an Iranian effort to manipulate the [2024] U.S. election.

The PDF document is a 271-page opposition research file on former President Donald Trump’s running mate, Sen. JD Vance, R-Ohio.

For more than two months, hackers who the U.S. says are tied to Iran have tried to persuade the American media to cover files they stole. No outlets took the bait.

But on Thursday, reporter Ken Klippenstein, who self-publishes on Substack after he left The Intercept this year, published one of the files.

[...]

Reporters who have received the documents describe the same pattern: An AOL account emails them files, signed by a person using the name “Robert,” who is reluctant to speak to their identity or reasons for wanting the documents to receive coverage.

NBC News was not part of the Robert persona’s direct outreach, but it has viewed its correspondence with a reporter at another publication.

One of the emails from the Robert persona previously viewed by NBC News included three large PDF files, each corresponding to Trump’s three reported finalists for vice president. The Vance file appears to be the one Klippenstein hosts on his site.


In this example hackers attributed to Iran used the Robert persona to email hacked documents to journalists (T0146: Account Asset, T0097.100: Individual Persona, T0153.001: Email Platform).

The journalist Ken Klippenstein used his existing Substack blog to host a link to download the document (T0089: Obtain Private Documents, T0097.102: Journalist Persona, T0115: Post Content, T0143.001: Authentic Persona, T0152.001: Blogging Platform, T0152.002: Blog Asset, T0150.003: Pre-Existing Asset). |
+
+
+
+| Counters | Response types |
+| -------- | -------------- |
+
+
+# Technique T0143.001: Authentic Persona
+
+**Summary**: An individual or institution presenting a persona that legitimately matches who or what they are is presenting an authentic persona.

For example, an account which presents as being managed by a member of a country’s military, and is legitimately managed by that person, would be presenting an authentic persona (T0143.001: Authentic Persona, T0097.105: Military Personnel).

Sometimes people can authentically present themselves as who they are while still participating in malicious/inauthentic activity; a legitimate journalist (T0143.001: Authentic Persona, T0097.102: Journalist Persona) may accept bribes to promote products, or they could be tricked by threat actors into sharing an operation’s narrative. + +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0143 Persona Legitimacy + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00077 Fronts & Friends: An Investigation into Two Twitter Networks Linked to Russian Actors](../../generated_pages/incidents/I00077.md) | “The largest account [in the network of inauthentic accounts attributed to Russia] had 11,542 followers but only 8 had over 1,000 followers, and 11 had under ten. The accounts in aggregate had only 79,807 engagements across the entire tweet corpus, and appear to have been linked to the operations primarily via technical indicators rather than amplification or conversation between them. A few of the bios from accounts in the set claim to be journalists. Two profiles, belonging to an American activist and a Russian academic, were definitively real people; we do not have sufficient visibility into the technical indicators that led to their inclusion in the network and thus do not include them in our discussion.”

In this example the Stanford Internet Observatory was provided with data on two networks which, according to Twitter, showed signs of affiliation with Russia’s Internet Research Agency (IRA). Two of the accounts investigated by Stanford belonged to real people presenting their authentic personas, matching T0143.001: Authentic Persona.

Stanford didn’t have access to the technical indicators associating these accounts with the IRA, so they did not include data associated with these accounts for assessment. Analysts with access to platform logs may be able to uncover indicators of suspicious behaviour in accounts presenting authentic personas, using attribution methods unavailable to analysts working with open source data. | +| [I00078 Meta’s September 2020 Removal of Coordinated Inauthentic Behavior](../../generated_pages/incidents/I00078.md) | “[Meta has] removed one Page, five Facebook accounts, one Group and three Instagram accounts for foreign or government interference which is coordinated inauthentic behavior on behalf of a foreign or government entity. This small network originated in Russia and focused primarily on Turkey and Europe, and also on the United States.

“This operation relied on fake accounts — some of which had been already detected and removed by our automated systems — to manage their Page and their Group, and to drive people to their site purporting to be an independent think-tank based primarily in Turkey. These accounts posed as locals based in Turkey, Canada and the US. They also recruited people to write for their website. This network had almost no following on our platforms when we removed it.”


Meta identified that a network of accounts originating in Russia was driving people off-platform to a site which presented itself as a think-tank (T0097.204: Think Tank Persona). Meta did not make an attribution about the authenticity of this off-site think tank, so neither T0143.001: Authentic Persona nor T0143.002: Fabricated Persona is used here.

Meta had access to technical data for accounts on its platform, and asserted that they were fabricated individuals posing as locals who recruited targets to write content for their website (T0097.101: Local Persona, T0097.106: Recruiter Persona, T0143.002: Fabricated Persona). | +| [I00079 Three thousand fake tanks](../../generated_pages/incidents/I00079.md) | “On January 4 [2017], a little-known news site based in Donetsk, Ukraine published an article claiming that the United States was sending 3,600 tanks to Europe as part of “the NATO war preparation against Russia”.

“Like much fake news, this story started with a grain of truth: the US was about to reinforce its armored units in Europe. However, the article converted literally thousands of other vehicles — including hundreds of Humvees and trailers — into tanks, building the US force into something 20 times more powerful than it actually was.

“The story caught on online. Within three days it had been repeated by a dozen websites in the United States, Canada and Europe, and shared some 40,000 times. It was translated into Norwegian; quoted, unchallenged, by Russian state news agency RIA Novosti; and spread among Russian-language websites.

“It was also an obvious fake, as any Google news search would have revealed. Yet despite its evident falsehood, it spread widely, and not just in directly Kremlin-run media. Tracking the spread of this fake therefore shines a light on the wider question of how fake stories are dispersed.”


Russian state news agency RIA Novosti presents itself as a news outlet (T0097.202: News Outlet Persona). RIA Novosti is a real news outlet (T0143.001: Authentic Persona), but it did not carry out the basic check on the veracity of the narrative it published that is implicitly expected of institutions presenting themselves as news outlets.

We can’t know how or why this narrative ended up being published by RIA Novosti, but we know that it presented a distorted reality as authentic information (T0023: Distort Facts), claiming that the US was sending 3,600 tanks rather than 3,600 vehicles, of which only ~180 were tanks. |
| [I00084 Russia turns its diplomats into disinformation warriors](../../generated_pages/incidents/I00084.md) | “After the European Union banned Kremlin-backed media outlets and social media giants demoted their posts for peddling falsehoods about the war in Ukraine, Moscow has turned to its cadre of diplomats, government spokespeople and ministers — many of whom have extensive followings on social media — to promote disinformation about the conflict in Eastern Europe, according to four EU and United States officials.”

In this example authentic Russian government officials used their own accounts to promote false narratives (T0143.001: Authentic Persona, T0097.111: Government Official Persona).

The use of accounts managed by authentic government officials and diplomats to spread false narratives makes it harder for platforms to enforce content moderation, because of the political ramifications they may face for censoring elected officials (T0131: Exploit TOS/Content Moderation). For example, Twitter previously argued that official channels of world leaders are not removed due to the high public interest associated with their activities. |
| [I00085 China’s large-scale media push: Attempts to influence Swedish media](../../generated_pages/incidents/I00085.md) | “Four media companies – Svenska Dagbladet, Expressen, Sveriges Radio, and Sveriges Television – stated that they had been contacted by the Chinese embassy on several occasions, and that they, for instance, had been criticized on their publications, both by letters and e-mails.

The media company Svenska Dagbladet, had been contacted on several occasions in the past two years, including via e-mails directly from the Chinese ambassador to Sweden. Several times, China and the Chinese ambassador had criticized the media company’s publications regarding the conditions in China. Individual reporters also reported having been subjected to criticism.

The tabloid Expressen had received several letters and e-mails from the embassy, e-mails containing criticism and threatening formulations regarding the coverage of the Swedish book publisher Gui Minhai, who has been imprisoned in China since 2015. Formulations such as “media tyranny” could be found in the e-mails.”


In this case, the Chinese ambassador is using their official role (T0143.001: Authentic Persona, T0097.111: Government Official Persona) to try to influence the Swedish press. A government official trying to interfere in other countries' media activities could be a violation of press freedom. In this specific case, the Chinese diplomats are trying to silence criticism against China (T0139.002: Silence). |
| [I00093 China Falsely Denies Disinformation Campaign Targeting Canada’s Prime Minister](../../generated_pages/incidents/I00093.md) | “On October 23, Canada’s Foreign Ministry said it had discovered a disinformation campaign, likely tied to China, aimed at discrediting dozens of Canadian politicians, including Prime Minister Justin Trudeau.

“The ministry said the campaign took place in August and September. It used new and hijacked social media accounts to bulk-post messages targeting Canadian politicians (T0146: Account Asset, T0150.001: Newly Created Asset, T0150.005: Compromised Asset).

“A Chinese Embassy in Canada spokesperson dismissed Canada’s accusation as baseless.

““Canada was a downright liar and disseminator of false information… Beijing has never meddled in another nation’s domestic affairs.”

“A Chinese Embassy in Canada spokesperson dismissed Canada’s accusation as baseless.

“That is false.

“The Canadian government's report is based on an investigation conducted by its Rapid Response Mechanism cyber intelligence unit in cooperation with the social media platforms.

“The investigation exposed China’s disinformation campaign dubbed “Spamouflage” -- for its tactic of using “a network of new or hijacked social media accounts that posts and increases the number of propaganda messages across multiple social media platforms – including Facebook, X/Twitter, Instagram, YouTube, Medium, Reddit, TikTok, and LinkedIn.””


In this case a network of accounts attributed to China was identified operating on multiple platforms. The report was dismissed as false information by an official in the Chinese Embassy in Canada (T0143.001: Authentic Persona, T0097.111: Government Official Persona, T0129.006: Deny Involvement). |
| [I00118 ‘War Thunder’ players are once again leaking sensitive military technology information on a video game forum](../../generated_pages/incidents/I00118.md) | In an effort to prove that the developers behind a popular multiplayer vehicle combat game had made a mistake, a player went ahead and published classified British military documents about one of the real-life tanks featured in the game.

This truly bizarre turn of events recently occurred in the public forum for War Thunder, a free-to-play multiplayer combat sim featuring modern land, air, and sea craft. Getting a small detail wrong on a piece of equipment might not be a big deal for the average gamer, but for the War Thunder crowd it sure as hell is. With 25,000 devoted players, the game very much bills itself as the military vehicle combat simulator.

A player, who identified himself as a British tank commander, claimed that the game’s developers at Gaijin Entertainment had inaccurately represented the Challenger 2 main battle tank used by the British military.

The self-described tank commander’s bio listed his location as Tidworth Camp in Wiltshire, England, according to the UK Defence Journal, which reported that the base is home to the Royal Tank Regiment, which fields Challenger 2 tanks.

The player, who went by the handle Pyrophoric, reportedly shared an image on the War Thunder forum of the tank’s specs that were pulled from the Challenger 2’s Army Equipment Support Publication, which is essentially a technical manual.

[...]

A moderator for the forum, whose handle is “Templar_”, explained that the developer had removed the material after receiving confirmation from the Ministry of Defence that the document is still in fact classified.


A user of War Thunder’s forums posted confidential documents to win an argument (T0089.001: Obtain Authentic Documents, T0146: Account Asset, T0097.105: Military Personnel Persona, T0115: Post Content, T0143.001: Authentic Persona, T0151.009: Legacy Online Forum Platform). | +| [I00119 Independent journalist publishes Trump campaign document hacked by Iran despite election interference concerns](../../generated_pages/incidents/I00119.md) | An American journalist who runs an independent newsletter published a document [on 26 Sep 2024] that appears to have been stolen from Donald Trump’s presidential campaign — the first public posting of a file that is believed to be part of a dossier that federal officials say is part of an Iranian effort to manipulate the [2024] U.S. election.

The PDF document is a 271-page opposition research file on former President Donald Trump’s running mate, Sen. JD Vance, R-Ohio.

For more than two months, hackers who the U.S. says are tied to Iran have tried to persuade the American media to cover files they stole. No outlets took the bait.

But on Thursday, reporter Ken Klippenstein, who self-publishes on Substack after he left The Intercept this year, published one of the files.

[...]

Reporters who have received the documents describe the same pattern: An AOL account emails them files, signed by a person using the name “Robert,” who is reluctant to speak to their identity or reasons for wanting the documents to receive coverage.

NBC News was not part of the Robert persona’s direct outreach, but it has viewed its correspondence with a reporter at another publication.

One of the emails from the Robert persona previously viewed by NBC News included three large PDF files, each corresponding to Trump’s three reported finalists for vice president. The Vance file appears to be the one Klippenstein hosts on his site.


In this example hackers attributed to Iran used the Robert persona to email hacked documents to journalists (T0146: Account Asset, T0097.100: Individual Persona, T0153.001: Email Platform).

The journalist Ken Klippenstein used his existing Substack blog to host a link to download the document (T0089: Obtain Private Documents, T0097.102: Journalist Persona, T0115: Post Content, T0143.001: Authentic Persona, T0152.001: Blogging Platform, T0152.002: Blog Asset, T0150.003: Pre-Existing Asset). |
+
+
+
+| Counters | Response types |
+| -------- | -------------- |
+
+
+# Technique T0143.001: Authentic Persona
+
+**Summary**: An individual or institution presenting a persona that legitimately matches who or what they are is presenting an authentic persona.

For example, an account which presents as being managed by a member of a country’s military, and is legitimately managed by that person, would be presenting an authentic persona (T0143.001: Authentic Persona, T0097.105: Military Personnel).

Sometimes people can authentically present themselves as who they are while still participating in malicious/inauthentic activity; a legitimate journalist (T0143.001: Authentic Persona, T0097.102: Journalist Persona) may accept bribes to promote products, or they could be tricked by threat actors into sharing an operation’s narrative. + **Tactic**: TA16 Establish Legitimacy @@ -27,4 +85,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0143.002.md b/generated_pages/techniques/T0143.002.md index 95e4ec3..b8a86bf 100644 --- a/generated_pages/techniques/T0143.002.md +++ b/generated_pages/techniques/T0143.002.md @@ -2,6 +2,77 @@ **Summary**: An individual or institution pretending to have a persona without any legitimate claim to that persona is presenting a fabricated persona, such as a person who presents themselves as a member of a country’s military without having worked in any capacity with the military (T0143.002: Fabricated Persona, T0097.105: Military Personnel).

Sometimes real people can present entirely fabricated personas; they can use real names and photos on social media while also pretending to have credentials or traits they don’t have in real life. +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0143 Persona Legitimacy + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00069 Uncharmed: Untangling Iran's APT42 Operations](../../generated_pages/incidents/I00069.md) | “In March 2023, [Iranian state-sponsored cyber espionage actor] APT42 sent a spear-phishing email with a fake Google Meet invitation, allegedly sent on behalf of Mona Louri, a likely fake persona leveraged by APT42, claiming to be a human rights activist and researcher. Upon entry, the user was presented with a fake Google Meet page and asked to enter their credentials, which were subsequently sent to the attackers.”

In this example APT42, an Iranian state-sponsored cyber espionage actor, created an account which presented as a human rights activist (T0097.103: Activist Persona) and researcher (T0097.107: Researcher Persona). The analysts assert that the persona was likely fabricated (T0143.002: Fabricated Persona). |
| [I00074 The Tactics & Tropes of the Internet Research Agency](../../generated_pages/incidents/I00074.md) | “The Black Matters Facebook Page [operated by Russia’s Internet Research Agency] explored several visual brand identities, moving from a plain logo to a gothic typeface on Jan 19th, 2016. On February 4th, 2016, the person who ran the Facebook Page announced the launch of the website, blackmattersus[.]com, emphasizing media distrust and a desire to build Black independent media; [“I DIDN’T BELIEVE THE MEDIA / SO I BECAME ONE”]”

In this example an asset controlled by Russia’s Internet Research Agency began to present itself as a source of “Black independent media”, claiming that the media could not be trusted (T0097.208: Social Cause Persona, T0097.202: News Outlet Persona, T0143.002: Fabricated Persona). | +| [I00076 Network of Social Media Accounts Impersonates U.S. Political Candidates, Leverages U.S. and Israeli Media in Support of Iranian Interests](../../generated_pages/incidents/I00076.md) | “In addition to directly posting material on social media, we observed some personas in the network [of inauthentic accounts attributed to Iran] leverage legitimate print and online media outlets in the U.S. and Israel to promote Iranian interests via the submission of letters, guest columns, and blog posts that were then published. We also identified personas that we suspect were fabricated for the sole purpose of submitting such letters, but that do not appear to maintain accounts on social media. The personas claimed to be based in varying locations depending on the news outlets they were targeting for submission; for example, a persona that listed their location as Seattle, WA in a letter submitted to the Seattle Times subsequently claimed to be located in Baytown, TX in a letter submitted to The Baytown Sun. Other accounts in the network then posted links to some of these letters on social media.”

In this example actors fabricated individuals purportedly living in areas which were being targeted for influence through the use of letters to local papers (T0097.101: Local Persona, T0143.002: Fabricated Persona). |
| [I00077 Fronts & Friends: An Investigation into Two Twitter Networks Linked to Russian Actors](../../generated_pages/incidents/I00077.md) | “Two accounts [in the second network of accounts taken down by Twitter] appear to have been operated by Oriental Review and the Strategic Culture Foundation, respectively. Oriental Review bills itself as an “open source site for free thinking”, though it trades in outlandish conspiracy theories and posts content bylined by fake people. Stanford Internet Observatory researchers and investigative journalists have previously noted the presence of content bylined by fake “reporter” personas tied to the GRU-linked front Inside Syria Media Center, posted on Oriental Review.”

In an effort to make the Oriental Review’s stories appear more credible, the threat actors created fake journalists and pretended they wrote the articles on their website (aka “bylined” them).

In DISARM terms, they fabricated journalists (T0143.002: Fabricated Persona, T0097.102: Journalist Persona), and then used these fabricated journalists to increase the outlet’s perceived legitimacy (T0097.202: News Outlet Persona, T0143.002: Fabricated Persona). | +| [I00078 Meta’s September 2020 Removal of Coordinated Inauthentic Behavior](../../generated_pages/incidents/I00078.md) | “[Meta has] removed one Page, five Facebook accounts, one Group and three Instagram accounts for foreign or government interference which is coordinated inauthentic behavior on behalf of a foreign or government entity. This small network originated in Russia and focused primarily on Turkey and Europe, and also on the United States.<br>

“This operation relied on fake accounts — some of which had been already detected and removed by our automated systems — to manage their Page and their Group, and to drive people to their site purporting to be an independent think-tank based primarily in Turkey. These accounts posed as locals based in Turkey, Canada and the US. They also recruited people to write for their website. This network had almost no following on our platforms when we removed it.”


Meta identified that a network of accounts originating in Russia was driving people off-platform to a site which presented itself as a think tank (T0097.204: Think Tank Persona). Meta did not make an attribution about the authenticity of this off-site think tank, so neither T0143.001: Authentic Persona nor T0143.002: Fabricated Persona is used here.<br>

Meta had access to technical data for accounts on its platform, and asserted that the accounts were fabricated individuals posing as locals who recruited targets to write content for their website (T0097.101: Local Persona, T0097.106: Recruiter Persona, T0143.002: Fabricated Persona). | +| [I00080 Hundreds Of Propaganda Accounts Targeting Iran And Qatar Have Been Removed From Facebook](../../generated_pages/incidents/I00080.md) | “One example of a fake reporter account targeting Americans is “Jenny Powell,” a self-described Washington-based journalist, volunteer, and environmental activist. At first glance, Powell’s Twitter timeline looks like it belongs to a young and eager reporter amplifying her interests. But her profile photo is a stock image, and many of her links go to the propaganda sites.<br>

“Powell, who joined the platform just last month, shares links to stories from major US news media outlets, retweets local news about Washington, DC, and regularly promotes content from The Foreign Code and The Economy Club. Other fake journalist accounts behaved similarly to Powell and had generic descriptions. One of the accounts, for a fake Bruce Lopez in Louisiana, has a bio that describes him as a “Correspondent Traveler (noun) (linking verb) (noun/verb/adjective),” which appears to reveal the formula used to write Twitter bios for the accounts.”


The Jenny Powell account used in this influence operation presents as both a journalist and an activist (T0097.102: Journalist Persona, T0097.103: Activist Persona, T0143.002: Fabricated Persona). This example shows how threat actors can easily follow a template to present a fabricated persona to their target audience (T0144.002: Persona Template). | +| [I00081 Belarus KGB created fake accounts to criticize Poland during border crisis, Facebook parent company says](../../generated_pages/incidents/I00081.md) | “Meta said it also removed 31 Facebook accounts, four groups, two events and four Instagram accounts that it believes originated in Poland and targeted Belarus and Iraq. Those allegedly fake accounts posed as Middle Eastern migrants posting about the border crisis. Meta did not link the accounts to a specific group.<br>

““These fake personas claimed to be sharing their own negative experiences of trying to get from Belarus to Poland and posted about migrants’ difficult lives in Europe,” Meta said. “They also posted about Poland’s strict anti-migrant policies and anti-migrant neo-Nazi activity in Poland. They also shared links to news articles criticizing the Belarusian government’s handling of the border crisis and off-platform videos alleging migrant abuse in Europe.””


In this example accounts falsely presented themselves as having local insight into the border crisis (T0097.101: Local Persona, T0143.002: Fabricated Persona). | +| [I00086 #WeAreNotSafe – Exposing How a Post-October 7th Disinformation Network Operates on Israeli Social Media](../../generated_pages/incidents/I00086.md) | In this report accounts were identified as part of “a sophisticated and extensive coordinated network orchestrating a disinformation campaign targeting Israeli digital spaces since October 7th, 2023”.<br>

“A core component of the detection methodology was applying qualitative linguistic analysis. This involved checking the fingerprint of language, syntax, and style used in the comments and profile of the suspected account. Each account bio consistently incorporated a combination of specific elements: emojis, nationality, location, educational institution or occupation, age, and a personal quote, sports team or band. The recurrence of this specific formula across multiple accounts hinted at a standardized template for bio construction.”

This example shows how actors can follow a templated formula to present a persona on social media platforms (T0143.002: Fabricated Persona, T0144.002: Persona Template). | +| [I00089 Hackers Use Fake Facebook Profiles of Attractive Women to Spread Viruses, Steal Passwords](../../generated_pages/incidents/I00089.md) | “On Facebook, Rita, Alona and Christina appeared to be just like the millions of other U.S citizens sharing their lives with the world. They discussed family outings, shared emojis and commented on each other's photographs.

“In reality, the three accounts were part of a highly-targeted cybercrime operation, used to spread malware that was able to steal passwords and spy on victims.

“Hackers with links to Lebanon likely ran the covert scheme using a strain of malware dubbed "Tempting Cedar Spyware," according to researchers from Prague-based anti-virus company Avast, which detailed its findings in a report released on Wednesday.

“In a honey trap tactic as old as time, the culprits' targets were mostly male, and lured by fake attractive women. 

“In the attack, hackers would send flirtatious messages using Facebook to the chosen victims, encouraging them to download a second, booby-trapped, chat application known as Kik Messenger to have "more secure" conversations. Upon analysis, Avast experts found that "many fell for the trap.””<br>


In this example threat actors took on the persona of a romantic suitor on Facebook, directing their targets to another platform (T0097.109: Romantic Suitor Persona, T0145.006: Attractive Person Account Imagery, T0143.002: Fabricated Persona). | +| [I00091 Facebook uncovers Chinese network behind fake expert](../../generated_pages/incidents/I00091.md) | “Earlier in July [2021], an account posing as a Swiss biologist called Wilson Edwards had made statements on Facebook and Twitter that the United States was applying pressure on the World Health Organization scientists who were studying the origins of Covid-19 in an attempt to blame the virus on China.<br>

“State media outlets, including CGTN, Shanghai Daily and Global Times, had cited the so-called biologist based on his Facebook profile.

“However, the Swiss embassy said in August that the person likely did not exist, as the Facebook account was opened only two weeks prior to its first post and only had three friends.

“It added "there was no registry of a Swiss citizen with the name "Wilson Edwards" and no academic articles under the name", and urged Chinese media outlets to take down any mention of him.

[...]

“It also said that his profile photo also appeared to have been generated using machine-learning capabilities.”


In this example an account created on Facebook presented itself as a Swiss biologist to promote a narrative related to COVID-19 (T0143.002: Fabricated Persona, T0097.107: Researcher Persona). It used an AI-generated profile picture to disguise itself (T0145.002: AI-Generated Account Imagery). | +| [I00095 Meta: Chinese disinformation network was behind London front company recruiting content creators](../../generated_pages/incidents/I00095.md) | “A Chinese disinformation network operating fictitious employee personas across the internet used a front company in London to recruit content creators and translators around the world, according to Meta.<br>

“The operation used a company called London New Europe Media, registered to an address on the upmarket Kensington High Street, that attempted to recruit real people to help it produce content. It is not clear how many people it ultimately recruited.

“London New Europe Media also “tried to engage individuals to record English-language videos scripted by the network,” in one case leading to a recording criticizing the United States being posted on YouTube, said Meta”.


In this example a front company was used (T0097.205: Business Persona) to enable actors to recruit targets for producing content (T0097.106: Recruiter Persona, T0143.002: Fabricated Persona). | +| [I00121 Operation Overload: how pro-Russian actors flood newsrooms with fake content and seek to divert their efforts](../../generated_pages/incidents/I00121.md) | The unique aspect of Operation Overload is a barrage of emails sent to newsrooms and fact-checkers across Europe. The authors of these messages urge recipients to verify content allegedly found online. The email subject lines often include an incitement to verify the claims briefly described in the message body. This is followed by a short list of links directing recipients to posts on Telegram, X, or known pro-Russian websites, including Pravda and Sputnik.

We have collected 221 emails sent to 20 organisations. The organisations mostly received identical emails urging them to fact-check specific false stories, which demonstrates that the emails were sent as part of a larger coordinated campaign.

[...]

The authors of the emails do not hide their intention to see the fake content widely spread. In February 2024, a journalist at the German outlet CORRECTIV engaged with the sender of one of the emails, providing feedback on the narratives which were originally sent. CORRECTIV received a response from the same Gmail address, initially expressing respect and trust in CORRECTIV’s assessment, while asking: “is it possible for your work to be seen by as many people as possible?”, thereby clearly stating the goal of the operation.

[...]

All the emails come from authors posing as concerned citizens. All emails are sent with Gmail accounts, which is typical for personal use. This makes it challenging to identify the individuals behind these emails, as anyone can open a Gmail account for free. The email headers indicate that the messages were sent from the Gmail interface, not from a personal client which would disclose the sender’s IP address.


In this example, threat actors used Gmail accounts (T0146.001: Free Account Asset, T0097.100: Individual Persona, T0143.002: Fabricated Persona, T0153.001: Email Platform) to target journalists and fact-checkers, with the apparent goal of having them amplify operation narratives through fact checks. | +| [I00125 The Agency](../../generated_pages/incidents/I00125.md) | In 2014 threat actors attributed to Russia spread the false narrative that a local chemical plant had leaked toxic fumes. This report discusses aspects of the operation:<br>

[The chemical plant leak] hoax was just one in a wave of similar attacks during the second half of last year. On Dec. 13, two months after a handful of Ebola cases in the United States touched off a minor media panic, many of the same Twitter accounts used to spread the Columbian Chemicals hoax began to post about an outbreak of Ebola in Atlanta. [...] Again, the attention to detail was remarkable, suggesting a tremendous amount of effort. A YouTube video showed a team of hazmat-suited medical workers transporting a victim from the airport. Beyoncé’s recent single “7/11” played in the background, an apparent attempt to establish the video’s contemporaneity. A truck in the parking lot sported the logo of the Hartsfield-Jackson Atlanta International Airport.

Accounts which previously presented as Louisiana locals were repurposed for use in a different campaign, this time presenting as locals of Atlanta, a city over 500 miles from Louisiana and in a different time zone (T0146: Account Asset, T0097.101: Local Persona, T0143.002: Fabricated Persona, T0151.008: Microblogging Platform, T0150.004: Repurposed Asset).<br>

A video was created which appeared to support the campaign’s narrative (T0087: Develop Video-Based Content), with great attention given to small details which made the video appear more legitimate. | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0143.002: Fabricated Persona + +**Summary**: An individual or institution pretending to have a persona without any legitimate claim to that persona is presenting a fabricated persona, such as a person who presents themselves as a member of a country’s military without having worked in any capacity with the military (T0143.002: Fabricated Persona, T0097.105: Military Personnel).

Sometimes real people can present entirely fabricated personas; they can use real names and photos on social media while also pretending to have credentials or traits they don’t have in real life. + +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0143 Persona Legitimacy + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00069 Uncharmed: Untangling Iran's APT42 Operations](../../generated_pages/incidents/I00069.md) | “In March 2023, [Iranian state-sponsored cyber espionage actor] APT42 sent a spear-phishing email with a fake Google Meet invitation, allegedly sent on behalf of Mona Louri, a likely fake persona leveraged by APT42, claiming to be a human rights activist and researcher. Upon entry, the user was presented with a fake Google Meet page and asked to enter their credentials, which were subsequently sent to the attackers.”

In this example APT42, an Iranian state-sponsored cyber espionage actor, created an account which presented as a human rights activist (T0097.103: Activist Persona) and researcher (T0097.107: Researcher Persona). The analysts assert that the persona was likely fabricated (T0143.002: Fabricated Persona). | +| [I00074 The Tactics & Tropes of the Internet Research Agency](../../generated_pages/incidents/I00074.md) | “The Black Matters Facebook Page [operated by Russia’s Internet Research Agency] explored several visual brand identities, moving from a plain logo to a gothic typeface on Jan 19th, 2016. On February 4th, 2016, the person who ran the Facebook Page announced the launch of the website, blackmattersus[.]com, emphasizing media distrust and a desire to build Black independent media; [“I DIDN’T BELIEVE THE MEDIA / SO I BECAME ONE”]”<br>

In this example an asset controlled by Russia’s Internet Research Agency began to present itself as a source of “Black independent media”, claiming that the media could not be trusted (T0097.208: Social Cause Persona, T0097.202: News Outlet Persona, T0143.002: Fabricated Persona). | +| [I00076 Network of Social Media Accounts Impersonates U.S. Political Candidates, Leverages U.S. and Israeli Media in Support of Iranian Interests](../../generated_pages/incidents/I00076.md) | “In addition to directly posting material on social media, we observed some personas in the network [of inauthentic accounts attributed to Iran] leverage legitimate print and online media outlets in the U.S. and Israel to promote Iranian interests via the submission of letters, guest columns, and blog posts that were then published. We also identified personas that we suspect were fabricated for the sole purpose of submitting such letters, but that do not appear to maintain accounts on social media. The personas claimed to be based in varying locations depending on the news outlets they were targeting for submission; for example, a persona that listed their location as Seattle, WA in a letter submitted to the Seattle Times subsequently claimed to be located in Baytown, TX in a letter submitted to The Baytown Sun. Other accounts in the network then posted links to some of these letters on social media.”

In this example actors fabricated individuals who appeared to live in the areas being targeted for influence, and used these personas to submit letters to local papers (T0097.101: Local Persona, T0143.002: Fabricated Persona). | +| [I00077 Fronts & Friends: An Investigation into Two Twitter Networks Linked to Russian Actors](../../generated_pages/incidents/I00077.md) | “Two accounts [in the second network of accounts taken down by Twitter] appear to have been operated by Oriental Review and the Strategic Culture Foundation, respectively. Oriental Review bills itself as an “open source site for free thinking”, though it trades in outlandish conspiracy theories and posts content bylined by fake people. Stanford Internet Observatory researchers and investigative journalists have previously noted the presence of content bylined by fake “reporter” personas tied to the GRU-linked front Inside Syria Media Center, posted on Oriental Review.”<br>

In an effort to make the Oriental Review’s stories appear more credible, the threat actors created fake journalists and pretended they wrote the articles on their website (i.e., “bylined” them).

In DISARM terms, they fabricated journalists (T0143.002: Fabricated Persona, T0097.102: Journalist Persona), and then used these fabricated journalists to increase the outlet’s perceived legitimacy (T0097.202: News Outlet Persona, T0143.002: Fabricated Persona). | +| [I00078 Meta’s September 2020 Removal of Coordinated Inauthentic Behavior](../../generated_pages/incidents/I00078.md) | “[Meta has] removed one Page, five Facebook accounts, one Group and three Instagram accounts for foreign or government interference which is coordinated inauthentic behavior on behalf of a foreign or government entity. This small network originated in Russia and focused primarily on Turkey and Europe, and also on the United States.<br>

“This operation relied on fake accounts — some of which had been already detected and removed by our automated systems — to manage their Page and their Group, and to drive people to their site purporting to be an independent think-tank based primarily in Turkey. These accounts posed as locals based in Turkey, Canada and the US. They also recruited people to write for their website. This network had almost no following on our platforms when we removed it.”


Meta identified that a network of accounts originating in Russia was driving people off-platform to a site which presented itself as a think tank (T0097.204: Think Tank Persona). Meta did not make an attribution about the authenticity of this off-site think tank, so neither T0143.001: Authentic Persona nor T0143.002: Fabricated Persona is used here.<br>

Meta had access to technical data for accounts on its platform, and asserted that the accounts were fabricated individuals posing as locals who recruited targets to write content for their website (T0097.101: Local Persona, T0097.106: Recruiter Persona, T0143.002: Fabricated Persona). | +| [I00079 Three thousand fake tanks](../../generated_pages/incidents/I00079.md) | “The sixth [website to repost a confirmed false narrative investigated in this report] is an apparent think tank, the Center for Global Strategic Monitoring. This website describes itself, in English apparently written by a non-native speaker, as a “nonprofit and nonpartisan research and analysis institution dedicated to providing insights of the think tank community publications”. It does, indeed, publish think-tank reports on issues such as Turkey and US-China relations; however, the reports are the work of other think tanks, often unattributed (the two mentioned in this sentence were actually produced by the Brookings Institution, although the website makes no mention of the fact). It also fails to provide an address, or any other contact details other than an email, and its (long) list of experts includes entries apparently copied and pasted from other institutions. Thus, the “think tank” website which shared the fake story appears to be a fake itself.” In this example a website which amplified a false narrative presented itself as a think tank (T0097.204: Think Tank Persona).<br>

This is an entirely fabricated persona (T0143.002: Fabricated Persona); it republished content from other think tanks without attribution (T0084.002: Plagiarise Content) and fabricated experts (T0097.108: Expert Persona, T0143.002: Fabricated Persona) to make it more believable that it was a real think tank. | +| [I00080 Hundreds Of Propaganda Accounts Targeting Iran And Qatar Have Been Removed From Facebook](../../generated_pages/incidents/I00080.md) | “One example of a fake reporter account targeting Americans is “Jenny Powell,” a self-described Washington-based journalist, volunteer, and environmental activist. At first glance, Powell’s Twitter timeline looks like it belongs to a young and eager reporter amplifying her interests. But her profile photo is a stock image, and many of her links go to the propaganda sites.<br>

“Powell, who joined the platform just last month, shares links to stories from major US news media outlets, retweets local news about Washington, DC, and regularly promotes content from The Foreign Code and The Economy Club. Other fake journalist accounts behaved similarly to Powell and had generic descriptions. One of the accounts, for a fake Bruce Lopez in Louisiana, has a bio that describes him as a “Correspondent Traveler (noun) (linking verb) (noun/verb/adjective),” which appears to reveal the formula used to write Twitter bios for the accounts.”


The Jenny Powell account used in this influence operation presents as both a journalist and an activist (T0097.102: Journalist Persona, T0097.103: Activist Persona, T0143.002: Fabricated Persona). This example shows how threat actors can easily follow a template to present a fabricated persona to their target audience (T0144.002: Persona Template). | +| [I00081 Belarus KGB created fake accounts to criticize Poland during border crisis, Facebook parent company says](../../generated_pages/incidents/I00081.md) | “Meta said it also removed 31 Facebook accounts, four groups, two events and four Instagram accounts that it believes originated in Poland and targeted Belarus and Iraq. Those allegedly fake accounts posed as Middle Eastern migrants posting about the border crisis. Meta did not link the accounts to a specific group.<br>

““These fake personas claimed to be sharing their own negative experiences of trying to get from Belarus to Poland and posted about migrants’ difficult lives in Europe,” Meta said. “They also posted about Poland’s strict anti-migrant policies and anti-migrant neo-Nazi activity in Poland. They also shared links to news articles criticizing the Belarusian government’s handling of the border crisis and off-platform videos alleging migrant abuse in Europe.””


In this example accounts falsely presented themselves as having local insight into the border crisis (T0097.101: Local Persona, T0143.002: Fabricated Persona). | +| [I00086 #WeAreNotSafe – Exposing How a Post-October 7th Disinformation Network Operates on Israeli Social Media](../../generated_pages/incidents/I00086.md) | In this report accounts were identified as part of “a sophisticated and extensive coordinated network orchestrating a disinformation campaign targeting Israeli digital spaces since October 7th, 2023”.<br>

“A core component of the detection methodology was applying qualitative linguistic analysis. This involved checking the fingerprint of language, syntax, and style used in the comments and profile of the suspected account. Each account bio consistently incorporated a combination of specific elements: emojis, nationality, location, educational institution or occupation, age, and a personal quote, sports team or band. The recurrence of this specific formula across multiple accounts hinted at a standardized template for bio construction.”

This example shows how actors can follow a templated formula to present a persona on social media platforms (T0143.002: Fabricated Persona, T0144.002: Persona Template). | +| [I00089 Hackers Use Fake Facebook Profiles of Attractive Women to Spread Viruses, Steal Passwords](../../generated_pages/incidents/I00089.md) | “On Facebook, Rita, Alona and Christina appeared to be just like the millions of other U.S citizens sharing their lives with the world. They discussed family outings, shared emojis and commented on each other's photographs.

“In reality, the three accounts were part of a highly-targeted cybercrime operation, used to spread malware that was able to steal passwords and spy on victims.

“Hackers with links to Lebanon likely ran the covert scheme using a strain of malware dubbed "Tempting Cedar Spyware," according to researchers from Prague-based anti-virus company Avast, which detailed its findings in a report released on Wednesday.

“In a honey trap tactic as old as time, the culprits' targets were mostly male, and lured by fake attractive women. 

“In the attack, hackers would send flirtatious messages using Facebook to the chosen victims, encouraging them to download a second, booby-trapped, chat application known as Kik Messenger to have "more secure" conversations. Upon analysis, Avast experts found that "many fell for the trap.””<br>


In this example threat actors took on the persona of a romantic suitor on Facebook, directing their targets to another platform (T0097.109: Romantic Suitor Persona, T0145.006: Attractive Person Account Imagery, T0143.002: Fabricated Persona). | +| [I00091 Facebook uncovers Chinese network behind fake expert](../../generated_pages/incidents/I00091.md) | “Earlier in July [2021], an account posing as a Swiss biologist called Wilson Edwards had made statements on Facebook and Twitter that the United States was applying pressure on the World Health Organization scientists who were studying the origins of Covid-19 in an attempt to blame the virus on China.<br>

“State media outlets, including CGTN, Shanghai Daily and Global Times, had cited the so-called biologist based on his Facebook profile.

“However, the Swiss embassy said in August that the person likely did not exist, as the Facebook account was opened only two weeks prior to its first post and only had three friends.

“It added "there was no registry of a Swiss citizen with the name "Wilson Edwards" and no academic articles under the name", and urged Chinese media outlets to take down any mention of him.

[...]

“It also said that his profile photo also appeared to have been generated using machine-learning capabilities.”


In this example an account created on Facebook presented itself as a Swiss biologist to promote a narrative related to COVID-19 (T0143.002: Fabricated Persona, T0097.107: Researcher Persona). It used an AI-generated profile picture to disguise itself (T0145.002: AI-Generated Account Imagery). | +| [I00095 Meta: Chinese disinformation network was behind London front company recruiting content creators](../../generated_pages/incidents/I00095.md) | “A Chinese disinformation network operating fictitious employee personas across the internet used a front company in London to recruit content creators and translators around the world, according to Meta.<br>

“The operation used a company called London New Europe Media, registered to an address on the upmarket Kensington High Street, that attempted to recruit real people to help it produce content. It is not clear how many people it ultimately recruited.

“London New Europe Media also “tried to engage individuals to record English-language videos scripted by the network,” in one case leading to a recording criticizing the United States being posted on YouTube, said Meta”.


In this example a front company was used (T0097.205: Business Persona) to enable actors to recruit targets for producing content (T0097.106: Recruiter Persona, T0143.002: Fabricated Persona). | +| [I00096 China ramps up use of AI misinformation](../../generated_pages/incidents/I00096.md) | The Microsoft Threat Analysis Centre (MTAC) published a report documenting the use of AI by pro-Chinese threat actors:

On 13 January, Spamouflage [(a Pro-Chinese Communist Party actor)] posted audio clips to YouTube of independent candidate [for Taiwan’s Jan 2024 presidential election] Terry Gou – who also founded electronics giant Foxconn – in which Gou endorsed another candidate in the race. This clip was almost certainly AI-generated, and it was swiftly removed by YouTube. A fake letter purporting to be from Gou, endorsing the same candidate, had already circulated – Gou had of course made no such endorsement.

Here Spamouflage used an account on YouTube to post AI-generated audio impersonating an electoral candidate (T0146: Account Asset, T0152.006: Video Platform, T0115: Post Content, T0088.001: Develop AI-Generated Audio (Deepfakes), T0143.003: Impersonated Persona, T0097.110: Party Official Persona).<br>

Spamouflage also exploited AI-powered video platform CapCut – which is owned by TikTok backers ByteDance – to generate fake news anchors which were used in a variety of campaigns targeting the various presidential candidates in Taiwan.

Spamouflage created accounts on CapCut, which it used to create AI-generated videos of fabricated news anchors (T0146: Account Asset, T0154.002: AI Media Platform, T0087.001: Develop AI-Generated Video (Deepfakes), T0143.002: Fabricated Persona, T0097.102: Journalist Persona). | +| [I00120 factcheckUK or fakecheckUK? Reinventing the political faction as the impartial factchecker](../../generated_pages/incidents/I00120.md) | Ahead of the 2019 UK election, during a leaders’ debate, the Conservative party rebranded their “Conservative Campaign Headquarters Press” account to “factcheckUK”:<br>

The evening of the 19th November 2019 saw the first of three Leaders’ Debates on ITV, starting at 8pm and lasting for an hour. Current Prime Minister and leader of the Conservatives, Boris Johnson faced off against Labour party leader, Jeremy Corbyn. Plenty of people will have been watching the debate live, but a good proportion were “watching” (er, “twitching”?) via Twitter. This is something I’ve done in the past for certain shows. In some cases I just can’t watch or listen, but I can read, and in other cases, the commentary is far more interesting and entertaining than the show itself will ever be. This, for me, is just such a case. But very quickly, all eyes turned upon a modestly sized account with the handle @CCHQPress. That’s short for Conservative Campaign Headquarters Press. According to their (current!) Twitter bio, they are based in Westminster and they provide “snippets of news and commentary from CCHQ” to their 75k followers.

That is, until a few minutes into the debate.

All at once, like a person throwing off their street clothes to reveal some sinister new identity underneath, @CCHQPress abruptly shed its name, blue Conservative logo, Boris Johnson banner, and bio description. Moments later, it had entirely reinvented itself.

The purple banner was emblazoned with white font that read “✓ factcheckUK [with a “FROM CCHQ” subheading]”.<br>

The matching profile picture was a white tick in a purple circle. The bio was updated to: “Fact checking Labour from CCHQ”. And the name now read factcheckUK, with the customary Twitter blue (or white depending on your phone settings!) validation tick still after it


In this example an existing verified social media account on Twitter was repurposed to inauthentically present itself as a fact-checking service (T0151.008: Microblogging Platform, T0150.003: Pre-Existing Asset, T0146.003: Verified Account Asset, T0097.203: Fact Checking Organisation Persona, T0143.002: Fabricated Persona). | +| [I00121 Operation Overload: how pro-Russian actors flood newsrooms with fake content and seek to divert their efforts](../../generated_pages/incidents/I00121.md) | The unique aspect of Operation Overload is a barrage of emails sent to newsrooms and fact-checkers across Europe. The authors of these messages urge recipients to verify content allegedly found online. The email subject lines often include an incitement to verify the claims briefly described in the message body. This is followed by a short list of links directing recipients to posts on Telegram, X, or known pro-Russian websites, including Pravda and Sputnik.<br>

We have collected 221 emails sent to 20 organisations. The organisations mostly received identical emails urging them to fact-check specific false stories, which demonstrates that the emails were sent as part of a larger coordinated campaign.

[...]

The authors of the emails do not hide their intention to see the fake content widely spread. In February 2024, a journalist at the German outlet CORRECTIV engaged with the sender of one of the emails, providing feedback on the narratives which were originally sent. CORRECTIV received a response from the same Gmail address, initially expressing respect and trust in CORRECTIV’s assessment, while asking: “is it possible for your work to be seen by as many people as possible?”, thereby clearly stating the goal of the operation.

[...]

All the emails come from authors posing as concerned citizens. All emails are sent with Gmail accounts, which is typical for personal use. This makes it challenging to identify the individuals behind these emails, as anyone can open a Gmail account for free. The email headers indicate that the messages were sent from the Gmail interface, not from a personal client which would disclose the sender’s IP address.


In this example, threat actors used Gmail accounts (T0146.001: Free Account Asset, T0097.100: Individual Persona, T0143.002: Fabricated Persona, T0153.001: Email Platform) to target journalists and fact-checkers, with the apparent goal of having them amplify operation narratives through fact checks. | +| [I00125 The Agency](../../generated_pages/incidents/I00125.md) | In 2014 threat actors attributed to Russia spread the false narrative that a local chemical plant had leaked toxic fumes. This report discusses aspects of the operation:<br>

[The chemical plant leak] hoax was just one in a wave of similar attacks during the second half of last year. On Dec. 13, two months after a handful of Ebola cases in the United States touched off a minor media panic, many of the same Twitter accounts used to spread the Columbian Chemicals hoax began to post about an outbreak of Ebola in Atlanta. [...] Again, the attention to detail was remarkable, suggesting a tremendous amount of effort. A YouTube video showed a team of hazmat-suited medical workers transporting a victim from the airport. Beyoncé’s recent single “7/11” played in the background, an apparent attempt to establish the video’s contemporaneity. A truck in the parking lot sported the logo of the Hartsfield-Jackson Atlanta International Airport.

Accounts which previously presented as Louisiana locals were repurposed for use in a different campaign, this time presenting as locals of Atlanta, a city over 500 miles from Louisiana and in a different time zone (T0146: Account Asset, T0097.101: Local Persona, T0143.002: Fabricated Persona, T0151.008: Microblogging Platform, T0150.004: Repurposed Asset).<br>

A video was created which appeared to support the campaign’s narrative (T0087: Develop Video-Based Content), with great attention given to small details which made the video appear more legitimate. | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0143.002: Fabricated Persona + +**Summary**: An individual or institution pretending to have a persona without any legitimate claim to that persona is presenting a fabricated persona, such as a person who presents themselves as a member of a country’s military without having worked in any capacity with the military (T0143.002: Fabricated Persona, T0097.105: Military Personnel).

Sometimes real people can present entirely fabricated personas; they can use real names and photos on social media while also pretending to have credentials or traits they don’t have in real life. + **Tactic**: TA16 Establish Legitimacy @@ -35,4 +106,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0143.003.md b/generated_pages/techniques/T0143.003.md index c2c6ffe..41de338 100644 --- a/generated_pages/techniques/T0143.003.md +++ b/generated_pages/techniques/T0143.003.md @@ -2,6 +2,81 @@ **Summary**: Threat actors may impersonate existing individuals or institutions to conceal their network identity, add legitimacy to content, or harm the impersonated target’s reputation. This Technique covers situations where an actor presents themselves as another existing individual or institution.

This Technique was previously called Prepare Assets Impersonating Legitimate Entities and used the ID T0099. +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0143 Persona Legitimacy + + +| Associated Technique | Description | +| --------- | ------------------------- | +| [T0097 Present Persona](../../generated_pages/techniques/T0097.md) | Analysts can use the sub-techniques of T0097: Present Persona to categorise the type of impersonation. For example, a document developed by a threat actor which falsely presented as a letter from a government department could be documented using T0085.004: Develop Document, T0143.003: Impersonated Persona, and T0097.206: Government Institution Persona. | +| [T0145.001 Copy Account Imagery](../../generated_pages/techniques/T0145.001.md) | Actors may take existing accounts’ profile pictures as part of their impersonation efforts. | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00064 Tinder nightmares: the promise and peril of political bots](../../generated_pages/incidents/I00064.md) | “In the days leading up to the UK’s [2019] general election, youths looking for love online encountered a whole new kind of Tinder nightmare. A group of young activists built a Tinder chatbot to co-opt profiles and persuade swing voters to support Labour. The bot accounts sent 30,000-40,000 messages to targeted 18-25 year olds in battleground constituencies like Dudley North, which Labour ended up winning by only 22 votes. [...]<br>

“The activists maintain that the project was meant to foster democratic engagement. But screenshots of the bots’ activity expose a harsher reality. Images of conversations between real users and these bots, posted on i-D, Mashable, as well as on Fowler and Goodman’s public Twitter accounts, show that the bots did not identify themselves as automated accounts, instead posing as the user whose profile they had taken over. While conducting research for this story, it turned out that a number of [the reporters’ friends] living in Oxford had interacted with the bot in the lead up to the election and had no idea that it was not a real person.”


In this example people offered up their real accounts for the automation of political messaging; the actors convinced users to hand over access to their accounts for use in the operation. The actors maintained the accounts’ existing personas, and presented themselves as potential romantic suitors to legitimate platform users (T0097.109: Romantic Suitor Persona, T0143.003: Impersonated Persona, T0146: Account Asset, T0150.007: Rented Asset, T0151.017: Dating Platform). | +| [I00068 Attempted Audio Deepfake Call Targets LastPass Employee](../../generated_pages/incidents/I00068.md) | “While reports of [...] deepfake calls targeting private companies are luckily still rare, LastPass itself experienced a deepfake attempt earlier today that we are sharing with the larger community to help raise awareness that this tactic is spreading and all companies should be on the alert. In our case, an employee received a series of calls, texts, and at least one voicemail featuring an audio deepfake from a threat actor impersonating our CEO via WhatsApp. As the attempted communication was outside of normal business communication channels and due to the employee’s suspicion regarding the presence of many of the hallmarks of a social engineering attempt (such as forced urgency), our employee rightly ignored the messages and reported the incident to our internal security team so that we could take steps to both mitigate the threat and raise awareness of the tactic both internally and externally.”<br>

In this example attackers created an account on WhatsApp which impersonated the CEO of LastPass (T0097.100: Individual Persona, T0143.003: Impersonated Persona, T0146: Account Asset, T0151.004: Chat Platform, T0155.007: Encrypted Communication Channel). They used this asset to target an employee using deepfaked audio (T0088.001: Develop AI-Generated Audio (Deepfakes)). | +| [I00069 Uncharmed: Untangling Iran's APT42 Operations](../../generated_pages/incidents/I00069.md) | “[Iranian state-sponsored cyber espionage actor] APT42 cloud operations attack lifecycle can be described in details as follows:<br>

- “Social engineering schemes involving decoys and trust building, which includes masquerading as legitimate NGOs and conducting ongoing correspondence with the target, sometimes lasting several weeks.
- The threat actor masqueraded as well-known international organizations in the legal and NGO fields and sent emails from domains typosquatting the original NGO domains, for example aspenlnstitute[.]org.
- The Aspen Institute became aware of this spoofed domain and collaborated with industry partners, including blocking it in SafeBrowsing, thus protecting users of Google Chrome and additional browsers.
- To increase their credibility, APT42 impersonated high-ranking personnel working at the aforementioned organizations when creating the email personas.
- APT42 enhanced their campaign credibility by using decoy material inviting targets to legitimate and relevant events and conferences. In one instance, the decoy material was hosted on an attacker-controlled SharePoint folder, accessible only after the victim entered their credentials. Mandiant did not identify malicious elements in the files, suggesting they were used solely to gain the victim’s trust.”


In this example APT42, an Iranian state-sponsored cyber espionage actor, created a domain impersonating the existing NGO The Aspen Institute (T0143.003: Impersonated Persona, T0097.207: NGO Persona). They increased the perceived legitimacy of the impersonation by also impersonating high-ranking employees of the NGO (T0097.100: Individual Persona, T0143.003: Impersonated Persona). | +| [I00070 Eli Lilly Clarifies It’s Not Offering Free Insulin After Tweet From Fake Verified Account—As Chaos Unfolds On Twitter](../../generated_pages/incidents/I00070.md) | “Twitter Blue launched [November 2022], giving any users who pay $8 a month the ability to be verified on the site, a feature previously only available to public figures, government officials and journalists as a way to show they are who they claim to be.

“[A day after the launch], an account with the handle @EliLillyandCo labeled itself with the name “Eli Lilly and Company,” and by using the same logo as the company in its profile picture and with the verification checkmark, was indistinguishable from the real company (the picture has since been removed and the account has labeled itself as a parody profile).

The parody account tweeted “we are excited to announce insulin is free now.””


In this example an account impersonated the pharmaceutical company Eli Lilly (T0097.205: Business Persona, T0143.003: Impersonated Persona) by copying its name and profile picture (T0145.001: Copy Account Imagery) and by paying for verification. | +| [I00071 Russia-aligned hacktivists stir up anti-Ukrainian sentiments in Poland](../../generated_pages/incidents/I00071.md) | “The August 17 [2022] Telegram post [which contained a falsified letter from the Ukrainian Minister of Foreign Affairs asking Poland to rename Belwederska Street in Warsaw — the location of the Russian embassy building — as Stepan Bandera Street, in honor of the far-right nationalist who led the Ukrainian Insurgent Army during WWII] also contained screenshots of Facebook posts that appeared on two Facebook accounts belonging to Polish nationals Piotr Górka, an expert in the history of the Polish Air Force, and Dariusz Walusiak, a Polish historian and documentary maker. The Górka post suggested that he fully supported the Polish government’s decision to change Belwederska Street to Stepan Bandera Street.<br>

“In a statement to the DFRLab, Górka said his account was accessed without his consent. “This is not my post loaded to my Facebook page,” he explained. “My site was hacked, some days ago.” At the time of publishing, Piotr Górka’s post and his Facebook account were no longer accessible.

“The post on Górka’s Facebook page was shared by Dariusz Walusiak’s Facebook account; the account also reposted it on the Facebook walls of more than twenty other Facebook users, including Adam Kalita, currently working at Krakow branch of the Institute of National Remembrance; Jan Kasprzyk, head of the Office for War Veterans and Victims of Oppression; and Alicja Kondraciuk, a Polish public figure living in Krakow.

“Walusiak’s Facebook account is also no longer accessible. Given his work on Polish history and identity, it seems highly unlikely he would support the Bandera measure; the DFRLab has also reached out to him for comment.

“The fact that Joker DPR’s Telegram post included screenshots of their Facebook posts raises the strong possibility that both Facebook accounts were compromised, and that hackers planted false statements on their pages that would seem out of character for them in order to gain further attention to the forged documents.”


In this example, threat actors used compromised accounts of Polish historians who have enough relevant knowledge to plausibly weigh in on the forged letter’s narrative (T0143.003: Impersonated Persona, T0097.101: Local Persona, T0097.108: Expert Persona, T0146: Account Asset, T0150.005: Compromised Asset, T0151.001: Social Media Platform).

This matches T0097.108: Expert Persona because the impersonation exploited Górka and Walusiak’s existing personas as experts in Polish history. | +| [I00075 How Russia Meddles Abroad for Profit: Cash, Trolls and a Cult Leader](../../generated_pages/incidents/I00075.md) | “In the campaign’s final weeks, Pastor Mailhol said, the team of Russians made a request: Drop out of the race and support Mr. Rajoelina. He refused.

“The Russians made the same proposal to the history professor running for president, saying, “If you accept this deal you will have money” according to Ms. Rasamimanana, the professor’s campaign manager.

When the professor refused, she said, the Russians created a fake Facebook page that mimicked his official page and posted an announcement on it that he was supporting Mr. Rajoelina.”


In this example actors created online accounts styled to look like official pages to trick targets into thinking that the presidential candidate had announced they were dropping out of the election (T0097.110: Party Official Persona, T0143.003: Impersonated Persona). | +| [I00076 Network of Social Media Accounts Impersonates U.S. Political Candidates, Leverages U.S. and Israeli Media in Support of Iranian Interests](../../generated_pages/incidents/I00076.md) | “Some Twitter accounts in the network [of inauthentic accounts attributed to Iran] impersonated Republican political candidates that ran for House of Representatives seats in the 2018 U.S. congressional midterms. These accounts appropriated the candidates’ photographs and, in some cases, plagiarized tweets from the real individuals’ accounts. Aside from impersonating real U.S. political candidates, the behavior and activity of these accounts resembled that of the others in the network.<br>

“For example, the account @livengood_marla impersonated Marla Livengood, a 2018 candidate for California’s 9th Congressional District, using a photograph of Livengood and a campaign banner for its profile and background pictures. The account began tweeting on Sept. 24, 2018, with its first tweet plagiarizing one from Livengood’s official account earlier that month”

[...]

“In another example, the account @ButlerJineea impersonated Jineea Butler, a 2018 candidate for New York’s 13th Congressional District, using a photograph of Butler for its profile picture and incorporating her campaign slogans into its background picture, as well as claiming in its Twitter bio to be a “US House candidate, NY-13” and linking to Butler’s website, jineeabutlerforcongress[.]com.”


In this example actors impersonated existing political candidates (T0097.110: Party Official Persona, T0143.003: Impersonated Persona), strengthening the impersonation by copying the legitimate accounts’ imagery (T0145.001: Copy Account Imagery) and copying their previous posts (T0084.002: Plagiarise Content). | +| [I00082 Meta’s November 2021 Adversarial Threat Report ](../../generated_pages/incidents/I00082.md) | “[Meta] removed a network of accounts in Vietnam for violating our Inauthentic Behavior policy against mass reporting. They coordinated the targeting of activists and other people who publicly criticized the Vietnamese government and used false reports of various violations in an attempt to have these users removed from our platform. The people behind this activity relied primarily on authentic and duplicate accounts to submit hundreds — in some cases, thousands — of complaints against their targets through our abuse reporting flows.<br>

“Many operators also maintained fake accounts — some of which were detected and disabled by our automated systems — to pose as their targets so they could then report the legitimate accounts as fake. They would frequently change the gender and name of their fake accounts to resemble the target individual. Among the most common claims in this misleading reporting activity were complaints of impersonation, and to a much lesser extent inauthenticity. The network also advertised abusive services in their bios and constantly evolved their tactics in an attempt to evade detection.“


In this example actors repurposed their accounts to impersonate targeted activists (T0097.103: Activist Persona, T0143.003: Impersonated Persona) in order to falsely report the activists’ legitimate accounts as impersonations (T0124.001: Report Non-Violative Opposing Content). | +| [I00087 Challenging Truth and Trust: A Global Inventory of Organized Social Media Manipulation](../../generated_pages/incidents/I00087.md) | “Another actor operating in China is the American-based company Devumi. Most of the Twitter accounts managed by Devumi resemble real people, and some are even associated with a kind of large-scale social identity theft. At least 55,000 of the accounts use the names, profile pictures, hometowns and other personal details of real Twitter users, including minors, according to The New York Times (Confessore et al., 2018).”<br>

In this example accounts impersonated real locals while spreading operation narratives (T0143.003: Impersonated Persona, T0097.101: Local Persona). The impersonation included stealing the legitimate accounts’ profile pictures (T0145.001: Copy Account Imagery). | +| [I00094 A glimpse inside a Chinese influence campaign: How bogus news websites blur the line between true and false](../../generated_pages/incidents/I00094.md) | Researchers identified websites managed by a Chinese marketing firm which presented themselves as news organisations.

“On its official website, the Chinese marketing firm boasted that they were in contact with news organizations across the globe, including one in South Korea called the “Chungcheng Times.” According to the joint team, this outlet is a fictional news organization created by the offending company. The Chinese company sought to disguise the site’s true identity and purpose by altering the name attached to it by one character—making it very closely resemble the name of a legitimate outlet operating out of Chungchengbuk-do.

“The marketing firm also established a news organization under the Korean name “Gyeonggido Daily,” which closely resembles legitimate news outlets operating out of Gyeonggi province such as “Gyeonggi Daily,” “Daily Gyeonggi Newspaper,” and “Gyeonggi N Daily.” One of the fake news sites was named “Incheon Focus,” a title that could be easily mistaken for the legitimate local news outlet, “Focus Incheon.” Furthermore, the Chinese marketing company operated two fake news sites with names identical to two separate local news organizations, one of which ceased operations in December 2022.

“In total, fifteen out of eighteen Chinese fake news sites incorporated the correct names of real regions in their fake company names. “If the operators had created fake news sites similar to major news organizations based in Seoul, however, the intended deception would have easily been uncovered,” explained Song Tae-eun, an assistant professor in the Department of National Security & Unification Studies at the Korea National Diplomatic Academy, to The Readable. “There is also the possibility that they are using the regional areas as an attempt to form ties with the local community; that being the government, the private sector, and religious communities.””


The firm styled its news sites to resemble existing local news outlets in its target regions (T0097.201: Local Institution Persona, T0097.202: News Outlet Persona, T0143.003: Impersonated Persona). | +| [I00107 The Lies Russia Tells Itself](../../generated_pages/incidents/I00107.md) | The Moscow firm Social Design Agency (SDA) has been identified as being behind a Russian disinformation project known as Doppelganger:<br>

The SDA’s deception work first surfaced in 2022, likely almost immediately after Doppelganger got off the ground. In April of that year, Meta, the parent company of Facebook and Instagram, disclosed in a quarterly report that it had removed from its platforms “a network of about 200 accounts operated from Russia.” By August 2022, German investigative journalists revealed that they had discovered forgeries of about 30 news sites, including many of the country’s biggest media outlets—Frankfurter Allgemeine, Der Spiegel, and Bild—but also Britain’s Daily Mail and France’s 20 Minutes. The sites had deceptive URLs such as www-dailymail-co-uk.dailymail.top.


As part of its work, the SDA created many websites which impersonated existing media outlets. The sites used lookalike domains to increase the perceived legitimacy of their impersonations (T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0152.003: Website Hosting Platform, T0149.003: Lookalike Domain). | +| [I00116 Blue-tick scammers target consumers who complain on X](../../generated_pages/incidents/I00116.md) | Consumers who complain of poor customer service on X are being targeted by scammers after the social media platform formerly known as Twitter changed its account verification process.<br>

Bank customers and airline passengers are among those at risk of phishing scams when they complain to companies via X. Fraudsters, masquerading as customer service agents, respond under fake X handles and trick victims into disclosing their bank details to get a promised refund.

They typically win the trust of victims by displaying the blue checkmark icon, which until this year denoted accounts that had been officially verified by X.

Changes introduced this year allow the icon to be bought by anyone who pays an £11 monthly fee for the site’s subscription service, renamed this month from Twitter Blue to X Premium. Businesses that pay £950 a month receive a gold tick. X’s terms and conditions do not state whether subscriber accounts are pre-vetted.

Andrew Thomas was contacted by a scam account after posting a complaint to the travel platform Booking.com. “I’d been trying since April to get a refund after our holiday flights were cancelled and finally resorted to X,” he said.

“I received a response asking me to follow them, and DM [direct message] them with a contact number. They then called me via WhatsApp asking for my reference number so they could investigate. Later they called back to say that I would be refunded via their payment partner for which I’d need to download an app.”

Thomas became suspicious and checked the X profile. “It looked like the real thing, but I noticed that there was an unexpected hyphen in the Twitter handle and that it had only joined X in July 2023,” he said.


In this example a newly created paid account on X was used to direct users to other platforms (T0146.002: Paid Account Asset, T0146.003: Verified Account Asset, T0146.005: Lookalike Account ID, T0097.205: Business Persona, T0122: Direct Users to Alternative Platforms, T0143.003: Impersonated Persona, T0151.008: Microblogging Platform, T0150.001: Newly Created Asset). | +| [I00126 Charming Kitten Updates POWERSTAR with an InterPlanetary Twist](../../generated_pages/incidents/I00126.md) | The target of the recently observed [highly targeted spearphishing attack by “Charming Kitten”, a hacker group attributed to Iran] had published an article related to Iran. The publicity appears to have garnered the attention of Charming Kitten, who subsequently created an email address to impersonate a reporter of an Israeli media organization in order to send the target an email. Prior to sending malware to the target, the attacker simply asked if the target would be open to reviewing a document they had written related to US foreign policy. The target agreed to do so, since this was not an unusual request; they are frequently asked by journalists to review opinion pieces relating to their field of work.

In an effort to further gain the target’s confidence, Charming Kitten continued the interaction with another benign email containing a list of questions, to which the target then responded with answers. After multiple days of benign and seemingly legitimate interaction, Charming Kitten finally sent a “draft report”; this was the first time anything opaquely malicious occurred. The “draft report” was, in fact, a password-protected RAR file containing a malicious LNK file. The password for the RAR file was provided in a subsequent email.


In this example, threat actors created an email address on a lookalike domain impersonating an existing Israeli news organisation, and posed as a reporter who worked there (T0097.102: Journalist Persona, T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0149.003: Lookalike Domain, T0149.002: Email Domain Asset) in order to convince the target to download a document containing malware (T0085.004: Develop Document, T0147.003: Malware Asset). | +| [I00127 Iranian APTs Dress Up as Hacktivists for Disruption, Influence Ops](../../generated_pages/incidents/I00127.md) | Iranian state-backed advanced persistent threat (APT) groups have been masquerading as hacktivists, claiming attacks against Israeli critical infrastructure and air defense systems.

[...]

What's clearer are the benefits of the model itself: creating a layer of plausible deniability for the state, and the impression among the public that their attacks are grassroots-inspired. While this deniability has always been a key driver with state-sponsored cyberattacks, researchers characterized this instance as noteworthy for the effort behind the charade.

"We've seen a lot of hacktivist activity that seems to be nation-states trying to have that 'deniable' capability," Adam Meyers, CrowdStrike senior vice president for counter adversary operations said in a press conference this week. "And so these groups continue to maintain activity, moving from what was traditionally website defacements and DDoS attacks, into a lot of hack and leak operations."

To sell the persona, faketivists like to adopt the aesthetic, rhetoric, tactics, techniques, and procedures (TTPs), and sometimes the actual names and iconography associated with legitimate hacktivist outfits. Keen eyes will spot that they typically arise just after major geopolitical events, without an established history of activity, in alignment with the interests of their government sponsors.

Oftentimes, it's difficult to separate the faketivists from the hacktivists, as each might promote and support the activities of the other.


In this example analysts from CrowdStrike assert that hacker groups took on the persona of hacktivists to disguise the state-backed nature of their cyber attack campaign (T0097.104: Hacktivist Persona). At times state-backed hacktivists will impersonate existing hacktivist organisations (T0097.104: Hacktivist Persona, T0143.003: Impersonated Persona). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0143.003: Impersonated Persona + +**Summary**: Threat actors may impersonate existing individuals or institutions to conceal their network identity, add legitimacy to content, or harm the impersonated target’s reputation. This Technique covers situations where an actor presents themselves as another existing individual or institution.

This Technique was previously called Prepare Assets Impersonating Legitimate Entities and used the ID T0099. + +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0143 Persona Legitimacy + + +| Associated Technique | Description | +| --------- | ------------------------- | +| [T0097 Present Persona](../../generated_pages/techniques/T0097.md) | Analysts can use the sub-techniques of T0097: Presented Persona to categorise the type of impersonation. For example, a document developed by a threat actor which falsely presented as a letter from a government department could be documented using T0085.004: Develop Document, T0143.003: Impersonated Persona, and T0097.206: Government Institution Persona. | +| [T0145.001 Copy Account Imagery](../../generated_pages/techniques/T0145.001.md) | Actors may take existing accounts’ profile pictures as part of their impersonation efforts. | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00064 Tinder nightmares: the promise and peril of political bots](../../generated_pages/incidents/I00064.md) | “In the days leading up to the UK’s [2019] general election, youths looking for love online encountered a whole new kind of Tinder nightmare. A group of young activists built a Tinder chatbot to co-opt profiles and persuade swing voters to support Labour. The bot accounts sent 30,000-40,000 messages to targeted 18-25 year olds in battleground constituencies like Dudley North, which Labour ended up winning by only 22 votes. [...]

“The activists maintain that the project was meant to foster democratic engagement. But screenshots of the bots’ activity expose a harsher reality. Images of conversations between real users and these bots, posted on i-D, Mashable, as well as on Fowler and Goodman’s public Twitter accounts, show that the bots did not identify themselves as automated accounts, instead posing as the user whose profile they had taken over. While conducting research for this story, it turned out that a number of [the reporters’ friends] living in Oxford had interacted with the bot in the lead up to the election and had no idea that it was not a real person.”


In this example people offered up their real accounts for the automation of political messaging; the actors convinced users to hand over access to their accounts for use in the operation. The actors maintained the accounts’ existing personas, and presented themselves as potential romantic suitors for legitimate platform users (T0097.109: Romantic Suitor Persona, T0143.003: Impersonated Persona, T0146: Account Asset, T0150.007: Rented Asset, T0151.017: Dating Platform). | +| [I00068 Attempted Audio Deepfake Call Targets LastPass Employee](../../generated_pages/incidents/I00068.md) | “While reports of [...] deepfake calls targeting private companies are luckily still rare, LastPass itself experienced a deepfake attempt earlier today that we are sharing with the larger community to help raise awareness that this tactic is spreading and all companies should be on the alert. In our case, an employee received a series of calls, texts, and at least one voicemail featuring an audio deepfake from a threat actor impersonating our CEO via WhatsApp. As the attempted communication was outside of normal business communication channels and due to the employee’s suspicion regarding the presence of many of the hallmarks of a social engineering attempt (such as forced urgency), our employee rightly ignored the messages and reported the incident to our internal security team so that we could take steps to both mitigate the threat and raise awareness of the tactic both internally and externally.”<br>

In this example attackers created an account on WhatsApp which impersonated the CEO of LastPass (T0097.100: Individual Persona, T0143.003: Impersonated Persona, T0146: Account Asset, T0151.004: Chat Platform, T0155.007: Encrypted Communication Channel). They used this asset to target an employee using deepfaked audio (T0088.001: Develop AI-Generated Audio (Deepfakes)). | +| [I00069 Uncharmed: Untangling Iran's APT42 Operations](../../generated_pages/incidents/I00069.md) | “[Iranian state-sponsored cyber espionage actor] APT42 cloud operations attack lifecycle can be described in details as follows:<br>

- “Social engineering schemes involving decoys and trust building, which includes masquerading as legitimate NGOs and conducting ongoing correspondence with the target, sometimes lasting several weeks.
- The threat actor masqueraded as well-known international organizations in the legal and NGO fields and sent emails from domains typosquatting the original NGO domains, for example aspenlnstitute[.]org.
- The Aspen Institute became aware of this spoofed domain and collaborated with industry partners, including blocking it in SafeBrowsing, thus protecting users of Google Chrome and additional browsers.
- To increase their credibility, APT42 impersonated high-ranking personnel working at the aforementioned organizations when creating the email personas.
- APT42 enhanced their campaign credibility by using decoy material inviting targets to legitimate and relevant events and conferences. In one instance, the decoy material was hosted on an attacker-controlled SharePoint folder, accessible only after the victim entered their credentials. Mandiant did not identify malicious elements in the files, suggesting they were used solely to gain the victim’s trust.”
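One of the quoted bullets notes that APT42 sent emails from domains typosquatting the original NGO domains, such as aspenlnstitute[.]org, where a lowercase “l” stands in for the “i” of the legitimate domain. A minimal sketch of catching such near-matches with Python’s standard library, assuming a hypothetical watchlist of protected domains (the 0.9 similarity cutoff is an illustrative choice):

```python
from difflib import SequenceMatcher

PROTECTED = ["aspeninstitute.org"]  # hypothetical watchlist

def is_probable_typosquat(candidate: str, cutoff: float = 0.9) -> bool:
    for legit in PROTECTED:
        similarity = SequenceMatcher(None, candidate, legit).ratio()
        # Very similar, but not identical, to a protected domain.
        if candidate != legit and similarity >= cutoff:
            return True
    return False

print(is_probable_typosquat("aspenlnstitute.org"))  # True: one character off
print(is_probable_typosquat("aspeninstitute.org"))  # False: exact match
```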


In this example APT42, an Iranian state-sponsored cyber espionage actor, created a domain impersonating the existing NGO The Aspen Institute (T0143.003: Impersonated Persona, T0097.207: NGO Persona). They increased the perceived legitimacy of the impersonation by also impersonating high-ranking employees of the NGO (T0097.100: Individual Persona, T0143.003: Impersonated Persona). | +| [I00070 Eli Lilly Clarifies It’s Not Offering Free Insulin After Tweet From Fake Verified Account—As Chaos Unfolds On Twitter](../../generated_pages/incidents/I00070.md) | “Twitter Blue launched [November 2022], giving any users who pay $8 a month the ability to be verified on the site, a feature previously only available to public figures, government officials and journalists as a way to show they are who they claim to be.

“[A day after the launch], an account with the handle @EliLillyandCo labeled itself with the name “Eli Lilly and Company,” and by using the same logo as the company in its profile picture and with the verification checkmark, was indistinguishable from the real company (the picture has since been removed and the account has labeled itself as a parody profile).

The parody account tweeted “we are excited to announce insulin is free now.””


In this example an account impersonated the pharmaceutical company Eli Lilly (T0097.205: Business Persona, T0143.003: Impersonated Persona) by copying its name, profile picture (T0145.001: Copy Account Imagery), and paying for verification. | +| [I00071 Russia-aligned hacktivists stir up anti-Ukrainian sentiments in Poland](../../generated_pages/incidents/I00071.md) | “The August 17 [2022] Telegram post [which contained a falsified letter from the Ukrainian Minister of Foreign Affairs asking Poland to rename Belwederska Street in Warsaw — the location of the Russian embassy building — as Stepan Bandera Street, in honor of the far-right nationalist who led the Ukrainian Insurgent Army during WWII] also contained screenshots of Facebook posts that appeared on two Facebook accounts belonging to Polish nationals Piotr Górka, an expert in the history of the Polish Air Force, and Dariusz Walusiak, a Polish historian and documentary maker. The Górka post suggested that he fully supported the Polish government’s decision to change Belwederska Street to Stepan Bandera Street.

“In a statement to the DFRLab, Górka said his account was accessed without his consent. “This is not my post loaded to my Facebook page,” he explained. “My site was hacked, some days ago.” At the time of publishing, Piotr Górka’s post and his Facebook account were no longer accessible.

“The post on Górka’s Facebook page was shared by Dariusz Walusiak’s Facebook account; the account also reposted it on the Facebook walls of more than twenty other Facebook users, including Adam Kalita, currently working at Krakow branch of the Institute of National Remembrance; Jan Kasprzyk, head of the Office for War Veterans and Victims of Oppression; and Alicja Kondraciuk, a Polish public figure living in Krakow.

“Walusiak’s Facebook account is also no longer accessible. Given his work on Polish history and identity, it seems highly unlikely he would support the Bandera measure; the DFRLab has also reached out to him for comment.

“The fact that Joker DPR’s Telegram post included screenshots of their Facebook posts raises the strong possibility that both Facebook accounts were compromised, and that hackers planted false statements on their pages that would seem out of character for them in order to gain further attention to the forged documents.”


In this example, threat actors used compromised accounts of Polish historians who have enough relevant knowledge to plausibly weigh in on the forged letter’s narrative (T0143.003: Impersonated Persona, T0097.101: Local Persona, T0097.108: Expert Persona, T0146: Account Asset, T0150.005: Compromised Asset, T0151.001: Social Media Platform).

This matches T0097.108: Expert Persona because the impersonation exploited Górka and Walusiak’s existing personas as experts in Polish history. | +| [I00075 How Russia Meddles Abroad for Profit: Cash, Trolls and a Cult Leader](../../generated_pages/incidents/I00075.md) | “In the campaign’s final weeks, Pastor Mailhol said, the team of Russians made a request: Drop out of the race and support Mr. Rajoelina. He refused.

“The Russians made the same proposal to the history professor running for president, saying, “If you accept this deal you will have money” according to Ms. Rasamimanana, the professor’s campaign manager.

When the professor refused, she said, the Russians created a fake Facebook page that mimicked his official page and posted an announcement on it that he was supporting Mr. Rajoelina.”


In this example actors created a fake Facebook page styled to look like the candidate’s official page, to trick targets into thinking the presidential candidate had announced dropping out of the election (T0097.110: Party Official Persona, T0143.003: Impersonated Persona). | +| [I00076 Network of Social Media Accounts Impersonates U.S. Political Candidates, Leverages U.S. and Israeli Media in Support of Iranian Interests](../../generated_pages/incidents/I00076.md) | “Some Twitter accounts in the network [of inauthentic accounts attributed to Iran] impersonated Republican political candidates that ran for House of Representatives seats in the 2018 U.S. congressional midterms. These accounts appropriated the candidates’ photographs and, in some cases, plagiarized tweets from the real individuals’ accounts. Aside from impersonating real U.S. political candidates, the behavior and activity of these accounts resembled that of the others in the network.<br>

“For example, the account @livengood_marla impersonated Marla Livengood, a 2018 candidate for California’s 9th Congressional District, using a photograph of Livengood and a campaign banner for its profile and background pictures. The account began tweeting on Sept. 24, 2018, with its first tweet plagiarizing one from Livengood’s official account earlier that month”

[...]

“In another example, the account @ButlerJineea impersonated Jineea Butler, a 2018 candidate for New York’s 13th Congressional District, using a photograph of Butler for its profile picture and incorporating her campaign slogans into its background picture, as well as claiming in its Twitter bio to be a “US House candidate, NY-13” and linking to Butler’s website, jineeabutlerforcongress[.]com.”


In this example actors impersonated existing political candidates (T0097.110: Member of Political Party Persona, T0143.003: Impersonated Persona), strengthening the impersonation by copying legitimate accounts’ imagery (T0145.001: Copy Account Imagery), and copying their previous posts (T0084.002: Plagiarise Content). | +| [I00082 Meta’s November 2021 Adversarial Threat Report ](../../generated_pages/incidents/I00082.md) | “[Meta] removed a network of accounts in Vietnam for violating our Inauthentic Behavior policy against mass reporting. They coordinated the targeting of activists and other people who publicly criticized the Vietnamese government and used false reports of various violations in an attempt to have these users removed from our platform. The people behind this activity relied primarily on authentic and duplicate accounts to submit hundreds — in some cases, thousands — of complaints against their targets through our abuse reporting flows.<br>

“Many operators also maintained fake accounts — some of which were detected and disabled by our automated systems — to pose as their targets so they could then report the legitimate accounts as fake. They would frequently change the gender and name of their fake accounts to resemble the target individual. Among the most common claims in this misleading reporting activity were complaints of impersonation, and to a much lesser extent inauthenticity. The network also advertised abusive services in their bios and constantly evolved their tactics in an attempt to evade detection.“


In this example actors repurposed their accounts to impersonate targeted activists (T0097.103: Activist Persona, T0143.003: Impersonated Persona) in order to falsely report the activists’ legitimate accounts as impersonations (T0124.001: Report Non-Violative Opposing Content). | +| [I00087 Challenging Truth and Trust: A Global Inventory of Organized Social Media Manipulation](../../generated_pages/incidents/I00087.md) | “Another actor operating in China is the American-based company Devumi. Most of the Twitter accounts managed by Devumi resemble real people, and some are even associated with a kind of large-scale social identity theft. At least 55,000 of the accounts use the names, profile pictures, hometowns and other personal details of real Twitter users, including minors, according to The New York Times (Confessore et al., 2018)).”

In this example accounts impersonated real locals while spreading operation narratives (T0143.003: Impersonated Persona, T0097.101: Local Persona). The impersonation included stealing the legitimate accounts’ profile pictures (T0145.001: Copy Account Imagery). | +| [I00094 A glimpse inside a Chinese influence campaign: How bogus news websites blur the line between true and false](../../generated_pages/incidents/I00094.md) | Researchers identified websites managed by a Chinese marketing firm which presented themselves as news organisations.

“On its official website, the Chinese marketing firm boasted that they were in contact with news organizations across the globe, including one in South Korea called the “Chungcheng Times.” According to the joint team, this outlet is a fictional news organization created by the offending company. The Chinese company sought to disguise the site’s true identity and purpose by altering the name attached to it by one character—making it very closely resemble the name of a legitimate outlet operating out of Chungchengbuk-do.

“The marketing firm also established a news organization under the Korean name “Gyeonggido Daily,” which closely resembles legitimate news outlets operating out of Gyeonggi province such as “Gyeonggi Daily,” “Daily Gyeonggi Newspaper,” and “Gyeonggi N Daily.” One of the fake news sites was named “Incheon Focus,” a title that could be easily mistaken for the legitimate local news outlet, “Focus Incheon.” Furthermore, the Chinese marketing company operated two fake news sites with names identical to two separate local news organizations, one of which ceased operations in December 2022.

“In total, fifteen out of eighteen Chinese fake news sites incorporated the correct names of real regions in their fake company names. “If the operators had created fake news sites similar to major news organizations based in Seoul, however, the intended deception would have easily been uncovered,” explained Song Tae-eun, an assistant professor in the Department of National Security & Unification Studies at the Korea National Diplomatic Academy, to The Readable. “There is also the possibility that they are using the regional areas as an attempt to form ties with the local community; that being the government, the private sector, and religious communities.””


The firm styled their news sites to resemble existing local news outlets in their target regions (T0097.201: Local Institution Persona, T0097.202: News Outlet Persona, T0143.003: Impersonated Persona). | +| [I00107 The Lies Russia Tells Itself](../../generated_pages/incidents/I00107.md) | The Moscow firm Social Design Agency (SDA) has been identified as being behind a Russian disinformation project known as Doppelganger:<br>

The SDA’s deception work first surfaced in 2022, likely almost immediately after Doppelganger got off the ground. In April of that year, Meta, the parent company of Facebook and Instagram, disclosed in a quarterly report that it had removed from its platforms “a network of about 200 accounts operated from Russia.” By August 2022, German investigative journalists revealed that they had discovered forgeries of about 30 news sites, including many of the country’s biggest media outlets—Frankfurter Allgemeine, Der Spiegel, and Bild—but also Britain’s Daily Mail and France’s 20 Minutes. The sites had deceptive URLs such as www-dailymail-co-uk.dailymail.top.


As part of its work, the SDA created many websites which impersonated existing media outlets, using domain impersonation tactics to increase the perceived legitimacy of the impersonations (T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0152.003: Website Hosting Platform, T0149.003: Lookalike Domain). | +| [I00116 Blue-tick scammers target consumers who complain on X](../../generated_pages/incidents/I00116.md) | Consumers who complain of poor customer service on X are being targeted by scammers after the social media platform formerly known as Twitter changed its account verification process.

Bank customers and airline passengers are among those at risk of phishing scams when they complain to companies via X. Fraudsters, masquerading as customer service agents, respond under fake X handles and trick victims into disclosing their bank details to get a promised refund.

They typically win the trust of victims by displaying the blue checkmark icon, which until this year denoted accounts that had been officially verified by X.

Changes introduced this year allow the icon to be bought by anyone who pays an £11 monthly fee for the site’s subscription service, renamed this month from Twitter Blue to X Premium. Businesses that pay £950 a month receive a gold tick. X’s terms and conditions do not state whether subscriber accounts are pre-vetted.

Andrew Thomas was contacted by a scam account after posting a complaint to the travel platform Booking.com. “I’d been trying since April to get a refund after our holiday flights were cancelled and finally resorted to X,” he said.

“I received a response asking me to follow them, and DM [direct message] them with a contact number. They then called me via WhatsApp asking for my reference number so they could investigate. Later they called back to say that I would be refunded via their payment partner for which I’d need to download an app.”

Thomas became suspicious and checked the X profile. “It looked like the real thing, but I noticed that there was an unexpected hyphen in the Twitter handle and that it had only joined X in July 2023,” he said.


In this example a newly created paid account on X was used to direct users to other platforms (T0146.002: Paid Account Asset, T0146.003: Verified Account Asset, T0146.005: Lookalike Account ID, T0097.205: Business Persona, T0122: Direct Users to Alternative Platforms, T0143.003: Impersonated Persona, T0151.008: Microblogging Platform, T0150.001: Newly Created Asset). | +| [I00126 Charming Kitten Updates POWERSTAR with an InterPlanetary Twist](../../generated_pages/incidents/I00126.md) | The target of the recently observed [highly targeted spearphishing attack by “Charming Kitten”, a hacker group attributed to Iran] had published an article related to Iran. The publicity appears to have garnered the attention of Charming Kitten, who subsequently created an email address to impersonate a reporter of an Israeli media organization in order to send the target an email. Prior to sending malware to the target, the attacker simply asked if the target would be open to reviewing a document they had written related to US foreign policy. The target agreed to do so, since this was not an unusual request; they are frequently asked by journalists to review opinion pieces relating to their field of work.

In an effort to further gain the target’s confidence, Charming Kitten continued the interaction with another benign email containing a list of questions, to which the target then responded with answers. After multiple days of benign and seemingly legitimate interaction, Charming Kitten finally sent a “draft report”; this was the first time anything opaquely malicious occurred. The “draft report” was, in fact, a password-protected RAR file containing a malicious LNK file. The password for the RAR file was provided in a subsequent email.


In this example, threat actors created an email address on a lookalike domain impersonating an existing Israeli news organisation, and posed as a reporter who worked there (T0097.102: Journalist Persona, T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0149.003: Lookalike Domain, T0149.002: Email Domain Asset) in order to convince the target to download a document containing malware (T0085.004: Develop Document, T0147.003: Malware Asset). | +| [I00127 Iranian APTs Dress Up as Hacktivists for Disruption, Influence Ops](../../generated_pages/incidents/I00127.md) | Iranian state-backed advanced persistent threat (APT) groups have been masquerading as hacktivists, claiming attacks against Israeli critical infrastructure and air defense systems.

[...]

What's clearer are the benefits of the model itself: creating a layer of plausible deniability for the state, and the impression among the public that their attacks are grassroots-inspired. While this deniability has always been a key driver with state-sponsored cyberattacks, researchers characterized this instance as noteworthy for the effort behind the charade.

"We've seen a lot of hacktivist activity that seems to be nation-states trying to have that 'deniable' capability," Adam Meyers, CrowdStrike senior vice president for counter adversary operations said in a press conference this week. "And so these groups continue to maintain activity, moving from what was traditionally website defacements and DDoS attacks, into a lot of hack and leak operations."

To sell the persona, faketivists like to adopt the aesthetic, rhetoric, tactics, techniques, and procedures (TTPs), and sometimes the actual names and iconography associated with legitimate hacktivist outfits. Keen eyes will spot that they typically arise just after major geopolitical events, without an established history of activity, in alignment with the interests of their government sponsors.

Oftentimes, it's difficult to separate the faketivists from the hacktivists, as each might promote and support the activities of the other.


In this example analysts from CrowdStrike assert that hacker groups took on the persona of hacktivists to disguise the state-backed nature of their cyber attack campaign (T0097.104: Hacktivist Persona). At times state-backed hacktivists will impersonate existing hacktivist organisations (T0097.104: Hacktivist Persona, T0143.003: Impersonated Persona). | +| [I00129 Teen who hacked Joe Biden and Bill Gates' Twitter accounts sentenced to three years in prison](../../generated_pages/incidents/I00129.md) | An 18-year-old hacker who pulled off a huge breach in 2020, infiltrating several high profile Twitter accounts to solicit bitcoin transactions, has agreed to serve three years in prison for his actions.

Graham Ivan Clark, of Florida, was 17 years old at the time of the hack in July, during which he took over a number of major accounts including those of Joe Biden, Bill Gates and Kim Kardashian West.

Once he accessed them, Clark tweeted a link to a bitcoin address and wrote “all bitcoin sent to our address below will be sent back to you doubled!” According to court documents, Clark made more than $100,000 from the scheme, which his lawyers say he has since returned.

Clark was able to access the accounts after convincing an employee at Twitter he worked in the company’s information technology department, according to the Tampa Bay Times.


In this example a threat actor gained access to Twitter’s customer service portal through social engineering (T0146.004: Administrator Account Asset, T0150.005: Compromised Asset, T0151.008: Microblogging Platform), which they used to take over accounts of public figures (T0146.003: Verified Account Asset, T0143.003: Impersonated Persona, T0150.005: Compromised Asset, T0151.008: Microblogging Platform).

The threat actor used these compromised accounts to trick the accounts’ followers into sending bitcoin to the threat actor’s wallet (T0148.009: Cryptocurrency Wallet). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0143.003: Impersonated Persona + +**Summary**: Threat actors may impersonate existing individuals or institutions to conceal their network identity, add legitimacy to content, or harm the impersonated target’s reputation. This Technique covers situations where an actor presents themselves as another existing individual or institution.<br>

This Technique was previously called Prepare Assets Impersonating Legitimate Entities and used the ID T0099. + **Tactic**: TA16 Establish Legitimacy @@ -36,4 +111,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0143.004.md b/generated_pages/techniques/T0143.004.md index eaf232f..9fed2c2 100644 --- a/generated_pages/techniques/T0143.004.md +++ b/generated_pages/techniques/T0143.004.md @@ -2,6 +2,54 @@ **Summary**: Parody is a form of artistic expression that imitates the style or characteristics of a particular work, genre, or individual in a humorous or satirical way, often to comment on or critique the original work or subject matter. People may present as parodies to create humour or make a point by exaggerating or altering elements of the original, while still maintaining recognizable elements.

The use of parody is not an indication of inauthentic or malicious behaviour; parody allows people to present ideas or criticisms in a comedic or exaggerated manner, softening the impact of sensitive or contentious topics. Because parody is often protected as a form of free speech or artistic expression, it provides a legal and social framework for discussing controversial issues.

However, parody personas may be perceived as authentic personas, leading to people mistakenly believing that a parody account’s statements represent the real opinions of a parodied target. Threat actors may also use the guise of parody to spread campaign content. Parody personas may disclaim that they are operating as a parody, however this is not always the case, and is not always given prominence. +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0143 Persona Legitimacy + + +| Associated Technique | Description | +| --------- | ------------------------- | +| [T0097 Present Persona](../../generated_pages/techniques/T0097.md) | Analysts can use the sub-techniques of T0097: Presented Persona to categorise the type of parody. For example, an account presenting as a parody of a business could be documented using T0097.205: Business Persona and T0143.004: Parody Persona. | +| [T0145.001 Copy Account Imagery](../../generated_pages/techniques/T0145.001.md) | Actors may take existing accounts’ profile pictures as part of their parody efforts. | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00067 Understanding Information disorder](../../generated_pages/incidents/I00067.md) | “A 2019 case in the US involved a Republican political operative who created a parody site designed to look like Joe Biden’s official website as the former vice president was campaigning to be the Democratic nominee for the 2020 presidential election. With a URL of joebiden[.]info, the parody site was indexed by Google higher than Biden’s official site, joebiden[.]com, when he launched his campaign in April 2019. The operative, who previously had created content for Donald Trump, said he did not create the site for the Trump campaign directly.<br>

“The opening line on the parody site reads: “Uncle Joe is back and ready to take a hands-on approach to America’s problems!” It is full of images of Biden kissing and hugging young girls and women. At the bottom of the page a statement reads: “This site is political commentary and parody of Joe Biden’s Presidential campaign website. This is not Joe Biden’s actual website. It is intended for entertainment and political commentary only.””


In this example a website was created which claimed to be a parody of Joe Biden’s official website (T0143.004: Parody Persona).

Although the website was a parody, it ranked higher than Joe Biden’s real website on Google search. | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0143.004: Parody Persona + +**Summary**: Parody is a form of artistic expression that imitates the style or characteristics of a particular work, genre, or individual in a humorous or satirical way, often to comment on or critique the original work or subject matter. People may present as parodies to create humour or make a point by exaggerating or altering elements of the original, while still maintaining recognizable elements.

The use of parody is not an indication of inauthentic or malicious behaviour; parody allows people to present ideas or criticisms in a comedic or exaggerated manner, softening the impact of sensitive or contentious topics. Because parody is often protected as a form of free speech or artistic expression, it provides a legal and social framework for discussing controversial issues.

However, parody personas may be perceived as authentic personas, leading to people mistakenly believing that a parody account’s statements represent the real opinions of a parodied target. Threat actors may also use the guise of parody to spread campaign content. Parody personas may disclaim that they are operating as a parody, however this is not always the case, and is not always given prominence. + +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0143 Persona Legitimacy + + +| Associated Technique | Description | +| --------- | ------------------------- | +| [T0097 Present Persona](../../generated_pages/techniques/T0097.md) | Analysts can use the sub-techniques of T0097: Presented Persona to categorise the type of parody. For example, an account presenting as a parody of a business could be documented using T0097.205: Business Persona and T0143.004: Parody Persona. | +| [T0145.001 Copy Account Imagery](../../generated_pages/techniques/T0145.001.md) | Actors may take existing accounts’ imagery as part of their parody efforts. | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00067 Understanding Information disorder](../../generated_pages/incidents/I00067.md) | “A 2019 case in the US involved a Republican political operative who created a parody site designed to look like Joe Biden’s official website as the former vice president was campaigning to be the Democratic nominee for the 2020 presidential election. With a URL of joebiden[.]info, the parody site was indexed by Google higher than Biden’s official site, joebiden[.]com, when he launched his campaign in April 2019. The operative, who previously had created content for Donald Trump, said he did not create the site for the Trump campaign directly.<br>

“The opening line on the parody site reads: “Uncle Joe is back and ready to take a hands-on approach to America’s problems!” It is full of images of Biden kissing and hugging young girls and women. At the bottom of the page a statement reads: “This site is political commentary and parody of Joe Biden’s Presidential campaign website. This is not Joe Biden’s actual website. It is intended for entertainment and political commentary only.””


In this example a website was created which claimed to be a parody of Joe Biden’s official website (T0143.004: Parody Persona).

Although the website was a parody, it ranked higher than Joe Biden’s real website on Google search. | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0143.004: Parody Persona + +**Summary**: Parody is a form of artistic expression that imitates the style or characteristics of a particular work, genre, or individual in a humorous or satirical way, often to comment on or critique the original work or subject matter. People may present as parodies to create humour or make a point by exaggerating or altering elements of the original, while still maintaining recognizable elements.

The use of parody is not an indication of inauthentic or malicious behaviour; parody allows people to present ideas or criticisms in a comedic or exaggerated manner, softening the impact of sensitive or contentious topics. Because parody is often protected as a form of free speech or artistic expression, it provides a legal and social framework for discussing controversial issues.

However, parody personas may be perceived as authentic personas, leading to people mistakenly believing that a parody account’s statements represent the real opinions of a parodied target. Threat actors may also use the guise of parody to spread campaign content. Parody personas may disclaim that they are operating as a parody, however this is not always the case, and is not always given prominence. + **Tactic**: TA16 Establish Legitimacy @@ -22,4 +70,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0143.md b/generated_pages/techniques/T0143.md index 608439c..1a40634 100644 --- a/generated_pages/techniques/T0143.md +++ b/generated_pages/techniques/T0143.md @@ -2,6 +2,48 @@ **Summary**: This Technique contains sub-techniques which analysts can use to assert whether an account is presenting an authentic, fabricated, or parody persona:

T0143.001: Authentic Persona
T0143.002: Fabricated Persona
T0143.003: Impersonated Persona
T0143.004: Parody Persona +**Tactic**: TA16 Establish Legitimacy + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0143: Persona Legitimacy + +**Summary**: This Technique contains sub-techniques which analysts can use to assert whether an account is presenting an authentic, fabricated, or parody persona:

T0143.001: Authentic Persona
T0143.002: Fabricated Persona
T0143.003: Impersonated Persona
T0143.004: Parody Persona + +**Tactic**: TA16 Establish Legitimacy + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0143: Persona Legitimacy + +**Summary**: This Technique contains sub-techniques which analysts can use to assert whether an account is presenting an authentic, fabricated, or parody persona:

T0143.001: Authentic Persona
T0143.002: Fabricated Persona
T0143.003: Impersonated Persona
T0143.004: Parody Persona + **Tactic**: TA16 Establish Legitimacy @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0144.001.md b/generated_pages/techniques/T0144.001.md index 17c407a..0576ea7 100644 --- a/generated_pages/techniques/T0144.001.md +++ b/generated_pages/techniques/T0144.001.md @@ -2,6 +2,50 @@ **Summary**: This sub-technique covers situations where analysts have identified the same persona being presented across multiple platforms.

Having multiple accounts presenting the same persona is not an indicator of inauthentic behaviour; many people create accounts and present as themselves on multiple platforms. However, threat actors are known to present the same persona across multiple platforms, benefiting from an increase in perceived legitimacy. +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0144 Persona Legitimacy Evidence + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00077 Fronts & Friends: An Investigation into Two Twitter Networks Linked to Russian Actors](../../generated_pages/incidents/I00077.md) | “Approximately one-third of the suspended accounts [in the network of inauthentic accounts attributed to Russia] tweeted primarily about Syria, in English, Russian, and Arabic; many accounts tweeted in all three languages. The themes these accounts pushed will be familiar to anyone who has studied Russian overt or covert information operations about Syria: 

- Praising Russia’s role in Syria; claiming Russia was killing terrorists in Syria and highlighting Russia’s humanitarian aid
- Criticizing the role of Turkey and the US in Syria; claiming the US killed civilians in Syria<br>
- Criticizing the White Helmets, and claiming that they worked with Westerners to create scenes to make it look like the Syrian government used chemical weapons<br>

“The two most prominent Syria accounts were @Syria_FreeNews and @PamSpenser. 

“@Syria_FreeNews had 20,505 followers and was created on April 6, 2017. The account’s bio said “Exclusive information about Middle East and Northern Africa countries events. BreaKing news from the scene.””
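Establishing the same persona across multiple sites is the behaviour T0144.001 captures. A minimal sketch of how an analyst might surface cross-platform personas from account records already collected during an investigation (the record format and sample handles are hypothetical):

```python
from collections import defaultdict

# Hypothetical account records gathered during an investigation.
accounts = [
    {"platform": "twitter", "handle": "Syria_FreeNews"},
    {"platform": "facebook", "handle": "syria_freenews"},
    {"platform": "telegram", "handle": "PamSpenser"},
]

by_persona = defaultdict(set)
for account in accounts:
    by_persona[account["handle"].lower()].add(account["platform"])

# Personas established on more than one platform (T0144.001).
for persona, platforms in sorted(by_persona.items()):
    if len(platforms) > 1:
        print(persona, sorted(platforms))
```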


This behaviour matches T0097.202: News Outlet Persona because the account @Syria_FreeNews presented itself as a news outlet in its name, bio, and branding, across all websites on which the persona had been established (T0144.001: Present Persona across Platforms). Twitter’s technical indicators showed that the account “can be reliably tied to Russian state actors”. Because of this we can assert that the persona is entirely fabricated (T0143.002: Fabricated Persona); this is not a legitimate news outlet providing information about Syria, it’s an asset controlled by Russia publishing narratives beneficial to its agenda. | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0144.001: Present Persona across Platforms + +**Summary**: This sub-technique covers situations where analysts have identified the same persona being presented across multiple platforms.<br>

Having multiple accounts presenting the same persona is not an indicator of inauthentic behaviour; many people create accounts and present as themselves on multiple platforms. However, threat actors are known to present the same persona across multiple platforms, benefiting from an increase in perceived legitimacy. + +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0144 Persona Legitimacy Evidence + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00077 Fronts & Friends: An Investigation into Two Twitter Networks Linked to Russian Actors](../../generated_pages/incidents/I00077.md) | “Approximately one-third of the suspended accounts [in the network of inauthentic accounts attributed to Russia] tweeted primarily about Syria, in English, Russian, and Arabic; many accounts tweeted in all three languages. The themes these accounts pushed will be familiar to anyone who has studied Russian overt or covert information operations about Syria: 

- Praising Russia’s role in Syria; claiming Russia was killing terrorists in Syria and highlighting Russia’s humanitarian aid
- Criticizing the role of Turkey and the US in Syria; claiming the US killed civilians in Syria<br>
- Criticizing the White Helmets, and claiming that they worked with Westerners to create scenes to make it look like the Syrian government used chemical weapons<br>

“The two most prominent Syria accounts were @Syria_FreeNews and @PamSpenser. 

“@Syria_FreeNews had 20,505 followers and was created on April 6, 2017. The account’s bio said “Exclusive information about Middle East and Northern Africa countries events. BreaKing news from the scene.””


This behaviour matches T0097.202: News Outlet Persona because the account @Syria_FreeNews presented itself as a news outlet in its name, bio, and branding, across all websites on which the persona had been established (T0144.001: Present Persona across Platforms). Twitter’s technical indicators showed that the account “can be reliably tied to Russian state actors”. Because of this we can assert that the persona is entirely fabricated (T0143.002: Fabricated Persona); this is not a legitimate news outlet providing information about Syria, it’s an asset controlled by Russia publishing narratives beneficial to its agenda. | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0144.001: Present Persona across Platforms + +**Summary**: This sub-technique covers situations where analysts have identified the same persona being presented across multiple platforms.<br>

Having multiple accounts presenting the same persona is not an indicator of inauthentic behaviour; many people create accounts and present as themselves on multiple platforms. However, threat actors are known to present the same persona across multiple platforms, benefiting from an increase in perceived legitimacy. + **Tactic**: TA16 Establish Legitimacy @@ -20,4 +64,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0144.002.md b/generated_pages/techniques/T0144.002.md index dc58392..dab5127 100644 --- a/generated_pages/techniques/T0144.002.md +++ b/generated_pages/techniques/T0144.002.md @@ -2,6 +2,53 @@ **Summary**: Threat actors have been observed following a template when filling their accounts’ online profiles. This may be done to enable account holders to quickly present themselves as a real person with a targeted persona.

For example, an actor may be instructed to create many fabricated local accounts for use in an operation using a template of “[flag emojis], [location], [personal quote], [political party] supporter” in their account’s description. +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0144 Persona Legitimacy Evidence + + +| Associated Technique | Description | +| --------- | ------------------------- | +| [T0143.002 Fabricated Persona](../../generated_pages/techniques/T0143.002.md) | The use of a templated account biography in a collection of accounts may be an indicator that the personas have been fabricated. | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00086 #WeAreNotSafe – Exposing How a Post-October 7th Disinformation Network Operates on Israeli Social Media](../../generated_pages/incidents/I00086.md) | In this report accounts were identified as part of “a sophisticated and extensive coordinated network orchestrating a disinformation campaign targeting Israeli digital spaces since October 7th, 2023”.

“A core component of the detection methodology was applying qualitative linguistic analysis. This involved checking the fingerprint of language, syntax, and style used in the comments and profile of the suspected account. Each account bio consistently incorporated a combination of specific elements: emojis, nationality, location, educational institution or occupation, age, and a personal quote, sports team or band. The recurrence of this specific formula across multiple accounts hinted at a standardized template for bio construction.”
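A short sketch of the kind of automated check that can complement the qualitative analysis described above, flagging bios that fit a shared template (the regular expression and sample bios are illustrative assumptions, not taken from the report):

```python
import re

# Bio template: [emoji run], [location], [occupation/institution], [age], [quote]
TEMPLATE = re.compile(
    r"^[^\w\s]+\s*"        # leading emoji or symbol run
    r"[A-Za-z ]+,\s*"      # location
    r"[A-Za-z ]+,\s*"      # occupation or institution
    r"\d{1,2},\s*"         # age
    r"['\"“‘].+"           # opening of a personal quote
)

bios = [
    "🇮🇱 Tel Aviv, Student, 24, 'Live and let live'",
    "Writing about energy markets since 2012.",
]
for bio in bios:
    print(bool(TEMPLATE.match(bio)), "-", bio)
```

A match against one bio means little; the indicator is the same shape recurring across many accounts.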

This example shows how actors can follow a templated formula to present a persona on social media platforms (T0143.002: Fabricated Persona, T0144.002: Persona Template). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0144.002: Persona Template + +**Summary**: Threat actors have been observed following a template when filling their accounts’ online profiles. This may be done to enable account holders to quickly present themselves as a real person with a targeted persona.<br>

For example, an actor may be instructed to create many fabricated local accounts for use in an operation using a template of “[flag emojis], [location], [personal quote], [political party] supporter” in their account’s description. + +**Tactic**: TA16 Establish Legitimacy **Parent Technique:** T0144 Persona Legitimacy Evidence + + +| Associated Technique | Description | +| --------- | ------------------------- | +| [T0143.002 Fabricated Persona](../../generated_pages/techniques/T0143.002.md) | The use of a templated account biography in a collection of accounts may be an indicator that the personas have been fabricated. | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00080 Hundreds Of Propaganda Accounts Targeting Iran And Qatar Have Been Removed From Facebook](../../generated_pages/incidents/I00080.md) | “One example of a fake reporter account targeting Americans is “Jenny Powell,” a self-described Washington-based journalist, volunteer, and environmental activist. At first glance, Powell’s Twitter timeline looks like it belongs to a young and eager reporter amplifying her interests. But her profile photo is a stock image, and many of her links go to the propaganda sites.

“Powell, who joined the platform just last month, shares links to stories from major US news media outlets, retweets local news about Washington, DC, and regularly promotes content from The Foreign Code and The Economy Club. Other fake journalist accounts behaved similarly to Powell and had generic descriptions. One of the accounts, for a fake Bruce Lopez in Louisiana, has a bio that describes him as a “Correspondent Traveler (noun) (linking verb) (noun/verb/adjective),” which appears to reveal the formula used to write Twitter bios for the accounts.”


The Jenny Powell account used in this influence operation presented as both a journalist and an activist (T0097.102: Journalist Persona, T0097.103: Activist Persona, T0143.002: Fabricated Persona). This example shows how threat actors can easily follow a template to present a fabricated persona to their target audience (T0144.002: Persona Template). | +| [I00086 #WeAreNotSafe – Exposing How a Post-October 7th Disinformation Network Operates on Israeli Social Media](../../generated_pages/incidents/I00086.md) | In this report accounts were identified as part of “a sophisticated and extensive coordinated network orchestrating a disinformation campaign targeting Israeli digital spaces since October 7th, 2023”.<br>

“A core component of the detection methodology was applying qualitative linguistic analysis. This involved checking the fingerprint of language, syntax, and style used in the comments and profile of the suspected account. Each account bio consistently incorporated a combination of specific elements: emojis, nationality, location, educational institution or occupation, age, and a personal quote, sports team or band. The recurrence of this specific formula across multiple accounts hinted at a standardized template for bio construction.”

This example shows how actors can follow a templated formula to present a persona on social media platforms (T0143.002: Fabricated Persona, T0144.002: Persona Template). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0144.002: Persona Template + +**Summary**: Threat actors have been observed following a template when filling their accounts’ online profiles. This may be done to enable account holders to quickly present themselves as a real person with a targeted persona.

For example, an actor may be instructed to create many fabricated local accounts for use in an operation using a template of “[flag emojis], [location], [personal quote], [political party] supporter” in their account’s description. + **Tactic**: TA16 Establish Legitimacy @@ -22,4 +69,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0144.md b/generated_pages/techniques/T0144.md index 0783789..17e99d0 100644 --- a/generated_pages/techniques/T0144.md +++ b/generated_pages/techniques/T0144.md @@ -2,6 +2,48 @@ **Summary**: This Technique contains behaviours which might indicate whether a persona is legitimate, a fabrication, or a parody.

For example, the same persona being consistently presented across platforms is consistent with how authentic users behave on social media. However, threat actors have also displayed this behaviour as a way to increase the perceived legitimacy of their fabricated personas (aka “backstopping”). +**Tactic**: TA16 Establish Legitimacy + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0144: Persona Legitimacy Evidence + +**Summary**: This Technique contains behaviours which might indicate whether a persona is legitimate, a fabrication, or a parody.

For example, the same persona being consistently presented across platforms is consistent with how authentic users behave on social media. However, threat actors have also displayed this behaviour as a way to increase the perceived legitimacy of their fabricated personas (aka “backstopping”). + +**Tactic**: TA16 Establish Legitimacy + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0144: Persona Legitimacy Evidence + +**Summary**: This Technique contains behaviours which might indicate whether a persona is legitimate, a fabrication, or a parody.

For example, the same persona being presented consistently across platforms matches how authentic users behave on social media. However, threat actors have also displayed this behaviour as a way to increase the perceived legitimacy of their fabricated personas (aka “backstopping”).
+
 **Tactic**: TA16 Establish Legitimacy
@@ -19,4 +61,3 @@
 | -------- | -------------- |
 
-DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW
\ No newline at end of file
diff --git a/generated_pages/techniques/T0145.001.md b/generated_pages/techniques/T0145.001.md
index 63b7787..2d5d44f 100644
--- a/generated_pages/techniques/T0145.001.md
+++ b/generated_pages/techniques/T0145.001.md
@@ -2,6 +2,60 @@
 
 **Summary**: Account imagery copied from an existing account.

Analysts may use reverse image search tools to try to identify previous uses of account imagery (e.g. a profile picture) by other accounts.
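Reverse image search is usually done through hosted services, but when an analyst holds local copies of both a suspect image and a candidate archive, perceptual hashing gives a rough offline analogue. A minimal sketch assuming the `pillow` and `imagehash` packages; the file paths are hypothetical:

```python
from PIL import Image  # pip install pillow imagehash
import imagehash

def likely_copies(suspect_path, archive_paths, max_distance=8):
    """Return archive images whose perceptual hash is within
    `max_distance` bits (Hamming distance) of the suspect image's."""
    suspect = imagehash.phash(Image.open(suspect_path))
    return [
        path for path in archive_paths
        if suspect - imagehash.phash(Image.open(path)) <= max_distance
    ]

# Hypothetical usage: compare a suspect profile picture against an
# archive of photos of likely impersonation targets.
print(likely_copies("suspect_profile.jpg", ["livengood_campaign.jpg"]))
```

A low distance is a lead for manual review rather than proof of copying; as one incident example below notes, screenshots of video frames can defeat this kind of matching.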

Threat Actors have been known to copy existing accounts’ imagery to impersonate said accounts, or to provide imagery for unrelated accounts which aren’t intended to impersonate the original assets’ owner. +**Tactic**: TA15 Establish Assets **Parent Technique:** T0145 Establish Account Imagery + + +| Associated Technique | Description | +| --------- | ------------------------- | +| [T0143.003 Impersonated Persona](../../generated_pages/techniques/T0143.003.md) | Actors may copy existing accounts’ imagery in an attempt to impersonate them. | +| [T0143.004 Parody Persona](../../generated_pages/techniques/T0143.004.md) | Actors may copy existing accounts’ imagery as part of a parody of that account. | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00070 Eli Lilly Clarifies It’s Not Offering Free Insulin After Tweet From Fake Verified Account—As Chaos Unfolds On Twitter](../../generated_pages/incidents/I00070.md) | “Twitter Blue launched [November 2022], giving any users who pay $8 a month the ability to be verified on the site, a feature previously only available to public figures, government officials and journalists as a way to show they are who they claim to be.

“[A day after the launch], an account with the handle @EliLillyandCo labeled itself with the name “Eli Lilly and Company,” and by using the same logo as the company in its profile picture and with the verification checkmark, was indistinguishable from the real company (the picture has since been removed and the account has labeled itself as a parody profile).

The parody account tweeted “we are excited to announce insulin is free now.””


In this example an account impersonated the pharmaceutical company Eli Lilly (T0097.205: Business Persona, T0143.003: Impersonated Persona) by copying its name and profile picture (T0145.001: Copy Account Imagery) and by paying for verification. |
+| [I00076 Network of Social Media Accounts Impersonates U.S. Political Candidates, Leverages U.S. and Israeli Media in Support of Iranian Interests](../../generated_pages/incidents/I00076.md) | “Some Twitter accounts in the network [of inauthentic accounts attributed to Iran] impersonated Republican political candidates that ran for House of Representatives seats in the 2018 U.S. congressional midterms. These accounts appropriated the candidates’ photographs and, in some cases, plagiarized tweets from the real individuals’ accounts. Aside from impersonating real U.S. political candidates, the behavior and activity of these accounts resembled that of the others in the network.

“For example, the account @livengood_marla impersonated Marla Livengood, a 2018 candidate for California’s 9th Congressional District, using a photograph of Livengood and a campaign banner for its profile and background pictures. The account began tweeting on Sept. 24, 2018, with its first tweet plagiarizing one from Livengood’s official account earlier that month”

[...]

“In another example, the account @ButlerJineea impersonated Jineea Butler, a 2018 candidate for New York’s 13th Congressional District, using a photograph of Butler for its profile picture and incorporating her campaign slogans into its background picture, as well as claiming in its Twitter bio to be a “US House candidate, NY-13” and linking to Butler’s website, jineeabutlerforcongress[.]com.”


In this example actors impersonated existing political candidates (T0097.110: Member of Political Party Persona, T0143.003: Impersonated Persona), strengthening the impersonation by copying legitimate accounts’ imagery (T0145.001: Copy Account Imagery) and copying their previous posts (T0084.002: Plagiarise Content). |
+| [I00086 #WeAreNotSafe – Exposing How a Post-October 7th Disinformation Network Operates on Israeli Social Media](../../generated_pages/incidents/I00086.md) | “In the wake of the Hamas attack on October 7th, the Israel Defense Forces (IDF) Information Security Department revealed a campaign of Instagram accounts impersonating young, attractive Israeli women who were actively engaging Israeli soldiers, attempting to extract information through direct messages.

[...]

“Some profiles underwent a reverse-image search of their photos to ascertain their authenticity. Many of the images searched were found to be appropriated from genuine social media profiles or sites such as Pinterest. When this was the case, the account was marked as confirmed to be inauthentic. One innovative method involves using photos that are initially frames from videos, which allows for evading reverse searches in most cases. This is seen in Figure 4, where an image uploaded by an inauthentic account was a screenshot taken from a TikTok video.”


In this example accounts associated with an influence operation used account imagery showing “young, attractive Israeli women” (T0145.006: Attractive Person Account Imagery), with some of these assets taken from existing accounts not associated with the operation (T0145.001: Copy Account Imagery). | +| [I00087 Challenging Truth and Trust: A Global Inventory of Organized Social Media Manipulation](../../generated_pages/incidents/I00087.md) | “In 2017, Tanya O'Carroll, a technology and human rights adviser for Amnesty International, published an investigation of the political impact of bots and trolls in Mexico (O’Carroll, 2017). An article by the BBC describes a video showing the operation of a "troll farm" in Mexico, where people were tweeting in support of Enrique Peña Nieto of the PRI in 2012 (Martinez, 2018).

“According to a report published by El País, the main target of parties’ online strategies are young people, including 14 million new voters who are expected to play a decisive role in the outcome of the July 2018 election (Peinado et al., 2018). Thus, one of the strategies employed by these bots was the use of profile photos of attractive people from other countries (Soloff, 2017).”


In this example accounts copied the profile pictures of attractive people from other countries (T0145.001: Copy Account Imagery, T0145.006: Attractive Person Account Imagery). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0145.001: Copy Account Imagery + +**Summary**: Account imagery copied from an existing account.

Analysts may use reverse image search tools to try to identify previous uses of account imagery (e.g. a profile picture) by other accounts.

Threat Actors have been known to copy existing accounts’ imagery to impersonate said accounts, or to provide imagery for unrelated accounts which aren’t intended to impersonate the original assets’ owner. + +**Tactic**: TA15 Establish Assets **Parent Technique:** T0145 Establish Account Imagery + + +| Associated Technique | Description | +| --------- | ------------------------- | +| [T0143.003 Impersonated Persona](../../generated_pages/techniques/T0143.003.md) | Actors may copy existing accounts’ imagery in an attempt to impersonate them. | +| [T0143.004 Parody Persona](../../generated_pages/techniques/T0143.004.md) | Actors may copy existing accounts’ imagery as part of a parody of that account. | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00070 Eli Lilly Clarifies It’s Not Offering Free Insulin After Tweet From Fake Verified Account—As Chaos Unfolds On Twitter](../../generated_pages/incidents/I00070.md) | “Twitter Blue launched [November 2022], giving any users who pay $8 a month the ability to be verified on the site, a feature previously only available to public figures, government officials and journalists as a way to show they are who they claim to be.

“[A day after the launch], an account with the handle @EliLillyandCo labeled itself with the name “Eli Lilly and Company,” and by using the same logo as the company in its profile picture and with the verification checkmark, was indistinguishable from the real company (the picture has since been removed and the account has labeled itself as a parody profile).

The parody account tweeted “we are excited to announce insulin is free now.””


In this example an account impersonated the pharmaceutical company Eli Lilly (T0097.205: Business Persona, T0143.003: Impersonated Persona) by copying its name and profile picture (T0145.001: Copy Account Imagery) and by paying for verification. |
+| [I00076 Network of Social Media Accounts Impersonates U.S. Political Candidates, Leverages U.S. and Israeli Media in Support of Iranian Interests](../../generated_pages/incidents/I00076.md) | “Some Twitter accounts in the network [of inauthentic accounts attributed to Iran] impersonated Republican political candidates that ran for House of Representatives seats in the 2018 U.S. congressional midterms. These accounts appropriated the candidates’ photographs and, in some cases, plagiarized tweets from the real individuals’ accounts. Aside from impersonating real U.S. political candidates, the behavior and activity of these accounts resembled that of the others in the network.

“For example, the account @livengood_marla impersonated Marla Livengood, a 2018 candidate for California’s 9th Congressional District, using a photograph of Livengood and a campaign banner for its profile and background pictures. The account began tweeting on Sept. 24, 2018, with its first tweet plagiarizing one from Livengood’s official account earlier that month”

[...]

“In another example, the account @ButlerJineea impersonated Jineea Butler, a 2018 candidate for New York’s 13th Congressional District, using a photograph of Butler for its profile picture and incorporating her campaign slogans into its background picture, as well as claiming in its Twitter bio to be a “US House candidate, NY-13” and linking to Butler’s website, jineeabutlerforcongress[.]com.”


In this example actors impersonated existing political candidates (T0097.110: Member of Political Party Persona, T0143.003: Impersonated Persona), strengthening the impersonation by copying legitimate accounts’ imagery (T0145.001: Copy Account Imagery) and copying their previous posts (T0084.002: Plagiarise Content). |
+| [I00086 #WeAreNotSafe – Exposing How a Post-October 7th Disinformation Network Operates on Israeli Social Media](../../generated_pages/incidents/I00086.md) | “In the wake of the Hamas attack on October 7th, the Israel Defense Forces (IDF) Information Security Department revealed a campaign of Instagram accounts impersonating young, attractive Israeli women who were actively engaging Israeli soldiers, attempting to extract information through direct messages.

[...]

“Some profiles underwent a reverse-image search of their photos to ascertain their authenticity. Many of the images searched were found to be appropriated from genuine social media profiles or sites such as Pinterest. When this was the case, the account was marked as confirmed to be inauthentic. One innovative method involves using photos that are initially frames from videos, which allows for evading reverse searches in most cases. This is seen in Figure 4, where an image uploaded by an inauthentic account was a screenshot taken from a TikTok video.”


In this example accounts associated with an influence operation used account imagery showing “young, attractive Israeli women” (T0145.006: Attractive Person Account Imagery), with some of these assets taken from existing accounts not associated with the operation (T0145.001: Copy Account Imagery). | +| [I00087 Challenging Truth and Trust: A Global Inventory of Organized Social Media Manipulation](../../generated_pages/incidents/I00087.md) | “In 2017, Tanya O'Carroll, a technology and human rights adviser for Amnesty International, published an investigation of the political impact of bots and trolls in Mexico (O’Carroll, 2017). An article by the BBC describes a video showing the operation of a "troll farm" in Mexico, where people were tweeting in support of Enrique Peña Nieto of the PRI in 2012 (Martinez, 2018).

“According to a report published by El País, the main target of parties’ online strategies are young people, including 14 million new voters who are expected to play a decisive role in the outcome of the July 2018 election (Peinado et al., 2018). Thus, one of the strategies employed by these bots was the use of profile photos of attractive people from other countries (Soloff, 2017).”


In this example accounts copied the profile pictures of attractive people from other countries (T0145.001: Copy Account Imagery, T0145.006: Attractive Person Account Imagery). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0145.001: Copy Account Imagery + +**Summary**: Account imagery copied from an existing account.

Analysts may use reverse image search tools to try to identify previous uses of account imagery (e.g. a profile picture) by other accounts.

Threat Actors have been known to copy existing accounts’ imagery to impersonate said accounts, or to provide imagery for unrelated accounts which aren’t intended to impersonate the original assets’ owner. + **Tactic**: TA15 Establish Assets @@ -25,4 +79,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0145.002.md b/generated_pages/techniques/T0145.002.md index c8e3006..66de6e5 100644 --- a/generated_pages/techniques/T0145.002.md +++ b/generated_pages/techniques/T0145.002.md @@ -2,6 +2,56 @@ **Summary**: AI Generated images used in account imagery.

An influence operation might flesh out its account by uploading account imagery (e.g. a profile picture), increasing its perceived legitimacy. By using an AI-generated picture for this purpose, they are able to present themselves as a real person without compromising their own identity, or risking detection by taking a real person’s existing profile picture. +**Tactic**: TA15 Establish Assets **Parent Technique:** T0145 Establish Account Imagery + + +| Associated Technique | Description | +| --------- | ------------------------- | +| [T0086.002 Develop AI-Generated Images (Deepfakes)](../../generated_pages/techniques/T0086.002.md) | Analysts should use this sub-technique to document use of AI generated imagery used to support narratives. | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00082 Meta’s November 2021 Adversarial Threat Report ](../../generated_pages/incidents/I00082.md) | “[Meta] removed 41 Facebook accounts, five Groups, and four Instagram accounts for violating our policy against coordinated inauthentic behavior. This activity originated in Belarus and primarily targeted audiences in the Middle East and Europe.

“The core of this activity began in October 2021, with some accounts created as recently as mid-November. The people behind it used newly-created fake accounts — many of which were detected and disabled by our automated systems soon after creation — to pose as journalists and activists from the European Union, particularly Poland and Lithuania. Some of the accounts used profile photos likely generated using artificial intelligence techniques like generative adversarial networks (GAN). These fictitious personas posted criticism of Poland in English, Polish, and Kurdish, including pictures and videos about Polish border guards allegedly violating migrants’ rights, and compared Poland’s treatment of migrants against other countries’. They also posted to Groups focused on the welfare of migrants in Europe. A few accounts posted in Russian about relations between Belarus and the Baltic States.”


This example shows how accounts identified as participating in coordinated inauthentic behaviour were presenting themselves as journalists and activists while spreading operation narratives (T0097.102: Journalist Persona, T0097.103: Activist Persona).

Additionally, analysts at Meta identified accounts which were participating in coordinated inauthentic behaviour that had likely used AI-Generated images as their profile pictures (T0145.002: AI-Generated Account Imagery). | +| [I00088 Much Ado About ‘Somethings’ - China-Linked Influence Operation Endures Despite Takedown](../../generated_pages/incidents/I00088.md) | “The broader War of Somethings (WoS) network, so dubbed because all the Facebook pages and user accounts in the network are connected to “The War of Somethings” page,  behaves very similarly to previous Spamouflage campaigns.

“Spamouflage is a coordinated inauthentic behaviour network attributed to the Chinese state.

“Despite the WoS network’s relative sophistication, there are tell-tale signs that it is an influence operation. Several user profile photos display signs of AI generation or do not match the profile’s listed gender.”


A network of accounts connected to the Facebook page “The War of Somethings” used AI-generated images of people as their profile pictures (T0145.002: AI-Generated Account Imagery). |
+| [I00091 Facebook uncovers Chinese network behind fake expert](../../generated_pages/incidents/I00091.md) | “Earlier in July [2021], an account posing as a Swiss biologist called Wilson Edwards had made statements on Facebook and Twitter that the United States was applying pressure on the World Health Organization scientists who were studying the origins of Covid-19 in an attempt to blame the virus on China.

“State media outlets, including CGTN, Shanghai Daily and Global Times, had cited the so-called biologist based on his Facebook profile.

“However, the Swiss embassy said in August that the person likely did not exist, as the Facebook account was opened only two weeks prior to its first post and only had three friends.

“It added "there was no registry of a Swiss citizen with the name "Wilson Edwards" and no academic articles under the name", and urged Chinese media outlets to take down any mention of him.

[...]

“It also said that his profile photo also appeared to have been generated using machine-learning capabilities.”


In this example an account created on Facebook presented itself as a Swiss biologist to present a narrative related to COVID-19 (T0143.002: Fabricated Persona, T0097.106: Researcher Persona). It used an AI-Generated profile picture to disguise itself (T0145.002: AI-Generated Account Imagery). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0145.002: AI-Generated Account Imagery + +**Summary**: AI Generated images used in account imagery.
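The reports above describe profile photos “likely generated using artificial intelligence techniques like generative adversarial networks (GAN)”. One widely reported screening heuristic is that StyleGAN-family face images tend to place the eyes at near-identical pixel coordinates. A hedged sketch assuming the `face_recognition` package and locally saved photos scaled to a common size:

```python
import face_recognition  # pip install face_recognition

def eye_centres(path):
    """Mean (x, y) of each eye's landmark points, or None if no face."""
    landmarks = face_recognition.face_landmarks(
        face_recognition.load_image_file(path))
    if not landmarks:
        return None
    def centre(points):
        xs, ys = zip(*points)
        return (sum(xs) / len(xs), sum(ys) / len(ys))
    return centre(landmarks[0]["left_eye"]), centre(landmarks[0]["right_eye"])

# Across a batch of suspect photos at the same resolution, near-identical
# eye coordinates are consistent with (not proof of) GAN generation.
for photo in ["suspect_1.jpg", "suspect_2.jpg"]:  # hypothetical paths
    print(photo, eye_centres(photo))
```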

An influence operation might flesh out its account by uploading account imagery (e.g. a profile picture), increasing its perceived legitimacy. By using an AI-generated picture for this purpose, they are able to present themselves as a real person without compromising their own identity, or risking detection by taking a real person’s existing profile picture. + +**Tactic**: TA15 Establish Assets **Parent Technique:** T0145 Establish Account Imagery + + +| Associated Technique | Description | +| --------- | ------------------------- | +| [T0086.002 Develop AI-Generated Images (Deepfakes)](../../generated_pages/techniques/T0086.002.md) | Analysts should use this sub-technique to document use of AI generated imagery used to support narratives. | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00082 Meta’s November 2021 Adversarial Threat Report ](../../generated_pages/incidents/I00082.md) | “[Meta] removed 41 Facebook accounts, five Groups, and four Instagram accounts for violating our policy against coordinated inauthentic behavior. This activity originated in Belarus and primarily targeted audiences in the Middle East and Europe.

“The core of this activity began in October 2021, with some accounts created as recently as mid-November. The people behind it used newly-created fake accounts — many of which were detected and disabled by our automated systems soon after creation — to pose as journalists and activists from the European Union, particularly Poland and Lithuania. Some of the accounts used profile photos likely generated using artificial intelligence techniques like generative adversarial networks (GAN). These fictitious personas posted criticism of Poland in English, Polish, and Kurdish, including pictures and videos about Polish border guards allegedly violating migrants’ rights, and compared Poland’s treatment of migrants against other countries’. They also posted to Groups focused on the welfare of migrants in Europe. A few accounts posted in Russian about relations between Belarus and the Baltic States.”


This example shows how accounts identified as participating in coordinated inauthentic behaviour were presenting themselves as journalists and activists while spreading operation narratives (T0097.102: Journalist Persona, T0097.103: Activist Persona).

Additionally, analysts at Meta identified accounts which were participating in coordinated inauthentic behaviour that had likely used AI-Generated images as their profile pictures (T0145.002: AI-Generated Account Imagery). | +| [I00088 Much Ado About ‘Somethings’ - China-Linked Influence Operation Endures Despite Takedown](../../generated_pages/incidents/I00088.md) | “The broader War of Somethings (WoS) network, so dubbed because all the Facebook pages and user accounts in the network are connected to “The War of Somethings” page,  behaves very similarly to previous Spamouflage campaigns.

“Spamouflage is a coordinated inatuhentic behaviour network attributed to the Chinese state.

“Despite the WoS network’s relative sophistication, there are tell-tale signs that it is an influence operation. Several user profile photos display signs of AI generation or do not match the profile’s listed gender.”


A network of accounts connected to the Facebook page “The War of Somethings” used AI-generated images of people as their profile pictures (T0145.002: AI-Generated Account Imagery). |
+| [I00091 Facebook uncovers Chinese network behind fake expert](../../generated_pages/incidents/I00091.md) | “Earlier in July [2021], an account posing as a Swiss biologist called Wilson Edwards had made statements on Facebook and Twitter that the United States was applying pressure on the World Health Organization scientists who were studying the origins of Covid-19 in an attempt to blame the virus on China.

“State media outlets, including CGTN, Shanghai Daily and Global Times, had cited the so-called biologist based on his Facebook profile.

“However, the Swiss embassy said in August that the person likely did not exist, as the Facebook account was opened only two weeks prior to its first post and only had three friends.

“It added "there was no registry of a Swiss citizen with the name "Wilson Edwards" and no academic articles under the name", and urged Chinese media outlets to take down any mention of him.

[...]

“It also said that his profile photo also appeared to have been generated using machine-learning capabilities.”


In this example an account created on Facebook presented itself as a Swiss biologist to present a narrative related to COVID-19 (T0143.002: Fabricated Persona, T0097.106: Researcher Persona). It used an AI-Generated profile picture to disguise itself (T0145.002: AI-Generated Account Imagery). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0145.002: AI-Generated Account Imagery + +**Summary**: AI Generated images used in account imagery.

An influence operation might flesh out its account by uploading account imagery (e.g. a profile picture), increasing its perceived legitimacy. By using an AI-generated picture for this purpose, they are able to present themselves as a real person without compromising their own identity, or risking detection by taking a real person’s existing profile picture. + **Tactic**: TA15 Establish Assets @@ -23,4 +73,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0145.003.md b/generated_pages/techniques/T0145.003.md index 000c1aa..b4971e7 100644 --- a/generated_pages/techniques/T0145.003.md +++ b/generated_pages/techniques/T0145.003.md @@ -2,6 +2,50 @@ **Summary**: Animal used in account imagery.

An influence operation might flesh out its account by uploading a profile picture, increasing its perceived authenticity.

People sometimes legitimately use images of animals as their profile pictures (e.g. of their pets), and threat actors can mimic this behaviour to avoid the risk of detection associated with stealing or AI-generating profile pictures (see T0145.001: Copy Account Imagery and T0145.002: AI-Generated Account Imagery).

This Technique is often used by Coordinated Inauthentic Behaviour accounts (CIBs). A collection of accounts displaying the same behaviour using similar account imagery can indicate the presence of CIB. +**Tactic**: TA15 Establish Assets **Parent Technique:** T0145 Establish Account Imagery + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00088 Much Ado About ‘Somethings’ - China-Linked Influence Operation Endures Despite Takedown](../../generated_pages/incidents/I00088.md) | “Beneath a video on Facebook about the war between Israel and Hamas, Lamonica Trout commented, “America is the war monger, the Jew’s own son!” She left identical comments beneath the same video on two other Facebook pages. Trout’s profile provides no information besides her name. It lists no friends, and there is not a single post or photograph in her feed. Trout’s profile photo shows an alligator.

“Lamonica Trout is likely an invention of the group behind Spamouflage, an ongoing, multi-year influence operation that promotes Beijing’s interests. Last year, Facebook’s parent company, Meta, took down 7,704 accounts and 954 pages it identified as part of the Spamouflage operation, which it described as the “largest known cross-platform influence operation [Meta had] disrupted to date.” Facebook’s terms of service prohibit a range of deceptive and inauthentic behaviors, including efforts to conceal the purpose of social media activity or the identity of those behind it.”


In this example an account attributed to a multi-year influence operation created the persona of Lamonica Trout in a Facebook account, which used an image of an animal in its profile picture (T0145.003: Animal Account Imagery). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0145.003: Animal Account Imagery + +**Summary**: Animal used in account imagery.
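The recurring note in these sub-techniques, that many accounts sharing similar imagery can indicate Coordinated Inauthentic Behaviour, lends itself to simple clustering. A minimal sketch grouping accounts by the perceptual hash of their profile pictures (again assuming the `pillow` and `imagehash` packages; handles and paths are hypothetical):

```python
from collections import defaultdict
from PIL import Image  # pip install pillow imagehash
import imagehash

# Hypothetical mapping of account handle -> local profile-picture path.
profiles = {"acct_a": "a.jpg", "acct_b": "b.jpg", "acct_c": "c.jpg"}

clusters = defaultdict(list)
for handle, path in profiles.items():
    clusters[str(imagehash.phash(Image.open(path)))].append(handle)

# Several accounts sharing one hash is a weak CIB signal worth review.
for digest, handles in clusters.items():
    if len(handles) > 1:
        print(digest, handles)
```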

An influence operation might flesh out its account by uploading a profile picture, increasing its perceived authenticity.

People sometimes legitimately use images of animals as their profile pictures (e.g. of their pets), and threat actors can mimic this behaviour to avoid the risk of detection associated with stealing or AI-generating profile pictures (see T0145.001: Copy Account Imagery and T0145.002: AI-Generated Account Imagery).

This Technique is often used by Coordinated Inauthentic Behaviour accounts (CIBs). A collection of accounts displaying the same behaviour using similar account imagery can indicate the presence of CIB. + +**Tactic**: TA15 Establish Assets **Parent Technique:** T0145 Establish Account Imagery + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00088 Much Ado About ‘Somethings’ - China-Linked Influence Operation Endures Despite Takedown](../../generated_pages/incidents/I00088.md) | “Beneath a video on Facebook about the war between Israel and Hamas, Lamonica Trout commented, “America is the war monger, the Jew’s own son!” She left identical comments beneath the same video on two other Facebook pages. Trout’s profile provides no information besides her name. It lists no friends, and there is not a single post or photograph in her feed. Trout’s profile photo shows an alligator.

“Lamonica Trout is likely an invention of the group behind Spamouflage, an ongoing, multi-year influence operation that promotes Beijing’s interests. Last year, Facebook’s parent company, Meta, took down 7,704 accounts and 954 pages it identified as part of the Spamouflage operation, which it described as the “largest known cross-platform influence operation [Meta had] disrupted to date.” Facebook’s terms of service prohibit a range of deceptive and inauthentic behaviors, including efforts to conceal the purpose of social media activity or the identity of those behind it.”


In this example an account attributed to a multi-year influence operation created the persona of Lamonica Trout in a Facebook account, which used an image of an animal in its profile picture (T0145.003: Animal Account Imagery). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0145.003: Animal Account Imagery + +**Summary**: Animal used in account imagery.

An influence operation might flesh out its account by uploading a profile picture, increasing its perceived authenticity.

People sometimes legitimately use images of animals as their profile pictures (e.g. of their pets), and threat actors can mimic this behaviour to avoid the risk of detection associated with stealing or AI-generating profile pictures (see T0145.001: Copy Account Imagery and T0145.002: AI-Generated Account Imagery).

This Technique is often used by Coordinated Inauthentic Behaviour accounts (CIBs). A collection of accounts displaying the same behaviour using similar account imagery can indicate the presence of CIB. + **Tactic**: TA15 Establish Assets @@ -20,4 +64,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0145.004.md b/generated_pages/techniques/T0145.004.md index 13bdc34..8cf7a49 100644 --- a/generated_pages/techniques/T0145.004.md +++ b/generated_pages/techniques/T0145.004.md @@ -2,6 +2,48 @@ **Summary**: Scenery or nature used in account imagery.

An influence operation might flesh out its account by uploading account imagery (e.g. a profile picture), increasing its perceived authenticity.

People sometimes legitimately use images of scenery as their profile picture, and threat actors can mimic this behaviour to avoid the risk of detection associated with stealing or AI-generating profile pictures (see T0145.001: Copy Account Imagery and T0145.002: AI-Generated Account Imagery).

This Technique is often used by Coordinated Inauthentic Behaviour accounts (CIBs). A collection of accounts displaying the same behaviour using similar account imagery can indicate the presence of CIB. +**Tactic**: TA15 Establish Assets **Parent Technique:** T0145 Establish Account Imagery + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0145.004: Scenery Account Imagery + +**Summary**: Scenery or nature used in account imagery.

An influence operation might flesh out its account by uploading account imagery (e.g. a profile picture), increasing its perceived authenticity.

People sometimes legitimately use images of scenery as their profile picture, and threat actors can mimic this behaviour to avoid the risk of detection associated with stealing or AI-generating profile pictures (see T0145.001: Copy Account Imagery and T0145.002: AI-Generated Account Imagery).

This Technique is often used by Coordinated Inauthentic Behaviour accounts (CIBs). A collection of accounts displaying the same behaviour using similar account imagery can indicate the presence of CIB. + +**Tactic**: TA15 Establish Assets **Parent Technique:** T0145 Establish Account Imagery + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0145.004: Scenery Account Imagery + +**Summary**: Scenery or nature used in account imagery.

An influence operation might flesh out its account by uploading account imagery (e.g. a profile picture), increasing its perceived authenticity.

People sometimes legitimately use images of scenery as their profile picture, and threat actors can mimic this behaviour to avoid the risk of detection associated with stealing or AI-generating profile pictures (see T0145.001: Copy Account Imagery and T0145.002: AI-Generated Account Imagery).

This Technique is often used by Coordinated Inauthentic Behaviour accounts (CIBs). A collection of accounts displaying the same behaviour using similar account imagery can indicate the presence of CIB. + **Tactic**: TA15 Establish Assets @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0145.005.md b/generated_pages/techniques/T0145.005.md index ffaef9a..4a97dc0 100644 --- a/generated_pages/techniques/T0145.005.md +++ b/generated_pages/techniques/T0145.005.md @@ -2,6 +2,48 @@ **Summary**: A cartoon/illustrated/anime character used in account imagery.

An influence operation might flesh out its account by uploading account imagery (e.g. a profile picture), increasing its perceived authenticity.

People sometimes legitimately use images of illustrated characters as their profile picture, and threat actors can mimic this behaviour to avoid the risk of detection associated with stealing or AI-generating profile pictures (see T0145.001: Copy Account Imagery and T0145.002: AI-Generated Account Imagery).

This Technique is often used by Coordinated Inauthentic Behaviour accounts (CIBs). A collection of accounts displaying the same behaviour using similar account imagery can indicate the presence of CIB. +**Tactic**: TA15 Establish Assets **Parent Technique:** T0145 Establish Account Imagery + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0145.005: Illustrated Character Account Imagery + +**Summary**: A cartoon/illustrated/anime character used in account imagery.

An influence operation might flesh out its account by uploading account imagery (e.g. a profile picture), increasing its perceived authenticity.

People sometimes legitimately use images of illustrated characters as their profile picture, and threat actors can mimic this behaviour to avoid the risk of detection associated with stealing or AI-generating profile pictures (see T0145.001: Copy Account Imagery and T0145.002: AI-Generated Account Imagery).

This Technique is often used by Coordinated Inauthentic Behaviour accounts (CIBs). A collection of accounts displaying the same behaviour using similar account imagery can indicate the presence of CIB. + +**Tactic**: TA15 Establish Assets **Parent Technique:** T0145 Establish Account Imagery + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0145.005: Illustrated Character Account Imagery + +**Summary**: A cartoon/illustrated/anime character used in account imagery.

An influence operation might flesh out its account by uploading account imagery (e.g. a profile picture), increasing its perceived authenticity.

People sometimes legitimately use images of illustrated characters as their profile picture, and threat actors can mimic this behaviour to avoid the risk of detection associated with stealing or AI-generating profile pictures (see T0145.001: Copy Account Imagery and T0145.002: AI-Generated Account Imagery).

This Technique is often used by Coordinated Inauthentic Behaviour accounts (CIBs). A collection of accounts displaying the same behaviour using similar account imagery can indicate the presence of CIB. + **Tactic**: TA15 Establish Assets @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0145.006.md b/generated_pages/techniques/T0145.006.md index 17fc530..a1e10f8 100644 --- a/generated_pages/techniques/T0145.006.md +++ b/generated_pages/techniques/T0145.006.md @@ -2,6 +2,58 @@ **Summary**: Attractive person used in account imagery.

An influence operation might flesh out its account by uploading account imagery (e.g. a profile picture), increasing its perceived authenticity.

Pictures of physically attractive people can benefit threat actors by increasing attention given to their posts.

People sometimes legitimately use images of attractive people as their profile picture, and threat actors can mimic this behaviour to avoid the risk of detection associated with stealing or AI-generating profile pictures (see T0145.001: Copy Account Imagery and T0145.002: AI-Generated Account Imagery).

This Technique is often used by Coordinated Inauthentic Behaviour accounts (CIBs). A collection of accounts displaying the same behaviour using similar account imagery can indicate the presence of CIB. +**Tactic**: TA15 Establish Assets **Parent Technique:** T0145 Establish Account Imagery + + +| Associated Technique | Description | +| --------- | ------------------------- | +| [T0097.109 Romantic Suitor Persona](../../generated_pages/techniques/T0097.109.md) | Accounts presenting as a romantic suitor may use an attractive person in their account imagery. | +| [T0151.017 Dating Platform](../../generated_pages/techniques/T0151.017.md) | Analysts can use this sub-technique for tagging cases where an account has been identified as using a dating platform. | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00086 #WeAreNotSafe – Exposing How a Post-October 7th Disinformation Network Operates on Israeli Social Media](../../generated_pages/incidents/I00086.md) | “In the wake of the Hamas attack on October 7th, the Israel Defense Forces (IDF) Information Security Department revealed a campaign of Instagram accounts impersonating young, attractive Israeli women who were actively engaging Israeli soldiers, attempting to extract information through direct messages.

[...]

“Some profiles underwent a reverse-image search of their photos to ascertain their authenticity. Many of the images searched were found to be appropriated from genuine social media profiles or sites such as Pinterest. When this was the case, the account was marked as confirmed to be inauthentic. One innovative method involves using photos that are initially frames from videos, which allows for evading reverse searches in most cases. This is seen in Figure 4, where an image uploaded by an inauthentic account was a screenshot taken from a TikTok video.”


In this example accounts associated with an influence operation used account imagery showing “young, attractive Israeli women” (T0145.006: Attractive Person Account Imagery), with some of these assets taken from existing accounts not associated with the operation (T0145.001: Copy Account Imagery). | +| [I00087 Challenging Truth and Trust: A Global Inventory of Organized Social Media Manipulation](../../generated_pages/incidents/I00087.md) | “In 2017, Tanya O'Carroll, a technology and human rights adviser for Amnesty International, published an investigation of the political impact of bots and trolls in Mexico (O’Carroll, 2017). An article by the BBC describes a video showing the operation of a "troll farm" in Mexico, where people were tweeting in support of Enrique Peña Nieto of the PRI in 2012 (Martinez, 2018).

“According to a report published by El País, the main target of parties’ online strategies are young people, including 14 million new voters who are expected to play a decisive role in the outcome of the July 2018 election (Peinado et al., 2018). Thus, one of the strategies employed by these bots was the use of profile photos of attractive people from other countries (Soloff, 2017).”


In this example accounts copied the profile pictures of attractive people from other countries (T0145.001: Copy Account Imagery, T0145.006: Attractive Person Account Imagery). | +| [I00089 Hackers Use Fake Facebook Profiles of Attractive Women to Spread Viruses, Steal Passwords](../../generated_pages/incidents/I00089.md) | “On Facebook, Rita, Alona and Christina appeared to be just like the millions of other U.S citizens sharing their lives with the world. They discussed family outings, shared emojis and commented on each other's photographs.

“In reality, the three accounts were part of a highly-targeted cybercrime operation, used to spread malware that was able to steal passwords and spy on victims.

“Hackers with links to Lebanon likely ran the covert scheme using a strain of malware dubbed "Tempting Cedar Spyware," according to researchers from Prague-based anti-virus company Avast, which detailed its findings in a report released on Wednesday.

“In a honey trap tactic as old as time, the culprits' targets were mostly male, and lured by fake attractive women. 

“In the attack, hackers would send flirtatious messages using Facebook to the chosen victims, encouraging them to download a second, booby-trapped, chat application known as Kik Messenger to have "more secure" conversations. Upon analysis, Avast experts found that "many fell for the trap.””


In this example threat actors took on the persona of a romantic suitor on Facebook, directing their targets to another platform (T0097.109: Romantic Suitor Persona, T0145.006: Attractive Person Account Imagery, T0143.002: Fabricated Persona). |
+
+
+
+| Counters | Response types |
+| -------- | -------------- |
+
+
+# Technique T0145.006: Attractive Person Account Imagery
+
+**Summary**: Attractive person used in account imagery.

An influence operation might flesh out its account by uploading account imagery (e.g. a profile picture), increasing its perceived authenticity.

Pictures of physically attractive people can benefit threat actors by increasing attention given to their posts.

People sometimes legitimately use images of attractive people as their profile picture, and threat actors can mimic this behaviour to avoid the risk of detection associated with stealing or AI-generating profile pictures (see T0145.001: Copy Account Imagery and T0145.002: AI-Generated Account Imagery).

This Technique is often used by Coordinated Inauthentic Behaviour accounts (CIBs). A collection of accounts displaying the same behaviour using similar account imagery can indicate the presence of CIB. + +**Tactic**: TA15 Establish Assets **Parent Technique:** T0145 Establish Account Imagery + + +| Associated Technique | Description | +| --------- | ------------------------- | +| [T0097.109 Romantic Suitor Persona](../../generated_pages/techniques/T0097.109.md) | Accounts presenting as a romantic suitor may use an attractive person in their account imagery. | +| [T0151.017 Dating Platform](../../generated_pages/techniques/T0151.017.md) | Analysts can use this sub-technique for tagging cases where an account has been identified as using a dating platform. | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00086 #WeAreNotSafe – Exposing How a Post-October 7th Disinformation Network Operates on Israeli Social Media](../../generated_pages/incidents/I00086.md) | “In the wake of the Hamas attack on October 7th, the Israel Defense Forces (IDF) Information Security Department revealed a campaign of Instagram accounts impersonating young, attractive Israeli women who were actively engaging Israeli soldiers, attempting to extract information through direct messages.

[...]

“Some profiles underwent a reverse-image search of their photos to ascertain their authenticity. Many of the images searched were found to be appropriated from genuine social media profiles or sites such as Pinterest. When this was the case, the account was marked as confirmed to be inauthentic. One innovative method involves using photos that are initially frames from videos, which allows for evading reverse searches in most cases. This is seen in Figure 4, where an image uploaded by an inauthentic account was a screenshot taken from a TikTok video.”


In this example accounts associated with an influence operation used account imagery showing “young, attractive Israeli women” (T0145.006: Attractive Person Account Imagery), with some of these assets taken from existing accounts not associated with the operation (T0145.001: Copy Account Imagery). | +| [I00087 Challenging Truth and Trust: A Global Inventory of Organized Social Media Manipulation](../../generated_pages/incidents/I00087.md) | “In 2017, Tanya O'Carroll, a technology and human rights adviser for Amnesty International, published an investigation of the political impact of bots and trolls in Mexico (O’Carroll, 2017). An article by the BBC describes a video showing the operation of a "troll farm" in Mexico, where people were tweeting in support of Enrique Peña Nieto of the PRI in 2012 (Martinez, 2018).

“According to a report published by El País, the main target of parties’ online strategies are young people, including 14 million new voters who are expected to play a decisive role in the outcome of the July 2018 election (Peinado et al., 2018). Thus, one of the strategies employed by these bots was the use of profile photos of attractive people from other countries (Soloff, 2017).”


In this example accounts copied the profile pictures of attractive people from other countries (T0145.001: Copy Account Imagery, T0145.006: Attractive Person Account Imagery). | +| [I00089 Hackers Use Fake Facebook Profiles of Attractive Women to Spread Viruses, Steal Passwords](../../generated_pages/incidents/I00089.md) | “On Facebook, Rita, Alona and Christina appeared to be just like the millions of other U.S citizens sharing their lives with the world. They discussed family outings, shared emojis and commented on each other's photographs.

“In reality, the three accounts were part of a highly-targeted cybercrime operation, used to spread malware that was able to steal passwords and spy on victims.

“Hackers with links to Lebanon likely ran the covert scheme using a strain of malware dubbed "Tempting Cedar Spyware," according to researchers from Prague-based anti-virus company Avast, which detailed its findings in a report released on Wednesday.

“In a honey trap tactic as old as time, the culprits' targets were mostly male, and lured by fake attractive women. 

“In the attack, hackers would send flirtatious messages using Facebook to the chosen victims, encouraging them to download a second, booby-trapped, chat application known as Kik Messenger to have "more secure" conversations. Upon analysis, Avast experts found that "many fell for the trap.””


In this example threat actors took on the persona of a romantic suitor on Facebook, directing their targets to another platform (T0097.109: Romantic Suitor Persona, T0145.006: Attractive Person Account Imagery, T0143.002: Fabricated Persona). |
+
+
+
+| Counters | Response types |
+| -------- | -------------- |
+
+
+# Technique T0145.006: Attractive Person Account Imagery
+
+**Summary**: Attractive person used in account imagery.

An influence operation might flesh out its account by uploading account imagery (e.g. a profile picture), increasing its perceived authenticity.

Pictures of physically attractive people can benefit threat actors by increasing attention given to their posts.

People sometimes legitimately use images of attractive people as their profile picture, and threat actors can mimic this behaviour to avoid the risk of detection associated with stealing or AI-generating profile pictures (see T0145.001: Copy Account Imagery and T0145.002: AI-Generated Account Imagery).

This Technique is often used by Coordinated Inauthentic Behaviour accounts (CIBs). A collection of accounts displaying the same behaviour using similar account imagery can indicate the presence of CIB. + **Tactic**: TA15 Establish Assets @@ -24,4 +76,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0145.007.md b/generated_pages/techniques/T0145.007.md index 26d7261..0a5934a 100644 --- a/generated_pages/techniques/T0145.007.md +++ b/generated_pages/techniques/T0145.007.md @@ -2,6 +2,52 @@ **Summary**: Stock images used in account imagery.

Stock image websites produce photos of people in various situations. Threat Actors can purchase or appropriate these images for use in their account imagery, increasing perceived legitimacy while avoiding the risk of detection associated with stealing or AI-generating profile pictures (see T0145.001: Copy Account Imagery and T0145.002: AI-Generated Account Imagery). 

Stock images tend to include physically attractive people, and this can benefit threat actors by increasing attention given to their posts.

This Technique is often used by Coordinated Inauthentic Behaviour accounts (CIBs). A collection of accounts displaying the same behaviour using similar account imagery can indicate the presence of CIB. +**Tactic**: TA15 Establish Assets **Parent Technique:** T0145 Establish Account Imagery + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00080 Hundreds Of Propaganda Accounts Targeting Iran And Qatar Have Been Removed From Facebook](../../generated_pages/incidents/I00080.md) | “One example of a fake reporter account targeting Americans is “Jenny Powell,” a self-described Washington-based journalist, volunteer, and environmental activist. At first glance, Powell’s Twitter timeline looks like it belongs to a young and eager reporter amplifying her interests. But her profile photo is a stock image, and many of her links go to the propaganda sites.

“Powell, who joined the platform just last month, shares links to stories from major US news media outlets, retweets local news about Washington, DC, and regularly promotes content from The Foreign Code and The Economy Club. Other fake journalist accounts behaved similarly to Powell and had generic descriptions. One of the accounts, for a fake Bruce Lopez in Louisiana, has a bio that describes him as a “Correspondent Traveler noun|linking verb|noun/verb/adjective|,” which appears to reveal the formula used to write Twitter bios for the accounts.”


This behaviour matches T0145.007: Stock Image Account Imagery because the account was identified as using a stock image as its profile picture. | +| [I00088 Much Ado About ‘Somethings’ - China-Linked Influence Operation Endures Despite Takedown](../../generated_pages/incidents/I00088.md) | “The broader War of Somethings (WoS) network, so dubbed because all the Facebook pages and user accounts in the network are connected to “The War of Somethings” page,  behaves very similarly to previous Spamouflage campaigns. [Spamouflage is a coordinated inauthentic behaviour network attributed to the Chinese state.]

“Like other components of Spamouflage, the WoS network sometimes intersperses apolitical content with its more agenda-driven material. Many members post nearly identical comments at almost the same time. The text includes markers of automatic translation while error messages included as profile photos indicate the automated pulling of stock images.”
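The tell described above, many accounts posting nearly identical comments at almost the same time, can be screened for mechanically. A minimal sketch over hypothetical (account, timestamp, text) records:

```python
from datetime import datetime, timedelta
from difflib import SequenceMatcher

# Hypothetical scraped comments: (account, timestamp, text).
comments = [
    ("acct_a", datetime(2024, 3, 1, 12, 0, 5), "America is the war monger!"),
    ("acct_b", datetime(2024, 3, 1, 12, 0, 9), "America is the war monger !"),
    ("acct_c", datetime(2024, 3, 2, 9, 30, 0), "Lovely weather today"),
]

def coordinated_pairs(comments, window=timedelta(minutes=5), threshold=0.9):
    """Yield pairs from different accounts that are near-identical
    and posted within `window` of each other."""
    for i, (a1, t1, x1) in enumerate(comments):
        for a2, t2, x2 in comments[i + 1:]:
            if a1 == a2 or abs(t1 - t2) > window:
                continue
            if SequenceMatcher(None, x1, x2).ratio() >= threshold:
                yield (a1, x1), (a2, x2)

for pair in coordinated_pairs(comments):
    print("possible coordination:", pair)
```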


In this example analysts found an indicator of automated use of stock images in Facebook accounts; some of the accounts in the network appeared to have mistakenly uploaded error messages as profile pictures (T0145.007: Stock Image Account Imagery). The text posted by the accounts also appeared to have been translated using automation (T0085.008: Machine Translated Text). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0145.007: Stock Image Account Imagery + +**Summary**: Stock images used in account imagery.

Stock image websites produce photos of people in various situations. Threat Actors can purchase or appropriate these images for use in their account imagery, increasing perceived legitimacy while avoiding the risk of detection associated with stealing or AI-generating profile pictures (see T0145.001: Copy Account Imagery and T0145.002: AI-Generated Account Imagery). 

Stock images tend to include physically attractive people, and this can benefit threat actors by increasing attention given to their posts.

This Technique is often used by Coordinated Inauthentic Behaviour accounts (CIBs). A collection of accounts displaying the same behaviour using similar account imagery can indicate the presence of CIB. + +**Tactic**: TA15 Establish Assets **Parent Technique:** T0145 Establish Account Imagery + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00080 Hundreds Of Propaganda Accounts Targeting Iran And Qatar Have Been Removed From Facebook](../../generated_pages/incidents/I00080.md) | “One example of a fake reporter account targeting Americans is “Jenny Powell,” a self-described Washington-based journalist, volunteer, and environmental activist. At first glance, Powell’s Twitter timeline looks like it belongs to a young and eager reporter amplifying her interests. But her profile photo is a stock image, and many of her links go to the propaganda sites.

“Powell, who joined the platform just last month, shares links to stories from major US news media outlets, retweets local news about Washington, DC, and regularly promotes content from The Foreign Code and The Economy Club. Other fake journalist accounts behaved similarly to Powell and had generic descriptions. One of the accounts, for a fake Bruce Lopez in Louisiana, has a bio that describes him as a “Correspondent Traveler noun|linking verb|noun/verb/adjective|,” which appears to reveal the formula used to write Twitter bios for the accounts.”


This behaviour matches T0145.007: Stock Image Account Imagery because the account was identified as using a stock image as its profile picture. | +| [I00088 Much Ado About ‘Somethings’ - China-Linked Influence Operation Endures Despite Takedown](../../generated_pages/incidents/I00088.md) | “The broader War of Somethings (WoS) network, so dubbed because all the Facebook pages and user accounts in the network are connected to “The War of Somethings” page,  behaves very similarly to previous Spamouflage campaigns. [Spamouflage is a coordinated inauthentic behaviour network attributed to the Chinese state.]

“Like other components of Spamouflage, the WoS network sometimes intersperses apolitical content with its more agenda-driven material. Many members post nearly identical comments at almost the same time. The text includes markers of automatic translation while error messages included as profile photos indicate the automated pulling of stock images.”


In this example analysts found an indicator of automated use of stock images in Facebook accounts; some of the accounts in the network appeared to have mistakenly uploaded error messages as profile pictures (T0145.007: Stock Image Account Imagery). The text posted by the accounts also appeared to have been translated using automation (T0085.008: Machine Translated Text). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0145.007: Stock Image Account Imagery + +**Summary**: Stock images used in account imagery.

Stock image websites produce photos of people in various situations. Threat Actors can purchase or appropriate these images for use in their account imagery, increasing perceived legitimacy while avoiding the risk of detection associated with stealing or AI-generating profile pictures (see T0145.001: Copy Account Imagery and T0145.002: AI-Generated Account Imagery). 

Stock images tend to include physically attractive people, and this can benefit threat actors by increasing attention given to their posts.

This Technique is often used by accounts engaged in Coordinated Inauthentic Behaviour (CIB). A collection of accounts displaying the same behaviour and using similar account imagery can indicate the presence of CIB. + **Tactic**: TA15 Establish Assets @@ -21,4 +67,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0145.md index 5735efc..9271bf4 100644 --- a/generated_pages/techniques/T0145.md +++ b/generated_pages/techniques/T0145.md @@ -2,6 +2,48 @@ **Summary**: Introduce visual elements to an account where a platform allows this functionality (e.g. a profile picture, a cover photo, etc). &#x0D;

Threat Actors who don’t want to use pictures of themselves in their social media accounts may use alternate imagery to make their account appear more legitimate. +**Tactic**: TA15 Establish Assets + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0145: Establish Account Imagery + +**Summary**: Introduce visual elements to an account where a platform allows this functionality (e.g. a profile picture, a cover photo, etc). 

Threat Actors who don’t want to use pictures of themselves in their social media accounts may use alternate imagery to make their account appear more legitimate. + +**Tactic**: TA15 Establish Assets + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0145: Establish Account Imagery + +**Summary**: Introduce visual elements to an account where a platform allows this functionality (e.g. a profile picture, a cover photo, etc). 

Threat Actors who don’t want to use pictures of themselves in their social media accounts may use alternate imagery to make their account appear more legitimate. + **Tactic**: TA15 Establish Assets @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0146.001.md b/generated_pages/techniques/T0146.001.md index 2fa3740..3e31b2d 100644 --- a/generated_pages/techniques/T0146.001.md +++ b/generated_pages/techniques/T0146.001.md @@ -2,6 +2,50 @@ **Summary**: Many online platforms allow users to create free accounts on their platform. A Free Account is an Account which does not require payment at account creation and is not subscribed to paid platform features. +**Tactic**: TA15 Establish Assets **Parent Technique:** T0146 Account Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00121 Operation Overload: how pro-Russian actors flood newsrooms with fake content and seek to divert their efforts](../../generated_pages/incidents/I00121.md) | The unique aspect of Operation Overload is a barrage of emails sent to newsrooms and fact-checkers across Europe. The authors of these messages urge recipients to verify content allegedly found online. The email subject lines often include an incitement to verify the claims briefly described in the message body. This is followed by a short list of links directing recipients to posts on Telegram, X, or known pro-Russian websites, including Pravda and Sputnik.

We have collected 221 emails sent to 20 organisations. The organisations mostly received identical emails urging them to fact-check specific false stories, which demonstrates that the emails were sent as part of a larger coordinated campaign.

[...]

The authors of the emails do not hide their intention to see the fake content widely spread. In February 2024, a journalist at the German outlet CORRECTIV engaged with the sender of one of the emails, providing feedback on the narratives which were originally sent. CORRECTIV received a response from the same Gmail address, initially expressing respect and trust in CORRECTIV’s assessment, while asking: “is it possible for your work to be seen by as many people as possible?”, thereby clearly stating the goal of the operation.

[...]

All the emails come from authors posing as concerned citizens. All emails are sent with Gmail accounts, which is typical for personal use. This makes it challenging to identify the individuals behind these emails, as anyone can open a Gmail account for free. The email headers indicate that the messages were sent from the Gmail interface, not from a personal client which would disclose the sender’s IP address.


In this example, threat actors used Gmail accounts (T0146.001: Free Account Asset, T0097.100: Individual Persona, T0143.002: Fabricated Persona, T0153.001: Email Platform) to target journalists and fact-checkers, with the apparent goal of having them amplify operation narratives through fact checks. | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0146.001: Free Account Asset + +**Summary**: Many online platforms allow users to create free accounts on their platform. A Free Account is an Account which does not require payment at account creation and is not subscribed to paid platform features. + +**Tactic**: TA15 Establish Assets **Parent Technique:** T0146 Account Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00121 Operation Overload: how pro-Russian actors flood newsrooms with fake content and seek to divert their efforts](../../generated_pages/incidents/I00121.md) | The unique aspect of Operation Overload is a barrage of emails sent to newsrooms and fact-checkers across Europe. The authors of these messages urge recipients to verify content allegedly found online. The email subject lines often include an incitement to verify the claims briefly described in the message body. This is followed by a short list of links directing recipients to posts on Telegram, X, or known pro-Russian websites, including Pravda and Sputnik.&#x0D;

We have collected 221 emails sent to 20 organisations. The organisations mostly received identical emails urging them to fact-check specific false stories, which demonstrates that the emails were sent as part of a larger coordinated campaign.

[...]

The authors of the emails do not hide their intention to see the fake content widely spread. In February 2024, a journalist at the German outlet CORRECTIV engaged with the sender of one of the emails, providing feedback on the narratives which were originally sent. CORRECTIV received a response from the same Gmail address, initially expressing respect and trust in CORRECTIV’s assessment, while asking: “is it possible for your work to be seen by as many people as possible?”, thereby clearly stating the goal of the operation.

[...]

All the emails come from authors posing as concerned citizens. All emails are sent with Gmail accounts, which is typical for personal use. This makes it challenging to identify the individuals behind these emails, as anyone can open a Gmail account for free. The email headers indicate that the messages were sent from the Gmail interface, not from a personal client which would disclose the sender’s IP address.


In this example, threat actors used Gmail accounts (T0146.001: Free Account Asset, T0097.100: Individual Persona, T0143.002: Fabricated Persona, T0153.001: Email Platform) to target journalists and fact-checkers, with the apparent goal of having them amplify operation narratives through fact checks. | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0146.001: Free Account Asset + +**Summary**: Many online platforms allow users to create free accounts on their platform. A Free Account is an Account which does not require payment at account creation and is not subscribed to paid platform features. + **Tactic**: TA15 Establish Assets @@ -20,4 +64,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0146.002.md index 447e0f6..dc0e828 100644 --- a/generated_pages/techniques/T0146.002.md +++ b/generated_pages/techniques/T0146.002.md @@ -2,6 +2,50 @@ **Summary**: Some online platforms afford accounts extra features, or other benefits, if the user pays a fee. For example, as of September 2024, content posted by a Paid Account on X (previously Twitter) is prioritised in the platform’s algorithm. +**Tactic**: TA15 Establish Assets **Parent Technique:** T0146 Account Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00116 Blue-tick scammers target consumers who complain on X](../../generated_pages/incidents/I00116.md) | Consumers who complain of poor customer service on X are being targeted by scammers after the social media platform formerly known as Twitter changed its account verification process.&#x0D;

Bank customers and airline passengers are among those at risk of phishing scams when they complain to companies via X. Fraudsters, masquerading as customer service agents, respond under fake X handles and trick victims into disclosing their bank details to get a promised refund.

They typically win the trust of victims by displaying the blue checkmark icon, which until this year denoted accounts that had been officially verified by X.

Changes introduced this year allow the icon to be bought by anyone who pays an £11 monthly fee for the site’s subscription service, renamed this month from Twitter Blue to X Premium. Businesses that pay £950 a month receive a gold tick. X’s terms and conditions do not state whether subscriber accounts are pre-vetted.

Andrew Thomas was contacted by a scam account after posting a complaint to the travel platform Booking.com. “I’d been trying since April to get a refund after our holiday flights were cancelled and finally resorted to X,” he said.

“I received a response asking me to follow them, and DM [direct message] them with a contact number. They then called me via WhatsApp asking for my reference number so they could investigate. Later they called back to say that I would be refunded via their payment partner for which I’d need to download an app.”

Thomas became suspicious and checked the X profile. “It looked like the real thing, but I noticed that there was an unexpected hyphen in the Twitter handle and that it had only joined X in July 2023,” he said.


In this example a paid account was newly created on X and used to direct users to other platforms (T0146.002: Paid Account Asset, T0146.003: Verified Account Asset, T0146.005: Lookalike Account ID, T0097.205: Business Persona, T0122: Direct Users to Alternative Platforms, T0143.003: Impersonated Persona, T0151.008: Microblogging Platform, T0150.001: Newly Created Asset). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0146.002: Paid Account Asset + +**Summary**: Some online platforms afford accounts extra features, or other benefits, if the user pays a fee. For example, as of September 2024, content posted by a Paid Account on X (previously Twitter) is prioritised in the platform’s algorithm. + +**Tactic**: TA15 Establish Assets **Parent Technique:** T0146 Account Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00116 Blue-tick scammers target consumers who complain on X](../../generated_pages/incidents/I00116.md) | Consumers who complain of poor customer service on X are being targeted by scammers after the social media platform formerly known as Twitter changed its account verification process.&#x0D;

Bank customers and airline passengers are among those at risk of phishing scams when they complain to companies via X. Fraudsters, masquerading as customer service agents, respond under fake X handles and trick victims into disclosing their bank details to get a promised refund.

They typically win the trust of victims by displaying the blue checkmark icon, which until this year denoted accounts that had been officially verified by X.

Changes introduced this year allow the icon to be bought by anyone who pays an £11 monthly fee for the site’s subscription service, renamed this month from Twitter Blue to X Premium. Businesses that pay £950 a month receive a gold tick. X’s terms and conditions do not state whether subscriber accounts are pre-vetted.

Andrew Thomas was contacted by a scam account after posting a complaint to the travel platform Booking.com. “I’d been trying since April to get a refund after our holiday flights were cancelled and finally resorted to X,” he said.

“I received a response asking me to follow them, and DM [direct message] them with a contact number. They then called me via WhatsApp asking for my reference number so they could investigate. Later they called back to say that I would be refunded via their payment partner for which I’d need to download an app.”

Thomas became suspicious and checked the X profile. “It looked like the real thing, but I noticed that there was an unexpected hyphen in the Twitter handle and that it had only joined X in July 2023,” he said.


In this example a paid account was newly created on X and used to direct users to other platforms (T0146.002: Paid Account Asset, T0146.003: Verified Account Asset, T0146.005: Lookalike Account ID, T0097.205: Business Persona, T0122: Direct Users to Alternative Platforms, T0143.003: Impersonated Persona, T0151.008: Microblogging Platform, T0150.001: Newly Created Asset). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0146.002: Paid Account Asset + +**Summary**: Some online platforms afford accounts extra features, or other benefits, if the user pays a fee. For example, as of September 2024, content posted by a Paid Account on X (previously Twitter) is prioritised in the platform’s algorithm. + **Tactic**: TA15 Establish Assets @@ -20,4 +64,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0146.003.md index 11614c3..d20f996 100644 --- a/generated_pages/techniques/T0146.003.md +++ b/generated_pages/techniques/T0146.003.md @@ -2,6 +2,54 @@ **Summary**: Some online platforms apply badges of verification to accounts which meet certain criteria.&#x0D;

On some platforms (such as dating apps) a verification badge signifies that the account has passed the platform’s identity verification checks. On some platforms (such as X (previously Twitter)) a verification badge signifies that an account has paid for the platform’s service. +**Tactic**: TA15 Establish Assets **Parent Technique:** T0146 Account Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00113 Inside the Shadowy World of Disinformation for Hire in Kenya](../../generated_pages/incidents/I00113.md) | Researchers at Mozilla examined influence operations targeting Kenyan citizens on Twitter in 2021, providing “a grim window into the booming and shadowy industry of Twitter influencers for political hire here in Kenya”. The report touches upon how actors gained access to Twitter accounts, and what personas they presented:&#x0D;

Verified accounts are complicit. One influencer we spoke to mentioned that the people who own coveted “blue check” accounts will often rent them out for disinformation campaigns. These verified accounts can improve the campaign’s chances of trending. Says one interviewee: “The owner of the account usually receives a cut of the campaign loot”.

[...]

Many of the accounts we examined appear to give an aura of authenticity, but in reality they are not authentic. Simply looking at their date of creation won’t give you a hint as to their purpose. We had to dig deeper. The profile pictures and content of some of the accounts gave us the answers we were looking for. A common tactic these accounts utilize is using suggestive pictures of women to bait men into following them, or at least pay attention. In terms of content, many of these accounts tweeted off the same hashtags for days on end and will constantly retweet a specific set of accounts.


Actors participating in this operation rented out verified Twitter accounts (in 2021 a checkmark on Twitter verified a user’s identity), which were repurposed and used updated account imagery (T0146.003: Verified Account Asset, T0150.007: Rented Asset, T0150.004: Repurposed Asset, T0145.006: Attractive Person Account Imagery). | +| [I00116 Blue-tick scammers target consumers who complain on X](../../generated_pages/incidents/I00116.md) | Consumers who complain of poor customer service on X are being targeted by scammers after the social media platform formerly known as Twitter changed its account verification process.&#x0D;

Bank customers and airline passengers are among those at risk of phishing scams when they complain to companies via X. Fraudsters, masquerading as customer service agents, respond under fake X handles and trick victims into disclosing their bank details to get a promised refund.

They typically win the trust of victims by displaying the blue checkmark icon, which until this year denoted accounts that had been officially verified by X.

Changes introduced this year allow the icon to be bought by anyone who pays an £11 monthly fee for the site’s subscription service, renamed this month from Twitter Blue to X Premium. Businesses that pay £950 a month receive a gold tick. X’s terms and conditions do not state whether subscriber accounts are pre-vetted.

Andrew Thomas was contacted by a scam account after posting a complaint to the travel platform Booking.com. “I’d been trying since April to get a refund after our holiday flights were cancelled and finally resorted to X,” he said.

“I received a response asking me to follow them, and DM [direct message] them with a contact number. They then called me via WhatsApp asking for my reference number so they could investigate. Later they called back to say that I would be refunded via their payment partner for which I’d need to download an app.”

Thomas became suspicious and checked the X profile. “It looked like the real thing, but I noticed that there was an unexpected hyphen in the Twitter handle and that it had only joined X in July 2023,” he said.


In this example a paid account was newly created on X and used to direct users to other platforms (T0146.002: Paid Account Asset, T0146.003: Verified Account Asset, T0146.005: Lookalike Account ID, T0097.205: Business Persona, T0122: Direct Users to Alternative Platforms, T0143.003: Impersonated Persona, T0151.008: Microblogging Platform, T0150.001: Newly Created Asset). | +| [I00120 factcheckUK or fakecheckUK? Reinventing the political faction as the impartial factchecker](../../generated_pages/incidents/I00120.md) | Ahead of the 2019 UK Election during a leader’s debate, the Conservative party rebranded their “Conservative Campaign Headquarters Press” account to “FactCheckUK”:&#x0D;

The evening of the 19th November 2019 saw the first of three Leaders’ Debates on ITV, starting at 8pm and lasting for an hour. Current Prime Minister and leader of the Conservatives, Boris Johnson faced off against Labour party leader, Jeremy Corbyn. Plenty of people will have been watching the debate live, but a good proportion were “watching” (er, “twitching”?) via Twitter. This is something I’ve done in the past for certain shows. In some cases I just can’t watch or listen, but I can read, and in other cases, the commentary is far more interesting and entertaining than the show itself will ever be. This, for me, is just such a case. But very quickly, all eyes turned upon a modestly sized account with the handle @CCHQPress. That’s short for Conservative Campaign Headquarters Press. According to their (current!) Twitter bio, they are based in Westminster and they provide “snippets of news and commentary from CCHQ” to their 75k followers.

That is, until a few minutes into the debate.

All at once, like a person throwing off their street clothes to reveal some sinister new identity underneath, @CCHQPress abruptly shed its name, blue Conservative logo, Boris Johnson banner, and bio description. Moments later, it had entirely reinvented itself.

The purple banner was emblazoned with white font that read “✓ factcheckUK [with a “FROM CCQH” subheading]”.

The matching profile picture was a white tick in a purple circle. The bio was updated to: “Fact checking Labour from CCHQ”. And the name now read factcheckUK, with the customary Twitter blue (or white depending on your phone settings!) validation tick still after it


In this example an existing verified social media account on Twitter was repurposed to inauthentically present itself as a Fact Checking service (T0151.008: Microblogging Platform, T0150.003: Pre-Existing Asset, T0146.003: Verified Account Asset, T0097.203: Fact Checking Organisation Persona, T0143.002: Fabricated Persona). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0146.003: Verified Account Asset + +**Summary**: Some online platforms apply badges of verification to accounts which meet certain criteria.

On some platforms (such as dating apps) a verification badge signifies that the account has passed the platform’s identity verification checks. On some platforms (such as X (previously Twitter)) a verification badge signifies that an account has paid for the platform’s service. + +**Tactic**: TA15 Establish Assets **Parent Technique:** T0146 Account Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00113 Inside the Shadowy World of Disinformation for Hire in Kenya](../../generated_pages/incidents/I00113.md) | Researchers at Mozilla examined influence operations targeting Kenyan citizens on Twitter in 2021, providing “a grim window into the booming and shadowy industry of Twitter influencers for political hire here in Kenya”. The report touches upon how actors gained access to Twitter accounts, and what personas they presented:&#x0D;

Verified accounts are complicit. One influencer we spoke to mentioned that the people who own coveted “blue check” accounts will often rent them out for disinformation campaigns. These verified accounts can improve the campaign’s chances of trending. Says one interviewee: “The owner of the account usually receives a cut of the campaign loot”.

[...]

Many of the accounts we examined appear to give an aura of authenticity, but in reality they are not authentic. Simply looking at their date of creation won’t give you a hint as to their purpose. We had to dig deeper. The profile pictures and content of some of the accounts gave us the answers we were looking for. A common tactic these accounts utilize is using suggestive pictures of women to bait men into following them, or at least pay attention. In terms of content, many of these accounts tweeted off the same hashtags for days on end and will constantly retweet a specific set of accounts.


Actors participating in this operation rented out verified Twitter accounts (in 2021 a checkmark on Twitter verified a user’s identity), which were repurposed and used updated account imagery (T0146.003: Verified Account Asset, T0150.007: Rented Asset, T0150.004: Repurposed Asset, T0145.006: Attractive Person Account Imagery). | +| [I00116 Blue-tick scammers target consumers who complain on X](../../generated_pages/incidents/I00116.md) | Consumers who complain of poor customer service on X are being targeted by scammers after the social media platform formerly known as Twitter changed its account verification process.&#x0D;

Bank customers and airline passengers are among those at risk of phishing scams when they complain to companies via X. Fraudsters, masquerading as customer service agents, respond under fake X handles and trick victims into disclosing their bank details to get a promised refund.

They typically win the trust of victims by displaying the blue checkmark icon, which until this year denoted accounts that had been officially verified by X.

Changes introduced this year allow the icon to be bought by anyone who pays an £11 monthly fee for the site’s subscription service, renamed this month from Twitter Blue to X Premium. Businesses that pay £950 a month receive a gold tick. X’s terms and conditions do not state whether subscriber accounts are pre-vetted.

Andrew Thomas was contacted by a scam account after posting a complaint to the travel platform Booking.com. “I’d been trying since April to get a refund after our holiday flights were cancelled and finally resorted to X,” he said.

“I received a response asking me to follow them, and DM [direct message] them with a contact number. They then called me via WhatsApp asking for my reference number so they could investigate. Later they called back to say that I would be refunded via their payment partner for which I’d need to download an app.”

Thomas became suspicious and checked the X profile. “It looked like the real thing, but I noticed that there was an unexpected hyphen in the Twitter handle and that it had only joined X in July 2023,” he said.


In this example a paid account was newly created on X and used to direct users to other platforms (T0146.002: Paid Account Asset, T0146.003: Verified Account Asset, T0146.005: Lookalike Account ID, T0097.205: Business Persona, T0122: Direct Users to Alternative Platforms, T0143.003: Impersonated Persona, T0151.008: Microblogging Platform, T0150.001: Newly Created Asset). | +| [I00120 factcheckUK or fakecheckUK? Reinventing the political faction as the impartial factchecker](../../generated_pages/incidents/I00120.md) | Ahead of the 2019 UK Election during a leader’s debate, the Conservative party rebranded their “Conservative Campaign Headquarters Press” account to “FactCheckUK”:&#x0D;

The evening of the 19th November 2019 saw the first of three Leaders’ Debates on ITV, starting at 8pm and lasting for an hour. Current Prime Minister and leader of the Conservatives, Boris Johnson faced off against Labour party leader, Jeremy Corbyn. Plenty of people will have been watching the debate live, but a good proportion were “watching” (er, “twitching”?) via Twitter. This is something I’ve done in the past for certain shows. In some cases I just can’t watch or listen, but I can read, and in other cases, the commentary is far more interesting and entertaining than the show itself will ever be. This, for me, is just such a case. But very quickly, all eyes turned upon a modestly sized account with the handle @CCHQPress. That’s short for Conservative Campaign Headquarters Press. According to their (current!) Twitter bio, they are based in Westminster and they provide “snippets of news and commentary from CCHQ” to their 75k followers.

That is, until a few minutes into the debate.

All at once, like a person throwing off their street clothes to reveal some sinister new identity underneath, @CCHQPress abruptly shed its name, blue Conservative logo, Boris Johnson banner, and bio description. Moments later, it had entirely reinvented itself.

The purple banner was emblazoned with white font that read “✓ factcheckUK [with a “FROM CCQH” subheading]”.

The matching profile picture was a white tick in a purple circle. The bio was updated to: “Fact checking Labour from CCHQ”. And the name now read factcheckUK, with the customary Twitter blue (or white depending on your phone settings!) validation tick still after it


In this example an existing verified social media account on Twitter was repurposed to inauthentically present itself as a Fact Checking service (T0151.008: Microblogging Platform, T0150.003: Pre-Existing Asset, T0146.003: Verified Account Asset, T0097.203: Fact Checking Organisation Persona, T0143.002: Fabricated Persona). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0146.003: Verified Account Asset + +**Summary**: Some online platforms apply badges of verification to accounts which meet certain criteria.

On some platforms (such as dating apps) a verification badge signifies that the account has passed the platform’s identity verification checks. On some platforms (such as X (previously Twitter)) a verification badge signifies that an account has paid for the platform’s service. + **Tactic**: TA15 Establish Assets @@ -22,4 +70,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0146.004.md index f5708a3..eee82ec 100644 --- a/generated_pages/techniques/T0146.004.md +++ b/generated_pages/techniques/T0146.004.md @@ -2,6 +2,52 @@ **Summary**: Some accounts will have special privileges / will be in control of the Digital Community Hosting Asset; for example, the Admin of a Facebook Page, a Moderator of a Subreddit, etc. +**Tactic**: TA15 Establish Assets **Parent Technique:** T0146 Account Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00101 Pro-Putin Disinformation Warriors Take War of Aggression to Reddit](../../generated_pages/incidents/I00101.md) | This report looks at changes in content posted to communities on Reddit (called Subreddits) after teams of voluntary moderators are replaced with what appear to be pro-Russian voices:&#x0D;

The r/antiwar subreddit appears to be a very recent takeover target. With 12,900 members it is not the largest community on Reddit, but this does place it squarely within the top 5% of all communities in terms of membership.

Three months ago a new moderator team was instated by subreddit head u/democracy101. Any posts documenting Russian aggression in Ukraine are now swiftly removed, while the board has been flooded with posts about how Ukraine is losing, or how American “neocons wrecked” the country.

The pinned post from moderator u/n0ahbody proclaims: “People who call for an end to Russian aggressions but not the Western aggressions Russia is reacting to don’t really want peace.” This user takes the view that any negative opinion about Russia is “shaped by what the fanatically Russophobic MSM wants you to think,” and that the United States is not threatened by its neighbors. Russia is.”

When u/n0ahbody took over the sub, the user posted a triumphant and vitriolic diatribe in another pro-Russia subreddit with some 33,500 members, r/EndlessWar. “We are making progress. We are purging the sub of all NAFO and NAFO-adjacent elements. Hundreds of them have been banned over the last 24 hours for various rule infractions, for being NAFO or NAFO-adjacent,” the user said, referencing the grassroots, pro-Ukrainian North Atlantic Fella Organization (NAFO) meme movement.

Several former users have reported they have indeed been banned from r/antiwar since the change in moderators. “If this subreddit cannot be explicitly against the invasion of Ukraine it will never truly be anti-war,” wrote one user Halcyon_Rein, in the antiwar subreddit on September 6. They then edited the post to say, “Edit: btw, I got f**king banned for this 💀💀💀”


A community hosted on Reddit was taken over by new moderators (T0151.011: Community Sub-Forum, T0150.005: Compromised Asset). These moderators removed content posted to the community which favoured Ukraine over Russia (T0146.004: Administrator Account Asset, T0151.011: Community Sub-Forum, T0124: Suppress Opposition). | +| [I00129 Teen who hacked Joe Biden and Bill Gates' Twitter accounts sentenced to three years in prison](../../generated_pages/incidents/I00129.md) | An 18-year-old hacker who pulled off a huge breach in 2020, infiltrating several high profile Twitter accounts to solicit bitcoin transactions, has agreed to serve three years in prison for his actions.

Graham Ivan Clark, of Florida, was 17 years old at the time of the hack in July, during which he took over a number of major accounts including those of Joe Biden, Bill Gates and Kim Kardashian West.

Once he accessed them, Clark tweeted a link to a bitcoin address and wrote “all bitcoin sent to our address below will be sent back to you doubled!” According to court documents, Clark made more than $100,000 from the scheme, which his lawyers say he has since returned.

Clark was able to access the accounts after convincing an employee at Twitter he worked in the company’s information technology department, according to the Tampa Bay Times.


In this example a threat actor gained access to Twitter’s customer service portal through social engineering (T0146.004: Administrator Account Asset, T0150.005: Compromised Asset, T0151.008: Microblogging Platform), which they used to take over accounts of public figures (T0146.003: Verified Account Asset, T0143.003: Impersonated Persona, T0150.005: Compromised Asset, T0151.008: Microblogging Platform).

The threat actor used these compromised accounts to trick their followers into sending bitcoin to their wallet (T0148.009: Cryptocurrency Wallet). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0146.004: Administrator Account Asset + +**Summary**: Some accounts will have special privileges / will be in control of the Digital Community Hosting Asset; for example, the Admin of a Facebook Page, a Moderator of a Subreddit, etc. + +**Tactic**: TA15 Establish Assets **Parent Technique:** T0146 Account Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00101 Pro-Putin Disinformation Warriors Take War of Aggression to Reddit](../../generated_pages/incidents/I00101.md) | This report looks at changes in content posted to communities on Reddit (called Subreddits) after teams of voluntary moderators are replaced with what appear to be pro-Russian voices:&#x0D;

The r/antiwar subreddit appears to be a very recent takeover target. With 12,900 members it is not the largest community on Reddit, but this does place it squarely within the top 5% of all communities in terms of membership.

Three months ago a new moderator team was instated by subreddit head u/democracy101. Any posts documenting Russian aggression in Ukraine are now swiftly removed, while the board has been flooded with posts about how Ukraine is losing, or how American “neocons wrecked” the country.

The pinned post from moderator u/n0ahbody proclaims: “People who call for an end to Russian aggressions but not the Western aggressions Russia is reacting to don’t really want peace.” This user takes the view that any negative opinion about Russia is “shaped by what the fanatically Russophobic MSM wants you to think,” and that the United States is not threatened by its neighbors. Russia is.”

When u/n0ahbody took over the sub, the user posted a triumphant and vitriolic diatribe in another pro-Russia subreddit with some 33,500 members, r/EndlessWar. “We are making progress. We are purging the sub of all NAFO and NAFO-adjacent elements. Hundreds of them have been banned over the last 24 hours for various rule infractions, for being NAFO or NAFO-adjacent,” the user said, referencing the grassroots, pro-Ukrainian North Atlantic Fella Organization (NAFO) meme movement.

Several former users have reported they have indeed been banned from r/antiwar since the change in moderators. “If this subreddit cannot be explicitly against the invasion of Ukraine it will never truly be anti-war,” wrote one user Halcyon_Rein, in the antiwar subreddit on September 6. They then edited the post to say, “Edit: btw, I got f**king banned for this 💀💀💀”


A community hosted on Reddit was taken over by new moderators (T0151.011: Community Sub-Forum, T0150.005: Compromised Asset). These moderators removed content posted to the community which favoured Ukraine over Russia (T0146.004: Administrator Account Asset, T0151.011: Community Sub-Forum, T0124: Suppress Opposition). | +| [I00129 Teen who hacked Joe Biden and Bill Gates' Twitter accounts sentenced to three years in prison](../../generated_pages/incidents/I00129.md) | An 18-year-old hacker who pulled off a huge breach in 2020, infiltrating several high profile Twitter accounts to solicit bitcoin transactions, has agreed to serve three years in prison for his actions.

Graham Ivan Clark, of Florida, was 17 years old at the time of the hack in July, during which he took over a number of major accounts including those of Joe Biden, Bill Gates and Kim Kardashian West.

Once he accessed them, Clark tweeted a link to a bitcoin address and wrote “all bitcoin sent to our address below will be sent back to you doubled!” According to court documents, Clark made more than $100,000 from the scheme, which his lawyers say he has since returned.

Clark was able to access the accounts after convincing an employee at Twitter he worked in the company’s information technology department, according to the Tampa Bay Times.


In this example a threat actor gained access to Twitter’s customer service portal through social engineering (T0146.004: Administrator Account Asset, T0150.005: Compromised Asset, T0151.008: Microblogging Platform), which they used to take over accounts of public figures (T0146.003: Verified Account Asset, T0143.003: Impersonated Persona, T0150.005: Compromised Asset, T0151.008: Microblogging Platform).

The threat actor used these compromised accounts to trick their followers into sending bitcoin to their wallet (T0148.009: Cryptocurrency Wallet). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0146.004: Administrator Account Asset + +**Summary**: Some accounts will have special privileges / will be in control of the Digital Community Hosting Asset; for example, the Admin of a Facebook Page, a Moderator of a Subreddit, etc. + **Tactic**: TA15 Establish Assets @@ -21,4 +67,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0146.005.md index e364eb2..ed6c484 100644 --- a/generated_pages/techniques/T0146.005.md +++ b/generated_pages/techniques/T0146.005.md @@ -2,6 +2,50 @@ **Summary**: Many platforms which host online communities require creation of a username (or another unique identifier) when an Account is created.&#x0D;

Sometimes people create usernames which are visually similar to other existing accounts’ usernames. While this is not necessarily an indicator of malicious behaviour, actors can create Lookalike Account IDs to support Impersonations or Parody. +**Tactic**: TA15 Establish Assets **Parent Technique:** T0146 Account Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00116 Blue-tick scammers target consumers who complain on X](../../generated_pages/incidents/I00116.md) | Consumers who complain of poor customer service on X are being targeted by scammers after the social media platform formerly known as Twitter changed its account verification process.

Bank customers and airline passengers are among those at risk of phishing scams when they complain to companies via X. Fraudsters, masquerading as customer service agents, respond under fake X handles and trick victims into disclosing their bank details to get a promised refund.

They typically win the trust of victims by displaying the blue checkmark icon, which until this year denoted accounts that had been officially verified by X.

Changes introduced this year allow the icon to be bought by anyone who pays an £11 monthly fee for the site’s subscription service, renamed this month from Twitter Blue to X Premium. Businesses that pay £950 a month receive a gold tick. X’s terms and conditions do not state whether subscriber accounts are pre-vetted.

Andrew Thomas was contacted by a scam account after posting a complaint to the travel platform Booking.com. “I’d been trying since April to get a refund after our holiday flights were cancelled and finally resorted to X,” he said.

“I received a response asking me to follow them, and DM [direct message] them with a contact number. They then called me via WhatsApp asking for my reference number so they could investigate. Later they called back to say that I would be refunded via their payment partner for which I’d need to download an app.”

Thomas became suspicious and checked the X profile. “It looked like the real thing, but I noticed that there was an unexpected hyphen in the Twitter handle and that it had only joined X in July 2023,” he said.


In this example a paid account was newly created on X and used to direct users to other platforms (T0146.002: Paid Account Asset, T0146.003: Verified Account Asset, T0146.005: Lookalike Account ID, T0097.205: Business Persona, T0122: Direct Users to Alternative Platforms, T0143.003: Impersonated Persona, T0151.008: Microblogging Platform, T0150.001: Newly Created Asset). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0146.005: Lookalike Account ID + +**Summary**: Many platforms which host online communities require creation of a username (or another unique identifier) when an Account is created.&#x0D;

Sometimes people create usernames which are visually similar to other existing accounts’ usernames. While this is not necessarily an indicator of malicious behaviour, actors can create Lookalike Account IDs to support Impersonations or Parody. + +**Tactic**: TA15 Establish Assets **Parent Technique:** T0146 Account Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00116 Blue-tick scammers target consumers who complain on X](../../generated_pages/incidents/I00116.md) | Consumers who complain of poor customer service on X are being targeted by scammers after the social media platform formerly known as Twitter changed its account verification process.

Bank customers and airline passengers are among those at risk of phishing scams when they complain to companies via X. Fraudsters, masquerading as customer service agents, respond under fake X handles and trick victims into disclosing their bank details to get a promised refund.

They typically win the trust of victims by displaying the blue checkmark icon, which until this year denoted accounts that had been officially verified by X.

Changes introduced this year allow the icon to be bought by anyone who pays an £11 monthly fee for the site’s subscription service, renamed this month from Twitter Blue to X Premium. Businesses that pay £950 a month receive a gold tick. X’s terms and conditions do not state whether subscriber accounts are pre-vetted.

Andrew Thomas was contacted by a scam account after posting a complaint to the travel platform Booking.com. “I’d been trying since April to get a refund after our holiday flights were cancelled and finally resorted to X,” he said.

“I received a response asking me to follow them, and DM [direct message] them with a contact number. They then called me via WhatsApp asking for my reference number so they could investigate. Later they called back to say that I would be refunded via their payment partner for which I’d need to download an app.”

Thomas became suspicious and checked the X profile. “It looked like the real thing, but I noticed that there was an unexpected hyphen in the Twitter handle and that it had only joined X in July 2023,” he said.


In this example a paid account was newly created on X and used to direct users to other platforms (T0146.002: Paid Account Asset, T0146.003: Verified Account Asset, T0146.005: Lookalike Account ID, T0097.205: Business Persona, T0122: Direct Users to Alternative Platforms, T0143.003: Impersonated Persona, T0151.008: Microblogging Platform, T0150.001: Newly Created Asset). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0146.005: Lookalike Account ID + +**Summary**: Many platforms which host online communities require creation of a username (or another unique identifier) when an Account is created.&#x0D;

Sometimes people create usernames which are visually similar to other existing accounts’ usernames. While this is not necessarily an indicator of malicious behaviour, actors can create Lookalike Account IDs to support Impersonations or Parody. + **Tactic**: TA15 Establish Assets @@ -20,4 +64,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0146.006.md b/generated_pages/techniques/T0146.006.md index 0a19c03..bf09c4a 100644 --- a/generated_pages/techniques/T0146.006.md +++ b/generated_pages/techniques/T0146.006.md @@ -2,6 +2,52 @@ **Summary**: Some online platforms allow users to take advantage of the platform’s features without creating an account. Examples include the Paste Platform Pastebin, and the Image Board Platforms 4chan and 8chan. +**Tactic**: TA15 Establish Assets **Parent Technique:** T0146 Account Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00100 Why ThisPersonDoesNotExist (and its copycats) need to be restricted](../../generated_pages/incidents/I00100.md) | You might have heard about the recent viral sensation, ThisPersonDoesNotExist.com, a website, launched two weeks ago, that uses Nvidia’s publicly available artificial intelligence technology to draw an invented, photo-realistic human being with each refresh. The tech is impressive and artistically evocative. It’s also irresponsible and needs to be restricted immediately.

[...]

Prior to this technology, scammers faced three major risks when using fake photos. Each of these risks had the potential to put them out business, or in jail.

Risk #1: Someone recognizes the photo. While the odds of this are long-shot, it does happen.

Risk #2: Someone reverse image searches the photo with a service like TinEye or Google Image Search and finds that it’s been posted elsewhere. Reverse image search is one of the top anti-fraud measures recommended by consumer protection advocates.

Risk #3: If the crime is successful, law enforcement uses the fake photo to figure out the scammer’s identity after the fact. Perhaps the scammer used an old classmate’s photo. Perhaps their personal account follows the Instagram member they pilfered. And so on: people make mistakes.

The problem with AI-generated photos is that they carry none of these risks. No one will recognize a human who’s never existed before. Google Image Search will return 0 results, possibly instilling a false sense of security in the searcher. And AI-generated photos don’t give law enforcement much to work with.


ThisPersonDoesNotExist is an online platform which, when visited, produces AI-generated images of people’s faces (T0146.006: Open Access Platform, T0154.002: AI Media Platform, T0086.002: Develop AI-Generated Images (Deepfakes)). | +| [I00102 Ignore The Poway Synagogue Shooter’s Manifesto: Pay Attention To 8chan’s /pol/ Board](../../generated_pages/incidents/I00102.md) | On April 27, 2019, at around 11:30 a.m. local time, a young man with a semi-automatic rifle walked into the Chabad of Poway Synagogue in Poway, California. He opened fire, killing one worshipper and wounding three others. In the hours since the shooting, a manifesto, believed to be written by the shooter, began circulating online. Evidence has also surfaced that, like the Christchurch Mosque shooter, this killer began his rampage with a post on 8chan’s /pol/ board.&#x0D;

Although both of these attacks may seem different, since they targeted worshippers of different faiths, both shooters were united by the same fascist ideology. They were also both radicalized in the same place: 8chan’s /pol/ board.

This has been corroborated by posts on the board itself, where “anons,” as the posters call themselves, recirculated the shooter’s since-deleted post. In it, the alleged shooter claims to have been “lurking” on the site for a year and a half. He includes a link to a livestream of his rampage — which thankfully does not appear to have worked — and he also includes a pastebin link to his manifesto.

The very first response to his announcement was another anon cheering him on and telling him to “get the high score,” AKA, kill a huge number of people.


Before carrying out a mass shooting, the shooter posted a thread to 8chan’s /pol/ board. The post directed users to a variety of different platforms (T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0122: Direct Users to Alternative Platforms); a Facebook account on which the shooter attempted to livestream the shooting (T0146: Account Asset, T0151.001: Social Media Platform); and a manifesto they had written, hosted on Pastebin (T0146.006: Open Access Platform, T0152.005: Paste Platform, T0115: Post Content) and uploaded to the file sharing platform Mediafire (T0152.010: File Hosting Platform, T0085.004: Develop Document).&#x0D;

The report looks deeper into 8chan’s /pol/ board:

8chan is a large website, which includes a number of different discussion groups about everything from anime to left-wing politics. /pol/ is one particularly active board on the website, and it is best described as a gathering place for extremely online neo-Nazis.

[...]

I’ve browsed /pol/ on an almost daily basis since the Christchurch shooting. It has not been difficult to find calls for violence. On Monday, March 25 of this year, I ran across evidence of anons translating the Christchurch shooter’s manifesto into other languages in an attempt to inspire more shootings across the globe.

This tactic can work, and today’s shooting is proof. The Poway Synagogue shooter directly cited the Christchurch shooter as his inspiration, saying he decided to carry out his attack roughly two weeks after that shooting. On /pol/, many anons refer to the Christchurch shooter, Brenton Tarrant, as “Saint Tarrant,” complete with medieval-inspired iconography.


Manifestos posted to 8chan are translated and reshared by other platform users (T0101: Create Localised Content, T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0084.004: Appropriate Content).

When I began looking through /pol/ right after the Poway Synagogue shooting, I came across several claims that the shootings had been a “false flag” aimed at making the message board look bad.

When Bellingcat tweeted out a warning about shitposting and the shooter’s manifesto, in the immediate wake of the attack, probable anons even commented on the Tweet in an attempt to deny that a channer had been behind the attack.

This is a recognizable pattern that occurs in the wake of any crimes committed by members of the board. While the initial response to the Christchurch shooter’s massacre thread was riotous glee, in the days after the shooting many anons began to claim the attack had been a false flag. This actually sparked significant division and debate between the members of /pol/. In the below image, a user mocks other anons for being unable to “believe something in your favor is real.” Another anon responds, “As the evidence comes out, its [sic] quite clear that this was a false flag.”

In his manifesto, the Poway Synagogue shooter even weighed in on this debate, accusing other anons who called the Christchurch and Tree of Life Synagogue shootings “false flags” to merely have been scared: “They can’t fathom that there are brave White men alive who have the willpower and courage it takes to say, ‘Fuck my life—I’m willing to sacrifice everything for the benefit of my race.’”


Platform users deny that their platform has been used by mass shooters to publish their manifestos (T0129.006: Deny Involvement). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0146.006: Open Access Platform + +**Summary**: Some online platforms allow users to take advantage of the platform’s features without creating an account. Examples include the Paste Platform Pastebin, and the Image Board Platforms 4chan and 8chan. + +**Tactic**: TA15 Establish Assets **Parent Technique:** T0146 Account Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00100 Why ThisPersonDoesNotExist (and its copycats) need to be restricted](../../generated_pages/incidents/I00100.md) | You might have heard about the recent viral sensation, ThisPersonDoesNotExist.com, a website, launched two weeks ago, that uses Nvidia’s publicly available artificial intelligence technology to draw an invented, photo-realistic human being with each refresh. The tech is impressive and artistically evocative. It’s also irresponsible and needs to be restricted immediately.

[...]

Prior to this technology, scammers faced three major risks when using fake photos. Each of these risks had the potential to put them out business, or in jail.

Risk #1: Someone recognizes the photo. While the odds of this are long-shot, it does happen.

Risk #2: Someone reverse image searches the photo with a service like TinEye or Google Image Search and finds that it’s been posted elsewhere. Reverse image search is one of the top anti-fraud measures recommended by consumer protection advocates.

Risk #3: If the crime is successful, law enforcement uses the fake photo to figure out the scammer’s identity after the fact. Perhaps the scammer used an old classmate’s photo. Perhaps their personal account follows the Instagram member they pilfered. And so on: people make mistakes.

The problem with AI-generated photos is that they carry none of these risks. No one will recognize a human who’s never existed before. Google Image Search will return 0 results, possibly instilling a false sense of security in the searcher. And AI-generated photos don’t give law enforcement much to work with.


ThisPersonDoesNotExist is an online platform which, when visited, produces AI-generated images of people’s faces (T0146.006: Open Access Platform, T0154.002: AI Media Platform, T0086.002: Develop AI-Generated Images (Deepfakes)). | +| [I00102 Ignore The Poway Synagogue Shooter’s Manifesto: Pay Attention To 8chan’s /pol/ Board](../../generated_pages/incidents/I00102.md) | On April 27, 2019, at around 11:30 a.m. local time, a young man with a semi-automatic rifle walked into the Chabad of Poway Synagogue in Poway, California. He opened fire, killing one worshipper and wounding three others. In the hours since the shooting, a manifesto, believed to be written by the shooter, began circulating online. Evidence has also surfaced that, like the Christchurch Mosque shooter, this killer began his rampage with a post on 8chan’s /pol/ board.&#x0D;

Although both of these attacks may seem different, since they targeted worshippers of different faiths, both shooters were united by the same fascist ideology. They were also both radicalized in the same place: 8chan’s /pol/ board.

This has been corroborated by posts on the board itself, where “anons,” as the posters call themselves, recirculated the shooter’s since-deleted post. In it, the alleged shooter claims to have been “lurking” on the site for a year and a half. He includes a link to a livestream of his rampage — which thankfully does not appear to have worked — and he also includes a pastebin link to his manifesto.

The very first response to his announcement was another anon cheering him on and telling him to “get the high score,” AKA, kill a huge number of people.


Before carrying out a mass shooting, the shooter posted a thread to 8chan’s /pol/ board. The post directed users to a variety of different platforms (T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0122: Direct Users to Alternative Platforms); a Facebook account on which the shooter attempted to livestream the shooting (T0146: Account Asset, T0151.001: Social Media Platform); and a manifesto they had written hosted on pastebin (T0146.006: Open Access Platform, T0152.005: Paste Platform, T0115: Post Content) and uploaded to the file sharing platform Mediafire (T0152.010: File Hosting Platform, T0085.004: Develop Document).

The report looks deeper into 8chan’s /pol/ board:

8chan is a large website, which includes a number of different discussion groups about everything from anime to left-wing politics. /pol/ is one particularly active board on the website, and it is best described as a gathering place for extremely online neo-Nazis.

[...]

I’ve browsed /pol/ on an almost daily basis since the Christchurch shooting. It has not been difficult to find calls for violence. On Monday, March 25 of this year, I ran across evidence of anons translating the Christchurch shooter’s manifesto into other languages in an attempt to inspire more shootings across the globe.

This tactic can work, and today’s shooting is proof. The Poway Synagogue shooter directly cited the Christchurch shooter as his inspiration, saying he decided to carry out his attack roughly two weeks after that shooting. On /pol/, many anons refer to the Christchurch shooter, Brenton Tarrant, as “Saint Tarrant,” complete with medieval-inspired iconography.


Manifestos posted to 8chan are translated and reshared by other platform users (T0101: Create Localised Content, T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0084.004: Appropriate Content).

When I began looking through /pol/ right after the Poway Synagogue shooting, I came across several claims that the shootings had been a “false flag” aimed at making the message board look bad.

When Bellingcat tweeted out a warning about shitposting and the shooter’s manifesto, in the immediate wake of the attack, probable anons even commented on the Tweet in an attempt to deny that a channer had been behind the attack.

This is a recognizable pattern that occurs in the wake of any crimes committed by members of the board. While the initial response to the Christchurch shooter’s massacre thread was riotous glee, in the days after the shooting many anons began to claim the attack had been a false flag. This actually sparked significant division and debate between the members of /pol/. In the below image, a user mocks other anons for being unable to “believe something in your favor is real.” Another anon responds, “As the evidence comes out, its [sic] quite clear that this was a false flag.”

In his manifesto, the Poway Synagogue shooter even weighed in on this debate, accusing other anons who called the Christchurch and Tree of Life Synagogue shootings “false flags” of merely being scared: “They can’t fathom that there are brave White men alive who have the willpower and courage it takes to say, ‘Fuck my life—I’m willing to sacrifice everything for the benefit of my race.’”


Platform users deny that their platform has been used by mass shooters to publish their manifestos (T0129.006: Deny Involvement). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0146.006: Open Access Platform + +**Summary**: Some online platforms allow users to take advantage of the platform’s features without creating an account. Examples include the Paste Platform Pastebin, and the Image Board Platforms 4chan and 8chan. + **Tactic**: TA15 Establish Assets @@ -21,4 +67,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0146.007.md b/generated_pages/techniques/T0146.007.md index 5f4786a..c793389 100644 --- a/generated_pages/techniques/T0146.007.md +++ b/generated_pages/techniques/T0146.007.md @@ -2,6 +2,50 @@ **Summary**: An Automated Account is an account which is displaying automated behaviour, such as republishing or liking other accounts’ content, or publishing their own content. +**Tactic**: TA15 Establish Assets **Parent Technique:** T0146 Account Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00104 Macron Campaign Hit With “Massive and Coordinated” Hacking Attack](../../generated_pages/incidents/I00104.md) | A massive trove of documents purporting to contain thousands of emails and other files from the [2017 presidential] campaign of Emmanuel Macron—the French centrist candidate squaring off against right-wing nationalist Marine Le Pen—was posted on the internet Friday afternoon. The Macron campaign says that at least some of the documents are fake. The document dump came just over a day before voting is set to begin in the final round of the election and mere hours before candidates are legally required to stop campaigning.

At about 2:35 p.m. ET, a post appeared on the 4chan online message board announcing the leak. The documents appear to include emails, internal memos, and screenshots of purported banking records.

“In this pastebin are links to torrents of emails between Macron, his team and other officials, politicians as well as original documents and photos,” the anonymous 4chan poster wrote. “This was passed on to me today so now I am giving it to you, the people. The leak is massvie [sic] and released in the hopes that the human search engine here will be able to start sifting through the contents and figure out exactly what we have here.”

The Macron campaign issued a statement Friday night saying it was the victim of a “massive and coordinated” hacking attack. That campaign said the leak included some fake documents that were intended “to sow doubt and misinformation.”


Actors posted a link on 4chan (T0151.012: Image Board Platform, T0146.006: Open Access Platform, T0115: Post Content, T0122: Direct Users to Alternative Platforms) to text content hosted on pastebin (T0152.005: Paste Platform, T0146.006: Open Access Platform, T0115: Post Content), which contained links to download stolen and fabricated documents. | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0146.007: Automated Account Asset + +**Summary**: An Automated Account is an account which is displaying automated behaviour, such as republishing or liking other accounts’ content, or publishing their own content. + +**Tactic**: TA15 Establish Assets **Parent Technique:** T0146 Account Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00104 Macron Campaign Hit With “Massive and Coordinated” Hacking Attack](../../generated_pages/incidents/I00104.md) | A massive trove of documents purporting to contain thousands of emails and other files from the [2017 presidential] campaign of Emmanuel Macron—the French centrist candidate squaring off against right-wing nationalist Marine Le Pen—was posted on the internet Friday afternoon. The Macron campaign says that at least some of the documents are fake. The document dump came just over a day before voting is set to begin in the final round of the election and mere hours before candidates are legally required to stop campaigning.

At about 2:35 p.m. ET, a post appeared on the 4chan online message board announcing the leak. The documents appear to include emails, internal memos, and screenshots of purported banking records.

“In this pastebin are links to torrents of emails between Macron, his team and other officials, politicians as well as original documents and photos,” the anonymous 4chan poster wrote. “This was passed on to me today so now I am giving it to you, the people. The leak is massvie [sic] and released in the hopes that the human search engine here will be able to start sifting through the contents and figure out exactly what we have here.”

The Macron campaign issued a statement Friday night saying it was the victim of a “massive and coordinated” hacking attack. That campaign said the leak included some fake documents that were intended “to sow doubt and misinformation.”


Actors posted a link on 4chan (T0151.012: Image Board Platform, T0146.006: Open Access Platform, T0115: Post Content, T0122: Direct Users to Alternative Platforms) to text content hosted on pastebin (T0152.005: Paste Platform, T0146.006: Open Access Platform, T0115: Post Content), which contained links to download stolen and fabricated documents. | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0146.007: Automated Account Asset + +**Summary**: An Automated Account is an account which is displaying automated behaviour, such as republishing or liking other accounts’ content, or publishing their own content. + **Tactic**: TA15 Establish Assets @@ -20,4 +64,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0146.md b/generated_pages/techniques/T0146.md index 0c8d694..318bdcb 100644 --- a/generated_pages/techniques/T0146.md +++ b/generated_pages/techniques/T0146.md @@ -2,6 +2,56 @@ **Summary**: An Account is a user-specific profile that allows access to the features and services of an online platform, typically requiring a username and password for authentication. +**Tactic**: TA15 Establish Assets + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00118 ‘War Thunder’ players are once again leaking sensitive military technology information on a video game forum](../../generated_pages/incidents/I00118.md) | In an effort to prove that the developers behind a popular multiplayer vehicle combat game had made a mistake, a player went ahead and published classified British military documents about one of the real-life tanks featured in the game.

This truly bizarre turn of events recently occurred in the public forum for War Thunder, a free-to-play multiplayer combat sim featuring modern land, air, and sea craft. Getting a small detail wrong on a piece of equipment might not be a big deal for the average gamer, but for the War Thunder crowd it sure as hell is. With 25,000 devoted players, the game very much bills itself as the military vehicle combat simulator.

A player, who identified himself as a British tank commander, claimed that the game’s developers at Gaijin Entertainment had inaccurately represented the Challenger 2 main battle tank used by the British military.

The self-described tank commander’s bio listed his location as Tidworth Camp in Wiltshire, England, according to the UK Defense Journal, which reported that the base is home to the Royal Tank Regiment, which fields Challenger 2 tanks.

The player, who went by the handle Pyrophoric, reportedly shared an image on the War Thunder forum of the tank’s specs that were pulled from the Challenger 2’s Army Equipment Support Publication, which is essentially a technical manual.

[...]

A moderator for the forum, whose handle is “Templar_”, explained that the developer had removed the material after they received confirmation from the Ministry of Defense that the document is still in fact classified.


A user of War Thunder’s forums posted confidential documents to win an argument (T0089.001: Obtain Authentic Documents, T0146: Account Asset, T0097.105: Military Personnel Persona, T0115: Post Content, T0143.001: Authentic Persona, T0151.009: Legacy Online Forum Platform). | +| [I00125 The Agency](../../generated_pages/incidents/I00125.md) | In 2014 threat actors attributed to Russia spread the false narrative that a local chemical plant had leaked toxic fumes. This report discusses aspects of the operation:

[The chemical plant leak] hoax was just one in a wave of similar attacks during the second half of last year. On Dec. 13, two months after a handful of Ebola cases in the United States touched off a minor media panic, many of the same Twitter accounts used to spread the Columbian Chemicals hoax began to post about an outbreak of Ebola in Atlanta. [...] Again, the attention to detail was remarkable, suggesting a tremendous amount of effort. A YouTube video showed a team of hazmat-suited medical workers transporting a victim from the airport. Beyoncé’s recent single “7/11” played in the background, an apparent attempt to establish the video’s contemporaneity. A truck in the parking lot sported the logo of the Hartsfield-Jackson Atlanta International Airport.

Accounts which previously presented as Louisiana locals were repurposed for use in a different campaign, this time presenting as locals to Atlanta, a place over 500 miles away from Louisiana and in a different timezone (T0146: Account Asset, T0097.101: Local Persona, T0143.002: Fabricated Persona, T0151.008: Microblogging Platform, T0150.004: Repurposed Asset).

A video was created which appeared to support the campaign’s narrative (T0087: Develop Video-Based Content), with great attention given to small details which made the video appear more legitimate. | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0146: Account Asset + +**Summary**: An Account is a user-specific profile that allows access to the features and services of an online platform, typically requiring a username and password for authentication. + +**Tactic**: TA15 Establish Assets + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00103 The racist AI deepfake that fooled and divided a community](../../generated_pages/incidents/I00103.md) | “I seriously don't understand why I have to constantly put up with these dumbasses here every day.”

So began what appeared to be a long tirade from the principal of Pikesville High School, punctuated with racist, antisemitic and offensive tropes. It sounded like it had been secretly recorded.

The speaker went on to bemoan “ungrateful black kids” and Jewish people in the community.

The clip, first posted in [January 2024], went viral nationally. But it really struck a nerve in the peaceful, leafy suburb of Pikesville, which has large black and Jewish populations, and in the nearby city of Baltimore, Maryland. Principal Eric Eiswert was put on paid administrative leave pending an investigation.

[...]

But what those sharing the clip didn’t realise at the time was that another bombshell was about to drop: the clip was an AI-generated fake.

[...]

[In April 2024], Baltimore Police Chief Robert McCullough confirmed they now had “conclusive evidence that the recording was not authentic”.

And they believed they knew who made the fake.

Police charged 31-year-old Dazhon Darien, the school’s athletics director, with several counts related to the fake video. Charges included theft, retaliating against a witness and stalking.

He was arrested at the airport, where police say he was planning to fly to Houston, Texas.

Police say that Mr Darien had been under investigation by Principal Eiswert over an alleged theft of $1,916 (£1,460) from the school. They also allege there had been “work performance challenges” and his contract was likely not to be renewed.

Their theory was that by creating the deepfake recording, he hoped to discredit the principal before he could be fired.

Investigators say they traced an email used to send the original video to a server connected to Mr Darien, and allege that he used Baltimore County Public Schools' computer network to access AI tools. He is due to stand trial in December 2024.


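Tracing a message back to an originating server, as the investigators describe, typically starts from the email’s Received headers, which each relay prepends in transit (and which can be forged below the first trusted hop, so they are corroborating evidence rather than proof). A minimal sketch using Python’s standard library email module; the .eml file name is hypothetical.

```python
# Illustrative sketch: list the relay chain recorded in an email's
# "Received" headers. Each hop prepends a header, so reading them in
# reverse walks from the originating server towards the recipient.
# "evidence_message.eml" is a hypothetical file name.
import re
from email import policy
from email.parser import BytesParser

with open("evidence_message.eml", "rb") as f:
    msg = BytesParser(policy=policy.default).parse(f)

received = [str(h) for h in (msg.get_all("Received") or [])]
for i, hop in enumerate(reversed(received), start=1):  # oldest hop first
    ips = re.findall(r"\d{1,3}(?:\.\d{1,3}){3}", hop)  # quick IPv4 overview
    print(f"hop {i}: {hop.split(';')[0].strip()} (IPs: {', '.join(ips) or 'none'})")
```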
By associating Mr Darien with the server used to email the original AI generated audio, investigators link Darien to the fabricated content (T0149.005: Server Asset, T0088.001: AI Generated Audio (Deepfakes)). They also assert that Darien used computers owned by the school to access platforms used to generate the audio (T0146: Account Asset, T0154.002: AI Media Platform). | +| [I00109 Coordinated Facebook Pages Designed to Fund a White Supremacist Agenda](../../generated_pages/incidents/I00109.md) | This report examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at the Suavelos website, and the content it links out to.

In going back to Suavelos’ main page, we also found: A link to a page on a web shop: alabastro.eu; A link to a page to donate money to the founders through Tipee and to the website through PayPal; [and] a link to a private forum that gathers 3.000 members: oppidum.suavelos.eu;

Suavelos linked out to an online store which it controlled (T0152.004: Website Asset, T0148.004: Payment Processing Capability), and to accounts on payment processing platforms PayPal and Tipee (T0146: Account Asset, T0148.003: Payment Processing Platform).

The Suavelos website also hosted a private forum (T0151.009: Legacy Online Forum Platform, T0155: Gated Asset), and linked out to a variety of assets it controlled on other online platforms: accounts on Twitter (T0146: Account Asset, T0151.008: Microblogging Platform), YouTube (T0146: Account Asset, T0152.006: Video Platform), Instagram and VKontakte (T0146: Account Asset, T0151.001: Social Media Platform). | +| [I00110 How COVID-19 conspiracists and extremists use crowdfunding platforms to fund their activities](../../generated_pages/incidents/I00110.md) | The EU Disinfo Lab produced a report into disinformation published on crowdfunding platforms:

More worrisome is the direct monetisation of disinformation happening on crowdfunding platforms: on Kickstarter, we found a user openly raising money for a documentary project suggesting that COVID-19 is a conspiracy.

A Kickstarter user attempted to use the platform to fund production of a documentary (T0017: Conduct Fundraising, T0087: Develop Video-Based Content, T0146: Account Asset, T0148.006: Crowdfunding Platform).

On Patreon, we found several instances of direct monetisation of COVID-19 disinformation, including posts promoting a device allegedly protecting against COVID-19 and 5G, as well as posts related to the “Plandemic” conspiracy video, which gained attention on YouTube before being removed by the platform.

We also found an account called “Stranger than fiction” entirely dedicated to disinformation, which openly states that their content was “Banned by screwtube and fakebook, our videos have been viewed over a billion times.”


The “Stranger than fiction” user presented itself as an alternative news source which had been banned from other platforms (T0146: Account Asset, T0097.202: News Outlet Persona, T0121.001: Bypass Content Blocking, T0152.012: Subscription Service Platform).

On the US-based crowdfunding platform IndieGogo, EU DisinfoLab found a successful crowdfunding campaign of €133.903 for a book called Revolution Q. This book, now also available on Amazon, claims to be “Written for both newcomers and long-time QAnon fans alike, this book is a treasure-trove of information designed to help everyone weather The Storm.”

An IndieGogo account was used to gather funds to produce a book on QAnon (T0017: Conduct Fundraising, T0085.005: Develop Book, T0146: Account Asset, T0148.006: Crowdfunding Platform), with the book later sold on Amazon marketplace (T0148.007: eCommerce Platform). | +| [I00111 Patreon Is Bankrolling Climate Change Deniers While We All Burn](../../generated_pages/incidents/I00111.md) | In this article VICE News discusses a report produced by Advance Democracy on people who use Patreon to spread the false claim that an impending ice age will reverse the harms of the ongoing climate crisis:

“The spread of climate misinformation is prolific on social media, as well as on sites like Patreon, where users are actually financially compensated through the platform for spreading falsehoods,” Daniel Jones, president of Advance Democracy, told VICE News.

“Companies hosting and promoting climate misinformation have a responsibility to take action to reduce dangerous misinformation, as falsehoods about climate science are every bit as dangerous as lies about vaccinations and disinformation about our elections.”

Patreon did not respond to VICE News’ request for comment on the report’s findings.

One of the biggest accounts spreading climate conspiracies is ADAPT 2030, which is run by David DuByne, who has 1,100 followers on Patreon. He is currently making over $3,500 every month from his subscribers.

[The science DuByne relies on does not support his hypothesis. However,] this has not prevented DuByne and many others from preying on people’s fears about climate change to spread conspiracies about an impending ice age, which they say will miraculously fix all of earth’s climate problems.

DuByne offers seven different membership levels for supporters, beginning at just $1 per month.

The most expensive costs $100 a month, and gives patrons “a private 20-minute call with David DuByne once per month, to discuss your particular preparedness issues or concerns.” So far just two people are paying this amount.

The researchers also found at least eight other accounts on Patreon that have spread climate change conspiracy theories as part of wider conspiracy sharing, including baseless claims about COVID-19 and the legitimacy of Joe Biden’s presidency. Some of these accounts are earning over $600 per month.


David DuByne created an account on Patreon, which he uses to post text, videos, and podcasts for his subscribers to discuss (T0085: Develop Text-Based Content, T0087: Develop Video-Based Content, T0088: Develop Audio-Based Content, T0146: Account Asset, T0115: Post Content, T0152.012: Subscription Service Platform, T0151.014: Comments Section, T0155.006: Subscription Access Asset). | +| [I00118 ‘War Thunder’ players are once again leaking sensitive military technology information on a video game forum](../../generated_pages/incidents/I00118.md) | In an effort to prove that the developers behind a popular multiplayer vehicle combat game had made a mistake, a player went ahead and published classified British military documents about one of the real-life tanks featured in the game.

This truly bizarre turn of events recently occurred in the public forum for War Thunder, a free-to-play multiplayer combat sim featuring modern land, air, and sea craft. Getting a small detail wrong on a piece of equipment might not be a big deal for the average gamer, but for the War Thunder crowd it sure as hell is. With 25,000 devoted players, the game very much bills itself as the military vehicle combat simulator.

A player, who identified himself as a British tank commander, claimed that the game’s developers at Gaijin Entertainment had inaccurately represented the Challenger 2 main battle tank used by the British military.

The self-described tank commander’s bio listed his location as Tidworth Camp in Wiltshire, England, according to the UK Defense Journal, which reported that the base is home to the Royal Tank Regiment, which fields Challenger 2 tanks.

The player, who went by the handle Pyrophoric, reportedly shared an image on the War Thunder forum of the tank’s specs that were pulled from the Challenger 2’s Army Equipment Support Publication, which is essentially a technical manual.

[...]

A moderator for the forum, whose handle is “Templar_”, explained that the developer had removed the material after they received confirmation from the Ministry of Defense that the document is still in fact classified.


A user of War Thunder’s forums posted confidential documents to win an argument (T0089.001: Obtain Authentic Documents, T0146: Account Asset, T0097.105: Military Personnel Persona, T0115: Post Content, T0143.001: Authentic Persona, T0151.009: Legacy Online Forum Platform). | +| [I00125 The Agency](../../generated_pages/incidents/I00125.md) | In 2014 threat actors attributed to Russia spread the false narrative that a local chemical plant had leaked toxic fumes. This report discusses aspects of the operation:

[The chemical plant leak] hoax was just one in a wave of similar attacks during the second half of last year. On Dec. 13, two months after a handful of Ebola cases in the United States touched off a minor media panic, many of the same Twitter accounts used to spread the Columbian Chemicals hoax began to post about an outbreak of Ebola in Atlanta. [...] Again, the attention to detail was remarkable, suggesting a tremendous amount of effort. A YouTube video showed a team of hazmat-suited medical workers transporting a victim from the airport. Beyoncé’s recent single “7/11” played in the background, an apparent attempt to establish the video’s contemporaneity. A truck in the parking lot sported the logo of the Hartsfield-Jackson Atlanta International Airport.

Accounts which previously presented as Louisiana locals were repurposed for use in a different campaign, this time presenting as locals to Atlanta, a place over 500 miles away from Louisiana and in a different timezone (T0146: Account Asset, T0097.101: Local Persona, T0143.002: Fabricated Persona, T0151.008: Microblogging Platform, T0150.004: Repurposed Asset).

A video was created which appeared to support the campaign’s narrative (T0087: Develop Video-Based Content), with great attention given to small details which made the video appear more legitimate. | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0146: Account Asset + +**Summary**: An Account is a user-specific profile that allows access to the features and services of an online platform, typically requiring a username and password for authentication. + **Tactic**: TA15 Establish Assets @@ -25,4 +75,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0147.001.md b/generated_pages/techniques/T0147.001.md index 6b50ce9..5d47bca 100644 --- a/generated_pages/techniques/T0147.001.md +++ b/generated_pages/techniques/T0147.001.md @@ -2,6 +2,52 @@ **Summary**: A Game is Software which has been designed for interactive entertainment, where users take on challenges set by the game’s designers.

While Online Game Platforms allow people to play with each other, Games are designed for single player experiences. +**Tactic**: TA15 Establish Assets **Parent Technique:** T0147 Software Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00098 Gaming The System: How Extremists Exploit Gaming Sites And What Can Be Done To Counter Them](../../generated_pages/incidents/I00098.md) | This report looks at how extremists exploit games and gaming adjacent platforms:

Ethnic Cleansing, a first-person shooter game created by the U.S. white supremacist organization National Alliance in 2002, was advertised as follows: “[T]he Race War has already begun. Your character, Will, runs through a ghetto blasting away at various blacks and spics to attempt to gain entrance to the subway system...where the jews have hidden to avoid the carnage. Then you get to blow away jews as they scream ‘Oy Vey!’ on your way to the command center.”

While games like Ethnic Cleansing—and others of a similar genocidal variety, including titles like Shoot the Blacks, Hatred, and Muslim Massacre—generate some press attention and controversy, they tend to be ignored by the large majority of gamers, who consider them “crude” and “badly designed.” Nevertheless, these games are still available in some corners of the Internet and have been downloaded by hundreds of thousands of users.

A second strategy involves the modification (“modding”) of existing video games to twist the narrative into an extremist message or fantasy. One example is the Islamic State terrorist group’s modding of the first-person shooter video game, ARMA III, to make Islamic fighters the heroic protagonists rather than the villains. With their powerful immersive quality, these video games have, in some instances, been effective at inspiring and allegedly training extremists to perpetrate real-world attacks.

Third, extremist actors have used in-game chat functions to open lines of communication with ideological sympathizers and potential recruits. Although the lack of in-game communication data has made it impossible for academic researchers to track the presence of extremists in a systematic way, there is enough anecdotal evidence—including evidence obtained from police investigation files—to infer that in-game chatrooms can and do function as “radicalization funnels” in at least some cases.


White supremacists created a game aligned with their ideology (T0147.001: Game Asset). The Islamic State terrorist group created a mod of the game ARMA 3 (T0151.015: Online Game Platform, T0147.002: Game Mod Asset). Extremists also use communication features available in online games to recruit new members. | +| [I00105 Gaming the System: The Use of Gaming-Adjacent Communication, Game and Mod Platforms by Extremist Actors](../../generated_pages/incidents/I00105.md) | In this report, researchers look at online platforms commonly used by people who play videogames, looking at how these platforms can contribute to radicalisation of gamers:

Indie DB [is a platform that serves] to present indie games, which are titles from independent, small developer teams, which can be discussed and downloaded [Indie DB].

[...]

[On Indie DB we] found antisemitic, Islamist, sexist and other discriminatory content during the exploration. Both games and comments were located that made positive references to National Socialism. Radicalised users seem to use the opportunities to network, create groups, and communicate in forums. In addition, a number of member profiles with graphic propaganda content could be located. We found several games with propagandistic content advertised on the platform, which caused apparently radicalised users to form a fan community around those games. For example, there are titles such as the antisemitic Fursan Al-Aqsa: The Knights of the Al-Aqsa Mosque, which has won the Best Hardcore Game award at the Game Connection America 2024 Game Development Awards and which has received antisemitic praise on its review page. In the game, players need to target members of the Israeli Defence Forces, and attacks by Palestinian terrorist groups such as Hamas or Lions’ Den can be re-enacted. These include bomb attacks, beheadings and the re-enactment of the terrorist attack in Israel on 7 October 2023. We, therefore, deem Indie DB to be relevant for further analyses of extremist activities in digital gaming spaces.


Indie DB is an online software delivery platform on which users can create groups, and participate in discussion forums (T0152.009: Software Delivery Platform, T0151.002: Online Community Group, T0151.009: Legacy Online Forum Platform). The platform hosted games which allowed players to reenact terrorist attacks (T0147.001: Game Asset). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0147.001: Game Asset + +**Summary**: A Game is Software which has been designed for interactive entertainment, where users take on challenges set by the game’s designers.

While Online Game Platforms allow people to play with each other, Games are designed for single player experiences. + +**Tactic**: TA15 Establish Assets **Parent Technique:** T0147 Software Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00098 Gaming The System: How Extremists Exploit Gaming Sites And What Can Be Done To Counter Them](../../generated_pages/incidents/I00098.md) | This report looks at how extremists exploit games and gaming adjacent platforms:

Ethnic Cleansing, a first-person shooter game created by the U.S. white supremacist organization National Alliance in 2002, was advertised as follows: “[T]he Race War has already begun. Your character, Will, runs through a ghetto blasting away at various blacks and spics to attempt to gain entrance to the subway system...where the jews have hidden to avoid the carnage. Then you get to blow away jews as they scream ‘Oy Vey!’ on your way to the command center.”

While games like Ethnic Cleansing—and others of a similar genocidal variety, including titles like Shoot the Blacks, Hatred, and Muslim Massacre—generate some press attention and controversy, they tend to be ignored by the large majority of gamers, who consider them “crude” and “badly designed.” Nevertheless, these games are still available in some corners of the Internet and have been downloaded by hundreds of thousands of users.

A second strategy involves the modification (“modding”) of existing video games to twist the narrative into an extremist message or fantasy. One example is the Islamic State terrorist group’s modding of the first-person shooter video game, ARMA III, to make Islamic fighters the heroic protagonists rather than the villains. With their powerful immersive quality, these video games have, in some instances, been effective at inspiring and allegedly training extremists to perpetrate real-world attacks.

Third, extremist actors have used in-game chat functions to open lines of communication with ideological sympathizers and potential recruits. Although the lack of in-game communication data has made it impossible for academic researchers to track the presence of extremists in a systematic way, there is enough anecdotal evidence—including evidence obtained from police investigation files—to infer that in-game chatrooms can and do function as “radicalization funnels” in at least some cases.


White supremacists created a game aligned with their ideology (T0147.001: Game Asset). The Islamic State terrorist group created a mod of the game ARMA 3 (T0151.015: Online Game Platform, T0147.002: Game Mod Asset). Extremists also use communication features available in online games to recruit new members. | +| [I00105 Gaming the System: The Use of Gaming-Adjacent Communication, Game and Mod Platforms by Extremist Actors](../../generated_pages/incidents/I00105.md) | In this report, researchers look at online platforms commonly used by people who play videogames, looking at how these platforms can contribute to radicalisation of gamers:

Indie DB [is a platform that serves] to present indie games, which are titles from independent, small developer teams, which can be discussed and downloaded [Indie DB].

[...]

[On Indie DB we] found antisemitic, Islamist, sexist and other discriminatory content during the exploration. Both games and comments were located that made positive references to National Socialism. Radicalised users seem to use the opportunities to network, create groups, and communicate in forums. In addition, a number of member profiles with graphic propaganda content could be located. We found several games with propagandistic content advertised on the platform, which caused apparently radicalised users to form a fan community around those games. For example, there are titles such as the antisemitic Fursan Al-Aqsa: The Knights of the Al-Aqsa Mosque, which has won the Best Hardcore Game award at the Game Connection America 2024 Game Development Awards and which has received antisemitic praise on its review page. In the game, players need to target members of the Israeli Defence Forces, and attacks by Palestinian terrorist groups such as Hamas or Lions’ Den can be re-enacted. These include bomb attacks, beheadings and the re-enactment of the terrorist attack in Israel on 7 October 2023. We, therefore, deem Indie DB to be relevant for further analyses of extremist activities in digital gaming spaces.


Indie DB is an online software delivery platform on which users can create groups, and participate in discussion forums (T0152.009: Software Delivery Platform, T0151.002: Online Community Group, T0151.009: Legacy Online Forum Platform). The platform hosted games which allowed players to reenact terrorist attacks (T0147.001: Game Asset). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0147.001: Game Asset + +**Summary**: A Game is Software which has been designed for interactive entertainment, where users take on challenges set by the game’s designers.

While Online Game Platforms allow people to play with each other, Games are designed for single player experiences. + **Tactic**: TA15 Establish Assets @@ -21,4 +67,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0147.002.md b/generated_pages/techniques/T0147.002.md index aded486..3ed33b3 100644 --- a/generated_pages/techniques/T0147.002.md +++ b/generated_pages/techniques/T0147.002.md @@ -2,6 +2,52 @@ **Summary**: A Game Mod is a modification which can be applied to a Game or Multiplayer Online Game to add new content or functionality to the game.

Users can Modify Games to introduce new content to the game. Modified Games can be distributed on Software Delivery Platforms such as Steam or can be distributed within the Game or Multiplayer Online Game. +**Tactic**: TA15 Establish Assets **Parent Technique:** T0147 Software Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00098 Gaming The System: How Extremists Exploit Gaming Sites And What Can Be Done To Counter Them](../../generated_pages/incidents/I00098.md) | This report looks at how extremists exploit games and gaming adjacent platforms:

Ethnic Cleansing, a first-person shooter game created by the U.S. white supremacist organization National Alliance in 2002, was advertised as follows: “[T]he Race War has already begun. Your character, Will, runs through a ghetto blasting away at various blacks and spics to attempt to gain entrance to the subway system...where the jews have hidden to avoid the carnage. Then you get to blow away jews as they scream ‘Oy Vey!’ on your way to the command center.”

While games like Ethnic Cleansing—and others of a similar genocidal variety, including titles like Shoot the Blacks, Hatred, and Muslim Massacre—generate some press attention and controversy, they tend to be ignored by the large majority of gamers, who consider them “crude” and “badly designed.” Nevertheless, these games are still available in some corners of the Internet and have been downloaded by hundreds of thousands of users.

A second strategy involves the modification (“modding”) of existing video games to twist the narrative into an extremist message or fantasy. One example is the Islamic State terrorist group’s modding of the first-person shooter video game, ARMA III, to make Islamic fighters the heroic protagonists rather than the villains. With their powerful immersive quality, these video games have, in some instances, been effective at inspiring and allegedly training extremists to perpetrate real-world attacks.

Third, extremist actors have used in-game chat functions to open lines of communication with ideological sympathizers and potential recruits. Although the lack of in-game communication data has made it impossible for academic researchers to track the presence of extremists in a systematic way, there is enough anecdotal evidence—including evidence obtained from police investigation files—to infer that in-game chatrooms can and do function as “radicalization funnels” in at least some cases.


White supremacists created a game aligned with their ideology (T0147.001: Game Asset). The Islamic State terrorist group created a mod of the game ARMA 3 (T0151.015: Online Game Platform, T0147.002: Game Mod Asset). Extremists also use communication features available in online games to recruit new members. | +| [I00105 Gaming the System: The Use of Gaming-Adjacent Communication, Game and Mod Platforms by Extremist Actors](../../generated_pages/incidents/I00105.md) | In this report, researchers look at online platforms commonly used by people who play videogames, looking at how these platforms can contribute to radicalisation of gamers:

Gamebanana and Mod DB are so-called modding platforms that allow users to post their modifications of existing (popular) games. In the process of modding, highly radicalised content can be inserted into games that did not originally contain it. All of these platforms also have communication functions and customisable profiles.

[...]

During the explorations, several modifications with hateful themes were located, including right-wing extremist, racist, antisemitic and Islamist content. This includes mods that make it possible to play as terrorists or National Socialists. So-called “skins” (textures that change the appearance of models in the game) for characters from first-person shooters are particularly popular and contain references to National Socialism or Islamist terrorist organisations. Although some of this content could be justified with reference to historical accuracy and realism, the user profiles of the creators and commentators often reveal political motivations. Names with neo-Nazi codes or the use of avatars showing members of the Wehrmacht or the Waffen SS, for example, indicate a certain degree of positive appreciation or fascination with right-wing ideology, as do affirmations in the comment columns.

Mod DB in particular has attracted public attention in the past. For example, a mod for the game Half-Life 2 made it possible to play a school shooting with the weapons used during the attacks at Columbine High School (1999) and Virginia Polytechnic Institute and State University (2007). Antisemitic memes and jokes are shared in several groups on the platform. It seems as if users partially connect with each other because of shared political views. There were also indications that Islamist and right-wing extremist users network on the basis of shared views on women, Jews or homosexuals. In addition to relevant usernames and avatars, we found profiles featuring picture galleries, backgrounds and banners dedicated to the SS. Extremist propaganda and radicalisation processes on modding platforms have not been explored yet, but our exploration suggests these digital spaces to be highly relevant for our field.


Mod DB is a platform which allows users to upload mods for games, which other users can download (T0152.009: Software Delivery Platform, T0147.002: Game Mod Asset). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0147.002: Game Mod Asset + +**Summary**: A Game Mod is a modification which can be applied to a Game or Multiplayer Online Game to add new content or functionality to the game.

Users can Modify Games to introduce new content to the game. Modified Games can be distributed on Software Delivery Platforms such as Steam or can be distributed within the Game or Multiplayer Online Game. + +**Tactic**: TA15 Establish Assets **Parent Technique:** T0147 Software Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00098 Gaming The System: How Extremists Exploit Gaming Sites And What Can Be Done To Counter Them](../../generated_pages/incidents/I00098.md) | This report looks at how extremists exploit games and gaming adjacent platforms:

Ethnic Cleansing, a first-person shooter game created by the U.S. white supremacist organization National Alliance in 2002, was advertised as follows: “[T]he Race War has already begun. Your character, Will, runs through a ghetto blasting away at various blacks and spics to attempt to gain entrance to the subway system...where the jews have hidden to avoid the carnage. Then you get to blow away jews as they scream ‘Oy Vey!’ on your way to the command center.”

While games like Ethnic Cleansing—and others of a similar genocidal variety, including titles like Shoot the Blacks, Hatred, and Muslim Massacre—generate some press attention and controversy, they tend to be ignored by the large majority of gamers, who consider them “crude” and “badly designed.” Nevertheless, these games are still available in some corners of the Internet and have been downloaded by hundreds of thousands of users.

A second strategy involves the modification (“modding”) of existing video games to twist the narrative into an extremist message or fantasy. One example is the Islamic State terrorist group’s modding of the first-person shooter video game, ARMA III, to make Islamic fighters the heroic protagonists rather than the villains. With their powerful immersive quality, these video games have, in some instances, been effective at inspiring and allegedly training extremists to perpetrate real-world attacks.

Third, extremist actors have used in-game chat functions to open lines of communication with ideological sympathizers and potential recruits. Although the lack of in-game communication data has made it impossible for academic researchers to track the presence of extremists in a systematic way, there is enough anecdotal evidence—including evidence obtained from police investigation files—to infer that in-game chatrooms can and do function as “radicalization funnels” in at least some cases.


White supremacists created a game aligned with their ideology (T0147.001: Game Asset). The Islamic State terrorist group created a mod of the game ARMA 3 (T0151.015: Online Game Platform, T0147.002: Game Mod Asset). Extremists also use communication features available in online games to recruit new members. | +| [I00105 Gaming the System: The Use of Gaming-Adjacent Communication, Game and Mod Platforms by Extremist Actors](../../generated_pages/incidents/I00105.md) | In this report, researchers look at online platforms commonly used by people who play videogames, looking at how these platforms can contribute to radicalisation of gamers:

Gamebanana and Mod DB are so-called modding platforms that allow users to post their modifications of existing (popular) games. In the process of modding, highly radicalised content can be inserted into games that did not originally contain it. All of these platforms also have communication functions and customisable profiles.

[...]

During the explorations, several modifications with hateful themes were located, including right-wing extremist, racist, antisemitic and Islamist content. This includes mods that make it possible to play as terrorists or National Socialists. So-called “skins” (textures that change the appearance of models in the game) for characters from first-person shooters are particularly popular and contain references to National Socialism or Islamist terrorist organisations. Although some of this content could be justified with reference to historical accuracy and realism, the user profiles of the creators and commentators often reveal political motivations. Names with neo-Nazi codes or the use of avatars showing members of the Wehrmacht or the Waffen SS, for example, indicate a certain degree of positive appreciation or fascination with right-wing ideology, as do affirmations in the comment columns.

Mod DB in particular has attracted public attention in the past. For example, a mod for the game Half-Life 2 made it possible to play a school shooting with the weapons used during the attacks at Columbine High School (1999) and Virginia Polytechnic Institute and State University (2007). Antisemitic memes and jokes are shared in several groups on the platform. It seems as if users partially connect with each other because of shared political views. There were also indications that Islamist and right-wing extremist users network on the basis of shared views on women, Jews or homosexuals. In addition to relevant usernames and avatars, we found profiles featuring picture galleries, backgrounds and banners dedicated to the SS. Extremist propaganda and radicalisation processes on modding platforms have not been explored yet, but our exploration suggests these digital spaces to be highly relevant for our field.


Mod DB is a platform which allows users to upload mods for games, which other users can download (T0152.009: Software Delivery Platform, T0147.002: Game Mod Asset). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0147.002: Game Mod Asset + +**Summary**: A Game Mod is a modification which can be applied to a Game or Multiplayer Online Game to add new content or functionality to the game.

Users can Modify Games to introduce new content to the game. Modified Games can be distributed on Software Delivery Platforms such as Steam or can be distributed within the Game or Multiplayer Online Game. + **Tactic**: TA15 Establish Assets @@ -21,4 +67,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0147.003.md b/generated_pages/techniques/T0147.003.md index 1de0c2f..91964fe 100644 --- a/generated_pages/techniques/T0147.003.md +++ b/generated_pages/techniques/T0147.003.md @@ -2,6 +2,49 @@ **Summary**: Malware is Software which has been designed to cause harm or facilitate malicious behaviour on electronic devices.

DISARM recommends using the [MITRE ATT&CK Framework](https://attack.mitre.org/) to document malware types and their usage. +**Tactic**: TA15 Establish Assets **Parent Technique:** T0147 Software Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0147.003: Malware Asset + +**Summary**: Malware is Software which has been designed to cause harm or facilitate malicious behaviour on electronic devices.

DISARM recommends using the [MITRE ATT&CK Framework](https://attack.mitre.org/) to document malware types and their usage. + +**Tactic**: TA15 Establish Assets **Parent Technique:** T0147 Software Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00126 Charming Kitten Updates POWERSTAR with an InterPlanetary Twist](../../generated_pages/incidents/I00126.md) | The target of the recently observed [highly targeted spearphishing attack by “Charming Kitten”, a hacker group attributed to Iran] had published an article related to Iran. The publicity appears to have garnered the attention of Charming Kitten, who subsequently created an email address to impersonate a reporter of an Israeli media organization in order to send the target an email. Prior to sending malware to the target, the attacker simply asked if the target would be open to reviewing a document they had written related to US foreign policy. The target agreed to do so, since this was not an unusual request; they are frequently asked by journalists to review opinion pieces relating to their field of work.

In an effort to further gain the target’s confidence, Charming Kitten continued the interaction with another benign email containing a list of questions, to which the target then responded with answers. After multiple days of benign and seemingly legitimate interaction, Charming Kitten finally sent a “draft report”; this was the first time anything opaquely malicious occurred. The “draft report” was, in fact, a password-protected RAR file containing a malicious LNK file. The password for the RAR file was provided in a subsequent email.


In this example, threat actors created an email address on a domain which impersonated an existing Israeli news organisation, and used it to pose as a reporter who worked there (T0097.102: Journalist Persona, T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0149.003: Lookalike Domain, T0149.002: Email Domain Asset) in order to convince the target to download a document containing malware (T0085.004: Develop Document, T0147.003: Malware Asset). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0147.003: Malware Asset + +**Summary**: Malware is Software which has been designed to cause harm or facilitate malicious behaviour on electronic devices.

DISARM recommends using the [MITRE ATT&CK Framework](https://attack.mitre.org/) to document malware types and their usage. + **Tactic**: TA15 Establish Assets @@ -20,4 +63,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0147.004.md b/generated_pages/techniques/T0147.004.md index 6d634f9..5c9f6ce 100644 --- a/generated_pages/techniques/T0147.004.md +++ b/generated_pages/techniques/T0147.004.md @@ -2,6 +2,48 @@ **Summary**: A Mobile App is an application which has been designed to run on mobile operating systems, such as Android or iOS.

Mobile Apps can enable access to online platforms (e.g. Facebook’s mobile app) or can provide software which users can run offline on their device. +**Tactic**: TA15 Establish Assets **Parent Technique:** T0147 Software Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0147.004: Mobile App Asset + +**Summary**: A Mobile App is an application which has been designed to run on mobile operating systems, such as Android or iOS.

Mobile Apps can enable access to online platforms (e.g. Facebook’s mobile app) or can provide software which users can run offline on their device. + +**Tactic**: TA15 Establish Assets **Parent Technique:** T0147 Software Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0147.004: Mobile App Asset + +**Summary**: A Mobile App is an application which has been designed to run on mobile operating systems, such as Android or iOS.

Mobile Apps can enable access to online platforms (e.g. Facebook’s mobile app) or can provide software which users can run offline on their device. + **Tactic**: TA15 Establish Assets @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0147.md b/generated_pages/techniques/T0147.md index d1d6983..61840e1 100644 --- a/generated_pages/techniques/T0147.md +++ b/generated_pages/techniques/T0147.md @@ -2,6 +2,48 @@ **Summary**: Software is a program developed to run on computers or devices that helps users achieve specific goals, such as improving productivity, automating tasks, or having fun. +**Tactic**: TA15 Establish Assets + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0147: Software Asset + +**Summary**: Software is a program developed to run on computers or devices that helps users achieve specific goals, such as improving productivity, automating tasks, or having fun. + +**Tactic**: TA15 Establish Assets + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0147: Software Asset + +**Summary**: Software is a program developed to run on computers or devices that helps users achieve specific goals, such as improving productivity, automating tasks, or having fun. + **Tactic**: TA15 Establish Assets @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0148.001.md b/generated_pages/techniques/T0148.001.md index 8037592..7f1a9ff 100644 --- a/generated_pages/techniques/T0148.001.md +++ b/generated_pages/techniques/T0148.001.md @@ -2,6 +2,50 @@ **Summary**: Online Banking Platforms are spaces provided by banks for their customers to manage their Bank Account online.

diff --git a/generated_pages/techniques/T0148.001.md b/generated_pages/techniques/T0148.001.md
index 8037592..7f1a9ff 100644
--- a/generated_pages/techniques/T0148.001.md
+++ b/generated_pages/techniques/T0148.001.md
@@ -2,6 +2,28 @@

**Summary**: Online Banking Platforms are spaces provided by banks for their customers to manage their Bank Account online.<br><br>The Online Banking Platforms available differ by country. In the United Kingdom, examples of banking institutions which provide Online Banking Platforms include Lloyds, Barclays, and Monzo. In the United States, examples include Citibank, Chase, and Capital One.

+**Tactic**: TA15 Establish Assets **Parent Technique:** T0148 Financial Instrument
+
+
+| Associated Technique | Description |
+| --------- | ------------------------- |
+
+
+
+| Incident | Descriptions given for this incident |
+| -------- | -------------------- |
+| [I00113 Inside the Shadowy World of Disinformation for Hire in Kenya](../../generated_pages/incidents/I00113.md) | Researchers at Mozilla examined influence operations targeting Kenyan citizens on Twitter in 2021, providing “a grim window into the booming and shadowy industry of Twitter influencers for political hire here in Kenya”, and giving insight into how these operations were run:<br><br><i>In our interviews with one of the influencers, they informed us of the agile tactics they use to organize and avoid detection. For example, when it’s time to carry out the campaign the influencers would be added to a Whatsapp group. Here, they received direction about what to post, the hashtags to use, which tweets to engage with and who to target. Synchronizing the tweets was also incredibly important for them. It’s what enables them to achieve their goal of trending on Twitter and gain amplification.</i><br><br>[...]<br><br><i>They revealed to us that those participating in the exercise are paid roughly between $10 and $15 to participate in three campaigns per day. Each campaign execution involves tweeting about the hashtags of the day until it appears on the trending section of Twitter. Additionally, some individuals have managed to reach retainer level and get paid about $250 per month. Their job is to make sure the campaigns are executed on a day-by-day basis with different hashtags.</i><br><br>An M-PESA account (T0148.002: Bank Account Asset, T0148.001: Online Banking Platform) was used to pay campaign participants.<br><br>Participants were organised in WhatsApp groups (T0129.005: Coordinate on Encrypted/Closed Networks, T0151.007: Chat Broadcast Group, T0151.004: Chat Platform), in which they planned how to get campaign content trending on Twitter (T0121: Manipulate Platform Algorithm, T0151.008: Microblogging Platform). |
+
+
+
+| Counters | Response types |
+| -------- | -------------- |
+
+
+# Technique T0148.001: Online Banking Platform
+
+**Summary**: Online Banking Platforms are spaces provided by banks for their customers to manage their Bank Account online.<br><br>The Online Banking Platforms available differ by country. In the United Kingdom, examples of banking institutions which provide Online Banking Platforms include Lloyds, Barclays, and Monzo. In the United States, examples include Citibank, Chase, and Capital One.
+
**Tactic**: TA15 Establish Assets
@@ -20,4 +42,3 @@
| -------- | -------------- |

-DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW
\ No newline at end of file
diff --git a/generated_pages/techniques/T0148.002.md b/generated_pages/techniques/T0148.002.md
index 5676614..c99a3af 100644
--- a/generated_pages/techniques/T0148.002.md
+++ b/generated_pages/techniques/T0148.002.md
@@ -2,6 +2,28 @@

**Summary**: A Bank Account is a financial account that allows individuals or organisations to store, manage, and access their money, typically for saving, spending, or investment purposes.

+**Tactic**: TA15 Establish Assets **Parent Technique:** T0148 Financial Instrument
+
+
+| Associated Technique | Description |
+| --------- | ------------------------- |
+
+
+
+| Incident | Descriptions given for this incident |
+| -------- | -------------------- |
+| [I00113 Inside the Shadowy World of Disinformation for Hire in Kenya](../../generated_pages/incidents/I00113.md) | Researchers at Mozilla examined influence operations targeting Kenyan citizens on Twitter in 2021, providing “a grim window into the booming and shadowy industry of Twitter influencers for political hire here in Kenya”, and giving insight into how these operations were run:<br><br><i>In our interviews with one of the influencers, they informed us of the agile tactics they use to organize and avoid detection. For example, when it’s time to carry out the campaign the influencers would be added to a Whatsapp group. Here, they received direction about what to post, the hashtags to use, which tweets to engage with and who to target. Synchronizing the tweets was also incredibly important for them. It’s what enables them to achieve their goal of trending on Twitter and gain amplification.</i><br><br>[...]<br><br><i>They revealed to us that those participating in the exercise are paid roughly between $10 and $15 to participate in three campaigns per day. Each campaign execution involves tweeting about the hashtags of the day until it appears on the trending section of Twitter. Additionally, some individuals have managed to reach retainer level and get paid about $250 per month. Their job is to make sure the campaigns are executed on a day-by-day basis with different hashtags.</i><br><br>An M-PESA account (T0148.002: Bank Account Asset, T0148.001: Online Banking Platform) was used to pay campaign participants.<br><br>Participants were organised in WhatsApp groups (T0129.005: Coordinate on Encrypted/Closed Networks, T0151.007: Chat Broadcast Group, T0151.004: Chat Platform), in which they planned how to get campaign content trending on Twitter (T0121: Manipulate Platform Algorithm, T0151.008: Microblogging Platform). |
+
+
+
+| Counters | Response types |
+| -------- | -------------- |
+
+
+# Technique T0148.002: Bank Account Asset
+
+**Summary**: A Bank Account is a financial account that allows individuals or organisations to store, manage, and access their money, typically for saving, spending, or investment purposes.
+
**Tactic**: TA15 Establish Assets
@@ -20,4 +42,3 @@
| -------- | -------------- |

-DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW
\ No newline at end of file
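The I00113 rows above describe participants paid via M-PESA to push the same hashtags in tight time windows. A minimal sketch of how that synchronisation signature could be surfaced, assuming post data is available as (account, hashtag, timestamp) tuples; the ten-minute window and twenty-account threshold are illustrative assumptions, not values from the report:

```python
# Sketch: flag hashtags pushed by many distinct accounts inside one short
# time bucket, the coordination pattern described in the Mozilla research.
from collections import defaultdict
from datetime import datetime, timedelta, timezone

def flag_synchronised_hashtags(posts, window=timedelta(minutes=10), min_accounts=20):
    """Return hashtags posted by >= min_accounts distinct accounts in one window."""
    buckets = defaultdict(set)  # (hashtag, bucket index) -> set of accounts
    for account, hashtag, ts in posts:
        bucket = int(ts.timestamp() // window.total_seconds())
        buckets[(hashtag, bucket)].add(account)
    return {tag for (tag, _), accounts in buckets.items() if len(accounts) >= min_accounts}

# 25 hypothetical accounts tweeting the same hashtag within ten minutes.
posts = [(f"user{i}", "#PaidTrend", datetime(2021, 5, 1, 9, i % 10, tzinfo=timezone.utc))
         for i in range(25)]
print(flag_synchronised_hashtags(posts))  # {'#PaidTrend'}
```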

diff --git a/generated_pages/techniques/T0148.003.md b/generated_pages/techniques/T0148.003.md
index cceebd4..5fa0a88 100644
--- a/generated_pages/techniques/T0148.003.md
+++ b/generated_pages/techniques/T0148.003.md
@@ -2,6 +2,29 @@

**Summary**: Stripe, PayPal, Apple Pay, Chargebee, Recurly, and Zuora are examples of Payment Processing Platforms.<br><br>Payment Processing Platforms produce programs providing Payment Processing or Subscription Processing capabilities, which actors can use to set up online storefronts or to take donations.

+**Tactic**: TA15 Establish Assets **Parent Technique:** T0148 Financial Instrument
+
+
+| Associated Technique | Description |
+| --------- | ------------------------- |
+
+
+
+| Incident | Descriptions given for this incident |
+| -------- | -------------------- |
+| [I00109 Coordinated Facebook Pages Designed to Fund a White Supremacist Agenda](../../generated_pages/incidents/I00109.md) | This report examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at the Suavelos website, and the content it links out to.<br><br><i>In going back to Suavelos’ main page, we also found: A link to a page on a web shop: alabastro.eu; A link to a page to donate money to the founders through Tipee and to the website through PayPal; [and] a link to a private forum that gathers 3.000 members: oppidum.suavelos.eu;</i><br><br>Suavelos linked out to an online store which it controlled (T0152.004: Website Asset, T0148.004: Payment Processing Capability), and to accounts on payment processing platforms PayPal and Tipee (T0146: Account Asset, T0148.003: Payment Processing Platform).<br><br>The Suavelos website also hosted a private forum (T0151.009: Legacy Online Forum Platform, T0155: Gated Asset), and linked out to a variety of assets it controlled on other online platforms: accounts on Twitter (T0146: Account Asset, T0151.008: Microblogging Platform), YouTube (T0146: Account Asset, T0152.006: Video Platform), Instagram and VKontakte (T0146: Account Asset, T0151.001: Social Media Platform). |
+| [I00112 Patreon allows disinformation and conspiracies to be monetised in Spain](../../generated_pages/incidents/I00112.md) | In this report EU DisinfoLab identified 17 Spanish accounts that monetise and/or spread QAnon content or other conspiracy theories on Patreon. Content produced by these accounts goes against Patreon’s stated policy of removing creators that “advance disinformation promoting the QAnon conspiracy theory”. EU DisinfoLab found:<br><br><i>In most cases, the creators monetise the content directly on Patreon (posts are only accessible for people sponsoring the creators) but there are also cases of indirect monetization (monetization through links leading to other platforms), an aspect that was flagged and analysed by Eu DisinfoLab in the mentioned previous report.<br><br>Some creators display links that redirects users to other platforms such as YouTube or LBRY where they can monetise their content. Some even offer almost all of their videos for free by redirecting to their YouTube channel.<br><br>Another modus operandi is for the creators to advertise on Patreon that they are looking for financing through PayPal or provide the author’s email to explore other financing alternatives.<br><br>[...]<br><br>Sometimes the content offered for a fee on Patreon is freely accessible on other platforms. Creators openly explain that they seek voluntary donation on Patreon, but that their creations will be public on YouTube. This means that the model of these platforms does not always involve a direct monetisation of the content. Creators who have built a strong reputation previously on other platforms can use Patreon as a platform to get some sponsorship which is not related to the content and give them more freedom to create.<br><br>Some users explicitly claim to use Patreon as a secondary channel to evade “censorship” by other platforms such as YouTube. Patreon seems to be perceived as a safe haven for disinformation and fringe content that has been suppressed for violating the policies of other platforms. For example, Jorge Guerra points out how Patreon acts as a back-up channel to go to in case of censorship by YouTube. Meanwhile, Alfa Mind openly claims to use Patreon to publish content that is not allowed on YouTube. “Exclusive access to videos that are prohibited on YouTube. These videos are only accessible on Patreon, as their content is very explicit and shocking”, are offered to the patrons who pay €3 per month.</i><br><br>In spite of Patreon’s stated policy, actors use accounts on their platform to generate revenue or donations for their content, and to provide a space to host content which was removed from other platforms (T0146: Account Asset, T0152.012: Subscription Service Platform, T0121.001: Bypass Content Blocking).<br><br>Some actors were observed accepting donations via PayPal (T0146: Account Asset, T0148.003: Payment Processing Platform). |
+
+
+
+| Counters | Response types |
+| -------- | -------------- |
+
+
+# Technique T0148.003: Payment Processing Platform
+
+**Summary**: Stripe, PayPal, Apple Pay, Chargebee, Recurly, and Zuora are examples of Payment Processing Platforms.<br><br>Payment Processing Platforms produce programs providing Payment Processing or Subscription Processing capabilities, which actors can use to set up online storefronts or to take donations.
+
**Tactic**: TA15 Establish Assets
@@ -21,4 +44,3 @@
| -------- | -------------- |

-DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW
\ No newline at end of file
diff --git a/generated_pages/techniques/T0148.004.md b/generated_pages/techniques/T0148.004.md
index 30a3443..afd863b 100644
--- a/generated_pages/techniques/T0148.004.md
+++ b/generated_pages/techniques/T0148.004.md
@@ -2,6 +2,28 @@

**Summary**: A Payment Processing Capability is a feature of online platforms or software which enables the processing of one-off payments (e.g. an online checkout, or donation processing page).<br><br>Payment Processing Capabilities can enable platform users to purchase products or services, or can facilitate donations to a given cause.

+**Tactic**: TA15 Establish Assets **Parent Technique:** T0148 Financial Instrument
+
+
+| Associated Technique | Description |
+| --------- | ------------------------- |
+
+
+
+| Incident | Descriptions given for this incident |
+| -------- | -------------------- |
+| [I00109 Coordinated Facebook Pages Designed to Fund a White Supremacist Agenda](../../generated_pages/incidents/I00109.md) | This report examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at the Suavelos website, and the content it links out to.<br><br><i>In going back to Suavelos’ main page, we also found: A link to a page on a web shop: alabastro.eu; A link to a page to donate money to the founders through Tipee and to the website through PayPal; [and] a link to a private forum that gathers 3.000 members: oppidum.suavelos.eu;</i><br><br>Suavelos linked out to an online store which it controlled (T0152.004: Website Asset, T0148.004: Payment Processing Capability), and to accounts on payment processing platforms PayPal and Tipee (T0146: Account Asset, T0148.003: Payment Processing Platform).<br><br>The Suavelos website also hosted a private forum (T0151.009: Legacy Online Forum Platform, T0155: Gated Asset), and linked out to a variety of assets it controlled on other online platforms: accounts on Twitter (T0146: Account Asset, T0151.008: Microblogging Platform), YouTube (T0146: Account Asset, T0152.006: Video Platform), Instagram and VKontakte (T0146: Account Asset, T0151.001: Social Media Platform)., This report examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at technical indicators associated with the Suavelos website, and attributions which can be made as a consequence:<br><br><i>[The Google AdSense tag set up on Suavelos.eu was also found on the following domains, indicating that they are controlled by the same actor;] Alabastro.eu: an online shop to buy “white nationalists” t-shirts [and] ARPAC.eu: the website of a registered non-profit organisation advocating to lift regulation on gun control in France.</i><br><br>Other domains attributed to Suavelos (T0149.001: Domain Asset) reveal a website set up to sell merchandise (T0152.004: Website Asset, T0148.004: Payment Processing Capability, T0061: Sell Merchandise), and a website hosting a registered French non-profit (T0152.004: Website Asset, T0097.207: NGO Persona).<br><br><i>To learn more about the suavelos.eu domain, we collected the following data: The domain is hosted on OVH; The owner’s identity is protected; The IP Address of the server is 94.23.253.173, which is shared with 20 other domains.<br><br>The relative low number of websites hosted on this IP address could indicate that they all belong to the same people, and are hosted on the same private server.</i><br><br>Suavelos registered a domain using the web hosting provider OVH (T0149.001: Domain Asset, T0152.003: Website Hosting Platform, T0150.006: Purchased). The site’s IP address reveals a server hosting other domains potentially owned by the actors (T0149.005: Server Asset, T0149.006: IP Address Asset). |
+
+
+
+| Counters | Response types |
+| -------- | -------------- |
+
+
+# Technique T0148.004: Payment Processing Capability
+
+**Summary**: A Payment Processing Capability is a feature of online platforms or software which enables the processing of one-off payments (e.g. an online checkout, or donation processing page).<br><br>Payment Processing Capabilities can enable platform users to purchase products or services, or can facilitate donations to a given cause.
+
**Tactic**: TA15 Establish Assets
@@ -20,4 +42,3 @@
| -------- | -------------- |

-DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW
\ No newline at end of file
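The I00109 row above attributes multiple domains to one actor via a shared Google AdSense tag and a shared server IP. A hedged sketch of those two pivots using only the Python standard library; the domain list is a placeholder, and a real investigation would add passive-DNS or reverse-IP sources to enumerate co-hosted domains:

```python
# Sketch: group domains by the AdSense publisher ID embedded in their pages
# and by the IP address they resolve to; overlaps suggest common control.
import re
import socket
import urllib.request

ADSENSE_ID = re.compile(r"ca-pub-\d{6,}")  # AdSense publisher tag format

def fetch_adsense_ids(domain):
    html = urllib.request.urlopen(f"http://{domain}", timeout=10).read()
    return set(ADSENSE_ID.findall(html.decode("utf-8", "replace")))

def shared_infrastructure(domains):
    by_ip, by_tag = {}, {}
    for domain in domains:
        try:
            by_ip.setdefault(socket.gethostbyname(domain), []).append(domain)
            for tag in fetch_adsense_ids(domain):
                by_tag.setdefault(tag, []).append(domain)
        except OSError:
            continue  # skip unresolvable or unreachable domains
    # Two or more domains sharing an IP or a tag are attribution candidates.
    return ({ip: d for ip, d in by_ip.items() if len(d) > 1},
            {tag: d for tag, d in by_tag.items() if len(d) > 1})

print(shared_infrastructure(["example.com", "example.org"]))  # placeholder domains
```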

diff --git a/generated_pages/techniques/T0148.005.md b/generated_pages/techniques/T0148.005.md
index 991b60f..fa2434f 100644
--- a/generated_pages/techniques/T0148.005.md
+++ b/generated_pages/techniques/T0148.005.md
@@ -2,6 +2,27 @@

**Summary**: A Subscription Processing Capability is a feature of online platforms or software which enables the processing of recurring payments.<br><br>Subscription Processing Capabilities are typically used to enable recurring payments in exchange for continued access to products or services.

+**Tactic**: TA15 Establish Assets **Parent Technique:** T0148 Financial Instrument
+
+
+| Associated Technique | Description |
+| --------- | ------------------------- |
+
+
+
+| Incident | Descriptions given for this incident |
+| -------- | -------------------- |
+
+
+
+| Counters | Response types |
+| -------- | -------------- |
+
+
+# Technique T0148.005: Subscription Processing Capability
+
+**Summary**: A Subscription Processing Capability is a feature of online platforms or software which enables the processing of recurring payments.<br><br>Subscription Processing Capabilities are typically used to enable recurring payments in exchange for continued access to products or services.
+
**Tactic**: TA15 Establish Assets
@@ -19,4 +40,3 @@
| -------- | -------------- |

-DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW
\ No newline at end of file
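T0148.004 (one-off payments) and T0148.005 (recurring payments) map directly onto the checkout modes exposed by real payment processors. A sketch of that distinction using the stripe Python library, assuming a configured test API key and placeholder price IDs; this is an illustration, not tooling from this repository:

```python
# Sketch: the same Checkout API creates either a one-off payment session
# (a Payment Processing Capability) or a recurring subscription session
# (a Subscription Processing Capability), selected by `mode`.
import stripe

stripe.api_key = "sk_test_placeholder"  # assumption: a configured test key

one_off = stripe.checkout.Session.create(
    mode="payment",  # one-off checkout (T0148.004)
    line_items=[{"price": "price_oneoff_placeholder", "quantity": 1}],
    success_url="https://example.com/done",
)

recurring = stripe.checkout.Session.create(
    mode="subscription",  # recurring billing (T0148.005)
    line_items=[{"price": "price_monthly_placeholder", "quantity": 1}],
    success_url="https://example.com/done",
)
```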

diff --git a/generated_pages/techniques/T0148.006.md b/generated_pages/techniques/T0148.006.md
index f6d5358..caa700a 100644
--- a/generated_pages/techniques/T0148.006.md
+++ b/generated_pages/techniques/T0148.006.md
@@ -2,6 +2,28 @@

**Summary**: Kickstarter and GoFundMe are examples of Crowdfunding Platforms.<br><br>Crowdfunding Platforms enable users with Accounts to create projects for other platform users to finance, usually in exchange for access to the fruits of the project.

+**Tactic**: TA15 Establish Assets **Parent Technique:** T0148 Financial Instrument
+
+
+| Associated Technique | Description |
+| --------- | ------------------------- |
+
+
+
+| Incident | Descriptions given for this incident |
+| -------- | -------------------- |
+| [I00110 How COVID-19 conspiracists and extremists use crowdfunding platforms to fund their activities](../../generated_pages/incidents/I00110.md) | The EU Disinfo Lab produced a report into disinformation published on crowdfunding platforms:<br><br><i>More worrisome is the direct monetisation of disinformation happening on crowdfunding platforms: on Kickstarter, we found a user openly raising money for a documentary project suggesting that COVID-19 is a conspiracy.</i><br><br>A Kickstarter user attempted to use the platform to fund production of a documentary (T0017: Conduct Fundraising, T0087: Develop Video-Based Content, T0146: Account Asset, T0148.006: Crowdfunding Platform).<br><br><i>On Patreon, we found several instances of direct monetisation of COVID-19 disinformation, including posts promoting a device allegedly protecting against COVID-19 and 5G, as well as posts related to the “Plandemic” conspiracy video, which gained attention on YouTube before being removed by the platform.<br><br>We also found an account called “Stranger than fiction” entirely dedicated to disinformation, which openly states that their content was “Banned by screwtube and fakebook, our videos have been viewed over a billion times.”</i><br><br>The “Stranger than fiction” user presented itself as an alternative news source which had been banned from other platforms (T0146: Account Asset, T0097.202: News Outlet Persona, T0121.001: Bypass Content Blocking, T0152.012: Subscription Service Platform).<br><br><i>On the US-based crowdfunding platform IndieGogo, EU DisinfoLab found a successful crowdfunding campaign of €133.903 for a book called Revolution Q. This book, now also available on Amazon, claims to be “Written for both newcomers and long-time QAnon fans alike, this book is a treasure-trove of information designed to help everyone weather The Storm.”</i><br><br>An IndieGogo account was used to gather funds to produce a book on QAnon (T0017: Conduct Fundraising, T0085.005: Develop Book, T0146: Account Asset, T0148.006: Crowdfunding Platform), with the book later sold on Amazon marketplace (T0148.007: eCommerce Platform). |
+
+
+
+| Counters | Response types |
+| -------- | -------------- |
+
+
+# Technique T0148.006: Crowdfunding Platform
+
+**Summary**: Kickstarter and GoFundMe are examples of Crowdfunding Platforms.<br><br>Crowdfunding Platforms enable users with Accounts to create projects for other platform users to finance, usually in exchange for access to the fruits of the project.
+
**Tactic**: TA15 Establish Assets
@@ -20,4 +42,3 @@
| -------- | -------------- |

-DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW
\ No newline at end of file
diff --git a/generated_pages/techniques/T0148.007.md b/generated_pages/techniques/T0148.007.md
index 4d94c67..be850a9 100644
--- a/generated_pages/techniques/T0148.007.md
+++ b/generated_pages/techniques/T0148.007.md
@@ -2,6 +2,29 @@

**Summary**: Amazon, eBay, and Etsy are examples of eCommerce Platforms.<br><br>eCommerce Platforms enable users with Accounts to create online storefronts from which other platform users can purchase goods or services.

+**Tactic**: TA15 Establish Assets **Parent Technique:** T0148 Financial Instrument
+
+
+| Associated Technique | Description |
+| --------- | ------------------------- |
+
+
+
+| Incident | Descriptions given for this incident |
+| -------- | -------------------- |
+| [I00106 Facebook Is Being Flooded With Gross AI-Generated Images of Hurricane Helene Devastation](../../generated_pages/incidents/I00106.md) | <i>As families desperately seek to find missing loved ones and communities grapple with immeasurable losses of both life and property in the wake of [2024’s] Hurricane Helene, AI slop scammers appear to be capitalizing on the moment for personal gain.<br><br>A Facebook account called "Coastal Views" usually shares calmer AI imagery of nature-filled beachside scenes. The account's banner image showcases a signpost reading "OBX Live," OBX being shorthand for North Carolina's Outer Banks islands.<br><br>But starting this weekend, the account shifted its approach dramatically, as first flagged by a social media user on X.<br><br>Instead of posting "photos" of leaping dolphins and sandy beaches, the account suddenly started publishing images of flooded mountain neighborhoods, submerged houses, and dogs sitting on top of roofs.<br><br>But instead of spreading vital information to those affected by the natural disaster, or at the very least sharing real photos of the destruction, the account is seemingly trying to use AI to cash in on all the attention the hurricane has been getting.<br><br>The account links to an Etsy page for a business called" OuterBanks2023," where somebody who goes by "Alexandr" sells AI-generated prints of horses touching snouts with sea turtles, Santa running down the shoreline with a reindeer, and sunsets over ocean waves.</i><br><br>A Facebook page which presented itself as being associated with North Carolina, and which had posted AI-generated images, changed to posting AI-generated images of hurricane damage after Hurricane Helene hit North Carolina (T0151.003: Online Community Page, T0151.001: Social Media Platform, T0115: Post Content, T0086.002: Develop AI-Generated Images (Deepfakes), T0068: Respond to Breaking News Event or Active Crisis).<br><br>The account included links (T0122: Direct Users to Alternative Platforms) to an account on Etsy, which sold prints of AI-generated images (T0146: Account Asset, T0148.007: eCommerce Platform). |
+| [I00110 How COVID-19 conspiracists and extremists use crowdfunding platforms to fund their activities](../../generated_pages/incidents/I00110.md) | The EU Disinfo Lab produced a report into disinformation published on crowdfunding platforms:<br><br><i>More worrisome is the direct monetisation of disinformation happening on crowdfunding platforms: on Kickstarter, we found a user openly raising money for a documentary project suggesting that COVID-19 is a conspiracy.</i><br><br>A Kickstarter user attempted to use the platform to fund production of a documentary (T0017: Conduct Fundraising, T0087: Develop Video-Based Content, T0146: Account Asset, T0148.006: Crowdfunding Platform).<br><br><i>On Patreon, we found several instances of direct monetisation of COVID-19 disinformation, including posts promoting a device allegedly protecting against COVID-19 and 5G, as well as posts related to the “Plandemic” conspiracy video, which gained attention on YouTube before being removed by the platform.<br><br>We also found an account called “Stranger than fiction” entirely dedicated to disinformation, which openly states that their content was “Banned by screwtube and fakebook, our videos have been viewed over a billion times.”</i><br><br>The “Stranger than fiction” user presented itself as an alternative news source which had been banned from other platforms (T0146: Account Asset, T0097.202: News Outlet Persona, T0121.001: Bypass Content Blocking, T0152.012: Subscription Service Platform).<br><br><i>On the US-based crowdfunding platform IndieGogo, EU DisinfoLab found a successful crowdfunding campaign of €133.903 for a book called Revolution Q. This book, now also available on Amazon, claims to be “Written for both newcomers and long-time QAnon fans alike, this book is a treasure-trove of information designed to help everyone weather The Storm.”</i><br><br>An IndieGogo account was used to gather funds to produce a book on QAnon (T0017: Conduct Fundraising, T0085.005: Develop Book, T0146: Account Asset, T0148.006: Crowdfunding Platform), with the book later sold on Amazon marketplace (T0148.007: eCommerce Platform). |
+
+
+
+| Counters | Response types |
+| -------- | -------------- |
+
+
+# Technique T0148.007: eCommerce Platform
+
+**Summary**: Amazon, eBay, and Etsy are examples of eCommerce Platforms.<br><br>eCommerce Platforms enable users with Accounts to create online storefronts from which other platform users can purchase goods or services.
+
**Tactic**: TA15 Establish Assets
@@ -21,4 +44,3 @@
| -------- | -------------- |

-DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW
\ No newline at end of file
diff --git a/generated_pages/techniques/T0148.008.md b/generated_pages/techniques/T0148.008.md
index 72efa81..fdd3434 100644
--- a/generated_pages/techniques/T0148.008.md
+++ b/generated_pages/techniques/T0148.008.md
@@ -2,6 +2,27 @@

**Summary**: Coinbase and Kraken are examples of Cryptocurrency Exchange Platforms.<br><br>Cryptocurrency Exchange Platforms provide users with a digital marketplace where they can buy, sell, and trade cryptocurrencies, such as Bitcoin or Ethereum.<br><br>Some Cryptocurrency Exchange Platforms allow users to create a Cryptocurrency Wallet.

+**Tactic**: TA15 Establish Assets **Parent Technique:** T0148 Financial Instrument
+
+
+| Associated Technique | Description |
+| --------- | ------------------------- |
+
+
+
+| Incident | Descriptions given for this incident |
+| -------- | -------------------- |
+
+
+
+| Counters | Response types |
+| -------- | -------------- |
+
+
+# Technique T0148.008: Cryptocurrency Exchange Platform
+
+**Summary**: Coinbase and Kraken are examples of Cryptocurrency Exchange Platforms.<br><br>Cryptocurrency Exchange Platforms provide users with a digital marketplace where they can buy, sell, and trade cryptocurrencies, such as Bitcoin or Ethereum.<br><br>Some Cryptocurrency Exchange Platforms allow users to create a Cryptocurrency Wallet.
+
**Tactic**: TA15 Establish Assets
@@ -19,4 +40,3 @@
| -------- | -------------- |

-DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW
\ No newline at end of file
diff --git a/generated_pages/techniques/T0148.009.md b/generated_pages/techniques/T0148.009.md
index f88b8e2..0d679d6 100644
--- a/generated_pages/techniques/T0148.009.md
+++ b/generated_pages/techniques/T0148.009.md
@@ -2,6 +2,28 @@

**Summary**: A Cryptocurrency Wallet is a digital tool that allows users to store, send, and receive cryptocurrencies. It manages private and public keys, enabling secure access to a user's crypto assets.<br><br>An influence operation might use cryptocurrency to conceal that it is conducting operational activities, building assets, or sponsoring aligning entities.

+**Tactic**: TA15 Establish Assets **Parent Technique:** T0148 Financial Instrument
+
+
+| Associated Technique | Description |
+| --------- | ------------------------- |
+
+
+
+| Incident | Descriptions given for this incident |
+| -------- | -------------------- |
+| [I00129 Teen who hacked Joe Biden and Bill Gates' Twitter accounts sentenced to three years in prison](../../generated_pages/incidents/I00129.md) | <i>An 18-year-old hacker who pulled off a huge breach in 2020, infiltrating several high profile Twitter accounts to solicit bitcoin transactions, has agreed to serve three years in prison for his actions.<br><br>Graham Ivan Clark, of Florida, was 17 years old at the time of the hack in July, during which he took over a number of major accounts including those of Joe Biden, Bill Gates and Kim Kardashian West.<br><br>Once he accessed them, Clark tweeted a link to a bitcoin address and wrote “all bitcoin sent to our address below will be sent back to you doubled!” According to court documents, Clark made more than $100,000 from the scheme, which his lawyers say he has since returned.<br><br>Clark was able to access the accounts after convincing an employee at Twitter he worked in the company’s information technology department, according to the Tampa Bay Times.</i><br><br>In this example a threat actor gained access to Twitter’s customer service portal through social engineering (T0146.004: Administrator Account Asset, T0150.005: Compromised Asset, T0151.008: Microblogging Platform), which they used to take over accounts of public figures (T0146.003: Verified Account Asset, T0143.003: Impersonated Persona, T0150.005: Compromised Asset, T0151.008: Microblogging Platform).<br><br>The threat actor used these compromised accounts to trick their followers into sending bitcoin to their wallet (T0148.009: Cryptocurrency Wallet). |
+
+
+
+| Counters | Response types |
+| -------- | -------------- |
+
+
+# Technique T0148.009: Cryptocurrency Wallet
+
+**Summary**: A Cryptocurrency Wallet is a digital tool that allows users to store, send, and receive cryptocurrencies. It manages private and public keys, enabling secure access to a user's crypto assets.<br><br>An influence operation might use cryptocurrency to conceal that it is conducting operational activities, building assets, or sponsoring aligning entities.
+
**Tactic**: TA15 Establish Assets
@@ -20,4 +42,3 @@
| -------- | -------------- |

-DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW
\ No newline at end of file
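The T0148.009 summary above says a wallet manages a private and a public key. A toy sketch of that relationship, with SHA-256 hashing standing in for the elliptic-curve derivation and chain-specific address encoding that real wallets use:

```python
# Toy model, not a real wallet: the private key is secret and controls funds;
# the public identifier derived from it is what gets shared to receive them.
import hashlib
import secrets

private_key = secrets.token_bytes(32)                # keep secret
public_id = hashlib.sha256(private_key).hexdigest()  # shareable stand-in address

print("receive funds at:", public_id)
```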

diff --git a/generated_pages/techniques/T0148.md b/generated_pages/techniques/T0148.md
index 2507086..9393833 100644
--- a/generated_pages/techniques/T0148.md
+++ b/generated_pages/techniques/T0148.md
@@ -2,6 +2,27 @@

**Summary**: A Financial Instrument is a platform or software that facilitates the sending, receiving, and management of money, enabling financial transactions between users or organisations.<br><br>Threat actors can deploy financial instruments legitimately to manage their own finances or illegitimately to support fraud schemes.

+**Tactic**: TA15 Establish Assets
+
+
+| Associated Technique | Description |
+| --------- | ------------------------- |
+
+
+
+| Incident | Descriptions given for this incident |
+| -------- | -------------------- |
+
+
+
+| Counters | Response types |
+| -------- | -------------- |
+
+
+# Technique T0148: Financial Instrument
+
+**Summary**: A Financial Instrument is a platform or software that facilitates the sending, receiving, and management of money, enabling financial transactions between users or organisations.<br><br>Threat actors can deploy financial instruments legitimately to manage their own finances or illegitimately to support fraud schemes.
+
**Tactic**: TA15 Establish Assets
@@ -19,4 +40,3 @@
| -------- | -------------- |

-DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW
\ No newline at end of file
Domains differ from Websites in that Websites are considered to be developed web pages which host content, whereas Domains do not necessarily host public-facing web content.

A threat actor may register a new domain to bypass the old domain being blocked. +**Tactic**: TA15 Establish Assets **Parent Technique:** T0149 Online Infrastructure + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00109 Coordinated Facebook Pages Designed to Fund a White Supremacist Agenda](../../generated_pages/incidents/I00109.md) | This report examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at technical indicators associated with the Suavelos website, and attributions which can be made as a consequence:

[The Google AdSense tag set up on Suavelos.eu was also found on the following domains, indicating that they are controlled by the same actor;] Alabastro.eu: an online shop to buy “white nationalists” t-shirts [and] ARPAC.eu: the website of a registered non-profit organisation advocating to lift regulation on gun control in France.

Other domains attributed to Suavelos (T0149.001: Domain Asset) reveal a website set up to sell merchandise (T0152.004: Website Asset, T0148.004: Payment Processing Capability, T0061: Sell Merchandise), and a website hosting a registered French non-profit (T0152.004: Website Asset, T0097.207: NGO Persona).

To learn more about the suavelos.eu domain, we collected the following data: The domain is hosted on OVH; The owner’s identity is protected; The IP Address of the server is 94.23.253.173, which is shared with 20 other domains.

The relative low number of websites hosted on this IP address could indicate that they all belong to the same people, and are hosted on the same private server.
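The shared-hosting pivot quoted above is simple to reproduce. As a minimal sketch (an illustrative aside, not part of the incident record): resolve a set of candidate domains and group them by IP address. The domain list below is the one named in the report; a real investigation would typically also use passive DNS, since domains may no longer resolve.

```python
import socket
from collections import defaultdict

# Candidate domains named in the report; in practice this list would come
# from pivots such as the shared Google AdSense tag or passive DNS.
candidates = ["suavelos.eu", "alabastro.eu", "arpac.eu"]

by_ip = defaultdict(list)
for domain in candidates:
    try:
        by_ip[socket.gethostbyname(domain)].append(domain)
    except socket.gaierror:
        pass  # domain no longer resolves

# Domains sharing one IP may sit on the same (possibly private) server.
for ip, domains in by_ip.items():
    if len(domains) > 1:
        print(f"{ip} is shared by: {', '.join(domains)}")
```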


Suavelos registered a domain using the web hosting provider OVH (T0149.001: Domain Asset, T0152.003: Website Hosting Platform, T0150.006: Purchased). The site’s IP address reveals a server hosting other domains potentially owned by the actors (T0149.005: Server Asset, T0149.006: IP Address Asset). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0149.001: Domain Asset + +**Summary**: A Domain is a web address (such as “google[.]com”), used to navigate to Websites on the internet.

Domains differ from Websites in that Websites are considered to be developed web pages which host content, whereas Domains do not necessarily host public-facing web content.

A threat actor may register a new domain to bypass blocks placed on an old domain. + +**Tactic**: TA15 Establish Assets **Parent Technique:** T0149 Online Infrastructure + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00109 Coordinated Facebook Pages Designed to Fund a White Supremacist Agenda](../../generated_pages/incidents/I00109.md) | This report examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at technical indicators associated with the Suavelos website, and attributions which can be made as a consequence:&#13;

[The Google AdSense tag set up on Suavelos.eu was also found on the following domains, indicating that they are controlled by the same actor;] Alabastro.eu: an online shop to buy “white nationalists” t-shirts [and] ARPAC.eu: the website of a registered non-profit organisation advocating to lift regulation on gun control in France.

Other domains attributed to Suavelos (T0149.001: Domain Asset) reveal a website set up to sell merchandise (T0152.004: Website Asset, T0148.004: Payment Processing Capability, T0061: Sell Merchandise), and a website hosting a registered French non-profit (T0152.004: Website Asset, T0097.207: NGO Persona).

To learn more about the suavelos.eu domain, we collected the following data: The domain is hosted on OVH; The owner’s identity is protected; The IP Address of the server is 94.23.253.173, which is shared with 20 other domains.

The relative low number of websites hosted on this IP address could indicate that they all belong to the same people, and are hosted on the same private server.


Suavelos registered a domain using the web hosting provider OVH (T0149.001: Domain Asset, T0152.003: Website Hosting Platform, T0150.006: Purchased). The site’s IP address reveals a server hosting other domains potentially owned by the actors (T0149.005: Server Asset, T0149.006: IP Address Asset). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0149.001: Domain Asset + +**Summary**: A Domain is a web address (such as “google[.]com”), used to navigate to Websites on the internet.

Domains differ from Websites in that Websites are considered to be developed web pages which host content, whereas Domains do not necessarily host public-facing web content.

A threat actor may register a new domain to bypass blocks placed on an old domain. + **Tactic**: TA15 Establish Assets @@ -20,4 +64,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0149.002.md index 0c9298f..aeffe5f 100644 --- a/generated_pages/techniques/T0149.002.md +++ b/generated_pages/techniques/T0149.002.md @@ -2,6 +2,48 @@ **Summary**: An Email Domain is a Domain (such as “meta[.]com”) which has the ability to send emails (e.g. from an @meta[.]com address).&#13;

Any Domain which has an MX (Mail Exchange) record and configured SMTP (Simple Mail Transfer Protocol) settings can send and receive emails, and is therefore an Email Domain. +**Tactic**: TA15 Establish Assets **Parent Technique:** T0149 Online Infrastructure + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00126 Charming Kitten Updates POWERSTAR with an InterPlanetary Twist](../../generated_pages/incidents/I00126.md) | The target of the recently observed [highly targeted spearphishing attack by “Charming Kitten”, a hacker group attributed to Iran] had published an article related to Iran. The publicity appears to have garnered the attention of Charming Kitten, who subsequently created an email address to impersonate a reporter of an Israeli media organization in order to send the target an email. Prior to sending malware to the target, the attacker simply asked if the target would be open to reviewing a document they had written related to US foreign policy. The target agreed to do so, since this was not an unusual request; they are frequently asked by journalists to review opinion pieces relating to their field of work.

In an effort to further gain the target’s confidence, Charming Kitten continued the interaction with another benign email containing a list of questions, to which the target then responded with answers. After multiple days of benign and seemingly legitimate interaction, Charming Kitten finally sent a “draft report”; this was the first time anything opaquely malicious occurred. The “draft report” was, in fact, a password-protected RAR file containing a malicious LNK file. The password for the RAR file was provided in a subsequent email.


In this example, threat actors created an email address on a domain which impersonated an existing Israeli news organisation, posing as a reporter who worked there (T0097.102: Journalist Persona, T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0149.003: Lookalike Domain, T0149.002: Email Domain Asset) in order to convince the target to download a document containing malware (T0085.004: Develop Document, T0147.003: Malware Asset). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0149.002: Email Domain Asset + +**Summary**: An Email Domain is a Domain (such as “meta[.]com”) which has the ability to send emails (e.g. from an @meta[.]com address).&#13;
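As an aside, the MX-record criterion this technique's summary goes on to describe can be tested directly. A minimal sketch, assuming the third-party dnspython package (`pip install dnspython`):

```python
import dns.resolver

def has_mx_records(domain: str) -> bool:
    """Return True if the domain publishes MX records, i.e. can receive mail."""
    try:
        dns.resolver.resolve(domain, "MX")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return False
    return True

print(has_mx_records("example.com"))
```

A domain passing this check meets the receiving half of the definition; sending configuration is harder to confirm externally, though published SPF records offer a hint.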

Any Domain which has an MX (Mail Exchange) record and configured SMTP (Simple Mail Transfer Protocol) settings can send and receive emails, and is therefore an Email Domain. + +**Tactic**: TA15 Establish Assets **Parent Technique:** T0149 Online Infrastructure + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00126 Charming Kitten Updates POWERSTAR with an InterPlanetary Twist](../../generated_pages/incidents/I00126.md) | The target of the recently observed [highly targeted spearphishing attack by “Charming Kitten”, a hacker group attributed to Iran] had published an article related to Iran. The publicity appears to have garnered the attention of Charming Kitten, who subsequently created an email address to impersonate a reporter of an Israeli media organization in order to send the target an email. Prior to sending malware to the target, the attacker simply asked if the target would be open to reviewing a document they had written related to US foreign policy. The target agreed to do so, since this was not an unusual request; they are frequently asked by journalists to review opinion pieces relating to their field of work.

In an effort to further gain the target’s confidence, Charming Kitten continued the interaction with another benign email containing a list of questions, to which the target then responded with answers. After multiple days of benign and seemingly legitimate interaction, Charming Kitten finally sent a “draft report”; this was the first time anything opaquely malicious occurred. The “draft report” was, in fact, a password-protected RAR file containing a malicious LNK file. The password for the RAR file was provided in a subsequent email.


In this example, threat actors created an email address on a domain which impersonated an existing Israeli news organisation, posing as a reporter who worked there (T0097.102: Journalist Persona, T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0149.003: Lookalike Domain, T0149.002: Email Domain Asset) in order to convince the target to download a document containing malware (T0085.004: Develop Document, T0147.003: Malware Asset). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0149.002: Email Domain Asset + +**Summary**: An Email Domain is a Domain (such as “meta[.]com”) which has the ability to send emails (e.g. from an @meta[.]com address).&#13;

Any Domain which has an MX (Mail Exchange) record and configured SMTP (Simple Mail Transfer Protocol) settings can send and receive emails, and is therefore an Email Domain. + **Tactic**: TA15 Establish Assets @@ -20,4 +64,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0149.003.md b/generated_pages/techniques/T0149.003.md index 0a89f1d..e1d657b 100644 --- a/generated_pages/techniques/T0149.003.md +++ b/generated_pages/techniques/T0149.003.md @@ -2,6 +2,51 @@ **Summary**: A Lookalike Domain is a Domain which is visually similar to another Domain, with the potential for web users to mistake one domain for the other.

Threat actors who want to impersonate organisations’ websites have been observed using a variety of domain impersonation methods. For example, actors wanting to create a domain impersonating netflix.com may use methods such as typosquatting (e.g. n3tflix.com), combosquatting (e.g. netflix-billing.com), or TLD swapping (e.g. netflix.top). +**Tactic**: TA15 Establish Assets **Parent Technique:** T0149 Online Infrastructure + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00107 The Lies Russia Tells Itself](../../generated_pages/incidents/I00107.md) | The Moscow firm Social Design Agency (SDA) has been attributed as being behind a Russian disinformation project known as Doppelganger:

The SDA’s deception work first surfaced in 2022, likely almost immediately after Doppelganger got off the ground. In April of that year, Meta, the parent company of Facebook and Instagram, disclosed in a quarterly report that it had removed from its platforms “a network of about 200 accounts operated from Russia.” By August 2022, German investigative journalists revealed that they had discovered forgeries of about 30 news sites, including many of the country’s biggest media outlets—Frankfurter Allgemeine, Der Spiegel, and Bild—but also Britain’s Daily Mail and France’s 20 Minutes. The sites had deceptive URLs such as www-dailymail-co-uk.dailymail.top.


As part of the SDA’s work, they created many websites which impersonated existing media outlets. Sites used domain impersonation tactics to increase perceived legitimacy of their impersonations (T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0152.003: Website Hosting Platform, T0149.003: Lookalike Domain). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0149.003: Lookalike Domain + +**Summary**: A Lookalike Domain is a Domain which is visually similar to another Domain, with the potential for web users to mistake one domain for the other.
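The three impersonation methods this technique's summary goes on to list (typosquatting, combosquatting, and TLD swapping) can be illustrated with a short candidate generator. A minimal sketch; the character swaps, keywords, and TLDs below are illustrative assumptions, not an exhaustive list:

```python
def lookalike_candidates(name: str = "netflix", tld: str = "com") -> set[str]:
    candidates = set()
    # Typosquatting: substitute visually similar characters one at a time.
    swaps = {"e": "3", "i": "1", "o": "0"}
    for i, ch in enumerate(name):
        if ch in swaps:
            candidates.add(f"{name[:i]}{swaps[ch]}{name[i + 1:]}.{tld}")
    # Combosquatting: append plausible service keywords to the brand name.
    for keyword in ("billing", "support", "login"):
        candidates.add(f"{name}-{keyword}.{tld}")
    # TLD swapping: keep the name but change the top-level domain.
    for other_tld in ("top", "net", "info"):
        candidates.add(f"{name}.{other_tld}")
    return candidates

print(sorted(lookalike_candidates()))  # includes n3tflix.com, netflix-billing.com, netflix.top
```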

Threat actors who want to impersonate organisations’ websites have been observed using a variety of domain impersonation methods. For example, actors wanting to create a domain impersonating netflix.com may use methods such as typosquatting (e.g. n3tflix.com), combosquatting (e.g. netflix-billing.com), or TLD swapping (e.g. netflix.top). + +**Tactic**: TA15 Establish Assets **Parent Technique:** T0149 Online Infrastructure + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00107 The Lies Russia Tells Itself](../../generated_pages/incidents/I00107.md) | The Moscow firm Social Design Agency (SDA) has been attributed as being behind a Russian disinformation project known as Doppelganger:

The SDA’s deception work first surfaced in 2022, likely almost immediately after Doppelganger got off the ground. In April of that year, Meta, the parent company of Facebook and Instagram, disclosed in a quarterly report that it had removed from its platforms “a network of about 200 accounts operated from Russia.” By August 2022, German investigative journalists revealed that they had discovered forgeries of about 30 news sites, including many of the country’s biggest media outlets—Frankfurter Allgemeine, Der Spiegel, and Bild—but also Britain’s Daily Mail and France’s 20 Minutes. The sites had deceptive URLs such as www-dailymail-co-uk.dailymail.top.


As part of the SDA’s work, they created many websites which impersonated existing media outlets. Sites used domain impersonation tactics to increase perceived legitimacy of their impersonations (T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0152.003: Website Hosting Platform, T0149.003: Lookalike Domain). | +| [I00126 Charming Kitten Updates POWERSTAR with an InterPlanetary Twist](../../generated_pages/incidents/I00126.md) | The target of the recently observed [highly targeted spearphishing attack by “Charming Kitten”, a hacker group attributed to Iran] had published an article related to Iran. The publicity appears to have garnered the attention of Charming Kitten, who subsequently created an email address to impersonate a reporter of an Israeli media organization in order to send the target an email. Prior to sending malware to the target, the attacker simply asked if the target would be open to reviewing a document they had written related to US foreign policy. The target agreed to do so, since this was not an unusual request; they are frequently asked by journalists to review opinion pieces relating to their field of work.

In an effort to further gain the target’s confidence, Charming Kitten continued the interaction with another benign email containing a list of questions, to which the target then responded with answers. After multiple days of benign and seemingly legitimate interaction, Charming Kitten finally sent a “draft report”; this was the first time anything opaquely malicious occurred. The “draft report” was, in fact, a password-protected RAR file containing a malicious LNK file. The password for the RAR file was provided in a subsequent email.


In this example, threat actors created an email address on a domain which impersonated an existing Israeli news organisation, posing as a reporter who worked there (T0097.102: Journalist Persona, T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0149.003: Lookalike Domain, T0149.002: Email Domain Asset) in order to convince the target to download a document containing malware (T0085.004: Develop Document, T0147.003: Malware Asset). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0149.003: Lookalike Domain + +**Summary**: A Lookalike Domain is a Domain which is visually similar to another Domain, with the potential for web users to mistake one domain for the other.&#13;

Threat actors who want to impersonate organisations’ websites have been observed using a variety of domain impersonation methods. For example, actors wanting to create a domain impersonating netflix.com may use methods such as typosquatting (e.g. n3tflix.com), combosquatting (e.g. netflix-billing.com), or TLD swapping (e.g. netflix.top). + **Tactic**: TA15 Establish Assets @@ -21,4 +66,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0149.004.md b/generated_pages/techniques/T0149.004.md index 55bc2b7..e9d4a22 100644 --- a/generated_pages/techniques/T0149.004.md +++ b/generated_pages/techniques/T0149.004.md @@ -2,6 +2,50 @@ **Summary**: A Redirecting Domain is a Domain which has been configured to redirect users to another Domain when visited. +**Tactic**: TA15 Establish Assets **Parent Technique:** T0149 Online Infrastructure + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00072 Behind the Dutch Terror Threat Video: The St. Petersburg "Troll Factory" Connection](../../generated_pages/incidents/I00072.md) | “The creator of Geopolitika[.]ru is Aleksandr Dugin, who was sanctioned by the United States Department of Treasury in 2015 for his role in the Eurasian Youth Union “for being responsible for or complicit in actions or policies that threaten the peace, security, stability, or sovereignty or territorial integrity of Ukraine.”

[...]

“Currently, the website geopolika[.]ru redirects directly to another partner website, Katehon.

“Katehon poses itself as a think tank focused on geopolitics in an English edition of its website. In contrast, in Russian, it states its aim to develop “ideological, political, diplomatic, economic and military strategy for Russia of the future” with a special role of religion. The president of Katehon’s supervisory board is Konstantin Malofeev, a Russian millionaire with connections to the Russian orthodox church and presidential administration, who founded Tsargrad TV, a known source of disinformation. Malofeev was sanctioned by the U.S. Department of Treasury and the European Union in 2014 for material support and financial backing of Russian-backed separatists in eastern Ukraine. Another known figure from the board is Sergei Glaziev, former advisor to Putin in 2012–2019. Dugin is also on the board in the Russian edition of the website, whereas he is omitted in English.”


In this example a domain managed by an actor previously sanctioned by the US Department of the Treasury has been reconfigured to redirect to another website, Katehon (T0149.004: Redirecting Domain Asset, T0150.004: Repurposed Asset).&#13;
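The redirect reconfiguration described here is straightforward to verify by following a domain's HTTP redirect chain. A minimal sketch, assuming the third-party requests library; `geopolitika.example` is a hypothetical stand-in for the real domain:

```python
import requests

# Follow the redirect chain and print each hop; a populated history list
# indicates the domain is acting as a Redirecting Domain.
response = requests.get("http://geopolitika.example/", timeout=10)
for hop in response.history:
    print(hop.status_code, "->", hop.url)
print("final destination:", response.url)
```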

Katehon presents itself as a geopolitical think tank in English, but does not maintain this persona when presenting itself to a Russian speaking audience (T0097.204: Think Tank Persona, T0152.004: Website Asset, T0155.004: Geoblocked Asset). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0149.004: Redirecting Domain Asset + +**Summary**: A Redirecting Domain is a Domain which has been configured to redirect users to another Domain when visited. + +**Tactic**: TA15 Establish Assets **Parent Technique:** T0149 Online Infrastructure + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00072 Behind the Dutch Terror Threat Video: The St. Petersburg "Troll Factory" Connection](../../generated_pages/incidents/I00072.md) | “The creator of Geopolitika[.]ru is Aleksandr Dugin, who was sanctioned by the United States Department of Treasury in 2015 for his role in the Eurasian Youth Union “for being responsible for or complicit in actions or policies that threaten the peace, security, stability, or sovereignty or territorial integrity of Ukraine.”

[...]

“Currently, the website geopolika[.]ru redirects directly to another partner website, Katehon.

“Katehon poses itself as a think tank focused on geopolitics in an English edition of its website. In contrast, in Russian, it states its aim to develop “ideological, political, diplomatic, economic and military strategy for Russia of the future” with a special role of religion. The president of Katehon’s supervisory board is Konstantin Malofeev, a Russian millionaire with connections to the Russian orthodox church and presidential administration, who founded Tsargrad TV, a known source of disinformation. Malofeev was sanctioned by the U.S. Department of Treasury and the European Union in 2014 for material support and financial backing of Russian-backed separatists in eastern Ukraine. Another known figure from the board is Sergei Glaziev, former advisor to Putin in 2012–2019. Dugin is also on the board in the Russian edition of the website, whereas he is omitted in English.”


In this example a domain managed by an actor previously sanctioned by the US Department of the Treasury has been reconfigured to redirect to another website, Katehon (T0149.004: Redirecting Domain Asset, T0150.004: Repurposed Asset).&#13;

Katehon presents itself as a geopolitical think tank in English, but does not maintain this persona when presenting itself to a Russian speaking audience (T0097.204: Think Tank Persona, T0152.004: Website Asset, T0155.004: Geoblocked Asset). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0149.004: Redirecting Domain Asset + +**Summary**: A Redirecting Domain is a Domain which has been configured to redirect users to another Domain when visited. + **Tactic**: TA15 Establish Assets @@ -20,4 +64,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0149.005.md b/generated_pages/techniques/T0149.005.md index ad94ac0..2bfc6a1 100644 --- a/generated_pages/techniques/T0149.005.md +++ b/generated_pages/techniques/T0149.005.md @@ -2,6 +2,51 @@ **Summary**: A Server is a computer which provides resources, services, or data to other computers over a network. There are different types of servers, such as web servers (which serve web pages and applications to users), database servers (which manage and provide access to databases), and file servers (which store and share files across a network). +**Tactic**: TA15 Establish Assets **Parent Technique:** T0149 Online Infrastructure + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00103 The racist AI deepfake that fooled and divided a community](../../generated_pages/incidents/I00103.md) | “I seriously don't understand why I have to constantly put up with these dumbasses here every day.”

So began what appeared to be a long tirade from the principal of Pikesville High School, punctuated with racist, antisemitic and offensive tropes. It sounded like it had been secretly recorded.

The speaker went on to bemoan “ungrateful black kids” and Jewish people in the community.

The clip, first posted in [January 2024], went viral nationally. But it really struck a nerve in the peaceful, leafy suburb of Pikesville, which has large black and Jewish populations, and in the nearby city of Baltimore, Maryland. Principal Eric Eiswert was put on paid administrative leave pending an investigation.

[...]

But what those sharing the clip didn’t realise at the time was that another bombshell was about to drop: the clip was an AI-generated fake.

[...]

[In April 2024], Baltimore Police Chief Robert McCullough confirmed they now had “conclusive evidence that the recording was not authentic”.

And they believed they knew who made the fake.

Police charged 31-year-old Dazhon Darien, the school’s athletics director, with several counts related to the fake video. Charges included theft, retaliating against a witness and stalking.

He was arrested at the airport, where police say he was planning to fly to Houston, Texas.

Police say that Mr Darien had been under investigation by Principal Eiswert over an alleged theft of $1,916 (£1,460) from the school. They also allege there had been “work performance challenges” and his contract was likely not to be renewed.

Their theory was that by creating the deepfake recording, he hoped to discredit the principal before he could be fired.

Investigators say they traced an email used to send the original video to a server connected to Mr Darien, and allege that he used Baltimore County Public Schools' computer network to access AI tools. He is due to stand trial in December 2024.


By associating Mr Darien with the server used to email the original AI-generated audio, investigators link Darien to the fabricated content (T0149.005: Server Asset, T0088.001: AI Generated Audio (Deepfakes)). They also assert that Darien used computers owned by the school to access platforms used to generate the audio (T0146: Account Asset, T0154.002: AI Media Platform). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0149.005: Server Asset + +**Summary**: A Server is a computer which provides resources, services, or data to other computers over a network. There are different types of servers, such as web servers (which serve web pages and applications to users), database servers (which manage and provide access to databases), and file servers (which store and share files across a network). + +**Tactic**: TA15 Establish Assets **Parent Technique:** T0149 Online Infrastructure + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00103 The racist AI deepfake that fooled and divided a community](../../generated_pages/incidents/I00103.md) | “I seriously don't understand why I have to constantly put up with these dumbasses here every day.”&#13;

So began what appeared to be a long tirade from the principal of Pikesville High School, punctuated with racist, antisemitic and offensive tropes. It sounded like it had been secretly recorded.

The speaker went on to bemoan “ungrateful black kids” and Jewish people in the community.

The clip, first posted in [January 2024], went viral nationally. But it really struck a nerve in the peaceful, leafy suburb of Pikesville, which has large black and Jewish populations, and in the nearby city of Baltimore, Maryland. Principal Eric Eiswert was put on paid administrative leave pending an investigation.

[...]

But what those sharing the clip didn’t realise at the time was that another bombshell was about to drop: the clip was an AI-generated fake.

[...]

[In April 2024], Baltimore Police Chief Robert McCullough confirmed they now had “conclusive evidence that the recording was not authentic”.

And they believed they knew who made the fake.

Police charged 31-year-old Dazhon Darien, the school’s athletics director, with several counts related to the fake video. Charges included theft, retaliating against a witness and stalking.

He was arrested at the airport, where police say he was planning to fly to Houston, Texas.

Police say that Mr Darien had been under investigation by Principal Eiswert over an alleged theft of $1,916 (£1,460) from the school. They also allege there had been “work performance challenges” and his contract was likely not to be renewed.

Their theory was that by creating the deepfake recording, he hoped to discredit the principal before he could be fired.

Investigators say they traced an email used to send the original video to a server connected to Mr Darien, and allege that he used Baltimore County Public Schools' computer network to access AI tools. He is due to stand trial in December 2024.


By associating Mr Darien with the server used to email the original AI-generated audio, investigators link Darien to the fabricated content (T0149.005: Server Asset, T0088.001: AI Generated Audio (Deepfakes)). They also assert that Darien used computers owned by the school to access platforms used to generate the audio (T0146: Account Asset, T0154.002: AI Media Platform). | +| [I00109 Coordinated Facebook Pages Designed to Fund a White Supremacist Agenda](../../generated_pages/incidents/I00109.md) | This report examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at technical indicators associated with the Suavelos website, and attributions which can be made as a consequence:&#13;

[The Google AdSense tag set up on Suavelos.eu was also found on the following domains, indicating that they are controlled by the same actor;] Alabastro.eu: an online shop to buy “white nationalists” t-shirts [and] ARPAC.eu: the website of a registered non-profit organisation advocating to lift regulation on gun control in France.

Other domains attributed to Suavelos (T0149.001: Domain Asset) reveal a website set up to sell merchandise (T0152.004: Website Asset, T0148.004: Payment Processing Capability, T0061: Sell Merchandise), and a website hosting a registered French non-profit (T0152.004: Website Asset, T0097.207: NGO Persona).

To learn more about the suavelos.eu domain, we collected the following data: The domain is hosted on OVH; The owner’s identity is protected; The IP Address of the server is 94.23.253.173, which is shared with 20 other domains.

The relative low number of websites hosted on this IP address could indicate that they all belong to the same people, and are hosted on the same private server.


Suavelos registered a domain using the web hosting provider OVH (T0149.001: Domain Asset, T0152.003: Website Hosting Platform, T0150.006: Purchased). The site’s IP address reveals a server hosting other domains potentially owned by the actors (T0149.005: Server Asset, T0149.006: IP Address Asset). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0149.005: Server Asset + +**Summary**: A Server is a computer which provides resources, services, or data to other computers over a network. There are different types of servers, such as web servers (which serve web pages and applications to users), database servers (which manage and provide access to databases), and file servers (which store and share files across a network). + **Tactic**: TA15 Establish Assets @@ -21,4 +66,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0149.006.md b/generated_pages/techniques/T0149.006.md index 59ba086..5a942b1 100644 --- a/generated_pages/techniques/T0149.006.md +++ b/generated_pages/techniques/T0149.006.md @@ -2,6 +2,49 @@ **Summary**: An IP Address is a unique numerical label assigned to each device connected to a computer network that uses the Internet Protocol for communication. IP addresses are commonly a part of any online infrastructure.

IP addresses can be in IPv4 dotted decimal (x.x.x.x) or IPv6 colon-separated hexadecimal (y:y:y:y:y:y:y:y) formats. +**Tactic**: TA15 Establish Assets **Parent Technique:** T0149 Online Infrastructure + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0149.006: IP Address Asset + +**Summary**: An IP Address is a unique numerical label assigned to each device connected to a computer network that uses the Internet Protocol for communication. IP addresses are commonly a part of any online infrastructure.&#13;
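The two formats named here can be distinguished with Python's standard-library ipaddress module. A minimal sketch; `94.23.253.173` is the address cited in incident I00109, and the IPv6 value is a documentation-range example:

```python
import ipaddress

for raw in ["94.23.253.173", "2001:db8::1"]:
    addr = ipaddress.ip_address(raw)  # parses both dotted-decimal and colon-hex forms
    print(f"{raw} is IPv{addr.version}")
```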

IP addresses can be in IPv4 dotted decimal (x.x.x.x) or IPv6 colon-separated hexadecimal (y:y:y:y:y:y:y:y) formats. + +**Tactic**: TA15 Establish Assets **Parent Technique:** T0149 Online Infrastructure + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00109 Coordinated Facebook Pages Designed to Fund a White Supremacist Agenda](../../generated_pages/incidents/I00109.md) | This report examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at technical indicators associated with the Suavelos website, and attributions which can be made as a consequence:&#13;

[The Google AdSense tag set up on Suavelos.eu was also found on the following domains, indicating that they are controlled by the same actor;] Alabastro.eu: an online shop to buy “white nationalists” t-shirts [and] ARPAC.eu: the website of a registered non-profit organisation advocating to lift regulation on gun control in France.

Other domains attributed to Suavelos (T0149.001: Domain Asset) reveal a website set up to sell merchandise (T0152.004: Website Asset, T0148.004: Payment Processing Capability, T0061: Sell Merchandise), and a website hosting a registered French non-profit (T0152.004: Website Asset, T0097.207: NGO Persona).

To learn more about the suavelos.eu domain, we collected the following data: The domain is hosted on OVH; The owner’s identity is protected; The IP Address of the server is 94.23.253.173, which is shared with 20 other domains.

The relative low number of websites hosted on this IP address could indicate that they all belong to the same people, and are hosted on the same private server.


Suavelos registered a domain using the web hosting provider OVH (T0149.001: Domain Asset, T0152.003: Website Hosting Platform, T0150.006: Purchased). The site’s IP address reveals a server hosting other domains potentially owned by the actors (T0149.005: Server Asset, T0149.006: IP Address Asset). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0149.006: IP Address Asset + +**Summary**: An IP Address is a unique numerical label assigned to each device connected to a computer network that uses the Internet Protocol for communication. IP addresses are commonly a part of any online infrastructure.

IP addresses can be in IPv4 dotted decimal (x.x.x.x) or IPv6 colon-separated hexadecimal (y:y:y:y:y:y:y:y) formats. + **Tactic**: TA15 Establish Assets @@ -20,4 +63,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0149.007.md index 469a792..550e231 100644 --- a/generated_pages/techniques/T0149.007.md +++ b/generated_pages/techniques/T0149.007.md @@ -2,6 +2,48 @@ **Summary**: A VPN (Virtual Private Network) is a service which creates secure, encrypted connections over the internet, allowing users to transmit data safely and access network resources remotely. It masks IP Addresses, enhancing privacy and security by preventing unauthorised access and tracking. VPNs are commonly used for protecting sensitive information, bypassing geographic restrictions, and maintaining online anonymity.&#13;

VPNs can also allow a threat actor to pose as if they are located in one country while in reality being based in another. By doing so, they can try to either mis-attribute their activities to another actor or better hide their own identity. +**Tactic**: TA15 Establish Assets **Parent Technique:** T0149 Online Infrastructure + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0149.007: VPN Asset + +**Summary**: A VPN (Virtual Private Network) is a service which creates secure, encrypted connections over the internet, allowing users to transmit data safely and access network resources remotely. It masks IP Addresses, enhancing privacy and security by preventing unauthorised access and tracking. VPNs are commonly used for protecting sensitive information, bypassing geographic restrictions, and maintaining online anonymity.

VPNs can also allow a threat actor to pose as if they are located in one country while in reality being based in another. By doing so, they can try to either mis-attribute their activities to another actor or better hide their own identity. + +**Tactic**: TA15 Establish Assets **Parent Technique:** T0149 Online Infrastructure + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0149.007: VPN Asset + +**Summary**: A VPN (Virtual Private Network) is a service which creates secure, encrypted connections over the internet, allowing users to transmit data safely and access network resources remotely. It masks IP Addresses, enhancing privacy and security by preventing unauthorised access and tracking. VPNs are commonly used for protecting sensitive information, bypassing geographic restrictions, and maintaining online anonymity.

VPNs can also allow a threat actor to pose as if they are located in one country while in reality being based in another. By doing so, they can try to either mis-attribute their activities to another actor or better hide their own identity. + **Tactic**: TA15 Establish Assets @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0149.008.md b/generated_pages/techniques/T0149.008.md index a5058e2..92f29e6 100644 --- a/generated_pages/techniques/T0149.008.md +++ b/generated_pages/techniques/T0149.008.md @@ -2,6 +2,48 @@ **Summary**: A Proxy IP Address allows a threat actor to mask their real IP Address by putting a layer between them and the online content they’re connecting with.

Proxy IP Addresses can hide the connection between the threat actor and their online infrastructure. +**Tactic**: TA15 Establish Assets **Parent Technique:** T0149 Online Infrastructure + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0149.008: Proxy IP Address Asset + +**Summary**: A Proxy IP Address allows a threat actor to mask their real IP Address by putting a layer between them and the online content they’re connecting with.
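The layering described in this summary can be sketched with an HTTP client routed through a proxy, so the destination server sees the proxy's IP Address rather than the client's. This assumes the third-party requests library; the proxy address below is a documentation-range placeholder:

```python
import requests

# All traffic for both schemes is sent via the (hypothetical) proxy at 203.0.113.10.
proxies = {
    "http": "http://203.0.113.10:8080",
    "https": "http://203.0.113.10:8080",
}
response = requests.get("https://example.com/", proxies=proxies, timeout=10)
print(response.status_code)
```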

Proxy IP Addresses can hide the connection between the threat actor and their online infrastructure. + +**Tactic**: TA15 Establish Assets **Parent Technique:** T0149 Online Infrastructure + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0149.008: Proxy IP Address Asset + +**Summary**: A Proxy IP Address allows a threat actor to mask their real IP Address by putting a layer between them and the online content they’re connecting with.

Proxy IP Addresses can hide the connection between the threat actor and their online infrastructure. + **Tactic**: TA15 Establish Assets @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0149.009.md b/generated_pages/techniques/T0149.009.md index 8b1f3ed..35205f7 100644 --- a/generated_pages/techniques/T0149.009.md +++ b/generated_pages/techniques/T0149.009.md @@ -2,6 +2,48 @@ **Summary**: An Internet Connected Physical Asset (sometimes referred to as IoT (Internet of Things)) is a physical asset which has internet connectivity to support online features, such as digital signage, wireless printers, and smart TVs. +**Tactic**: TA15 Establish Assets **Parent Technique:** T0149 Online Infrastructure + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0149.009: Internet Connected Physical Asset + +**Summary**: An Internet Connected Physical Asset (sometimes referred to as IoT (Internet of Things)) is a physical asset which has internet connectivity to support online features, such as digital signage, wireless printers, and smart TVs. + +**Tactic**: TA15 Establish Assets **Parent Technique:** T0149 Online Infrastructure + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0149.009: Internet Connected Physical Asset + +**Summary**: An Internet Connected Physical Asset (sometimes referred to as IoT (Internet of Things)) is a physical asset which has internet connectivity to support online features, such as digital signage, wireless printers, and smart TVs. + **Tactic**: TA15 Establish Assets @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0149.md b/generated_pages/techniques/T0149.md index 2a8902d..7119a76 100644 --- a/generated_pages/techniques/T0149.md +++ b/generated_pages/techniques/T0149.md @@ -2,6 +2,48 @@ **Summary**: Online Infrastructure consists of technical assets which enable online activity, such as domains, servers, and IP addresses. +**Tactic**: TA15 Establish Assets + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0149: Online Infrastructure + +**Summary**: Online Infrastructure consists of technical assets which enable online activity, such as domains, servers, and IP addresses. + +**Tactic**: TA15 Establish Assets + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0149: Online Infrastructure + +**Summary**: Online Infrastructure consists of technical assets which enable online activity, such as domains, servers, and IP addresses. 
+ **Tactic**: TA15 Establish Assets @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0150.001.md b/generated_pages/techniques/T0150.001.md index 6469aa0..9488cc3 100644 --- a/generated_pages/techniques/T0150.001.md +++ b/generated_pages/techniques/T0150.001.md @@ -2,6 +2,50 @@ **Summary**: A Newly Created Asset is an asset which has been created and used for the first time in a documented potential incident.

For example, analysts who can identify a recent creation date of Accounts participating in the spread of a new narrative can assert these are Newly Created Assets.&#13;
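The creation-date test described above reduces to a date comparison. A minimal sketch; the 30-day threshold is an assumption for illustration, as the framework does not define a cutoff for “newly created”:

```python
from datetime import datetime, timezone

def is_newly_created(created_at: datetime, observed_at: datetime,
                     threshold_days: int = 30) -> bool:
    # Hypothetical threshold: flag accounts created shortly before the activity.
    return (observed_at - created_at).days <= threshold_days

created = datetime(2023, 7, 1, tzinfo=timezone.utc)    # e.g. an account that joined in July 2023
observed = datetime(2023, 7, 20, tzinfo=timezone.utc)  # date the activity was observed
print(is_newly_created(created, observed))             # True: candidate Newly Created Asset
```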

Analysts should use Dormant if the asset was created and lay dormant for an extended period of time before activity. +**Tactic**: TA15 Establish Assets **Parent Technique:** T0150 Asset Origin + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00116 Blue-tick scammers target consumers who complain on X](../../generated_pages/incidents/I00116.md) | Consumers who complain of poor customer service on X are being targeted by scammers after the social media platform formerly known as Twitter changed its account verification process.&#13;

Bank customers and airline passengers are among those at risk of phishing scams when they complain to companies via X. Fraudsters, masquerading as customer service agents, respond under fake X handles and trick victims into disclosing their bank details to get a promised refund.

They typically win the trust of victims by displaying the blue checkmark icon, which until this year denoted accounts that had been officially verified by X.

Changes introduced this year allow the icon to be bought by anyone who pays an £11 monthly fee for the site’s subscription service, renamed this month from Twitter Blue to X Premium. Businesses that pay £950 a month receive a gold tick. X’s terms and conditions do not state whether subscriber accounts are pre-vetted.

Andrew Thomas was contacted by a scam account after posting a complaint to the travel platform Booking.com. “I’d been trying since April to get a refund after our holiday flights were cancelled and finally resorted to X,” he said.

“I received a response asking me to follow them, and DM [direct message] them with a contact number. They then called me via WhatsApp asking for my reference number so they could investigate. Later they called back to say that I would be refunded via their payment partner for which I’d need to download an app.”

Thomas became suspicious and checked the X profile. “It looked like the real thing, but I noticed that there was an unexpected hyphen in the Twitter handle and that it had only joined X in July 2023,” he said.


In this example a newly created paid account on X was used to direct users to other platforms (T0146.002: Paid Account Asset, T0146.003: Verified Account Asset, T0146.005: Lookalike Account ID, T0097.205: Business Persona, T0122: Direct Users to Alternative Platforms, T0143.003: Impersonated Persona, T0151.008: Microblogging Platform, T0150.001: Newly Created Asset). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0150.001: Newly Created Asset + +**Summary**: A Newly Created Asset is an asset which has been created and used for the first time in a documented potential incident.&#13;

For example, analysts who can identify a recent creation date of Accounts participating in the spread of a new narrative can assert these are Newly Created Assets.&#13;

Analysts should use Dormant if the asset was created and lay dormant for an extended period of time before activity. + +**Tactic**: TA15 Establish Assets **Parent Technique:** T0150 Asset Origin + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00116 Blue-tick scammers target consumers who complain on X](../../generated_pages/incidents/I00116.md) | Consumers who complain of poor customer service on X are being targeted by scammers after the social media platform formerly known as Twitter changed its account verification process.&#13;

Bank customers and airline passengers are among those at risk of phishing scams when they complain to companies via X. Fraudsters, masquerading as customer service agents, respond under fake X handles and trick victims into disclosing their bank details to get a promised refund.

They typically win the trust of victims by displaying the blue checkmark icon, which until this year denoted accounts that had been officially verified by X.

Changes introduced this year allow the icon to be bought by anyone who pays an £11 monthly fee for the site’s subscription service, renamed this month from Twitter Blue to X Premium. Businesses that pay £950 a month receive a gold tick. X’s terms and conditions do not state whether subscriber accounts are pre-vetted.

Andrew Thomas was contacted by a scam account after posting a complaint to the travel platform Booking.com. “I’d been trying since April to get a refund after our holiday flights were cancelled and finally resorted to X,” he said.

“I received a response asking me to follow them, and DM [direct message] them with a contact number. They then called me via WhatsApp asking for my reference number so they could investigate. Later they called back to say that I would be refunded via their payment partner for which I’d need to download an app.”

Thomas became suspicious and checked the X profile. “It looked like the real thing, but I noticed that there was an unexpected hyphen in the Twitter handle and that it had only joined X in July 2023,” he said.


In this example a newly created paid account on X was used to direct users to other platforms (T0146.002: Paid Account Asset, T0146.003: Verified Account Asset, T0146.005: Lookalike Account ID, T0097.205: Business Persona, T0122: Direct Users to Alternative Platforms, T0143.003: Impersonated Persona, T0151.008: Microblogging Platform, T0150.001: Newly Created Asset). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0150.001: Newly Created Asset + +**Summary**: A Newly Created Asset is an asset which has been created and used for the first time in a documented potential incident.&#13;

For example, analysts who can identify a recent creation date of Accounts participating in the spread of a new narrative can assert these are Newly Created Assets.&#13;

Analysts should use Dormant if the asset was created and lay dormant for an extended period of time before activity. + **Tactic**: TA15 Establish Assets @@ -20,4 +64,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0150.002.md index e9e2a42..7abfd87 100644 --- a/generated_pages/techniques/T0150.002.md +++ b/generated_pages/techniques/T0150.002.md @@ -2,6 +2,48 @@ **Summary**: A Dormant Asset is an asset which was inactive for an extended period before being used in a documented potential incident. +**Tactic**: TA15 Establish Assets **Parent Technique:** T0150 Asset Origin + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0150.002: Dormant Asset + +**Summary**: A Dormant Asset is an asset which was inactive for an extended period before being used in a documented potential incident. + +**Tactic**: TA15 Establish Assets **Parent Technique:** T0150 Asset Origin + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0150.002: Dormant Asset + +**Summary**: A Dormant Asset is an asset which was inactive for an extended period before being used in a documented potential incident. + **Tactic**: TA15 Establish Assets @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0150.003.md index 2c90dd5..7ac00bf 100644 --- a/generated_pages/techniques/T0150.003.md +++ b/generated_pages/techniques/T0150.003.md @@ -2,6 +2,52 @@ **Summary**: Pre-Existing Assets are assets which existed before the observed incident and which have not been Repurposed; i.e. they are still being used for their original purpose.&#13;

An example could be an Account which presented itself with a Journalist Persona prior to and during the observed potential incident. +**Tactic**: TA15 Establish Assets **Parent Technique:** T0150 Asset Origin + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00119 Independent journalist publishes Trump campaign document hacked by Iran despite election interference concerns](../../generated_pages/incidents/I00119.md) | An American journalist who runs an independent newsletter published a document [on 26 Sep 2024] that appears to have been stolen from Donald Trump’s presidential campaign — the first public posting of a file that is believed to be part of a dossier that federal officials say is part of an Iranian effort to manipulate the [2024] U.S. election.

The PDF document is a 271-page opposition research file on former President Donald Trump’s running mate, Sen. JD Vance, R-Ohio.

For more than two months, hackers who the U.S. says are tied to Iran have tried to persuade the American media to cover files they stole. No outlets took the bait.

But on Thursday, reporter Ken Klippenstein, who self-publishes on Substack after he left The Intercept this year, published one of the files.

[...]

Reporters who have received the documents describe the same pattern: An AOL account emails them files, signed by a person using the name “Robert,” who is reluctant to speak to their identity or reasons for wanting the documents to receive coverage.

NBC News was not part of the Robert persona’s direct outreach, but it has viewed its correspondence with a reporter at another publication.

One of the emails from the Robert persona previously viewed by NBC News included three large PDF files, each corresponding to Trump’s three reported finalists for vice president. The Vance file appears to be the one Klippenstein hosts on his site.


In this example hackers attributed to Iran used the “Robert” persona to email hacked documents to journalists (T0146: Account Asset, T0097.100: Individual Persona, T0153.001: Email Platform).&#13;

The journalist Ken Klippenstein used his existing blog on Substack to host a link to download the document (T0089: Obtain Private Documents, T0097.102: Journalist Persona, T0115: Post Content, T0143.001: Authentic Persona, T0152.001: Blogging Platform, T0152.002: Blog Asset, T0150.003: Pre-Existing Asset). | +| [I00120 factcheckUK or fakecheckUK? Reinventing the political faction as the impartial factchecker](../../generated_pages/incidents/I00120.md) | Ahead of the 2019 UK election, during a leaders’ debate, the Conservative party rebranded their “Conservative Campaign Headquarters Press” account to “FactCheckUK”:&#13;

The evening of the 19th November 2019 saw the first of three Leaders’ Debates on ITV, starting at 8pm and lasting for an hour. Current Prime Minister and leader of the Conservatives, Boris Johnson faced off against Labour party leader, Jeremy Corbyn. Plenty of people will have been watching the debate live, but a good proportion were “watching” (er, “twitching”?) via Twitter. This is something I’ve done in the past for certain shows. In some cases I just can’t watch or listen, but I can read, and in other cases, the commentary is far more interesting and entertaining than the show itself will ever be. This, for me, is just such a case. But very quickly, all eyes turned upon a modestly sized account with the handle @CCHQPress. That’s short for Conservative Campaign Headquarters Press. According to their (current!) Twitter bio, they are based in Westminster and they provide “snippets of news and commentary from CCHQ” to their 75k followers.

That is, until a few minutes into the debate.

All at once, like a person throwing off their street clothes to reveal some sinister new identity underneath, @CCHQPress abruptly shed its name, blue Conservative logo, Boris Johnson banner, and bio description. Moments later, it had entirely reinvented itself.

The purple banner was emblazoned with white font that read “✓ factcheckUK [with a “FROM CCQH” subheading]”.

The matching profile picture was a white tick in a purple circle. The bio was updated to: “Fact checking Labour from CCHQ”. And the name now read factcheckUK, with the customary Twitter blue (or white depending on your phone settings!) validation tick still after it


In this example an existing verified social media account on Twitter was repurposed to inauthentically present itself as a Fact Checking service (T0151.008: Microblogging Platform, T0150.003: Pre-Existing Asset, T0146.003: Verified Account Asset, T0097.203: Fact Checking Organisation Persona, T0143.002: Fabricated Persona). | + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0150.003: Pre-Existing Asset + +**Summary**: Pre-Existing Assets are assets which existed before the observed incident and which have not been Repurposed; i.e. they are still being used for their original purpose.<br>

An example could be an Account which presented itself with a Journalist Persona prior to and during the observed potential incident. + +**Tactic**: TA15 Establish Assets **Parent Technique:** T0150 Asset Origin + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00119 Independent journalist publishes Trump campaign document hacked by Iran despite election interference concerns](../../generated_pages/incidents/I00119.md) | An American journalist who runs an independent newsletter published a document [on 26 Sep 2024] that appears to have been stolen from Donald Trump’s presidential campaign — the first public posting of a file that is believed to be part of a dossier that federal officials say is part of an Iranian effort to manipulate the [2024] U.S. election.

The PDF document is a 271-page opposition research file on former President Donald Trump’s running mate, Sen. JD Vance, R-Ohio.

For more than two months, hackers who the U.S. says are tied to Iran have tried to persuade the American media to cover files they stole. No outlets took the bait.

But on Thursday, reporter Ken Klippenstein, who self-publishes on Substack after he left The Intercept this year, published one of the files.

[...]

Reporters who have received the documents describe the same pattern: An AOL account emails them files, signed by a person using the name “Robert,” who is reluctant to speak to their identity or reasons for wanting the documents to receive coverage.

NBC News was not part of the Robert persona’s direct outreach, but it has viewed its correspondence with a reporter at another publication.

One of the emails from the Robert persona previously viewed by NBC News included three large PDF files, each corresponding to Trump’s three reported finalists for vice president. The Vance file appears to be the one Klippenstein hosts on his site.


In this example hackers attributed to Iran used the Robert persona to email hacked documents to journalists (T0146: Account Asset, T0097.100: Individual Persona, T0153.001: Email Platform).<br>

The journalist Ken Klippenstein used his existing blog on Substack to host a link to download the document (T0089: Obtain Private Documents, T0097.102: Journalist Persona, T0115: Post Content, T0143.001: Authentic Persona, T0152.001: Blogging Platform, T0152.002: Blog Asset, T0150.003: Pre-Existing Asset). | +| [I00120 factcheckUK or fakecheckUK? Reinventing the political faction as the impartial factchecker](../../generated_pages/incidents/I00120.md) | Ahead of the 2019 UK general election, during a leaders’ debate, the Conservative party rebranded their “Conservative Campaign Headquarters Press” account to “factcheckUK”:<br>

The evening of the 19th November 2019 saw the first of three Leaders’ Debates on ITV, starting at 8pm and lasting for an hour. Current Prime Minister and leader of the Conservatives, Boris Johnson faced off against Labour party leader, Jeremy Corbyn. Plenty of people will have been watching the debate live, but a good proportion were “watching” (er, “twitching”?) via Twitter. This is something I’ve done in the past for certain shows. In some cases I just can’t watch or listen, but I can read, and in other cases, the commentary is far more interesting and entertaining than the show itself will ever be. This, for me, is just such a case. But very quickly, all eyes turned upon a modestly sized account with the handle @CCHQPress. That’s short for Conservative Campaign Headquarters Press. According to their (current!) Twitter bio, they are based in Westminster and they provide “snippets of news and commentary from CCHQ” to their 75k followers.

That is, until a few minutes into the debate.

All at once, like a person throwing off their street clothes to reveal some sinister new identity underneath, @CCHQPress abruptly shed its name, blue Conservative logo, Boris Johnson banner, and bio description. Moments later, it had entirely reinvented itself.

The purple banner was emblazoned with white font that read “✓ factcheckUK [with a “FROM CCHQ” subheading]”.<br>

The matching profile picture was a white tick in a purple circle. The bio was updated to: “Fact checking Labour from CCHQ”. And the name now read factcheckUK, with the customary Twitter blue (or white depending on your phone settings!) validation tick still after it


In this example an existing verified social media account on Twitter was repurposed to inauthentically present itself as a Fact Checking service (T0151.008: Microblogging Platform, T0150.003: Pre-Existing Asset, T0146.003: Verified Account Asset, T0097.203: Fact Checking Organisation Persona, T0143.002: Fabricated Persona). | + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0150.003: Pre-Existing Asset + +**Summary**: Pre-Existing Assets are assets which existed before the observed incident and which have not been Repurposed; i.e. they are still being used for their original purpose.<br>

An example could be an Account which presented itself with a Journalist Persona prior to and during the observed potential incident. + **Tactic**: TA15 Establish Assets @@ -21,4 +67,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0150.004.md b/generated_pages/techniques/T0150.004.md index 4ad7d34..8eac37c 100644 --- a/generated_pages/techniques/T0150.004.md +++ b/generated_pages/techniques/T0150.004.md @@ -2,6 +2,53 @@ **Summary**: Repurposed Assets are assets which have been identified as being used previously, but are now being used for different purposes, or have new Presented Personas.

Actors have been documented compromising assets, and then repurposing them to present Inauthentic Personas as part of their operations. +**Tactic**: TA15 Establish Assets **Parent Technique:** T0150 Asset Origin + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00072 Behind the Dutch Terror Threat Video: The St. Petersburg "Troll Factory" Connection](../../generated_pages/incidents/I00072.md) | “The creator of Geopolitika[.]ru is Aleksandr Dugin, who was sanctioned by the United States Department of Treasury in 2015 for his role in the Eurasian Youth Union “for being responsible for or complicit in actions or policies that threaten the peace, security, stability, or sovereignty or territorial integrity of Ukraine.”

[...]

“Currently, the website geopolika[.]ru redirects directly to another partner website, Katehon.

“Katehon poses itself as a think tank focused on geopolitics in an English edition of its website. In contrast, in Russian, it states its aim to develop “ideological, political, diplomatic, economic and military strategy for Russia of the future” with a special role of religion. The president of Katehon’s supervisory board is Konstantin Malofeev, a Russian millionaire with connections to the Russian orthodox church and presidential administration, who founded Tsargrad TV, a known source of disinformation. Malofeev was sanctioned by the U.S. Department of Treasury and the European Union in 2014 for material support and financial backing of Russian-backed separatists in eastern Ukraine. Another known figure from the board is Sergei Glaziev, former advisor to Putin in 2012–2019. Dugin is also on the board in the Russian edition of the website, whereas he is omitted in English.”


In this example a domain managed by an actor previously sanctioned by the US Department of the Treasury was reconfigured to redirect to another website, Katehon (T0149.004: Redirecting Domain Asset, T0150.004: Repurposed Asset).<br>

Katehon presents itself as a geopolitical think tank in English, but does not maintain this persona when presenting itself to a Russian-speaking audience (T0097.204: Think Tank Persona, T0152.004: Website Asset, T0155.004: Geoblocked Asset). | +| [I00113 Inside the Shadowy World of Disinformation for Hire in Kenya](../../generated_pages/incidents/I00113.md) | Researchers at Mozilla examined influence operations targeting Kenyan citizens on Twitter in 2021, providing “a grim window into the booming and shadowy industry of Twitter influencers for political hire here in Kenya”. The report touches upon how actors gained access to Twitter accounts, and what personas they presented:<br>

Verified accounts are complicit. One influencer we spoke to mentioned that the people who own coveted “blue check” accounts will often rent them out for disinformation campaigns. These verified accounts can improve the campaign’s chances of trending. Says one interviewee: “The owner of the account usually receives a cut of the campaign loot”.

[...]

Many of the accounts we examined appear to give an aura of authenticity, but in reality they are not authentic. Simply looking at their date of creation won’t give you a hint as to their purpose. We had to dig deeper. The profile pictures and content of some of the accounts gave us the answers we were looking for. A common tactic these accounts utilize is using suggestive pictures of women to bait men into following them, or at least pay attention. In terms of content, many of these accounts tweeted off the same hashtags for days on end and will constantly retweet a specific set of accounts.


Actors participating in this operation rented verified Twitter accounts from their owners (in 2021 a checkmark on Twitter verified a user’s identity), which were repurposed with updated account imagery (T0146.003: Verified Account Asset, T0150.007: Rented Asset, T0150.004: Repurposed Asset, T0145.006: Attractive Person Account Imagery). | + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0150.004: Repurposed Asset + +**Summary**: Repurposed Assets are assets which have been identified as being used previously, but are now being used for different purposes, or have new Presented Personas.<br>

Actors have been documented compromising assets, and then repurposing them to present Inauthentic Personas as part of their operations. + +**Tactic**: TA15 Establish Assets **Parent Technique:** T0150 Asset Origin + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00072 Behind the Dutch Terror Threat Video: The St. Petersburg "Troll Factory" Connection](../../generated_pages/incidents/I00072.md) | “The creator of Geopolitika[.]ru is Aleksandr Dugin, who was sanctioned by the United States Department of Treasury in 2015 for his role in the Eurasian Youth Union “for being responsible for or complicit in actions or policies that threaten the peace, security, stability, or sovereignty or territorial integrity of Ukraine.”

[...]

“Currently, the website geopolika[.]ru redirects directly to another partner website, Katehon.

“Katehon poses itself as a think tank focused on geopolitics in an English edition of its website. In contrast, in Russian, it states its aim to develop “ideological, political, diplomatic, economic and military strategy for Russia of the future” with a special role of religion. The president of Katehon’s supervisory board is Konstantin Malofeev, a Russian millionaire with connections to the Russian orthodox church and presidential administration, who founded Tsargrad TV, a known source of disinformation. Malofeev was sanctioned by the U.S. Department of Treasury and the European Union in 2014 for material support and financial backing of Russian-backed separatists in eastern Ukraine. Another known figure from the board is Sergei Glaziev, former advisor to Putin in 2012–2019. Dugin is also on the board in the Russian edition of the website, whereas he is omitted in English.”


In this example a domain managed by an actor previously sanctioned by the US Department of the Treasury was reconfigured to redirect to another website, Katehon (T0149.004: Redirecting Domain Asset, T0150.004: Repurposed Asset).<br>

Katehon presents itself as a geopolitical think tank in English, but does not maintain this persona when presenting itself to a Russian-speaking audience (T0097.204: Think Tank Persona, T0152.004: Website Asset, T0155.004: Geoblocked Asset). | +| [I00113 Inside the Shadowy World of Disinformation for Hire in Kenya](../../generated_pages/incidents/I00113.md) | Researchers at Mozilla examined influence operations targeting Kenyan citizens on Twitter in 2021, providing “a grim window into the booming and shadowy industry of Twitter influencers for political hire here in Kenya”. The report touches upon how actors gained access to Twitter accounts, and what personas they presented:<br>

Verified accounts are complicit. One influencer we spoke to mentioned that the people who own coveted “blue check” accounts will often rent them out for disinformation campaigns. These verified accounts can improve the campaign’s chances of trending. Says one interviewee: “The owner of the account usually receives a cut of the campaign loot”.

[...]

Many of the accounts we examined appear to give an aura of authenticity, but in reality they are not authentic. Simply looking at their date of creation won’t give you a hint as to their purpose. We had to dig deeper. The profile pictures and content of some of the accounts gave us the answers we were looking for. A common tactic these accounts utilize is using suggestive pictures of women to bait men into following them, or at least pay attention. In terms of content, many of these accounts tweeted off the same hashtags for days on end and will constantly retweet a specific set of accounts.


Actors participating in this operation rented verified Twitter accounts from their owners (in 2021 a checkmark on Twitter verified a user’s identity), which were repurposed with updated account imagery (T0146.003: Verified Account Asset, T0150.007: Rented Asset, T0150.004: Repurposed Asset, T0145.006: Attractive Person Account Imagery). | +| [I00125 The Agency](../../generated_pages/incidents/I00125.md) | In 2014 threat actors attributed to Russia spread the false narrative that a local chemical plant had leaked toxic fumes. This report discusses aspects of the operation:<br>

[The chemical plant leak] hoax was just one in a wave of similar attacks during the second half of last year. On Dec. 13, two months after a handful of Ebola cases in the United States touched off a minor media panic, many of the same Twitter accounts used to spread the Columbian Chemicals hoax began to post about an outbreak of Ebola in Atlanta. [...] Again, the attention to detail was remarkable, suggesting a tremendous amount of effort. A YouTube video showed a team of hazmat-suited medical workers transporting a victim from the airport. Beyoncé’s recent single “7/11” played in the background, an apparent attempt to establish the video’s contemporaneity. A truck in the parking lot sported the logo of the Hartsfield-Jackson Atlanta International Airport.

Accounts which previously presented as Louisiana locals were repurposed for use in a different campaign, this time presenting as locals of Atlanta, a city over 500 miles away from Louisiana and in a different time zone (T0146: Account Asset, T0097.101: Local Persona, T0143.002: Fabricated Persona, T0151.008: Microblogging Platform, T0150.004: Repurposed Asset).<br>

A video was created which appeared to support the campaign’s narrative (T0087: Develop Video-Based Content), with great attention given to small details which made the video appear more legitimate. | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0150.004: Repurposed Asset + +**Summary**: Repurposed Assets are assets which have been identified as being used previously, but are now being used for different purposes, or have new Presented Personas.

Actors have been documented compromising assets, and then repurposing them to present Inauthentic Personas as part of their operations. + **Tactic**: TA15 Establish Assets @@ -22,4 +69,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0150.005.md b/generated_pages/techniques/T0150.005.md index 6cf77d7..472243c 100644 --- a/generated_pages/techniques/T0150.005.md +++ b/generated_pages/techniques/T0150.005.md @@ -2,6 +2,56 @@ **Summary**: A Compromised Asset is an asset which was originally created or belonged to another person or organisation, but which an actor has gained access to without their consent.

See also MITRE ATT&CK T1078: Valid Accounts. +**Tactic**: TA15 Establish Assets **Parent Technique:** T0150 Asset Origin + + +| Associated Technique | Description | +| --------- | ------------------------- | + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00066 The online war between Qatar and Saudi Arabia](../../generated_pages/incidents/I00066.md) | _"In the early hours of 24 May 2017, a news story appeared on the website of Qatar's official news agency, QNA, reporting that the country's emir, Sheikh Tamim bin Hamad al-Thani, had made an astonishing speech."_<br>

_"[…]_

_"Qatar claimed that the QNA had been hacked. And they said the hack was designed to deliberately spread fake news about the country's leader and its foreign policies. The Qataris specifically blamed UAE, an allegation later repeated by a Washington Post report which cited US intelligence sources. The UAE categorically denied those reports._

_"But the story of the emir's speech unleashed a media free-for-all. Within minutes, Saudi and UAE-owned TV networks - Al Arabiya and Sky News Arabia - picked up on the comments attributed to al-Thani. Both networks accused Qatar of funding extremist groups and of destabilising the region."_

This incident demonstrates how threat actors used a compromised website to lend an inauthentic narrative a level of credibility which caused significant political fallout (T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0152.004: Website Asset, T0150.005: Compromised Asset). | +| [I00071 Russia-aligned hacktivists stir up anti-Ukrainian sentiments in Poland](../../generated_pages/incidents/I00071.md) | “The August 17 [2022] Telegram post [which contained a falsified letter from the Ukrainian Minister of Foreign Affairs asking Poland to rename Belwederska Street in Warsaw — the location of the Russian embassy building — as Stepan Bandera Street, in honor of the far-right nationalist who led the Ukrainian Insurgent Army during WWII] also contained screenshots of Facebook posts that appeared on two Facebook accounts belonging to Polish nationals Piotr Górka, an expert in the history of the Polish Air Force, and Dariusz Walusiak, a Polish historian and documentary maker. The Górka post suggested that he fully supported the Polish government’s decision to change Belwederska Street to Stepan Bandera Street.<br>

“In a statement to the DFRLab, Górka said his account was accessed without his consent. “This is not my post loaded to my Facebook page,” he explained. “My site was hacked, some days ago.” At the time of publishing, Piotr Górka’s post and his Facebook account were no longer accessible.

“The post on Górka’s Facebook page was shared by Dariusz Walusiak’s Facebook account; the account also reposted it on the Facebook walls of more than twenty other Facebook users, including Adam Kalita, currently working at Krakow branch of the Institute of National Remembrance; Jan Kasprzyk, head of the Office for War Veterans and Victims of Oppression; and Alicja Kondraciuk, a Polish public figure living in Krakow.

“Walusiak’s Facebook account is also no longer accessible. Given his work on Polish history and identity, it seems highly unlikely he would support the Bandera measure; the DFRLab has also reached out to him for comment.

“The fact that Joker DPR’s Telegram post included screenshots of their Facebook posts raises the strong possibility that both Facebook accounts were compromised, and that hackers planted false statements on their pages that would seem out of character for them in order to gain further attention to the forged documents.”


In this example, threat actors used compromised accounts of Polish historians who have enough relevant knowledge to plausibly weigh in on the forged letter’s narrative (T0143.003: Impersonated Persona, T0097.101: Local Persona, T0097.108: Expert Persona, T0146: Account Asset, T0150.005: Compromised Asset, T0151.001: Social Media Platform).

This matches T0097.108: Expert Persona because the impersonation exploited Górka and Walusiak’s existing personas as experts in Polish history. | +| [I00101 Pro-Putin Disinformation Warriors Take War of Aggression to Reddit](../../generated_pages/incidents/I00101.md) | This report looks at changes in content posted to communities on Reddit (called subreddits) after teams of volunteer moderators were replaced with what appear to be pro-Russian voices:<br>

The r/antiwar subreddit appears to be a very recent takeover target. With 12,900 members it is not the largest community on Reddit, but this does place it squarely within the top 5% of all communities in terms of membership.

Three months ago a new moderator team was instated by subreddit head u/democracy101. Any posts documenting Russian aggression in Ukraine are now swiftly removed, while the board has been flooded with posts about how Ukraine is losing, or how American “neocons wrecked” the country.

The pinned post from moderator u/n0ahbody proclaims: “People who call for an end to Russian aggressions but not the Western aggressions Russia is reacting to don’t really want peace.” This user takes the view that any negative opinion about Russia is “shaped by what the fanatically Russophobic MSM wants you to think,” and that the United States is not threatened by its neighbors. Russia is.”

When u/n0ahbody took over the sub, the user posted a triumphant and vitriolic diatribe in another pro-Russia subreddit with some 33,500 members, r/EndlessWar. “We are making progress. We are purging the sub of all NAFO and NAFO-adjacent elements. Hundreds of them have been banned over the last 24 hours for various rule infractions, for being NAFO or NAFO-adjacent,” the user said, referencing the grassroots, pro-Ukrainian North Atlantic Fella Organization (NAFO) meme movement.

Several former users have reported they have indeed been banned from r/antiwar since the change in moderators. “If this subreddit cannot be explicitly against the invasion of Ukraine it will never truly be anti-war,” wrote one user Halcyon_Rein, in the antiwar subreddit on September 6. They then edited the post to say, “Edit: btw, I got f**king banned for this 💀💀💀”


A community hosted on Reddit was taken over by new moderators (T0151.011: Community Sub-Forum, T0150.005: Compromised Asset). These moderators removed content posted to the community which favoured Ukraine over Russia (T0146.004: Administrator Account Asset, T0151.011: Community Sub-Forum, T0124: Suppress Opposition). | +| [I00129 Teen who hacked Joe Biden and Bill Gates' Twitter accounts sentenced to three years in prison](../../generated_pages/incidents/I00129.md) | An 18-year-old hacker who pulled off a huge breach in 2020, infiltrating several high profile Twitter accounts to solicit bitcoin transactions, has agreed to serve three years in prison for his actions.

Graham Ivan Clark, of Florida, was 17 years old at the time of the hack in July, during which he took over a number of major accounts including those of Joe Biden, Bill Gates and Kim Kardashian West.

Once he accessed them, Clark tweeted a link to a bitcoin address and wrote “all bitcoin sent to our address below will be sent back to you doubled!” According to court documents, Clark made more than $100,000 from the scheme, which his lawyers say he has since returned.

Clark was able to access the accounts after convincing an employee at Twitter he worked in the company’s information technology department, according to the Tampa Bay Times.


In this example a threat actor gained access to Twitter’s customer service portal through social engineering (T0146.004: Administrator Account Asset, T0150.005: Compromised Asset, T0151.008: Microblogging Platform), which they used to take over accounts of public figures (T0146.003: Verified Account Asset, T0143.003: Impersonated Persona, T0150.005: Compromised Asset, T0151.008: Microblogging Platform).

The threat actor used these compromised accounts to trick their followers into sending bitcoin to their wallet (T0148.009: Cryptocurrency Wallet). | + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0150.005: Compromised Asset + +**Summary**: A Compromised Asset is an asset which was originally created or belonged to another person or organisation, but which an actor has gained access to without their consent.<br>

See also MITRE ATT&CK T1078: Valid Accounts. + +**Tactic**: TA15 Establish Assets **Parent Technique:** T0150 Asset Origin + + +| Associated Technique | Description | +| --------- | ------------------------- | + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00066 The online war between Qatar and Saudi Arabia](../../generated_pages/incidents/I00066.md) | _"In the early hours of 24 May 2017, a news story appeared on the website of Qatar's official news agency, QNA, reporting that the country's emir, Sheikh Tamim bin Hamad al-Thani, had made an astonishing speech."_<br>

_"[…]_

_"Qatar claimed that the QNA had been hacked. And they said the hack was designed to deliberately spread fake news about the country's leader and its foreign policies. The Qataris specifically blamed UAE, an allegation later repeated by a Washington Post report which cited US intelligence sources. The UAE categorically denied those reports._

_"But the story of the emir's speech unleashed a media free-for-all. Within minutes, Saudi and UAE-owned TV networks - Al Arabiya and Sky News Arabia - picked up on the comments attributed to al-Thani. Both networks accused Qatar of funding extremist groups and of destabilising the region."_

This incident demonstrates how threat actors used a compromised website to lend an inauthentic narrative a level of credibility which caused significant political fallout (T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0152.004: Website Asset, T0150.005: Compromised Asset). | +| [I00071 Russia-aligned hacktivists stir up anti-Ukrainian sentiments in Poland](../../generated_pages/incidents/I00071.md) | “The August 17 [2022] Telegram post [which contained a falsified letter from the Ukrainian Minister of Foreign Affairs asking Poland to rename Belwederska Street in Warsaw — the location of the Russian embassy building — as Stepan Bandera Street, in honor of the far-right nationalist who led the Ukrainian Insurgent Army during WWII] also contained screenshots of Facebook posts that appeared on two Facebook accounts belonging to Polish nationals Piotr Górka, an expert in the history of the Polish Air Force, and Dariusz Walusiak, a Polish historian and documentary maker. The Górka post suggested that he fully supported the Polish government’s decision to change Belwederska Street to Stepan Bandera Street.<br>

“In a statement to the DFRLab, Górka said his account was accessed without his consent. “This is not my post loaded to my Facebook page,” he explained. “My site was hacked, some days ago.” At the time of publishing, Piotr Górka’s post and his Facebook account were no longer accessible.

“The post on Górka’s Facebook page was shared by Dariusz Walusiak’s Facebook account; the account also reposted it on the Facebook walls of more than twenty other Facebook users, including Adam Kalita, currently working at Krakow branch of the Institute of National Remembrance; Jan Kasprzyk, head of the Office for War Veterans and Victims of Oppression; and Alicja Kondraciuk, a Polish public figure living in Krakow.

“Walusiak’s Facebook account is also no longer accessible. Given his work on Polish history and identity, it seems highly unlikely he would support the Bandera measure; the DFRLab has also reached out to him for comment.

“The fact that Joker DPR’s Telegram post included screenshots of their Facebook posts raises the strong possibility that both Facebook accounts were compromised, and that hackers planted false statements on their pages that would seem out of character for them in order to gain further attention to the forged documents.”


In this example, threat actors used compromised accounts of Polish historians who have enough relevant knowledge to plausibly weigh in on the forged letter’s narrative (T0143.003: Impersonated Persona, T0097.101: Local Persona, T0097.108: Expert Persona, T0146: Account Asset, T0150.005: Compromised Asset, T0151.001: Social Media Platform).

This matches T0097.108: Expert Persona because the impersonation exploited Górka and Walusiak’s existing personas as experts in Polish history. | +| [I00101 Pro-Putin Disinformation Warriors Take War of Aggression to Reddit](../../generated_pages/incidents/I00101.md) | This report looks at changes in content posted to communities on Reddit (called subreddits) after teams of volunteer moderators were replaced with what appear to be pro-Russian voices:<br>

The r/antiwar subreddit appears to be a very recent takeover target. With 12,900 members it is not the largest community on Reddit, but this does place it squarely within the top 5% of all communities in terms of membership.

Three months ago a new moderator team was instated by subreddit head u/democracy101. Any posts documenting Russian aggression in Ukraine are now swiftly removed, while the board has been flooded with posts about how Ukraine is losing, or how American “neocons wrecked” the country.

The pinned post from moderator u/n0ahbody proclaims: “People who call for an end to Russian aggressions but not the Western aggressions Russia is reacting to don’t really want peace.” This user takes the view that any negative opinion about Russia is “shaped by what the fanatically Russophobic MSM wants you to think,” and that the United States is not threatened by its neighbors. Russia is.”

When u/n0ahbody took over the sub, the user posted a triumphant and vitriolic diatribe in another pro-Russia subreddit with some 33,500 members, r/EndlessWar. “We are making progress. We are purging the sub of all NAFO and NAFO-adjacent elements. Hundreds of them have been banned over the last 24 hours for various rule infractions, for being NAFO or NAFO-adjacent,” the user said, referencing the grassroots, pro-Ukrainian North Atlantic Fella Organization (NAFO) meme movement.

Several former users have reported they have indeed been banned from r/antiwar since the change in moderators. “If this subreddit cannot be explicitly against the invasion of Ukraine it will never truly be anti-war,” wrote one user Halcyon_Rein, in the antiwar subreddit on September 6. They then edited the post to say, “Edit: btw, I got f**king banned for this 💀💀💀”


A community hosted on Reddit was taken over by new moderators (T0151.011: Community Sub-Forum, T0150.005: Compromised Asset). These moderators removed content posted to the community which favoured Ukraine over Russia (T0146.004: Administrator Account Asset, T0151.011: Community Sub-Forum, T0124: Suppress Opposition). | +| [I00129 Teen who hacked Joe Biden and Bill Gates' Twitter accounts sentenced to three years in prison](../../generated_pages/incidents/I00129.md) | An 18-year-old hacker who pulled off a huge breach in 2020, infiltrating several high profile Twitter accounts to solicit bitcoin transactions, has agreed to serve three years in prison for his actions.

Graham Ivan Clark, of Florida, was 17 years old at the time of the hack in July, during which he took over a number of major accounts including those of Joe Biden, Bill Gates and Kim Kardashian West.

Once he accessed them, Clark tweeted a link to a bitcoin address and wrote “all bitcoin sent to our address below will be sent back to you doubled!” According to court documents, Clark made more than $100,000 from the scheme, which his lawyers say he has since returned.

Clark was able to access the accounts after convincing an employee at Twitter he worked in the company’s information technology department, according to the Tampa Bay Times.


In this example a threat actor gained access to Twitter’s customer service portal through social engineering (T0146.004: Administrator Account Asset, T0150.005: Compromised Asset, T0151.008: Microblogging Platform), which they used to take over accounts of public figures (T0146.003: Verified Account Asset, T0143.003: Impersonated Persona, T0150.005: Compromised Asset, T0151.008: Microblogging Platform).

The threat actor used these compromised accounts to trick their followers into sending bitcoin to their wallet (T0148.009: Cryptocurrency Wallet). | + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0150.005: Compromised Asset + +**Summary**: A Compromised Asset is an asset which was originally created or belonged to another person or organisation, but which an actor has gained access to without their consent.<br>

See also MITRE ATT&CK T1078: Valid Accounts. + **Tactic**: TA15 Establish Assets @@ -23,4 +73,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0150.006.md b/generated_pages/techniques/T0150.006.md index 27018ec..44a4970 100644 --- a/generated_pages/techniques/T0150.006.md +++ b/generated_pages/techniques/T0150.006.md @@ -2,6 +2,49 @@ **Summary**: A Purchased Asset is an asset which actors paid for the ownership of.<br>

For example, threat actors have been observed selling compromised social media accounts on dark web marketplaces, which can be used to disguise operation activity. +**Tactic**: TA15 Establish Assets **Parent Technique:** T0150 Asset Origin + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0150.006: Purchased Asset + +**Summary**: A Purchased Asset is an asset which actors paid for the ownership of.

For example, threat actors have been observed selling compromised social media accounts on dark web marketplaces, which can be used to disguise operation activity. + +**Tactic**: TA15 Establish Assets **Parent Technique:** T0150 Asset Origin + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00109 Coordinated Facebook Pages Designed to Fund a White Supremacist Agenda](../../generated_pages/incidents/I00109.md) | This report examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at technical indicators associated with the Suavelos website, and attributions which can be made as a consequence:

[The Google AdSense tag set up on Suavelos.eu was also found on the following domains, indicating that they are controlled by the same actor;] Alabastro.eu: an online shop to buy “white nationalists” t-shirts [and] ARPAC.eu: the website of a registered non-profit organisation advocating to lift regulation on gun control in France.

Other domains attributed to Suavelos (T0149.001: Domain Asset) reveal a website set up to sell merchandise (T0152.004: Website Asset, T0148.004: Payment Processing Capability, T0061: Sell Merchandise), and a website hosting a registered French non-profit (T0152.004: Website Asset, T0097.207: NGO Persona).

To learn more about the suavelos.eu domain, we collected the following data: The domain is hosted on OVH; The owner’s identity is protected; The IP Address of the server is 94.23.253.173, which is shared with 20 other domains.

The relative low number of websites hosted on this IP address could indicate that they all belong to the same people, and are hosted on the same private server.


Suavelos registered a domain using the web hosting provider OVH (T0149.001: Domain Asset, T0152.003: Website Hosting Platform, T0150.006: Purchased Asset). The site’s IP address reveals a server hosting other domains potentially owned by the actors (T0149.005: Server Asset, T0149.006: IP Address Asset). | + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0150.006: Purchased Asset + +**Summary**: A Purchased Asset is an asset which actors paid for the ownership of.<br>

For example, threat actors have been observed selling compromised social media accounts on dark web marketplaces, which can be used to disguise operation activity. + **Tactic**: TA15 Establish Assets @@ -20,4 +63,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0150.007.md b/generated_pages/techniques/T0150.007.md index ffc455a..fc7aa1c 100644 --- a/generated_pages/techniques/T0150.007.md +++ b/generated_pages/techniques/T0150.007.md @@ -2,6 +2,52 @@ **Summary**: A Rented Asset is an asset which actors are temporarily renting or subscribing to.

For example, threat actors have been observed renting temporary access to legitimate accounts on online platforms in order to disguise operation activity. +**Tactic**: TA15 Establish Assets **Parent Technique:** T0150 Asset Origin + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00064 Tinder nightmares: the promise and peril of political bots](../../generated_pages/incidents/I00064.md) | “In the days leading up to the UK’s [2019] general election, youths looking for love online encountered a whole new kind of Tinder nightmare. A group of young activists built a Tinder chatbot to co-opt profiles and persuade swing voters to support Labour. The bot accounts sent 30,000-40,000 messages to targeted 18-25 year olds in battleground constituencies like Dudley North, which Labour ended up winning by only 22 votes. [...]

“The activists maintain that the project was meant to foster democratic engagement. But screenshots of the bots’ activity expose a harsher reality. Images of conversations between real users and these bots, posted on i-D, Mashable, as well as on Fowler and Goodman’s public Twitter accounts, show that the bots did not identify themselves as automated accounts, instead posing as the user whose profile they had taken over. While conducting research for this story, it turned out that a number of [the reporters’ friends] living in Oxford had interacted with the bot in the lead up to the election and had no idea that it was not a real person.”


In this example people offered up their real accounts for the automation of political messaging; the actors convinced users to hand over access to their accounts for use in the operation. The actors maintained the accounts’ existing personas, and presented themselves as potential romantic suitors for legitimate platform users (T0097.109: Romantic Suitor Persona, T0143.003: Impersonated Persona, T0146: Account Asset, T0150.007: Rented Asset, T0151.017: Dating Platform). | +| [I00113 Inside the Shadowy World of Disinformation for Hire in Kenya](../../generated_pages/incidents/I00113.md) | Researchers at Mozilla examined influence operations targeting Kenyan citizens on Twitter in 2021, providing “a grim window into the booming and shadowy industry of Twitter influencers for political hire here in Kenya”. The report touches upon how actors gained access to Twitter accounts, and what personas they presented:<br>

Verified accounts are complicit. One influencer we spoke to mentioned that the people who own coveted “blue check” accounts will often rent them out for disinformation campaigns. These verified accounts can improve the campaign’s chances of trending. Says one interviewee: “The owner of the account usually receives a cut of the campaign loot”.

[...]

Many of the accounts we examined appear to give an aura of authenticity, but in reality they are not authentic. Simply looking at their date of creation won’t give you a hint as to their purpose. We had to dig deeper. The profile pictures and content of some of the accounts gave us the answers we were looking for. A common tactic these accounts utilize is using suggestive pictures of women to bait men into following them, or at least pay attention. In terms of content, many of these accounts tweeted off the same hashtags for days on end and will constantly retweet a specific set of accounts.


Actors participating in this operation rented verified Twitter accounts from their owners (in 2021 a checkmark on Twitter verified a user’s identity), which were repurposed with updated account imagery (T0146.003: Verified Account Asset, T0150.007: Rented Asset, T0150.004: Repurposed Asset, T0145.006: Attractive Person Account Imagery). | + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0150.007: Rented Asset + +**Summary**: A Rented Asset is an asset which actors are temporarily renting or subscribing to.<br>

For example, threat actors have been observed renting temporary access to legitimate accounts on online platforms in order to disguise operation activity. + +**Tactic**: TA15 Establish Assets **Parent Technique:** T0150 Asset Origin + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00064 Tinder nightmares: the promise and peril of political bots](../../generated_pages/incidents/I00064.md) | “In the days leading up to the UK’s [2019] general election, youths looking for love online encountered a whole new kind of Tinder nightmare. A group of young activists built a Tinder chatbot to co-opt profiles and persuade swing voters to support Labour. The bot accounts sent 30,000-40,000 messages to targeted 18-25 year olds in battleground constituencies like Dudley North, which Labour ended up winning by only 22 votes. [...]

“The activists maintain that the project was meant to foster democratic engagement. But screenshots of the bots’ activity expose a harsher reality. Images of conversations between real users and these bots, posted on i-D, Mashable, as well as on Fowler and Goodman’s public Twitter accounts, show that the bots did not identify themselves as automated accounts, instead posing as the user whose profile they had taken over. While conducting research for this story, it turned out that a number of [the reporters’ friends] living in Oxford had interacted with the bot in the lead up to the election and had no idea that it was not a real person.”


In this example people offered up their real accounts for the automation of political messaging; the actors convinced users to hand over access to their accounts for use in the operation. The actors maintained the accounts’ existing personas, and presented themselves as potential romantic suitors for legitimate platform users (T0097.109: Romantic Suitor Persona, T0143.003: Impersonated Persona, T0146: Account Asset, T0150.007: Rented Asset, T0151.017: Dating Platform). | +| [I00113 Inside the Shadowy World of Disinformation for Hire in Kenya](../../generated_pages/incidents/I00113.md) | Researchers at Mozilla examined influence operations targeting Kenyan citizens on Twitter in 2021, providing “a grim window into the booming and shadowy industry of Twitter influencers for political hire here in Kenya”. The report touches upon how actors gained access to Twitter accounts, and what personas they presented:<br>

Verified accounts are complicit. One influencer we spoke to mentioned that the people who own coveted “blue check” accounts will often rent them out for disinformation campaigns. These verified accounts can improve the campaign’s chances of trending. Says one interviewee: “The owner of the account usually receives a cut of the campaign loot”.

[...]

Many of the accounts we examined appear to give an aura of authenticity, but in reality they are not authentic. Simply looking at their date of creation won’t give you a hint as to their purpose. We had to dig deeper. The profile pictures and content of some of the accounts gave us the answers we were looking for. A common tactic these accounts utilize is using suggestive pictures of women to bait men into following them, or at least pay attention. In terms of content, many of these accounts tweeted off the same hashtags for days on end and will constantly retweet a specific set of accounts.


Actors participating in this operation rented verified Twitter accounts from their owners (in 2021 a checkmark on Twitter verified a user’s identity), which were repurposed with updated account imagery (T0146.003: Verified Account Asset, T0150.007: Rented Asset, T0150.004: Repurposed Asset, T0145.006: Attractive Person Account Imagery). | + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0150.007: Rented Asset + +**Summary**: A Rented Asset is an asset which actors are temporarily renting or subscribing to.<br>

For example, threat actors have been observed renting temporary access to legitimate accounts on online platforms in order to disguise operation activity. + **Tactic**: TA15 Establish Assets @@ -21,4 +67,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0150.008.md b/generated_pages/techniques/T0150.008.md index b1a6b38..96fbc3d 100644 --- a/generated_pages/techniques/T0150.008.md +++ b/generated_pages/techniques/T0150.008.md @@ -2,6 +2,48 @@ **Summary**: A Bulk Created Asset is an asset which was created alongside many other instances of the same asset.

Actors have been observed bulk creating Accounts on Social Media Platforms such as Facebook. Indicators of bulk asset creation include its creation date, assets’ naming conventions, their configuration (e.g. templated personas, visually similar profile pictures), or their activity (e.g. post timings, narratives posted). +**Tactic**: TA15 Establish Assets **Parent Technique:** T0150 Asset Origin + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0150.008: Bulk Created Asset + +**Summary**: A Bulk Created Asset is an asset which was created alongside many other instances of the same asset.

Actors have been observed bulk creating Accounts on Social Media Platforms such as Facebook. Indicators of bulk asset creation include its creation date, assets’ naming conventions, their configuration (e.g. templated personas, visually similar profile pictures), or their activity (e.g. post timings, narratives posted). + +**Tactic**: TA15 Establish Assets **Parent Technique:** T0150 Asset Origin + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0150.008: Bulk Created Asset + +**Summary**: A Bulk Created Asset is an asset which was created alongside many other instances of the same asset.

Actors have been observed bulk creating Accounts on Social Media Platforms such as Facebook. Indicators of bulk asset creation include its creation date, assets’ naming conventions, their configuration (e.g. templated personas, visually similar profile pictures), or their activity (e.g. post timings, narratives posted). + **Tactic**: TA15 Establish Assets @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0150.md b/generated_pages/techniques/T0150.md index fa47f11..bcfa17d 100644 --- a/generated_pages/techniques/T0150.md +++ b/generated_pages/techniques/T0150.md @@ -2,6 +2,48 @@ **Summary**: Asset Origin contains a list of ways that an actor can obtain an asset. For example, they can create new accounts on online platforms, or they can compromise existing accounts or websites. +**Tactic**: TA15 Establish Assets + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0150: Asset Origin + +**Summary**: Asset Origin contains a list of ways that an actor can obtain an asset. For example, they can create new accounts on online platforms, or they can compromise existing accounts or websites. + +**Tactic**: TA15 Establish Assets + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0150: Asset Origin + +**Summary**: Asset Origin contains a list of ways that an actor can obtain an asset. For example, they can create new accounts on online platforms, or they can compromise existing accounts or websites. + **Tactic**: TA15 Establish Assets @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0151.001.md b/generated_pages/techniques/T0151.001.md index 26f1fba..b46a42d 100644 --- a/generated_pages/techniques/T0151.001.md +++ b/generated_pages/techniques/T0151.001.md @@ -2,6 +2,62 @@ **Summary**: Examples of popular Social Media Platforms include Facebook, Instagram, and VK.

Social Media Platforms allow users to create Accounts, which they can configure to present themselves to other platform users. This typically involves Establishing Account Imagery and Presenting a Persona.

Social Media Platforms typically allow the creation of Online Community Groups and Online Community Pages.

Accounts on Social Media Platforms are typically presented with a feed of content posted to the platform. The content that populates this feed can be aggregated by the platform’s proprietary Content Recommendation Algorithm, or users can “friend” or “follow” other accounts to add their posts to their feed.

Many Social Media Platforms also allow users to send direct messages to other users on the platform. +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0151 Digital Community Hosting Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00097 Report: Not Just Algorithms](../../generated_pages/incidents/I00097.md) | This report explores the role of four systems (recommender systems, content moderation systems, ad approval systems and ad management systems) in creating risks around eating disorders.

[...]

Content recommender systems can create risks. We created and primed ‘fake’ accounts for 16-year old Australians and found that some recommender systems will promote pro-eating disorder content to children.

Specifically: On TikTok, 0% of the content recommended was classified as pro-eating disorder content; On Instagram, 23% of the content recommended was classified as pro-eating disorder content; On X, 67% of content recommended was classified as pro-eating disorder content (and disturbingly, another 13% displayed self-harm imagery).


Content recommendation algorithms developed by Instagram (T0151.001: Social Media Platform, T0153.006: Content Recommendation Algorithm) and X (T0151.008: Microblogging Platform, T0153.006: Content Recommendation Algorithm) promoted harmful content to an account presenting as a 16-year-old Australian. | +| [I00106 Facebook Is Being Flooded With Gross AI-Generated Images of Hurricane Helene Devastation](../../generated_pages/incidents/I00106.md) | As families desperately seek to find missing loved ones and communities grapple with immeasurable losses of both life and property in the wake of [2024’s] Hurricane Helene, AI slop scammers appear to be capitalizing on the moment for personal gain.<br>

A Facebook account called "Coastal Views" usually shares calmer AI imagery of nature-filled beachside scenes. The account's banner image showcases a signpost reading "OBX Live," OBX being shorthand for North Carolina's Outer Banks islands.

But starting this weekend, the account shifted its approach dramatically, as first flagged by a social media user on X.

Instead of posting "photos" of leaping dolphins and sandy beaches, the account suddenly started publishing images of flooded mountain neighborhoods, submerged houses, and dogs sitting on top of roofs.

But instead of spreading vital information to those affected by the natural disaster, or at the very least sharing real photos of the destruction, the account is seemingly trying to use AI to cash in on all the attention the hurricane has been getting.

The account links to an Etsy page for a business called" OuterBanks2023," where somebody who goes by "Alexandr" sells AI-generated prints of horses touching snouts with sea turtles, Santa running down the shoreline with a reindeer, and sunsets over ocean waves.


A Facebook page which presented itself as being associated with North Carolina which posted AI generated images changed to posting AI generated images of hurricane damage after Hurricane Helene hit North Carolina (T0151.003: Online Community Page, T0151.001: Social Media Platform, T0115: Post Content, T0086.002: Develop AI-Generated Images (Deepfakes), T0068: Respond to Breaking News Event or Active Crisis).

The account included links (T0122: Direct Users to Alternative Platforms) to an account on Etsy, which sold prints of AI generated images (T0146: Account Asset, T0148.007: eCommerce Platform). | +| [I00108 How you thought you support the animals and you ended up funding white supremacists](../../generated_pages/incidents/I00108.md) | This article examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology:

Suavelos uses Facebook and other platforms to amplify its message. In order to bypass the platforms’ community standards and keep their public pages active, Facebook pages such as “I support the police” are a good vehicle to spread a specific agenda without claiming to be racist. In looking back at this Facebook page, we followed Facebook’s algorithm for related pages and found suggested Facebook pages

[...]

This amplification strategy on Facebook is successful, as according to SimilarWeb figures, it attracts around 111,000 visits every month on the Suavelos.eu website.

[...]

Revenue through online advertisements can be achieved by different platforms through targeted advertisements, like Google Adsense or Doubleclick, or related and similar sponsored content, such as Taboola. Accordingly, Suavelos.eu uses both of these websites to display advertisements and consequently receives funding from such advertisements.

Once visitors are on the website supporting its advertisement revenue, Suavelos’ goal is to then turn these visitors into regular members of Suavelos network through donations or fees, or have them continue to support Suavelos.


Suevelos created a variety of pages on Facebook which presented as centring on prosocial causes. Facebook’s algorithm helped direct users to these pages (T0092: Build Network, T0151.001: Social Media Platform, T0153.006: Content Recommendation Algorithm, T0151.003: Online Community Page, T0143.208: Social Cause Persona).

Suevelos used these pages to generate traffic for their WordPress site (T0122: Direct Users to Alternative Platforms, T0152.003: Website Hosting Platform, T0152.004: Website Asset), which used accounts on a variety of online advertising platforms to host adverts (T0146: Account Asset, T0153.005: Online Advertising Platform). | +| [I00114 ‘Carol’s Journey’: What Facebook knew about how it radicalized users](../../generated_pages/incidents/I00114.md) | This report examines internal Facebook communications which reveal employees’ concerns about how the platform’s algorithm was recommending users join extremist conspiracy groups.

In summer 2019, a new Facebook user named Carol Smith signed up for the platform, describing herself as a politically conservative mother from Wilmington, North Carolina. Smith’s account indicated an interest in politics, parenting and Christianity and followed a few of her favorite brands, including Fox News and then-President Donald Trump.

Though Smith had never expressed interest in conspiracy theories, in just two days Facebook was recommending she join groups dedicated to QAnon, a sprawling and baseless conspiracy theory and movement that claimed Trump was secretly saving the world from a cabal of pedophiles and Satanists.

Smith didn’t follow the recommended QAnon groups, but whatever algorithm Facebook was using to determine how she should engage with the platform pushed ahead just the same. Within one week, Smith’s feed was full of groups and pages that had violated Facebook’s own rules, including those against hate speech and disinformation.

Smith wasn’t a real person. A researcher employed by Facebook invented the account, along with those of other fictitious “test users” in 2019 and 2020, as part of an experiment in studying the platform’s role in misinforming and polarizing users through its recommendations systems.

That researcher said Smith’s Facebook experience was “a barrage of extreme, conspiratorial, and graphic content.”


Facebook’s Algorithm suggested users join groups which supported the QAnon movement (T0151.001: Social Media Platform, T0151.002: Online Community Group, T0153.006: Content Recommendation Algorithm, T0097.208: Social Cause Persona).

Further investigation by Facebook uncovered that its advertising platform had been used to promote QAnon narratives (T0146: Account Asset, T0114: Deliver Ads, T0153.005: Online Advertising Platform):

For years, company researchers had been running experiments like Carol Smith’s to gauge the platform’s hand in radicalizing users, according to the documents seen by NBC News.

This internal work repeatedly found that recommendation tools pushed users into extremist groups, findings that helped inform policy changes and tweaks to recommendations and news feed rankings. Those rankings are a tentacled, ever-evolving system widely known as “the algorithm” that pushes content to users. But the research at that time stopped well short of inspiring any movement to change the groups and pages themselves.

That reluctance was indicative of “a pattern at Facebook,” Haugen told reporters this month. “They want the shortest path between their current policies and any action.”

[...]

By summer 2020, Facebook was hosting thousands of private QAnon groups and pages, with millions of members and followers, according to an unreleased internal investigation.

A year after the FBI designated QAnon as a potential domestic terrorist threat in the wake of standoffs, alleged planned kidnappings, harassment campaigns and shootings, Facebook labeled QAnon a “Violence Inciting Conspiracy Network” and banned it from the platform, along with militias and other violent social movements. A small team working across several of Facebook’s departments found its platforms had hosted hundreds of ads on Facebook and Instagram worth thousands of dollars and millions of views, “praising, supporting, or representing” the conspiracy theory.

[...]

For many employees inside Facebook, the enforcement came too late, according to posts left on Workplace, the company’s internal message board.

“We’ve known for over a year now that our recommendation systems can very quickly lead users down the path to conspiracy theories and groups,” one integrity researcher, whose name had been redacted, wrote in a post announcing she was leaving the company. “This fringe group has grown to national prominence, with QAnon congressional candidates and QAnon hashtags and groups trending in the mainstream. We were willing to act only * after * things had spiraled into a dire state.”

While Facebook’s ban initially appeared effective, a problem remained: The removal of groups and pages didn’t wipe out QAnon’s most extreme followers, who continued to organize on the platform.

“There was enough evidence to raise red flags in the expert community that Facebook and other platforms failed to address QAnon’s violent extremist dimension,” said Marc-André Argentino, a research fellow at King’s College London’s International Centre for the Study of Radicalisation, who has extensively studied QAnon.

Believers simply rebranded as anti-child-trafficking groups or migrated to other communities, including those around the anti-vaccine movement.

[...]

These conspiracy groups had become the fastest-growing groups on Facebook, according to the report, but Facebook wasn’t able to control their “meteoric growth,” the researchers wrote, “because we were looking at each entity individually, rather than as a cohesive movement.” A Facebook spokesperson told BuzzFeed News it took many steps to limit election misinformation but that it was unable to catch everything.
| +| [I00115 How Facebook shapes your feed](../../generated_pages/incidents/I00115.md) | This 2021 report by The Washington Post explains the mechanics of Facebook’s algorithm (T0151.001: Social Media Platform, T0153.006: Content Recommendation Algorithm):

In its early years, Facebook’s algorithm prioritized signals such as likes, clicks and comments to decide which posts to amplify. Publishers, brands and individual users soon learned how to craft posts and headlines designed to induce likes and clicks, giving rise to what came to be known as “clickbait.” By 2013, upstart publishers such as Upworthy and ViralNova were amassing tens of millions of readers with articles designed specifically to game Facebook’s news feed algorithm.

Facebook realized that users were growing wary of misleading teaser headlines, and the company recalibrated its algorithm in 2014 and 2015 to downgrade clickbait and focus on new metrics, such as the amount of time a user spent reading a story or watching a video, and incorporating surveys on what content users found most valuable. Around the same time, its executives identified video as a business priority, and used the algorithm to boost “native” videos shared directly to Facebook. By the mid-2010s, the news feed had tilted toward slick, professionally produced content, especially videos that would hold people’s attention.

In 2016, however, Facebook executives grew worried about a decline in “original sharing.” Users were spending so much time passively watching and reading that they weren’t interacting with each other as much. Young people in particular shifted their personal conversations to rivals such as Snapchat that offered more intimacy.

Once again, Facebook found its answer in the algorithm: It developed a new set of goal metrics that it called “meaningful social interactions,” designed to show users more posts from friends and family, and fewer from big publishers and brands. In particular, the algorithm began to give outsize weight to posts that sparked lots of comments and replies.

The downside of this approach was that the posts that sparked the most comments tended to be the ones that made people angry or offended them, the documents show. Facebook became an angrier, more polarizing place. It didn’t help that, starting in 2017, the algorithm had assigned reaction emoji — including the angry emoji — five times the weight of a simple “like,” according to company documents.

[...]

Internal documents show Facebook researchers found that, for the most politically oriented 1 million American users, nearly 90 percent of the content that Facebook shows them is about politics and social issues. Those groups also received the most misinformation, especially a set of users associated with mostly right-leaning content, who were shown one misinformation post out of every 40, according to a document from June 2020.

One takeaway is that Facebook’s algorithm isn’t a runaway train. The company may not directly control what any given user posts, but by choosing which types of posts will be seen, it sculpts the information landscape according to its business priorities. Some within the company would like to see Facebook use the algorithm to explicitly promote certain values, such as democracy and civil discourse. Others have suggested that it develop and prioritize new metrics that align with users’ values, as with a 2020 experiment in which the algorithm was trained to predict what posts they would find “good for the world” and “bad for the world,” and optimize for the former.
| +| [I00128 #TrollTracker: Outward Influence Operation From Iran](../../generated_pages/incidents/I00128.md) | [Meta removed a network of assets for coordinated inauthentic behaviour. One page] in the network, @StopMEK, was promoting views against the People’s Mujahedin of Iran (MEK), the largest and most active political opposition group against the Islamic Republic of Iran Leadership.

The content on the page drew narratives showing parallels between the Islamic State of Iraq and Syria (ISIS) and the MEK.

Apart from images and memes, the @StopMEK page shared a link to an archived report on how the United States was monitoring the MEK’s movement in Iran in the mid-1990’s. The file was embedded as a QR code on one of the page’s images.


In this example a Facebook page presented itself as focusing on a political cause (T0097.208: Social Cause Persona, T0151.001: Social Media Platform, T0151.002: Online Community Group). Within the page it embedded a QR code (T0122: Direct Users to Alternative Platforms, T0153.004: QR Code Asset), which took users to a document hosted on another website (T0152.004: Website Asset). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0151.001: Social Media Platform + +**Summary**: Examples of popular Social Media Platforms include Facebook, Instagram, and VK.

Social Media Platforms allow users to create Accounts, which they can configure to present themselves to other platform users. This typically involves Establishing Account Imagery and Presenting a Persona.

Social Media Platforms typically allow the creation of Online Community Groups and Online Community Pages.

Accounts on Social Media Platforms are typically presented with a feed of content posted to the platform. The content that populates this feed can be aggregated by the platform’s proprietary Content Recommendation Algorithm, or users can “friend” or “follow” other accounts to add their posts to their feed.

Many Social Media Platforms also allow users to send direct messages to other users on the platform. + +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0151 Digital Community Hosting Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00097 Report: Not Just Algorithms](../../generated_pages/incidents/I00097.md) | This report explores the role of four systems (recommender systems, content moderation systems, ad approval systems and ad management systems) in creating risks around eating disorders.

[...]

Content recommender systems can create risks. We created and primed ‘fake’ accounts for 16-year old Australians and found that some recommender systems will promote pro-eating disorder content to children.

Specifically: On TikTok, 0% of the content recommended was classified as pro-eating disorder content; On Instagram, 23% of the content recommended was classified as pro-eating disorder content; On X, 67% of content recommended was classified as pro-eating disorder content (and disturbingly, another 13% displayed self-harm imagery).


Content recommendation algorithms developed by Instagram (T0151.001: Social Media Platform, T0153.006: Content Recommendation Algorithm) and X (T0151.008: Microblogging Platform, T0153.006: Content Recommendation Algorithm) promoted harmful content to an account presenting as a 16-year-old Australian. | +| [I00102 Ignore The Poway Synagogue Shooter’s Manifesto: Pay Attention To 8chan’s /pol/ Board](../../generated_pages/incidents/I00102.md) | On April 27, 2019, at around 11:30 a.m. local time, a young man with a semi-automatic rifle walked into the Chabad of Poway Synagogue in Poway, California. He opened fire, killing one worshipper and wounding three others. In the hours since the shooting, a manifesto, believed to be written by the shooter, began circulating online. Evidence has also surfaced that, like the Christchurch Mosque shooter, this killer began his rampage with a post on 8chan’s /pol/ board.

Although both of these attacks may seem different, since they targeted worshippers of different faiths, both shooters were united by the same fascist ideology. They were also both radicalized in the same place: 8chan’s /pol/ board.

This has been corroborated by posts on the board itself, where “anons,” as the posters call themselves, recirculated the shooter’s since-deleted post. In it, the alleged shooter claims to have been “lurking” on the site for a year and a half. He includes a link to a livestream of his rampage — which thankfully does not appear to have worked — and he also includes a pastebin link to his manifesto.

The very first response to his announcement was another anon cheering him on and telling him to “get the high score,” AKA, kill a huge number of people.


Before carrying out a mass shooting, the shooter posted a thread to 8chan’s /pol/ board. The post directed users to a variety of different platforms (T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0122: Direct Users to Alternative Platforms); a Facebook account on which the shooter attempted to livestream the shooting (T0146: Account Asset, T0151.001: Social Media Platform); and a manifesto they had written hosted on pastebin (T0146.006: Open Access Platform, T0152.005: Paste Platform, T0115: Post Content) and uploaded to the file sharing platform Mediafire (T0152.010: File Hosting Platform, T0085.004: Develop Document).

The report looks deeper into 8chan’s /pol/ board:

8chan is a large website, which includes a number of different discussion groups about everything from anime to left-wing politics. /pol/ is one particularly active board on the website, and it is best described as a gathering place for extremely online neo-Nazis.

[...]

I’ve browsed /pol/ on an almost daily basis since the Christchurch shooting. It has not been difficult to find calls for violence. On Monday, March 25 of this year, I ran across evidence of anons translating the Christchurch shooter’s manifesto into other languages in an attempt to inspire more shootings across the globe.

This tactic can work, and today’s shooting is proof. The Poway Synagogue shooter directly cited the Christchurch shooter as his inspiration, saying he decided to carry out his attack roughly two weeks after that shooting. On /pol/, many anons refer to the Christchurch shooter, Brenton Tarrant, as “Saint Tarrant,” complete with medieval-inspired iconography.


Manifestos posted to 8chan are translated and reshared by other platform users (T0101: Create Localised Content, T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0084.004: Appropriate Content).

When I began looking through /pol/ right after the Poway Synagogue shooting, I came across several claims that the shootings had been a “false flag” aimed at making the message board look bad.

When Bellingcat tweeted out a warning about shitposting and the shooter’s manifesto, in the immediate wake of the attack, probable anons even commented on the Tweet in an attempt to deny that a channer had been behind the attack.

This is a recognizable pattern that occurs in the wake of any crimes committed by members of the board. While the initial response to the Christchurch shooter’s massacre thread was riotous glee, in the days after the shooting many anons began to claim the attack had been a false flag. This actually sparked significant division and debate between the members of /pol/. In the below image, a user mocks other anons for being unable to “believe something in your favor is real.” Another anon responds, “As the evidence comes out, its [sic] quite clear that this was a false flag.”

In his manifesto, the Poway Synagogue shooter even weighed in on this debate, accusing other anons who called the Christchurch and Tree of Life Synagogue shootings “false flags” to merely have been scared: “They can’t fathom that there are brave White men alive who have the willpower and courage it takes to say, ‘Fuck my life—I’m willing to sacrifice everything for the benefit of my race.’”


Platform users deny that their platform has been used by mass shooters to publish their manifestos (T0129.006: Deny Involvement). | +| [I00106 Facebook Is Being Flooded With Gross AI-Generated Images of Hurricane Helene Devastation](../../generated_pages/incidents/I00106.md) | As families desperately seek to find missing loved ones and communities grapple with immeasurable losses of both life and property in the wake of [2024’s] Hurricane Helene, AI slop scammers appear to be capitalizing on the moment for personal gain.

A Facebook account called "Coastal Views" usually shares calmer AI imagery of nature-filled beachside scenes. The account's banner image showcases a signpost reading "OBX Live," OBX being shorthand for North Carolina's Outer Banks islands.

But starting this weekend, the account shifted its approach dramatically, as first flagged by a social media user on X.

Instead of posting "photos" of leaping dolphins and sandy beaches, the account suddenly started publishing images of flooded mountain neighborhoods, submerged houses, and dogs sitting on top of roofs.

But instead of spreading vital information to those affected by the natural disaster, or at the very least sharing real photos of the destruction, the account is seemingly trying to use AI to cash in on all the attention the hurricane has been getting.

The account links to an Etsy page for a business called "OuterBanks2023," where somebody who goes by "Alexandr" sells AI-generated prints of horses touching snouts with sea turtles, Santa running down the shoreline with a reindeer, and sunsets over ocean waves.


A Facebook page which presented itself as being associated with North Carolina, and which had previously posted AI-generated images of beachside scenes, changed to posting AI-generated images of hurricane damage after Hurricane Helene hit North Carolina (T0151.003: Online Community Page, T0151.001: Social Media Platform, T0115: Post Content, T0086.002: Develop AI-Generated Images (Deepfakes), T0068: Respond to Breaking News Event or Active Crisis).

The account included links (T0122: Direct Users to Alternative Platforms) to an account on Etsy, which sold prints of AI-generated images (T0146: Account Asset, T0148.007: eCommerce Platform). | +| [I00108 How you thought you support the animals and you ended up funding white supremacists](../../generated_pages/incidents/I00108.md) | This article examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology:

Suavelos uses Facebook and other platforms to amplify its message. In order to bypass the platforms’ community standards and keep their public pages active, Facebook pages such as “I support the police” are a good vehicle to spread a specific agenda without claiming to be racist. In looking back at this Facebook page, we followed Facebook’s algorithm for related pages and found suggested Facebook pages

[...]

This amplification strategy on Facebook is successful, as according to SimilarWeb figures, it attracts around 111,000 visits every month on the Suavelos.eu website.

[...]

Revenue through online advertisements can be achieved by different platforms through targeted advertisements, like Google Adsense or Doubleclick, or related and similar sponsored content, such as Taboola. Accordingly, Suavelos.eu uses both of these websites to display advertisements and consequently receives funding from such advertisements.

Once visitors are on the website supporting its advertisement revenue, Suavelos’ goal is to then turn these visitors into regular members of Suavelos network through donations or fees, or have them continue to support Suavelos.


Suavelos created a variety of pages on Facebook which presented as centring on prosocial causes. Facebook’s algorithm helped direct users to these pages (T0092: Build Network, T0151.001: Social Media Platform, T0153.006: Content Recommendation Algorithm, T0151.003: Online Community Page, T0097.208: Social Cause Persona).

Suavelos used these pages to generate traffic for their WordPress site (T0122: Direct Users to Alternative Platforms, T0152.003: Website Hosting Platform, T0152.004: Website Asset), which used accounts on a variety of online advertising platforms to host adverts (T0146: Account Asset, T0153.005: Online Advertising Platform). | +| [I00109 Coordinated Facebook Pages Designed to Fund a White Supremacist Agenda](../../generated_pages/incidents/I00109.md) | This report examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at the Suavelos website, and the content it links out to.

In going back to Suavelos’ main page, we also found: A link to a page on a web shop: alabastro.eu; A link to a page to donate money to the founders through Tipee and to the website through PayPal; [and] a link to a private forum that gathers 3.000 members: oppidum.suavelos.eu;

Suavelos linked out to an online store which it controlled (T0152.004: Website Asset, T0148.004: Payment Processing Capability), and to accounts on payment processing platforms PayPal and Tipee (T0146: Account Asset, T0148.003: Payment Processing Platform).

The Suavelos website also hosted a private forum (T0151.009: Legacy Online Forum Platform, T0155: Gated Asset), and linked out to a variety of assets it controlled on other online platforms: accounts on Twitter (T0146: Account Asset, T0151.008: Microblogging Platform), YouTube (T0146: Account Asset, T0152.006: Video Platform), Instagram and VKontakte (T0146: Account Asset, T0151.001: Social Media Platform). | +| [I00114 ‘Carol’s Journey’: What Facebook knew about how it radicalized users](../../generated_pages/incidents/I00114.md) | This report examines internal Facebook communications which reveal employees’ concerns about how the platform’s algorithm was recommending users join extremist conspiracy groups.

In summer 2019, a new Facebook user named Carol Smith signed up for the platform, describing herself as a politically conservative mother from Wilmington, North Carolina. Smith’s account indicated an interest in politics, parenting and Christianity and followed a few of her favorite brands, including Fox News and then-President Donald Trump.

Though Smith had never expressed interest in conspiracy theories, in just two days Facebook was recommending she join groups dedicated to QAnon, a sprawling and baseless conspiracy theory and movement that claimed Trump was secretly saving the world from a cabal of pedophiles and Satanists.

Smith didn’t follow the recommended QAnon groups, but whatever algorithm Facebook was using to determine how she should engage with the platform pushed ahead just the same. Within one week, Smith’s feed was full of groups and pages that had violated Facebook’s own rules, including those against hate speech and disinformation.

Smith wasn’t a real person. A researcher employed by Facebook invented the account, along with those of other fictitious “test users” in 2019 and 2020, as part of an experiment in studying the platform’s role in misinforming and polarizing users through its recommendations systems.

That researcher said Smith’s Facebook experience was “a barrage of extreme, conspiratorial, and graphic content.”


Facebook’s Algorithm suggested users join groups which supported the QAnon movement (T0151.001: Social Media Platform, T0151.002: Online Community Group, T0153.006: Content Recommendation Algorithm, T0097.208: Social Cause Persona).

Further investigation by Facebook uncovered that its advertising platform had been used to promote QAnon narratives (T0146: Account Asset, T0114: Deliver Ads, T0153.005: Online Advertising Platform):

For years, company researchers had been running experiments like Carol Smith’s to gauge the platform’s hand in radicalizing users, according to the documents seen by NBC News.

This internal work repeatedly found that recommendation tools pushed users into extremist groups, findings that helped inform policy changes and tweaks to recommendations and news feed rankings. Those rankings are a tentacled, ever-evolving system widely known as “the algorithm” that pushes content to users. But the research at that time stopped well short of inspiring any movement to change the groups and pages themselves.

That reluctance was indicative of “a pattern at Facebook,” Haugen told reporters this month. “They want the shortest path between their current policies and any action.”

[...]

By summer 2020, Facebook was hosting thousands of private QAnon groups and pages, with millions of members and followers, according to an unreleased internal investigation.

A year after the FBI designated QAnon as a potential domestic terrorist threat in the wake of standoffs, alleged planned kidnappings, harassment campaigns and shootings, Facebook labeled QAnon a “Violence Inciting Conspiracy Network” and banned it from the platform, along with militias and other violent social movements. A small team working across several of Facebook’s departments found its platforms had hosted hundreds of ads on Facebook and Instagram worth thousands of dollars and millions of views, “praising, supporting, or representing” the conspiracy theory.

[...]

For many employees inside Facebook, the enforcement came too late, according to posts left on Workplace, the company’s internal message board.

“We’ve known for over a year now that our recommendation systems can very quickly lead users down the path to conspiracy theories and groups,” one integrity researcher, whose name had been redacted, wrote in a post announcing she was leaving the company. “This fringe group has grown to national prominence, with QAnon congressional candidates and QAnon hashtags and groups trending in the mainstream. We were willing to act only * after * things had spiraled into a dire state.”

While Facebook’s ban initially appeared effective, a problem remained: The removal of groups and pages didn’t wipe out QAnon’s most extreme followers, who continued to organize on the platform.

“There was enough evidence to raise red flags in the expert community that Facebook and other platforms failed to address QAnon’s violent extremist dimension,” said Marc-André Argentino, a research fellow at King’s College London’s International Centre for the Study of Radicalisation, who has extensively studied QAnon.

Believers simply rebranded as anti-child-trafficking groups or migrated to other communities, including those around the anti-vaccine movement.

[...]

These conspiracy groups had become the fastest-growing groups on Facebook, according to the report, but Facebook wasn’t able to control their “meteoric growth,” the researchers wrote, “because we were looking at each entity individually, rather than as a cohesive movement.” A Facebook spokesperson told BuzzFeed News it took many steps to limit election misinformation but that it was unable to catch everything.
| +| [I00115 How Facebook shapes your feed](../../generated_pages/incidents/I00115.md) | This 2021 report by The Washington Post explains the mechanics of Facebook’s algorithm (T0151.001: Social Media Platform, T0153.006: Content Recommendation Algorithm):

In its early years, Facebook’s algorithm prioritized signals such as likes, clicks and comments to decide which posts to amplify. Publishers, brands and individual users soon learned how to craft posts and headlines designed to induce likes and clicks, giving rise to what came to be known as “clickbait.” By 2013, upstart publishers such as Upworthy and ViralNova were amassing tens of millions of readers with articles designed specifically to game Facebook’s news feed algorithm.

Facebook realized that users were growing wary of misleading teaser headlines, and the company recalibrated its algorithm in 2014 and 2015 to downgrade clickbait and focus on new metrics, such as the amount of time a user spent reading a story or watching a video, and incorporating surveys on what content users found most valuable. Around the same time, its executives identified video as a business priority, and used the algorithm to boost “native” videos shared directly to Facebook. By the mid-2010s, the news feed had tilted toward slick, professionally produced content, especially videos that would hold people’s attention.

In 2016, however, Facebook executives grew worried about a decline in “original sharing.” Users were spending so much time passively watching and reading that they weren’t interacting with each other as much. Young people in particular shifted their personal conversations to rivals such as Snapchat that offered more intimacy.

Once again, Facebook found its answer in the algorithm: It developed a new set of goal metrics that it called “meaningful social interactions,” designed to show users more posts from friends and family, and fewer from big publishers and brands. In particular, the algorithm began to give outsize weight to posts that sparked lots of comments and replies.

The downside of this approach was that the posts that sparked the most comments tended to be the ones that made people angry or offended them, the documents show. Facebook became an angrier, more polarizing place. It didn’t help that, starting in 2017, the algorithm had assigned reaction emoji — including the angry emoji — five times the weight of a simple “like,” according to company documents.

[...]

Internal documents show Facebook researchers found that, for the most politically oriented 1 million American users, nearly 90 percent of the content that Facebook shows them is about politics and social issues. Those groups also received the most misinformation, especially a set of users associated with mostly right-leaning content, who were shown one misinformation post out of every 40, according to a document from June 2020.

One takeaway is that Facebook’s algorithm isn’t a runaway train. The company may not directly control what any given user posts, but by choosing which types of posts will be seen, it sculpts the information landscape according to its business priorities. Some within the company would like to see Facebook use the algorithm to explicitly promote certain values, such as democracy and civil discourse. Others have suggested that it develop and prioritize new metrics that align with users’ values, as with a 2020 experiment in which the algorithm was trained to predict what posts they would find “good for the world” and “bad for the world,” and optimize for the former.
| +| [I00128 #TrollTracker: Outward Influence Operation From Iran](../../generated_pages/incidents/I00128.md) | [Meta removed a network of assets for coordinated inauthentic behaviour. One page] in the network, @StopMEK, was promoting views against the People’s Mujahedin of Iran (MEK), the largest and most active political opposition group against the Islamic Republic of Iran Leadership.

The content on the page drew narratives showing parallels between the Islamic State of Iraq and Syria (ISIS) and the MEK.

Apart from images and memes, the @StopMEK page shared a link to an archived report on how the United States was monitoring the MEK’s movement in Iran in the mid-1990s. The file was embedded as a QR code on one of the page’s images.


In this example a Facebook page presented itself as focusing on a political cause (T0097.208: Social Cause Persona, T0151.001: Social Media Platform, T0151.002: Online Community Group). Within the page it embedded a QR code (T0122: Direct Users to Alternative Platforms, T0153.004: QR Code Asset), which took users to a document hosted on another website (T0152.004: Website Asset); a decoding sketch follows this table. | + + +| Counters | Response types | +| -------- | -------------- |
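Several incidents on this page (e.g. I00128 above) involve QR codes embedded in page imagery to move users off-platform (T0153.004: QR Code Asset). Below is a minimal sketch of how an analyst might recover such embedded links from collected images, assuming OpenCV is available; the filename is a placeholder, not real incident data.

```python
import cv2  # OpenCV: pip install opencv-python

def extract_qr_payload(image_path: str):
    """Decode a QR code embedded in a collected page image and return its
    payload (e.g. the URL of an off-platform document), or None."""
    img = cv2.imread(image_path)
    if img is None:
        return None
    payload, _points, _code = cv2.QRCodeDetector().detectAndDecode(img)
    return payload or None

# Placeholder filename for an image collected from the page under analysis.
print(extract_qr_payload("collected_page_image.png"))
```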

 **Tactic**: TA07 Select Channels and Affordances @@ -27,4 +83,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0151.002.md b/generated_pages/techniques/T0151.002.md index fd0bd55..b4fd654 100644 --- a/generated_pages/techniques/T0151.002.md +++ b/generated_pages/techniques/T0151.002.md @@ -2,6 +2,56 @@ **Summary**: Some online platforms allow people with Accounts to create Online Community Groups. Groups are usually created around a specific topic or locality, and allow users to post content to the group, and interact with other users’ posted content.

For example, Meta’s Social Media Platform Facebook allows users to create a “Facebook group”. This feature is not exclusive to Social Media Platforms; the Microblogging Platform X (prev. Twitter) allows users to create “X Communities”, groups based on particular topics which users can join and post to; the Software Delivery Platform Steam allows users to create Steam Community Groups.

Online Community Groups can be open or gated (for example, groups can require admin approval before users can join). + +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0151 Digital Community Hosting Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00105 Gaming the System: The Use of Gaming-Adjacent Communication, Game and Mod Platforms by Extremist Actors](../../generated_pages/incidents/I00105.md) | In this report, researchers look at online platforms commonly used by people who play videogames, looking at how these platforms can contribute to radicalisation of gamers:

Indie DB [is a platform that serves] to present indie games, which are titles from independent, small developer teams, which can be discussed and downloaded [Indie DB].

[...]

[On Indie DB we] found antisemitic, Islamist, sexist and other discriminatory content during the exploration. Both games and comments were located that made positive references to National Socialism. Radicalised users seem to use the opportunities to network, create groups, and communicate in forums. In addition, a number of member profiles with graphic propaganda content could be located. We found several games with propagandistic content advertised on the platform, which caused apparently radicalised users to form a fan community around those games. For example, there are titles such as the antisemitic Fursan Al-Aqsa: The Knights of the Al-Aqsa Mosque, which has won the Best Hardcore Game award at the Game Connection America 2024 Game Development Awards and which has received antisemitic praise on its review page. In the game, players need to target members of the Israeli Defence Forces, and attacks by Palestinian terrorist groups such as Hamas or Lions’ Den can be re-enacted. These include bomb attacks, beheadings and the re-enactment of the terrorist attack in Israel on 7 October 2023. We, therefore, deem Indie DB to be relevant for further analyses of extremist activities in digital gaming spaces.


Indie DB is an online software delivery platform on which users can create groups, and participate in discussion forums (T0152.009: Software Delivery Platform, T0151.002: Online Community Group, T0151.009: Legacy Online Forum Platform). The platform hosted games which allowed players to reenact terrorist attacks (T0147.001: Game Asset). | +| [I00114 ‘Carol’s Journey’: What Facebook knew about how it radicalized users](../../generated_pages/incidents/I00114.md) | This report examines internal Facebook communications which reveal employees’ concerns about how the platform’s algorithm was recommending users join extremist conspiracy groups.

In summer 2019, a new Facebook user named Carol Smith signed up for the platform, describing herself as a politically conservative mother from Wilmington, North Carolina. Smith’s account indicated an interest in politics, parenting and Christianity and followed a few of her favorite brands, including Fox News and then-President Donald Trump.

Though Smith had never expressed interest in conspiracy theories, in just two days Facebook was recommending she join groups dedicated to QAnon, a sprawling and baseless conspiracy theory and movement that claimed Trump was secretly saving the world from a cabal of pedophiles and Satanists.

Smith didn’t follow the recommended QAnon groups, but whatever algorithm Facebook was using to determine how she should engage with the platform pushed ahead just the same. Within one week, Smith’s feed was full of groups and pages that had violated Facebook’s own rules, including those against hate speech and disinformation.

Smith wasn’t a real person. A researcher employed by Facebook invented the account, along with those of other fictitious “test users” in 2019 and 2020, as part of an experiment in studying the platform’s role in misinforming and polarizing users through its recommendations systems.

That researcher said Smith’s Facebook experience was “a barrage of extreme, conspiratorial, and graphic content.”


Facebook’s Algorithm suggested users join groups which supported the QAnon movement (T0151.001: Social Media Platform, T0151.002: Online Community Group, T0153.006: Content Recommendation Algorithm, T0097.208: Social Cause Persona).

Further investigation by Facebook uncovered that its advertising platform had been used to promote QAnon narratives (T0146: Account Asset, T0114: Deliver Ads, T0153.005: Online Advertising Platform):

For years, company researchers had been running experiments like Carol Smith’s to gauge the platform’s hand in radicalizing users, according to the documents seen by NBC News.

This internal work repeatedly found that recommendation tools pushed users into extremist groups, findings that helped inform policy changes and tweaks to recommendations and news feed rankings. Those rankings are a tentacled, ever-evolving system widely known as “the algorithm” that pushes content to users. But the research at that time stopped well short of inspiring any movement to change the groups and pages themselves.

That reluctance was indicative of “a pattern at Facebook,” Haugen told reporters this month. “They want the shortest path between their current policies and any action.”

[...]

By summer 2020, Facebook was hosting thousands of private QAnon groups and pages, with millions of members and followers, according to an unreleased internal investigation.

A year after the FBI designated QAnon as a potential domestic terrorist threat in the wake of standoffs, alleged planned kidnappings, harassment campaigns and shootings, Facebook labeled QAnon a “Violence Inciting Conspiracy Network” and banned it from the platform, along with militias and other violent social movements. A small team working across several of Facebook’s departments found its platforms had hosted hundreds of ads on Facebook and Instagram worth thousands of dollars and millions of views, “praising, supporting, or representing” the conspiracy theory.

[...]

For many employees inside Facebook, the enforcement came too late, according to posts left on Workplace, the company’s internal message board.

“We’ve known for over a year now that our recommendation systems can very quickly lead users down the path to conspiracy theories and groups,” one integrity researcher, whose name had been redacted, wrote in a post announcing she was leaving the company. “This fringe group has grown to national prominence, with QAnon congressional candidates and QAnon hashtags and groups trending in the mainstream. We were willing to act only *after* things had spiraled into a dire state.”

While Facebook’s ban initially appeared effective, a problem remained: The removal of groups and pages didn’t wipe out QAnon’s most extreme followers, who continued to organize on the platform.

“There was enough evidence to raise red flags in the expert community that Facebook and other platforms failed to address QAnon’s violent extremist dimension,” said Marc-André Argentino, a research fellow at King’s College London’s International Centre for the Study of Radicalisation, who has extensively studied QAnon.

Believers simply rebranded as anti-child-trafficking groups or migrated to other communities, including those around the anti-vaccine movement.

[...]

These conspiracy groups had become the fastest-growing groups on Facebook, according to the report, but Facebook wasn’t able to control their “meteoric growth,” the researchers wrote, “because we were looking at each entity individually, rather than as a cohesive movement.” A Facebook spokesperson told BuzzFeed News it took many steps to limit election misinformation but that it was unable to catch everything.
| +| [I00123 The Extreme Right on Steam](../../generated_pages/incidents/I00123.md) | Analysis of communities on the gaming platform Steam showed that groups who are known to have engaged in acts of terrorism used Steam to host social communities (T0152.009: Software Delivery Platform, T0151.002: Online Community Group):

The first is a Finnish-language group which was set up to promote the Nordic Resistance Movement (NRM). NRM are the only group in the sample examined by ISD known to have engaged in terrorist attacks. Swedish members of the group conducted a series of bombings in Gothenburg in 2016 and 2017, and several Finnish members are under investigation in relation to both violent attacks and murder.

The NRM Steam group does not host content related to gaming, and instead seems to act as a hub for the movement. The group’s overview section contains a link to the official NRM website, and users are encouraged to find like-minded people to join the group. The group is relatively small, with 87 members, but at the time of writing, it appeared to be active and in use. Interestingly, although the group is in Finnish language, it has members in common with the English language channels identified in this analysis. This suggests that Steam may help facilitate international exchange between right-wing extremists.
ISD conducted an investigation into the usage of social groups on Steam. Steam is an online platform used to buy and sell digital games, and includes the Steam community feature, which “allows users to find friends and join groups and discussion forums, while also offering in-game voice and text chat”. Actors have used Steam’s social capabilities to enable online harm campaigns:

One function of these Steam groups is the organisation of ‘raids’ – coordinated trolling activity against their political opponents. An example of this can be seen in a white power music group sharing a link to an Israeli Steam group, encouraging other members to “help me raid this juden [German word for Jew] group”. The comments section of said target group show that neo-Nazi and antisemitic comments were consistently posted in the group just two minutes after the instruction had been posted in the extremist group, highlighting the swiftness with which racially motivated harassment can be directed online.

Threat actors used social groups on Steam to organise harassment of targets (T0152.009: Software Delivery Platform, T0151.002: Online Community Group, T0049.005: Conduct Swarming, T0048: Harass). ISD’s investigation also found Steam groups being used to direct users to other platforms:

A number of groups were observed encouraging members to join conversations on outside platforms. These include links to Telegram channels connected to white supremacist marches, and media outlets, forums and Discord servers run by neo-Nazis.

[...]

This off-ramping activity demonstrates how rather than sitting in isolation, Steam fits into the wider extreme right wing online ecosystem, with Steam groups acting as hubs for communities and organizations which span multiple platforms. Accordingly, although the platform appears to fill a specific role in the building and strengthening of communities with similar hobbies and interests, it is suggested that analysis seeking to determine the risk of these communities should focus on their activity across platforms.


Social Groups on Steam were used to drive new people to other neo-Nazi-controlled community assets (T0122: Direct Users to Alternative Platforms, T0152.009: Software Delivery Platform, T0151.002: Online Community Group). | +| [I00128 #TrollTracker: Outward Influence Operation From Iran](../../generated_pages/incidents/I00128.md) | [Meta removed a network of assets for coordinated inauthentic behaviour. One page] in the network, @StopMEK, was promoting views against the People’s Mujahedin of Iran (MEK), the largest and most active political opposition group against the Islamic Republic of Iran Leadership.

The content on the page drew narratives showing parallels between the Islamic State of Iraq and Syria (ISIS) and the MEK.

Apart from images and memes, the @StopMEK page shared a link to an archived report on how the United States was monitoring the MEK’s movement in Iran in the mid-1990s. The file was embedded as a QR code on one of the page’s images.


In this example a Facebook page presented itself as focusing on a political cause (T0097.208: Social Cause Persona, T0151.001: Social Media Platform, T0151.002: Online Community Group). Within the page it embedded a QR code (T0122: Direct Users to Alternative Platforms, T0153.004: QR Code Asset), which took users to a document hosted on another website (T0152.004: Website Asset). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0151.002: Online Community Group + +**Summary**: Some online platforms allow people with Accounts to create Online Community Groups. Groups are usually created around a specific topic or locality, and allow users to post content to the group, and interact with other users’ posted content.

For example, Meta’s Social Media Platform Facebook allows users to create a “Facebook group”. This feature is not exclusive to Social Media Platforms; the Microblogging Platform X (prev. Twitter) allows users to create “X Communities”, groups based on particular topics which users can join and post to; the Software Delivery Platform Steam allows users to create Steam Community Groups.

Online Community Groups can be open or gated (for example, groups can require admin approval before users can join). + **Tactic**: TA07 Select Channels and Affordances @@ -23,4 +73,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0151.003.md b/generated_pages/techniques/T0151.003.md index e87a244..6cf6204 100644 --- a/generated_pages/techniques/T0151.003.md +++ b/generated_pages/techniques/T0151.003.md @@ -2,6 +2,51 @@ **Summary**: A Facebook Page is an example of an Online Community Page.

Online Community Pages allow Administrator Accounts to post content to the page, which other users can interact with. Pages can be followed or liked by other users - but these users can’t initiate new posts to the page. +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0151 Digital Community Hosting Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00106 Facebook Is Being Flooded With Gross AI-Generated Images of Hurricane Helene Devastation](../../generated_pages/incidents/I00106.md) | As families desperately seek to find missing loved ones and communities grapple with immeasurable losses of both life and property in the wake of [2024’s] Hurricane Helene, AI slop scammers appear to be capitalizing on the moment for personal gain.

A Facebook account called "Coastal Views" usually shares calmer AI imagery of nature-filled beachside scenes. The account's banner image showcases a signpost reading "OBX Live," OBX being shorthand for North Carolina's Outer Banks islands.

But starting this weekend, the account shifted its approach dramatically, as first flagged by a social media user on X.

Instead of posting "photos" of leaping dolphins and sandy beaches, the account suddenly started publishing images of flooded mountain neighborhoods, submerged houses, and dogs sitting on top of roofs.

But instead of spreading vital information to those affected by the natural disaster, or at the very least sharing real photos of the destruction, the account is seemingly trying to use AI to cash in on all the attention the hurricane has been getting.

The account links to an Etsy page for a business called "OuterBanks2023," where somebody who goes by "Alexandr" sells AI-generated prints of horses touching snouts with sea turtles, Santa running down the shoreline with a reindeer, and sunsets over ocean waves.


A Facebook page which presented itself as being associated with North Carolina, and which usually posted AI-generated images, changed to posting AI-generated images of hurricane damage after Hurricane Helene hit North Carolina (T0151.003: Online Community Page, T0151.001: Social Media Platform, T0115: Post Content, T0086.002: Develop AI-Generated Images (Deepfakes), T0068: Respond to Breaking News Event or Active Crisis).

The account included links (T0122: Direct Users to Alternative Platforms) to an account on Etsy, which sold prints of AI generated images (T0146: Account Asset, T0148.007: eCommerce Platform). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0151.003: Online Community Page + +**Summary**: A Facebook Page is an example of an Online Community Page.

Online Community Pages allow Administrator Accounts to post content to the page, which other users can interact with. Pages can be followed or liked by other users - but these users can’t initiate new posts to the page. + +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0151 Digital Community Hosting Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00106 Facebook Is Being Flooded With Gross AI-Generated Images of Hurricane Helene Devastation](../../generated_pages/incidents/I00106.md) | As families desperately seek to find missing loved ones and communities grapple with immeasurable losses of both life and property in the wake of [2024’s] Hurricane Helene, AI slop scammers appear to be capitalizing on the moment for personal gain.

A Facebook account called "Coastal Views" usually shares calmer AI imagery of nature-filled beachside scenes. The account's banner image showcases a signpost reading "OBX Live," OBX being shorthand for North Carolina's Outer Banks islands.

But starting this weekend, the account shifted its approach dramatically, as first flagged by a social media user on X.

Instead of posting "photos" of leaping dolphins and sandy beaches, the account suddenly started publishing images of flooded mountain neighborhoods, submerged houses, and dogs sitting on top of roofs.

But instead of spreading vital information to those affected by the natural disaster, or at the very least sharing real photos of the destruction, the account is seemingly trying to use AI to cash in on all the attention the hurricane has been getting.

The account links to an Etsy page for a business called "OuterBanks2023," where somebody who goes by "Alexandr" sells AI-generated prints of horses touching snouts with sea turtles, Santa running down the shoreline with a reindeer, and sunsets over ocean waves.


A Facebook page which presented itself as being associated with North Carolina, and which usually posted AI-generated images, changed to posting AI-generated images of hurricane damage after Hurricane Helene hit North Carolina (T0151.003: Online Community Page, T0151.001: Social Media Platform, T0115: Post Content, T0086.002: Develop AI-Generated Images (Deepfakes), T0068: Respond to Breaking News Event or Active Crisis).

The account included links (T0122: Direct Users to Alternative Platforms) to an account on Etsy, which sold prints of AI generated images (T0146: Account Asset, T0148.007: eCommerce Platform). | +| [I00108 How you thought you support the animals and you ended up funding white supremacists](../../generated_pages/incidents/I00108.md) | This article examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology:

Suavelos uses Facebook and other platforms to amplify its message. In order to bypass the platforms’ community standards and keep their public pages active, Facebook pages such as “I support the police” are a good vehicle to spread a specific agenda without claiming to be racist. In looking back at this Facebook page, we followed Facebook’s algorithm for related pages and found suggested Facebook pages

[...]

This amplification strategy on Facebook is successful, as according to SimilarWeb figures, it attracts around 111,000 visits every month on the Suavelos.eu website.

[...]

Revenue through online advertisements can be achieved by different platforms through targeted advertisements, like Google Adsense or Doubleclick, or related and similar sponsored content, such as Taboola. Accordingly, Suavelos.eu uses both of these websites to display advertisements and consequently receives funding from such advertisements.

Once visitors are on the website supporting its advertisement revenue, Suavelos’ goal is to then turn these visitors into regular members of Suavelos network through donations or fees, or have them continue to support Suavelos.


Suavelos created a variety of pages on Facebook which presented themselves as centring on prosocial causes. Facebook’s algorithm helped direct users to these pages (T0092: Build Network, T0151.001: Social Media Platform, T0153.006: Content Recommendation Algorithm, T0151.003: Online Community Page, T0097.208: Social Cause Persona).

Suavelos used these pages to generate traffic for their WordPress site (T0122: Direct Users to Alternative Platforms, T0152.003: Website Hosting Platform, T0152.004: Website Asset), which used accounts on a variety of online advertising platforms to host adverts (T0146: Account Asset, T0153.005: Online Advertising Platform). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0151.003: Online Community Page + +**Summary**: A Facebook Page is an example of an Online Community Page.

Online Community Pages allow Administrator Accounts to post content to the page, which other users can interact with. Pages can be followed or liked by other users - but these users can’t initiate new posts to the page. + **Tactic**: TA07 Select Channels and Affordances @@ -21,4 +66,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0151.004.md b/generated_pages/techniques/T0151.004.md index 624bf9c..a3aa01a 100644 --- a/generated_pages/techniques/T0151.004.md +++ b/generated_pages/techniques/T0151.004.md @@ -2,6 +2,56 @@ **Summary**: Examples of popular Chat Platforms include WhatsApp, WeChat, Telegram, and Signal; Slack, Mattermost, and Discord; Zoom, GoTo Meeting, and WebEx.

Chat Platforms allow users to engage in text, audio, or video chats with other platform users.

Different Chat Platforms afford users different capabilities. Examples include Direct Messaging, Chat Rooms, Chat Broadcast Channels, and Chat Community Servers.

Some Chat Platforms enable encrypted communication between platform users. +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0151 Digital Community Hosting Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00068 Attempted Audio Deepfake Call Targets LastPass Employee](../../generated_pages/incidents/I00068.md) | “While reports of [...] deepfake calls targeting private companies are luckily still rare, LastPass itself experienced a deepfake attempt earlier today that we are sharing with the larger community to help raise awareness that this tactic is spreading and all companies should be on the alert. In our case, an employee received a series of calls, texts, and at least one voicemail featuring an audio deepfake from a threat actor impersonating our CEO via WhatsApp. As the attempted communication was outside of normal business communication channels and due to the employee’s suspicion regarding the presence of many of the hallmarks of a social engineering attempt (such as forced urgency), our employee rightly ignored the messages and reported the incident to our internal security team so that we could take steps to both mitigate the threat and raise awareness of the tactic both internally and externally.”

In this example attackers created an account on WhatsApp which impersonated the CEO of LastPass (T0097.100: Individual Persona, T0143.003: Impersonated Persona, T0146: Account Asset, T0151.004: Chat Platform, T0155.007: Encrypted Communication Channel). They used this asset to target an employee using deepfaked audio (T0088.001: Develop AI-Generated Audio (Deepfakes)). | +| [I00084 Russia turns its diplomats into disinformation warriors](../../generated_pages/incidents/I00084.md) | “[Russia’s social media] reach isn't the same as Russian state media, but they are trying to recreate what RT and Sputnik had done," said one EU official involved in tracking Russian disinformation. "It's a coordinated effort that goes beyond social media and involves specific websites."

“Central to that wider online playbook is a Telegram channel called Warfakes and an affiliated website. Since the beginning of the conflict, that social media channel has garnered more than 725,000 members and repeatedly shares alleged fact-checks aimed at debunking Ukrainian narratives, using language similar to Western-style fact-checking outlets.”


In this example a Telegram channel (T0151.004: Chat Platform, T0155.007: Encrypted Communication Channel) was established which presented itself as a source of fact checks (T0097.203: Fact Checking Organisation Persona). | +| [I00113 Inside the Shadowy World of Disinformation for Hire in Kenya](../../generated_pages/incidents/I00113.md) | Researchers at Mozilla examined influence operations targeting Kenyan citizens on Twitter in 2021, providing “a grim window into the booming and shadowy industry of Twitter influencers for political hire here in Kenya”, and giving insight into operations’ operationalisation:

In our interviews with one of the influencers, they informed us of the agile tactics they use to organize and avoid detection. For example, when it’s time to carry out the campaign the influencers would be added to a Whatsapp group. Here, they received direction about what to post, the hashtags to use, which tweets to engage with and who to target. Synchronizing the tweets was also incredibly important for them. It’s what enables them to achieve their goal of trending on Twitter and gain amplification.

[...]

They revealed to us that those participating in the exercise are paid roughly between $10 and $15 to participate in three campaigns per day. Each campaign execution involves tweeting about the hashtags of the day until it appears on the trending section of Twitter. Additionally, some individuals have managed to reach retainer level and get paid about $250 per month. Their job is to make sure the campaigns are executed on a day-by-day basis with different hashtags.


An M-PESA account (T0148.002: Bank Account Asset, T0148.001: Online Banking Platform) was used to pay campaign participants.

Participants were organised in WhatsApp groups (T0129.005: Coordinate on Encrypted/Closed Networks, T0151.007: Chat Broadcast Group, T0151.004: Chat Platform), in which they planned how to get campaign content trending on Twitter (T0121: Manipulate Platform Algorithm, T0151.008: Microblogging Platform). | +| [I00122 The Extreme Right on Discord](../../generated_pages/incidents/I00122.md) | Discord is an example of a T0151.004: Chat Platform, which allows users to create their own T0151.005: Chat Community Server. The Institute for Strategic Dialog (ISD) conducted an investigation into the extreme right’s usage of Discord servers:

Discord is a free service accessible via phones and computers. It allows users to talk to each other in real time via voice, text or video chat and emerged in 2015 as a platform designed to assist gamers in communicating with each other while playing video games. The popularity of the platform has surged in recent years, and it is currently estimated to have 140 million monthly active users.

Chatrooms – known as servers - in the platform can be created by anyone, and they are used for a range of purposes that extend far beyond gaming. Such purposes include the discussion of extreme right-wing ideologies and the planning of offline extremist activity. Ahead of the far-right Unite the Right rally in Charlottesville, Virginia, in August 2017, organisers used Discord to plan and promote events and posted swastikas and praised Hitler in chat rooms with names like “National Socialist Army” and “Führer’s Gas Chamber”.


In this example a Discord server was used to organise the 2017 Charlottesville Unite the Right rally. Chat rooms in the server were used to discuss different topics related to the rally (T0057: Organise Events, T0126.002: Facilitate Logistics or Support for Attendance, T0151.004: Chat Platform, T0151.005: Chat Community Server, T0151.006: Chat Room).

Another primary activity engaged in the servers analysed are raids against other servers associated with political opponents, and in particular those that appear to be pro-LGBTQ. Raids are a phenomenon in which a small group of users will join a Discord server with the sole purpose of spamming the host with offensive or incendiary messages and content with the aim of upsetting local users or having the host server banned by Discord. On two servers examined here, raiding was their primary function.

Among servers devoted to this activity, specific channels were often created to host links to servers that users were then encouraged to raid. Users are encouraged to be as offensive as possible with the aim of upsetting or angering users on the raided server, and channels often had content banks of offensive memes and content to be shared on raided servers.

The use of raids demonstrates the gamified nature of extremist activity on Discord, where use of the platform and harassment of political opponents is itself turned into a type of real-life video game designed to strengthen in-group affiliation. This combined with the broader extremist activity identified in these channels suggests that the combative activity of raiding could provide a pathway for younger people to become more engaged with extremist activity.


Discord servers were used by members of the extreme right to coordinate harassment of targeted communities (T0048: Harass, T0049.005: Conduct Swarming, T0151.004: Chat Platform, T0151.005: Chat Community Server). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0151.004: Chat Platform + +**Summary**: Examples of popular Chat Platforms include WhatsApp, WeChat, Telegram, and Signal; Slack, Mattermost, and Discord; Zoom, GoTo Meeting, and WebEx.

Chat Platforms allow users to engage in text, audio, or video chats with other platform users.

Different Chat Platforms afford users different capabilities. Examples include Direct Messaging, Chat Rooms, Chat Broadcast Channels, and Chat Community Servers.

Some Chat Platforms enable encrypted communication between platform users. + +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0151 Digital Community Hosting Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00068 Attempted Audio Deepfake Call Targets LastPass Employee](../../generated_pages/incidents/I00068.md) | “While reports of [...] deepfake calls targeting private companies are luckily still rare, LastPass itself experienced a deepfake attempt earlier today that we are sharing with the larger community to help raise awareness that this tactic is spreading and all companies should be on the alert. In our case, an employee received a series of calls, texts, and at least one voicemail featuring an audio deepfake from a threat actor impersonating our CEO via WhatsApp. As the attempted communication was outside of normal business communication channels and due to the employee’s suspicion regarding the presence of many of the hallmarks of a social engineering attempt (such as forced urgency), our employee rightly ignored the messages and reported the incident to our internal security team so that we could take steps to both mitigate the threat and raise awareness of the tactic both internally and externally.”

In this example attackers created an account on WhatsApp which impersonated the CEO of LastPass (T0097.100: Individual Persona, T0143.003: Impersonated Persona, T0146: Account Asset, T0151.004: Chat Platform, T0155.007: Encrypted Communication Channel). They used this asset to target an employee using deepfaked audio (T0088.001: Develop AI-Generated Audio (Deepfakes)). | +| [I00084 Russia turns its diplomats into disinformation warriors](../../generated_pages/incidents/I00084.md) | “[Russia’s social media] reach isn't the same as Russian state media, but they are trying to recreate what RT and Sputnik had done," said one EU official involved in tracking Russian disinformation. "It's a coordinated effort that goes beyond social media and involves specific websites."

“Central to that wider online playbook is a Telegram channel called Warfakes and an affiliated website. Since the beginning of the conflict, that social media channel has garnered more than 725,000 members and repeatedly shares alleged fact-checks aimed at debunking Ukrainian narratives, using language similar to Western-style fact-checking outlets.”


In this example a Telegram channel (T0151.004: Chat Platform, T0155.007: Encrypted Communication Channel) was established which presented itself as a source of fact checks (T0097.203: Fact Checking Organisation Persona). | +| [I00113 Inside the Shadowy World of Disinformation for Hire in Kenya](../../generated_pages/incidents/I00113.md) | Researchers at Mozilla examined influence operations targeting Kenyan citizens on Twitter in 2021, providing “a grim window into the booming and shadowy industry of Twitter influencers for political hire here in Kenya”, and giving insight into operations’ operationalisation:

In our interviews with one of the influencers, they informed us of the agile tactics they use to organize and avoid detection. For example, when it’s time to carry out the campaign the influencers would be added to a Whatsapp group. Here, they received direction about what to post, the hashtags to use, which tweets to engage with and who to target. Synchronizing the tweets was also incredibly important for them. It’s what enables them to achieve their goal of trending on Twitter and gain amplification.

[...]

They revealed to us that those participating in the exercise are paid roughly between $10 and $15 to participate in three campaigns per day. Each campaign execution involves tweeting about the hashtags of the day until it appears on the trending section of Twitter. Additionally, some individuals have managed to reach retainer level and get paid about $250 per month. Their job is to make sure the campaigns are executed on a day-by-day basis with different hashtags.


An M-PESA account (T0148.002: Bank Account Asset, T0148.001: Online Banking Platform) was used to pay campaign participants.

Participants were organised in WhatsApp groups (T0129.005: Coordinate on Encrypted/Closed Networks, T0151.007: Chat Broadcast Group, T0151.004: Chat Platform), in which they planned how to get campaign content trending on Twitter (T0121: Manipulate Platform Algorithm, T0151.008: Microblogging Platform). | +| [I00122 The Extreme Right on Discord](../../generated_pages/incidents/I00122.md) | Discord is an example of a T0151.004: Chat Platform, which allows users to create their own T0151.005: Chat Community Server. The Institute for Strategic Dialog (ISD) conducted an investigation into the extreme right’s usage of Discord servers:

Discord is a free service accessible via phones and computers. It allows users to talk to each other in real time via voice, text or video chat and emerged in 2015 as a platform designed to assist gamers in communicating with each other while playing video games. The popularity of the platform has surged in recent years, and it is currently estimated to have 140 million monthly active users.

Chatrooms – known as servers - in the platform can be created by anyone, and they are used for a range of purposes that extend far beyond gaming. Such purposes include the discussion of extreme right-wing ideologies and the planning of offline extremist activity. Ahead of the far-right Unite the Right rally in Charlottesville, Virginia, in August 2017, organisers used Discord to plan and promote events and posted swastikas and praised Hitler in chat rooms with names like “National Socialist Army” and “Führer’s Gas Chamber”.


In this example a Discord server was used to organise the 2017 Charlottesville Unite the Right rally. Chat rooms in the server were used to discuss different topics related to the rally (T0057: Organise Events, T0126.002: Facilitate Logistics or Support for Attendance, T0151.004: Chat Platform, T0151.005: Chat Community Server, T0151.006: Chat Room).

Another primary activity engaged in the servers analysed are raids against other servers associated with political opponents, and in particular those that appear to be pro-LGBTQ. Raids are a phenomenon in which a small group of users will join a Discord server with the sole purpose of spamming the host with offensive or incendiary messages and content with the aim of upsetting local users or having the host server banned by Discord. On two servers examined here, raiding was their primary function.

Among servers devoted to this activity, specific channels were often created to host links to servers that users were then encouraged to raid. Users are encouraged to be as offensive as possible with the aim of upsetting or angering users on the raided server, and channels often had content banks of offensive memes and content to be shared on raided servers.

The use of raids demonstrates the gamified nature of extremist activity on Discord, where use of the platform and harassment of political opponents is itself turned into a type of real-life video game designed to strengthen in-group affiliation. This combined with the broader extremist activity identified in these channels suggests that the combative activity of raiding could provide a pathway for younger people to become more engaged with extremist activity.


Discord servers were used by members of the extreme right to coordinate harassment of targeted communities (T0048: Harass, T0049.005: Conduct Swarming, T0151.004: Chat Platform, T0151.005: Chat Community Server). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0151.004: Chat Platform + +**Summary**: Examples of popular Chat Platforms include WhatsApp, WeChat, Telegram, and Signal; Slack, Mattermost, and Discord; Zoom, GoTo Meeting, and WebEx.

Chat Platforms allow users to engage in text, audio, or video chats with other platform users.

Different Chat Platforms afford users different capabilities. Examples include Direct Messaging, Chat Rooms, Chat Broadcast Channels, and Chat Community Servers.

Some Chat Platforms enable encrypted communication between platform users. + **Tactic**: TA07 Select Channels and Affordances @@ -23,4 +73,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0151.005.md b/generated_pages/techniques/T0151.005.md index a0ce8d0..7df3f43 100644 --- a/generated_pages/techniques/T0151.005.md +++ b/generated_pages/techniques/T0151.005.md @@ -2,6 +2,50 @@ **Summary**: Chat Platforms such as Discord, Slack, and Microsoft Teams allow users to create their own Chat Community Servers, which they can invite other platform users to join.

Chat Community Servers are online communities made up of Chat Rooms (or “Channels”) in which users can discuss the given group’s topic. Groups can either be public (shown in the server’s browsable list of channels, available for any member to view and join) or Gated (users must be added to the chat group by existing members to participate).

Some Chat Community Servers allow users to create Chat Broadcast Groups, in which only specific members (e.g. server administrators) of the chat are able to post new content to the group. +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0151 Digital Community Hosting Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00122 The Extreme Right on Discord](../../generated_pages/incidents/I00122.md) | Discord is an example of a T0151.004: Chat Platform, which allows users to create their own T0151.005: Chat Community Server. The Institute for Strategic Dialog (ISD) conducted an investigation into the extreme right’s usage of Discord servers:

Discord is a free service accessible via phones and computers. It allows users to talk to each other in real time via voice, text or video chat and emerged in 2015 as a platform designed to assist gamers in communicating with each other while playing video games. The popularity of the platform has surged in recent years, and it is currently estimated to have 140 million monthly active users.

Chatrooms – known as servers - in the platform can be created by anyone, and they are used for a range of purposes that extend far beyond gaming. Such purposes include the discussion of extreme right-wing ideologies and the planning of offline extremist activity. Ahead of the far-right Unite the Right rally in Charlottesville, Virginia, in August 2017, organisers used Discord to plan and promote events and posted swastikas and praised Hitler in chat rooms with names like “National Socialist Army” and “Führer’s Gas Chamber”.


In this example a Discord server was used to organise the 2017 Charlottesville Unite the Right rally. Chat rooms in the server were used to discuss different topics related to the rally (T0057: Organise Events, T0126.002: Facilitate Logistics or Support for Attendance, T0151.004: Chat Platform, T0151.005: Chat Community Server, T0151.006: Chat Room).

Another primary activity engaged in the servers analysed are raids against other servers associated with political opponents, and in particular those that appear to be pro-LGBTQ. Raids are a phenomenon in which a small group of users will join a Discord server with the sole purpose of spamming the host with offensive or incendiary messages and content with the aim of upsetting local users or having the host server banned by Discord. On two servers examined here, raiding was their primary function.

Among servers devoted to this activity, specific channels were often created to host links to servers that users were then encouraged to raid. Users are encouraged to be as offensive as possible with the aim of upsetting or angering users on the raided server, and channels often had content banks of offensive memes and content to be shared on raided servers.

The use of raids demonstrates the gamified nature of extremist activity on Discord, where use of the platform and harassment of political opponents is itself turned into a type of real-life video game designed to strengthen in-group affiliation. This combined with the broader extremist activity identified in these channels suggests that the combative activity of raiding could provide a pathway for younger people to become more engaged with extremist activity.


Discord servers were used by members of the extreme right to coordinate harassment of targeted communities (T0048: Harass, T0049.005: Conduct Swarming, T0151.004: Chat Platform, T0151.005: Chat Community Server). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0151.005: Chat Community Server + +**Summary**: Chat Platforms such as Discord, Slack, and Microsoft Teams allow users to create their own Chat Community Servers, which they can invite other platform users to join.

Chat Community Servers are online communities made up of Chat Rooms (or “Channels”) in which users can discuss the given group’s topic. Groups can either be public (shown in the server’s browsable list of channels, available for any member to view and join) or Gated (users must be added to the chat group by existing members to participate).

Some Chat Community Servers allow users to create Chat Broadcast Groups, in which only specific members (e.g. server administrators) of the chat are able to post new content to the group. + +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0151 Digital Community Hosting Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00122 The Extreme Right on Discord](../../generated_pages/incidents/I00122.md) | Discord is an example of a T0151.004: Chat Platform, which allows users to create their own T0151.005: Chat Community Server. The Institute for Strategic Dialog (ISD) conducted an investigation into the extreme right’s usage of Discord servers:

Discord is a free service accessible via phones and computers. It allows users to talk to each other in real time via voice, text or video chat and emerged in 2015 as a platform designed to assist gamers in communicating with each other while playing video games. The popularity of the platform has surged in recent years, and it is currently estimated to have 140 million monthly active users.

Chatrooms – known as servers - in the platform can be created by anyone, and they are used for a range of purposes that extend far beyond gaming. Such purposes include the discussion of extreme right-wing ideologies and the planning of offline extremist activity. Ahead of the far-right Unite the Right rally in Charlottesville, Virginia, in August 2017, organisers used Discord to plan and promote events and posted swastikas and praised Hitler in chat rooms with names like “National Socialist Army” and “Führer’s Gas Chamber”.


In this example a Discord server was used to organise the 2017 Charlottesville Unite the Right rally. Chat rooms in the server were used to discuss different topics related to the rally (T0057: Organise Events, T0126.002: Facilitate Logistics or Support for Attendance, T0151.004: Chat Platform, T0151.005: Chat Community Server, T0151.006: Chat Room).

Another primary activity engaged in the servers analysed are raids against other servers associated with political opponents, and in particular those that appear to be pro-LGBTQ. Raids are a phenomenon in which a small group of users will join a Discord server with the sole purpose of spamming the host with offensive or incendiary messages and content with the aim of upsetting local users or having the host server banned by Discord. On two servers examined here, raiding was their primary function.

Among servers devoted to this activity, specific channels were often created to host links to servers that users were then encouraged to raid. Users are encouraged to be as offensive as possible with the aim of upsetting or angering users on the raided server, and channels often had content banks of offensive memes and content to be shared on raided servers.

The use of raids demonstrates the gamified nature of extremist activity on Discord, where use of the platform and harassment of political opponents is itself turned into a type of real-life video game designed to strengthen in-group affiliation. This combined with the broader extremist activity identified in these channels suggests that the combative activity of raiding could provide a pathway for younger people to become more engaged with extremist activity.


Discord servers were used by members of the extreme right to coordinate harassment of targeted communities (T0048: Harass, T0049.005: Conduct Swarming, T0151.004: Chat Platform, T0151.005: Chat Community Server). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0151.005: Chat Community Server + +**Summary**: Chat Platforms such as Discord, Slack, and Microsoft Teams allow users to create their own Chat Community Servers, which they can invite other platform users to join.

Chat Community Servers are online communities made up of Chat Rooms (or “Channels”) in which users can discuss the given group’s topic. Groups can either be public (shown in the server’s browsable list of channels, available for any member to view and join) or Gated (users must be added to the chat group by existing members to participate).

Some Chat Community Servers allow users to create Chat Broadcast Groups, in which only specific members (e.g. server administrators) of the chat are able to post new content to the group. + **Tactic**: TA07 Select Channels and Affordances @@ -20,4 +64,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0151.006.md b/generated_pages/techniques/T0151.006.md index cabaa34..d2db0bc 100644 --- a/generated_pages/techniques/T0151.006.md +++ b/generated_pages/techniques/T0151.006.md @@ -2,6 +2,50 @@ **Summary**: Many platforms which enable community interaction allow users to create Chat Rooms; a room in which members of the group can talk to each other via text, audio, or video.

Most Chat Rooms are Gated; users must be added to the Chat Room before they can post to it or view its content. For example, on WhatsApp a user can create a Chat Room containing other WhatsApp users whose contact information they have. At this point the user who created the Chat Room has an Administrator Account; they are uniquely able to add other users to the Chat Room.

However, Chat Rooms made on Chat Community Servers such as Discord can be Gated or open. If left open, anyone on the server can view the Chat Room (“channel”), read its contents, and choose to join it.

Examples of Platforms which allow creation of Chat Rooms include:
Instagram, Facebook, X (prev. Twitter) (Group Direct Messaging)
WhatsApp, Telegram, WeChat, Signal (Group Chats)
Discord, Slack, Mattermost, Microsoft Teams (Channels) +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0151 Digital Community Hosting Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00122 The Extreme Right on Discord](../../generated_pages/incidents/I00122.md) | Discord is an example of a T0151.004: Chat Platform, which allows users to create their own T0151.005: Chat Community Server. The Institute for Strategic Dialog (ISD) conducted an investigation into the extreme right’s usage of Discord servers:

Discord is a free service accessible via phones and computers. It allows users to talk to each other in real time via voice, text or video chat and emerged in 2015 as a platform designed to assist gamers in communicating with each other while playing video games. The popularity of the platform has surged in recent years, and it is currently estimated to have 140 million monthly active users.

Chatrooms – known as servers - in the platform can be created by anyone, and they are used for a range of purposes that extend far beyond gaming. Such purposes include the discussion of extreme right-wing ideologies and the planning of offline extremist activity. Ahead of the far-right Unite the Right rally in Charlottesville, Virginia, in August 2017, organisers used Discord to plan and promote events and posted swastikas and praised Hitler in chat rooms with names like “National Socialist Army” and “Führer’s Gas Chamber”.


In this example a Discord server was used to organise the 2017 Charlottesville Unite the Right rally. Chat rooms in the server were used to discuss different topics related to the rally (T0057: Organise Events, T0126.002: Facilitate Logistics or Support for Attendance, T0151.004: Chat Platform, T0151.005: Chat Community Server, T0151.006: Chat Room).

Another primary activity engaged in the servers analysed are raids against other servers associated with political opponents, and in particular those that appear to be pro-LGBTQ. Raids are a phenomenon in which a small group of users will join a Discord server with the sole purpose of spamming the host with offensive or incendiary messages and content with the aim of upsetting local users or having the host server banned by Discord. On two servers examined here, raiding was their primary function.

Among servers devoted to this activity, specific channels were often created to host links to servers that users were then encouraged to raid. Users are encouraged to be as offensive as possible with the aim of upsetting or angering users on the raided server, and channels often had content banks of offensive memes and content to be shared on raided servers.

The use of raids demonstrates the gamified nature of extremist activity on Discord, where use of the platform and harassment of political opponents is itself turned into a type of real-life video game designed to strengthen in-group affiliation. This combined with the broader extremist activity identified in these channels suggests that the combative activity of raiding could provide a pathway for younger people to become more engaged with extremist activity.


Discord servers were used by members of the extreme right to coordinate harassment of targeted communities (T0048: Harass, T0049.005: Conduct Swarming, T0151.004: Chat Platform, T0151.005: Chat Community Server). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0151.006: Chat Room + +**Summary**: Many platforms which enable community interaction allow users to create Chat Rooms; a room in which members of the group can talk to each other via text, audio, or video.

Most Chat Rooms are Gated; users must be added to the Chat Room before they can post to it or view its content. For example, on WhatsApp a user can create a Chat Room containing other WhatsApp users whose contact information they have. At this point the user who created the Chat Room has an Administrator Account; they are uniquely able to add other users to the Chat Room.

However, Chat Rooms made on Chat Community Servers such as Discord can be Gated or open. If left open, anyone on the server can view the Chat Room (“channel”), read its contents, and choose to join it.

Examples of Platforms which allow creation of Chat Rooms include:
Instagram, Facebook, X (prev. Twitter) (Group Direct Messaging)
WhatsApp, Telegram, WeChat, Signal (Group Chats)
Discord, Slack, Mattermost, Microsoft Teams (Channels) + +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0151 Digital Community Hosting Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00122 The Extreme Right on Discord](../../generated_pages/incidents/I00122.md) | Discord is an example of a T0151.004: Chat Platform, which allows users to create their own T0151.005: Chat Community Server. The Institute for Strategic Dialog (ISD) conducted an investigation into the extreme right’s usage of Discord servers:

Discord is a free service accessible via phones and computers. It allows users to talk to each other in real time via voice, text or video chat and emerged in 2015 as a platform designed to assist gamers in communicating with each other while playing video games. The popularity of the platform has surged in recent years, and it is currently estimated to have 140 million monthly active users.

Chatrooms – known as servers - in the platform can be created by anyone, and they are used for a range of purposes that extend far beyond gaming. Such purposes include the discussion of extreme right-wing ideologies and the planning of offline extremist activity. Ahead of the far-right Unite the Right rally in Charlottesville, Virginia, in August 2017, organisers used Discord to plan and promote events and posted swastikas and praised Hitler in chat rooms with names like “National Socialist Army” and “Führer’s Gas Chamber”.


In this example a Discord server was used to organise the 2017 Charlottesville Unite the Right rally. Chat rooms in the server were used to discuss different topics related to the rally (T0057: Organise Events, T0126.002: Facilitate Logistics or Support for Attendance, T0151.004: Chat Platform, T0151.005: Chat Community Server, T0151.006: Chat Room).

Another primary activity engaged in the servers analysed are raids against other servers associated with political opponents, and in particular those that appear to be pro-LGBTQ. Raids are a phenomenon in which a small group of users will join a Discord server with the sole purpose of spamming the host with offensive or incendiary messages and content with the aim of upsetting local users or having the host server banned by Discord. On two servers examined here, raiding was their primary function.

Among servers devoted to this activity, specific channels were often created to host links to servers that users were then encouraged to raid. Users are encouraged to be as offensive as possible with the aim of upsetting or angering users on the raided server, and channels often had content banks of offensive memes and content to be shared on raided servers.

The use of raids demonstrates the gamified nature of extremist activity on Discord, where use of the platform and harassment of political opponents is itself turned into a type of real-life video game designed to strengthen in-group affiliation. This combined with the broader extremist activity identified in these channels suggests that the combative activity of raiding could provide a pathway for younger people to become more engaged with extremist activity.


Discord servers were used by members of the extreme right to coordinate harassment of targeted communities (T0048: Harass, T0049.005: Conduct Swarming, T0151.004: Chat Platform, T0151.005: Chat Community Server). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0151.006: Chat Room + +**Summary**: Many platforms which enable community interaction allow users to create Chat Rooms; a room in which members of the group can talk to each other via text, audio, or video.

Most Chat Rooms are Gated; users must be added to the Chat Room before they can post to it or view its content. For example, on WhatsApp a user can create a Chat Room containing other WhatsApp users whose contact information they have. At this point the user who created the Chat Room has an Administrator Account; they are uniquely able to add other users to the Chat Room.

However, Chat Rooms made on Chat Community Servers such as Discord can be Gated or open. If left open, anyone on the server can view the Chat Room (“channel”), read its contents, and choose to join it.

Examples of Platforms which allow creation of Chat Rooms include:
Instagram, Facebook, X (prev. Twitter) (Group Direct Messaging)
WhatsApp, Telegram, WeChat, Signal (Group Chats)
Discord, Slack, Mattermost, Microsoft Teams (Channels) + **Tactic**: TA07 Select Channels and Affordances @@ -20,4 +64,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0151.007.md b/generated_pages/techniques/T0151.007.md index dfd2b2d..f845f8a 100644 --- a/generated_pages/techniques/T0151.007.md +++ b/generated_pages/techniques/T0151.007.md @@ -2,6 +2,49 @@ **Summary**: A Chat Broadcast Group is a type of Chat Group in which only specific members can send content to the channel (typically administrators, or approved group members). Members of the channel may be able to react to content, or comment on it, but can’t directly push new content to the channel.

Examples include:
WhatsApp, Telegram, Discord: Chat Groups in which only admins are able to post new content.
X (prev. Twitter): Spaces (an audio discussion hosting feature) in which admins control who can speak at a given moment. +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0151 Digital Community Hosting Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0151.007: Chat Broadcast Group + +**Summary**: A Chat Broadcast Group is a type of Chat Group in which only specific members can send content to the channel (typically administrators, or approved group members). Members of the channel may be able to react to content, or comment on it, but can’t directly push new content to the channel.

Examples include:
WhatsApp, Telegram, Discord: Chat Groups in which only admins are able to post new content.
X (prev. Twitter): Spaces (an audio discussion hosting feature) in which admins control who can speak at a given moment. + +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0151 Digital Community Hosting Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00113 Inside the Shadowy World of Disinformation for Hire in Kenya](../../generated_pages/incidents/I00113.md) | Researchers at Mozilla examined influence operations targeting Kenyan citizens on Twitter in 2021, providing “a grim window into the booming and shadowy industry of Twitter influencers for political hire here in Kenya”, and giving insight into operations’ operationalisation:

In our interviews with one of the influencers, they informed us of the agile tactics they use to organize and avoid detection. For example, when it’s time to carry out the campaign the influencers would be added to a Whatsapp group. Here, they received direction about what to post, the hashtags to use, which tweets to engage with and who to target. Synchronizing the tweets was also incredibly important for them. It’s what enables them to achieve their goal of trending on Twitter and gain amplification.

[...]

They revealed to us that those participating in the exercise are paid roughly between $10 and $15 to participate in three campaigns per day. Each campaign execution involves tweeting about the hashtags of the day until it appears on the trending section of Twitter. Additionally, some individuals have managed to reach retainer level and get paid about $250 per month. Their job is to make sure the campaigns are executed on a day-by-day basis with different hashtags.


An M-PESA account (T0148.002: Bank Account Asset, T0148.001: Online Banking Platform) was used to pay campaign participants.

Participants were organised in WhatsApp groups (T0129.005: Coordinate on Encrypted/Closed Networks, T0151.007: Chat Broadcast Group, T0151.004: Chat Platform), in which they planned how to get campaign content trending on Twitter (T0121: Manipulate Platform Algorithm, T0151.008: Microblogging Platform). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0151.007: Chat Broadcast Group + +**Summary**: A Chat Broadcast Group is a type of Chat Group in which only specific members can send content to the channel (typically administrators, or approved group members). Members of the channel may be able to react to content, or comment on it, but can’t directly push new content to the channel.

Examples include:
WhatsApp, Telegram, Discord: Chat Groups in which only admins are able to post new content.
X (prev. Twitter): Spaces (an audio discussion hosting feature) in which admins control who can speak at a given moment. + **Tactic**: TA07 Select Channels and Affordances @@ -20,4 +63,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0151.008.md b/generated_pages/techniques/T0151.008.md index 01a6af4..b5248e2 100644 --- a/generated_pages/techniques/T0151.008.md +++ b/generated_pages/techniques/T0151.008.md @@ -2,6 +2,60 @@ **Summary**: Examples of Microblogging Platforms include TikTok, Threads, Bluesky, Mastodon, QQ, Tumblr, and X (formerly Twitter).

Microblogging Platforms allow users to create Accounts, which they can configure to present themselves to other platform users. This typically involves Establishing Account Imagery and Presenting a Persona.

Accounts on Microblogging Platforms are able to post short-form text content alongside media.

Content posted to the platforms is aggregated into different feeds and presented to the user. Typical feeds include content posted by other Accounts which the user follows, and content promoted by the platform’s proprietary Content Recommendation Algorithm. Users can also search or use hashtags to discover new content.
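As a hedged sketch of how such feeds can be read programmatically (Mastodon, introduced below, is used here because its public timeline is served over an unauthenticated REST endpoint on most instances; the instance name is an assumption):

```python
import requests

INSTANCE = "https://mastodon.social"  # example instance, assumed reachable

# Fetch the five most recent posts on the instance's public feed.
resp = requests.get(f"{INSTANCE}/api/v1/timelines/public", params={"limit": 5})
resp.raise_for_status()

for status in resp.json():
    # "acct" includes an @domain suffix for posts federated in from
    # other platforms in the fediverse.
    print(status["account"]["acct"], "->", status["url"])
```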

Mastodon is open-source, decentralised software which allows anyone to create their own Microblogging Platform that can communicate with other platforms within the “fediverse” (similar to how different email platforms can send emails to each other). Meta’s Threads is a Microblogging Platform which can interact with the fediverse. +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0151 Digital Community Hosting Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00097 Report: Not Just Algorithms](../../generated_pages/incidents/I00097.md) | This report explores the role of four systems (recommender systems, content moderation systems, ad approval systems and ad management systems) in creating risks around eating disorders.

[...]

Content recommender systems can create risks. We created and primed ‘fake’ accounts for 16-year old Australians and found that some recommender systems will promote pro-eating disorder content to children.

Specifically: On TikTok, 0% of the content recommended was classified as pro-eating disorder content; On Instagram, 23% of the content recommended was classified as pro-eating disorder content; On X, 67% of content recommended was classified as pro-eating disorder content (and disturbingly, another 13% displayed self-harm imagery).


Content recommendation algorithms developed by Instagram (T0151.001: Social Media Platform, T0153.006: Content Recommendation Algorithm) and X (T0151.008: Microblogging Platform, T0153.006: Content Recommendation Algorithm) promoted harmful content to an account presenting as a 16-year-old Australian. | +| [I00113 Inside the Shadowy World of Disinformation for Hire in Kenya](../../generated_pages/incidents/I00113.md) | Researchers at Mozilla examined influence operations targeting Kenyan citizens on Twitter in 2021, providing “a grim window into the booming and shadowy industry of Twitter influencers for political hire here in Kenya”, and giving insight into operations’ operationalisation:

In our interviews with one of the influencers, they informed us of the agile tactics they use to organize and avoid detection. For example, when it’s time to carry out the campaign the influencers would be added to a Whatsapp group. Here, they received direction about what to post, the hashtags to use, which tweets to engage with and who to target. Synchronizing the tweets was also incredibly important for them. It’s what enables them to achieve their goal of trending on Twitter and gain amplification.

[...]

They revealed to us that those participating in the exercise are paid roughly between $10 and $15 to participate in three campaigns per day. Each campaign execution involves tweeting about the hashtags of the day until it appears on the trending section of Twitter. Additionally, some individuals have managed to reach retainer level and get paid about $250 per month. Their job is to make sure the campaigns are executed on a day-by-day basis with different hashtags.


An M-PESA account (T0148.002: Bank Account Asset, T0148.001: Online Banking Platform) was used to pay campaign participants.

Participants were organised in WhatsApp groups (T0129.005: Coordinate on Encrypted/Closed Networks, T0151.007: Chat Broadcast Group, T0151.004: Chat Platform), in which they planned how to get campaign content trending on Twitter (T0121: Manipulate Platform Algorithm, T0151.008: Microblogging Platform). | +| [I00120 factcheckUK or fakecheckUK? Reinventing the political faction as the impartial factchecker](../../generated_pages/incidents/I00120.md) | Ahead of the 2019 UK Election during a leader’s debate, the Conservative party rebranded their “Conservative Campaign Headquarters Press” account to “FactCheckUK”:

The evening of the 19th November 2019 saw the first of three Leaders’ Debates on ITV, starting at 8pm and lasting for an hour. Current Prime Minister and leader of the Conservatives, Boris Johnson faced off against Labour party leader, Jeremy Corbyn. Plenty of people will have been watching the debate live, but a good proportion were “watching” (er, “twitching”?) via Twitter. This is something I’ve done in the past for certain shows. In some cases I just can’t watch or listen, but I can read, and in other cases, the commentary is far more interesting and entertaining than the show itself will ever be. This, for me, is just such a case. But very quickly, all eyes turned upon a modestly sized account with the handle @CCHQPress. That’s short for Conservative Campaign Headquarters Press. According to their (current!) Twitter bio, they are based in Westminster and they provide “snippets of news and commentary from CCHQ” to their 75k followers.

That is, until a few minutes into the debate.

All at once, like a person throwing off their street clothes to reveal some sinister new identity underneath, @CCHQPress abruptly shed its name, blue Conservative logo, Boris Johnson banner, and bio description. Moments later, it had entirely reinvented itself.

The purple banner was emblazoned with white font that read “✓ factcheckUK [with a “FROM CCHQ” subheading]”.

The matching profile picture was a white tick in a purple circle. The bio was updated to: “Fact checking Labour from CCHQ”. And the name now read factcheckUK, with the customary Twitter blue (or white depending on your phone settings!) validation tick still after it


In this example an existing verified social media account on Twitter was repurposed to inauthentically present itself as a Fact Checking service (T0151.008: Microblogging Platform, T0150.003: Pre-Existing Asset, T0146.003: Verified Account Asset, T0097.203: Fact Checking Organisation Persona, T0143.002: Fabricated Persona). | +| [I00125 The Agency](../../generated_pages/incidents/I00125.md) | In 2014 threat actors attributed to Russia spread the false narrative that a local chemical plant had leaked toxic fumes. This report discusses aspects of the operation:

[The chemical plant leak] hoax was just one in a wave of similar attacks during the second half of last year. On Dec. 13, two months after a handful of Ebola cases in the United States touched off a minor media panic, many of the same Twitter accounts used to spread the Columbian Chemicals hoax began to post about an outbreak of Ebola in Atlanta. [...] Again, the attention to detail was remarkable, suggesting a tremendous amount of effort. A YouTube video showed a team of hazmat-suited medical workers transporting a victim from the airport. Beyoncé’s recent single “7/11” played in the background, an apparent attempt to establish the video’s contemporaneity. A truck in the parking lot sported the logo of the Hartsfield-Jackson Atlanta International Airport.

Accounts which previously presented as Louisiana locals were repurposed for use in a different campaign, this time presenting as locals to Atlanta, a place over 500 miles away from Louisiana and in a different timezone (T0146: Account Asset, T0097.101: Local Persona, T0143.002: Fabricated Persona, T0151.008: Microblogging Platform, T0150.004: Repurposed Asset).

A video was created which appeared to support the campaign’s narrative (T0087: Develop Video-Based Content), with great attention given to small details which made the video appear more legitimate. | +| [I00129 Teen who hacked Joe Biden and Bill Gates' Twitter accounts sentenced to three years in prison](../../generated_pages/incidents/I00129.md) | An 18-year-old hacker who pulled off a huge breach in 2020, infiltrating several high profile Twitter accounts to solicit bitcoin transactions, has agreed to serve three years in prison for his actions.

Graham Ivan Clark, of Florida, was 17 years old at the time of the hack in July, during which he took over a number of major accounts including those of Joe Biden, Bill Gates and Kim Kardashian West.

Once he accessed them, Clark tweeted a link to a bitcoin address and wrote “all bitcoin sent to our address below will be sent back to you doubled!” According to court documents, Clark made more than $100,000 from the scheme, which his lawyers say he has since returned.

Clark was able to access the accounts after convincing an employee at Twitter he worked in the company’s information technology department, according to the Tampa Bay Times.


In this example a threat actor gained access to Twitter’s customer service portal through social engineering (T0146.004: Administrator Account Asset, T0150.005: Compromised Asset, T0151.008: Microblogging Platform), which they used to take over accounts of public figures (T0146.003: Verified Account Asset, T0143.003: Impersonated Persona, T0150.005: Compromised Asset, T0151.008: Microblogging Platform).

The threat actor used these compromised accounts to trick their followers into sending bitcoin to their wallet (T0148.009: Cryptocurrency Wallet). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0151.008: Microblogging Platform + +**Summary**: Examples of Microblogging Platforms include TikTok, Threads, Bluesky, Mastodon, QQ, Tumblr, and X (formerly Twitter).

Microblogging Platforms allow users to create Accounts, which they can configure to present themselves to other platform users. This typically involves Establishing Account Imagery and Presenting a Persona.

Accounts on Microblogging Platforms are able to post short-form text content alongside media.

Content posted to the platforms is aggregated into different feeds and presented to the user. Typical feeds include content posted by other Accounts which the user follows, and content promoted by the platform’s proprietary Content Recommendation Algorithm. Users can also search or use hashtags to discover new content.

Mastodon is open-source, decentralised software which allows anyone to create their own Microblogging Platform that can communicate with other platforms within the “fediverse” (similar to how different email platforms can send emails to each other). Meta’s Threads is a Microblogging Platform which can interact with the fediverse. + +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0151 Digital Community Hosting Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00097 Report: Not Just Algorithms](../../generated_pages/incidents/I00097.md) | This report explores the role of four systems (recommender systems, content moderation systems, ad approval systems and ad management systems) in creating risks around eating disorders.

[...]

Content recommender systems can create risks. We created and primed ‘fake’ accounts for 16-year old Australians and found that some recommender systems will promote pro-eating disorder content to children.

Specifically: On TikTok, 0% of the content recommended was classified as pro-eating disorder content; On Instagram, 23% of the content recommended was classified as pro-eating disorder content; On X, 67% of content recommended was classified as pro-eating disorder content (and disturbingly, another 13% displayed self-harm imagery).


Content recommendation algorithms developed by Instagram (T0151.001: Social Media Platform, T0153.006: Content Recommendation Algorithm) and X (T0151.008: Microblogging Platform, T0153.006: Content Recommendation Algorithm) promoted harmful content to an account presenting as a 16-year-old Australian. | +| [I00109 Coordinated Facebook Pages Designed to Fund a White Supremacist Agenda](../../generated_pages/incidents/I00109.md) | This report examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at the Suavelos website, and the content it links out to.

In going back to Suavelos’ main page, we also found: A link to a page on a web shop: alabastro.eu; A link to a page to donate money to the founders through Tipee and to the website through PayPal; [and] a link to a private forum that gathers 3.000 members: oppidum.suavelos.eu;

Suavelos linked out to an online store which it controlled (T0152.004: Website Asset, T0148.004: Payment Processing Capability), and to accounts on payment processing platforms PayPal and Tipee (T0146: Account Asset, T0148.003: Payment Processing Platform).

The Suavelos website also hosted a private forum (T0151.009: Legacy Online Forum Platform, T0155: Gated Asset), and linked out to a variety of assets it controlled on other online platforms: accounts on Twitter (T0146: Account Asset, T0151.008: Microblogging Platform), YouTube (T0146: Account Asset, T0152.006: Video Platform), Instagram and VKontakte (T0146: Account Asset, T0151.001: Social Media Platform). | +| [I00113 Inside the Shadowy World of Disinformation for Hire in Kenya](../../generated_pages/incidents/I00113.md) | Researchers at Mozilla examined influence operations targeting Kenyan citizens on Twitter in 2021, providing “a grim window into the booming and shadowy industry of Twitter influencers for political hire here in Kenya”, and giving insight into operations’ operationalisation:

In our interviews with one of the influencers, they informed us of the agile tactics they use to organize and avoid detection. For example, when it’s time to carry out the campaign the influencers would be added to a Whatsapp group. Here, they received direction about what to post, the hashtags to use, which tweets to engage with and who to target. Synchronizing the tweets was also incredibly important for them. It’s what enables them to achieve their goal of trending on Twitter and gain amplification.

[...]

They revealed to us that those participating in the exercise are paid roughly between $10 and $15 to participate in three campaigns per day. Each campaign execution involves tweeting about the hashtags of the day until it appears on the trending section of Twitter. Additionally, some individuals have managed to reach retainer level and get paid about $250 per month. Their job is to make sure the campaigns are executed on a day-by-day basis with different hashtags.


An M-PESA account (T0148.002: Bank Account Asset, T0148.001: Online Banking Platform) was used to pay campaign participants.

Participants were organised in WhatsApp groups (T0129.005: Coordinate on Encrypted/Closed Networks, T0151.007: Chat Broadcast Group, T0151.004: Chat Platform), in which they planned how to get campaign content trending on Twitter (T0121: Manipulate Platform Algorithm, T0151.008: Microblogging Platform). | +| [I00116 Blue-tick scammers target consumers who complain on X](../../generated_pages/incidents/I00116.md) | Consumers who complain of poor customer service on X are being targeted by scammers after the social media platform formerly known as Twitter changed its account verification process.

Bank customers and airline passengers are among those at risk of phishing scams when they complain to companies via X. Fraudsters, masquerading as customer service agents, respond under fake X handles and trick victims into disclosing their bank details to get a promised refund.

They typically win the trust of victims by displaying the blue checkmark icon, which until this year denoted accounts that had been officially verified by X.

Changes introduced this year allow the icon to be bought by anyone who pays an £11 monthly fee for the site’s subscription service, renamed this month from Twitter Blue to X Premium. Businesses that pay £950 a month receive a gold tick. X’s terms and conditions do not state whether subscriber accounts are pre-vetted.

Andrew Thomas was contacted by a scam account after posting a complaint to the travel platform Booking.com. “I’d been trying since April to get a refund after our holiday flights were cancelled and finally resorted to X,” he said.

“I received a response asking me to follow them, and DM [direct message] them with a contact number. They then called me via WhatsApp asking for my reference number so they could investigate. Later they called back to say that I would be refunded via their payment partner for which I’d need to download an app.”

Thomas became suspicious and checked the X profile. “It looked like the real thing, but I noticed that there was an unexpected hyphen in the Twitter handle and that it had only joined X in July 2023,” he said.


In this example a newly created paid account was created on X, used to direct users to other platforms (T0146.002: Paid Account Asset, T0146.003: Verified Account Asset, T0146.005: Lookalike Account ID, T0097.205: Business Persona, T0122: Direct Users to Alternative Platforms, T0143.003: Impersonated Persona, T0151.008: Microblogging Platform, T0150.001: Newly Created Asset). | +| [I00120 factcheckUK or fakecheckUK? Reinventing the political faction as the impartial factchecker](../../generated_pages/incidents/I00120.md) | Ahead of the 2019 UK Election during a leader’s debate, the Conservative party rebranded their “Conservative Campaign Headquarters Press” account to “FactCheckUK”:

The evening of the 19th November 2019 saw the first of three Leaders’ Debates on ITV, starting at 8pm and lasting for an hour. Current Prime Minister and leader of the Conservatives, Boris Johnson faced off against Labour party leader, Jeremy Corbyn. Plenty of people will have been watching the debate live, but a good proportion were “watching” (er, “twitching”?) via Twitter. This is something I’ve done in the past for certain shows. In some cases I just can’t watch or listen, but I can read, and in other cases, the commentary is far more interesting and entertaining than the show itself will ever be. This, for me, is just such a case. But very quickly, all eyes turned upon a modestly sized account with the handle @CCHQPress. That’s short for Conservative Campaign Headquarters Press. According to their (current!) Twitter bio, they are based in Westminster and they provide “snippets of news and commentary from CCHQ” to their 75k followers.

That is, until a few minutes into the debate.

All at once, like a person throwing off their street clothes to reveal some sinister new identity underneath, @CCHQPress abruptly shed its name, blue Conservative logo, Boris Johnson banner, and bio description. Moments later, it had entirely reinvented itself.

The purple banner was emblazoned with white font that read “✓ factcheckUK [with a “FROM CCHQ” subheading]”.

The matching profile picture was a white tick in a purple circle. The bio was updated to: “Fact checking Labour from CCHQ”. And the name now read factcheckUK, with the customary Twitter blue (or white depending on your phone settings!) validation tick still after it


In this example an existing verified social media account on Twitter was repurposed to inauthentically present itself as a Fact Checking service (T0151.008: Microblogging Platform, T0150.003: Pre-Existing Asset, T0146.003: Verified Account Asset, T0097.203: Fact Checking Organisation Persona, T0143.002: Fabricated Persona). | +| [I00125 The Agency](../../generated_pages/incidents/I00125.md) | In 2014 threat actors attributed to Russia spread the false narrative that a local chemical plant had leaked toxic fumes. This report discusses aspects of the operation:

[The chemical plant leak] hoax was just one in a wave of similar attacks during the second half of last year. On Dec. 13, two months after a handful of Ebola cases in the United States touched off a minor media panic, many of the same Twitter accounts used to spread the Columbian Chemicals hoax began to post about an outbreak of Ebola in Atlanta. [...] Again, the attention to detail was remarkable, suggesting a tremendous amount of effort. A YouTube video showed a team of hazmat-suited medical workers transporting a victim from the airport. Beyoncé’s recent single “7/11” played in the background, an apparent attempt to establish the video’s contemporaneity. A truck in the parking lot sported the logo of the Hartsfield-Jackson Atlanta International Airport.

Accounts which previously presented as Louisiana locals were repurposed for use in a different campaign, this time presenting as locals to Atlanta, a place over 500 miles away from Louisiana and in a different timezone (T0146: Account Asset, T0097.101: Local Persona, T0143.002: Fabricated Persona, T0151.008: Microblogging Platform, T0150.004: Repurposed Asset).

A video was created which appeared to support the campaign’s narrative (T0087: Develop Video-Based Content), with great attention given to small details which made the video appear more legitimate. | +| [I00129 Teen who hacked Joe Biden and Bill Gates' Twitter accounts sentenced to three years in prison](../../generated_pages/incidents/I00129.md) | An 18-year-old hacker who pulled off a huge breach in 2020, infiltrating several high profile Twitter accounts to solicit bitcoin transactions, has agreed to serve three years in prison for his actions.

Graham Ivan Clark, of Florida, was 17 years old at the time of the hack in July, during which he took over a number of major accounts including those of Joe Biden, Bill Gates and Kim Kardashian West.

Once he accessed them, Clark tweeted a link to a bitcoin address and wrote “all bitcoin sent to our address below will be sent back to you doubled!” According to court documents, Clark made more than $100,000 from the scheme, which his lawyers say he has since returned.

Clark was able to access the accounts after convincing an employee at Twitter he worked in the company’s information technology department, according to the Tampa Bay Times.


In this example a threat actor gained access to Twitter’s customer service portal through social engineering (T0146.004: Administrator Account Asset, T0150.005: Compromised Asset, T0151.008: Microblogging Platform), which they used to take over accounts of public figures (T0146.003: Verified Account Asset, T0143.003: Impersonated Persona, T0150.005: Compromised Asset, T0151.008: Microblogging Platform).

The threat actor used these compromised accounts to trick their followers into sending bitcoin to their wallet (T0148.009: Cryptocurrency Wallet). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0151.008: Microblogging Platform + +**Summary**: Examples of Microblogging Platforms include TikTok, Threads, Bluesky, Mastodon, QQ, Tumblr, and X (formerly Twitter).

Microblogging Platforms allow users to create Accounts, which they can configure to present themselves to other platform users. This typically involves Establishing Account Imagery and Presenting a Persona.

Accounts on Microblogging Platforms are able to post short-form text content alongside media.

Content posted to the platforms is aggregated into different feeds and presented to the user. Typical feeds include content posted by other Accounts which the user follows, and content promoted by the platform’s proprietary Content Recommendation Algorithm. Users can also search or use hashtags to discover new content.

Mastodon is open-source, decentralised software which allows anyone to create their own Microblogging Platform that can communicate with other platforms within the “fediverse” (similar to how different email platforms can send emails to each other). Meta’s Threads is a Microblogging Platform which can interact with the fediverse. + **Tactic**: TA07 Select Channels and Affordances @@ -26,4 +80,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0151.009.md b/generated_pages/techniques/T0151.009.md index 4d4fc34..39774da 100644 --- a/generated_pages/techniques/T0151.009.md +++ b/generated_pages/techniques/T0151.009.md @@ -2,6 +2,53 @@ **Summary**: Examples of Legacy Online Forum Platforms include Something Awful (SA Forums), Ars Technica forums, and NeoGAF, and the forums available on the Mumsnet and War Thunder websites.

Legacy Online Forum Platforms are a type of message board (using software such as vBulletin or phpBB) popular in the early 2000s for online communities. They are often used to provide spaces for a community to exist around a given website or topic.

Legacy Online Forum Platforms allow users to create Accounts to join in discussion threads posted to any number of Forums and Sub-Forums on the platform. Forums and Sub-Forums can be Gated, allowing access to approved users only. Platforms vary in size: some host a wide set of topics and communities, while others are smaller in scope. +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0151 Digital Community Hosting Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00105 Gaming the System: The Use of Gaming-Adjacent Communication, Game and Mod Platforms by Extremist Actors](../../generated_pages/incidents/I00105.md) | In this report, researchers look at online platforms commonly used by people who play videogames, looking at how these platforms can contribute to radicalisation of gamers:

Gamer Uprising Forums (GUF) [is an online discussion platform using the classic forum structure] aimed directly at gamers. It is run by US Neo-Nazi Andrew Anglin and explicitly targets politically right-wing gamers. This forum mainly includes antisemitic, sexist, and racist topics, but also posts on related issues such as esotericism, conspiracy narratives, pro-Russian propaganda, alternative medicine, Christian religion, content related to the incel- and manosphere, lists of criminal offences committed by non-white people, links to right-wing news sites, homophobia and trans-hostility, troll guides, anti-leftism, ableism and much more. Most noticeable were the high number of antisemitic references. For example, there is a thread with hundreds of machine-generated images, most of which feature openly antisemitic content and popular antisemitic references. Many users chose explicitly antisemitic avatars. Some of the usernames also provide clues to the users’ ideologies and profiles feature swastikas as a type of progress bar and indicator of the user’s activity in the forum.

The GUF’s front page contains an overview of the forum, user statistics, and so-called “announcements”. In addition to advice-like references, these feature various expressions of hateful ideologies. At the time of the exploration, the following could be read there: “Jews are the problem!”, “Women should be raped”, “The Jews are going to be required to return stolen property”, “Immigrants will have to be physically removed”, “Console gaming is for n******” and “Anger is a womanly emotion”. New users have to prove themselves in an area for newcomers referred to in imageboard slang as the “Newfag Barn”. Only when the newcomers’ posts have received a substantial number of likes from established users, are they allowed to post in other parts of the forum. It can be assumed that this will also lead to competitions to outdo each other in posting extreme content. However, it is always possible to view all posts and content on the site. In any case, it can be assumed that the platform hardly addresses milieus that are not already radicalised or at risk of radicalisation and is therefore deemed relevant for radicalisation research. However, the number of registered users is low (typical for radicalised milieus) and, hence, the platform may only be of interest when studying a small group of highly radicalised individuals.


Gamer Uprising Forum is a legacy online forum, with access gated behind approval of existing platform users (T0155.003: Approval Gated Asset, T0151.009: Legacy Online Forum Platform). | +| [I00118 ‘War Thunder’ players are once again leaking sensitive military technology information on a video game forum](../../generated_pages/incidents/I00118.md) | In an effort to prove that the developers behind a popular multiplayer vehicle combat game had made a mistake, a player went ahead and published classified British military documents about one of the real-life tanks featured in the game.

This truly bizarre turn of events recently occurred in the public forum for War Thunder, a free-to-player multiplayer combat sim featuring modern land, air, and sea craft. Getting a small detail wrong on a piece of equipment might not be a big deal for the average gamer, but for the War Thunder crowd it sure as hell is. With 25,000 devoted players, the game very much bills itself as the military vehicle combat simulator.

A player, who identified himself as a British tank commander, claimed that the game’s developers at Gaijin Entertainment had inaccurately represented the Challenger 2 main battle tank used by the British military.

The self-described tank commander’s bio listed his location as Tidworth Camp in Wiltshire, England, according to the UK Defense Journal, which reported that the base is home to the Royal Tank Regiment, which fields Challenger 2 tanks.

The player, who went by the handle Pyrophoric, reportedly shared an image on the War Thunder forum of the tank’s specs that were pulled from the Challenger 2’s Army Equipment Support Publication, which is essentially a technical manual.

[...]

A moderator for the forum, who’s handle is “Templar_”, explained that the developer had removed the material after they received confirmation from the Ministry of Defense that the document is still in fact classified.


A user of War Thunder’s forums posted confidential documents to win an argument (T0089.001: Obtain Authentic Documents, T0146: Account Asset, T0097.105: Military Personnel Persona, T0115: Post Content, T0143.001: Authentic Persona, T0151.009: Legacy Online Forum Platform). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0151.009: Legacy Online Forum Platform + +**Summary**: Examples of Legacy Online Forum Platforms include Something Awful (SA Forums), Ars Technica forums, and NeoGAF, and the forums available on the Mumsnet and War Thunder websites.

Legacy Online Forum Platforms are a type of message board (using software such as vBulletin or phpBB) popular in the early 2000s for online communities. They are often used to provide spaces for a community to exist around a given website or topic.

Legacy Online Forum Platforms allow users to create Accounts to join in discussion threads posted to any number of Forums and Sub-Forums on the platform. Forums and Sub-Forums can be Gated, allowing access to approved users only. Platforms vary in size: some host a wide set of topics and communities, while others are smaller in scope. + +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0151 Digital Community Hosting Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00105 Gaming the System: The Use of Gaming-Adjacent Communication, Game and Mod Platforms by Extremist Actors](../../generated_pages/incidents/I00105.md) | In this report, researchers look at online platforms commonly used by people who play videogames, looking at how these platforms can contribute to radicalisation of gamers:

Gamer Uprising Forums (GUF) [is an online discussion platform using the classic forum structure] aimed directly at gamers. It is run by US Neo-Nazi Andrew Anglin and explicitly targets politically right-wing gamers. This forum mainly includes antisemitic, sexist, and racist topics, but also posts on related issues such as esotericism, conspiracy narratives, pro-Russian propaganda, alternative medicine, Christian religion, content related to the incel- and manosphere, lists of criminal offences committed by non-white people, links to right-wing news sites, homophobia and trans-hostility, troll guides, anti-leftism, ableism and much more. Most noticeable were the high number of antisemitic references. For example, there is a thread with hundreds of machine-generated images, most of which feature openly antisemitic content and popular antisemitic references. Many users chose explicitly antisemitic avatars. Some of the usernames also provide clues to the users’ ideologies and profiles feature swastikas as a type of progress bar and indicator of the user’s activity in the forum.

The GUF’s front page contains an overview of the forum, user statistics, and so-called “announcements”. In addition to advice-like references, these feature various expressions of hateful ideologies. At the time of the exploration, the following could be read there: “Jews are the problem!”, “Women should be raped”, “The Jews are going to be required to return stolen property”, “Immigrants will have to be physically removed”, “Console gaming is for n******” and “Anger is a womanly emotion”. New users have to prove themselves in an area for newcomers referred to in imageboard slang as the “Newfag Barn”. Only when the newcomers’ posts have received a substantial number of likes from established users, are they allowed to post in other parts of the forum. It can be assumed that this will also lead to competitions to outdo each other in posting extreme content. However, it is always possible to view all posts and content on the site. In any case, it can be assumed that the platform hardly addresses milieus that are not already radicalised or at risk of radicalisation and is therefore deemed relevant for radicalisation research. However, the number of registered users is low (typical for radicalised milieus) and, hence, the platform may only be of interest when studying a small group of highly radicalised individuals.


Gamer Uprising Forum is a legacy online forum, with access gated behind approval of existing platform users (T0155.003: Approval Gated Asset, T0151.009: Legacy Online Forum Platform). | +| [I00109 Coordinated Facebook Pages Designed to Fund a White Supremacist Agenda](../../generated_pages/incidents/I00109.md) | This report examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at the Suavelos website, and the content it links out to.

In going back to Suavelos’ main page, we also found: A link to a page on a web shop: alabastro.eu; A link to a page to donate money to the founders through Tipee and to the website through PayPal; [and] a link to a private forum that gathers 3.000 members: oppidum.suavelos.eu;

Suavelos linked out to an online store which it controlled (T0152.004: Website Asset, T0148.004: Payment Processing Capability), and to accounts on payment processing platforms PayPal and Tipee (T0146: Account Asset, T0148.003: Payment Processing Platform).

The Suavelos website also hosted a private forum (T0151.009: Legacy Online Forum Platform, T0155: Gated Asset), and linked out to a variety of assets it controlled on other online platforms: accounts on Twitter (T0146: Account Asset, T0151.008: Microblogging Platform), YouTube (T0146: Account Asset, T0152.006: Video Platform), Instagram and VKontakte (T0146: Account Asset, T0151.001: Social Media Platform). | +| [I00118 ‘War Thunder’ players are once again leaking sensitive military technology information on a video game forum](../../generated_pages/incidents/I00118.md) | In an effort to prove that the developers behind a popular multiplayer vehicle combat game had made a mistake, a player went ahead and published classified British military documents about one of the real-life tanks featured in the game.

This truly bizarre turn of events recently occurred in the public forum for War Thunder, a free-to-player multiplayer combat sim featuring modern land, air, and sea craft. Getting a small detail wrong on a piece of equipment might not be a big deal for the average gamer, but for the War Thunder crowd it sure as hell is. With 25,000 devoted players, the game very much bills itself as the military vehicle combat simulator.

A player, who identified himself as a British tank commander, claimed that the game’s developers at Gaijin Entertainment had inaccurately represented the Challenger 2 main battle tank used by the British military.

The self-described tank commander’s bio listed his location as Tidworth Camp in Wiltshire, England, according to the UK Defense Journal, which reported that the base is home to the Royal Tank Regiment, which fields Challenger 2 tanks.

The player, who went by the handle Pyrophoric, reportedly shared an image on the War Thunder forum of the tank’s specs that were pulled from the Challenger 2’s Army Equipment Support Publication, which is essentially a technical manual.

[...]

A moderator for the forum, who’s handle is “Templar_”, explained that the developer had removed the material after they received confirmation from the Ministry of Defense that the document is still in fact classified.


A user of War Thunder’s forums posted confidential documents to win an argument (T0089.001: Obtain Authentic Documents, T0146: Account Asset, T0097.105: Military Personnel Persona, T0115: Post Content, T0143.001: Authentic Persona, T0151.009: Legacy Online Forum Platform). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0151.009: Legacy Online Forum Platform + +**Summary**: Examples of Legacy Online Forum Platforms include Something Awful (SA Forums), Ars Technica forums, and NeoGAF, and the forums available on the Mumsnet and War Thunder websites.

Legacy Online Forum Platforms are a type of message board (using software such as vBulletin or phpBB) popular in the early 2000s for online communities. They are often used to provide spaces for a community to exist around a given website or topic.

Legacy Online Forum Platforms allow users to create Accounts to join in discussion threads posted to any number of Forums and Sub-Forums on the platform. Forums and Sub-Forums can be Gated, allowing access to approved users only. Platforms vary in size: some host a wide set of topics and communities, while others are smaller in scope. + **Tactic**: TA07 Select Channels and Affordances @@ -22,4 +69,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0151.010.md b/generated_pages/techniques/T0151.010.md index c0ee61d..fcbde90 100644 --- a/generated_pages/techniques/T0151.010.md +++ b/generated_pages/techniques/T0151.010.md @@ -2,6 +2,48 @@ **Summary**: Reddit, Lemmy and Tildes are examples of Community Forum Platforms.

Community Forum Platforms are exemplified by users’ ability to create their own sub-communities (Community Sub-Forums) which other platform users can join.

Platform users can view aggregated content from all Community Sub-Forums they subscribe to, or they can view all content from a particular Community Sub-Forum. +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0151 Digital Community Hosting Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0151.010: Community Forum Platform + +**Summary**: Reddit, Lemmy and Tildes are examples of Community Forum Platforms.

Community Forum Platforms are exemplified by users’ ability to create their own sub-communities (Community Sub-Forums) which other platform users can join.

Platform users can view aggregated content from all Community Sub-Forums they subscribe to, or they can view all content from a particular Community Sub-Forum. + +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0151 Digital Community Hosting Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0151.010: Community Forum Platform + +**Summary**: Reddit, Lemmy and Tildes are examples of Community Forum Platforms.

Community Forum Platforms are exemplified by users’ ability to create their own sub-communities (Community Sub-Forums) which other platform users can join.

Platform users can view aggregated content from all Community Sub-Forums they subscribe to, or they can view all content from a particular Community Sub-Forum. + **Tactic**: TA07 Select Channels and Affordances @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0151.011.md b/generated_pages/techniques/T0151.011.md index bb8c959..f4a2886 100644 --- a/generated_pages/techniques/T0151.011.md +++ b/generated_pages/techniques/T0151.011.md @@ -2,6 +2,50 @@ **Summary**: Community Forum Platforms are made up of many Community Sub-Forums. Sub-Forums provide spaces for platform users to create a community based around any topic.

For example, Reddit (a popular Community Forum Platform) has over 138,000 “subreddits” (Community Sub-Forums), including 1082 unique cat-based communities.

Typically, Sub-Forums allow users to post text, images, or video, which other platform users can up/downvote or comment on. Sub-Forums may have their own extra rules alongside the platform’s global rules, enforced by community moderators.
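A minimal sketch of these affordances, using the PRAW library to read a Sub-Forum and the vote scores described above (the credentials are placeholders; a registered Reddit API application is assumed):

```python
import praw

# Placeholder credentials; a registered Reddit API application is assumed.
reddit = praw.Reddit(
    client_id="CLIENT_ID",
    client_secret="CLIENT_SECRET",
    user_agent="sub-forum-example/0.1",
)

# Each submission in a Community Sub-Forum ("subreddit") carries the net
# up/downvote score and comment count applied by other platform users.
for submission in reddit.subreddit("cats").hot(limit=5):
    print(submission.score, submission.num_comments, submission.title)
```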

While most Sub-Forums are made by users with Accounts on the Community Forum Platform, Sub-Forums can also be created by the platform itself. +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0151 Digital Community Hosting Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00101 Pro-Putin Disinformation Warriors Take War of Aggression to Reddit](../../generated_pages/incidents/I00101.md) | This report looks at changes in content posted to communities on Reddit (called Subreddits) after teams of voluntary moderators are replaced with what appear to be pro-Russian voices:

The r/antiwar subreddit appears to be a very recent takeover target. With 12,900 members it is not the largest community on Reddit, but this does place it squarely within the top 5% of all communities in terms of membership.

Three months ago a new moderator team was instated by subreddit head u/democracy101. Any posts documenting Russian aggression in Ukraine are now swiftly removed, while the board has been flooded with posts about how Ukraine is losing, or how American “neocons wrecked” the country.

The pinned post from moderator u/n0ahbody proclaims: “People who call for an end to Russian aggressions but not the Western aggressions Russia is reacting to don’t really want peace.” This user takes the view that any negative opinion about Russia is “shaped by what the fanatically Russophobic MSM wants you to think,” and that the United States is not threatened by its neighbors. Russia is.”

When u/n0ahbody took over the sub, the user posted a triumphant and vitriolic diatribe in another pro-Russia subreddit with some 33,500 members, r/EndlessWar. “We are making progress. We are purging the sub of all NAFO and NAFO-adjacent elements. Hundreds of them have been banned over the last 24 hours for various rule infractions, for being NAFO or NAFO-adjacent,” the user said, referencing the grassroots, pro-Ukrainian North Atlantic Fella Organization (NAFO) meme movement.

Several former users have reported they have indeed been banned from r/antiwar since the change in moderators. “If this subreddit cannot be explicitly against the invasion of Ukraine it will never truly be anti-war,” wrote one user Halcyon_Rein, in the antiwar subreddit on September 6. They then edited the post to say, “Edit: btw, I got f**king banned for this 💀💀💀”


A community hosted on Reddit was taken over by new moderators (T0151.011: Community Sub-Forum, T0150.005: Compromised Asset). These moderators removed content posted to the community which favoured Ukraine over Russia (T0146.004: Administrator Account Asset, T0151.011: Community Sub-Forum, T0124: Suppress Opposition). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0151.011: Community Sub-Forum + +**Summary**: Community Forum Platforms are made up of many Community Sub-Forums. Sub-Forums provide spaces for platform users to create a community based around any topic.

For example, Reddit (a popular Community Forum Platform) has over 138,000 “subreddits” (Community Sub-Forums), including 1082 unique cat-based communities.

Typically, Sub-Forums allow users post text, image, or video to them, and other platform users can up/downvote, or comment on it. Sub-forums may have their own extra rules alongside the platform’s global rules, enforced by community moderators.

While most Sub-Forums are made by users with Accounts on the Community Forum Platform, Sub-Forums can also be created by the platform itself. + +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0151 Digital Community Hosting Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00101 Pro-Putin Disinformation Warriors Take War of Aggression to Reddit](../../generated_pages/incidents/I00101.md) | This report looks at changes in content posted to communities on Reddit (called Subreddits) after teams of voluntary moderators are replaced with what appear to be pro-Russian voices:

The r/antiwar subreddit appears to be a very recent takeover target. With 12,900 members it is not the largest community on Reddit, but this does place it squarely within the top 5% of all communities in terms of membership.

Three months ago a new moderator team was instated by subreddit head u/democracy101. Any posts documenting Russian aggression in Ukraine are now swiftly removed, while the board has been flooded with posts about how Ukraine is losing, or how American “neocons wrecked” the country.

The pinned post from moderator u/n0ahbody proclaims: “People who call for an end to Russian aggressions but not the Western aggressions Russia is reacting to don’t really want peace.” This user takes the view that any negative opinion about Russia is “shaped by what the fanatically Russophobic MSM wants you to think,” and that the United States is not threatened by its neighbors. Russia is.”

When u/n0ahbody took over the sub, the user posted a triumphant and vitriolic diatribe in another pro-Russia subreddit with some 33,500 members, r/EndlessWar. “We are making progress. We are purging the sub of all NAFO and NAFO-adjacent elements. Hundreds of them have been banned over the last 24 hours for various rule infractions, for being NAFO or NAFO-adjacent,” the user said, referencing the grassroots, pro-Ukrainian North Atlantic Fella Organization (NAFO) meme movement.

Several former users have reported they have indeed been banned from r/antiwar since the change in moderators. “If this subreddit cannot be explicitly against the invasion of Ukraine it will never truly be anti-war,” wrote one user Halcyon_Rein, in the antiwar subreddit on September 6. They then edited the post to say, “Edit: btw, I got f**king banned for this 💀💀💀”


A community hosted on Reddit was taken over by new moderators (T0151.011: Community Sub-Forum, T0150.005: Compromised Asset). These moderators removed content posted to the community which favoured Ukraine over Russia (T0146.004: Administrator Account Asset, T0151.011: Community Sub-Forum, T0124: Suppress Opposition). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0151.011: Community Sub-Forum + +**Summary**: Community Forum Platforms are made up of many Community Sub-Forums. Sub-Forums provide spaces for platform users to create a community based around any topic.

For example, Reddit (a popular Community Forum Platform) has over 138,000 “subreddits” (Community Sub-Forums), including 1082 unique cat-based communities.

Typically, Sub-Forums allow users to post text, images, or video, which other platform users can up/downvote or comment on. Sub-Forums may have their own extra rules alongside the platform’s global rules, enforced by community moderators.

While most Sub-Forums are made by users with Accounts on the Community Forum Platform, Sub-Forums can also be created by the platform itself. + **Tactic**: TA07 Select Channels and Affordances @@ -20,4 +64,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0151.012.md b/generated_pages/techniques/T0151.012.md index a07e687..f9a841b 100644 --- a/generated_pages/techniques/T0151.012.md +++ b/generated_pages/techniques/T0151.012.md @@ -2,6 +2,52 @@ **Summary**: 4chan and 8chan are examples of Image Board Platforms.

Image Board Platforms provide individual boards on which users can start threads related to the board’s topic. For example, 4chan’s /pol/ board provides a space for users to talk about politics.
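As an illustrative sketch only, board structure on 4chan can be read through its public read-only JSON API (a.4cdn.org); the board name below is an example:

```python
import requests

BOARD = "pol"  # example board name

# Each board exposes a JSON catalog: a list of pages, each holding threads.
catalog = requests.get(f"https://a.4cdn.org/{BOARD}/catalog.json").json()

for page in catalog:
    for thread in page["threads"]:
        subject = thread.get("sub", "(no subject)")  # threads may lack a subject
        print(thread["no"], subject)
```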

Most Image Board Platforms allow users to post without creating an account. Posts are typically made anonymously, although users can choose to post under a pseudonym. +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0151 Digital Community Hosting Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00102 Ignore The Poway Synagogue Shooter’s Manifesto: Pay Attention To 8chan’s /pol/ Board](../../generated_pages/incidents/I00102.md) | On April 27, 2019, at around 11:30 a.m. local time, a young man with a semi-automatic rifle walked into the Chabad of Poway Synagogue in Poway, California. He opened fire, killing one worshipper and wounding three others. In the hours since the shooting, a manifesto, believed to be written by the shooter, began circulating online. Evidence has also surfaced that, like the Christchurch Mosque shooter, this killer began his rampage with a post on 8chan’s /pol/ board.

Although both of these attacks may seem different, since they targeted worshippers of different faiths, both shooters were united by the same fascist ideology. They were also both radicalized in the same place: 8chan’s /pol/ board.

This has been corroborated by posts on the board itself, where “anons,” as the posters call themselves, recirculated the shooter’s since-deleted post. In it, the alleged shooter claims to have been “lurking” on the site for a year and a half. He includes a link to a livestream of his rampage — which thankfully does not appear to have worked — and he also includes a pastebin link to his manifesto.

The very first response to his announcement was another anon cheering him on and telling him to “get the high score,” AKA, kill a huge number of people.


Before carrying out a mass shooting, the shooter posted a thread to 8chan’s /pol/ board. The post directed users to a variety of different platforms (T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0122: Direct Users to Alternative Platforms); a Facebook account on which the shooter attempted to livestream the shooting (T0146: Account Asset, T0151.001: Social Media Platform); and a manifesto they had written hosted on pastebin (T0146.006: Open Access Platform, T0152.005: Paste Platform, T0115: Post Content) and uploaded to the file sharing platform Mediafire (T0152.010: File Hosting Platform, T0085.004: Develop Document).

The report looks deeper into 8chan’s /pol/ board:

8chan is a large website, which includes a number of different discussion groups about everything from anime to left-wing politics. /pol/ is one particularly active board on the website, and it is best described as a gathering place for extremely online neo-Nazis.

[...]

I’ve browsed /pol/ on an almost daily basis since the Christchurch shooting. It has not been difficult to find calls for violence. On Monday, March 25 of this year, I ran across evidence of anons translating the Christchurch shooter’s manifesto into other languages in an attempt to inspire more shootings across the globe.

This tactic can work, and today’s shooting is proof. The Poway Synagogue shooter directly cited the Christchurch shooter as his inspiration, saying he decided to carry out his attack roughly two weeks after that shooting. On /pol/, many anons refer to the Christchurch shooter, Brenton Tarrant, as “Saint Tarrant,” complete with medieval-inspired iconography.


Manifestos posted to 8chan are translated and reshared by other platform users (T0101: Create Localised Content, T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0084.004: Appropriate Content).

When I began looking through /pol/ right after the Poway Synagogue shooting, I came across several claims that the shootings had been a “false flag” aimed at making the message board look bad.

When Bellingcat tweeted out a warning about shitposting and the shooter’s manifesto, in the immediate wake of the attack, probable anons even commented on the Tweet in an attempt to deny that a channer had been behind the attack.

This is a recognizable pattern that occurs in the wake of any crimes committed by members of the board. While the initial response to the Christchurch shooter’s massacre thread was riotous glee, in the days after the shooting many anons began to claim the attack had been a false flag. This actually sparked significant division and debate between the members of /pol/. In the below image, a user mocks other anons for being unable to “believe something in your favor is real.” Another anon responds, “As the evidence comes out, its [sic] quite clear that this was a false flag.”

In his manifesto, the Poway Synagogue shooter even weighed in on this debate, accusing other anons who called the Christchurch and Tree of Life Synagogue shootings “false flags” to merely have been scared: “They can’t fathom that there are brave White men alive who have the willpower and courage it takes to say, ‘Fuck my life—I’m willing to sacrifice everything for the benefit of my race.’”


Platform users deny that their platform has been used by mass shooters to publish their manifestos (T0129.006: Deny Involvement). | +| [I00104 Macron Campaign Hit With “Massive and Coordinated” Hacking Attack](../../generated_pages/incidents/I00104.md) | A massive trove of documents purporting to contain thousands of emails and other files from the [2017 presidential] campaign of Emmanuel Macron—the French centrist candidate squaring off against right-wing nationalist Marine Le Pen—was posted on the internet Friday afternoon. The Macron campaign says that at least some of the documents are fake. The document dump came just over a day before voting is set to begin in the final round of the election and mere hours before candidates are legally required to stop campaigning.

At about 2:35 p.m. ET, a post appeared on the 4chan online message board announcing the leak. The documents appear to include emails, internal memos, and screenshots of purported banking records.

“In this pastebin are links to torrents of emails between Macron, his team and other officials, politicians as well as original documents and photos,” the anonymous 4chan poster wrote. “This was passed on to me today so now I am giving it to you, the people. The leak is massvie and released in the hopes that the human search engine here will be able to start sifting through the contents and figure out exactly what we have here.”

The Macron campaign issued a statement Friday night saying it was the victim of a “massive and coordinated” hacking attack. That campaign said the leak included some fake documents that were intended “to sow doubt and misinformation.”


Actors posted to 4chan a link (T0151.012: Image Board Platform, T0146.006: Open Access Platform, T0115: Post Content, T0122: Direct Users to Alternative Platforms) to text content hosted on pastebin (T0152.005: Paste Platform, T0146.006: Open Access Platform, T0115: Post Content), which contained links to download stolen and fabricated documents. | + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0151.012: Image Board Platform + +**Summary**: 4chan and 8chan are examples of Image Board Platforms.

Image Board Platforms provide individual boards on which users can start threads related to the board’s topic. For example, 4chan’s /pol/ board provides a space for users to talk about politics.

Most Image Board Platforms allow users to post without creating an account. Posts are typically made anonymously, although users can choose to post under a pseudonym. + +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0151 Digital Community Hosting Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00102 Ignore The Poway Synagogue Shooter’s Manifesto: Pay Attention To 8chan’s /pol/ Board](../../generated_pages/incidents/I00102.md) | On April 27, 2019, at around 11:30 a.m. local time, a young man with a semi-automatic rifle walked into the Chabad of Poway Synagogue in Poway, California. He opened fire, killing one worshipper and wounding three others. In the hours since the shooting, a manifesto, believed to be written by the shooter, began circulating online. Evidence has also surfaced that, like the Christchurch Mosque shooter, this killer began his rampage with a post on 8chan’s /pol/ board.

Although both of these attacks may seem different, since they targeted worshippers of different faiths, both shooters were united by the same fascist ideology. They were also both radicalized in the same place: 8chan’s /pol/ board.

This has been corroborated by posts on the board itself, where “anons,” as the posters call themselves, recirculated the shooter’s since-deleted post. In it, the alleged shooter claims to have been “lurking” on the site for a year and a half. He includes a link to a livestream of his rampage — which thankfully does not appear to have worked — and he also includes a pastebin link to his manifesto.

The very first response to his announcement was another anon cheering him on and telling him to “get the high score,” AKA, kill a huge number of people.


Before carrying out a mass shooting, the shooter posted a thread to 8chan’s /pol/ board. The post directed users to a variety of different platforms (T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0122: Direct Users to Alternative Platforms): a Facebook account on which the shooter attempted to livestream the shooting (T0146: Account Asset, T0151.001: Social Media Platform); and a manifesto they had written, hosted on pastebin (T0146.006: Open Access Platform, T0152.005: Paste Platform, T0115: Post Content) and uploaded to the file-sharing platform Mediafire (T0152.010: File Hosting Platform, T0085.004: Develop Document).

The report looks deeper into 8chan’s /pol/ board:

8chan is a large website, which includes a number of different discussion groups about everything from anime to left-wing politics. /pol/ is one particularly active board on the website, and it is best described as a gathering place for extremely online neo-Nazis.

[...]

I’ve browsed /pol/ on an almost daily basis since the Christchurch shooting. It has not been difficult to find calls for violence. On Monday, March 25 of this year, I ran across evidence of anons translating the Christchurch shooter’s manifesto into other languages in an attempt to inspire more shootings across the globe.

This tactic can work, and today’s shooting is proof. The Poway Synagogue shooter directly cited the Christchurch shooter as his inspiration, saying he decided to carry out his attack roughly two weeks after that shooting. On /pol/, many anons refer to the Christchurch shooter, Brenton Tarrant, as “Saint Tarrant,” complete with medieval-inspired iconography.


Manifestos posted to 8chan are translated and reshared by other platform users (T0101: Create Localised Content, T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0084.004: Appropriate Content).

When I began looking through /pol/ right after the Poway Synagogue shooting, I came across several claims that the shootings had been a “false flag” aimed at making the message board look bad.

When Bellingcat tweeted out a warning about shitposting and the shooter’s manifesto, in the immediate wake of the attack, probable anons even commented on the Tweet in an attempt to deny that a channer had been behind the attack.

This is a recognizable pattern that occurs in the wake of any crimes committed by members of the board. While the initial response to the Christchurch shooter’s massacre thread was riotous glee, in the days after the shooting many anons began to claim the attack had been a false flag. This actually sparked significant division and debate between the members of /pol/. In the below image, a user mocks other anons for being unable to “believe something in your favor is real.” Another anon responds, “As the evidence comes out, its [sic] quite clear that this was a false flag.”

In his manifesto, the Poway Synagogue shooter even weighed in on this debate, accusing other anons who called the Christchurch and Tree of Life Synagogue shootings “false flags” to merely have been scared: “They can’t fathom that there are brave White men alive who have the willpower and courage it takes to say, ‘Fuck my life—I’m willing to sacrifice everything for the benefit of my race.’”


Platform users deny that their platform has been used by mass shooters to publish their manifestos (T0129.006: Deny Involvement). | +| [I00104 Macron Campaign Hit With “Massive and Coordinated” Hacking Attack](../../generated_pages/incidents/I00104.md) | A massive trove of documents purporting to contain thousands of emails and other files from the [2017 presidential] campaign of Emmanuel Macron—the French centrist candidate squaring off against right-wing nationalist Marine Le Pen—was posted on the internet Friday afternoon. The Macron campaign says that at least some of the documents are fake. The document dump came just over a day before voting is set to begin in the final round of the election and mere hours before candidates are legally required to stop campaigning.

At about 2:35 p.m. ET, a post appeared on the 4chan online message board announcing the leak. The documents appear to include emails, internal memos, and screenshots of purported banking records.

“In this pastebin are links to torrents of emails between Macron, his team and other officials, politicians as well as original documents and photos,” the anonymous 4chan poster wrote. “This was passed on to me today so now I am giving it to you, the people. The leak is massvie and released in the hopes that the human search engine here will be able to start sifting through the contents and figure out exactly what we have here.”

The Macron campaign issued a statement Friday night saying it was the victim of a “massive and coordinated” hacking attack. That campaign said the leak included some fake documents that were intended “to sow doubt and misinformation.”


Actors posted to 4chan a link (T0151.012: Image Board Platform, T0146.006: Open Access Platform, T0115: Post Content, T0122: Direct Users to Alternative Platforms) to text content hosted on pastebin (T0152.005: Paste Platform, T0146.006: Open Access Platform, T0115: Post Content), which contained links to download stolen and fabricated documents. | + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0151.012: Image Board Platform + +**Summary**: 4chan and 8chan are examples of Image Board Platforms.

Image Board Platforms provide individual boards on which users can start threads related to the board’s topic. For example, 4chan’s /pol/ board provides a space for users to talk about politics.

Most Image Board Platforms allow users to post without creating an account. Posts are typically made anonymously, although users can choose to post under a pseudonym. + **Tactic**: TA07 Select Channels and Affordances @@ -21,4 +67,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0151.013.md b/generated_pages/techniques/T0151.013.md index d7445ba..5783411 100644 --- a/generated_pages/techniques/T0151.013.md +++ b/generated_pages/techniques/T0151.013.md @@ -2,6 +2,48 @@ **Summary**: Quora, Stack Overflow, and Yahoo Answers are examples of Question and Answer Platforms.

Question and Answer Platforms allow users to create Accounts, letting them post questions to the platform community and respond to other platform users’ questions. +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0151 Digital Community Hosting Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0151.013: Question and Answer Platform + +**Summary**: Quora, Stack Overflow, and Yahoo Answers are examples of Question and Answer Platforms.

Question and Answer Platforms allow users to create Accounts, letting them post questions to the platform community and respond to other platform users’ questions. + +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0151 Digital Community Hosting Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0151.013: Question and Answer Platform + +**Summary**: Quora, Stack Overflow, and Yahoo Answers are examples of Question and Answer Platforms.

Question and Answer Platforms allow users to create Accounts, letting them post questions to the platform community and respond to other platform users’ questions. + **Tactic**: TA07 Select Channels and Affordances @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0151.014.md index 13d5d9e..fd09071 100644 --- a/generated_pages/techniques/T0151.014.md +++ b/generated_pages/techniques/T0151.014.md @@ -2,6 +2,50 @@ **Summary**: Many platforms enable community interaction via Comments Sections on posted content. Comments Sections allow platform users to comment on content posted by other users.

On some platforms Comments Sections are the only place available for community interaction, such as news websites which provide a Comments Section to discuss articles posted to the website. +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0151 Digital Community Hosting Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00111 Patreon Is Bankrolling Climate Change Deniers While We All Burn](../../generated_pages/incidents/I00111.md) | In this article VICE News discusses a report produced by Advance Democracy on people who use Patreon to spread the false claim that an impending ice age will reverse the harms of the ongoing climate crisis:

“The spread of climate misinformation is prolific on social media, as well as on sites like Patreon, where users are actually financially compensated through the platform for spreading falsehoods,” Daniel Jones, president of Advance Democracy, told VICE News.

“Companies hosting and promoting climate misinformation have a responsibility to take action to reduce dangerous misinformation, as falsehoods about climate science are every bit as dangerous as lies about vaccinations and disinformation about our elections.”

Patreon did not respond to VICE News’ request for comment on the report’s findings.

One of the biggest accounts spreading climate conspiracies is ADAPT 2030, which is run by David DuByne, who has 1,100 followers on Patreon. He is currently making over $3,500 every month from his subscribers.

[The science DuByne relies on does not support his hypothesis. However,] this has not prevented DuByne and many others from preying on people’s fears about climate change to spread conspiracies about an impending ice age, which they say will miraculously fix all of earth’s climate problems.

DuByne offers seven different membership levels for supporters, beginning at just $1 per month.

The most expensive costs $100 a month, and gives patrons “a private 20-minute call with David DuByne once per month, to discuss your particular preparedness issues or concerns.” So far just two people are paying this amount.

The researchers also found at least eight other accounts on Patreon that have spread climate change conspiracy theories as part of wider conspiracy sharing, including baseless claims about COVID-19 and the legitimacy of Joe Biden’s presidency. Some of these accounts are earning over $600 per month.


David DuByne created an account on Patreon, which he uses to post text, videos, and podcasts for his subscribers to discuss (T0085: Develop Text-Based Content, T0087: Develop Video-Based Content, T0088: Develop Audio-Based Content, T0146: Account Asset, T0115: Post Content, T0152.012: Subscription Service Platform, T0151.014: Comments Section, T0155.006: Subscription Access Asset). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0151.014: Comments Section + +**Summary**: Many platforms enable community interaction via Comments Sections on posted content. Comments Sections allow platform users to comment on content posted by other users.

On some platforms Comments Sections are the only place available for community interaction, such as news websites which provide a Comments Section to discuss articles posted to the website. + +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0151 Digital Community Hosting Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00111 Patreon Is Bankrolling Climate Change Deniers While We All Burn](../../generated_pages/incidents/I00111.md) | In this article VICE News discusses a report produced by Advance Democracy on people who use Patreon to spread the false claim that an impending ice age will reverse the harms of the ongoing climate crisis:

“The spread of climate misinformation is prolific on social media, as well as on sites like Patreon, where users are actually financially compensated through the platform for spreading falsehoods,” Daniel Jones, president of Advance Democracy, told VICE News.

“Companies hosting and promoting climate misinformation have a responsibility to take action to reduce dangerous misinformation, as falsehoods about climate science are every bit as dangerous as lies about vaccinations and disinformation about our elections.”

Patreon did not respond to VICE News’ request for comment on the report’s findings.

One of the biggest accounts spreading climate conspiracies is ADAPT 2030, which is run by David DuByne, who has 1,100 followers on Patreon. He is currently making over $3,500 every month from his subscribers.

[The science DuByne relies on does not support his hypothesis. However,] this has not prevented DuByne and many others from preying on people’s fears about climate change to spread conspiracies about an impending ice age, which they say will miraculously fix all of earth’s climate problems.

DuByne offers seven different membership levels for supporters, beginning at just $1 per month.

The most expensive costs $100 a month, and gives patrons “a private 20-minute call with David DuByne once per month, to discuss your particular preparedness issues or concerns.” So far just two people are paying this amount.

The researchers also found at least eight other accounts on Patreon that have spread climate change conspiracy theories as part of wider conspiracy sharing, including baseless claims about COVID-19 and the legitimacy of Joe Biden’s presidency. Some of these accounts are earning over $600 per month.


David DuByne created an account on Patreon, which he uses to post text, videos, and podcasts for his subscribers to discuss (T0085: Develop Text-Based Content, T0087: Develop Video-Based Content, T0088: Develop Audio-Based Content, T0146: Account Asset, T0115: Post Content, T0152.012: Subscription Service Platform, T0151.014: Comments Section, T0155.006: Subscription Access Asset). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0151.014: Comments Section + +**Summary**: Many platforms enable community interaction via Comments Sections on posted content. Comments Sections allow platform users to comment on content posted by other users.

On some platforms Comments Sections are the only place available for community interaction, such as news websites which provide a Comments Section to discuss articles posted to the website. + **Tactic**: TA07 Select Channels and Affordances @@ -20,4 +64,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0151.015.md b/generated_pages/techniques/T0151.015.md index d648442..f6f4cdf 100644 --- a/generated_pages/techniques/T0151.015.md +++ b/generated_pages/techniques/T0151.015.md @@ -2,6 +2,50 @@ **Summary**: Roblox, Minecraft, Fortnite, League of Legends, and World of Warcraft are examples of Online Game Platforms.

Online Game Platforms allow users to create Accounts which they can use to access Online Game Sessions, i.e. individual instances of a multiplayer online game.

Many Online Game Platforms support text or voice chat within Online Game Sessions. +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0151 Digital Community Hosting Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00098 Gaming The System: How Extremists Exploit Gaming Sites And What Can Be Done To Counter Them](../../generated_pages/incidents/I00098.md) | This report looks at how extremists exploit games and gaming adjacent platforms:

Ethnic Cleansing, a first-person shooter game created by the U.S. white supremacist organization National Alliance in 2002, was advertised as follows: “[T]he Race War has already begun. Your character, Will, runs through a ghetto blasting away at various blacks and spics to attempt to gain entrance to the subway system...where the jews have hidden to avoid the carnage. Then you get to blow away jews as they scream ‘Oy Vey!’ on your way to the command center.”

While games like Ethnic Cleansing —and others of a similar genocidal variety, including titles like Shoot the Blacks, Hatred, and Muslim Massacre —generate some press attention and controversy, they tend to be ignored by the large majority of gamers, who consider them “crude” and “badly designed.” Nevertheless, these games are still available in some corners of the Internet and have been downloaded by hundreds of thousands of users.

A second strategy involves the modification (“modding”) of existing video games to twist the narrative into an extremist message or fantasy. One example is the Islamic State terrorist group’s modding of the first-person shooter video game, ARMA III, to make Islamic fighters the heroic protagonists rather than the villains. With their powerful immersive quality, these video games have, in some instances, been effective at inspiring and allegedly training extremists to perpetrate realworld attacks.

Third, extremist actors have used in-game chat functions to open lines of communication with ideological sympathizers and potential recruits. Although the lack of in-game communication data has made it impossible for academic researchers to track the presence of extremists in a systematic way, there is enough anecdotal evidence—including evidence obtained from police investigation files—to infer that in-game chatrooms can and do function as “radicalization funnels” in at least some cases.


White supremacists created a game aligned with their ideology (T0147.001: Game Asset). The Islamic State terrorist group created a mod of the game ARMA 3 (T0151.015: Online Game Platform, T0147.002: Game Mod Asset). Extremists also use communication features available in online games to recruit new members. | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0151.015: Online Game Platform + +**Summary**: Roblox, Minecraft, Fortnite, League of Legends, and World of Warcraft are examples of Online Game Platforms.

Online Game Platforms allow users to create Accounts which they can use to access Online Game Sessions, i.e. individual instances of a multiplayer online game.

Many Online Game Platforms support text or voice chat within Online Game Sessions. + +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0151 Digital Community Hosting Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00098 Gaming The System: How Extremists Exploit Gaming Sites And What Can Be Done To Counter Them](../../generated_pages/incidents/I00098.md) | This report looks at how extremists exploit games and gaming adjacent platforms:

Ethnic Cleansing, a first-person shooter game created by the U.S. white supremacist organization National Alliance in 2002, was advertised as follows: “[T]he Race War has already begun. Your character, Will, runs through a ghetto blasting away at various blacks and spics to attempt to gain entrance to the subway system...where the jews have hidden to avoid the carnage. Then you get to blow away jews as they scream ‘Oy Vey!’ on your way to the command center.”

While games like Ethnic Cleansing —and others of a similar genocidal variety, including titles like Shoot the Blacks, Hatred, and Muslim Massacre —generate some press attention and controversy, they tend to be ignored by the large majority of gamers, who consider them “crude” and “badly designed.” Nevertheless, these games are still available in some corners of the Internet and have been downloaded by hundreds of thousands of users.

A second strategy involves the modification (“modding”) of existing video games to twist the narrative into an extremist message or fantasy. One example is the Islamic State terrorist group’s modding of the first-person shooter video game, ARMA III, to make Islamic fighters the heroic protagonists rather than the villains. With their powerful immersive quality, these video games have, in some instances, been effective at inspiring and allegedly training extremists to perpetrate realworld attacks.

Third, extremist actors have used in-game chat functions to open lines of communication with ideological sympathizers and potential recruits. Although the lack of in-game communication data has made it impossible for academic researchers to track the presence of extremists in a systematic way, there is enough anecdotal evidence—including evidence obtained from police investigation files—to infer that in-game chatrooms can and do function as “radicalization funnels” in at least some cases.


White supremacists created a game aligned with their ideology (T0147.001: Game Asset). The Islamic State terrorist group created a mod of the game ARMA 3 (T0151.015: Online Game Platform, T0147.002: Game Mod Asset). Extremists also use communication features available in online games to recruit new members. | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0151.015: Online Game Platform + +**Summary**: Roblox, Minecraft, Fortnite, League of Legends, and World of Warcraft are examples of Online Game Platforms.

Online Game Platforms allow users to create Accounts which they can use to access Online Game Sessions, i.e. individual instances of a multiplayer online game.

Many Online Game Platforms support text or voice chat within Online Game Sessions. + **Tactic**: TA07 Select Channels and Affordances @@ -20,4 +64,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0151.016.md b/generated_pages/techniques/T0151.016.md index 92e795f..1e8b297 100644 --- a/generated_pages/techniques/T0151.016.md +++ b/generated_pages/techniques/T0151.016.md @@ -2,6 +2,48 @@ **Summary**: Online Game Sessions are instances of a game played on an Online Game Platform. Examples of Online Game Sessions include a match in Fortnite or League of Legends, or a server in Minecraft, Fortnite, or World of Warcraft.

Some Online Game Platforms (such as Fortnite, League of Legends, and World of Warcraft) host Online Game Sessions on their own Servers, and don’t allow other actors to host Online Game Sessions.

Some Online Game Platforms (such as Roblox and Minecraft) allow users to host instances of Online Game Sessions on their own Servers. +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0151 Digital Community Hosting Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0151.016: Online Game Session + +**Summary**: Online Game Sessions are instances of a game played on an Online Game Platform. Examples of Online Game Sessions include a match in Fortnite or League of Legends, or a server in Minecraft, Fortnite, or World of Warcraft.

Some Online Game Platforms (such as Fortnite, League of Legends, and World of Warcraft) host Online Game Sessions on their own Servers, and don’t allow other actors to host Online Game Sessions.

Some Online Game Platforms (such as Roblox and Minecraft) allow users to host instances of Online Game Sessions on their own Servers. + +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0151 Digital Community Hosting Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0151.016: Online Game Session + +**Summary**: Online Game Sessions are instances of a game played on an Online Game Platform. Examples of Online Game Sessions include a match in Fortnite or League of Legends, or a server in Minecraft, Fortnite, or World of Warcraft.

Some Online Game Platforms (such as Fortnite, League of Legends, and World of Warcraft) host Online Game Sessions on their own Servers, and don’t allow other actors to host Online Game Sessions.

Some Online Game Platforms (such as Roblox and Minecraft) allow users to host instances of Online Game Sessions on their own Servers. + **Tactic**: TA07 Select Channels and Affordances @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0151.017.md b/generated_pages/techniques/T0151.017.md index 5c8c64b..8cfb98f 100644 --- a/generated_pages/techniques/T0151.017.md +++ b/generated_pages/techniques/T0151.017.md @@ -2,6 +2,50 @@ **Summary**: Tinder, Bumble, Grindr, Tantan, Badoo, Plenty of Fish, hinge, LOVOO, OkCupid, happn, and Mamba are examples of Dating Platforms.

Dating Platforms allow users to create Accounts, letting them connect with other platform users with the purpose of developing a physical/romantic relationship. +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0151 Digital Community Hosting Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00064 Tinder nightmares: the promise and peril of political bots](../../generated_pages/incidents/I00064.md) | _"In the days leading up to the UK’s [2017] general election, youths looking for love online encountered a whole new kind of Tinder nightmare. A group of young activists built a Tinder chatbot to co-opt profiles and persuade swing voters to support Labour. The bot accounts sent 30,000-40,000 messages to targeted 18-25 year olds in battleground constituencies like Dudley North, which Labour ended up winning by only 22 votes._

_"Tinder is a dating app where users swipe right to indicate attraction and interest in a potential partner. If both people swipe right on each other’s profile, a dialogue box becomes available for them to privately chat. After meeting their crowdfunding goal of only £500, the team built a tool which took over and operated the accounts of recruited Tinder-users. By upgrading the profiles to Tinder Premium, the team was able to place bots in any contested constituency across the UK. Once planted, the bots swiped right on all users in the attempt to get the largest number of matches and inquire into their voting intentions."_

This incident matches T0151.017: Dating Platform, as users of Tinder were targeted in an attempt to persuade them to vote for a particular party in the upcoming election, rather than for the purpose of connecting those who were authentically interested in dating each other. | + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0151.017: Dating Platform + +**Summary**: Tinder, Bumble, Grindr, Tantan, Badoo, Plenty of Fish, hinge, LOVOO, OkCupid, happn, and Mamba are examples of Dating Platforms.

Dating Platforms allow users to create Accounts, letting them connect with other platform users with the purpose of developing a physical/romantic relationship. + +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0151 Digital Community Hosting Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00064 Tinder nightmares: the promise and peril of political bots](../../generated_pages/incidents/I00064.md) | _"In the days leading up to the UK’s [2017] general election, youths looking for love online encountered a whole new kind of Tinder nightmare. A group of young activists built a Tinder chatbot to co-opt profiles and persuade swing voters to support Labour. The bot accounts sent 30,000-40,000 messages to targeted 18-25 year olds in battleground constituencies like Dudley North, which Labour ended up winning by only 22 votes._

_"Tinder is a dating app where users swipe right to indicate attraction and interest in a potential partner. If both people swipe right on each other’s profile, a dialogue box becomes available for them to privately chat. After meeting their crowdfunding goal of only £500, the team built a tool which took over and operated the accounts of recruited Tinder-users. By upgrading the profiles to Tinder Premium, the team was able to place bots in any contested constituency across the UK. Once planted, the bots swiped right on all users in the attempt to get the largest number of matches and inquire into their voting intentions."_

This incident matches T0151.017: Dating Platform, as users of Tinder were targeted in an attempt to persuade them to vote for a particular party in the upcoming election, rather than for the purpose of connecting those who were authentically interested in dating each other. | + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0151.017: Dating Platform + +**Summary**: Tinder, Bumble, Grindr, Tantan, Badoo, Plenty of Fish, hinge, LOVOO, OkCupid, happn, and Mamba are examples of Dating Platforms.

Dating Platforms allow users to create Accounts, letting them connect with other platform users with the purpose of developing a physical/romantic relationship. + **Tactic**: TA07 Select Channels and Affordances @@ -20,4 +64,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0151.md b/generated_pages/techniques/T0151.md index 73c9331..de8a2c3 100644 --- a/generated_pages/techniques/T0151.md +++ b/generated_pages/techniques/T0151.md @@ -2,6 +2,48 @@ **Summary**: A Digital Community Hosting Asset is an online asset which can be used by actors to provide spaces for users to interact with each other.

Sub-techniques categorised under Digital Community Hosting Assets can include Content Hosting and Content Delivery capabilities; however, their nominal primary purpose is to provide a space for community interaction. +**Tactic**: TA07 Select Channels and Affordances + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0151: Digital Community Hosting Asset + +**Summary**: A Digital Community Hosting Asset is an online asset which can be used by actors to provide spaces for users to interact with each other.

Sub-techniques categorised under Digital Community Hosting Assets can include Content Hosting and Content Delivery capabilities; however, their nominal primary purpose is to provide a space for community interaction. + +**Tactic**: TA07 Select Channels and Affordances + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0151: Digital Community Hosting Asset + +**Summary**: A Digital Community Hosting Asset is an online asset which can be used by actors to provide spaces for users to interact with each other.

Sub-techniques categorised under Digital Community Hosting Assets can include Content Hosting and Content Delivery capabilities; however, their nominal primary purpose is to provide a space for community interaction. + **Tactic**: TA07 Select Channels and Affordances @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0152.001.md b/generated_pages/techniques/T0152.001.md index 8722d91..030d940 100644 --- a/generated_pages/techniques/T0152.001.md +++ b/generated_pages/techniques/T0152.001.md @@ -2,6 +2,50 @@ **Summary**: Medium and Substack are examples of Blogging Platforms.

By creating an Account on a Blogging Platform, people are able to create their own Blog. +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0152 Digital Content Hosting Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00119 Independent journalist publishes Trump campaign document hacked by Iran despite election interference concerns](../../generated_pages/incidents/I00119.md) | An American journalist who runs an independent newsletter published a document [on 26 Sep 2024] that appears to have been stolen from Donald Trump’s presidential campaign — the first public posting of a file that is believed to be part of a dossier that federal officials say is part of an Iranian effort to manipulate the [2024] U.S. election.

The PDF document is a 271-page opposition research file on former President Donald Trump’s running mate, Sen. JD Vance, R-Ohio.

For more than two months, hackers who the U.S. says are tied to Iran have tried to persuade the American media to cover files they stole. No outlets took the bait.

But on Thursday, reporter Ken Klippenstein, who self-publishes on Substack after he left The Intercept this year, published one of the files.

[...]

Reporters who have received the documents describe the same pattern: An AOL account emails them files, signed by a person using the name “Robert,” who is reluctant to speak to their identity or reasons for wanting the documents to receive coverage.

NBC News was not part of the Robert persona’s direct outreach, but it has viewed its correspondence with a reporter at another publication.

One of the emails from the Robert persona previously viewed by NBC News included three large PDF files, each corresponding to Trump’s three reported finalists for vice president. The Vance file appears to be the one Klippenstein hosts on his site.


In this example, hackers attributed to Iran used the Robert persona to email hacked documents to journalists (T0146: Account Asset, T0097.100: Individual Persona, T0153.001: Email Platform).

The journalist Ken Klippenstein used his existing blog on Substack to host a link to download the document (T0089: Obtain Private Documents, T0097.102: Journalist Persona, T0115: Post Content, T0143.001: Authentic Persona, T0152.001: Blogging Platform, T0152.002: Blog Asset, T0150.003: Pre-Existing Asset). | + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0152.001: Blogging Platform + +**Summary**: Medium and Substack are examples of Blogging Platforms.

By creating an Account on a Blogging Platform, people are able to create their own Blog. + +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0152 Digital Content Hosting Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00119 Independent journalist publishes Trump campaign document hacked by Iran despite election interference concerns](../../generated_pages/incidents/I00119.md) | An American journalist who runs an independent newsletter published a document [on 26 Sep 2024] that appears to have been stolen from Donald Trump’s presidential campaign — the first public posting of a file that is believed to be part of a dossier that federal officials say is part of an Iranian effort to manipulate the [2024] U.S. election.

The PDF document is a 271-page opposition research file on former President Donald Trump’s running mate, Sen. JD Vance, R-Ohio.

For more than two months, hackers who the U.S. says are tied to Iran have tried to persuade the American media to cover files they stole. No outlets took the bait.

But on Thursday, reporter Ken Klippenstein, who self-publishes on Substack after he left The Intercept this year, published one of the files.

[...]

Reporters who have received the documents describe the same pattern: An AOL account emails them files, signed by a person using the name “Robert,” who is reluctant to speak to their identity or reasons for wanting the documents to receive coverage.

NBC News was not part of the Robert persona’s direct outreach, but it has viewed its correspondence with a reporter at another publication.

One of the emails from the Robert persona previously viewed by NBC News included three large PDF files, each corresponding to Trump’s three reported finalists for vice president. The Vance file appears to be the one Klippenstein hosts on his site.


In this example, hackers attributed to Iran used the Robert persona to email hacked documents to journalists (T0146: Account Asset, T0097.100: Individual Persona, T0153.001: Email Platform).

The journalist Ken Klippenstein used his existing blog on Substack to host a link to download the document (T0089: Obtain Private Documents, T0097.102: Journalist Persona, T0115: Post Content, T0143.001: Authentic Persona, T0152.001: Blogging Platform, T0152.002: Blog Asset, T0150.003: Pre-Existing Asset). | + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0152.001: Blogging Platform + +**Summary**: Medium and Substack are examples of Blogging Platforms.

By creating an Account on a Blogging Platform, people are able to create their own Blog. + **Tactic**: TA07 Select Channels and Affordances @@ -20,4 +64,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0152.002.md b/generated_pages/techniques/T0152.002.md index 2e98829..072b353 100644 --- a/generated_pages/techniques/T0152.002.md +++ b/generated_pages/techniques/T0152.002.md @@ -2,6 +2,49 @@ **Summary**: Blogs are a collation of posts centred on a particular topic, author, or collection of authors.

Some platforms are designed to support users in hosting content online, such as Blogging Platforms like Substack, which allow users to create Blogs; however, other online platforms can also be used to produce a Blog. For example, a Paid Account on X (prev Twitter) can post long-form text content to its timeline in the style of a blog.

Actors may create Accounts on Blogging Platforms to create a Blog, or make their own Blog on a Website. +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0152 Digital Content Hosting Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0152.002: Blog Asset + +**Summary**: Blogs are a collation of posts centred on a particular topic, author, or collection of authors.

Some platforms are designed to support users in hosting content online, such as Blogging Platforms like Substack, which allow users to create Blogs; however, other online platforms can also be used to produce a Blog. For example, a Paid Account on X (prev Twitter) can post long-form text content to its timeline in the style of a blog.

Actors may create Accounts on Blogging Platforms to create a Blog, or make their own Blog on a Website. + +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0152 Digital Content Hosting Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00119 Independent journalist publishes Trump campaign document hacked by Iran despite election interference concerns](../../generated_pages/incidents/I00119.md) | An American journalist who runs an independent newsletter published a document [on 26 Sep 2024] that appears to have been stolen from Donald Trump’s presidential campaign — the first public posting of a file that is believed to be part of a dossier that federal officials say is part of an Iranian effort to manipulate the [2024] U.S. election.

The PDF document is a 271-page opposition research file on former President Donald Trump’s running mate, Sen. JD Vance, R-Ohio.

For more than two months, hackers who the U.S. says are tied to Iran have tried to persuade the American media to cover files they stole. No outlets took the bait.

But on Thursday, reporter Ken Klippenstein, who self-publishes on Substack after he left The Intercept this year, published one of the files.

[...]

Reporters who have received the documents describe the same pattern: An AOL account emails them files, signed by a person using the name “Robert,” who is reluctant to speak to their identity or reasons for wanting the documents to receive coverage.

NBC News was not part of the Robert persona’s direct outreach, but it has viewed its correspondence with a reporter at another publication.

One of the emails from the Robert persona previously viewed by NBC News included three large PDF files, each corresponding to Trump’s three reported finalists for vice president. The Vance file appears to be the one Klippenstein hosts on his site.


In this example, hackers attributed to Iran used the Robert persona to email hacked documents to journalists (T0146: Account Asset, T0097.100: Individual Persona, T0153.001: Email Platform).

The journalist Ken Klippenstein used his existing blog on Substack to host a link to download the document (T0089: Obtain Private Documents, T0097.102: Journalist Persona, T0115: Post Content, T0143.001: Authentic Persona, T0152.001: Blogging Platform, T0152.002: Blog Asset, T0150.003: Pre-Existing Asset). | + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0152.002: Blog Asset + +**Summary**: Blogs are a collation of posts centred on a particular topic, author, or collection of authors.

Some platforms are designed to support users in hosting content online, such as Blogging Platforms like Substack, which allow users to create Blogs; however, other online platforms can also be used to produce a Blog. For example, a Paid Account on X (prev Twitter) can post long-form text content to its timeline in the style of a blog.

Actors may create Accounts on Blogging Platforms to create a Blog, or make their own Blog on a Website. + **Tactic**: TA07 Select Channels and Affordances @@ -20,4 +63,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0152.003.md b/generated_pages/techniques/T0152.003.md index 3cb77f6..d20e6a5 100644 --- a/generated_pages/techniques/T0152.003.md +++ b/generated_pages/techniques/T0152.003.md @@ -2,6 +2,51 @@ **Summary**: Examples of Website Hosting Platforms include Wix, Webflow, Weebly, and Wordpress.

Website Hosting Platforms help users manage the online infrastructure required to host a website, such as securing IP Addresses and Domains. +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0152 Digital Content Hosting Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00107 The Lies Russia Tells Itself](../../generated_pages/incidents/I00107.md) | The Moscow firm Social Design Agency (SDA) has been attributed as being behind a Russian disinformation project known as Doppelganger:

The SDA’s deception work first surfaced in 2022, likely almost immediately after Doppelganger got off the ground. In April of that year, Meta, the parent company of Facebook and Instagram, disclosed in a quarterly report that it had removed from its platforms “a network of about 200 accounts operated from Russia.” By August 2022, German investigative journalists revealed that they had discovered forgeries of about 30 news sites, including many of the country’s biggest media outlets—Frankfurter Allgemeine, Der Spiegel, and Bild—but also Britain’s Daily Mail and France’s 20 Minutes. The sites had deceptive URLs such as www-dailymail-co-uk.dailymail.top.


As part of its work, the SDA created many websites which impersonated existing media outlets. Sites used domain impersonation tactics to increase the perceived legitimacy of their impersonations (T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0152.003: Website Hosting Platform, T0149.003: Lookalike Domain). | + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0152.003: Website Hosting Platform + +**Summary**: Examples of Website Hosting Platforms include Wix, Webflow, Weebly, and Wordpress.

Website Hosting Platforms help users manage the online infrastructure required to host a website, such as securing IP Addresses and Domains. + +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0152 Digital Content Hosting Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00107 The Lies Russia Tells Itself](../../generated_pages/incidents/I00107.md) | The Moscow firm Social Design Agency (SDA) has been attributed as being behind a Russian disinformation project known as Doppelganger:

The SDA’s deception work first surfaced in 2022, likely almost immediately after Doppelganger got off the ground. In April of that year, Meta, the parent company of Facebook and Instagram, disclosed in a quarterly report that it had removed from its platforms “a network of about 200 accounts operated from Russia.” By August 2022, German investigative journalists revealed that they had discovered forgeries of about 30 news sites, including many of the country’s biggest media outlets—Frankfurter Allgemeine, Der Spiegel, and Bild—but also Britain’s Daily Mail and France’s 20 Minutes. The sites had deceptive URLs such as www-dailymail-co-uk.dailymail.top.


As part of its work, the SDA created many websites which impersonated existing media outlets. Sites used domain impersonation tactics to increase the perceived legitimacy of their impersonations (T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0152.003: Website Hosting Platform, T0149.003: Lookalike Domain). | +| [I00108 How you thought you support the animals and you ended up funding white supremacists](../../generated_pages/incidents/I00108.md) | This article examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology:

Suavelos uses Facebook and other platforms to amplify its message. In order to bypass the platforms’ community standards and keep their public pages active, Facebook pages such as “I support the police” are a good vehicle to spread a specific agenda without claiming to be racist. In looking back at this Facebook page, we followed Facebook’s algorithm for related pages and found suggested Facebook pages

[...]

This amplification strategy on Facebook is successful, as according to SimilarWeb figures, it attracts around 111,000 visits every month on the Suavelos.eu website.

[...]

Revenue through online advertisements can be achieved by different platforms through targeted advertisements, like Google Adsense or Doubleclick, or related and similar sponsored content, such as Taboola. Accordingly, Suavelos.eu uses both of these websites to display advertisements and consequently receives funding from such advertisements.

Once visitors are on the website supporting its advertisement revenue, Suavelos’ goal is to then turn these visitors into regular members of Suavelos network through donations or fees, or have them continue to support Suavelos.


Suavelos created a variety of pages on Facebook which presented themselves as centring on prosocial causes. Facebook’s algorithm helped direct users to these pages (T0092: Build Network, T0151.001: Social Media Platform, T0153.006: Content Recommendation Algorithm, T0151.003: Online Community Page, T0143.208: Social Cause Persona).

Suavelos used these pages to generate traffic for their WordPress site (T0122: Direct Users to Alternative Platforms, T0152.003: Website Hosting Platform, T0152.004: Website Asset), which used accounts on a variety of online advertising platforms to host adverts (T0146: Account Asset, T0153.005: Online Advertising Platform). | + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0152.003: Website Hosting Platform + +**Summary**: Examples of Website Hosting Platforms include Wix, Webflow, Weebly, and Wordpress.

Website Hosting Platforms help users manage the online infrastructure required to host a website, such as securing IP Addresses and Domains. + **Tactic**: TA07 Select Channels and Affordances @@ -21,4 +66,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0152.004.md index 5080c25..e75bce5 100644 --- a/generated_pages/techniques/T0152.004.md +++ b/generated_pages/techniques/T0152.004.md @@ -2,6 +2,55 @@ **Summary**: A Website is a collection of related web pages hosted on a server and accessible via a web browser. Websites have an associated Domain and can host various types of content, such as text, images, videos, and interactive features.

When a Website is fleshed out, it Presents a Persona to site visitors. For example, the Domain “bbc.co.uk/news” hosts a Website which uses the News Outlet Persona. +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0152 Digital Content Hosting Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00099 More Women Are Facing The Reality Of Deepfakes, And They’re Ruining Lives](../../generated_pages/incidents/I00099.md) | Last October, British writer Helen was alerted to a series of deepfakes on a porn site that appear to show her engaging in extreme acts of sexual violence. That night, the images replayed themselves over and over in horrific nightmares and she was gripped by an all-consuming feeling of dread. “It’s like you’re in a tunnel, going further and further into this enclosed space, where there’s no light,” she tells Vogue. This feeling pervaded Helen’s life. Whenever she left the house, she felt exposed. On runs, she experienced panic attacks. Helen still has no idea who did this to her.

[...]

Amnesty International has been investigating the effects of abuse against women on Twitter, specifically in relation to how they act online thereafter. According to the charity, abuse creates what they’ve called “the silencing effect” whereby women feel discouraged from participating online. The same can be said for victims of deepfakes.

Helen has never been afraid to use her voice, writing deeply personal accounts of postnatal depression. But the deepfakes created a feeling of shame so strong she thought she’d be carrying this “dirty secret” forever, and so she stopped writing.

[...]

Meanwhile, deepfake ‘communities’ are thriving. There are now dedicated sites, user-friendly apps and organised ‘request’ procedures. Some sites allow you to commission custom deepfakes for £25, while on others you can upload a woman’s image and a bot will strip her naked.

“This violation is not something that should be normalised,” says Gibi, an ASMR artist with 3.13 million YouTube subscribers. Gibi has given up trying to keep tabs on the deepfakes of her. For Gibi, the most egregious part of all of this is the fact that people are “profiting off my face, doing something that I didn’t consent to, like my suffering is your livelihood.” She’s even been approached by a company offering to remove the deepfakes — for £500 a video. This has to end. But how?


A website hosting pornographic content provided users the ability to create deepfake content (T0154.002: AI Media Platform, T0086.002: Develop AI-Generated Images (Deepfakes)).

Another website enabled users to commission custom deepfakes (T0152.004: Website Asset, T0148.004: Payment Processing Capability, T0086.002: Develop AI-Generated Images (Deepfakes), T0155.005: Paid Access Asset). | +| [I00109 Coordinated Facebook Pages Designed to Fund a White Supremacist Agenda](../../generated_pages/incidents/I00109.md) | This report examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at the Suavelos website, and the content it links out to.

In going back to Suavelos’ main page, we also found: A link to a page on a web shop: alabastro.eu; A link to a page to donate money to the founders through Tipee and to the website through PayPal; [and] a link to a private forum that gathers 3.000 members: oppidum.suavelos.eu;

Suavelos linked out to an online store which it controlled (T0152.004: Website Asset, T0148.004: Payment Processing Capability), and to accounts on payment processing platforms PayPal and Tipee (T0146: Account Asset, T0148.003: Payment Processing Platform).
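Enumerating a site’s outbound links is one way to map this kind of cross-platform asset network. The sketch below uses only Python’s standard library; the class name and inline HTML are illustrative stand-ins for fetched pages.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class OutboundLinkParser(HTMLParser):
    """Collects external link hosts, to map which platforms a site links out to."""
    def __init__(self, own_domain: str):
        super().__init__()
        self.own_domain = own_domain
        self.outbound = set()

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href", "")
        host = urlparse(href).netloc
        if host and self.own_domain not in host:
            self.outbound.add(host)

# Illustrative usage with inline HTML; a real investigation would fetch pages.
parser = OutboundLinkParser("suavelos.eu")
parser.feed('<a href="https://www.paypal.com/donate">Donate</a>'
            '<a href="https://oppidum.suavelos.eu/">Forum</a>')
print(sorted(parser.outbound))  # ['www.paypal.com']
```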

The Suavelos website also hosted a private forum (T0151.009: Legacy Online Forum Platform, T0155: Gated Asset), and linked out to a variety of assets it controlled on other online platforms: accounts on Twitter (T0146: Account Asset, T0151.008: Microblogging Platform), YouTube (T0146: Account Asset, T0152.006: Video Platform), Instagram and VKontakte (T0146: Account Asset, T0151.001: Social Media Platform). | +| [I00128 #TrollTracker: Outward Influence Operation From Iran](../../generated_pages/incidents/I00128.md) | [Meta removed a network of assets for coordinated inauthentic behaviour. One page] in the network, @StopMEK, was promoting views against the People’s Mujahedin of Iran (MEK), the largest and most active political opposition group against the Islamic Republic of Iran Leadership.

The content on the page drew narratives showing parallels between the Islamic State of Iraq and Syria (ISIS) and the MEK.

Apart from images and memes, the @StopMEK page shared a link to an archived report on how the United States was monitoring the MEK’s movement in Iran in the mid-1990’s. The file was embedded as a QR code on one of the page’s images.


In this example a Facebook page presented itself as focusing on a political cause (T0097.208: Social Cause Persona, T0151.001: Social Media Platform, T0151.002: Online Community Group). Within the page it embedded a QR code (T0122: Direct Users to Alternative Platforms, T0153.004: QR Code Asset), which took users to a document hosted on another website (T0152.004: Website Asset). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0152.004: Website Asset + +**Summary**: A Website is a collection of related web pages hosted on a server and accessible via a web browser. Websites have an associated Domain and can host various types of content, such as text, images, videos, and interactive features.

When a Website is fleshed out, it Presents a Persona to site visitors. For example, the Domain “bbc.co.uk/news” hosts a Website which uses the News Outlet Persona. + +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0152 Digital Content Hosting Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00099 More Women Are Facing The Reality Of Deepfakes, And They’re Ruining Lives](../../generated_pages/incidents/I00099.md) | Last October, British writer Helen was alerted to a series of deepfakes on a porn site that appear to show her engaging in extreme acts of sexual violence. That night, the images replayed themselves over and over in horrific nightmares and she was gripped by an all-consuming feeling of dread. “It’s like you’re in a tunnel, going further and further into this enclosed space, where there’s no light,” she tells Vogue. This feeling pervaded Helen’s life. Whenever she left the house, she felt exposed. On runs, she experienced panic attacks. Helen still has no idea who did this to her.

[...]

Amnesty International has been investigating the effects of abuse against women on Twitter, specifically in relation to how they act online thereafter. According to the charity, abuse creates what they’ve called “the silencing effect” whereby women feel discouraged from participating online. The same can be said for victims of deepfakes.

Helen has never been afraid to use her voice, writing deeply personal accounts of postnatal depression. But the deepfakes created a feeling of shame so strong she thought she’d be carrying this “dirty secret” forever, and so she stopped writing.

[...]

Meanwhile, deepfake ‘communities’ are thriving. There are now dedicated sites, user-friendly apps and organised ‘request’ procedures. Some sites allow you to commission custom deepfakes for £25, while on others you can upload a woman’s image and a bot will strip her naked.

“This violation is not something that should be normalised,” says Gibi, an ASMR artist with 3.13 million YouTube subscribers. Gibi has given up trying to keep tabs on the deepfakes of her. For Gibi, the most egregious part of all of this is the fact that people are “profiting off my face, doing something that I didn’t consent to, like my suffering is your livelihood.” She’s even been approached by a company offering to remove the deepfakes — for £500 a video. This has to end. But how?


A website hosting pornographic content provided users the ability to create deepfake content (T0154.002: AI Media Platform, T0086.002: Develop AI-Generated Images (Deepfakes)).

Another website enabled users to commission custom deepfakes (T0152.004: Website Asset, T0148.004: Payment Processing Capability, T0086.002: Develop AI-Generated Images (Deepfakes), T0155.005: Paid Access Asset). | +| [I00108 How you thought you support the animals and you ended up funding white supremacists](../../generated_pages/incidents/I00108.md) | This article examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology:

Suavelos uses Facebook and other platforms to amplify its message. In order to bypass the platforms’ community standards and keep their public pages active, Facebook pages such as “I support the police” are a good vehicle to spread a specific agenda without claiming to be racist. In looking back at this Facebook page, we followed Facebook’s algorithm for related pages and found suggested Facebook pages

[...]

This amplification strategy on Facebook is successful, as according to SimilarWeb figures, it attracts around 111,000 visits every month on the Suavelos.eu website.

[...]

Revenue through online advertisements can be achieved by different platforms through targeted advertisements, like Google Adsense or Doubleclick, or related and similar sponsored content, such as Taboola. Accordingly, Suavelos.eu uses both of these websites to display advertisements and consequently receives funding from such advertisements.

Once visitors are on the website supporting its advertisement revenue, Suavelos’ goal is to then turn these visitors into regular members of Suavelos network through donations or fees, or have them continue to support Suavelos.


Suavelos created a variety of pages on Facebook which presented themselves as centring on prosocial causes. Facebook’s algorithm helped direct users to these pages (T0092: Build Network, T0151.001: Social Media Platform, T0153.006: Content Recommendation Algorithm, T0151.003: Online Community Page, T0097.208: Social Cause Persona).

Suavelos used these pages to generate traffic for their WordPress site (T0122: Direct Users to Alternative Platforms, T0152.003: Website Hosting Platform, T0152.004: Website Asset), which used accounts on a variety of online advertising platforms to host adverts (T0146: Account Asset, T0153.005: Online Advertising Platform). | +| [I00109 Coordinated Facebook Pages Designed to Fund a White Supremacist Agenda](../../generated_pages/incidents/I00109.md) | This report examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at the Suavelos website, and the content it links out to.

In going back to Suavelos’ main page, we also found: A link to a page on a web shop: alabastro.eu; A link to a page to donate money to the founders through Tipee and to the website through PayPal; [and] a link to a private forum that gathers 3.000 members: oppidum.suavelos.eu;

Suavelos linked out to an online store which it controlled (T0152.004: Website Asset, T0148.004: Payment Processing Capability), and to accounts on payment processing platforms PayPal and Tipee (T0146: Account Asset, T0148.003: Payment Processing Platform).

The Suavelos website also hosted a private forum (T0151.009: Legacy Online Forum Platform, T0155: Gated Asset), and linked out to a variety of assets it controlled on other online platforms: accounts on Twitter (T0146: Account Asset, T0151.008: Microblogging Platform), YouTube (T0146: Account Asset, T0152.006: Video Platform), Instagram and VKontakte (T0146: Account Asset, T0151.001: Social Media Platform). | +| [I00128 #TrollTracker: Outward Influence Operation From Iran](../../generated_pages/incidents/I00128.md) | [Meta removed a network of assets for coordinated inauthentic behaviour. One page] in the network, @StopMEK, was promoting views against the People’s Mujahedin of Iran (MEK), the largest and most active political opposition group against the Islamic Republic of Iran Leadership.

The content on the page drew narratives showing parallels between the Islamic State of Iraq and Syria (ISIS) and the MEK.

Apart from images and memes, the @StopMEK page shared a link to an archived report on how the United States was monitoring the MEK’s movement in Iran in the mid-1990’s. The file was embedded as a QR code on one of the page’s images.


In this example a Facebook page presented itself as focusing on a political cause (T0097.208: Social Cause Persona, T0151.001: Social Media Platform, T0151.002: Online Community Group). Within the page it embedded a QR code (T0122: Direct Users to Alternative Platforms, T0153.004: QR Code Asset), which took users to a document hosted on another website (T0152.004: Website Asset). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0152.004: Website Asset + +**Summary**: A Website is a collection of related web pages hosted on a server and accessible via a web browser. Websites have an associated Domain and can host various types of content, such as text, images, videos, and interactive features.

When a Website is fleshed out, it Presents a Persona to site visitors. For example, the Domain “bbc.co.uk/news” hosts a Website which uses the News Outlet Persona. + **Tactic**: TA07 Select Channels and Affordances @@ -23,4 +72,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0152.005.md b/generated_pages/techniques/T0152.005.md index bb85bf3..bb043f5 100644 --- a/generated_pages/techniques/T0152.005.md +++ b/generated_pages/techniques/T0152.005.md @@ -2,6 +2,51 @@ **Summary**: Pastebin is an example of a Paste Platform.

Paste Platforms allow people to upload unformatted text to the platform, which they can share via a link. Some Paste Platforms are Open Access Platforms which allow users to upload content without creating an Account first. +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0152 Digital Content Hosting Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00104 Macron Campaign Hit With “Massive and Coordinated” Hacking Attack](../../generated_pages/incidents/I00104.md) | A massive trove of documents purporting to contain thousands of emails and other files from the [2017 presidential] campaign of Emmanuel Macron—the French centrist candidate squaring off against right-wing nationalist Marine Le Pen—was posted on the internet Friday afternoon. The Macron campaign says that at least some of the documents are fake. The document dump came just over a day before voting is set to begin in the final round of the election and mere hours before candidates are legally required to stop campaigning.

At about 2:35 p.m. ET, a post appeared on the 4chan online message board announcing the leak. The documents appear to include emails, internal memos, and screenshots of purported banking records.

“In this pastebin are links to torrents of emails between Macron, his team and other officials, politicians as well as original documents and photos,” the anonymous 4chan poster wrote. “This was passed on to me today so now I am giving it to you, the people. The leak is massvie and released in the hopes that the human search engine here will be able to start sifting through the contents and figure out exactly what we have here.”

The Macron campaign issued a statement Friday night saying it was the victim of a “massive and coordinated” hacking attack. That campaign said the leak included some fake documents that were intended “to sow doubt and misinformation.”


Actors posted a link to 4chan (T0151.012: Image Board Platform, T0146.006: Open Access Platform, T0115: Post Content, T0122: Direct Users to Alternative Platforms) to text content hosted on Pastebin (T0152.005: Paste Platform, T0146.006: Open Access Platform, T0115: Post Content), which contained links to download stolen and fabricated documents. | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0152.005: Paste Platform + +**Summary**: Pastebin is an example of a Paste Platform.

Paste Platforms allow people to upload unformatted text to the platform, which they can share via a link. Some Paste Platforms are Open Access Platforms which allow users to upload content without creating an Account first. + +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0152 Digital Content Hosting Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00102 Ignore The Poway Synagogue Shooter’s Manifesto: Pay Attention To 8chan’s /pol/ Board](../../generated_pages/incidents/I00102.md) | On April 27, 2019, at around 11:30 a.m. local time, a young man with a semi-automatic rifle walked into the Chabad of Poway Synagogue in Poway, California. He opened fire, killing one worshipper and wounding three others. In the hours since the shooting, a manifesto, believed to be written by the shooter, began circulating online. Evidence has also surfaced that, like the Christchurch Mosque shooter, this killer began his rampage with a post on 8chan’s /pol/ board.

Although both of these attacks may seem different, since they targeted worshippers of different faiths, both shooters were united by the same fascist ideology. They were also both radicalized in the same place: 8chan’s /pol/ board.

This has been corroborated by posts on the board itself, where “anons,” as the posters call themselves, recirculated the shooter’s since-deleted post. In it, the alleged shooter claims to have been “lurking” on the site for a year and a half. He includes a link to a livestream of his rampage — which thankfully does not appear to have worked — and he also includes a pastebin link to his manifesto.

The very first response to his announcement was another anon cheering him on and telling him to “get the high score,” AKA, kill a huge number of people.


Before carrying out a mass shooting, the shooter posted a thread to 8chan’s /pol/ board (T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0122: Direct Users to Alternative Platforms). The post directed users to a variety of different platforms: a Facebook account on which the shooter attempted to livestream the shooting (T0146: Account Asset, T0151.001: Social Media Platform); and a manifesto they had written, hosted on Pastebin (T0146.006: Open Access Platform, T0152.005: Paste Platform, T0115: Post Content) and uploaded to the file-sharing platform Mediafire (T0152.010: File Hosting Platform, T0085.004: Develop Document).

The report looks deeper into 8chan’s /pol/ board:

8chan is a large website, which includes a number of different discussion groups about everything from anime to left-wing politics. /pol/ is one particularly active board on the website, and it is best described as a gathering place for extremely online neo-Nazis.

[...]

I’ve browsed /pol/ on an almost daily basis since the Christchurch shooting. It has not been difficult to find calls for violence. On Monday, March 25 of this year, I ran across evidence of anons translating the Christchurch shooter’s manifesto into other languages in an attempt to inspire more shootings across the globe.

This tactic can work, and today’s shooting is proof. The Poway Synagogue shooter directly cited the Christchurch shooter as his inspiration, saying he decided to carry out his attack roughly two weeks after that shooting. On /pol/, many anons refer to the Christchurch shooter, Brenton Tarrant, as “Saint Tarrant,” complete with medieval-inspired iconography.


Manifestos posted to 8chan are translated and reshared by other platform users (T0101: Create Localised Content, T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0084.004: Appropriate Content).

When I began looking through /pol/ right after the Poway Synagogue shooting, I came across several claims that the shootings had been a “false flag” aimed at making the message board look bad.

When Bellingcat tweeted out a warning about shitposting and the shooter’s manifesto, in the immediate wake of the attack, probable anons even commented on the Tweet in an attempt to deny that a channer had been behind the attack.

This is a recognizable pattern that occurs in the wake of any crimes committed by members of the board. While the initial response to the Christchurch shooter’s massacre thread was riotous glee, in the days after the shooting many anons began to claim the attack had been a false flag. This actually sparked significant division and debate between the members of /pol/. In the below image, a user mocks other anons for being unable to “believe something in your favor is real.” Another anon responds, “As the evidence comes out, its [sic] quite clear that this was a false flag.”

In his manifesto, the Poway Synagogue shooter even weighed in on this debate, accusing other anons who called the Christchurch and Tree of Life Synagogue shootings “false flags” to merely have been scared: “They can’t fathom that there are brave White men alive who have the willpower and courage it takes to say, ‘Fuck my life—I’m willing to sacrifice everything for the benefit of my race.’”


Platform users deny that their platform has been used by mass shooters to publish their manifestos (T0129.006: Deny Involvement). | +| [I00104 Macron Campaign Hit With “Massive and Coordinated” Hacking Attack](../../generated_pages/incidents/I00104.md) | A massive trove of documents purporting to contain thousands of emails and other files from the [2017 presidential] campaign of Emmanuel Macron—the French centrist candidate squaring off against right-wing nationalist Marine Le Pen—was posted on the internet Friday afternoon. The Macron campaign says that at least some of the documents are fake. The document dump came just over a day before voting is set to begin in the final round of the election and mere hours before candidates are legally required to stop campaigning.

At about 2:35 p.m. ET, a post appeared on the 4chan online message board announcing the leak. The documents appear to include emails, internal memos, and screenshots of purported banking records.

“In this pastebin are links to torrents of emails between Macron, his team and other officials, politicians as well as original documents and photos,” the anonymous 4chan poster wrote. “This was passed on to me today so now I am giving it to you, the people. The leak is massvie and released in the hopes that the human search engine here will be able to start sifting through the contents and figure out exactly what we have here.”

The Macron campaign issued a statement Friday night saying it was the victim of a “massive and coordinated” hacking attack. That campaign said the leak included some fake documents that were intended “to sow doubt and misinformation.”


Actors posted a link to 4chan (T0151.012: Image Board Platform, T0146.006: Open Access Platform, T0115: Post Content, T0122: Direct Users to Alternative Platforms) to text content hosted on Pastebin (T0152.005: Paste Platform, T0146.006: Open Access Platform, T0115: Post Content), which contained links to download stolen and fabricated documents. | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0152.005: Paste Platform + +**Summary**: Pastebin is an example of a Paste Platform.

Paste Platforms allow people to upload unformatted text to the platform, which they can share via a link. Some Paste Platforms are Open Access Platforms which allow users to upload content without creating an Account first. + **Tactic**: TA07 Select Channels and Affordances @@ -21,4 +66,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0152.006.md b/generated_pages/techniques/T0152.006.md index 50a55a8..8848242 100644 --- a/generated_pages/techniques/T0152.006.md +++ b/generated_pages/techniques/T0152.006.md @@ -2,6 +2,51 @@ **Summary**: YouTube, Vimeo, and LiveLeak are examples of Video Platforms.

Video Platforms allow people to create Accounts which they can use to upload video content for people to watch on the platform.

The ability to host videos is not exclusive to Video Platforms; many online platforms allow users with Accounts to upload video content. However, Video Platforms’ primary purpose is to be a place to host and view video content. +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0152 Digital Content Hosting Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00096 China ramps up use of AI misinformation](../../generated_pages/incidents/I00096.md) | The Microsoft Threat Analysis Centre (MTAC) published a report documenting the use of AI by pro-Chinese threat actors:

On 13 January, Spamouflage [(a Pro-Chinese Communist Party actor)] posted audio clips to YouTube of independent candidate [for Taiwan’s Jan 2024 presidential election] Terry Gou – who also founded electronics giant Foxconn – in which Gou endorsed another candidate in the race. This clip was almost certainly AI-generated, and it was swiftly removed by YouTube. A fake letter purporting to be from Gou, endorsing the same candidate, had already circulated – Gou had of course made no such endorsement.

Here Spamouflage used an account on YouTube to post AI-generated audio impersonating an electoral candidate (T0146: Account Asset, T0152.006: Video Platform, T0115: Post Content, T0088.001: Develop AI-Generated Audio (Deepfakes), T0143.003: Impersonated Persona, T0097.110: Party Official Persona).

Spamouflage also exploited AI-powered video platform CapCut – which is owned by TikTok backers ByteDance – to generate fake news anchors which were used in a variety of campaigns targeting the various presidential candidates in Taiwan.

Spamouflage created accounts on CapCut, which it used to create AI-generated videos of fabricated news anchors (T0146: Account Asset, T0154.002: AI Media Platform, T0087.001: Develop AI-Generated Video (Deepfakes), T0143.002: Fabricated Persona, T0097.102: Journalist Persona). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0152.006: Video Platform + +**Summary**: YouTube, Vimeo, and LiveLeak are examples of Video Platforms.

Video Platforms allow people to create Accounts which they can use to upload video content for people to watch on the platform.

The ability to host videos is not exclusive to Video Platforms; many online platforms allow users with Accounts to upload video content. However, Video Platforms’ primary purpose is to be a place to host and view video content. + +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0152 Digital Content Hosting Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00096 China ramps up use of AI misinformation](../../generated_pages/incidents/I00096.md) | The Microsoft Threat Analysis Centre (MTAC) published a report documenting the use of AI by pro-Chinese threat actors:

On 13 January, Spamouflage [(a Pro-Chinese Communist Party actor)] posted audio clips to YouTube of independent candidate [for Taiwan’s Jan 2024 presidential election] Terry Gou – who also founded electronics giant Foxconn – in which Gou endorsed another candidate in the race. This clip was almost certainly AI-generated, and it was swiftly removed by YouTube. A fake letter purporting to be from Gou, endorsing the same candidate, had already circulated – Gou had of course made no such endorsement.

Here Spamouflage used an account on YouTube to post AI-generated audio impersonating an electoral candidate (T0146: Account Asset, T0152.006: Video Platform, T0115: Post Content, T0088.001: Develop AI-Generated Audio (Deepfakes), T0143.003: Impersonated Persona, T0097.110: Party Official Persona).

Spamouflage also exploited AI-powered video platform CapCut – which is owned by TikTok backers ByteDance – to generate fake news anchors which were used in a variety of campaigns targeting the various presidential candidates in Taiwan.

Spamouflage created accounts on CapCut, which it used to create AI-generated videos of fabricated news anchors (T0146: Account Asset, T0154.002: AI Media Platform, T0087.001: Develop AI-Generated Video (Deepfakes), T0143.002: Fabricated Persona, T0097.102: Journalist Persona). | +| [I00109 Coordinated Facebook Pages Designed to Fund a White Supremacist Agenda](../../generated_pages/incidents/I00109.md) | This report examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at the Suavelos website, and the content it links out to.

In going back to Suavelos’ main page, we also found: A link to a page on a web shop: alabastro.eu; A link to a page to donate money to the founders through Tipee and to the website through PayPal; [and] a link to a private forum that gathers 3.000 members: oppidum.suavelos.eu;

Suavelos linked out to an online store which it controlled (T0152.004: Website Asset, T0148.004: Payment Processing Capability), and to accounts on payment processing platforms PayPal and Tipee (T0146: Account Asset, T0148.003: Payment Processing Platform).

The Suavelos website also hosted a private forum (T0151.009: Legacy Online Forum Platform, T0155: Gated Asset), and linked out to a variety of assets it controlled on other online platforms: accounts on Twitter (T0146: Account Asset, T0151.008: Microblogging Platform), YouTube (T0146: Account Asset, T0152.006: Video Platform), Instagram and VKontakte (T0146: Account Asset, T0151.001: Social Media Platform). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0152.006: Video Platform + +**Summary**: YouTube, Vimeo, and LiveLeak are examples of Video Platforms.

Video Platforms allow people to create Accounts which they can use to upload video content for people to watch on the platform.

The ability to host videos is not exclusive to Video Platforms; many online platforms allow users with Accounts to upload video content. However, Video Platforms’ primary purpose is to be a place to host and view video content. + **Tactic**: TA07 Select Channels and Affordances @@ -21,4 +66,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0152.007.md b/generated_pages/techniques/T0152.007.md index 5d196d9..efa4a1d 100644 --- a/generated_pages/techniques/T0152.007.md +++ b/generated_pages/techniques/T0152.007.md @@ -2,6 +2,48 @@ **Summary**: Soundcloud, Spotify, and YouTube Music; Apple Podcasts, Podbean, and Captivate are examples of Audio Platforms.

Audio Platforms allow people to create Accounts which they can use to upload audio content to the platform.

The ability to host audio is not exclusive to Audio Platforms; many online platforms allow users with Accounts to upload audio content. However, Audio Platforms’ primary purpose is to be a place to host and listen to audio content. +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0152 Digital Content Hosting Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0152.007: Audio Platform + +**Summary**: Soundcloud, Spotify, and YouTube Music; Apple Podcasts, Podbean, and Captivate are examples of Audio Platforms.

Audio Platforms allow people to create Accounts which they can use to upload audio content to the platform.

The ability to host audio is not exclusive to Audio Platforms; many online platforms allow users with Accounts to upload audio content. However, Audio Platforms’ primary purpose is to be a place to host and listen to audio content. + +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0152 Digital Content Hosting Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0152.007: Audio Platform + +**Summary**: Soundcloud, Spotify, and YouTube Music; Apple Podcasts, Podbean, and Captivate are examples of Audio Platforms.

Audio Platforms allow people to create Accounts which they can use to upload audio content to the platform.

The ability to host audio is not exclusive to Audio Platforms; many online platforms allow users with Accounts to upload audio content. However, Audio Platforms’ primary purpose is to be a place to host and listen to audio content. + **Tactic**: TA07 Select Channels and Affordances @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0152.008.md b/generated_pages/techniques/T0152.008.md index f7505ca..333f4dd 100644 --- a/generated_pages/techniques/T0152.008.md +++ b/generated_pages/techniques/T0152.008.md @@ -2,6 +2,50 @@ **Summary**: Twitch.tv and Whatnot are examples of Live Streaming Platforms.

Live Streaming Platforms allow people to create Accounts and stream live content (video or audio). A temporary open Group Chat is created alongside live streamed content for viewers to discuss the stream. Some Live Streaming Platforms allow users to archive streamed content for later non-live viewing.

The ability to stream live media is not exclusive to Live Streaming Platforms; many online platforms allow users with Accounts to stream content (such as the Video Platform YouTube’s “YouTube Live”, and the Social Media Platform Facebook’s “Facebook Live”). However, Live Streaming Platforms’ primary purpose is to be a place for people to stream content live. +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0152 Digital Content Hosting Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00124 Malign foreign interference and information influence on video game platforms: understanding the adversarial playbook](../../generated_pages/incidents/I00124.md) | This report “Malign foreign interference and information influence on video game platforms: understanding the adversarial playbook” looks at influence operations in relation to gaming. Part of the report looks at the use of gaming platforms, including DLive; a streaming platform (T0152.008: Live Streaming Platform);

“Like Twitch and YouTube, DLive is a video streaming service that enables users (also known as "streamers," or "content creators") to record themselves talking, playing video games, and other activities [...] DLive is built on blockchain technology, using its own currency directly through it rather than relying on advertising revenue.”

[...]

The emergence of blockchain technology has also created opportunities for reduced censorship. Due to the decentralised nature of blockchain platforms, content deletion from numerous servers takes longer than centralised systems. While DLive has community guidelines that forbid harassment or hate speech, it also allegedly provides users with protection from deplatforming, a practice where tech companies prevent individuals or groups from using their websites (Cohen, 2020). DLive's lack of content moderation and deplatforming has attracted far-right extremists and fringe streamers who have been barred from mainstream social media platforms like YouTube (Cohen, 2020; Gais & Edison Hayden, 2020). PewDiePie, one of YouTube's most popular content creators with nearly 94 million subscribers, moved exclusively to DLive in 2019. Although financial factors played a significant role in PewDiePie's decision, some have been drawn to DLive as a consequence of being deplatformed from other video streaming services (Gais & Edison Hayden, 2020).

According to recent findings from ISD, extremist groups have taken advantage of the relative lack of content moderation. The platform has been used to spread racist, sexist, and homophobic content, as well as conspiracy theories that would likely be banned on other platforms (Thomas, 2021). DLive is also known to have played a role in the events leading up to the January 6th Capitol insurrection, with far- right extremists livestreaming the event and receiving donations from viewers (Lakhani, 2021, p. 9). In response to the storming of the Capitol, DLive has implemented stricter content moderation policies, including demonetisation and the banning of influential figures associated with far-right extremism (ibid, p. 18). The findings of ISD´s analysis of DLive indicates that these actions reduced the "safe harbor" that extremists had previously enjoyed on DLive (Thomas, 2021). However, some claim that extremism still has a foothold on the platform despite these efforts to remove it (Schlegel, 2021b).
| + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0152.008: Live Streaming Platform + +**Summary**: Twitch.tv and Whatnot are examples of Live Streaming Platforms.

Live Streaming Platforms allow people to create Accounts and stream live content (video or audio). A temporary open Group Chat is created alongside live streamed content for viewers to discuss the stream. Some Live Streaming Platforms allow users to archive streamed content for later non-live viewing.

The ability to stream live media is not exclusive to Live Streaming Platforms; many online platforms allow users with Accounts to stream content (such as the Video Platform YouTube’s “YouTube Live”, and the Social Media Platform Facebook’s “Facebook Live”). However, Live Streaming Platforms’ primary purpose is to be a place for people to stream content live. + +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0152 Digital Content Hosting Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00124 Malign foreign interference and information influence on video game platforms: understanding the adversarial playbook](../../generated_pages/incidents/I00124.md) | This report “Malign foreign interference and information influence on video game platforms: understanding the adversarial playbook” looks at influence operations in relation to gaming. Part of the report looks at the use of gaming platforms, including DLive; a streaming platform (T0152.008: Live Streaming Platform);

“Like Twitch and YouTube, DLive is a video streaming service that enables users (also known as "streamers," or "content creators") to record themselves talking, playing video games, and other activities [...] DLive is built on blockchain technology, using its own currency directly through it rather than relying on advertising revenue.”

[...]

The emergence of blockchain technology has also created opportunities for reduced censorship. Due to the decentralised nature of blockchain platforms, content deletion from numerous servers takes longer than centralised systems. While DLive has community guidelines that forbid harassment or hate speech, it also allegedly provides users with protection from deplatforming, a practice where tech companies prevent individuals or groups from using their websites (Cohen, 2020). DLive's lack of content moderation and deplatforming has attracted far-right extremists and fringe streamers who have been barred from mainstream social media platforms like YouTube (Cohen, 2020; Gais & Edison Hayden, 2020). PewDiePie, one of YouTube's most popular content creators with nearly 94 million subscribers, moved exclusively to DLive in 2019. Although financial factors played a significant role in PewDiePie's decision, some have been drawn to DLive as a consequence of being deplatformed from other video streaming services (Gais & Edison Hayden, 2020).

According to recent findings from ISD, extremist groups have taken advantage of the relative lack of content moderation. The platform has been used to spread racist, sexist, and homophobic content, as well as conspiracy theories that would likely be banned on other platforms (Thomas, 2021). DLive is also known to have played a role in the events leading up to the January 6th Capitol insurrection, with far- right extremists livestreaming the event and receiving donations from viewers (Lakhani, 2021, p. 9). In response to the storming of the Capitol, DLive has implemented stricter content moderation policies, including demonetisation and the banning of influential figures associated with far-right extremism (ibid, p. 18). The findings of ISD´s analysis of DLive indicates that these actions reduced the "safe harbor" that extremists had previously enjoyed on DLive (Thomas, 2021). However, some claim that extremism still has a foothold on the platform despite these efforts to remove it (Schlegel, 2021b).
| + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0152.008: Live Streaming Platform + +**Summary**: Twitch.tv and Whatnot are examples of Live Streaming Platforms.

Live Streaming Platforms allow people to create Accounts and stream live content (video or audio). A temporary open Group Chat is created alongside live streamed content for viewers to discuss the stream. Some Live Streaming Platforms allow users to archive streamed content for later non-live viewing.

The ability to stream live media is not exclusive to Live Streaming Platforms; many online platforms allow users with Accounts to stream content (such as the Video Platform YouTube’s “YouTube Live”, and the Social Media Platform Facebook’s “Facebook Live”). However, Live Streaming Platforms’ primary purpose is to be a place for people to stream content live. + **Tactic**: TA07 Select Channels and Affordances @@ -20,4 +64,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0152.009.md b/generated_pages/techniques/T0152.009.md index 02c2e2e..146234a 100644 --- a/generated_pages/techniques/T0152.009.md +++ b/generated_pages/techniques/T0152.009.md @@ -2,6 +2,52 @@ **Summary**: Apple’s App Store, Google’s Google Play Store, and Valve’s Steam are examples of Software Delivery Platforms.

Software Delivery Platforms are designed to enable users to download programmes uploaded to the platform. Software can be purchased, or downloaded for free.

Some Software Delivery Platforms require users to have an Account before they can download software, and software they acquire becomes associated with the account (i.e. the account owns a licence to download the software). Some platforms don’t require users to make accounts before downloading software.

Actors may create their own Software Delivery Platform on a Domain they own. +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0152 Digital Content Hosting Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00105 Gaming the System: The Use of Gaming-Adjacent Communication, Game and Mod Platforms by Extremist Actors](../../generated_pages/incidents/I00105.md) | In this report, researchers look at online platforms commonly used by people who play videogames, looking at how these platforms can contribute to radicalisation of gamers:

Indie DB [is a platform that serves] to present indie games, which are titles from independent, small developer teams, which can be discussed and downloaded [Indie DB].

[...]

[On Indie DB we] found antisemitic, Islamist, sexist and other discriminatory content during the exploration. Both games and comments were located that made positive references to National Socialism. Radicalised users seem to use the opportunities to network, create groups, and communicate in forums. In addition, a number of member profiles with graphic propaganda content could be located. We found several games with propagandistic content advertised on the platform, which caused apparently radicalised users to form a fan community around those games. For example, there are titles such as the antisemitic Fursan Al-Aqsa: The Knights of the Al-Aqsa Mosque, which has won the Best Hardcore Game award at the Game Connection America 2024 Game Development Awards and which has received antisemitic praise on its review page. In the game, players need to target members of the Israeli Defence Forces, and attacks by Palestinian terrorist groups such as Hamas or Lions’ Den can be re-enacted. These include bomb attacks, beheadings and the re-enactment of the terrorist attack in Israel on 7 October 2023. We, therefore, deem Indie DB to be relevant for further analyses of extremist activities in digital gaming spaces.


Indie DB is an online software delivery platform on which users can create groups, and participate in discussion forums (T0152.009: Software Delivery Platform, T0151.002: Online Community Group, T0151.009: Legacy Online Forum Platform). The platform hosted games which allowed players to reenact terrorist attacks (T0147.001: Game Asset)., In this report, researchers look at online platforms commonly used by people who play videogames, looking at how these platforms can contribute to radicalisation of gamers:

Gamebanana and Mod DB are so-called modding platforms that allow users to post their modifications of existing (popular) games. In the process of modding, highly radicalised content can be inserted into games that did not originally contain it. All of these platforms also have communication functions and customisable profiles.

[...]

During the explorations, several modifications with hateful themes were located, including right-wing extremist, racist, antisemitic and Islamist content. This includes mods that make it possible to play as terrorists or National Socialists. So-called “skins” (textures that change the appearance of models in the game) for characters from first-person shooters are particularly popular and contain references to National Socialism or Islamist terrorist organisations. Although some of this content could be justified with reference to historical accuracy and realism, the user profiles of the creators and commentators often reveal political motivations. Names with neo-Nazi codes or the use of avatars showing members of the Wehrmacht or the Waffen SS, for example, indicate a certain degree of positive appreciation or fascination with right-wing ideology, as do affirmations in the comment columns.

Mod DB in particular has attracted public attention in the past. For example, a mod for the game Half-Life 2 made it possible to play a school shooting with the weapons used during the attacks at Columbine High School (1999) and Virginia Polytechnic Institute and State University (2007). Antisemitic memes and jokes are shared in several groups on the platform. It seems as if users partially connect with each other because of shared political views. There were also indications that Islamist and right-wing extremist users network on the basis of shared views on women, Jews or homosexuals. In addition to relevant usernames and avatars, we found profiles featuring picture galleries, backgrounds and banners dedicated to the SS. Extremist propaganda and radicalisation processes on modding platforms have not been explored yet, but our exploration suggests these digital spaces to be highly relevant for our field.


Mod DB is a platform which allows users to upload mods for games, which other users can download (T0152.009: Software Delivery Platform, T0147.002: Game Mod Asset). | +| [I00123 The Extreme Right on Steam](../../generated_pages/incidents/I00123.md) | Analysis of communities on the gaming platform Steam showed that groups who are known to have engaged in acts of terrorism used Steam to host social communities (T0152.009: Software Delivery Platform, T0151.002: Online Community Group):

The first is a Finnish-language group which was set up to promote the Nordic Resistance Movement (NRM). NRM are the only group in the sample examined by ISD known to have engaged in terrorist attacks. Swedish members of the group conducted a series of bombings in Gothenburg in 2016 and 2017, and several Finnish members are under investigation in relation to both violent attacks and murder.

The NRM Steam group does not host content related to gaming, and instead seems to act as a hub for the movement. The group’s overview section contains a link to the official NRM website, and users are encouraged to find like-minded people to join the group. The group is relatively small, with 87 members, but at the time of writing, it appeared to be active and in use. Interestingly, although the group is in Finnish language, it has members in common with the English language channels identified in this analysis. This suggests that Steam may help facilitate international exchange between right-wing extremists.
, ISD conducted an investigation into the usage of social groups on Steam. Steam is an online platform used to buy and sell digital games, and includes the Steam community feature, which “allows users to find friends and join groups and discussion forums, while also offering in-game voice and text chat”. Actors have used Steam’s social capabilities to enable online harm campaigns:

One function of these Steam groups is the organisation of ‘raids’ – coordinated trolling activity against their political opponents. An example of this can be seen in a white power music group sharing a link to an Israeli Steam group, encouraging other members to “help me raid this juden [German word for Jew] group”. The comments section of said target group show that neo-Nazi and antisemitic comments were consistently posted in the group just two minutes after the instruction had been posted in the extremist group, highlighting the swiftness with which racially motivated harassment can be directed online.

Threat actors used social groups on Steam to organise harassment of targets (T0152.009: Software Delivery Platform, T0151.002: Online Community Group, T0049.005: Conduct Swarming, T0048: Harass)., ISD conducted an investigation into the usage of social groups on Steam. Steam is an online platform used to buy and sell digital games, and includes the Steam community feature, which “allows users to find friends and join groups and discussion forums, while also offering in-game voice and text chat”. Actors have used Steam’s social capabilities to enable online harm campaigns:

A number of groups were observed encouraging members to join conversations on outside platforms. These include links to Telegram channels connected to white supremacist marches, and media outlets, forums and Discord servers run by neo-Nazis.

[...]

This off-ramping activity demonstrates how rather than sitting in isolation, Steam fits into the wider extreme right wing online ecosystem, with Steam groups acting as hubs for communities and organizations which span multiple platforms. Accordingly, although the platform appears to fill a specific role in the building and strengthening of communities with similar hobbies and interests, it is suggested that analysis seeking to determine the risk of these communities should focus on their activity across platforms


Social Groups on Steam were used to drive new people to other neo-Nazi controlled community assets (T0122: Direct Users to Alternative Platforms, T0152.009: Software Delivery Platform, T0151.002: Online Community Group). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0152.009: Software Delivery Platform + +**Summary**: Apple’s App Store, Google’s Google Play Store, and Valve’s Steam are examples of Software Delivery Platforms.

Software Delivery Platforms are designed to enable users to download programmes uploaded to the platform. Software can be purchased, or downloaded for free.

Some Software Delivery Platforms require users to have an Account before they can download software, and software they acquire becomes associated with the account (i.e. the account owns a licence to download the software). Some platforms don’t require users to make accounts before downloading software.

Actors may create their own Software Delivery Platform on a Domain they own. + +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0152 Digital Content Hosting Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00105 Gaming the System: The Use of Gaming-Adjacent Communication, Game and Mod Platforms by Extremist Actors](../../generated_pages/incidents/I00105.md) | In this report, researchers look at online platforms commonly used by people who play videogames, looking at how these platforms can contribute to radicalisation of gamers:

Indie DB [is a platform that serves] to present indie games, which are titles from independent, small developer teams, which can be discussed and downloaded [Indie DB].

[...]

[On Indie DB we] found antisemitic, Islamist, sexist and other discriminatory content during the exploration. Both games and comments were located that made positive references to National Socialism. Radicalised users seem to use the opportunities to network, create groups, and communicate in forums. In addition, a number of member profiles with graphic propaganda content could be located. We found several games with propagandistic content advertised on the platform, which caused apparently radicalised users to form a fan community around those games. For example, there are titles such as the antisemitic Fursan Al-Aqsa: The Knights of the Al-Aqsa Mosque, which has won the Best Hardcore Game award at the Game Connection America 2024 Game Development Awards and which has received antisemitic praise on its review page. In the game, players need to target members of the Israeli Defence Forces, and attacks by Palestinian terrorist groups such as Hamas or Lions’ Den can be re-enacted. These include bomb attacks, beheadings and the re-enactment of the terrorist attack in Israel on 7 October 2023. We, therefore, deem Indie DB to be relevant for further analyses of extremist activities in digital gaming spaces.


Indie DB is an online software delivery platform on which users can create groups, and participate in discussion forums (T0152.009: Software Delivery Platform, T0151.002: Online Community Group, T0151.009: Legacy Online Forum Platform). The platform hosted games which allowed players to reenact terrorist attacks (T0147.001: Game Asset)., In this report, researchers look at online platforms commonly used by people who play videogames, looking at how these platforms can contribute to radicalisation of gamers:

Gamebanana and Mod DB are so-called modding platforms that allow users to post their modifications of existing (popular) games. In the process of modding, highly radicalised content can be inserted into games that did not originally contain it. All of these platforms also have communication functions and customisable profiles.

[...]

During the explorations, several modifications with hateful themes were located, including right-wing extremist, racist, antisemitic and Islamist content. This includes mods that make it possible to play as terrorists or National Socialists. So-called “skins” (textures that change the appearance of models in the game) for characters from first-person shooters are particularly popular and contain references to National Socialism or Islamist terrorist organisations. Although some of this content could be justified with reference to historical accuracy and realism, the user profiles of the creators and commentators often reveal political motivations. Names with neo-Nazi codes or the use of avatars showing members of the Wehrmacht or the Waffen SS, for example, indicate a certain degree of positive appreciation or fascination with right-wing ideology, as do affirmations in the comment columns.

Mod DB in particular has attracted public attention in the past. For example, a mod for the game Half-Life 2 made it possible to play a school shooting with the weapons used during the attacks at Columbine High School (1999) and Virginia Polytechnic Institute and State University (2007). Antisemitic memes and jokes are shared in several groups on the platform. It seems as if users partially connect with each other because of shared political views. There were also indications that Islamist and right-wing extremist users network on the basis of shared views on women, Jews or homosexuals. In addition to relevant usernames and avatars, we found profiles featuring picture galleries, backgrounds and banners dedicated to the SS. Extremist propaganda and radicalisation processes on modding platforms have not been explored yet, but our exploration suggests these digital spaces to be highly relevant for our field.


Mod DB is a platform which allows users to upload mods for games, which other users can download (T0152.009: Software Delivery Platform, T0147.002: Game Mod Asset). | +| [I00123 The Extreme Right on Steam](../../generated_pages/incidents/I00123.md) | Analysis of communities on the gaming platform Steam showed that groups known to have engaged in acts of terrorism used Steam to host social communities (T0152.009: Software Delivery Platform, T0151.002: Online Community Group):<br><br>

The first is a Finnish-language group which was set up to promote the Nordic Resistance Movement (NRM). NRM are the only group in the sample examined by ISD known to have engaged in terrorist attacks. Swedish members of the group conducted a series of bombings in Gothenburg in 2016 and 2017, and several Finnish members are under investigation in relation to both violent attacks and murder.

The NRM Steam group does not host content related to gaming, and instead seems to act as a hub for the movement. The group’s overview section contains a link to the official NRM website, and users are encouraged to find like-minded people to join the group. The group is relatively small, with 87 members, but at the time of writing, it appeared to be active and in use. Interestingly, although the group is in Finnish language, it has members in common with the English language channels identified in this analysis. This suggests that Steam may help facilitate international exchange between right-wing extremists.
<br />ISD conducted an investigation into the use of social groups on Steam. Steam is an online platform used to buy and sell digital games, and includes the Steam community feature, which “allows users to find friends and join groups and discussion forums, while also offering in-game voice and text chat”. Actors have used Steam’s social capabilities to enable online harm campaigns:<br><br>

One function of these Steam groups is the organisation of ‘raids’ – coordinated trolling activity against their political opponents. An example of this can be seen in a white power music group sharing a link to an Israeli Steam group, encouraging other members to “help me raid this juden [German word for Jew] group”. The comments section of said target group show that neo-Nazi and antisemitic comments were consistently posted in the group just two minutes after the instruction had been posted in the extremist group, highlighting the swiftness with which racially motivated harassment can be directed online.

Threat actors used social groups on Steam to organise harassment of targets (T0152.009: Software Delivery Platform, T0151.002: Online Community Group, T0049.005: Conduct Swarming, T0048: Harass).<br /><br />ISD conducted an investigation into the use of social groups on Steam. Steam is an online platform used to buy and sell digital games, and includes the Steam community feature, which “allows users to find friends and join groups and discussion forums, while also offering in-game voice and text chat”. Actors have used Steam’s social capabilities to enable online harm campaigns:<br><br>

A number of groups were observed encouraging members to join conversations on outside platforms. These include links to Telegram channels connected to white supremacist marches, and media outlets, forums and Discord servers run by neo-Nazis.

[...]

This off-ramping activity demonstrates how rather than sitting in isolation, Steam fits into the wider extreme right wing online ecosystem, with Steam groups acting as hubs for communities and organizations which span multiple platforms. Accordingly, although the platform appears to fill a specific role in the building and strengthening of communities with similar hobbies and interests, it is suggested that analysis seeking to determine the risk of these communities should focus on their activity across platforms


Social Groups on Steam were used to drive new people to other neo-Nazi controlled community assets (T0122: Direct Users to Alternative Platforms, T0152.009: Software Delivery Platform, T0151.002: Online Community Group). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0152.009: Software Delivery Platform + +**Summary**: Apple’s App Store, Google’s Google Play Store, and Valve’s Steam are examples of Software Delivery Platforms.

Software Delivery Platforms are designed to enable users to download programmes uploaded to the platform. Software can be purchased or downloaded for free.<br><br>

Some Software Delivery Platforms require users to have an Account before they can download software, and software they acquire becomes associated with the account (i.e. the account owns a licence to download the software). Some platforms don’t require users to make accounts before downloading software.

Actors may create their own Software Delivery Platform on a Domain they own. + **Tactic**: TA07 Select Channels and Affordances @@ -21,4 +67,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0152.010.md b/generated_pages/techniques/T0152.010.md index e623a57..59c4326 100644 --- a/generated_pages/techniques/T0152.010.md +++ b/generated_pages/techniques/T0152.010.md @@ -2,6 +2,49 @@ **Summary**: Dropbox and Google Drive are examples of File Hosting Platforms.

File Hosting Platforms allow people to create Accounts which they can use to host files on another server, enabling access to content on any machine, and the ability to easily share files with anyone online.

Actors may also create their own File Hosting Platform on a Website or Server they control. +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0152 Digital Content Hosting Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0152.010: File Hosting Platform + +**Summary**: Dropbox and Google Drive are examples of File Hosting Platforms.

File Hosting Platforms allow people to create Accounts which they can use to host files on another server, enabling access to content on any machine, and the ability to easily share files with anyone online.

Actors may also create their own File Hosting Platform on a Website or Server they control. + +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0152 Digital Content Hosting Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00102 Ignore The Poway Synagogue Shooter’s Manifesto: Pay Attention To 8chan’s /pol/ Board](../../generated_pages/incidents/I00102.md) | On April 27, 2019, at around 11:30 a.m. local time, a young man with a semi-automatic rifle walked into the Chabad of Poway Synagogue in Poway, California. He opened fire, killing one worshipper and wounding three others. In the hours since the shooting, a manifesto, believed to be written by the shooter, began circulating online. Evidence has also surfaced that, like the Christchurch Mosque shooter, this killer began his rampage with a post on 8chan’s /pol/ board.

Although both of these attacks may seem different, since they targeted worshippers of different faiths, both shooters were united by the same fascist ideology. They were also both radicalized in the same place: 8chan’s /pol/ board.

This has been corroborated by posts on the board itself, where “anons,” as the posters call themselves, recirculated the shooter’s since-deleted post. In it, the alleged shooter claims to have been “lurking” on the site for a year and a half. He includes a link to a livestream of his rampage — which thankfully does not appear to have worked — and he also includes a pastebin link to his manifesto.

The very first response to his announcement was another anon cheering him on and telling him to “get the high score,” AKA, kill a huge number of people.


Before carrying out a mass shooting, the shooter posted a thread to 8chan’s /pol/ board. The post directed users to a variety of different platforms (T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0122: Direct Users to Alternative Platforms); a Facebook account on which the shooter attempted to livestream the shooting (T0146: Account Asset, T0151.001: Social Media Platform); and a manifesto they had written hosted on Pastebin (T0146.006: Open Access Platform, T0152.005: Paste Platform, T0115: Post Content) and uploaded to the file sharing platform MediaFire (T0152.010: File Hosting Platform, T0085.004: Develop Document).<br><br>

The report looks deeper into 8chan’s /pol/ board:

8chan is a large website, which includes a number of different discussion groups about everything from anime to left-wing politics. /pol/ is one particularly active board on the website, and it is best described as a gathering place for extremely online neo-Nazis.

[...]

I’ve browsed /pol/ on an almost daily basis since the Christchurch shooting. It has not been difficult to find calls for violence. On Monday, March 25 of this year, I ran across evidence of anons translating the Christchurch shooter’s manifesto into other languages in an attempt to inspire more shootings across the globe.

This tactic can work, and today’s shooting is proof. The Poway Synagogue shooter directly cited the Christchurch shooter as his inspiration, saying he decided to carry out his attack roughly two weeks after that shooting. On /pol/, many anons refer to the Christchurch shooter, Brenton Tarrant, as “Saint Tarrant,” complete with medieval-inspired iconography.


Manifestos posted to 8chan are translated and reshared by other platform users (T0101: Create Localised Content, T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0084.004: Appropriate Content).

When I began looking through /pol/ right after the Poway Synagogue shooting, I came across several claims that the shootings had been a “false flag” aimed at making the message board look bad.

When Bellingcat tweeted out a warning about shitposting and the shooter’s manifesto, in the immediate wake of the attack, probable anons even commented on the Tweet in an attempt to deny that a channer had been behind the attack.

This is a recognizable pattern that occurs in the wake of any crimes committed by members of the board. While the initial response to the Christchurch shooter’s massacre thread was riotous glee, in the days after the shooting many anons began to claim the attack had been a false flag. This actually sparked significant division and debate between the members of /pol/. In the below image, a user mocks other anons for being unable to “believe something in your favor is real.” Another anon responds, “As the evidence comes out, its [sic] quite clear that this was a false flag.”

In his manifesto, the Poway Synagogue shooter even weighed in on this debate, accusing other anons who called the Christchurch and Tree of Life Synagogue shootings “false flags” to merely have been scared: “They can’t fathom that there are brave White men alive who have the willpower and courage it takes to say, ‘Fuck my life—I’m willing to sacrifice everything for the benefit of my race.’”


Platform users deny that their platform has been used by mass shooters to publish their manifestos (T0129.006: Deny Involvement). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0152.010: File Hosting Platform + +**Summary**: Dropbox and Google Drive are examples of File Hosting Platforms.

File Hosting Platforms allow people to create Accounts which they can use to host files on another server, enabling access to content on any machine, and the ability to easily share files with anyone online.

Actors may also create their own File Hosting Platform on a Website or Server they control. + **Tactic**: TA07 Select Channels and Affordances @@ -20,4 +63,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0152.011.md b/generated_pages/techniques/T0152.011.md index 6c80eba..8dd3726 100644 --- a/generated_pages/techniques/T0152.011.md +++ b/generated_pages/techniques/T0152.011.md @@ -2,6 +2,48 @@ **Summary**: Wikipedia, Fandom, Ruwiki, TV Tropes, and the SCP Foundation are examples of Wiki Platforms.

Wikis use wiki software to allow platform users to collaboratively create and maintain an encyclopedia of information related to a given topic. +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0152 Digital Content Hosting Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0152.011: Wiki Platform + +**Summary**: Wikipedia, Fandom, Ruwiki, TV Tropes, and the SCP Foundation are examples of Wiki Platforms.

Wikis use wiki software to allow platform users to collaboratively create and maintain an encyclopedia of information related to a given topic. + +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0152 Digital Content Hosting Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0152.011: Wiki Platform + +**Summary**: Wikipedia, Fandom, Ruwiki, TV Tropes, and the SCP Foundation are examples of Wiki Platforms.

Wikis use wiki software to allow platform users to collaboratively create and maintain an encyclopedia of information related to a given topic. + **Tactic**: TA07 Select Channels and Affordances @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0152.012.md b/generated_pages/techniques/T0152.012.md index 1b7c691..10dde7d 100644 --- a/generated_pages/techniques/T0152.012.md +++ b/generated_pages/techniques/T0152.012.md @@ -2,6 +2,54 @@ **Summary**: Patreon, Fansly, and OnlyFans are examples of Subscription Service Platforms.

Subscription Service Platforms enable users with Accounts to host online content which other platform users can subscribe to access. Content typically requires a Paid Subscription to access; however, open content is often also supported. +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0152 Digital Content Hosting Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00110 How COVID-19 conspiracists and extremists use crowdfunding platforms to fund their activities](../../generated_pages/incidents/I00110.md) | The EU Disinfo Lab produced a report into disinformation published on crowdfunding platforms:<br><br>

More worrisome is the direct monetisation of disinformation happening on crowdfunding platforms: on Kickstarter, we found a user openly raising money for a documentary project suggesting that COVID-19 is a conspiracy.

A Kickstarter user attempted to use the platform to fund production of a documentary (T0017: Conduct Fundraising, T0087: Develop Video-Based Content, T0146: Account Asset, T0148.006: Crowdfunding Platform).

On Patreon, we found several instances of direct monetisation of COVID-19 disinformation, including posts promoting a device allegedly protecting against COVID-19 and 5G, as well as posts related to the “Plandemic” conspiracy video, which gained attention on YouTube before being removed by the platform.

We also found an account called “Stranger than fiction” entirely dedicated to disinformation, which openly states that their content was “Banned by screwtube and fakebook, our videos have been viewed over a billion times.”


The “Stranger than fiction” user presented itself as an alternative news source which had been banned from other platforms (T0146: Account Asset, T0097.202: News Outlet Persona, T0121.001: Bypass Content Blocking, T0152.012: Subscription Service Platform).<br><br>

On the US-based crowdfunding platform IndieGogo, EU DisinfoLab found a successful crowdfunding campaign of €133.903 for a book called Revolution Q. This book, now also available on Amazon, claims to be “Written for both newcomers and long-time QAnon fans alike, this book is a treasure-trove of information designed to help everyone weather The Storm.”

An IndieGogo account was used to gather funds to produce a book on QAnon (T0017: Conduct Fundraising, T0085.005: Develop Book, T0146: Account Asset, T0148.006: Crowdfunding Platform), with the book later sold on Amazon’s marketplace (T0148.007: eCommerce Platform). | +| [I00111 Patreon Is Bankrolling Climate Change Deniers While We All Burn](../../generated_pages/incidents/I00111.md) | In this article VICE News discusses a report produced by Advance Democracy on people who use Patreon to spread the false claim that an impending ice age will reverse the harms of the ongoing climate crisis:<br><br>

“The spread of climate misinformation is prolific on social media, as well as on sites like Patreon, where users are actually financially compensated through the platform for spreading falsehoods,” Daniel Jones, president of Advance Democracy, told VICE News.

“Companies hosting and promoting climate misinformation have a responsibility to take action to reduce dangerous misinformation, as falsehoods about climate science are every bit as dangerous as lies about vaccinations and disinformation about our elections.”

Patreon did not respond to VICE News’ request for comment on the report’s findings.

One of the biggest accounts spreading climate conspiracies is ADAPT 2030, which is run by David DuByne, who has 1,100 followers on Patreon. He is currently making over $3,500 every month from his subscribers.

[The science DuByne relies on does not support his hypothesis. However,] this has not prevented DuByne and many others from preying on people’s fears about climate change to spread conspiracies about an impending ice age, which they say will miraculously fix all of earth’s climate problems.

DuByne offers seven different membership levels for supporters, beginning at just $1 per month.

The most expensive costs $100 a month, and gives patrons “a private 20-minute call with David DuByne once per month, to discuss your particular preparedness issues or concerns.” So far just two people are paying this amount.

The researchers also found at least eight other accounts on Patreon that have spread climate change conspiracy theories as part of wider conspiracy sharing, including baseless claims about COVID-19 and the legitimacy of Joe Biden’s presidency. Some of these accounts are earning over $600 per month.


David DuByne created an account on Patreon, which he uses to post text, videos, and podcasts for his subscribers to discuss (T0085: Develop Text-Based Content, T0087: Develop Video-Based Content, T0088: Develop Audio-Based Content, T0146: Account Asset, T0115: Post Content, T0152.012: Subscription Service Platform, T0151.014: Comments Section, T0155.006: Subscription Access Asset). | +| [I00112 Patreon allows disinformation and conspiracies to be monetised in Spain](../../generated_pages/incidents/I00112.md) | In this report EU DisinfoLab identified 17 Spanish accounts that monetise and/or spread QAnon content or other conspiracy theories on Patreon. Content produced by these accounts goes against Patreon’s stated policy of removing creators that “advance disinformation promoting the QAnon conspiracy theory”. EU DisinfoLab found:<br><br>

In most cases, the creators monetise the content directly on Patreon (posts are only accessible for people sponsoring the creators) but there are also cases of indirect monetization (monetization through links leading to other platforms), an aspect that was flagged and analysed by Eu DisinfoLab in the mentioned previous report.

Some creators display links that redirects users to other platforms such as YouTube or LBRY where they can monetise their content. Some even offer almost all of their videos for free by redirecting to their YouTube channel.

Another modus operandi is for the creators to advertise on Patreon that they are looking for financing through PayPal or provide the author’s email to explore other financing alternatives.

[...]

Sometimes the content offered for a fee on Patreon is freely accessible on other platforms. Creators openly explain that they seek voluntary donation on Patreon, but that their creations will be public on YouTube. This means that the model of these platforms does not always involve a direct monetisation of the content. Creators who have built a strong reputation previously on other platforms can use Patreon as a platform to get some sponsorship which is not related to the content and give them more freedom to create.

Some users explicitly claim to use Patreon as a secondary channel to evade “censorship” by other platforms such as YouTube. Patreon seems to be perceived as a safe haven for disinformation and fringe content that has been suppressed for violating the policies of other platforms. For example, Jorge Guerra points out how Patreon acts as a back-up channel to go to in case of censorship by YouTube. Meanwhile, Alfa Mind openly claims to use Patreon to publish content that is not allowed on YouTube. “Exclusive access to videos that are prohibited on YouTube. These videos are only accessible on Patreon, as their content is very explicit and shocking”, are offered to the patrons who pay €3 per month.


In spite of Patreon’s stated policy, actors use accounts on their platform to generate revenue or donations for their content, and to provide a space to host content which was removed from other platforms (T0146: Account Asset, T0152.012: Subscription Service Platform, T0121.001: Bypass Content Blocking).<br><br>

Some actors were observed accepting donations via PayPal (T0146: Account Asset, T0148.003: Payment Processing Platform). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0152.012: Subscription Service Platform + +**Summary**: Patreon, Fansly, and OnlyFans are examples of Subscription Service Platforms.

Subscription Service Platforms enable users with Accounts to host online content which other platform users can subscribe to access. Content typically requires a Paid Subscription to access; however, open content is often also supported. + +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0152 Digital Content Hosting Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00110 How COVID-19 conspiracists and extremists use crowdfunding platforms to fund their activities](../../generated_pages/incidents/I00110.md) | The EU Disinfo Lab produced a report into disinformation published on crowdfunding platforms:<br><br>

More worrisome is the direct monetisation of disinformation happening on crowdfunding platforms: on Kickstarter, we found a user openly raising money for a documentary project suggesting that COVID-19 is a conspiracy.

A Kickstarter user attempted to use the platform to fund production of a documentary (T0017: Conduct Fundraising, T0087: Develop Video-Based Content, T0146: Account Asset, T0148.006: Crowdfunding Platform).

On Patreon, we found several instances of direct monetisation of COVID-19 disinformation, including posts promoting a device allegedly protecting against COVID-19 and 5G, as well as posts related to the “Plandemic” conspiracy video, which gained attention on YouTube before being removed by the platform.

We also found an account called “Stranger than fiction” entirely dedicated to disinformation, which openly states that their content was “Banned by screwtube and fakebook, our videos have been viewed over a billion times.”


The “Stranger than fiction” user presented itself as an alternative news source which had been banned from other platforms (T0146: Account Asset, T0097.202: News Outlet Persona, T0121.001: Bypass Content Blocking, T0152.012: Subscription Service Platform).<br><br>

On the US-based crowdfunding platform IndieGogo, EU DisinfoLab found a successful crowdfunding campaign of €133.903 for a book called Revolution Q. This book, now also available on Amazon, claims to be “Written for both newcomers and long-time QAnon fans alike, this book is a treasure-trove of information designed to help everyone weather The Storm.”

An IndieGogo account was used to gather funds to produce a book on QAnon (T0017: Conduct Fundraising, T0085.005: Develop Book, T0146: Account Asset, T0148.006: Crowdfunding Platform), with the book later sold on Amazon’s marketplace (T0148.007: eCommerce Platform). | +| [I00111 Patreon Is Bankrolling Climate Change Deniers While We All Burn](../../generated_pages/incidents/I00111.md) | In this article VICE News discusses a report produced by Advance Democracy on people who use Patreon to spread the false claim that an impending ice age will reverse the harms of the ongoing climate crisis:<br><br>

“The spread of climate misinformation is prolific on social media, as well as on sites like Patreon, where users are actually financially compensated through the platform for spreading falsehoods,” Daniel Jones, president of Advance Democracy, told VICE News.

“Companies hosting and promoting climate misinformation have a responsibility to take action to reduce dangerous misinformation, as falsehoods about climate science are every bit as dangerous as lies about vaccinations and disinformation about our elections.”

Patreon did not respond to VICE News’ request for comment on the report’s findings.

One of the biggest accounts spreading climate conspiracies is ADAPT 2030, which is run by David DuByne, who has 1,100 followers on Patreon. He is currently making over $3,500 every month from his subscribers.

[The science DuByne relies on does not support his hypothesis. However,] this has not prevented DuByne and many others from preying on people’s fears about climate change to spread conspiracies about an impending ice age, which they say will miraculously fix all of earth’s climate problems.

DuByne offers seven different membership levels for supporters, beginning at just $1 per month.

The most expensive costs $100 a month, and gives patrons “a private 20-minute call with David DuByne once per month, to discuss your particular preparedness issues or concerns.” So far just two people are paying this amount.

The researchers also found at least eight other accounts on Patreon that have spread climate change conspiracy theories as part of wider conspiracy sharing, including baseless claims about COVID-19 and the legitimacy of Joe Biden’s presidency. Some of these accounts are earning over $600 per month.


David DuByne created an account on Patreon, which he uses to post text, videos, and podcasts for his subscribers to discuss (T0085: Develop Text-Based Content, T0087: Develop Video-Based Content, T0088: Develop Audio-Based Content, T0146: Account Asset, T0115: Post Content, T0152.012: Subscription Service Platform, T0151.014: Comments Section, T0155.006: Subscription Access Asset). | +| [I00112 Patreon allows disinformation and conspiracies to be monetised in Spain](../../generated_pages/incidents/I00112.md) | In this report EU DisinfoLab identified 17 Spanish accounts that monetise and/or spread QAnon content or other conspiracy theories on Patreon. Content produced by these accounts goes against Patreon’s stated policy of removing creators that “advance disinformation promoting the QAnon conspiracy theory”. EU DisinfoLab found:<br><br>

In most cases, the creators monetise the content directly on Patreon (posts are only accessible for people sponsoring the creators) but there are also cases of indirect monetization (monetization through links leading to other platforms), an aspect that was flagged and analysed by Eu DisinfoLab in the mentioned previous report.

Some creators display links that redirects users to other platforms such as YouTube or LBRY where they can monetise their content. Some even offer almost all of their videos for free by redirecting to their YouTube channel.

Another modus operandi is for the creators to advertise on Patreon that they are looking for financing through PayPal or provide the author’s email to explore other financing alternatives.

[...]

Sometimes the content offered for a fee on Patreon is freely accessible on other platforms. Creators openly explain that they seek voluntary donation on Patreon, but that their creations will be public on YouTube. This means that the model of these platforms does not always involve a direct monetisation of the content. Creators who have built a strong reputation previously on other platforms can use Patreon as a platform to get some sponsorship which is not related to the content and give them more freedom to create.

Some users explicitly claim to use Patreon as a secondary channel to evade “censorship” by other platforms such as YouTube. Patreon seems to be perceived as a safe haven for disinformation and fringe content that has been suppressed for violating the policies of other platforms. For example, Jorge Guerra points out how Patreon acts as a back-up channel to go to in case of censorship by YouTube. Meanwhile, Alfa Mind openly claims to use Patreon to publish content that is not allowed on YouTube. “Exclusive access to videos that are prohibited on YouTube. These videos are only accessible on Patreon, as their content is very explicit and shocking”, are offered to the patrons who pay €3 per month.


In spite of Patreon’s stated policy, actors use accounts on their platform to generate revenue or donations for their content, and to provide a space to host content which was removed from other platforms (T0146: Account Asset, T0152.012: Subscription Service Platform, T0121.001: Bypass Content Blocking).<br><br>

Some actors were observed accepting donations via PayPal (T0146: Account Asset, T0148.003: Payment Processing Platform). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0152.012: Subscription Service Platform + +**Summary**: Patreon, Fansly, and OnlyFans are examples of Subscription Service Platforms.

Subscription Service Platforms enable users with Accounts to host online content which other platform users can subscribe to access. Content typically requires a Paid Subscription to access; however, open content is often also supported. + **Tactic**: TA07 Select Channels and Affordances @@ -22,4 +70,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0152.md index 988ce42..088d6ae 100644 --- a/generated_pages/techniques/T0152.md +++ b/generated_pages/techniques/T0152.md @@ -2,6 +2,48 @@ **Summary**: Digital Content Hosting Assets are online assets which are primarily designed to allow actors to upload content to the internet.<br><br>

Sub-techniques categorised under Digital Content Hosting Assets can include Community Hosting and Content Delivery capabilities; however, their nominal primary purpose is to host content online. +**Tactic**: TA07 Select Channels and Affordances + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0152: Digital Content Hosting Asset + +**Summary**: Digital Content Hosting Assets are online assets which are primarily designed to allow actors to upload content to the internet.<br><br>

Sub-techniques categorised under Digital Content Hosting Assets can include Community Hosting and Content Delivery capabilities; however, their nominal primary purpose is to host content online. + +**Tactic**: TA07 Select Channels and Affordances + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0152: Digital Content Hosting Asset + +**Summary**: Digital Content Hosting Assets are online assets which are primarily designed to allow actors to upload content to the internet.<br><br>

Sub-techniques categorised under Digital Content Hosting Assets can include Community Hosting and Content Delivery capabilities; however, their nominal primary purpose is to host content online. + **Tactic**: TA07 Select Channels and Affordances @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0153.001.md index b1ce667..3263186 100644 --- a/generated_pages/techniques/T0153.001.md +++ b/generated_pages/techniques/T0153.001.md @@ -2,6 +2,52 @@ **Summary**: Gmail, iCloud mail, and Microsoft Outlook are examples of Email Platforms.<br><br>

Email Platforms are online platforms which allow people to create Accounts that they can use to send and receive emails to and from other email accounts.

Instead of using an Email Platform, actors may set up their own Email Domain, letting them send and receive emails on a custom domain.

Analysts should default to Email Platform if they cannot confirm whether an email was sent using a privately operated Email Domain or via an account on a public Email Platform (for example, in situations where analysts are coding third-party reporting which does not specify the type of email used). +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0153 Digital Content Delivery Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00119 Independent journalist publishes Trump campaign document hacked by Iran despite election interference concerns](../../generated_pages/incidents/I00119.md) | An American journalist who runs an independent newsletter published a document [on 26 Sep 2024] that appears to have been stolen from Donald Trump’s presidential campaign — the first public posting of a file that is believed to be part of a dossier that federal officials say is part of an Iranian effort to manipulate the [2024] U.S. election.<br><br>

The PDF document is a 271-page opposition research file on former President Donald Trump’s running mate, Sen. JD Vance, R-Ohio.

For more than two months, hackers who the U.S. says are tied to Iran have tried to persuade the American media to cover files they stole. No outlets took the bait.

But on Thursday, reporter Ken Klippenstein, who self-publishes on Substack after he left The Intercept this year, published one of the files.

[...]

Reporters who have received the documents describe the same pattern: An AOL account emails them files, signed by a person using the name “Robert,” who is reluctant to speak to their identity or reasons for wanting the documents to receive coverage.

NBC News was not part of the Robert persona’s direct outreach, but it has viewed its correspondence with a reporter at another publication.

One of the emails from the Robert persona previously viewed by NBC News included three large PDF files, each corresponding to Trump’s three reported finalists for vice president. The Vance file appears to be the one Klippenstein hosts on his site.


In this example, hackers attributed to Iran used the Robert persona to email hacked documents to journalists (T0146: Account Asset, T0097.100: Individual Persona, T0153.001: Email Platform).<br><br>

The journalist Ken Klippenstein used his existing blog on Substack to host a link to download the document (T0089: Obtain Private Documents, T0097.102: Journalist Persona, T0115: Post Content, T0143.001: Authentic Persona, T0152.001: Blogging Platform, T0152.002: Blog Asset, T0150.003: Pre-Existing Asset). | +| [I00121 Operation Overload: how pro-Russian actors flood newsrooms with fake content and seek to divert their efforts](../../generated_pages/incidents/I00121.md) | The unique aspect of Operation Overload is a barrage of emails sent to newsrooms and fact-checkers across Europe. The authors of these messages urge recipients to verify content allegedly found online. The email subject lines often include an incitement to verify the claims briefly described in the message body. This is followed by a short list of links directing recipients to posts on Telegram, X, or known pro-Russian websites, including Pravda and Sputnik.<br><br>

We have collected 221 emails sent to 20 organisations. The organisations mostly received identical emails urging them to fact-check specific false stories, which demonstrates that the emails were sent as part of a larger coordinated campaign.

[...]

The authors of the emails do not hide their intention to see the fake content widely spread. In February 2024, a journalist at the German outlet CORRECTIV engaged with the sender of one of the emails, providing feedback on the narratives which were originally sent. CORRECTIV received a response from the same Gmail address, initially expressing respect and trust in CORRECTIV’s assessment, while asking: “is it possible for your work to be seen by as many people as possible?”, thereby clearly stating the goal of the operation.

[...]

All the emails come from authors posing as concerned citizens. All emails are sent with Gmail accounts, which is typical for personal use. This makes it challenging to identify the individuals behind these emails, as anyone can open a Gmail account for free. The email headers indicate that the messages were sent from the Gmail interface, not from a personal client which would disclose the sender’s IP address.


In this example, threat actors used Gmail accounts (T0146.001: Free Account Asset, T0097.100: Individual Persona, T0143.002: Fabricated Persona, T0153.001: Email Platform) to target journalists and fact-checkers, with the apparent goal of having them amplify operation narratives through fact checks. | + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0153.001: Email Platform + +**Summary**: Gmail, iCloud mail, and Microsoft Outlook are examples of Email Platforms.<br><br>

Email Platforms are online platforms which allow people to create Accounts that they can use to send and receive emails to and from other email accounts.

Instead of using an Email Platform, actors may set up their own Email Domain, letting them send and receive emails on a custom domain.

Analysts should default to Email Platform if they cannot confirm whether an email was sent using a privately operated Email Domain or via an account on a public Email Platform (for example, in situations where analysts are coding third-party reporting which does not specify the type of email used). + +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0153 Digital Content Delivery Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00119 Independent journalist publishes Trump campaign document hacked by Iran despite election interference concerns](../../generated_pages/incidents/I00119.md) | An American journalist who runs an independent newsletter published a document [on 26 Sep 2024] that appears to have been stolen from Donald Trump’s presidential campaign — the first public posting of a file that is believed to be part of a dossier that federal officials say is part of an Iranian effort to manipulate the [2024] U.S. election.<br><br>

The PDF document is a 271-page opposition research file on former President Donald Trump’s running mate, Sen. JD Vance, R-Ohio.

For more than two months, hackers who the U.S. says are tied to Iran have tried to persuade the American media to cover files they stole. No outlets took the bait.

But on Thursday, reporter Ken Klippenstein, who self-publishes on Substack after he left The Intercept this year, published one of the files.

[...]

Reporters who have received the documents describe the same pattern: An AOL account emails them files, signed by a person using the name “Robert,” who is reluctant to speak to their identity or reasons for wanting the documents to receive coverage.

NBC News was not part of the Robert persona’s direct outreach, but it has viewed its correspondence with a reporter at another publication.

One of the emails from the Robert persona previously viewed by NBC News included three large PDF files, each corresponding to Trump’s three reported finalists for vice president. The Vance file appears to be the one Klippenstein hosts on his site.


In this example, hackers attributed to Iran used the Robert persona to email hacked documents to journalists (T0146: Account Asset, T0097.100: Individual Persona, T0153.001: Email Platform).<br><br>

The journalist Ken Klippenstein used his existing blog on Substack to host a link to download the document (T0089: Obtain Private Documents, T0097.102: Journalist Persona, T0115: Post Content, T0143.001: Authentic Persona, T0152.001: Blogging Platform, T0152.002: Blog Asset, T0150.003: Pre-Existing Asset). | +| [I00121 Operation Overload: how pro-Russian actors flood newsrooms with fake content and seek to divert their efforts](../../generated_pages/incidents/I00121.md) | The unique aspect of Operation Overload is a barrage of emails sent to newsrooms and fact-checkers across Europe. The authors of these messages urge recipients to verify content allegedly found online. The email subject lines often include an incitement to verify the claims briefly described in the message body. This is followed by a short list of links directing recipients to posts on Telegram, X, or known pro-Russian websites, including Pravda and Sputnik.<br><br>

We have collected 221 emails sent to 20 organisations. The organisations mostly received identical emails urging them to fact-check specific false stories, which demonstrates that the emails were sent as part of a larger coordinated campaign.

[...]

The authors of the emails do not hide their intention to see the fake content widely spread. In February 2024, a journalist at the German outlet CORRECTIV engaged with the sender of one of the emails, providing feedback on the narratives which were originally sent. CORRECTIV received a response from the same Gmail address, initially expressing respect and trust in CORRECTIV’s assessment, while asking: “is it possible for your work to be seen by as many people as possible?”, thereby clearly stating the goal of the operation.

[...]

All the emails come from authors posing as concerned citizens. All emails are sent with Gmail accounts, which is typical for personal use. This makes it challenging to identify the individuals behind these emails, as anyone can open a Gmail account for free. The email headers indicate that the messages were sent from the Gmail interface, not from a personal client which would disclose the sender’s IP address.


In this example, threat actors used Gmail accounts (T0146.001: Free Account Asset, T0097.100: Individual Persona, T0143.002: Fabricated Persona, T0153.001: Email Platform) to target journalists and fact-checkers, with the apparent goal of having them amplify operation narratives through fact checks. | + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0153.001: Email Platform + +**Summary**: Gmail, iCloud mail, and Microsoft Outlook are examples of Email Platforms.<br><br>

Email Platforms are online platforms which allow people to create Accounts that they can use to send and receive emails to and from other email accounts.

Instead of using an Email Platform, actors may set up their own Email Domain, letting them send and receive emails on a custom domain.

Analysts should default to Email Platform if they cannot confirm whether an email was sent using a privately operated Email Domain or via an account on a public Email Platform (for example, in situations where analysts are coding third-party reporting which does not specify the type of email used). + **Tactic**: TA07 Select Channels and Affordances @@ -21,4 +67,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0153.002.md index 8109d4e..64002ef 100644 --- a/generated_pages/techniques/T0153.002.md +++ b/generated_pages/techniques/T0153.002.md @@ -2,6 +2,48 @@ **Summary**: Bitly and TinyURL are examples of Link Shortening Platforms.<br><br>

Link Shortening Platforms are online platforms which allow people to create Accounts that they can use to convert existing URLs into Shortened Links, or into QR Codes. +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0153 Digital Content Delivery Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0153.002: Link Shortening Platform + +**Summary**: Bitly and TinyURL are examples of Link Shortening Platforms.

Link Shortening Platforms are online platforms which allow people to create Accounts that they can use to convert existing URLs into Shortened Links, or into QR Codes. + +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0153 Digital Content Delivery Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0153.002: Link Shortening Platform + +**Summary**: Bitly and TinyURL are examples of Link Shortening Platforms.

Link Shortening Platforms are online platforms which allow people to create Accounts that they can use to convert existing URLs into Shortened Links, or into QR Codes. + **Tactic**: TA07 Select Channels and Affordances @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0153.003.md b/generated_pages/techniques/T0153.003.md index aefd178..c74d8c3 100644 --- a/generated_pages/techniques/T0153.003.md +++ b/generated_pages/techniques/T0153.003.md @@ -2,6 +2,48 @@ **Summary**: A Shortened Link is a custom URL which is typically a shortened version of another URL. +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0153 Digital Content Delivery Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0153.003: Shortened Link Asset + +**Summary**: A Shortened Link is a custom URL which is typically a shortened version of another URL. + +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0153 Digital Content Delivery Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0153.003: Shortened Link Asset + +**Summary**: A Shortened Link is a custom URL which is typically a shortened version of another URL. + **Tactic**: TA07 Select Channels and Affordances @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0153.004.md b/generated_pages/techniques/T0153.004.md index 99c375c..71fc7ba 100644 --- a/generated_pages/techniques/T0153.004.md +++ b/generated_pages/techniques/T0153.004.md @@ -2,6 +2,50 @@ **Summary**: A QR Code allows people to use cameras on their smartphones to open a URL. +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0153 Digital Content Delivery Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00128 #TrollTracker: Outward Influence Operation From Iran](../../generated_pages/incidents/I00128.md) | [Meta removed a network of assets for coordinated inauthentic behaviour. One page] in the network, @StopMEK, was promoting views against the People’s Mujahedin of Iran (MEK), the largest and most active political opposition group against the Islamic Republic of Iran Leadership.

The content on the page drew narratives showing parallels between the Islamic State of Iraq and Syria (ISIS) and the MEK.

Apart from images and memes, the @StopMEK page shared a link to an archived report on how the United States was monitoring the MEK’s movement in Iran in the mid-1990’s. The file was embedded as a QR code on one of the page’s images.


In this example a Facebook page presented itself as focusing on a political cause (T0097.208: Social Cause Persona, T0151.001: Social Media Platform, T0151.002: Online Community Group). Within the page it embedded a QR code (T0122: Direct Users to Alternative Platforms, T0153.004: QR Code Asset), which took users to a document hosted on another website (T0152.004: Website Asset). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0153.004: QR Code Asset + +**Summary**: A QR Code allows people to use cameras on their smartphones to open a URL. + +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0153 Digital Content Delivery Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00128 #TrollTracker: Outward Influence Operation From Iran](../../generated_pages/incidents/I00128.md) | [Meta removed a network of assets for coordinated inauthentic behaviour. One page] in the network, @StopMEK, was promoting views against the People’s Mujahedin of Iran (MEK), the largest and most active political opposition group against the Islamic Republic of Iran Leadership.

The content on the page drew narratives showing parallels between the Islamic State of Iraq and Syria (ISIS) and the MEK.

Apart from images and memes, the @StopMEK page shared a link to an archived report on how the United States was monitoring the MEK’s movement in Iran in the mid-1990’s. The file was embedded as a QR code on one of the page’s images.


In this example a Facebook page presented itself as focusing on a political cause (T0097.208: Social Cause Persona, T0151.001: Social Media Platform, T0151.002: Online Community Group). Within the page it embedded a QR code (T0122: Direct Users to Alternative Platforms, T0153.004: QR Code Asset), which took users to a document hosted on another website (T0152.004: Website Asset). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0153.004: QR Code Asset + +**Summary**: A QR Code allows people to use cameras on their smartphones to open a URL. + **Tactic**: TA07 Select Channels and Affordances @@ -20,4 +64,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0153.005.md b/generated_pages/techniques/T0153.005.md index 963b696..748ddaf 100644 --- a/generated_pages/techniques/T0153.005.md +++ b/generated_pages/techniques/T0153.005.md @@ -2,6 +2,53 @@ **Summary**: Google Ads, Facebook Ads, and LinkedIn Marketing Solutions are examples of Online Advertising Platforms.

Online Advertising Platforms are online platforms which allow people to create Accounts that they can use to upload and deliver adverts to people online. +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0153 Digital Content Delivery Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00097 Report: Not Just Algorithms](../../generated_pages/incidents/I00097.md) | This report explores the role of four systems (recommender systems, content moderation systems, ad approval systems and ad management systems) in creating risks around eating disorders.

[...]

Ad approval systems can create risks. We created 12 ‘fake’ ads that promoted dangerous weight loss techniques and behaviours. We tested to see if these ads would be approved to run, and they were. This means dangerous behaviours can be promoted in paid-for advertising. (Requests to run ads were withdrawn after approval or rejection, so no dangerous advertising was published as a result of this experiment.)

Specifically: On TikTok, 100% of the ads were approved to run; On Facebook, 83% of the ads were approved to run; On Google, 75% of the ads were approved to run.

Ad management systems can create risks. We investigated how platforms allow advertisers to target users, and found that it is possible to target people who may be interested in pro-eating disorder content.

Specifically: On TikTok: End-users who interact with pro-eating disorder content on TikTok, download advertisers’ eating disorder apps or visit their websites can be targeted; On Meta: End-users who interact with pro-eating disorder content on Meta, download advertisers’ eating disorder apps or visit their websites can be targeted; On X: End-users who follow pro- eating disorder accounts, or ‘look’ like them, can be targeted; On Google: End-users who search specific words or combinations of words (including pro-eating disorder words), watch pro-eating disorder YouTube channels and probably those who download eating disorder and mental health apps can be targeted.


Advertising platforms managed by TikTok, Facebook, and Google approved adverts to be displayed on their platforms. These platforms enabled users to deliver targeted advertising to potentially vulnerable platform users (T0018: Purchase Targeted Advertisements, T0153.005: Online Advertising Platform). | +| [I00114 ‘Carol’s Journey’: What Facebook knew about how it radicalized users](../../generated_pages/incidents/I00114.md) | This report examines internal Facebook communications which reveal employees’ concerns about how the platform’s algorithm was recommending users join extremist conspiracy groups.

In summer 2019, a new Facebook user named Carol Smith signed up for the platform, describing herself as a politically conservative mother from Wilmington, North Carolina. Smith’s account indicated an interest in politics, parenting and Christianity and followed a few of her favorite brands, including Fox News and then-President Donald Trump.

Though Smith had never expressed interest in conspiracy theories, in just two days Facebook was recommending she join groups dedicated to QAnon, a sprawling and baseless conspiracy theory and movement that claimed Trump was secretly saving the world from a cabal of pedophiles and Satanists.

Smith didn’t follow the recommended QAnon groups, but whatever algorithm Facebook was using to determine how she should engage with the platform pushed ahead just the same. Within one week, Smith’s feed was full of groups and pages that had violated Facebook’s own rules, including those against hate speech and disinformation.

Smith wasn’t a real person. A researcher employed by Facebook invented the account, along with those of other fictitious “test users” in 2019 and 2020, as part of an experiment in studying the platform’s role in misinforming and polarizing users through its recommendations systems.

That researcher said Smith’s Facebook experience was “a barrage of extreme, conspiratorial, and graphic content.”


Facebook’s Algorithm suggested users join groups which supported the QAnon movement (T0151.001: Social Media Platform, T0151.002: Online Community Group, T0153.006: Content Recommendation Algorithm, T0097.208: Social Cause Persona).

Further investigation by Facebook uncovered that its advertising platform had been used to promote QAnon narratives (T0146: Account Asset, T0114: Deliver Ads, T0153.005: Online Advertising Platform):

For years, company researchers had been running experiments like Carol Smith’s to gauge the platform’s hand in radicalizing users, according to the documents seen by NBC News.

This internal work repeatedly found that recommendation tools pushed users into extremist groups, findings that helped inform policy changes and tweaks to recommendations and news feed rankings. Those rankings are a tentacled, ever-evolving system widely known as “the algorithm” that pushes content to users. But the research at that time stopped well short of inspiring any movement to change the groups and pages themselves.

That reluctance was indicative of “a pattern at Facebook,” Haugen told reporters this month. “They want the shortest path between their current policies and any action.”

[...]

By summer 2020, Facebook was hosting thousands of private QAnon groups and pages, with millions of members and followers, according to an unreleased internal investigation.

A year after the FBI designated QAnon as a potential domestic terrorist threat in the wake of standoffs, alleged planned kidnappings, harassment campaigns and shootings, Facebook labeled QAnon a “Violence Inciting Conspiracy Network” and banned it from the platform, along with militias and other violent social movements. A small team working across several of Facebook’s departments found its platforms had hosted hundreds of ads on Facebook and Instagram worth thousands of dollars and millions of views, “praising, supporting, or representing” the conspiracy theory.

[...]

For many employees inside Facebook, the enforcement came too late, according to posts left on Workplace, the company’s internal message board.

“We’ve known for over a year now that our recommendation systems can very quickly lead users down the path to conspiracy theories and groups,” one integrity researcher, whose name had been redacted, wrote in a post announcing she was leaving the company. “This fringe group has grown to national prominence, with QAnon congressional candidates and QAnon hashtags and groups trending in the mainstream. We were willing to act only *after* things had spiraled into a dire state.”<br>

While Facebook’s ban initially appeared effective, a problem remained: The removal of groups and pages didn’t wipe out QAnon’s most extreme followers, who continued to organize on the platform.

“There was enough evidence to raise red flags in the expert community that Facebook and other platforms failed to address QAnon’s violent extremist dimension,” said Marc-André Argentino, a research fellow at King’s College London’s International Centre for the Study of Radicalisation, who has extensively studied QAnon.

Believers simply rebranded as anti-child-trafficking groups or migrated to other communities, including those around the anti-vaccine movement.

[...]

These conspiracy groups had become the fastest-growing groups on Facebook, according to the report, but Facebook wasn’t able to control their “meteoric growth,” the researchers wrote, “because we were looking at each entity individually, rather than as a cohesive movement.” A Facebook spokesperson told BuzzFeed News it took many steps to limit election misinformation but that it was unable to catch everything.
| + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0153.005: Online Advertising Platform + +**Summary**: Google Ads, Facebook Ads, and LinkedIn Marketing Solutions are examples of Online Advertising Platforms.

Online Advertising Platforms are online platforms which allow people to create Accounts that they can use to upload and deliver adverts to people online. + +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0153 Digital Content Delivery Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00097 Report: Not Just Algorithms](../../generated_pages/incidents/I00097.md) | This report explores the role of four systems (recommender systems, content moderation systems, ad approval systems and ad management systems) in creating risks around eating disorders.

[...]

Ad approval systems can create risks. We created 12 ‘fake’ ads that promoted dangerous weight loss techniques and behaviours. We tested to see if these ads would be approved to run, and they were. This means dangerous behaviours can be promoted in paid-for advertising. (Requests to run ads were withdrawn after approval or rejection, so no dangerous advertising was published as a result of this experiment.)

Specifically: On TikTok, 100% of the ads were approved to run; On Facebook, 83% of the ads were approved to run; On Google, 75% of the ads were approved to run.

Ad management systems can create risks. We investigated how platforms allow advertisers to target users, and found that it is possible to target people who may be interested in pro-eating disorder content.

Specifically: On TikTok: End-users who interact with pro-eating disorder content on TikTok, download advertisers’ eating disorder apps or visit their websites can be targeted; On Meta: End-users who interact with pro-eating disorder content on Meta, download advertisers’ eating disorder apps or visit their websites can be targeted; On X: End-users who follow pro-eating disorder accounts, or ‘look’ like them, can be targeted; On Google: End-users who search specific words or combinations of words (including pro-eating disorder words), watch pro-eating disorder YouTube channels and probably those who download eating disorder and mental health apps can be targeted.<br>


Advertising platforms managed by TikTok, Facebook, and Google approved adverts to be displayed on their platforms. These platforms enabled users to deliver targeted advertising to potentially vulnerable platform users (T0018: Purchase Targeted Advertisements, T0153.005: Online Advertising Platform). | +| [I00108 How you thought you support the animals and you ended up funding white supremacists](../../generated_pages/incidents/I00108.md) | This article examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology:

Suavelos uses Facebook and other platforms to amplify its message. In order to bypass the platforms’ community standards and keep their public pages active, Facebook pages such as “I support the police” are a good vehicle to spread a specific agenda without claiming to be racist. In looking back at this Facebook page, we followed Facebook’s algorithm for related pages and found suggested Facebook pages

[...]

This amplification strategy on Facebook is successful, as according to SimilarWeb figures, it attracts around 111,000 visits every month on the Suavelos.eu website.

[...]

Revenue through online advertisements can be achieved by different platforms through targeted advertisements, like Google Adsense or Doubleclick, or related and similar sponsored content, such as Taboola. Accordingly, Suavelos.eu uses both of these websites to display advertisements and consequently receives funding from such advertisements.

Once visitors are on the website supporting its advertisement revenue, Suavelos’ goal is to then turn these visitors into regular members of Suavelos network through donations or fees, or have them continue to support Suavelos.


Suavelos created a variety of pages on Facebook which presented themselves as centring on prosocial causes. Facebook’s algorithm helped direct users to these pages (T0092: Build Network, T0151.001: Social Media Platform, T0153.006: Content Recommendation Algorithm, T0151.003: Online Community Page, T0097.208: Social Cause Persona).<br>

Suavelos used these pages to generate traffic for their WordPress site (T0122: Direct Users to Alternative Platforms, T0152.003: Website Hosting Platform, T0152.004: Website Asset), which used accounts on a variety of online advertising platforms to host adverts (T0146: Account Asset, T0153.005: Online Advertising Platform). | +| [I00114 ‘Carol’s Journey’: What Facebook knew about how it radicalized users](../../generated_pages/incidents/I00114.md) | This report examines internal Facebook communications which reveal employees’ concerns about how the platform’s algorithm was recommending users join extremist conspiracy groups.<br>

In summer 2019, a new Facebook user named Carol Smith signed up for the platform, describing herself as a politically conservative mother from Wilmington, North Carolina. Smith’s account indicated an interest in politics, parenting and Christianity and followed a few of her favorite brands, including Fox News and then-President Donald Trump.

Though Smith had never expressed interest in conspiracy theories, in just two days Facebook was recommending she join groups dedicated to QAnon, a sprawling and baseless conspiracy theory and movement that claimed Trump was secretly saving the world from a cabal of pedophiles and Satanists.

Smith didn’t follow the recommended QAnon groups, but whatever algorithm Facebook was using to determine how she should engage with the platform pushed ahead just the same. Within one week, Smith’s feed was full of groups and pages that had violated Facebook’s own rules, including those against hate speech and disinformation.

Smith wasn’t a real person. A researcher employed by Facebook invented the account, along with those of other fictitious “test users” in 2019 and 2020, as part of an experiment in studying the platform’s role in misinforming and polarizing users through its recommendations systems.

That researcher said Smith’s Facebook experience was “a barrage of extreme, conspiratorial, and graphic content.”


Facebook’s Algorithm suggested users join groups which supported the QAnon movement (T0151.001: Social Media Platform, T0151.002: Online Community Group, T0153.006: Content Recommendation Algorithm, T0097.208: Social Cause Persona).

Further investigation by Facebook uncovered that its advertising platform had been used to promote QAnon narratives (T0146: Account Asset, T0114: Deliver Ads, T0153.005: Online Advertising Platform):

For years, company researchers had been running experiments like Carol Smith’s to gauge the platform’s hand in radicalizing users, according to the documents seen by NBC News.

This internal work repeatedly found that recommendation tools pushed users into extremist groups, findings that helped inform policy changes and tweaks to recommendations and news feed rankings. Those rankings are a tentacled, ever-evolving system widely known as “the algorithm” that pushes content to users. But the research at that time stopped well short of inspiring any movement to change the groups and pages themselves.

That reluctance was indicative of “a pattern at Facebook,” Haugen told reporters this month. “They want the shortest path between their current policies and any action.”

[...]

By summer 2020, Facebook was hosting thousands of private QAnon groups and pages, with millions of members and followers, according to an unreleased internal investigation.

A year after the FBI designated QAnon as a potential domestic terrorist threat in the wake of standoffs, alleged planned kidnappings, harassment campaigns and shootings, Facebook labeled QAnon a “Violence Inciting Conspiracy Network” and banned it from the platform, along with militias and other violent social movements. A small team working across several of Facebook’s departments found its platforms had hosted hundreds of ads on Facebook and Instagram worth thousands of dollars and millions of views, “praising, supporting, or representing” the conspiracy theory.

[...]

For many employees inside Facebook, the enforcement came too late, according to posts left on Workplace, the company’s internal message board.

“We’ve known for over a year now that our recommendation systems can very quickly lead users down the path to conspiracy theories and groups,” one integrity researcher, whose name had been redacted, wrote in a post announcing she was leaving the company. “This fringe group has grown to national prominence, with QAnon congressional candidates and QAnon hashtags and groups trending in the mainstream. We were willing to act only *after* things had spiraled into a dire state.”<br>

While Facebook’s ban initially appeared effective, a problem remained: The removal of groups and pages didn’t wipe out QAnon’s most extreme followers, who continued to organize on the platform.

“There was enough evidence to raise red flags in the expert community that Facebook and other platforms failed to address QAnon’s violent extremist dimension,” said Marc-André Argentino, a research fellow at King’s College London’s International Centre for the Study of Radicalisation, who has extensively studied QAnon.

Believers simply rebranded as anti-child-trafficking groups or migrated to other communities, including those around the anti-vaccine movement.

[...]

These conspiracy groups had become the fastest-growing groups on Facebook, according to the report, but Facebook wasn’t able to control their “meteoric growth,” the researchers wrote, “because we were looking at each entity individually, rather than as a cohesive movement.” A Facebook spokesperson told BuzzFeed News it took many steps to limit election misinformation but that it was unable to catch everything.
| + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0153.005: Online Advertising Platform + +**Summary**: Google Ads, Facebook Ads, and LinkedIn Marketing Solutions are examples of Online Advertising Platforms.

Online Advertising Platforms are online platforms which allow people to create Accounts that they can use to upload and deliver adverts to people online. + **Tactic**: TA07 Select Channels and Affordances @@ -22,4 +69,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0153.006.md b/generated_pages/techniques/T0153.006.md index e3b02a3..cd39f88 100644 --- a/generated_pages/techniques/T0153.006.md +++ b/generated_pages/techniques/T0153.006.md @@ -2,6 +2,56 @@ **Summary**: Many online platforms have Content Recommendation Algorithms, which promote content posted to the platform to users based on metrics the platform operators are trying to meet. Algorithms typically surface platform content which the user is likely to engage with, based on how they and other users have behaved on the platform. +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0153 Digital Content Delivery Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00097 Report: Not Just Algorithms](../../generated_pages/incidents/I00097.md) | This report explores the role of four systems (recommender systems, content moderation systems, ad approval systems and ad management systems) in creating risks around eating disorders.

[...]

Content recommender systems can create risks. We created and primed ‘fake’ accounts for 16-year-old Australians and found that some recommender systems will promote pro-eating disorder content to children.<br>

Specifically: On TikTok, 0% of the content recommended was classified as pro-eating disorder content; On Instagram, 23% of the content recommended was classified as pro-eating disorder content; On X, 67% of content recommended was classified as pro-eating disorder content (and disturbingly, another 13% displayed self-harm imagery).


Content recommendation algorithms developed by Instagram (T0151.001: Social Media Platform, T0153.006: Content Recommendation Algorithm) and X (T0151.008: Microblogging Platform, T0153.006: Content Recommendation Algorithm) promoted harmful content to an account presenting as a 16-year-old Australian. | +| [I00108 How you thought you support the animals and you ended up funding white supremacists](../../generated_pages/incidents/I00108.md) | This article examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology:<br>

Suavelos uses Facebook and other platforms to amplify its message. In order to bypass the platforms’ community standards and keep their public pages active, Facebook pages such as “I support the police” are a good vehicle to spread a specific agenda without claiming to be racist. In looking back at this Facebook page, we followed Facebook’s algorithm for related pages and found suggested Facebook pages

[...]

This amplification strategy on Facebook is successful, as according to SimilarWeb figures, it attracts around 111,000 visits every month on the Suavelos.eu website.

[...]

Revenue through online advertisements can be achieved by different platforms through targeted advertisements, like Google Adsense or Doubleclick, or related and similar sponsored content, such as Taboola. Accordingly, Suavelos.eu uses both of these websites to display advertisements and consequently receives funding from such advertisements.

Once visitors are on the website supporting its advertisement revenue, Suavelos’ goal is to then turn these visitors into regular members of Suavelos network through donations or fees, or have them continue to support Suavelos.


Suavelos created a variety of pages on Facebook which presented themselves as centring on prosocial causes. Facebook’s algorithm helped direct users to these pages (T0092: Build Network, T0151.001: Social Media Platform, T0153.006: Content Recommendation Algorithm, T0151.003: Online Community Page, T0097.208: Social Cause Persona).<br>

Suavelos used these pages to generate traffic for their WordPress site (T0122: Direct Users to Alternative Platforms, T0152.003: Website Hosting Platform, T0152.004: Website Asset), which used accounts on a variety of online advertising platforms to host adverts (T0146: Account Asset, T0153.005: Online Advertising Platform). | +| [I00114 ‘Carol’s Journey’: What Facebook knew about how it radicalized users](../../generated_pages/incidents/I00114.md) | This report examines internal Facebook communications which reveal employees’ concerns about how the platform’s algorithm was recommending users join extremist conspiracy groups.<br>

In summer 2019, a new Facebook user named Carol Smith signed up for the platform, describing herself as a politically conservative mother from Wilmington, North Carolina. Smith’s account indicated an interest in politics, parenting and Christianity and followed a few of her favorite brands, including Fox News and then-President Donald Trump.

Though Smith had never expressed interest in conspiracy theories, in just two days Facebook was recommending she join groups dedicated to QAnon, a sprawling and baseless conspiracy theory and movement that claimed Trump was secretly saving the world from a cabal of pedophiles and Satanists.

Smith didn’t follow the recommended QAnon groups, but whatever algorithm Facebook was using to determine how she should engage with the platform pushed ahead just the same. Within one week, Smith’s feed was full of groups and pages that had violated Facebook’s own rules, including those against hate speech and disinformation.

Smith wasn’t a real person. A researcher employed by Facebook invented the account, along with those of other fictitious “test users” in 2019 and 2020, as part of an experiment in studying the platform’s role in misinforming and polarizing users through its recommendations systems.

That researcher said Smith’s Facebook experience was “a barrage of extreme, conspiratorial, and graphic content.”


Facebook’s Algorithm suggested users join groups which supported the QAnon movement (T0151.001: Social Media Platform, T0151.002: Online Community Group, T0153.006: Content Recommendation Algorithm, T0097.208: Social Cause Persona).

Further investigation by Facebook uncovered that its advertising platform had been used to promote QAnon narratives (T0146: Account Asset, T0114: Deliver Ads, T0153.005: Online Advertising Platform):

For years, company researchers had been running experiments like Carol Smith’s to gauge the platform’s hand in radicalizing users, according to the documents seen by NBC News.

This internal work repeatedly found that recommendation tools pushed users into extremist groups, findings that helped inform policy changes and tweaks to recommendations and news feed rankings. Those rankings are a tentacled, ever-evolving system widely known as “the algorithm” that pushes content to users. But the research at that time stopped well short of inspiring any movement to change the groups and pages themselves.

That reluctance was indicative of “a pattern at Facebook,” Haugen told reporters this month. “They want the shortest path between their current policies and any action.”

[...]

By summer 2020, Facebook was hosting thousands of private QAnon groups and pages, with millions of members and followers, according to an unreleased internal investigation.

A year after the FBI designated QAnon as a potential domestic terrorist threat in the wake of standoffs, alleged planned kidnappings, harassment campaigns and shootings, Facebook labeled QAnon a “Violence Inciting Conspiracy Network” and banned it from the platform, along with militias and other violent social movements. A small team working across several of Facebook’s departments found its platforms had hosted hundreds of ads on Facebook and Instagram worth thousands of dollars and millions of views, “praising, supporting, or representing” the conspiracy theory.

[...]

For many employees inside Facebook, the enforcement came too late, according to posts left on Workplace, the company’s internal message board.

“We’ve known for over a year now that our recommendation systems can very quickly lead users down the path to conspiracy theories and groups,” one integrity researcher, whose name had been redacted, wrote in a post announcing she was leaving the company. “This fringe group has grown to national prominence, with QAnon congressional candidates and QAnon hashtags and groups trending in the mainstream. We were willing to act only *after* things had spiraled into a dire state.”<br>

While Facebook’s ban initially appeared effective, a problem remained: The removal of groups and pages didn’t wipe out QAnon’s most extreme followers, who continued to organize on the platform.

“There was enough evidence to raise red flags in the expert community that Facebook and other platforms failed to address QAnon’s violent extremist dimension,” said Marc-André Argentino, a research fellow at King’s College London’s International Centre for the Study of Radicalisation, who has extensively studied QAnon.

Believers simply rebranded as anti-child-trafficking groups or migrated to other communities, including those around the anti-vaccine movement.

[...]

These conspiracy groups had become the fastest-growing groups on Facebook, according to the report, but Facebook wasn’t able to control their “meteoric growth,” the researchers wrote, “because we were looking at each entity individually, rather than as a cohesive movement.” A Facebook spokesperson told BuzzFeed News it took many steps to limit election misinformation but that it was unable to catch everything.
| +| [I00115 How Facebook shapes your feed](../../generated_pages/incidents/I00115.md) | This 2021 report by The Washington Post explains the mechanics of Facebook’s algorithm (T0151.001: Social Media Platform, T0153.006: Content Recommendation Algorithm):

In its early years, Facebook’s algorithm prioritized signals such as likes, clicks and comments to decide which posts to amplify. Publishers, brands and individual users soon learned how to craft posts and headlines designed to induce likes and clicks, giving rise to what came to be known as “clickbait.” By 2013, upstart publishers such as Upworthy and ViralNova were amassing tens of millions of readers with articles designed specifically to game Facebook’s news feed algorithm.

Facebook realized that users were growing wary of misleading teaser headlines, and the company recalibrated its algorithm in 2014 and 2015 to downgrade clickbait and focus on new metrics, such as the amount of time a user spent reading a story or watching a video, and incorporating surveys on what content users found most valuable. Around the same time, its executives identified video as a business priority, and used the algorithm to boost “native” videos shared directly to Facebook. By the mid-2010s, the news feed had tilted toward slick, professionally produced content, especially videos that would hold people’s attention.

In 2016, however, Facebook executives grew worried about a decline in “original sharing.” Users were spending so much time passively watching and reading that they weren’t interacting with each other as much. Young people in particular shifted their personal conversations to rivals such as Snapchat that offered more intimacy.

Once again, Facebook found its answer in the algorithm: It developed a new set of goal metrics that it called “meaningful social interactions,” designed to show users more posts from friends and family, and fewer from big publishers and brands. In particular, the algorithm began to give outsize weight to posts that sparked lots of comments and replies.

The downside of this approach was that the posts that sparked the most comments tended to be the ones that made people angry or offended them, the documents show. Facebook became an angrier, more polarizing place. It didn’t help that, starting in 2017, the algorithm had assigned reaction emoji — including the angry emoji — five times the weight of a simple “like,” according to company documents.
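The ranking mechanics described in this passage can be sketched as a weighted engagement score. In the sketch below only the five-times weighting of reaction emoji relative to a “like” comes from the reporting; the comment weight, the `Post` structure, and the sample data are illustrative assumptions:

```python
# Illustrative sketch of engagement-weighted feed ranking. Only the 5x
# reaction-emoji weight is taken from the reporting; every other value
# and the data model are assumptions made for illustration.
from dataclasses import dataclass

LIKE_WEIGHT = 1.0
REACTION_WEIGHT = 5.0   # reaction emoji weighted five times a "like"
COMMENT_WEIGHT = 2.0    # assumed value, not from the reporting

@dataclass
class Post:
    post_id: str
    likes: int
    reactions: int
    comments: int

def engagement_score(post: Post) -> float:
    """Posts with higher scores are surfaced to more users' feeds."""
    return (post.likes * LIKE_WEIGHT
            + post.reactions * REACTION_WEIGHT
            + post.comments * COMMENT_WEIGHT)

posts = [Post("calm", likes=100, reactions=2, comments=5),
         Post("outrage", likes=30, reactions=40, comments=60)]
ranked = sorted(posts, key=engagement_score, reverse=True)
print([p.post_id for p in ranked])  # the angrier post ranks first
```

Under this weighting a post which provokes reactions and replies outranks a better-liked but calmer post, which is the polarising dynamic the documents describe.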

[...]

Internal documents show Facebook researchers found that, for the most politically oriented 1 million American users, nearly 90 percent of the content that Facebook shows them is about politics and social issues. Those groups also received the most misinformation, especially a set of users associated with mostly right-leaning content, who were shown one misinformation post out of every 40, according to a document from June 2020.

One takeaway is that Facebook’s algorithm isn’t a runaway train. The company may not directly control what any given user posts, but by choosing which types of posts will be seen, it sculpts the information landscape according to its business priorities. Some within the company would like to see Facebook use the algorithm to explicitly promote certain values, such as democracy and civil discourse. Others have suggested that it develop and prioritize new metrics that align with users’ values, as with a 2020 experiment in which the algorithm was trained to predict what posts they would find “good for the world” and “bad for the world,” and optimize for the former.
| + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0153.006: Content Recommendation Algorithm + +**Summary**: Many online platforms have Content Recommendation Algorithms, which promote content posted to the platform to users based on metrics the platform operators are trying to meet. Algorithms typically surface platform content which the user is likely to engage with, based on how they and other users have behaved on the platform. + +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0153 Digital Content Delivery Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00097 Report: Not Just Algorithms](../../generated_pages/incidents/I00097.md) | This report explores the role of four systems (recommender systems, content moderation systems, ad approval systems and ad management systems) in creating risks around eating disorders.

[...]

Content recommender systems can create risks. We created and primed ‘fake’ accounts for 16-year-old Australians and found that some recommender systems will promote pro-eating disorder content to children.<br>

Specifically: On TikTok, 0% of the content recommended was classified as pro-eating disorder content; On Instagram, 23% of the content recommended was classified as pro-eating disorder content; On X, 67% of content recommended was classified as pro-eating disorder content (and disturbingly, another 13% displayed self-harm imagery).


Content recommendation algorithms developed by Instagram (T0151.001: Social Media Platform, T0153.006: Content Recommendation Algorithm) and X (T0151.008: Microblogging Platform, T0153.006: Content Recommendation Algorithm) promoted harmful content to an account presenting as a 16-year-old Australian. | +| [I00108 How you thought you support the animals and you ended up funding white supremacists](../../generated_pages/incidents/I00108.md) | This article examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology:<br>

Suavelos uses Facebook and other platforms to amplify its message. In order to bypass the platforms’ community standards and keep their public pages active, Facebook pages such as “I support the police” are a good vehicle to spread a specific agenda without claiming to be racist. In looking back at this Facebook page, we followed Facebook’s algorithm for related pages and found suggested Facebook pages

[...]

This amplification strategy on Facebook is successful, as according to SimilarWeb figures, it attracts around 111,000 visits every month on the Suavelos.eu website.

[...]

Revenue through online advertisements can be achieved by different platforms through targeted advertisements, like Google Adsense or Doubleclick, or related and similar sponsored content, such as Taboola. Accordingly, Suavelos.eu uses both of these websites to display advertisements and consequently receives funding from such advertisements.

Once visitors are on the website supporting its advertisement revenue, Suavelos’ goal is to then turn these visitors into regular members of Suavelos network through donations or fees, or have them continue to support Suavelos.


Suavelos created a variety of pages on Facebook which presented themselves as centring on prosocial causes. Facebook’s algorithm helped direct users to these pages (T0092: Build Network, T0151.001: Social Media Platform, T0153.006: Content Recommendation Algorithm, T0151.003: Online Community Page, T0097.208: Social Cause Persona).<br>

Suavelos used these pages to generate traffic for their WordPress site (T0122: Direct Users to Alternative Platforms, T0152.003: Website Hosting Platform, T0152.004: Website Asset), which used accounts on a variety of online advertising platforms to host adverts (T0146: Account Asset, T0153.005: Online Advertising Platform). | +| [I00114 ‘Carol’s Journey’: What Facebook knew about how it radicalized users](../../generated_pages/incidents/I00114.md) | This report examines internal Facebook communications which reveal employees’ concerns about how the platform’s algorithm was recommending users join extremist conspiracy groups.<br>

In summer 2019, a new Facebook user named Carol Smith signed up for the platform, describing herself as a politically conservative mother from Wilmington, North Carolina. Smith’s account indicated an interest in politics, parenting and Christianity and followed a few of her favorite brands, including Fox News and then-President Donald Trump.

Though Smith had never expressed interest in conspiracy theories, in just two days Facebook was recommending she join groups dedicated to QAnon, a sprawling and baseless conspiracy theory and movement that claimed Trump was secretly saving the world from a cabal of pedophiles and Satanists.

Smith didn’t follow the recommended QAnon groups, but whatever algorithm Facebook was using to determine how she should engage with the platform pushed ahead just the same. Within one week, Smith’s feed was full of groups and pages that had violated Facebook’s own rules, including those against hate speech and disinformation.

Smith wasn’t a real person. A researcher employed by Facebook invented the account, along with those of other fictitious “test users” in 2019 and 2020, as part of an experiment in studying the platform’s role in misinforming and polarizing users through its recommendations systems.

That researcher said Smith’s Facebook experience was “a barrage of extreme, conspiratorial, and graphic content.”


Facebook’s Algorithm suggested users join groups which supported the QAnon movement (T0151.001: Social Media Platform, T0151.002: Online Community Group, T0153.006: Content Recommendation Algorithm, T0097.208: Social Cause Persona).

Further investigation by Facebook uncovered that its advertising platform had been used to promote QAnon narratives (T0146: Account Asset, T0114: Deliver Ads, T0153.005: Online Advertising Platform):

For years, company researchers had been running experiments like Carol Smith’s to gauge the platform’s hand in radicalizing users, according to the documents seen by NBC News.

This internal work repeatedly found that recommendation tools pushed users into extremist groups, findings that helped inform policy changes and tweaks to recommendations and news feed rankings. Those rankings are a tentacled, ever-evolving system widely known as “the algorithm” that pushes content to users. But the research at that time stopped well short of inspiring any movement to change the groups and pages themselves.

That reluctance was indicative of “a pattern at Facebook,” Haugen told reporters this month. “They want the shortest path between their current policies and any action.”

[...]

By summer 2020, Facebook was hosting thousands of private QAnon groups and pages, with millions of members and followers, according to an unreleased internal investigation.

A year after the FBI designated QAnon as a potential domestic terrorist threat in the wake of standoffs, alleged planned kidnappings, harassment campaigns and shootings, Facebook labeled QAnon a “Violence Inciting Conspiracy Network” and banned it from the platform, along with militias and other violent social movements. A small team working across several of Facebook’s departments found its platforms had hosted hundreds of ads on Facebook and Instagram worth thousands of dollars and millions of views, “praising, supporting, or representing” the conspiracy theory.

[...]

For many employees inside Facebook, the enforcement came too late, according to posts left on Workplace, the company’s internal message board.

“We’ve known for over a year now that our recommendation systems can very quickly lead users down the path to conspiracy theories and groups,” one integrity researcher, whose name had been redacted, wrote in a post announcing she was leaving the company. “This fringe group has grown to national prominence, with QAnon congressional candidates and QAnon hashtags and groups trending in the mainstream. We were willing to act only *after* things had spiraled into a dire state.”<br>

While Facebook’s ban initially appeared effective, a problem remained: The removal of groups and pages didn’t wipe out QAnon’s most extreme followers, who continued to organize on the platform.

“There was enough evidence to raise red flags in the expert community that Facebook and other platforms failed to address QAnon’s violent extremist dimension,” said Marc-André Argentino, a research fellow at King’s College London’s International Centre for the Study of Radicalisation, who has extensively studied QAnon.

Believers simply rebranded as anti-child-trafficking groups or migrated to other communities, including those around the anti-vaccine movement.

[...]

These conspiracy groups had become the fastest-growing groups on Facebook, according to the report, but Facebook wasn’t able to control their “meteoric growth,” the researchers wrote, “because we were looking at each entity individually, rather than as a cohesive movement.” A Facebook spokesperson told BuzzFeed News it took many steps to limit election misinformation but that it was unable to catch everything.
| +| [I00115 How Facebook shapes your feed](../../generated_pages/incidents/I00115.md) | This 2021 report by The Washington Post explains the mechanics of Facebook’s algorithm (T0151.001: Social Media Platform, T0153.006: Content Recommendation Algorithm):

In its early years, Facebook’s algorithm prioritized signals such as likes, clicks and comments to decide which posts to amplify. Publishers, brands and individual users soon learned how to craft posts and headlines designed to induce likes and clicks, giving rise to what came to be known as “clickbait.” By 2013, upstart publishers such as Upworthy and ViralNova were amassing tens of millions of readers with articles designed specifically to game Facebook’s news feed algorithm.

Facebook realized that users were growing wary of misleading teaser headlines, and the company recalibrated its algorithm in 2014 and 2015 to downgrade clickbait and focus on new metrics, such as the amount of time a user spent reading a story or watching a video, and incorporating surveys on what content users found most valuable. Around the same time, its executives identified video as a business priority, and used the algorithm to boost “native” videos shared directly to Facebook. By the mid-2010s, the news feed had tilted toward slick, professionally produced content, especially videos that would hold people’s attention.

In 2016, however, Facebook executives grew worried about a decline in “original sharing.” Users were spending so much time passively watching and reading that they weren’t interacting with each other as much. Young people in particular shifted their personal conversations to rivals such as Snapchat that offered more intimacy.

Once again, Facebook found its answer in the algorithm: It developed a new set of goal metrics that it called “meaningful social interactions,” designed to show users more posts from friends and family, and fewer from big publishers and brands. In particular, the algorithm began to give outsize weight to posts that sparked lots of comments and replies.

The downside of this approach was that the posts that sparked the most comments tended to be the ones that made people angry or offended them, the documents show. Facebook became an angrier, more polarizing place. It didn’t help that, starting in 2017, the algorithm had assigned reaction emoji — including the angry emoji — five times the weight of a simple “like,” according to company documents.

[...]

Internal documents show Facebook researchers found that, for the most politically oriented 1 million American users, nearly 90 percent of the content that Facebook shows them is about politics and social issues. Those groups also received the most misinformation, especially a set of users associated with mostly right-leaning content, who were shown one misinformation post out of every 40, according to a document from June 2020.

One takeaway is that Facebook’s algorithm isn’t a runaway train. The company may not directly control what any given user posts, but by choosing which types of posts will be seen, it sculpts the information landscape according to its business priorities. Some within the company would like to see Facebook use the algorithm to explicitly promote certain values, such as democracy and civil discourse. Others have suggested that it develop and prioritize new metrics that align with users’ values, as with a 2020 experiment in which the algorithm was trained to predict what posts they would find “good for the world” and “bad for the world,” and optimize for the former.
| + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0153.006: Content Recommendation Algorithm + +**Summary**: Many online platforms have Content Recommendation Algorithms, which promote content posted to the platform to users based on metrics the platform operators are trying to meet. Algorithms typically surface platform content which the user is likely to engage with, based on how they and other users have behaved on the platform. + **Tactic**: TA07 Select Channels and Affordances @@ -23,4 +73,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0153.007.md b/generated_pages/techniques/T0153.007.md index c497d3a..fa95669 100644 --- a/generated_pages/techniques/T0153.007.md +++ b/generated_pages/techniques/T0153.007.md @@ -2,6 +2,48 @@ **Summary**: Many online platforms allow users to contact other platform users via Direct Messaging; private messaging which can be initiated by a user with other platform users.

Examples include messaging on WhatsApp, Telegram, and Signal; direct messages (DMs) on Facebook or Instagram.

Some platforms’ Direct Messaging capabilities provide users with Encrypted Communication. +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0153 Digital Content Delivery Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0153.007: Direct Messaging + +**Summary**: Many online platforms allow users to contact other platform users via Direct Messaging; private messaging which can be initiated by a user with other platform users.
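A minimal sketch of the Encrypted Communication idea, assuming a shared symmetric key: real platforms such as WhatsApp and Signal use end-to-end protocols with key exchange and ratcheting, which this does not attempt to model. It uses the open-source Python `cryptography` library:

```python
# Minimal sketch of encrypted direct messaging with a shared symmetric key,
# using the "cryptography" library's Fernet (pip install cryptography).
# Real platforms use end-to-end protocols (e.g. the Signal protocol);
# this only shows that message content is unreadable without the key.
from cryptography.fernet import Fernet

shared_key = Fernet.generate_key()    # assume both users hold this key
channel = Fernet(shared_key)

ciphertext = channel.encrypt(b"direct message content")
print(ciphertext)                     # opaque to the platform or an observer
print(channel.decrypt(ciphertext))    # the recipient recovers the plaintext
```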

Examples include messaging on WhatsApp, Telegram, and Signal; direct messages (DMs) on Facebook or Instagram.

Some platforms’ Direct Messaging capabilities provide users with Encrypted Communication. + +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0153 Digital Content Delivery Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0153.007: Direct Messaging + +**Summary**: Many online platforms allow users to contact other platform users via Direct Messaging; private messaging which can be initiated by a user with other platform users.

Examples include messaging on WhatsApp, Telegram, and Signal; direct messages (DMs) on Facebook or Instagram.

Some platforms’ Direct Messaging capabilities provide users with Encrypted Communication. + **Tactic**: TA07 Select Channels and Affordances @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0153.md b/generated_pages/techniques/T0153.md index c5aaf26..8f3cc95 100644 --- a/generated_pages/techniques/T0153.md +++ b/generated_pages/techniques/T0153.md @@ -2,6 +2,48 @@ **Summary**: Digital Content Delivery Assets are assets which support the delivery of content to users online.

Sub-techniques categorised under Digital Content Delivery Assets can include Community Hosting and Content Hosting capabilities; however their nominal primary purpose is to support the delivery of content to users online. +**Tactic**: TA07 Select Channels and Affordances + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0153: Digital Content Delivery Asset + +**Summary**: Digital Content Delivery Assets are assets which support the delivery of content to users online.

Sub-techniques categorised under Digital Content Delivery Assets can include Community Hosting and Content Hosting capabilities; however their nominal primary purpose is to support the delivery of content to users online. + +**Tactic**: TA07 Select Channels and Affordances + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0153: Digital Content Delivery Asset + +**Summary**: Digital Content Delivery Assets are assets which support the delivery of content to users online.

Sub-techniques categorised under Digital Content Delivery Assets can include Community Hosting and Content Hosting capabilities; however their nominal primary purpose is to support the delivery of content to users online. + **Tactic**: TA07 Select Channels and Affordances @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0154.001.md b/generated_pages/techniques/T0154.001.md index a8454e7..f0ac663 100644 --- a/generated_pages/techniques/T0154.001.md +++ b/generated_pages/techniques/T0154.001.md @@ -2,6 +2,48 @@ **Summary**: OpenAI’s ChatGPT, Google’s Bard, Microsoft’s Turing-NLG, Google’s T5 (Text-to-Text Transfer Transformer), and Facebook’s BART are examples of AI LLM (Large Language Model) Platforms.

AI LLM Platforms are online platforms which allow people to create Accounts that they can use to interact with the platform’s AI Large Language Model, to produce text-based content.

LLMs can create hyper-realistic synthetic text that is both scalable and persuasive. LLMs can largely automate content production, reducing the overhead in persona creation, and generate culturally appropriate outputs that are less prone to exhibiting conspicuous signs of inauthenticity.

Some platforms implement protections against misuse of AI by their users. Threat Actors have been observed bypassing these protections using prompt injections, poisoning, jailbreaking, or integrity attacks. +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0154 Digital Content Creation Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0154.001: AI LLM Platform + +**Summary**: OpenAI’s ChatGPT, Google’s Bard, Microsoft’s Turing-NLG, Google’s T5 (Text-to-Text Transfer Transformer), and Facebook’s BART are examples of AI LLM (Large Language Model) Platforms.
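To illustrate why such protections can be bypassed, a deliberately naive sketch: a keyword denylist of the kind a platform might layer in front of a model, which a trivially reworded prompt slips past. The filter, terms, and prompts are invented for illustration; real platform safeguards are far more robust, and the summary above notes threat actors have still been observed evading them:

```python
# Deliberately naive sketch of a keyword-based misuse filter in front of
# an LLM. All terms and prompts here are invented for illustration; real
# platform safeguards are far more sophisticated (and still get bypassed).
BLOCKED_TERMS = {"disinformation campaign"}  # hypothetical denylist entry

def prompt_allowed(prompt: str) -> bool:
    """Return True if the prompt passes the naive keyword filter."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

print(prompt_allowed("Write a disinformation campaign about topic X"))   # False: blocked
print(prompt_allowed("Write persuasive posts arguing topic X is true"))  # True: same intent, not caught
```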

AI LLM Platforms are online platforms which allow people to create Accounts that they can use to interact with the platform’s AI Large Language Model, to produce text-based content.

LLMs can create hyper-realistic synthetic text that is both scalable and persuasive. LLMs can largely automate content production, reducing the overhead in persona creation, and generate culturally appropriate outputs that are less prone to exhibiting conspicuous signs of inauthenticity.

Some platforms implement protections against misuse of AI by their users. Threat Actors have been observed bypassing these protections using prompt injections, poisoning, jailbreaking, or integrity attacks. + +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0154 Digital Content Creation Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0154.001: AI LLM Platform + +**Summary**: OpenAI’s ChatGPT, Google’s Bard, Microsoft’s Turing-NLG, Google’s T5 (Text-to-Text Transfer Transformer), and Facebook’s BART are examples of AI LLM (Large Language Model) Platforms.

AI LLM Platforms are online platforms which allow people to create Accounts that they can use to interact with the platform’s AI Large Language Model, to produce text-based content.

LLMs can create hyper-realistic synthetic text that is both scalable and persuasive. LLMs can largely automate content production, reducing the overhead in persona creation, and generate culturally appropriate outputs that are less prone to exhibiting conspicuous signs of inauthenticity.

Some platforms implement protections against misuse of AI by their users. Threat Actors have been observed bypassing these protections using prompt injections, poisoning, jailbreaking, or integrity attacks. + **Tactic**: TA07 Select Channels and Affordances @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0154.002.md b/generated_pages/techniques/T0154.002.md index 125711e..ac10b7f 100644 --- a/generated_pages/techniques/T0154.002.md +++ b/generated_pages/techniques/T0154.002.md @@ -2,6 +2,55 @@ **Summary**: AI Media Platforms are online platforms that allow people to create Accounts which they can use to produce image, video, or audio content (also known as “deepfakes”) using the platform’s AI Software.

Midjourney, DALL-E, Stable Diffusion, and Adobe Firefly are examples of AI Media Platforms which allow users to Develop AI-Generated Images, AI-Generated Videos and AI-Generated Account Imagery.

Similarly, Reface, Zao, FaceApp, and Wombo are mobile apps which offer features for creating AI-Generated videos, gifs, or trending memes.

AI-Generated Audio such as text-to-speech and voice cloning has revolutionised the creation of synthetic voices that closely mimic human speech. AI Media Platforms such as Descript, Fliki, Murf AI, PlayHT, and Resemble AI can be used to generate synthetic voices.<br>
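At its most basic, the text-to-speech capability mentioned above can be sketched with the offline open-source `pyttsx3` library. This produces a generic synthetic voice only; it does not perform the voice cloning the platforms named here offer:

```python
# Minimal text-to-speech sketch using the offline "pyttsx3" library
# (pip install pyttsx3). This generates a generic synthetic voice; it does
# not clone a specific person's voice as dedicated AI Media Platforms can.
import pyttsx3

engine = pyttsx3.init()
engine.setProperty("rate", 160)  # speaking speed in words per minute
engine.say("This sentence is spoken by a synthetic voice.")
engine.runAndWait()              # blocks until playback finishes
```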

Some platforms implement protections against misuse of AI by their users. Threat Actors have been observed bypassing these protections using prompt injections, poisoning, jailbreaking, or integrity attacks. +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0154 Digital Content Creation Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00099 More Women Are Facing The Reality Of Deepfakes, And They’re Ruining Lives](../../generated_pages/incidents/I00099.md) | Last October, British writer Helen was alerted to a series of deepfakes on a porn site that appear to show her engaging in extreme acts of sexual violence. That night, the images replayed themselves over and over in horrific nightmares and she was gripped by an all-consuming feeling of dread. “It’s like you’re in a tunnel, going further and further into this enclosed space, where there’s no light,” she tells Vogue. This feeling pervaded Helen’s life. Whenever she left the house, she felt exposed. On runs, she experienced panic attacks. Helen still has no idea who did this to her.

[...]

Amnesty International has been investigating the effects of abuse against women on Twitter, specifically in relation to how they act online thereafter. According to the charity, abuse creates what they’ve called “the silencing effect” whereby women feel discouraged from participating online. The same can be said for victims of deepfakes.

Helen has never been afraid to use her voice, writing deeply personal accounts of postnatal depression. But the deepfakes created a feeling of shame so strong she thought she’d be carrying this “dirty secret” forever, and so she stopped writing.

[...]

Meanwhile, deepfake ‘communities’ are thriving. There are now dedicated sites, user-friendly apps and organised ‘request’ procedures. Some sites allow you to commission custom deepfakes for £25, while on others you can upload a woman’s image and a bot will strip her naked.

“This violation is not something that should be normalised,” says Gibi, an ASMR artist with 3.13 million YouTube subscribers. Gibi has given up trying to keep tabs on the deepfakes of her. For Gibi, the most egregious part of all of this is the fact that people are “profiting off my face, doing something that I didn’t consent to, like my suffering is your livelihood.” She’s even been approached by a company offering to remove the deepfakes — for £500 a video. This has to end. But how?


A website hosting pornographic content provided users the ability to create deepfake content (T0154.002: AI Media Platform, T0086.002: Develop AI-Generated Images (Deepfakes)).

Another website enabled users to commission custom deepfakes (T0152.004: Website Asset, T0148.004: Payment Processing Capability, T0086.002: Develop AI-Generated Images (Deepfakes), T0155.005: Paid Access Asset). | +| [I00100 Why ThisPersonDoesNotExist (and its copycats) need to be restricted](../../generated_pages/incidents/I00100.md) | You might have heard about the recent viral sensation, ThisPersonDoesNotExist.com, a website, launched two weeks ago, that uses Nvidia’s publicly available artificial intelligence technology to draw an invented, photo-realistic human being with each refresh. The tech is impressive and artistically evocative. It’s also irresponsible and needs to be restricted immediately.

[...]

Prior to this technology, scammers faced three major risks when using fake photos. Each of these risks had the potential to put them out of business, or in jail.<br>

Risk #1: Someone recognizes the photo. While the odds of this are long-shot, it does happen.

Risk #2: Someone reverse image searches the photo with a service like TinEye or Google Image Search and finds that it’s been posted elsewhere. Reverse image search is one of the top anti-fraud measures recommended by consumer protection advocates.

Risk #3: If the crime is successful, law enforcement uses the fake photo to figure out the scammer’s identity after the fact. Perhaps the scammer used an old classmate’s photo. Perhaps their personal account follows the Instagram member they pilfered. And so on: people make mistakes.

The problem with AI-generated photos is that they carry none of these risks. No one will recognize a human who’s never existed before. Google Image Search will return 0 results, possibly instilling a false sense of security in the searcher. And AI-generated photos don’t give law enforcement much to work with.
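The reverse-image-search defence discussed in Risk #2 rests on matching a query image against previously indexed images. Below is a sketch of that matching idea using perceptual hashing with the open-source `imagehash` library; actual services such as TinEye and Google Images use proprietary pipelines, and the file paths and threshold are placeholders:

```python
# Sketch of the matching idea behind reverse image search, via perceptual
# hashing with the open-source "imagehash" library
# (pip install imagehash pillow). Paths and the distance threshold are
# illustrative assumptions; real services use proprietary pipelines.
from PIL import Image
import imagehash

# An index of previously published photos (placeholder path).
index = {imagehash.phash(Image.open("known_photo.jpg")): "known_photo.jpg"}

query = imagehash.phash(Image.open("suspect_profile_photo.jpg"))
matches = [name for h, name in index.items() if query - h <= 8]  # Hamming distance
print(matches or "no matches")  # a never-published AI face returns nothing
```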


ThisPersonDoesNotExist is an online platform which, when visited, produces AI-generated images of people’s faces (T0146.006: Open Access Platform, T0154.002: AI Media Platform, T0086.002: Develop AI-Generated Images (Deepfakes)). | +| [I00103 The racist AI deepfake that fooled and divided a community](../../generated_pages/incidents/I00103.md) | “I seriously don't understand why I have to constantly put up with these dumbasses here every day.”<br>

So began what appeared to be a long tirade from the principal of Pikesville High School, punctuated with racist, antisemitic and offensive tropes. It sounded like it had been secretly recorded.

The speaker went on to bemoan “ungrateful black kids” and Jewish people in the community.

The clip, first posted in [January 2024], went viral nationally. But it really struck a nerve in the peaceful, leafy suburb of Pikesville, which has large black and Jewish populations, and in the nearby city of Baltimore, Maryland. Principal Eric Eiswert was put on paid administrative leave pending an investigation.

[...]

But what those sharing the clip didn’t realise at the time was that another bombshell was about to drop: the clip was an AI-generated fake.

[...]

[In April 2024], Baltimore Police Chief Robert McCullough confirmed they now had “conclusive evidence that the recording was not authentic”.

And they believed they knew who made the fake.

Police charged 31-year-old Dazhon Darien, the school’s athletics director, with several counts related to the fake video. Charges included theft, retaliating against a witness and stalking.

He was arrested at the airport, where police say he was planning to fly to Houston, Texas.

Police say that Mr Darien had been under investigation by Principal Eiswert over an alleged theft of $1,916 (£1,460) from the school. They also allege there had been “work performance challenges” and his contract was likely not to be renewed.

Their theory was that by creating the deepfake recording, he hoped to discredit the principal before he could be fired.

Investigators say they traced an email used to send the original video to a server connected to Mr Darien, and allege that he used Baltimore County Public Schools' computer network to access AI tools. He is due to stand trial in December 2024.


By associating Mr Darien with the server used to email the original AI-generated audio, investigators link Darien to the fabricated content (T0149.005: Server Asset, T0088.001: Develop AI-Generated Audio (Deepfakes)). They also assert that Darien used computers owned by the school to access platforms used to generate the audio (T0146: Account Asset, T0154.002: AI Media Platform). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0154.002: AI Media Platform + +**Summary**: AI Media Platforms are online platforms that allow people to create Accounts which they can use to produce image, video, or audio content (also known as “deepfakes”) using the platform’s AI Software.

Midjourney, DALL-E, Stable Diffusion, and Adobe Firefly are examples of AI Media Platforms which allow users to Develop AI-Generated Images, AI-Generated Videos and AI-Generated Account Imagery.

Similarly, Reface, Zao, FaceApp, and Wombo are mobile apps which offer features for creating AI-Generated videos, gifs, or trending memes.

AI-Generated Audio technologies such as text-to-speech and voice cloning have revolutionised the creation of synthetic voices that closely mimic human speech. AI Media Platforms such as Descript, Fliki, Murf AI, PlayHT, and Resemble AI can be used to generate synthetic voices.
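
To illustrate how little code speech synthesis now takes, the sketch below uses pyttsx3, an open-source offline text-to-speech library for Python. pyttsx3 is conventional TTS rather than an AI voice-cloning product like the platforms named above, so treat it as a stand-in for the general workflow, not as an example of any listed platform’s API.

```python
# Minimal offline text-to-speech sketch using the pyttsx3 library.
# Conventional TTS, shown only to illustrate the programmatic workflow;
# the AI Media Platforms named above produce far more realistic voices.
import pyttsx3

engine = pyttsx3.init()
engine.setProperty("rate", 160)  # speaking rate in words per minute

text = "Synthetic voices are now trivial to generate programmatically."
engine.say(text)                                  # speak through the sound card
engine.save_to_file(text, "synthetic_voice.wav")  # also render to an audio file
engine.runAndWait()                               # process the queued commands
```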

Some platforms implement protections against misuse of AI by their users. Threat Actors have been observed bypassing these protections using prompt injections, poisoning, jailbreaking, or integrity attacks. + +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0154 Digital Content Creation Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00096 China ramps up use of AI misinformation](../../generated_pages/incidents/I00096.md) | The Microsoft Threat Analysis Centre (MTAC) published a report documenting the use of AI by pro-Chinese threat actors:

On 13 January, Spamouflage [(a Pro-Chinese Communist Party actor)] posted audio clips to YouTube of independent candidate [for Taiwan’s Jan 2024 presidential election] Terry Gou – who also founded electronics giant Foxconn – in which Gou endorsed another candidate in the race. This clip was almost certainly AI-generated, and it was swiftly removed by YouTube. A fake letter purporting to be from Gou, endorsing the same candidate, had already circulated – Gou had of course made no such endorsement.

Here Spamouflage used an account on YouTube to post AI-generated audio impersonating an electoral candidate (T0146: Account Asset, T0152.006: Video Platform, T0115: Post Content, T0088.001: Develop AI-Generated Audio (Deepfakes), T0143.003: Impersonated Persona, T0097.110: Party Official Persona).
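
For analysts triaging clips like this, a common first step is extracting the spectral features that audio-deepfake detectors consume. The sketch below uses the real librosa library for feature extraction; the downstream decision rule is a hypothetical placeholder, since the source names no specific detection model.

```python
# Sketch of a first triage step for suspected synthetic audio: summarise a
# clip as MFCC statistics. librosa is a real library; the decision rule
# below is a hypothetical placeholder, not a named or trained detector.
import librosa
import numpy as np

def mfcc_fingerprint(audio_path: str, sr: int = 16000) -> np.ndarray:
    """Load a clip and summarise it as a fixed-length MFCC feature vector."""
    samples, rate = librosa.load(audio_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=samples, sr=rate, n_mfcc=20)
    # Collapse the time axis so clips of any length can be compared directly.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def looks_synthetic(features: np.ndarray) -> bool:
    # Placeholder decision rule; a real workflow would feed the features to
    # a trained classifier rather than use this arbitrary threshold.
    return bool(features.std() < 1.0)
```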

Spamouflage also exploited AI-powered video platform CapCut – which is owned by TikTok backers ByteDance – to generate fake news anchors which were used in a variety of campaigns targeting the various presidential candidates in Taiwan.

Spamouflage created accounts on CapCut, which it used to create AI-generated videos of fabricated news anchors (T0146: Account Asset, T0154.002: AI Media Platform, T0087.001: Develop AI-Generated Video (Deepfakes), T0143.002: Fabricated Persona, T0097.102: Journalist Persona). | +| [I00099 More Women Are Facing The Reality Of Deepfakes, And They’re Ruining Lives](../../generated_pages/incidents/I00099.md) | Last October, British writer Helen was alerted to a series of deepfakes on a porn site that appear to show her engaging in extreme acts of sexual violence. That night, the images replayed themselves over and over in horrific nightmares and she was gripped by an all-consuming feeling of dread. “It’s like you’re in a tunnel, going further and further into this enclosed space, where there’s no light,” she tells Vogue. This feeling pervaded Helen’s life. Whenever she left the house, she felt exposed. On runs, she experienced panic attacks. Helen still has no idea who did this to her.

[...]

Amnesty International has been investigating the effects of abuse against women on Twitter, specifically in relation to how they act online thereafter. According to the charity, abuse creates what they’ve called “the silencing effect” whereby women feel discouraged from participating online. The same can be said for victims of deepfakes.

Helen has never been afraid to use her voice, writing deeply personal accounts of postnatal depression. But the deepfakes created a feeling of shame so strong she thought she’d be carrying this “dirty secret” forever, and so she stopped writing.

[...]

Meanwhile, deepfake ‘communities’ are thriving. There are now dedicated sites, user-friendly apps and organised ‘request’ procedures. Some sites allow you to commission custom deepfakes for £25, while on others you can upload a woman’s image and a bot will strip her naked.

“This violation is not something that should be normalised,” says Gibi, an ASMR artist with 3.13 million YouTube subscribers. Gibi has given up trying to keep tabs on the deepfakes of her. For Gibi, the most egregious part of all of this is the fact that people are “profiting off my face, doing something that I didn’t consent to, like my suffering is your livelihood.” She’s even been approached by a company offering to remove the deepfakes — for £500 a video. This has to end. But how?


A website hosting pornographic content provided users the ability to create deepfake content (T0154.002: AI Media Platform, T0086.002: Develop AI-Generated Images (Deepfakes)).

Another website enabled users to commission custom deepfakes (T0152.004: Website Asset, T0148.004: Payment Processing Capability, T0086.002: Develop AI-Generated Images (Deepfakes), T0155.005: Paid Access Asset). | +| [I00100 Why ThisPersonDoesNotExist (and its copycats) need to be restricted](../../generated_pages/incidents/I00100.md) | You might have heard about the recent viral sensation, ThisPersonDoesNotExist.com, a website, launched two weeks ago, that uses Nvidia’s publicly available artificial intelligence technology to draw an invented, photo-realistic human being with each refresh. The tech is impressive and artistically evocative. It’s also irresponsible and needs to be restricted immediately.

[...]

Prior to this technology, scammers faced three major risks when using fake photos. Each of these risks had the potential to put them out of business, or in jail.

Risk #1: Someone recognizes the photo. While the odds of this are long-shot, it does happen.

Risk #2: Someone reverse image searches the photo with a service like TinEye or Google Image Search and finds that it’s been posted elsewhere. Reverse image search is one of the top anti-fraud measures recommended by consumer protection advocates.

Risk #3: If the crime is successful, law enforcement uses the fake photo to figure out the scammer’s identity after the fact. Perhaps the scammer used an old classmate’s photo. Perhaps their personal account follows the Instagram member they pilfered. And so on: people make mistakes.

The problem with AI-generated photos is that they carry none of these risks. No one will recognize a human who’s never existed before. Google Image Search will return 0 results, possibly instilling a false sense of security in the searcher. And AI-generated photos don’t give law enforcement much to work with.


ThisPersonDoesNotExist is an online platform which, when visited, produces AI-generated images of people’s faces (T0146.006: Open Access Platform, T0154.002: AI Media Platform, T0086.002: Develop AI-Generated Images (Deepfakes)). | +| [I00103 The racist AI deepfake that fooled and divided a community](../../generated_pages/incidents/I00103.md) | “I seriously don't understand why I have to constantly put up with these dumbasses here every day.”

So began what appeared to be a long tirade from the principal of Pikesville High School, punctuated with racist, antisemitic and offensive tropes. It sounded like it had been secretly recorded.

The speaker went on to bemoan “ungrateful black kids” and Jewish people in the community.

The clip, first posted in [January 2024], went viral nationally. But it really struck a nerve in the peaceful, leafy suburb of Pikesville, which has large black and Jewish populations, and in the nearby city of Baltimore, Maryland. Principal Eric Eiswert was put on paid administrative leave pending an investigation.

[...]

But what those sharing the clip didn’t realise at the time was that another bombshell was about to drop: the clip was an AI-generated fake.

[...]

[In April 2024], Baltimore Police Chief Robert McCullough confirmed they now had “conclusive evidence that the recording was not authentic”.

And they believed they knew who made the fake.

Police charged 31-year-old Dazhon Darien, the school’s athletics director, with several counts related to the fake video. Charges included theft, retaliating against a witness and stalking.

He was arrested at the airport, where police say he was planning to fly to Houston, Texas.

Police say that Mr Darien had been under investigation by Principal Eiswert over an alleged theft of $1,916 (£1,460) from the school. They also allege there had been “work performance challenges” and his contract was likely not to be renewed.

Their theory was that by creating the deepfake recording, he hoped to discredit the principal before he could be fired.

Investigators say they traced an email used to send the original video to a server connected to Mr Darien, and allege that he used Baltimore County Public Schools' computer network to access AI tools. He is due to stand trial in December 2024.


By associating Mr Darien with the server used to email the original AI-generated audio, investigators link Darien to the fabricated content (T0149.005: Server Asset, T0088.001: Develop AI-Generated Audio (Deepfakes)). They also assert that Darien used computers owned by the school to access platforms used to generate the audio (T0146: Account Asset, T0154.002: AI Media Platform). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0154.002: AI Media Platform + +**Summary**: AI Media Platforms are online platforms that allow people to create Accounts which they can use to produce image, video, or audio content (also known as “deepfakes”) using the platform’s AI Software.

Midjourney, DALL-E, Stable Diffusion, and Adobe Firefly are examples of AI Media Platforms which allow users to Develop AI-Generated Images, AI-Generated Videos and AI-Generated Account Imagery.

Similarly, Reface, Zao, FaceApp, and Wombo are mobile apps which offer features for creating AI-Generated videos, gifs, or trending memes.

AI-Generated Audio technologies such as text-to-speech and voice cloning have revolutionised the creation of synthetic voices that closely mimic human speech. AI Media Platforms such as Descript, Fliki, Murf AI, PlayHT, and Resemble AI can be used to generate synthetic voices.

Some platforms implement protections against misuse of AI by their users. Threat Actors have been observed bypassing these protections using prompt injections, poisoning, jailbreaking, or integrity attacks. + **Tactic**: TA07 Select Channels and Affordances @@ -23,4 +72,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0154.md b/generated_pages/techniques/T0154.md index 333adbe..7d86927 100644 --- a/generated_pages/techniques/T0154.md +++ b/generated_pages/techniques/T0154.md @@ -2,6 +2,48 @@ **Summary**: Digital Content Creation Assets are Platforms or Software which help actors produce content for publication online. +**Tactic**: TA07 Select Channels and Affordances + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0154: Digital Content Creation Asset + +**Summary**: Digital Content Creation Assets are Platforms or Software which help actors produce content for publication online. + +**Tactic**: TA07 Select Channels and Affordances + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0154: Digital Content Creation Asset + +**Summary**: Digital Content Creation Assets are Platforms or Software which help actors produce content for publication online. + **Tactic**: TA07 Select Channels and Affordances @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0155.001.md b/generated_pages/techniques/T0155.001.md index f348556..0d87799 100644 --- a/generated_pages/techniques/T0155.001.md +++ b/generated_pages/techniques/T0155.001.md @@ -2,6 +2,48 @@ **Summary**: A Password Gated Asset is an online asset which requires a password to gain access.

Examples include password protected Servers set up to be a File Hosting Platform, or password protected Community Sub-Forums. +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0155 Gated Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0155.001: Password Gated Asset + +**Summary**: A Password Gated Asset is an online asset which requires a password to gain access.

Examples include password protected Servers set up to be a File Hosting Platform, or password protected Community Sub-Forums. + +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0155 Gated Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0155.001: Password Gated Asset + +**Summary**: A Password Gated Asset is an online asset which requires a password to gain access.

Examples include password protected Servers set up to be a File Hosting Platform, or password protected Community Sub-Forums. + **Tactic**: TA07 Select Channels and Affordances @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0155.002.md b/generated_pages/techniques/T0155.002.md index 2ddb69d..aa75b3b 100644 --- a/generated_pages/techniques/T0155.002.md +++ b/generated_pages/techniques/T0155.002.md @@ -2,6 +2,48 @@ **Summary**: An Invite Gated Asset is an online asset which requires an existing user to invite other users for access to the asset.

Examples include Chat Groups in which Administrator Accounts are able to add or remove users, or File Hosting Platforms which allow users to invite other users to access their files. +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0155 Gated Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0155.002: Invite Gated Asset + +**Summary**: An Invite Gated Asset is an online asset which requires an existing user to invite other users for access to the asset.

Examples include Chat Groups in which Administrator Accounts are able to add or remove users, or File Hosting Platforms which allow users to invite other users to access their files. + +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0155 Gated Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0155.002: Invite Gated Asset + +**Summary**: An Invite Gated Asset is an online asset which requires an existing user to invite other users for access to the asset.

Examples include Chat Groups in which Administrator Accounts are able to add or remove users, or File Hosting Platforms which allow users to invite other users to access their files. + **Tactic**: TA07 Select Channels and Affordances @@ -19,4 +61,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0155.003.md b/generated_pages/techniques/T0155.003.md index 0664b9e..f63e46d 100644 --- a/generated_pages/techniques/T0155.003.md +++ b/generated_pages/techniques/T0155.003.md @@ -2,6 +2,50 @@ **Summary**: An Approval Gated Asset is an online asset which requires approval from Administrator Accounts for access to the asset.

Examples include Online Community Groups on Facebook, which can be configured to require questions and approval before access, and Accounts on Social Media Platforms such as Instagram, which allow users to set their accounts as visible to approved friends only. +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0155 Gated Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00105 Gaming the System: The Use of Gaming-Adjacent Communication, Game and Mod Platforms by Extremist Actors](../../generated_pages/incidents/I00105.md) | In this report, researchers look at online platforms commonly used by people who play videogames, looking at how these platforms can contribute to radicalisation of gamers:

Gamer Uprising Forums (GUF) [is an online discussion platform using the classic forum structure] aimed directly at gamers. It is run by US Neo-Nazi Andrew Anglin and explicitly targets politically right-wing gamers. This forum mainly includes antisemitic, sexist, and racist topics, but also posts on related issues such as esotericism, conspiracy narratives, pro-Russian propaganda, alternative medicine, Christian religion, content related to the incel- and manosphere, lists of criminal offences committed by non-white people, links to right-wing news sites, homophobia and trans-hostility, troll guides, anti-leftism, ableism and much more. Most noticeable were the high number of antisemitic references. For example, there is a thread with hundreds of machine-generated images, most of which feature openly antisemitic content and popular antisemitic references. Many users chose explicitly antisemitic avatars. Some of the usernames also provide clues to the users’ ideologies and profiles feature swastikas as a type of progress bar and indicator of the user’s activity in the forum.

The GUF’s front page contains an overview of the forum, user statistics, and so-called “announcements”. In addition to advice-like references, these feature various expressions of hateful ideologies. At the time of the exploration, the following could be read there: “Jews are the problem!”, “Women should be raped”, “The Jews are going to be required to return stolen property”, “Immigrants will have to be physically removed”, “Console gaming is for n******” and “Anger is a womanly emotion”. New users have to prove themselves in an area for newcomers referred to in imageboard slang as the “Newfag Barn”. Only when the newcomers’ posts have received a substantial number of likes from established users, are they allowed to post in other parts of the forum. It can be assumed that this will also lead to competitions to outdo each other in posting extreme content. However, it is always possible to view all posts and content on the site. In any case, it can be assumed that the platform hardly addresses milieus that are not already radicalised or at risk of radicalisation and is therefore deemed relevant for radicalisation research. However, the number of registered users is low (typical for radicalised milieus) and, hence, the platform may only be of interest when studying a small group of highly radicalised individuals.


Gamer Uprising Forum is a legacy online forum, with access gated behind approval of existing platform users (T0155.003: Approval Gated Asset, T0151.009: Legacy Online Forum Platform). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0155.003: Approval Gated Asset + +**Summary**: An Approval Gated Asset is an online asset which requires approval from Administrator Accounts for access to the asset.

Examples include Online Community Groups on Facebook, which can be configured to require questions and approval before access, and Accounts on Social Media Platforms such as Instagram, which allow users to set their accounts as visible to approved friends only. + +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0155 Gated Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00105 Gaming the System: The Use of Gaming-Adjacent Communication, Game and Mod Platforms by Extremist Actors](../../generated_pages/incidents/I00105.md) | In this report, researchers look at online platforms commonly used by people who play videogames, looking at how these platforms can contribute to radicalisation of gamers:

Gamer Uprising Forums (GUF) [is an online discussion platform using the classic forum structure] aimed directly at gamers. It is run by US Neo-Nazi Andrew Anglin and explicitly targets politically right-wing gamers. This forum mainly includes antisemitic, sexist, and racist topics, but also posts on related issues such as esotericism, conspiracy narratives, pro-Russian propaganda, alternative medicine, Christian religion, content related to the incel- and manosphere, lists of criminal offences committed by non-white people, links to right-wing news sites, homophobia and trans-hostility, troll guides, anti-leftism, ableism and much more. Most noticeable were the high number of antisemitic references. For example, there is a thread with hundreds of machine-generated images, most of which feature openly antisemitic content and popular antisemitic references. Many users chose explicitly antisemitic avatars. Some of the usernames also provide clues to the users’ ideologies and profiles feature swastikas as a type of progress bar and indicator of the user’s activity in the forum.

The GUF’s front page contains an overview of the forum, user statistics, and so-called “announcements”. In addition to advice-like references, these feature various expressions of hateful ideologies. At the time of the exploration, the following could be read there: “Jews are the problem!”, “Women should be raped”, “The Jews are going to be required to return stolen property”, “Immigrants will have to be physically removed”, “Console gaming is for n******” and “Anger is a womanly emotion”. New users have to prove themselves in an area for newcomers referred to in imageboard slang as the “Newfag Barn”. Only when the newcomers’ posts have received a substantial number of likes from established users, are they allowed to post in other parts of the forum. It can be assumed that this will also lead to competitions to outdo each other in posting extreme content. However, it is always possible to view all posts and content on the site. In any case, it can be assumed that the platform hardly addresses milieus that are not already radicalised or at risk of radicalisation and is therefore deemed relevant for radicalisation research. However, the number of registered users is low (typical for radicalised milieus) and, hence, the platform may only be of interest when studying a small group of highly radicalised individuals.


Gamer Uprising Forum is a legacy online forum, with access gated behind approval of existing platform users (T0155.003: Approval Gated Asset, T0151.009: Legacy Online Forum Platform). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0155.003: Approval Gated Asset + +**Summary**: An Approval Gated Asset is an online asset which requires approval from Administrator Accounts for access to the asset.

Examples include Online Community Groups on Facebook, which can be configured to require questions and approval before access, and Accounts on Social Media Platforms such as Instagram, which allow users to set their accounts as visible to approved friends only. + **Tactic**: TA07 Select Channels and Affordances @@ -20,4 +64,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0155.004.md b/generated_pages/techniques/T0155.004.md index f4a5c79..563494b 100644 --- a/generated_pages/techniques/T0155.004.md +++ b/generated_pages/techniques/T0155.004.md @@ -2,6 +2,50 @@ **Summary**: A Geoblocked Asset is an online asset which cannot be accessed in specific geographical locations.

Assets can be Geoblocked by choice of the platform, or can have Geoblocking mandated by regulators, and enforced through Internet Service Providers. +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0155 Gated Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00072 Behind the Dutch Terror Threat Video: The St. Petersburg "Troll Factory" Connection](../../generated_pages/incidents/I00072.md) | “The creator of Geopolitika[.]ru is Aleksandr Dugin, who was sanctioned by the United States Department of Treasury in 2015 for his role in the Eurasian Youth Union “for being responsible for or complicit in actions or policies that threaten the peace, security, stability, or sovereignty or territorial integrity of Ukraine.”

[...]

“Currently, the website geopolika[.]ru redirects directly to another partner website, Katehon.

“Katehon poses itself as a think tank focused on geopolitics in an English edition of its website. In contrast, in Russian, it states its aim to develop “ideological, political, diplomatic, economic and military strategy for Russia of the future” with a special role of religion. The president of Katehon’s supervisory board is Konstantin Malofeev, a Russian millionaire with connections to the Russian orthodox church and presidential administration, who founded Tsargrad TV, a known source of disinformation. Malofeev was sanctioned by the U.S. Department of Treasury and the European Union in 2014 for material support and financial backing of Russian-backed separatists in eastern Ukraine. Another known figure from the board is Sergei Glaziev, former advisor to Putin in 2012–2019. Dugin is also on the board in the Russian edition of the website, whereas he is omitted in English.”


In this example, a domain managed by an actor previously sanctioned by the US Department of the Treasury has been reconfigured to redirect to another website, Katehon (T0149.004: Redirecting Domain Asset, T0150.004: Repurposed Asset).
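
Redirecting Domain Assets like this can be documented with very little code. The sketch below uses the real requests library to capture a redirect without following it; the URL is a hypothetical placeholder rather than the defanged domain quoted above.

```python
# Minimal sketch of how an analyst can document a redirecting domain:
# request it without following redirects and record where it points.
# The URL below is a hypothetical placeholder, not the domain discussed above.
import requests

url = "https://redirecting-domain.example/"  # hypothetical placeholder
response = requests.get(url, allow_redirects=False, timeout=30)
print(response.status_code)              # 301/302/307/308 indicate a redirect
print(response.headers.get("Location"))  # the destination the domain now serves
```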

Katehon presents itself as a geopolitical think tank in English, but does not maintain this persona when presenting itself to a Russian speaking audience (T0097.204: Think Tank Persona, T0152.004: Website Asset, T0155.004: Geoblocked Asset). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0155.004: Geoblocked Asset + +**Summary**: A Geoblocked Asset is an online asset which cannot be accessed in specific geographical locations.

Assets can be Geoblocked by choice of the platform, or can have Geoblocking mandated by regulators, and enforced through Internet Service Providers. + +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0155 Gated Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00072 Behind the Dutch Terror Threat Video: The St. Petersburg "Troll Factory" Connection](../../generated_pages/incidents/I00072.md) | “The creator of Geopolitika[.]ru is Aleksandr Dugin, who was sanctioned by the United States Department of Treasury in 2015 for his role in the Eurasian Youth Union “for being responsible for or complicit in actions or policies that threaten the peace, security, stability, or sovereignty or territorial integrity of Ukraine.”

[...]

“Currently, the website geopolika[.]ru redirects directly to another partner website, Katehon.

“Katehon poses itself as a think tank focused on geopolitics in an English edition of its website. In contrast, in Russian, it states its aim to develop “ideological, political, diplomatic, economic and military strategy for Russia of the future” with a special role of religion. The president of Katehon’s supervisory board is Konstantin Malofeev, a Russian millionaire with connections to the Russian orthodox church and presidential administration, who founded Tsargrad TV, a known source of disinformation. Malofeev was sanctioned by the U.S. Department of Treasury and the European Union in 2014 for material support and financial backing of Russian-backed separatists in eastern Ukraine. Another known figure from the board is Sergei Glaziev, former advisor to Putin in 2012–2019. Dugin is also on the board in the Russian edition of the website, whereas he is omitted in English.”


In this example, a domain managed by an actor previously sanctioned by the US Department of the Treasury has been reconfigured to redirect to another website, Katehon (T0149.004: Redirecting Domain Asset, T0150.004: Repurposed Asset).

Katehon presents itself as a geopolitical think tank in English, but does not maintain this persona when presenting itself to a Russian speaking audience (T0097.204: Think Tank Persona, T0152.004: Website Asset, T0155.004: Geoblocked Asset). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0155.004: Geoblocked Asset + +**Summary**: A Geoblocked Asset is an online asset which cannot be accessed in specific geographical locations.

Assets can be Geoblocked by choice of the platform, or can have Geoblocking mandated by regulators, and enforced through Internet Service Providers. + **Tactic**: TA07 Select Channels and Affordances @@ -20,4 +64,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0155.005.md b/generated_pages/techniques/T0155.005.md index 45e95ea..f88a45a 100644 --- a/generated_pages/techniques/T0155.005.md +++ b/generated_pages/techniques/T0155.005.md @@ -2,6 +2,49 @@ **Summary**: A Paid Access Asset is an online asset which requires a single payment for permanent access to the asset. +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0155 Gated Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0155.005: Paid Access Asset + +**Summary**: A Paid Access Asset is an online asset which requires a single payment for permanent access to the asset. + +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0155 Gated Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00099 More Women Are Facing The Reality Of Deepfakes, And They’re Ruining Lives](../../generated_pages/incidents/I00099.md) | Last October, British writer Helen was alerted to a series of deepfakes on a porn site that appear to show her engaging in extreme acts of sexual violence. That night, the images replayed themselves over and over in horrific nightmares and she was gripped by an all-consuming feeling of dread. “It’s like you’re in a tunnel, going further and further into this enclosed space, where there’s no light,” she tells Vogue. This feeling pervaded Helen’s life. Whenever she left the house, she felt exposed. On runs, she experienced panic attacks. Helen still has no idea who did this to her.

[...]

Amnesty International has been investigating the effects of abuse against women on Twitter, specifically in relation to how they act online thereafter. According to the charity, abuse creates what they’ve called “the silencing effect” whereby women feel discouraged from participating online. The same can be said for victims of deepfakes.

Helen has never been afraid to use her voice, writing deeply personal accounts of postnatal depression. But the deepfakes created a feeling of shame so strong she thought she’d be carrying this “dirty secret” forever, and so she stopped writing.

[...]

Meanwhile, deepfake ‘communities’ are thriving. There are now dedicated sites, user-friendly apps and organised ‘request’ procedures. Some sites allow you to commission custom deepfakes for £25, while on others you can upload a woman’s image and a bot will strip her naked.

“This violation is not something that should be normalised,” says Gibi, an ASMR artist with 3.13 million YouTube subscribers. Gibi has given up trying to keep tabs on the deepfakes of her. For Gibi, the most egregious part of all of this is the fact that people are “profiting off my face, doing something that I didn’t consent to, like my suffering is your livelihood.” She’s even been approached by a company offering to remove the deepfakes — for £500 a video. This has to end. But how?


A website hosting pornographic content provided users the ability to create deepfake content (T0154.002: AI Media Platform, T0086.002: Develop AI-Generated Images (Deepfakes)).

Another website enabled users to commission custom deepfakes (T0152.004: Website Asset, T0148.004: Payment Processing Capability, T0086.002: Develop AI-Generated Images (Deepfakes), T0155.005: Paid Access Asset). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0155.005: Paid Access Asset + +**Summary**: A Paid Access Asset is an online asset which requires a single payment for permanent access to the asset. + **Tactic**: TA07 Select Channels and Affordances @@ -20,4 +63,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0155.006.md b/generated_pages/techniques/T0155.006.md index 5dde162..72f978e 100644 --- a/generated_pages/techniques/T0155.006.md +++ b/generated_pages/techniques/T0155.006.md @@ -2,6 +2,49 @@ **Summary**: A Subscription Access Asset is an online asset which requires a continued subscription for access to the asset.

Examples include the Blogging Platform Substack, which affords Blogs hosted on their platform the ability to produce subscriber-only posts, and the Subscription Service Platform Patreon. +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0155 Gated Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0155.006: Subscription Access Asset + +**Summary**: A Subscription Access Asset is an online asset which requires a continued subscription for access to the asset.

Examples include the Blogging Platform Substack, which affords Blogs hosted on their platform the ability to produce subscriber-only posts, and the Subscription Service Platform Patreon. + +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0155 Gated Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00111 Patreon Is Bankrolling Climate Change Deniers While We All Burn](../../generated_pages/incidents/I00111.md) | In this article VICE News discusses a report produced by Advance Democracy on people who use Patreon to spread the false claim that an impending ice age will reverse the harms of the ongoing climate crisis:

“The spread of climate misinformation is prolific on social media, as well as on sites like Patreon, where users are actually financially compensated through the platform for spreading falsehoods,” Daniel Jones, president of Advance Democracy, told VICE News.

“Companies hosting and promoting climate misinformation have a responsibility to take action to reduce dangerous misinformation, as falsehoods about climate science are every bit as dangerous as lies about vaccinations and disinformation about our elections.”

Patreon did not respond to VICE News’ request for comment on the report’s findings.

One of the biggest accounts spreading climate conspiracies is ADAPT 2030, which is run by David DuByne, who has 1,100 followers on Patreon. He is currently making over $3,500 every month from his subscribers.

[The science DuByne relies on does not support his hypothesis. However,] this has not prevented DuByne and many others from preying on people’s fears about climate change to spread conspiracies about an impending ice age, which they say will miraculously fix all of earth’s climate problems.

DuByne offers seven different membership levels for supporters, beginning at just $1 per month.

The most expensive costs $100 a month, and gives patrons “a private 20-minute call with David DuByne once per month, to discuss your particular preparedness issues or concerns.” So far just two people are paying this amount.

The researchers also found at least eight other accounts on Patreon that have spread climate change conspiracy theories as part of wider conspiracy sharing, including baseless claims about COVID-19 and the legitimacy of Joe Biden’s presidency. Some of these accounts are earning over $600 per month.


David DuByne created an account on Patreon, which he uses to post text, videos, and podcasts for his subscribers to discuss (T0085: Develop Text-Based Content, T0087: Develop Video-Based Content, T0088: Develop Audio-Based Content, T0146: Account Asset, T0115: Post Content, T0152.012: Subscription Service Platform, T0151.014: Comments Section, T0155.006: Subscription Access Asset). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0155.006: Subscription Access Asset + +**Summary**: A Subscription Access Asset is an online asset which requires a continued subscription for access to the asset.

Examples include the Blogging Platform Substack, which affords Blogs hosted on their platform the ability to produce subscriber-only posts, and the Subscription Service Platform Patreon. + **Tactic**: TA07 Select Channels and Affordances @@ -20,4 +63,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0155.007.md b/generated_pages/techniques/T0155.007.md index 474c71b..8c35b2a 100644 --- a/generated_pages/techniques/T0155.007.md +++ b/generated_pages/techniques/T0155.007.md @@ -2,6 +2,50 @@ **Summary**: Some online platforms support encrypted communication between platform users, for example the Chat Platforms Telegram and Signal. +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0155 Gated Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00068 Attempted Audio Deepfake Call Targets LastPass Employee](../../generated_pages/incidents/I00068.md) | “While reports of [...] deepfake calls targeting private companies are luckily still rare, LastPass itself experienced a deepfake attempt earlier today that we are sharing with the larger community to help raise awareness that this tactic is spreading and all companies should be on the alert. In our case, an employee received a series of calls, texts, and at least one voicemail featuring an audio deepfake from a threat actor impersonating our CEO via WhatsApp. As the attempted communication was outside of normal business communication channels and due to the employee’s suspicion regarding the presence of many of the hallmarks of a social engineering attempt (such as forced urgency), our employee rightly ignored the messages and reported the incident to our internal security team so that we could take steps to both mitigate the threat and raise awareness of the tactic both internally and externally.”

In this example attackers created an account on WhatsApp which impersonated the CEO of lastpass (T0097.100: Individual Persona, T0143.003: Impersonated Persona, T0146: Account Asset, T0151.004: Chat Platform, T0155.007: Encrypted Communication Channel). They used this asset to target an employee using deepfaked audio (T0088.001: Develop AI-Generated Audio (Deepfakes)). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0155.007: Encrypted Communication Channel + +**Summary**: Some online platforms support encrypted communication between platform users, for example the Chat Platforms Telegram and Signal. + +**Tactic**: TA07 Select Channels and Affordances **Parent Technique:** T0155 Gated Asset + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00068 Attempted Audio Deepfake Call Targets LastPass Employee](../../generated_pages/incidents/I00068.md) | “While reports of [...] deepfake calls targeting private companies are luckily still rare, LastPass itself experienced a deepfake attempt earlier today that we are sharing with the larger community to help raise awareness that this tactic is spreading and all companies should be on the alert. In our case, an employee received a series of calls, texts, and at least one voicemail featuring an audio deepfake from a threat actor impersonating our CEO via WhatsApp. As the attempted communication was outside of normal business communication channels and due to the employee’s suspicion regarding the presence of many of the hallmarks of a social engineering attempt (such as forced urgency), our employee rightly ignored the messages and reported the incident to our internal security team so that we could take steps to both mitigate the threat and raise awareness of the tactic both internally and externally.”

In this example attackers created an account on WhatsApp which impersonated the CEO of lastpass (T0097.100: Individual Persona, T0143.003: Impersonated Persona, T0146: Account Asset, T0151.004: Chat Platform, T0155.007: Encrypted Communication Channel). They used this asset to target an employee using deepfaked audio (T0088.001: Develop AI-Generated Audio (Deepfakes)). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0155.007: Encrypted Communication Channel + +**Summary**: Some online platforms support encrypted communication between platform users, for example the Chat Platforms Telegram and Signal. + **Tactic**: TA07 Select Channels and Affordances @@ -20,4 +64,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file diff --git a/generated_pages/techniques/T0155.md b/generated_pages/techniques/T0155.md index b648e4a..b14a72e 100644 --- a/generated_pages/techniques/T0155.md +++ b/generated_pages/techniques/T0155.md @@ -2,6 +2,49 @@ **Summary**: Some assets are Gated; closed communities or platforms which can’t be accessed openly. They may be password protected or require admin approval for entry. Many different digital assets can be gated. This technique contains sub-techniques with methods used to gate assets. Analysts can use T0155: Gated Asset if the method of gating is unclear. +**Tactic**: TA07 Select Channels and Affordances + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0155: Gated Asset + +**Summary**: Some assets are Gated; closed communities or platforms which can’t be accessed openly. They may be password protected or require admin approval for entry. Many different digital assets can be gated. This technique contains sub-techniques with methods used to gate assets. Analysts can use T0155: Gated Asset if the method of gating is unclear. + +**Tactic**: TA07 Select Channels and Affordances + + +| Associated Technique | Description | +| --------- | ------------------------- | + + + +| Incident | Descriptions given for this incident | +| -------- | -------------------- | +| [I00109 Coordinated Facebook Pages Designed to Fund a White Supremacist Agenda](../../generated_pages/incidents/I00109.md) | This report examines the white nationalist group Suavelos’ use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at the Suavelos website, and the content it links out to.

In going back to Suavelos’ main page, we also found: A link to a page on a web shop: alabastro.eu; A link to a page to donate money to the founders through Tipee and to the website through PayPal; [and] a link to a private forum that gathers 3.000 members: oppidum.suavelos.eu;

Suavelos linked out to an online store which it controlled (T0152.004: Website Asset, T0148.004: Payment Processing Capability), and to accounts on payment processing platforms PayPal and Tipee (T0146: Account Asset, T0148.003: Payment Processing Platform).

The Suavelos website also hosted a private forum (T0151.009: Legacy Online Forum Platform, T0155: Gated Asset), and linked out to a variety of assets it controlled on other online platforms: accounts on Twitter (T0146: Account Asset, T0151.008: Microblogging Platform), YouTube (T0146: Account Asset, T0152.006: Video Platform), Instagram and VKontakte (T0146: Account Asset, T0151.001: Social Media Platform). | + + + +| Counters | Response types | +| -------- | -------------- | + + +# Technique T0155: Gated Asset + +**Summary**: Some assets are Gated; closed communities or platforms which can’t be accessed openly. They may be password protected or require admin approval for entry. Many different digital assets can be gated. This technique contains sub-techniques with methods used to gate assets. Analysts can use T0155: Gated Asset if the method of gating is unclear. + **Tactic**: TA07 Select Channels and Affordances @@ -20,4 +63,3 @@ | -------- | -------------- | -DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW \ No newline at end of file