Fixed some formatting issues with version 1.5

Stephen Campbell 2024-07-27 05:24:28 -04:00
parent 2c4757b429
commit d83d55c722
54 changed files with 101 additions and 660 deletions

@@ -8,7 +8,7 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00017 US presidential elections](../../generated_pages/incidents/I00017.md) | Click-bait (economic actors) fake news sites (e.g. Denver Guardian; Macedonian teens) |
| [I00079 Three thousand fake tanks](../../generated_pages/incidents/I00079.md) | <i>“On January 4 [2017], a little-known news site based in Donetsk, Ukraine published an article claiming that the United States was sending 3,600 tanks to Europe as part of “the NATO war preparation against Russia”.<br><br> “Like much fake news, this story started with a grain of truth: the US was about to reinforce its armored units in Europe. However, the article converted literally thousands of other vehicles — including hundreds of Humvees and trailers — into tanks, building the US force into something 20 times more powerful than it actually was.<br><br> “The story caught on online. Within three days it had been repeated by a dozen websites in the United States, Canada and Europe, and shared some 40,000 times. It was translated into Norwegian; quoted, unchallenged, by Russian state news agency RIA Novosti; and spread among Russian-language websites.<br><br> “It was also an obvious fake, as any Google news search would have revealed. Yet despite its evident falsehood, it spread widely, and not just in directly Kremlin-run media. Tracking the spread of this fake therefore shines a light on the wider question of how fake stories are dispersed.”</i><br><br> Russian state news agency RIA Novosti presents itself as a news outlet (T0097.202: News Outlet Persona). RIA Novosti is a real news outlet (T0143.001: Authentic Persona), but it did not carry out the basic investigation into the veracity of the narrative it published that is implicitly expected of institutions presenting themselves as news outlets.<br><br> We can't know how or why this narrative ended up being published by RIA Novosti, but we know that it presented a distorted reality as authentic information (T0023: Distort Facts), claiming that the US was sending 3,600 tanks instead of 3,600 vehicles, which included ~180 tanks. |
| [I00079 Three thousand fake tanks](../../generated_pages/incidents/I00079.md) | <i>“On January 4 [2017], however, the Donbas News International (DNI) agency, based in Donetsk, Ukraine, and (since September 2016) an official state media outlet of the unrecognized separatist Donetsk People's Republic, ran an article under the sensational headline, “US sends 3,600 tanks against Russia — massive NATO deployment under way.” DNI is run by Finnish exile Janus Putkonen, described by the Finnish national broadcaster, YLE, as a “Finnish info warrior”, and the first foreigner to be granted a Donetsk passport.<br><br>“The equally sensational opening paragraph ran, “The NATO war preparation against Russia, Operation Atlantic Resolve, is in full swing. 2,000 US tanks will be sent in coming days from Germany to Eastern Europe, and 1,600 US tanks is deployed to storage facilities in the Netherlands. At the same time, NATO countries are sending thousands of soldiers in to Russian borders.”<br><br>“The report is based around an obvious factual error, conflating the total number of vehicles with the actual number of tanks, and therefore multiplying the actual tank force 20 times over. For context, military website globalfirepower.com puts the total US tank force at 8,848. If the DNI story had been true, it would have meant sending 40% of all the US main battle tanks to Europe in one go.<br><br>“Could this have been an innocent mistake? The simple answer is “no”. The journalist who penned the story had a sufficient command of the details to be able to write, later in the same article, “In January, 26 tanks, 100 other vehicles and 120 containers will be transported by train to Lithuania. Germany will send the 122nd Infantry Battalion.” Yet the same author apparently believed, in the headline and first paragraph, that every single vehicle in Atlantic Resolve is a tank. To call this an innocent mistake is simply not plausible.<br><br>“The DNI story can only realistically be considered a deliberate fake designed to caricaturize and demonize NATO, the United States and Germany (tactfully referred to in the report as having “rolled over Eastern Europe in its war of extermination 75 years ago”) by grossly overstating the number of MBTs involved.”</i><br><br>This behaviour matches T0016: Create Clickbait because the person who wrote the story demonstrably knew, from details later in the same article, that most of the vehicles were not tanks, but still chose to give the article a sensationalist headline claiming that all the vehicles being sent were tanks. |
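
A quick arithmetic check makes the scale of the distortion concrete, using only figures quoted in the descriptions above (3,600 claimed tanks, roughly 180 actual tanks, and globalfirepower.com's stated total US tank force of 8,848):

```latex
% Worked check of the figures quoted in the incident descriptions above
\[
  \frac{3600 \text{ claimed tanks}}{\sim 180 \text{ actual tanks}} = 20
  \quad \text{(the ``20 times more powerful'' overstatement)}
\]
\[
  \frac{3600}{8848} \approx 0.41
  \quad \text{(the ``40\% of all US main battle tanks'' comparison)}
\]
```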

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

@@ -8,7 +8,7 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00086 #WeAreNotSafe Exposing How a Post-October 7th Disinformation Network Operates on Israeli Social Media](../../generated_pages/incidents/I00086.md) | In this report accounts were identified as part of “a sophisticated and extensive coordinated network orchestrating a disinformation campaign targeting Israeli digital spaces since October 7th, 2023”.<br><br><i>“A conspicuous aspect of these accounts is the likely usage of machine-translated Hebrew. The disjointed and linguistically strange comments imply that the CIB's architects are not Hebrew-speaking and likely translate to Hebrew using online tools. There's no official way to confirm that a text is translated, but it is evident when the gender for nouns is incorrect, very unusual words or illogical grammar being used usually lead to the conclusion that the comment was not written by a native speaker that is aware of the nuances of the language.”</i><br><br>In this example analysts asserted that accounts were posting content which had been translated via machine (T0085.008: Machine Translated Text), based on indicators such as issues with grammar and gender. |
| [I00088 Much Ado About Somethings - China-Linked Influence Operation Endures Despite Takedown](../../generated_pages/incidents/I00088.md) | <i>“The broader War of Somethings (WoS) network, so dubbed because all the Facebook pages and user accounts in the network are connected to “The War of Somethings” page, behaves very similarly to previous Spamouflage campaigns. [Spamouflage is a coordinated inauthentic behaviour network attributed to the Chinese state.]<br><br> “Like other components of Spamouflage, the WoS network sometimes intersperses apolitical content with its more agenda-driven material. Many members post nearly identical comments at almost the same time. The text includes markers of automatic translation while error messages included as profile photos indicate the automated pulling of stock images.”</i><br><br> In this example analysts found an indicator of automated use of stock images in Facebook accounts; some of the accounts in the network appeared to have mistakenly uploaded error messages as profile pictures (T0145.006: Stock Image Account Imagery). The text posted by the accounts also appeared to have been translated using automation (T0085.008: Machine Translated Text). |
| [I00088 Much Ado About Somethings - China-Linked Influence Operation Endures Despite Takedown](../../generated_pages/incidents/I00088.md) | <i>“The broader War of Somethings (WoS) network, so dubbed because all the Facebook pages and user accounts in the network are connected to “The War of Somethings” page, behaves very similarly to previous Spamouflage campaigns. [Spamouflage is a coordinated inauthentic behaviour network attributed to the Chinese state.]<br><br> “Like other components of Spamouflage, the WoS network sometimes intersperses apolitical content with its more agenda-driven material. Many members post nearly identical comments at almost the same time. The text includes markers of automatic translation while error messages included as profile photos indicate the automated pulling of stock images.”</i><br><br> In this example analysts found an indicator of automated use of stock images in Facebook accounts; some of the accounts in the network appeared to have mistakenly uploaded error messages as profile pictures (T0145.007: Stock Image Account Imagery). The text posted by the accounts also appeared to have been translated using automation (T0085.008: Machine Translated Text). |
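
One coordination indicator quoted above — many accounts posting nearly identical comments at almost the same time — lends itself to a simple detection heuristic. A minimal sketch in Python, assuming a hypothetical list of (account, text, timestamp) tuples; the similarity threshold and time window are illustrative, not values from the report:

```python
# Minimal sketch: surface pairs of accounts that post near-identical text
# within a short window, the coordination indicator described above.
from datetime import datetime, timedelta
from difflib import SequenceMatcher

def near_duplicate_pairs(comments, text_threshold=0.9, window=timedelta(minutes=5)):
    """comments: list of (account, text, timestamp) tuples. Returns suspicious pairs."""
    pairs = []
    for i, (acct_a, text_a, t_a) in enumerate(comments):
        for acct_b, text_b, t_b in comments[i + 1:]:
            if acct_a == acct_b:
                continue  # one account repeating itself is a different signal
            close_in_time = abs(t_a - t_b) <= window
            similar = SequenceMatcher(None, text_a, text_b).ratio() >= text_threshold
            if close_in_time and similar:
                pairs.append((acct_a, acct_b))
    return pairs

# hypothetical data for illustration:
comments = [
    ("acct1", "Stand with us, share the truth now", datetime(2023, 11, 1, 12, 0)),
    ("acct2", "Stand with us, share the truth now!", datetime(2023, 11, 1, 12, 2)),
    ("acct3", "Lovely weather in Haifa today", datetime(2023, 11, 1, 12, 3)),
]
print(near_duplicate_pairs(comments))  # [('acct1', 'acct2')]
```

Pairwise comparison is quadratic, so a production system would bucket comments by time window or hash shingles first; the sketch only illustrates the signal itself.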

@@ -8,7 +8,7 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00068 Attempted Audio Deepfake Call Targets LastPass Employee](../../generated_pages/incidents/I00068.md) | <i>“While reports of [...] deepfake calls targeting private companies are luckily still rare, LastPass itself experienced a deepfake attempt earlier today that we are sharing with the larger community to help raise awareness that this tactic is spreading and all companies should be on the alert. In our case, an employee received a series of calls, texts, and at least one voicemail featuring an audio deepfake from a threat actor impersonating our CEO via WhatsApp. As the attempted communication was outside of normal business communication channels and due to the employee's suspicion regarding the presence of many of the hallmarks of a social engineering attempt (such as forced urgency), our employee rightly ignored the messages and reported the incident to our internal security team so that we could take steps to both mitigate the threat and raise awareness of the tactic both internally and externally.”</i><br><br> In this example attackers impersonated the CEO of LastPass (T0097.100: Individual Persona, T0143.003: Impersonated Persona), targeting one of its employees over WhatsApp (T0043.001: Use Encrypted Chat Apps) using deepfaked audio (T0088.001: Develop AI-Generated Audio (Deepfakes)). |
| [I00069 Uncharmed: Untangling Iran's APT42 Operations](../../generated_pages/incidents/I00069.md) | <I>“[Iranian state-sponsored cyber espionage actor] APT42's cloud operations attack lifecycle can be described in details as follows:<br> <br>- “Social engineering schemes involving decoys and trust building, which includes masquerading as legitimate NGOs and conducting ongoing correspondence with the target, sometimes lasting several weeks. <br>- The threat actor masqueraded as well-known international organizations in the legal and NGO fields and sent emails from domains typosquatting the original NGO domains, for example aspenlnstitute[.]org. <br>- The Aspen Institute became aware of this spoofed domain and collaborated with industry partners, including blocking it in SafeBrowsing, thus protecting users of Google Chrome and additional browsers. <br>- To increase their credibility, APT42 impersonated high-ranking personnel working at the aforementioned organizations when creating the email personas. <br>- APT42 enhanced their campaign credibility by using decoy material inviting targets to legitimate and relevant events and conferences. In one instance, the decoy material was hosted on an attacker-controlled SharePoint folder, accessible only after the victim entered their credentials. Mandiant did not identify malicious elements in the files, suggesting they were used solely to gain the victim's trust.”</I><br><br> In this example APT42, an Iranian state-sponsored cyber espionage actor, created a domain impersonating the existing NGO The Aspen Institute (T00143.004: Impersonated Persona, T0097.207: NGO Persona). They increased the perceived legitimacy of the impersonation by also impersonating high-ranking employees of the NGO (T0097.100: Individual Persona, T00143.004: Impersonated Persona). |
| [I00069 Uncharmed: Untangling Iran's APT42 Operations](../../generated_pages/incidents/I00069.md) | <I>“[Iranian state-sponsored cyber espionage actor] APT42's cloud operations attack lifecycle can be described in details as follows:<br> <br>- “Social engineering schemes involving decoys and trust building, which includes masquerading as legitimate NGOs and conducting ongoing correspondence with the target, sometimes lasting several weeks. <br>- The threat actor masqueraded as well-known international organizations in the legal and NGO fields and sent emails from domains typosquatting the original NGO domains, for example aspenlnstitute[.]org. <br>- The Aspen Institute became aware of this spoofed domain and collaborated with industry partners, including blocking it in SafeBrowsing, thus protecting users of Google Chrome and additional browsers. <br>- To increase their credibility, APT42 impersonated high-ranking personnel working at the aforementioned organizations when creating the email personas. <br>- APT42 enhanced their campaign credibility by using decoy material inviting targets to legitimate and relevant events and conferences. In one instance, the decoy material was hosted on an attacker-controlled SharePoint folder, accessible only after the victim entered their credentials. Mandiant did not identify malicious elements in the files, suggesting they were used solely to gain the victim's trust.”</I><br><br> In this example APT42, an Iranian state-sponsored cyber espionage actor, created a domain impersonating the existing NGO The Aspen Institute (T0143.003: Impersonated Persona, T0097.207: NGO Persona). They increased the perceived legitimacy of the impersonation by also impersonating high-ranking employees of the NGO (T0097.100: Individual Persona, T0143.003: Impersonated Persona). |
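
Typosquatted domains like aspenlnstitute[.]org (a lowercase “l” in place of the “i”) and washinqtonpost[.]press, quoted elsewhere in this commit, can often be surfaced by fuzzy-matching a candidate domain's second-level label against a watchlist of legitimate domains. A minimal sketch in Python; the watchlist and threshold are illustrative assumptions, not values from the Mandiant report:

```python
# Minimal sketch: flag domains whose second-level label is a near-match
# for a legitimate domain's label but not identical to it, as with
# aspenlnstitute[.]org vs aspeninstitute.org ('l' swapped for 'i').
from difflib import SequenceMatcher

# hypothetical watchlist of legitimate domains (not from the source report)
WATCHLIST = ["aspeninstitute.org", "washingtonpost.com"]

def sld(domain: str) -> str:
    """Naive second-level label extraction; adequate for this sketch."""
    parts = domain.lower().split(".")
    return parts[-2] if len(parts) >= 2 else parts[0]

def flag_typosquats(candidate: str, threshold: float = 0.85) -> list[str]:
    """Return watchlist domains the candidate closely resembles but does not match."""
    return [
        legit for legit in WATCHLIST
        if candidate.lower() != legit
        and SequenceMatcher(None, sld(candidate), sld(legit)).ratio() >= threshold
    ]

print(flag_typosquats("aspenlnstitute.org"))    # ['aspeninstitute.org']
print(flag_typosquats("washinqtonpost.press"))  # ['washingtonpost.com']
```

Comparing second-level labels rather than whole domains lets the check catch squats that swap the TLD as well as the spelling, as the .press example does.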

@@ -11,8 +11,7 @@
| [I00076 Network of Social Media Accounts Impersonates U.S. Political Candidates, Leverages U.S. and Israeli Media in Support of Iranian Interests](../../generated_pages/incidents/I00076.md) | <i>“In addition to directly posting material on social media, we observed some personas in the network [of inauthentic accounts attributed to Iran] leverage legitimate print and online media outlets in the U.S. and Israel to promote Iranian interests via the submission of letters, guest columns, and blog posts that were then published. We also identified personas that we suspect were fabricated for the sole purpose of submitting such letters, but that do not appear to maintain accounts on social media. The personas claimed to be based in varying locations depending on the news outlets they were targeting for submission; for example, a persona that listed their location as Seattle, WA in a letter submitted to the Seattle Times subsequently claimed to be located in Baytown, TX in a letter submitted to The Baytown Sun. Other accounts in the network then posted links to some of these letters on social media.”</i><br><br> In this example actors fabricated individuals who lived in areas which were being targeted for influence through the use of letters to local papers (T0097.101: Local Persona, T0143.002: Fabricated Persona). |
| [I00078 Meta's September 2020 Removal of Coordinated Inauthentic Behavior](../../generated_pages/incidents/I00078.md) | <i>“[Meta has] removed one Page, five Facebook accounts, one Group and three Instagram accounts for foreign or government interference which is coordinated inauthentic behavior on behalf of a foreign or government entity. This small network originated in Russia and focused primarily on Turkey and Europe, and also on the United States.<br><br> “This operation relied on fake accounts — some of which had been already detected and removed by our automated systems — to manage their Page and their Group, and to drive people to their site purporting to be an independent think-tank based primarily in Turkey. These accounts posed as locals based in Turkey, Canada and the US. They also recruited people to write for their website. This network had almost no following on our platforms when we removed it.”</i><br><br> Meta identified that a network of accounts originating in Russia were driving people off platform to a site which presented itself as a think-tank (T0097.204: Think Tank Persona). Meta did not make an attribution about the authenticity of this off-site think tank, so neither T0143.001: Authentic Persona nor T0143.002: Fabricated Persona is used here.<br><br> Meta had access to technical data for accounts on its platform, and asserted that they were fabricated individuals posing as locals who recruited targets to write content for their website (T0097.101: Local Persona, T0097.106: Recruiter Persona, T0143.002: Fabricated Persona). |
| [I00081 Belarus KGB created fake accounts to criticize Poland during border crisis, Facebook parent company says](../../generated_pages/incidents/I00081.md) | <i>“Meta said it also removed 31 Facebook accounts, four groups, two events and four Instagram accounts that it believes originated in Poland and targeted Belarus and Iraq. Those allegedly fake accounts posed as Middle Eastern migrants posting about the border crisis. Meta did not link the accounts to a specific group.<br><br> ““These fake personas claimed to be sharing their own negative experiences of trying to get from Belarus to Poland and posted about migrants' difficult lives in Europe,” Meta said. “They also posted about Poland's strict anti-migrant policies and anti-migrant neo-Nazi activity in Poland. They also shared links to news articles criticizing the Belarusian government's handling of the border crisis and off-platform videos alleging migrant abuse in Europe.””</i><br><br> In this example accounts falsely presented themselves as having local insight into the border crisis narrative (T0097.101: Local Persona, T0143.002: Fabricated Persona). |
| [I00086 #WeAreNotSafe Exposing How a Post-October 7th Disinformation Network Operates on Israeli Social Media](../../generated_pages/incidents/I00086.md) | Accounts which were identified as part of “a sophisticated and extensive coordinated network orchestrating a disinformation campaign targeting Israeli digital spaces since October 7th, 2023” were presenting themselves as locals to Israel (T0097.101: Local Persona):<br><br><i>“Unlike usual low-effort fake accounts, these accounts meticulously mimic young Israelis. They stand out due to the extraordinary lengths taken to ensure their authenticity, from unique narratives to the content they produce to their seemingly authentic interactions.”</i>
|
| [I00086 #WeAreNotSafe Exposing How a Post-October 7th Disinformation Network Operates on Israeli Social Media](../../generated_pages/incidents/I00086.md) | Accounts which were identified as part of “a sophisticated and extensive coordinated network orchestrating a disinformation campaign targeting Israeli digital spaces since October 7th, 2023” were presenting themselves as locals to Israel (T0097.101: Local Persona):<br><br><i>“Unlike usual low-effort fake accounts, these accounts meticulously mimic young Israelis. They stand out due to the extraordinary lengths taken to ensure their authenticity, from unique narratives to the content they produce to their seemingly authentic interactions.”</i> |
| [I00087 Challenging Truth and Trust: A Global Inventory of Organized Social Media Manipulation](../../generated_pages/incidents/I00087.md) | <i>“Another actor operating in China is the American-based company Devumi. Most of the Twitter accounts managed by Devumi resemble real people, and some are even associated with a kind of large-scale social identity theft. At least 55,000 of the accounts use the names, profile pictures, hometowns and other personal details of real Twitter users, including minors, according to The New York Times (Confessore et al., 2018).”</i><br><br>In this example accounts impersonated real locals while spreading operation narratives (T0143.003: Impersonated Persona, T0097.101: Local Persona). The impersonation included stealing the legitimate accounts' profile pictures (T0145.001: Copy Account Imagery). |
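
Copied account imagery of the kind described above is commonly hunted with perceptual hashing, which tolerates the re-compression and resizing that platforms apply to uploads. A minimal sketch, assuming the third-party Pillow and imagehash packages and hypothetical file paths; the distance threshold is illustrative:

```python
# Minimal sketch: compare a suspect profile photo against photos of known
# real users by perceptual hash, a common way to spot copied account
# imagery (T0145.001: Copy Account Imagery).
from PIL import Image
import imagehash

def is_copied_image(suspect_path: str, known_paths: list[str], max_distance: int = 5) -> bool:
    """True if the suspect image is a near-duplicate of any known user's photo."""
    suspect_hash = imagehash.average_hash(Image.open(suspect_path))
    for path in known_paths:
        # hash subtraction counts differing bits; a small distance means the
        # images remain near-identical after re-compression or resizing
        if suspect_hash - imagehash.average_hash(Image.open(path)) <= max_distance:
            return True
    return False

# hypothetical paths, for illustration only:
# is_copied_image("suspect.jpg", ["real_user_1.jpg", "real_user_2.jpg"])
```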

@@ -10,8 +10,7 @@
| [I00069 Uncharmed: Untangling Iran's APT42 Operations](../../generated_pages/incidents/I00069.md) | <I>“In March 2023, [Iranian state-sponsored cyber espionage actor] APT42 sent a spear-phishing email with a fake Google Meet invitation, allegedly sent on behalf of Mona Louri, a likely fake persona leveraged by APT42, claiming to be a human rights activist and researcher. Upon entry, the user was presented with a fake Google Meet page and asked to enter their credentials, which were subsequently sent to the attackers.”</i><br><br>In this example APT42, an Iranian state-sponsored cyber espionage actor, created an account which presented as a human rights activist (T0097.103: Activist Persona) and researcher (T0097.107: Researcher Persona). The analysts assert that it was likely the persona was fabricated (T0143.002: Fabricated Persona). |
| [I00077 Fronts & Friends: An Investigation into Two Twitter Networks Linked to Russian Actors](../../generated_pages/incidents/I00077.md) | <i>“The Syria portion of the network [of inauthentic accounts attributed to Russia] included additional sockpuppet accounts. One of these claimed to be a gay rights defender in Syria. Several said they were Syrian journalists. Another account, @SophiaHammer3, said she was born in Syria but currently lives in London. “I'm fond of history and politics. I struggle for justice.” Twitter users had previously observed that Sophia was likely a sockpuppet.”</i><br><br> This behaviour matches T0097.103: Activist Persona because the account presents itself as defending a political cause - in this case gay rights.<br><br> Twitter's technical indicators allowed their analysts to assert that these accounts were “reliably tied to Russian state actors”, meaning the presented personas were entirely fabricated (T0143.002: Fabricated Persona); these accounts are not legitimate gay rights defenders or journalists; they're assets controlled by Russia publishing narratives beneficial to their agenda. |
| [I00080 Hundreds Of Propaganda Accounts Targeting Iran And Qatar Have Been Removed From Facebook](../../generated_pages/incidents/I00080.md) | <i>“One example of a fake reporter account targeting Americans is “Jenny Powell,” a self-described Washington-based journalist, volunteer, and environmental activist. At first glance, Powell's Twitter timeline looks like it belongs to a young and eager reporter amplifying her interests. But her profile photo is a stock image, and many of her links go to the propaganda sites.<br><br> “Powell, who joined the platform just last month, shares links to stories from major US news media outlets, retweets local news about Washington, DC, and regularly promotes content from The Foreign Code and The Economy Club. Other fake journalist accounts behaved similarly to Powell and had generic descriptions. One of the accounts, for a fake Bruce Lopez in Louisiana, has a bio that describes him as a “Correspondent Traveler noun|linking verb|noun/verb/adjective|,” which appears to reveal the formula used to write Twitter bios for the accounts.”</i><br><br> The Jenny Powell account used in this influence operation presents as both a journalist and an activist (T0097.102: Journalist Persona, T0097.103: Activist Persona, T0143.002: Fabricated Persona). This example shows how threat actors can easily follow a template to present a fabricated persona to their target audience (T0144.002: Persona Template); a sketch for detecting such leaked template placeholders follows this table. |
| [I00082 Meta's November 2021 Adversarial Threat Report](../../generated_pages/incidents/I00082.md) | <I>“[Meta] removed 41 Facebook accounts, five Groups, and four Instagram accounts for violating our policy against coordinated inauthentic behavior. This activity originated in Belarus and primarily targeted audiences in the Middle East and Europe.<br><br> “The core of this activity began in October 2021, with some accounts created as recently as mid-November. The people behind it used newly-created fake accounts — many of which were detected and disabled by our automated systems soon after creation — to pose as journalists and activists from the European Union, particularly Poland and Lithuania. Some of the accounts used profile photos likely generated using artificial intelligence techniques like generative adversarial networks (GAN). These fictitious personas posted criticism of Poland in English, Polish, and Kurdish, including pictures and videos about Polish border guards allegedly violating migrants' rights, and compared Poland's treatment of migrants against other countries. They also posted to Groups focused on the welfare of migrants in Europe. A few accounts posted in Russian about relations between Belarus and the Baltic States.”</i><br><br> This example shows how accounts identified as participating in coordinated inauthentic behaviour were presenting themselves as journalists and activists while spreading operation narratives (T0097.102: Journalist Persona, T0097.103: Activist Persona).<br><br> Additionally, analysts at Meta identified accounts which were participating in coordinated inauthentic behaviour that had likely used AI-Generated images as their profile pictures (T0145.002: AI-Generated Account Imagery)., <i>“[Meta] removed a network of accounts in Vietnam for violating our Inauthentic Behavior policy against mass reporting. They coordinated the targeting of activists and other people who publicly criticized the Vietnamese government and used false reports of various violations in an attempt to have these users removed from our platform. The people behind this activity relied primarily on authentic and duplicate accounts to submit hundreds — in some cases, thousands — of complaints against their targets through our abuse reporting flows.<br><br>“Many operators also maintained fake accounts — some of which were detected and disabled by our automated systems — to pose as their targets so they could then report the legitimate accounts as fake. They would frequently change the gender and name of their fake accounts to resemble the target individual. Among the most common claims in this misleading reporting activity were complaints of impersonation, and to a much lesser extent inauthenticity. The network also advertised abusive services in their bios and constantly evolved their tactics in an attempt to evade detection.”</i><br><br>In this example actors repurposed their accounts to impersonate targeted activists (T0097.103: Activist Persona, T0143.003: Impersonated Persona) in order to falsely report the activists' legitimate accounts as impersonations (T0124.001: Report Non-Violative Opposing Content)
|
| [I00082 Meta's November 2021 Adversarial Threat Report](../../generated_pages/incidents/I00082.md) | <i>“[Meta] removed a network of accounts in Vietnam for violating our Inauthentic Behavior policy against mass reporting. They coordinated the targeting of activists and other people who publicly criticized the Vietnamese government and used false reports of various violations in an attempt to have these users removed from our platform. The people behind this activity relied primarily on authentic and duplicate accounts to submit hundreds — in some cases, thousands — of complaints against their targets through our abuse reporting flows.<br><br>“Many operators also maintained fake accounts — some of which were detected and disabled by our automated systems — to pose as their targets so they could then report the legitimate accounts as fake. They would frequently change the gender and name of their fake accounts to resemble the target individual. Among the most common claims in this misleading reporting activity were complaints of impersonation, and to a much lesser extent inauthenticity. The network also advertised abusive services in their bios and constantly evolved their tactics in an attempt to evade detection.”</i><br><br>In this example actors repurposed their accounts to impersonate targeted activists (T0097.103: Activist Persona, T0143.003: Impersonated Persona) in order to falsely report the activists' legitimate accounts as impersonations (T0124.001: Report Non-Violative Opposing Content). |
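
The mass-reporting pattern in this incident — hundreds or thousands of complaints funnelled at a single target — suggests a straightforward volumetric flag. A minimal sketch, assuming a hypothetical report log of (target_account, timestamp) pairs; the window and threshold are illustrative, not values from Meta's report:

```python
# Minimal sketch: flag accounts that receive an unusually high volume of
# abuse reports inside a short window, the mass-reporting pattern above.
from datetime import datetime, timedelta

def flag_mass_reported(reports, window=timedelta(hours=24), threshold=100):
    """reports: list of (target_account, timestamp). Returns flagged targets."""
    by_target = {}
    for target, ts in reports:
        by_target.setdefault(target, []).append(ts)
    flagged = []
    for target, times in by_target.items():
        times.sort()
        start = 0
        for end in range(len(times)):
            # shrink the window from the left until it spans <= `window`
            while times[end] - times[start] > window:
                start += 1
            if end - start + 1 >= threshold:
                flagged.append(target)
                break
    return flagged

# e.g. 150 reports against one activist account within an hour:
reports = [("activist_account", datetime(2021, 11, 1, 9, 0) + timedelta(seconds=20 * i))
           for i in range(150)]
print(flag_mass_reported(reports))  # ['activist_account']
```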

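The leaked “Correspondent Traveler noun|linking verb|noun/verb/adjective|” bio quoted in the Jenny Powell description above suggests a trivially codeable indicator for persona templates (T0144.002). A minimal sketch in Python; the placeholder vocabulary is an assumption generalised from that single quoted example:

```python
# Minimal sketch: flag bios that leak persona-template placeholders, like
# the "Correspondent Traveler noun|linking verb|noun/verb/adjective|" bio
# quoted above (T0144.002: Persona Template).
import re

# grammar-placeholder tokens followed by the template's '|' or '/' separator
PLACEHOLDER = re.compile(r"\b(noun|verb|adjective|adverb|linking verb)\b\s*[|/]",
                         re.IGNORECASE)

def looks_templated(bio: str) -> bool:
    """True if the bio still contains unfilled template placeholders."""
    return bool(PLACEHOLDER.search(bio))

print(looks_templated("Correspondent Traveler noun|linking verb|noun/verb/adjective|"))  # True
print(looks_templated("Washington-based journalist and environmental activist"))         # False
```
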
File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

@@ -7,10 +7,10 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00069 Uncharmed: Untangling Iran's APT42 Operations](../../generated_pages/incidents/I00069.md) | <i>“Mandiant identified at least three clusters of infrastructure used by [Iranian state-sponsored cyber espionage actor] APT42 to harvest credentials from targets in the policy and government sectors, media organizations and journalists, and NGOs and activists. The three clusters employ similar tactics, techniques and procedures (TTPs) to target victim credentials (spear-phishing emails), but use slightly varied domains, masquerading patterns, decoys, and themes.<br><br> Cluster A: Posing as News Outlets and NGOs: <br>- Suspected Targeting: credentials of journalists, researchers, and geopolitical entities in regions of interest to Iran. <br>- Masquerading as: The Washington Post (U.S.), The Economist (UK), The Jerusalem Post (IL), Khaleej Times (UAE), Azadliq (Azerbaijan), and more news outlets and NGOs. This often involves the use of typosquatted domains like washinqtonpost[.]press. <br><br>“Mandiant did not observe APT42 target or compromise these organizations, but rather impersonate them.”</I><br><br> In this example APT42, an Iranian state-sponsored cyber espionage actor, impersonated existing news organisations and NGOs (T0097.202: News Outlet Persona, T0097.207: NGO Persona, T00143.004: Impersonated Persona) in attempts to steal credentials from targets (T0141.001: Acquire Compromised Account), using elements of influence operations to facilitate their cyber attacks. |
| [I00069 Uncharmed: Untangling Iran's APT42 Operations](../../generated_pages/incidents/I00069.md) | <i>“Mandiant identified at least three clusters of infrastructure used by [Iranian state-sponsored cyber espionage actor] APT42 to harvest credentials from targets in the policy and government sectors, media organizations and journalists, and NGOs and activists. The three clusters employ similar tactics, techniques and procedures (TTPs) to target victim credentials (spear-phishing emails), but use slightly varied domains, masquerading patterns, decoys, and themes.<br><br> Cluster A: Posing as News Outlets and NGOs: <br>- Suspected Targeting: credentials of journalists, researchers, and geopolitical entities in regions of interest to Iran. <br>- Masquerading as: The Washington Post (U.S.), The Economist (UK), The Jerusalem Post (IL), Khaleej Times (UAE), Azadliq (Azerbaijan), and more news outlets and NGOs. This often involves the use of typosquatted domains like washinqtonpost[.]press. <br><br>“Mandiant did not observe APT42 target or compromise these organizations, but rather impersonate them.”</I><br><br> In this example APT42, an Iranian state-sponsored cyber espionage actor, impersonated existing news organisations and NGOs (T0097.202: News Outlet Persona, T0097.207: NGO Persona, T0143.003: Impersonated Persona) in attempts to steal credentials from targets (T0141.001: Acquire Compromised Account), using elements of influence operations to facilitate their cyber attacks. |
| [I00074 The Tactics & Tropes of the Internet Research Agency](../../generated_pages/incidents/I00074.md) | <i>“The Black Matters Facebook Page [operated by Russia's Internet Research Agency] explored several visual brand identities, moving from a plain logo to a gothic typeface on Jan 19th, 2016. On February 4th, 2016, the person who ran the Facebook Page announced the launch of the website, blackmattersus[.]com, emphasizing media distrust and a desire to build Black independent media; [“I DIDN'T BELIEVE THE MEDIA / SO I BECAME ONE”]”</i><br><br> In this example an asset controlled by Russia's Internet Research Agency began to present itself as a source of “Black independent media”, claiming that the media could not be trusted (T0097.208: Social Cause Persona, T0097.202: News Outlet Persona, T0143.002: Fabricated Persona). |
| [I00076 Network of Social Media Accounts Impersonates U.S. Political Candidates, Leverages U.S. and Israeli Media in Support of Iranian Interests](../../generated_pages/incidents/I00076.md) | <i>“Accounts in the network [of inauthentic accounts attributed to Iran], under the guise of journalist personas, also solicited various individuals over Twitter for interviews and chats, including real journalists and politicians. The personas appear to have successfully conducted remote video and audio interviews with U.S. and UK-based individuals, including a prominent activist, a radio talk show host, and a former U.S. Government official, and subsequently posted the interviews on social media, showing only the individual being interviewed and not the interviewer. The interviewees expressed views that Iran would likely find favorable, discussing topics such as the February 2019 Warsaw summit, an attack on a military parade in the Iranian city of Ahvaz, and the killing of Jamal Khashoggi.<br><br> “The provenance of these interviews appear to have been misrepresented on at least one occasion, with one persona appearing to have falsely claimed to be operating on behalf of a mainstream news outlet; a remote video interview with a US-based activist about the Jamal Khashoggi killing was posted by an account adopting the persona of a journalist from the outlet Newsday, with the Newsday logo also appearing in the video. We did not identify any Newsday interview with the activist in question on this topic. In another instance, a persona posing as a journalist directed tweets containing audio of an interview conducted with a former U.S. Government official at real media personalities, calling on them to post about the interview.”</i><br><br> In this example actors fabricated journalists (T0097.102: Journalist Persona, T0143.002: Fabricated Persona) who worked at existing news outlets (T0097.202: News Outlet Persona, T0143.003: Impersonated Persona) in order to conduct interviews with targeted individuals. |
| [I00077 Fronts & Friends: An Investigation into Two Twitter Networks Linked to Russian Actors](../../generated_pages/incidents/I00077.md) | <i>“Approximately one-third of the suspended accounts [in the network of inauthentic accounts attributed to Russia] tweeted primarily about Syria, in English, Russian, and Arabic; many accounts tweeted in all three languages. The themes these accounts pushed will be familiar to anyone who has studied Russian overt or covert information operations about Syria: <br> <br>- Praising Russia's role in Syria; claiming Russia was killing terrorists in Syria and highlighting Russia's humanitarian aid <br>- Criticizing the role of Turkey and the US in Syria; claiming the US killed civilians in Syria <br>- Criticizing the White Helmets, and claiming that they worked with Westerners to create scenes to make it look like the Syrian government used chemical weapons <br><br> “The two most prominent Syria accounts were @Syria_FreeNews and @PamSpenser. <br><br>@Syria_FreeNews had 20,505 followers and was created on April 6, 2017. The account's bio said “Exclusive information about Middle East and Northern Africa countries' events. BreaKing news from the scene.””</i><br><br> This behaviour matches T0097.202: News Outlet Persona because the account @Syria_FreeNews presented itself as a news outlet in its name, bio, and branding, across all websites on which the persona had been established (T0144.001: Persona Presented across Platforms). Twitter's technical indicators allowed their analysts to assert that the account “can be reliably tied to Russian state actors”. Because of this we can assert that the persona is entirely fabricated (T0143.002: Fabricated Persona); this is not a legitimate news outlet providing information about Syria, it's an asset controlled by Russia publishing narratives beneficial to their agenda., <i>“Two accounts [in the second network of accounts taken down by Twitter] appear to have been operated by Oriental Review and the Strategic Culture Foundation, respectively. Oriental Review bills itself as an “open source site for free thinking”, though it trades in outlandish conspiracy theories and posts content bylined by fake people. Stanford Internet Observatory researchers and investigative journalists have previously noted the presence of content bylined by fake “reporter” personas tied to the GRU-linked front Inside Syria Media Center, posted on Oriental Review.”</i><br><br> In an effort to make Oriental Review's stories appear more credible, the threat actors created fake journalists and pretended they wrote the articles on their website (aka “bylined” them).<br><br> In DISARM terms, they fabricated journalists (T0143.002: Fabricated Persona, T0097.102: Journalist Persona), and then used these fabricated journalists to increase perceived legitimacy (T0097.202: News Outlet Persona, T0143.002: Fabricated Persona). |
| [I00077 Fronts & Friends: An Investigation into Two Twitter Networks Linked to Russian Actors](../../generated_pages/incidents/I00077.md) | <i>“Two accounts [in the second network of accounts taken down by Twitter] appear to have been operated by Oriental Review and the Strategic Culture Foundation, respectively. Oriental Review bills itself as an “open source site for free thinking”, though it trades in outlandish conspiracy theories and posts content bylined by fake people. Stanford Internet Observatory researchers and investigative journalists have previously noted the presence of content bylined by fake “reporter” personas tied to the GRU-linked front Inside Syria Media Center, posted on Oriental Review.”</i><br><br> In an effort to make Oriental Review's stories appear more credible, the threat actors created fake journalists and pretended they wrote the articles on their website (aka “bylined” them).<br><br> In DISARM terms, they fabricated journalists (T0143.002: Fabricated Persona, T0097.102: Journalist Persona), and then used these fabricated journalists to increase perceived legitimacy (T0097.202: News Outlet Persona, T0143.002: Fabricated Persona). |
| [I00079 Three thousand fake tanks](../../generated_pages/incidents/I00079.md) | <i>“On January 4 [2017], a little-known news site based in Donetsk, Ukraine published an article claiming that the United States was sending 3,600 tanks to Europe as part of “the NATO war preparation against Russia”.<br><br> “Like much fake news, this story started with a grain of truth: the US was about to reinforce its armored units in Europe. However, the article converted literally thousands of other vehicles — including hundreds of Humvees and trailers — into tanks, building the US force into something 20 times more powerful than it actually was.<br><br> “The story caught on online. Within three days it had been repeated by a dozen websites in the United States, Canada and Europe, and shared some 40,000 times. It was translated into Norwegian; quoted, unchallenged, by Russian state news agency RIA Novosti; and spread among Russian-language websites.<br><br> “It was also an obvious fake, as any Google news search would have revealed. Yet despite its evident falsehood, it spread widely, and not just in directly Kremlin-run media. Tracking the spread of this fake therefore shines a light on the wider question of how fake stories are dispersed.”</i><br><br> Russian state news agency RIA Novosti presents itself as a news outlet (T0097.202: News Outlet Persona). RIA Novosti is a real news outlet (T0143.001: Authentic Persona), but it did not carry out the basic investigation into the veracity of the narrative it published that is implicitly expected of institutions presenting themselves as news outlets.<br><br> We can't know how or why this narrative ended up being published by RIA Novosti, but we know that it presented a distorted reality as authentic information (T0023: Distort Facts), claiming that the US was sending 3,600 tanks instead of 3,600 vehicles, which included ~180 tanks. |
| [I00094 A glimpse inside a Chinese influence campaign: How bogus news websites blur the line between true and false](../../generated_pages/incidents/I00094.md) | Researchers identified websites managed by a Chinese marketing firm which presented themselves as news organisations.<br><br> <i>“On its official website, the Chinese marketing firm boasted that they were in contact with news organizations across the globe, including one in South Korea called the “Chungcheng Times.” According to the joint team, this outlet is a fictional news organization created by the offending company. The Chinese company sought to disguise the site's true identity and purpose by altering the name attached to it by one character—making it very closely resemble the name of a legitimate outlet operating out of Chungchengbuk-do.<br><br> “The marketing firm also established a news organization under the Korean name “Gyeonggido Daily,” which closely resembles legitimate news outlets operating out of Gyeonggi province such as “Gyeonggi Daily,” “Daily Gyeonggi Newspaper,” and “Gyeonggi N Daily.” One of the fake news sites was named “Incheon Focus,” a title that could be easily mistaken for the legitimate local news outlet, “Focus Incheon.” Furthermore, the Chinese marketing company operated two fake news sites with names identical to two separate local news organizations, one of which ceased operations in December 2022.<br><br> “In total, fifteen out of eighteen Chinese fake news sites incorporated the correct names of real regions in their fake company names. “If the operators had created fake news sites similar to major news organizations based in Seoul, however, the intended deception would have easily been uncovered,” explained Song Tae-eun, an assistant professor in the Department of National Security & Unification Studies at the Korea National Diplomatic Academy, to The Readable. “There is also the possibility that they are using the regional areas as an attempt to form ties with the local community; that being the government, the private sector, and religious communities.””</i><br><br> The firm styled their news sites to resemble existing local news outlets in their target region (T0097.201: Local Institution Persona, T0097.202: News Outlet Persona, T0143.003: Impersonated Persona). |

@@ -1,6 +1,6 @@
# Technique T0097.204: Think Tank Persona
* **Summary**: An institution with a think tank persona presents itself as a think tank; an organisation that aims to conduct original research and propose new policies or solutions, especially for social and scientific problems.<br><br> While presenting as a think tank is not an indication of inauthentic behaviour, think tank personas are commonly used by threat actors as a front for their operational activity (T0143.002: Fabricated Persona, T0097.204: Think Tank Persona). They may be created to give legitimacy to narratives and allow them to suggest politically beneficial solutions to societal issues.<br><br> Legitimate think tanks could have a political bias that they may not be transparent about, they could use their persona for malicious purposes, or they could be exploited by threat actors (T0143.001: Authentic Persona, T0097.204: Think Tank Persona). For example, a think tank could take money for using their position to provide legitimacy to a false narrative, or be tricked into doing so without their knowledge.<br><br> <b>Associated Techniques and Sub-techniques</b><br> <b>T0097.107: Researcher Persona:</br> Institutions presenting as think tanks may also present researchers working within the organisation.
* **Summary**: An institution with a think tank persona presents itself as a think tank; an organisation that aims to conduct original research and propose new policies or solutions, especially for social and scientific problems.<br><br> While presenting as a think tank is not an indication of inauthentic behaviour, think tank personas are commonly used by threat actors as a front for their operational activity (T0143.002: Fabricated Persona, T0097.204: Think Tank Persona). They may be created to give legitimacy to narratives and allow them to suggest politically beneficial solutions to societal issues.<br><br> Legitimate think tanks could have a political bias that they may not be transparent about, they could use their persona for malicious purposes, or they could be exploited by threat actors (T0143.001: Authentic Persona, T0097.204: Think Tank Persona). For example, a think tank could take money for using their position to provide legitimacy to a false narrative, or be tricked into doing so without their knowledge.<br><br> <b>Associated Techniques and Sub-techniques</b><br> <b>T0097.107: Researcher Persona:</b> Institutions presenting as think tanks may also present researchers working within the organisation.
* **Belongs to tactic stage**: TA16

File diff suppressed because one or more lines are too long

@@ -7,7 +7,7 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00069 Uncharmed: Untangling Iran's APT42 Operations](../../generated_pages/incidents/I00069.md) | <i>“Mandiant identified at least three clusters of infrastructure used by [Iranian state-sponsored cyber espionage actor] APT42 to harvest credentials from targets in the policy and government sectors, media organizations and journalists, and NGOs and activists. The three clusters employ similar tactics, techniques and procedures (TTPs) to target victim credentials (spear-phishing emails), but use slightly varied domains, masquerading patterns, decoys, and themes.<br><br> Cluster A: Posing as News Outlets and NGOs: <br>- Suspected Targeting: credentials of journalists, researchers, and geopolitical entities in regions of interest to Iran. <br>- Masquerading as: The Washington Post (U.S.), The Economist (UK), The Jerusalem Post (IL), Khaleej Times (UAE), Azadliq (Azerbaijan), and more news outlets and NGOs. This often involves the use of typosquatted domains like washinqtonpost[.]press. <br><br>“Mandiant did not observe APT42 target or compromise these organizations, but rather impersonate them.”</I><br><br> In this example APT42, an Iranian state-sponsored cyber espionage actor, impersonated existing news organisations and NGOs (T0097.202: News Outlet Persona, T0097.207: NGO Persona, T00143.004: Impersonated Persona) in attempts to steal credentials from targets (T0141.001: Acquire Compromised Account), using elements of influence operations to facilitate their cyber attacks. |
| [I00069 Uncharmed: Untangling Iran's APT42 Operations](../../generated_pages/incidents/I00069.md) | <I>“[Iranian state-sponsored cyber espionage actor] APT42's cloud operations attack lifecycle can be described in details as follows:<br> <br>- “Social engineering schemes involving decoys and trust building, which includes masquerading as legitimate NGOs and conducting ongoing correspondence with the target, sometimes lasting several weeks. <br>- The threat actor masqueraded as well-known international organizations in the legal and NGO fields and sent emails from domains typosquatting the original NGO domains, for example aspenlnstitute[.]org. <br>- The Aspen Institute became aware of this spoofed domain and collaborated with industry partners, including blocking it in SafeBrowsing, thus protecting users of Google Chrome and additional browsers. <br>- To increase their credibility, APT42 impersonated high-ranking personnel working at the aforementioned organizations when creating the email personas. <br>- APT42 enhanced their campaign credibility by using decoy material inviting targets to legitimate and relevant events and conferences. In one instance, the decoy material was hosted on an attacker-controlled SharePoint folder, accessible only after the victim entered their credentials. Mandiant did not identify malicious elements in the files, suggesting they were used solely to gain the victim's trust.”</I><br><br> In this example APT42, an Iranian state-sponsored cyber espionage actor, created a domain impersonating the existing NGO The Aspen Institute (T0143.003: Impersonated Persona, T0097.207: NGO Persona). They increased the perceived legitimacy of the impersonation by also impersonating high-ranking employees of the NGO (T0097.100: Individual Persona, T0143.003: Impersonated Persona). |

@@ -7,11 +7,7 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00064 Tinder nightmares: the promise and peril of political bots](../../generated_pages/incidents/I00064.md) | _"In the days leading up to the UK's [2017] general election, youths looking for love online encountered a whole new kind of Tinder nightmare. A group of young activists built a Tinder chatbot to co-opt profiles and persuade swing voters to support Labour. The bot accounts sent 30,000-40,000 messages to targeted 18-25 year olds in battleground constituencies like Dudley North, which Labour ended up winning by only 22 votes._ <br /> <br />
_"Tinder is a dating app where users swipe right to indicate attraction and interest in a potential partner. If both people swipe right on each other's profile, a dialogue box becomes available for them to privately chat. After meeting their crowdfunding goal of only £500, the team built a tool which took over and operated the accounts of recruited Tinder-users. By upgrading the profiles to Tinder Premium, the team was able to place bots in any contested constituency across the UK. Once planted, the bots swiped right on all users in the attempt to get the largest number of matches and inquire into their voting intentions."_ <br /> <br />
This incident matches T0104.002: Dating App, as users of Tinder were targeted in an attempt to persuade them to vote for a particular party in the upcoming election, rather than for the purpose of connecting those who were authentically interested in dating each other. |
| [I00064 Tinder nightmares: the promise and peril of political bots](../../generated_pages/incidents/I00064.md) | _"In the days leading up to the UK's [2017] general election, youths looking for love online encountered a whole new kind of Tinder nightmare. A group of young activists built a Tinder chatbot to co-opt profiles and persuade swing voters to support Labour. The bot accounts sent 30,000-40,000 messages to targeted 18-25 year olds in battleground constituencies like Dudley North, which Labour ended up winning by only 22 votes._<br /> <br />_"Tinder is a dating app where users swipe right to indicate attraction and interest in a potential partner. If both people swipe right on each other's profile, a dialogue box becomes available for them to privately chat. After meeting their crowdfunding goal of only £500, the team built a tool which took over and operated the accounts of recruited Tinder-users. By upgrading the profiles to Tinder Premium, the team was able to place bots in any contested constituency across the UK. Once planted, the bots swiped right on all users in the attempt to get the largest number of matches and inquire into their voting intentions."_ <br /> <br />This incident matches T0104.002: Dating App, as users of Tinder were targeted in an attempt to persuade them to vote for a particular party in the upcoming election, rather than for the purpose of connecting those who were authentically interested in dating each other. |

@@ -7,8 +7,7 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00082 Meta's November 2021 Adversarial Threat Report](../../generated_pages/incidents/I00082.md) | <i>“[Meta] removed a network of accounts in Vietnam for violating our Inauthentic Behavior policy against mass reporting. They coordinated the targeting of activists and other people who publicly criticized the Vietnamese government and used false reports of various violations in an attempt to have these users removed from our platform. The people behind this activity relied primarily on authentic and duplicate accounts to submit hundreds — in some cases, thousands — of complaints against their targets through our abuse reporting flows.<br><br>“Many operators also maintained fake accounts — some of which were detected and disabled by our automated systems — to pose as their targets so they could then report the legitimate accounts as fake. They would frequently change the gender and name of their fake accounts to resemble the target individual. Among the most common claims in this misleading reporting activity were complaints of impersonation, and to a much lesser extent inauthenticity. The network also advertised abusive services in their bios and constantly evolved their tactics in an attempt to evade detection.”</i><br><br>In this example actors repurposed their accounts to impersonate targeted activists (T0097.103: Activist Persona, T0143.003: Impersonated Persona) in order to falsely report the activists' legitimate accounts as impersonations (T0124.001: Report Non-Violative Opposing Content)
|
| [I00082 Meta's November 2021 Adversarial Threat Report](../../generated_pages/incidents/I00082.md) | <i>“[Meta] removed a network of accounts in Vietnam for violating our Inauthentic Behavior policy against mass reporting. They coordinated the targeting of activists and other people who publicly criticized the Vietnamese government and used false reports of various violations in an attempt to have these users removed from our platform. The people behind this activity relied primarily on authentic and duplicate accounts to submit hundreds — in some cases, thousands — of complaints against their targets through our abuse reporting flows.<br><br>“Many operators also maintained fake accounts — some of which were detected and disabled by our automated systems — to pose as their targets so they could then report the legitimate accounts as fake. They would frequently change the gender and name of their fake accounts to resemble the target individual. Among the most common claims in this misleading reporting activity were complaints of impersonation, and to a much lesser extent inauthenticity. The network also advertised abusive services in their bios and constantly evolved their tactics in an attempt to evade detection.”</i><br><br>In this example actors repurposed their accounts to impersonate targeted activists (T0097.103: Activist Persona, T0143.003: Impersonated Persona) in order to falsely report the activists' legitimate accounts as impersonations (T0124.001: Report Non-Violative Opposing Content). |

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

@@ -9,7 +9,7 @@
| -------- | -------------------- |
| [I00064 Tinder nightmares: the promise and peril of political bots](../../generated_pages/incidents/I00064.md) | <I>“In the days leading up to the UK's [2017] general election, youths looking for love online encountered a whole new kind of Tinder nightmare. A group of young activists built a Tinder chatbot to co-opt profiles and persuade swing voters to support Labour. The bot accounts sent 30,000-40,000 messages to targeted 18-25 year olds in battleground constituencies like Dudley North, which Labour ended up winning by only 22 votes. [...]<br><br> “The activists maintain that the project was meant to foster democratic engagement. But screenshots of the bots' activity expose a harsher reality. Images of conversations between real users and these bots, posted on i-D, Mashable, as well as on Fowler and Goodman's public Twitter accounts, show that the bots did not identify themselves as automated accounts, instead posing as the user whose profile they had taken over. While conducting research for this story, it turned out that a number of [the reporter's friends] living in Oxford had interacted with the bot in the lead up to the election and had no idea that it was not a real person.”</i><br><br> In this example people offered up their real accounts for the automation of political messaging; the actors convinced the users to give up access to their accounts to use in the operation (T0141.001: Acquire Compromised Account). The actors maintained the accounts' existing persona, and presented themselves as potential romantic suitors for legitimate platform users (T0097.109: Romantic Suitor Persona, T0143.003: Impersonated Persona). |
| [I00065 'Ghostwriter' Influence Campaign: Unknown Actors Leverage Website Compromises and Fabricated Content to Push Narratives Aligned With Russian Security Interests](../../generated_pages/incidents/I00065.md) | <i>“Overall, narratives promoted in the five operations appear to represent a concerted effort to discredit the ruling political coalition, widen existing domestic political divisions and project an image of coalition disunity in Poland. In each incident, content was primarily disseminated via Twitter, Facebook, and/or Instagram accounts belonging to Polish politicians, all of whom have publicly claimed their accounts were compromised at the times the posts were made.”</i><br><br>This example demonstrates how threat actors can use T0141.001: Acquire Compromised Account to distribute inauthentic content while exploiting the legitimate account holder's persona. |
| [I00069 Uncharmed: Untangling Iran's APT42 Operations](../../generated_pages/incidents/I00069.md) | <i>“Mandiant identified at least three clusters of infrastructure used by [Iranian state-sponsored cyber espionage actor] APT42 to harvest credentials from targets in the policy and government sectors, media organizations and journalists, and NGOs and activists. The three clusters employ similar tactics, techniques and procedures (TTPs) to target victim credentials (spear-phishing emails), but use slightly varied domains, masquerading patterns, decoys, and themes.<br><br> Cluster A: Posing as News Outlets and NGOs: <br>- Suspected Targeting: credentials of journalists, researchers, and geopolitical entities in regions of interest to Iran. <br>- Masquerading as: The Washington Post (U.S.), The Economist (UK), The Jerusalem Post (IL), Khaleej Times (UAE), Azadliq (Azerbaijan), and more news outlets and NGOs. This often involves the use of typosquatted domains like washinqtonpost[.]press. <br><br>“Mandiant did not observe APT42 target or compromise these organizations, but rather impersonate them.”</I><br><br> In this example APT42, an Iranian state-sponsored cyber espionage actor, impersonated existing news organisations and NGOs (T0097.202: News Outlet Persona, T0097.207: NGO Persona, T0143.003: Impersonated Persona) in attempts to steal credentials from targets (T0141.001: Acquire Compromised Account), using elements of influence operations to facilitate their cyber attacks. |
| [I00071 Russia-aligned hacktivists stir up anti-Ukrainian sentiments in Poland](../../generated_pages/incidents/I00071.md) | <i>“The August 17 [2022] Telegram post [which contained a falsified letter from the Ukrainian Minister of Foreign Affairs asking Poland to rename Belwederska Street in Warsaw — the location of the Russian embassy building — as Stepan Bandera Street, in honor of the far-right nationalist who led the Ukrainian Insurgent Army during WWII] also contained screenshots of Facebook posts that appeared on two Facebook accounts belonging to Polish nationals Piotr Górka, an expert in the history of the Polish Air Force, and Dariusz Walusiak, a Polish historian and documentary maker. The Górka post suggested that he fully supported the Polish government's decision to change Belwederska Street to Stepan Bandera Street.<br><br> “In a statement to the DFRLab, Górka said his account was accessed without his consent. “This is not my post loaded to my Facebook page,” he explained. “My site was hacked, some days ago.” At the time of publishing, Piotr Górka's post and his Facebook account were no longer accessible.<br><br> “The post on Górka's Facebook page was shared by Dariusz Walusiak's Facebook account; the account also reposted it on the Facebook walls of more than twenty other Facebook users, including Adam Kalita, currently working at Krakow branch of the Institute of National Remembrance; Jan Kasprzyk, head of the Office for War Veterans and Victims of Oppression; and Alicja Kondraciuk, a Polish public figure living in Krakow.<br><br> “Walusiak's Facebook account is also no longer accessible. Given his work on Polish history and identity, it seems highly unlikely he would support the Bandera measure; the DFRLab has also reached out to him for comment.<br><br> “The fact that Joker DPR's Telegram post included screenshots of their Facebook posts raises the strong possibility that both Facebook accounts were compromised, and that hackers planted false statements on their pages that would seem out of character for them in order to gain further attention to the forged documents.”</I><br><br> In this example, threat actors used compromised accounts (T0141.001: Acquire Compromised Account) of Polish historians who had enough relevant knowledge to plausibly weigh in on the forged letter's narrative (T0143.003: Impersonated Persona, T0097.101: Local Persona, T0097.108: Expert Persona). <br><br> This matches T0097.108: Expert Persona because the impersonation exploited Górka's and Walusiak's existing personas as experts in Polish history. |

View file

@@ -7,8 +7,7 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00067 Understanding Information disorder](../../generated_pages/incidents/I00067.md) | <i>“In France, in the lead-up to the 2017 election, we saw [the] “labeling content as satire” as a deliberate tactic. In one example, written up by Adrien Sénécat in Le Monde, it shows the step-by-step approach of those who want to use satire in this way.”<br><br> “PHASE 1: Le Gorafi, a satirical site [which focuses on news/current affairs], “reported” that French presidential candidate Emmanuel Macron feels dirty after touching poor people's hands. This worked as an attack on Macron as he is regularly characterized as being out of touch and elitist.<br><br> “PHASE 2: Hyper-partisan Facebook Pages “used this claim” and created new reports, including footage of Macron visiting a factory, and wiping his hands during the visit.<br><br> “PHASE 3: The videos went viral, and a worker in another factory “challenged Macron to shake his dirty, working class hands.” The news cycle continued.”</I><br><br> In this example a satirical news website (T0097.202: News Outlet Persona, T0143.004: Parody Persona) published a narrative claiming Macron felt dirty after touching poor people's hands. This story was uncritically amplified without the context that its origin was a parody site, and with video content appearing to support the narrative. |
| [I00070 Eli Lilly Clarifies It's Not Offering Free Insulin After Tweet From Fake Verified Account—As Chaos Unfolds On Twitter](../../generated_pages/incidents/I00070.md) | <i>“Twitter Blue launched [November 2022], giving any users who pay $8 a month the ability to be verified on the site, a feature previously only available to public figures, government officials and journalists as a way to show they are who they claim to be.<br><br> “[A day after the launch], an account with the handle @EliLillyandCo labeled itself with the name “Eli Lilly and Company,” and by using the same logo as the company in its profile picture and with the verification checkmark, was indistinguishable from the real company (the picture has since been removed and the account has labeled itself as a parody profile).<br><br> The parody account tweeted “we are excited to announce insulin is free now.””</i><br><br> In this example an account impersonated the pharmaceutical company Eli Lilly (T0097.205: Business Persona, T0143.003: Impersonated Persona) by copying its name, profile picture (T0145.001: Copy Account Imagery), and paying for verification. |
| [I00067 Understanding Information disorder](../../generated_pages/incidents/I00067.md) | <i>“A 2019 case in the US involved a Republican political operative who created a parody site designed to look like Joe Biden's official website as the former vice president was campaigning to be the Democratic nominee for the 2020 presidential election. With a URL of joebiden[.]info, the parody site was indexed by Google higher than Biden's official site, joebiden[.]com, when he launched his campaign in April 2019. The operative, who previously had created content for Donald Trump, said he did not create the site for the Trump campaign directly.<br><br> “The opening line on the parody site reads: “Uncle Joe is back and ready to take a hands-on approach to America's problems!” It is full of images of Biden kissing and hugging young girls and women. At the bottom of the page a statement reads: “This site is political commentary and parody of Joe Biden's Presidential campaign website. This is not Joe Biden's actual website. It is intended for entertainment and political commentary only.””</i><br><br> In this example a website was created which claimed to be a parody of Joe Biden's official website (T0143.004: Parody Persona).<br><br> Although the website was a parody, it ranked higher than Joe Biden's real website on Google search. |

View file

@@ -7,7 +7,7 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00088 Much Ado About Somethings - China-Linked Influence Operation Endures Despite Takedown](../../generated_pages/incidents/I00088.md) | <i>“Beneath a video on Facebook about the war between Israel and Hamas, Lamonica Trout commented, “America is the war monger, the Jews own son!” She left identical comments beneath the same video on two other Facebook pages. Trout's profile provides no information besides her name. It lists no friends, and there is not a single post or photograph in her feed. Trout's profile photo shows an alligator.<br><br> “Lamonica Trout is likely an invention of the group behind Spamouflage, an ongoing, multi-year influence operation that promotes Beijing's interests. Last year, Facebook's parent company, Meta, took down 7,704 accounts and 954 pages it identified as part of the Spamouflage operation, which it described as the “largest known cross-platform influence operation [Meta had] disrupted to date.” Facebook's terms of service prohibit a range of deceptive and inauthentic behaviors, including efforts to conceal the purpose of social media activity or the identity of those behind it.”</i><br><br> In this example an account attributed to a multi-year influence operation created the persona of Lamonica Trout in a Facebook account, which used an image of an animal in its profile picture (T0145.003: Animal Account Imagery). |

View file

@@ -7,12 +7,9 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00080 Hundreds Of Propaganda Accounts Targeting Iran And Qatar Have Been Removed From Facebook](../../generated_pages/incidents/I00080.md) | <i>“One example of a fake reporter account targeting Americans is “Jenny Powell,” a self-described Washington-based journalist, volunteer, and environmental activist. At first glance, Powell's Twitter timeline looks like it belongs to a young and eager reporter amplifying her interests. But her profile photo is a stock image, and many of her links go to the propaganda sites.<br><br>“Powell, who joined the platform just last month, shares links to stories from major US news media outlets, retweets local news about Washington, DC, and regularly promotes content from The Foreign Code and The Economy Club. Other fake journalist accounts behaved similarly to Powell and had generic descriptions. One of the accounts, for a fake Bruce Lopez in Louisiana, has a bio that describes him as a “Correspondent Traveler noun|linking verb|noun/verb/adjective|,” which appears to reveal the formula used to write Twitter bios for the accounts.”</I><br><br>This behaviour matches T0145.007: Stock Image Account Imagery because the account was identified as using a stock image as its profile picture. |
| [I00088 Much Ado About Somethings - China-Linked Influence Operation Endures Despite Takedown](../../generated_pages/incidents/I00088.md) | <i>“The broader War of Somethings (WoS) network, so dubbed because all the Facebook pages and user accounts in the network are connected to “The War of Somethings” page, behaves very similarly to previous Spamouflage campaigns. [Spamouflage is a coordinated inauthentic behaviour network attributed to the Chinese state.]<br><br> “Like other components of Spamouflage, the WoS network sometimes intersperses apolitical content with its more agenda-driven material. Many members post nearly identical comments at almost the same time. The text includes markers of automatic translation while error messages included as profile photos indicate the automated pulling of stock images.”</i><br><br> In this example analysts found an indicator of automated use of stock images in Facebook accounts; some of the accounts in the network appeared to have mistakenly uploaded error messages as profile pictures (T0145.007: Stock Image Account Imagery). The text posted by the accounts also appeared to have been translated using automation (T0085.008: Machine Translated Text). |
| [I00086 #WeAreNotSafe Exposing How a Post-October 7th Disinformation Network Operates on Israeli Social Media](../../generated_pages/incidents/I00086.md) | <i>“In the wake of the Hamas attack on October 7th, the Israel Defense Forces (IDF) Information Security Department revealed a campaign of Instagram accounts impersonating young, attractive Israeli women who were actively engaging Israeli soldiers, attempting to extract information through direct messages.<br><br> [...]<br><br> “Some profiles underwent a reverse-image search of their photos to ascertain their authenticity. Many of the images searched were found to be appropriated from genuine social media profiles or sites such as Pinterest. When this was the case, the account was marked as confirmed to be inauthentic. One innovative method involves using photos that are initially frames from videos, which allows for evading reverse searches in most cases. This is seen in Figure 4, where an image uploaded by an inauthentic account was a screenshot taken from a TikTok video.”</i><br><br> In this example accounts associated with an influence operation used account imagery showing <i>“young, attractive Israeli women”</i> (T0145.006: Attractive Person Account Imagery), with some of these assets taken from existing accounts not associated with the operation (T0145.001: Copy Account Imagery). |
| [I00087 Challenging Truth and Trust: A Global Inventory of Organized Social Media Manipulation](../../generated_pages/incidents/I00087.md) | <i>“In 2017, Tanya O'Carroll, a technology and human rights adviser for Amnesty International, published an investigation of the political impact of bots and trolls in Mexico (O'Carroll, 2017). An article by the BBC describes a video showing the operation of a "troll farm" in Mexico, where people were tweeting in support of Enrique Peña Nieto of the PRI in 2012 (Martinez, 2018).<br><br>“According to a report published by El País, the main target of parties' online strategies are young people, including 14 million new voters who are expected to play a decisive role in the outcome of the July 2018 election (Peinado et al., 2018). Thus, one of the strategies employed by these bots was the use of profile photos of attractive people from other countries (Soloff, 2017).”</i><br><br>In this example accounts copied the profile pictures of attractive people from other countries (T0145.001: Copy Account Imagery, T0145.006: Attractive Person Account Imagery). |
| [I00089 Hackers Use Fake Facebook Profiles of Attractive Women to Spread Viruses, Steal Passwords](../../generated_pages/incidents/I00089.md) | <I>“On Facebook, Rita, Alona and Christina appeared to be just like the millions of other U.S. citizens sharing their lives with the world. They discussed family outings, shared emojis and commented on each other's photographs.<br><br> “In reality, the three accounts were part of a highly-targeted cybercrime operation, used to spread malware that was able to steal passwords and spy on victims.<br><br> “Hackers with links to Lebanon likely ran the covert scheme using a strain of malware dubbed "Tempting Cedar Spyware," according to researchers from Prague-based anti-virus company Avast, which detailed its findings in a report released on Wednesday.<br><br> “In a honey trap tactic as old as time, the culprits' targets were mostly male, and lured by fake attractive women. <br><br> “In the attack, hackers would send flirtatious messages using Facebook to the chosen victims, encouraging them to download a second, booby-trapped, chat application known as Kik Messenger to have "more secure" conversations. Upon analysis, Avast experts found that "many fell for the trap.””</i><br><br> In this example threat actors took on the persona of a romantic suitor on Facebook, directing their targets to another platform (T0097.109: Romantic Suitor Persona, T0145.006: Attractive Person Account Imagery, T0143.002: Fabricated Persona). |

View file

@@ -7,9 +7,8 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00086 #WeAreNotSafe Exposing How a Post-October 7th Disinformation Network Operates on Israeli Social Media](../../generated_pages/incidents/I00086.md) | <i>“In the wake of the Hamas attack on October 7th, the Israel Defense Forces (IDF) Information Security Department revealed a campaign of Instagram accounts impersonating young, attractive Israeli women who were actively engaging Israeli soldiers, attempting to extract information through direct messages.<br><br> [...]<br><br> “Some profiles underwent a reverse-image search of their photos to ascertain their authenticity. Many of the images searched were found to be appropriated from genuine social media profiles or sites such as Pinterest. When this was the case, the account was marked as confirmed to be inauthentic. One innovative method involves using photos that are initially frames from videos, which allows for evading reverse searches in most cases. This is seen in Figure 4, where an image uploaded by an inauthentic account was a screenshot taken from a TikTok video.”</i><br><br> In this example accounts associated with an influence operation used account imagery showing <i>“young, attractive Israeli women”</i> (T0145.006: Attractive Person Account Imagery), with some of these assets taken from existing accounts not associated with the operation (T0145.001: Copy Account Imagery). |
| [I00087 Challenging Truth and Trust: A Global Inventory of Organized Social Media Manipulation](../../generated_pages/incidents/I00087.md) | <i>“In 2017, Tanya O'Carroll, a technology and human rights adviser for Amnesty International, published an investigation of the political impact of bots and trolls in Mexico (O'Carroll, 2017). An article by the BBC describes a video showing the operation of a "troll farm" in Mexico, where people were tweeting in support of Enrique Peña Nieto of the PRI in 2012 (Martinez, 2018).<br><br>“According to a report published by El País, the main target of parties' online strategies are young people, including 14 million new voters who are expected to play a decisive role in the outcome of the July 2018 election (Peinado et al., 2018). Thus, one of the strategies employed by these bots was the use of profile photos of attractive people from other countries (Soloff, 2017).”</i><br><br>In this example accounts copied the profile pictures of attractive people from other countries (T0145.001: Copy Account Imagery, T0145.006: Attractive Person Account Imagery). |
| [I00089 Hackers Use Fake Facebook Profiles of Attractive Women to Spread Viruses, Steal Passwords](../../generated_pages/incidents/I00089.md) | <I>“On Facebook, Rita, Alona and Christina appeared to be just like the millions of other U.S. citizens sharing their lives with the world. They discussed family outings, shared emojis and commented on each other's photographs.<br><br> “In reality, the three accounts were part of a highly-targeted cybercrime operation, used to spread malware that was able to steal passwords and spy on victims.<br><br> “Hackers with links to Lebanon likely ran the covert scheme using a strain of malware dubbed "Tempting Cedar Spyware," according to researchers from Prague-based anti-virus company Avast, which detailed its findings in a report released on Wednesday.<br><br> “In a honey trap tactic as old as time, the culprits' targets were mostly male, and lured by fake attractive women. <br><br> “In the attack, hackers would send flirtatious messages using Facebook to the chosen victims, encouraging them to download a second, booby-trapped, chat application known as Kik Messenger to have "more secure" conversations. Upon analysis, Avast experts found that "many fell for the trap.””</i><br><br> In this example threat actors took on the persona of a romantic suitor on Facebook, directing their targets to another platform (T0097.109: Romantic Suitor Persona, T0145.006: Attractive Person Account Imagery, T0143.002: Fabricated Persona). |
| [I00080 Hundreds Of Propaganda Accounts Targeting Iran And Qatar Have Been Removed From Facebook](../../generated_pages/incidents/I00080.md) | <i>“One example of a fake reporter account targeting Americans is “Jenny Powell,” a self-described Washington-based journalist, volunteer, and environmental activist. At first glance, Powell's Twitter timeline looks like it belongs to a young and eager reporter amplifying her interests. But her profile photo is a stock image, and many of her links go to the propaganda sites.<br><br>“Powell, who joined the platform just last month, shares links to stories from major US news media outlets, retweets local news about Washington, DC, and regularly promotes content from The Foreign Code and The Economy Club. Other fake journalist accounts behaved similarly to Powell and had generic descriptions. One of the accounts, for a fake Bruce Lopez in Louisiana, has a bio that describes him as a “Correspondent Traveler noun|linking verb|noun/verb/adjective|,” which appears to reveal the formula used to write Twitter bios for the accounts.”</I><br><br>This behaviour matches T0145.007: Stock Image Account Imagery because the account was identified as using a stock image as its profile picture. |
| [I00088 Much Ado About Somethings - China-Linked Influence Operation Endures Despite Takedown](../../generated_pages/incidents/I00088.md) | <i>“The broader War of Somethings (WoS) network, so dubbed because all the Facebook pages and user accounts in the network are connected to “The War of Somethings” page, behaves very similarly to previous Spamouflage campaigns. [Spamouflage is a coordinated inauthentic behaviour network attributed to the Chinese state.]<br><br> “Like other components of Spamouflage, the WoS network sometimes intersperses apolitical content with its more agenda-driven material. Many members post nearly identical comments at almost the same time. The text includes markers of automatic translation while error messages included as profile photos indicate the automated pulling of stock images.”</i><br><br> In this example analysts found an indicator of automated use of stock images in Facebook accounts; some of the accounts in the network appeared to have mistakenly uploaded error messages as profile pictures (T0145.007: Stock Image Account Imagery). The text posted by the accounts also appeared to have been translated using automation (T0085.008: Machine Translated Text). |