mirror of https://github.com/DISARMFoundation/DISARMframeworks.git (synced 2024-12-19 21:04:19 -05:00)
Technique T0143.002: Fabricated Persona
Summary: An individual or institution presenting a persona to which they have no legitimate claim is presenting a fabricated persona; for example, a person who presents themselves as a member of a country’s military without ever having worked with the military in any capacity (T0143.002: Fabricated Persona, T0097.105: Military Personnel).
Sometimes real people present fabricated personas: they can use their real names and photos on social media while also pretending to have credentials or traits they don’t have in real life.
Belongs to tactic stage: TA16
Incident | Descriptions given for this incident |
---|---|
I00069 Uncharmed: Untangling Iran's APT42 Operations | “In March 2023, [Iranian state-sponsored cyber espionage actor] APT42 sent a spear-phishing email with a fake Google Meet invitation, allegedly sent on behalf of Mona Louri, a likely fake persona leveraged by APT42, claiming to be a human rights activist and researcher. Upon entry, the user was presented with a fake Google Meet page and asked to enter their credentials, which were subsequently sent to the attackers.” In this example APT42, an Iranian state-sponsored cyber espionage actor, created an account which presented as a human rights activist (T0097.103: Activist Persona) and researcher (T0097.107: Researcher Persona). The analysts assert that the persona was likely fabricated (T0143.002: Fabricated Persona). |
I00074 The Tactics & Tropes of the Internet Research Agency | “The Black Matters Facebook Page [operated by Russia’s Internet Research Agency] explored several visual brand identities, moving from a plain logo to a gothic typeface on Jan 19th, 2016. On February 4th, 2016, the person who ran the Facebook Page announced the launch of the website, blackmattersus[.]com, emphasizing media distrust and a desire to build Black independent media; [“I DIDN’T BELIEVE THE MEDIA / SO I BECAME ONE”]” In this example an asset controlled by Russia’s Internet Research Agency began to present itself as a source of “Black independent media”, claiming that the media could not be trusted (T0097.208: Social Cause Persona, T0097.202: News Outlet Persona, T0143.002: Fabricated Persona). |
I00076 Network of Social Media Accounts Impersonates U.S. Political Candidates, Leverages U.S. and Israeli Media in Support of Iranian Interests | “In addition to directly posting material on social media, we observed some personas in the network [of inauthentic accounts attributed to Iran] leverage legitimate print and online media outlets in the U.S. and Israel to promote Iranian interests via the submission of letters, guest columns, and blog posts that were then published. We also identified personas that we suspect were fabricated for the sole purpose of submitting such letters, but that do not appear to maintain accounts on social media. The personas claimed to be based in varying locations depending on the news outlets they were targeting for submission; for example, a persona that listed their location as Seattle, WA in a letter submitted to the Seattle Times subsequently claimed to be located in Baytown, TX in a letter submitted to The Baytown Sun. Other accounts in the network then posted links to some of these letters on social media.” In this example actors fabricated individuals who lived in areas which were being targeted for influence through the use of letters to local papers (T0097.101: Local Persona, T0143.002: Fabricated Persona). |
I00077 Fronts & Friends: An Investigation into Two Twitter Networks Linked to Russian Actors | “Two accounts [in the second network of accounts taken down by Twitter] appear to have been operated by Oriental Review and the Strategic Culture Foundation, respectively. Oriental Review bills itself as an “open source site for free thinking”, though it trades in outlandish conspiracy theories and posts content bylined by fake people. Stanford Internet Observatory researchers and investigative journalists have previously noted the presence of content bylined by fake “reporter” personas tied to the GRU-linked front Inside Syria Media Center, posted on Oriental Review.” In an effort to make the Oriental Review’s stories appear more credible, the threat actors created fake journalists and pretended they wrote the articles on their website (aka “bylined” them). In DISARM terms, they fabricated journalists (T0143.002: Fabricated Persona, T0097.102: Journalist Persona), and then used these fabricated journalists to increase perceived legitimacy (T0097.202: News Outlet Persona, T0143.002: Fabricated Persona). |
I00078 Meta’s September 2020 Removal of Coordinated Inauthentic Behavior | “[Meta has] removed one Page, five Facebook accounts, one Group and three Instagram accounts for foreign or government interference which is coordinated inauthentic behavior on behalf of a foreign or government entity. This small network originated in Russia and focused primarily on Turkey and Europe, and also on the United States. “This operation relied on fake accounts — some of which had been already detected and removed by our automated systems — to manage their Page and their Group, and to drive people to their site purporting to be an independent think-tank based primarily in Turkey. These accounts posed as locals based in Turkey, Canada and the US. They also recruited people to write for their website. This network had almost no following on our platforms when we removed it.” Meta identified that a network of accounts originating in Russia was driving people off platform to a site which presented itself as a think-tank (T0097.204: Think Tank Persona). Meta did not make an attribution about the authenticity of this off-site think tank, so neither T0143.001: Authentic Persona nor T0143.002: Fabricated Persona is used here. Meta had access to technical data for accounts on its platform, and asserted that they were fabricated individuals posing as locals who recruited targets to write content for their website (T0097.101: Local Persona, T0097.106: Recruiter Persona, T0143.002: Fabricated Persona). |
I00079 Three thousand fake tanks | “The sixth [website to repost a confirmed false narrative investigated in this report] is an apparent think tank, the Center for Global Strategic Monitoring. This website describes itself, in English apparently written by a non-native speaker, as a “nonprofit and nonpartisan research and analysis institution dedicated to providing insights of the think tank community publications”. It does, indeed, publish think-tank reports on issues such as Turkey and US-China relations; however, the reports are the work of other think tanks, often unattributed (the two mentioned in this sentence were actually produced by the Brookings Institution, although the website makes no mention of the fact). It also fails to provide an address, or any other contact details other than an email, and its (long) list of experts includes entries apparently copied and pasted from other institutions. Thus, the “think tank” website which shared the fake story appears to be a fake itself.” In this example a website which amplified a false narrative presented itself as a think tank (T0097.204: Think Tank Persona). This is an entirely fabricated persona (T0143.002: Fabricated Persona); it republished content from other think tanks without attribution (T0084.002: Plagiarise Content) and fabricated experts (T0097.108: Expert Persona, T0143.002: Fabricated Persona) to make it more believable that they were a real think tank. |
I00080 Hundreds Of Propaganda Accounts Targeting Iran And Qatar Have Been Removed From Facebook | “One example of a fake reporter account targeting Americans is “Jenny Powell,” a self-described Washington-based journalist, volunteer, and environmental activist. At first glance, Powell’s Twitter timeline looks like it belongs to a young and eager reporter amplifying her interests. But her profile photo is a stock image, and many of her links go to the propaganda sites. “Powell, who joined the platform just last month, shares links to stories from major US news media outlets, retweets local news about Washington, DC, and regularly promotes content from The Foreign Code and The Economy Club. Other fake journalist accounts behaved similarly to Powell and had generic descriptions. One of the accounts, for a fake Bruce Lopez in Louisiana, has a bio that describes him as a “Correspondent Traveler (noun) (linking verb) (noun/verb/adjective),” which appears to reveal the formula used to write Twitter bios for the accounts.” The Jenny Powell account used in this influence operation presents as both a journalist and an activist (T0097.102: Journalist Persona, T0097.103: Activist Persona, T0143.002: Fabricated Persona). This example shows how threat actors can easily follow a template to present a fabricated persona to their target audience (T0144.002: Persona Template). |
I00081 Belarus KGB created fake accounts to criticize Poland during border crisis, Facebook parent company says | “Meta said it also removed 31 Facebook accounts, four groups, two events and four Instagram accounts that it believes originated in Poland and targeted Belarus and Iraq. Those allegedly fake accounts posed as Middle Eastern migrants posting about the border crisis. Meta did not link the accounts to a specific group. ““These fake personas claimed to be sharing their own negative experiences of trying to get from Belarus to Poland and posted about migrants’ difficult lives in Europe,” Meta said. “They also posted about Poland’s strict anti-migrant policies and anti-migrant neo-Nazi activity in Poland. They also shared links to news articles criticizing the Belarusian government’s handling of the border crisis and off-platform videos alleging migrant abuse in Europe.”” In this example accounts falsely presented themselves as having local insight into the border crisis narrative (T0097.101: Local Persona, T0143.002: Fabricated Persona). |
I00086 #WeAreNotSafe – Exposing How a Post-October 7th Disinformation Network Operates on Israeli Social Media | In this report accounts were identified as part of “a sophisticated and extensive coordinated network orchestrating a disinformation campaign targeting Israeli digital spaces since October 7th, 2023”. “A core component of the detection methodology was applying qualitative linguistic analysis. This involved checking the fingerprint of language, syntax, and style used in the comments and profile of the suspected account. Each account bio consistently incorporated a combination of specific elements: emojis, nationality, location, educational institution or occupation, age, and a personal quote, sports team or band. The recurrence of this specific formula across multiple accounts hinted at a standardized template for bio construction.” This example shows how actors can follow a templated formula to present a persona on social media platforms (T0143.002: Fabricated Persona, T0144.002: Persona Template). |
I00089 Hackers Use Fake Facebook Profiles of Attractive Women to Spread Viruses, Steal Passwords | “On Facebook, Rita, Alona and Christina appeared to be just like the millions of other U.S citizens sharing their lives with the world. They discussed family outings, shared emojis and commented on each other's photographs. “In reality, the three accounts were part of a highly-targeted cybercrime operation, used to spread malware that was able to steal passwords and spy on victims. “Hackers with links to Lebanon likely ran the covert scheme using a strain of malware dubbed "Tempting Cedar Spyware," according to researchers from Prague-based anti-virus company Avast, which detailed its findings in a report released on Wednesday. “In a honey trap tactic as old as time, the culprits' targets were mostly male, and lured by fake attractive women. “In the attack, hackers would send flirtatious messages using Facebook to the chosen victims, encouraging them to download a second, booby-trapped, chat application known as Kik Messenger to have "more secure" conversations. Upon analysis, Avast experts found that "many fell for the trap.”” In this example threat actors took on the persona of a romantic suitor on Facebook, directing their targets to another platform (T0097.109: Romantic Suitor Persona, T0145.006: Attractive Person Account Imagery, T0143.002: Fabricated Persona). |
I00091 Facebook uncovers Chinese network behind fake expert | “Earlier in July [2021], an account posing as a Swiss biologist called Wilson Edwards had made statements on Facebook and Twitter that the United States was applying pressure on the World Health Organization scientists who were studying the origins of Covid-19 in an attempt to blame the virus on China. “State media outlets, including CGTN, Shanghai Daily and Global Times, had cited the so-called biologist based on his Facebook profile. “However, the Swiss embassy said in August that the person likely did not exist, as the Facebook account was opened only two weeks prior to its first post and only had three friends. “It added "there was no registry of a Swiss citizen with the name "Wilson Edwards" and no academic articles under the name", and urged Chinese media outlets to take down any mention of him. [...] “It also said that his profile photo also appeared to have been generated using machine-learning capabilities.” In this example an account created on Facebook presented itself as a Swiss biologist to present a narrative related to COVID-19 (T0143.002: Fabricated Persona, T0097.107: Researcher Persona). It used an AI-generated profile picture to disguise itself (T0145.002: AI-Generated Account Imagery). |
I00095 Meta: Chinese disinformation network was behind London front company recruiting content creators | “A Chinese disinformation network operating fictitious employee personas across the internet used a front company in London to recruit content creators and translators around the world, according to Meta. “The operation used a company called London New Europe Media, registered to an address on the upmarket Kensington High Street, that attempted to recruit real people to help it produce content. It is not clear how many people it ultimately recruited. “London New Europe Media also “tried to engage individuals to record English-language videos scripted by the network,” in one case leading to a recording criticizing the United States being posted on YouTube, said Meta”. In this example a front company was used (T0097.205: Business Persona) to enable actors to recruit targets for producing content (T0097.106: Recruiter Persona, T0143.002: Fabricated Persona). |
I00096 China ramps up use of AI misinformation | The Microsoft Threat Analysis Centre (MTAC) published a report documenting the use of AI by pro-Chinese threat actors: On 13 January, Spamouflage [(a Pro-Chinese Communist Party actor)] posted audio clips to YouTube of independent candidate [for Taiwan’s Jan 2024 presidential election] Terry Gou – who also founded electronics giant Foxconn – in which Gou endorsed another candidate in the race. This clip was almost certainly AI-generated, and it was swiftly removed by YouTube. A fake letter purporting to be from Gou, endorsing the same candidate, had already circulated – Gou had of course made no such endorsement. Here Spamouflage used an account on YouTube to post AI-generated audio impersonating an electoral candidate (T0146: Account Asset, T0152.006: Video Platform, T0115: Post Content, T0088.001: Develop AI-Generated Audio (Deepfakes), T0143.003: Impersonated Persona, T0097.110: Party Official Persona). Spamouflage also exploited AI-powered video platform CapCut – which is owned by TikTok backers ByteDance – to generate fake news anchors which were used in a variety of campaigns targeting the various presidential candidates in Taiwan. Spamouflage created accounts on CapCut, which it used to create AI-generated videos of fabricated news anchors (T0146: Account Asset, T0154.002: AI Media Platform, T0087.001: Develop AI-Generated Video (Deepfakes), T0143.002: Fabricated Persona, T0097.102: Journalist Persona). |
I00120 factcheckUK or fakecheckUK? Reinventing the political faction as the impartial factchecker | Ahead of the 2019 UK Election during a leader’s debate, the Conservative party rebranded their “Conservative Campaign Headquarters Press” account to “FactCheckUK”: The evening of the 19th November 2019 saw the first of three Leaders’ Debates on ITV, starting at 8pm and lasting for an hour. Current Prime Minister and leader of the Conservatives, Boris Johnson faced off against Labour party leader, Jeremy Corbyn. Plenty of people will have been watching the debate live, but a good proportion were “watching” (er, “twitching”?) via Twitter. This is something I’ve done in the past for certain shows. In some cases I just can’t watch or listen, but I can read, and in other cases, the commentary is far more interesting and entertaining than the show itself will ever be. This, for me, is just such a case. But very quickly, all eyes turned upon a modestly sized account with the handle @CCHQPress. That’s short for Conservative Campaign Headquarters Press. According to their (current!) Twitter bio, they are based in Westminster and they provide “snippets of news and commentary from CCHQ” to their 75k followers. That is, until a few minutes into the debate. All at once, like a person throwing off their street clothes to reveal some sinister new identity underneath, @CCHQPress abruptly shed its name, blue Conservative logo, Boris Johnson banner, and bio description. Moments later, it had entirely reinvented itself. The purple banner was emblazoned with white font that read “✓ factcheckUK [with a “FROM CCHQ” subheading]”. The matching profile picture was a white tick in a purple circle. The bio was updated to: “Fact checking Labour from CCHQ”. And the name now read factcheckUK, with the customary Twitter blue (or white depending on your phone settings!) validation tick still after it. In this example an existing verified social media account on Twitter was repurposed to inauthentically present itself as a Fact Checking service (T0151.008: Microblogging Platform, T0150.003: Pre-Existing Asset, T0146.003: Verified Account Asset, T0097.203: Fact Checking Organisation Persona, T0143.002: Fabricated Persona). |
I00121 Operation Overload: how pro-Russian actors flood newsrooms with fake content and seek to divert their efforts | The unique aspect of Operation Overload is a barrage of emails sent to newsrooms and fact-checkers across Europe. The authors of these messages urge recipients to verify content allegedly found online. The email subject lines often include an incitement to verify the claims briefly described in the message body. This is followed by a short list of links directing recipients to posts on Telegram, X, or known pro-Russian websites, including Pravda and Sputnik. We have collected 221 emails sent to 20 organisations. The organisations mostly received identical emails urging them to fact-check specific false stories, which demonstrates that the emails were sent as part of a larger coordinated campaign. [...] The authors of the emails do not hide their intention to see the fake content widely spread. In February 2024, a journalist at the German outlet CORRECTIV engaged with the sender of one of the emails, providing feedback on the narratives which were originally sent. CORRECTIV received a response from the same Gmail address, initially expressing respect and trust in CORRECTIV’s assessment, while asking: “is it possible for your work to be seen by as many people as possible?”, thereby clearly stating the goal of the operation. [...] All the emails come from authors posing as concerned citizens. All emails are sent with Gmail accounts, which is typical for personal use. This makes it challenging to identify the individuals behind these emails, as anyone can open a Gmail account for free. The email headers indicate that the messages were sent from the Gmail interface, not from a personal client which would disclose the sender’s IP address. In this example, threat actors used Gmail accounts (T0146.001: Free Account Asset, T0097.100: Individual Persona, T0143.002: Fabricated Persona, T0153.001: Email Platform) to target journalists and fact-checkers, with the apparent goal of having them amplify operation narratives through fact checks. |
I00125 The Agency | In 2014 threat actors attributed to Russia spread the false narrative that a local chemical plant had leaked toxic fumes. This report discusses aspects of the operation: [The chemical plant leak] hoax was just one in a wave of similar attacks during the second half of last year. On Dec. 13, two months after a handful of Ebola cases in the United States touched off a minor media panic, many of the same Twitter accounts used to spread the Columbian Chemicals hoax began to post about an outbreak of Ebola in Atlanta. [...] Again, the attention to detail was remarkable, suggesting a tremendous amount of effort. A YouTube video showed a team of hazmat-suited medical workers transporting a victim from the airport. Beyoncé’s recent single “7/11” played in the background, an apparent attempt to establish the video’s contemporaneity. A truck in the parking lot sported the logo of the Hartsfield-Jackson Atlanta International Airport. Accounts which previously presented as Louisiana locals were repurposed for use in a different campaign, this time presenting as locals to Atlanta, a place over 500 miles away from Louisiana and in a different timezone (T0146: Account Asset, T0097.101: Local Persona, T0143.002: Fabricated Persona, T0151.008: Microblogging Platform, T0150.004: Repurposed Asset). A video was created which appeared to support the campaign’s narrative (T0087: Develop Video-Based Content), with great attention given to small details which made the video appear more legitimate. |
Counters | Response types |
---|---|
DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW
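Note on the detection methodology quoted for incident I00086: the report identified a standardized bio-construction template by the recurring combination of elements (emoji, location, occupation, age, personal quote) across account bios. A crude structural-fingerprint check can surface such clusters programmatically. The Python sketch below is a hypothetical illustration only; the element names, regexes, word lists, and thresholds are my own assumptions, not tooling from the report or the DISARM framework.

```python
import re

# Hypothetical heuristic inspired by the bio-template detection described for
# incident I00086: flag account bios that share the same structural formula
# (emoji + age + personal quote + occupation). The regexes, word list, and
# thresholds are illustrative assumptions, not DISARM or report tooling.

EMOJI = re.compile("[\U0001F300-\U0001FAFF\u2600-\u27BF]")

def bio_fingerprint(bio: str) -> frozenset:
    """Return the set of template elements detected in a bio."""
    elements = set()
    if EMOJI.search(bio):
        elements.add("emoji")
    if re.search(r"\b\d{2}\b", bio):            # bare two-digit number ~ age
        elements.add("age")
    if re.search(r"[\"“”'].+[\"“”']", bio):     # quoted personal motto
        elements.add("quote")
    if re.search(r"\b(student|engineer|teacher|journalist)\b", bio, re.I):
        elements.add("occupation")              # tiny illustrative word list
    return frozenset(elements)

def templated_clusters(bios, min_size=3, min_elements=3):
    """Group bios sharing an identical fingerprint; large clusters of
    element-rich fingerprints hint at a shared construction template."""
    clusters = {}
    for bio in bios:
        clusters.setdefault(bio_fingerprint(bio), []).append(bio)
    return {fp: group for fp, group in clusters.items()
            if len(group) >= min_size and len(fp) >= min_elements}
```

The report's analysts applied qualitative linguistic analysis (language, syntax, and style), which this structure-only heuristic merely approximates; real use would need per-platform tuning and a far richer element vocabulary.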