
Technique T0145.001: Copy Account Imagery

  • Summary: Account imagery copied from an existing account.

    Analysts may use reverse image search tools to try to identify previous uses of account imagery (e.g. a profile picture) by other accounts (see the sketch after this list for one way to complement such searches locally).

    Threat actors have been known to copy existing accounts' imagery to impersonate those accounts, or to provide imagery for unrelated accounts which aren't intended to impersonate the original asset's owner.

    Associated Techniques and Sub-techniques
    T0143.003: Impersonated Persona: Actors may copy an existing account's imagery in an attempt to impersonate it.
    T0143.004: Parody Persona: Actors may copy an existing account's imagery as part of a parody of that account.
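
    As a minimal sketch of how an analyst might complement reverse image search locally, the following Python snippet compares a candidate profile picture against a catalogue of known account imagery using a perceptual hash. The Pillow and ImageHash libraries are assumed to be available, and all file names are hypothetical.

    ```python
    # Minimal sketch (not DISARM tooling): flag candidate profile pictures that
    # match a catalogue of known account imagery via perceptual hashing.
    # Assumes the Pillow and ImageHash libraries; all file names are hypothetical.
    from PIL import Image
    import imagehash

    # Perceptual hashes of imagery already catalogued from legitimate accounts.
    known_imagery = {
        "brand_logo.png": imagehash.phash(Image.open("brand_logo.png")),
        "candidate_portrait.jpg": imagehash.phash(Image.open("candidate_portrait.jpg")),
    }

    def find_likely_copies(candidate_path: str, max_distance: int = 8):
        """Return (name, distance) pairs for known images whose pHash is within
        max_distance bits of the candidate's hash. Small Hamming distances
        suggest copied or lightly edited imagery; the cutoff is a tunable
        assumption, not an established rule."""
        candidate_hash = imagehash.phash(Image.open(candidate_path))
        return [
            (name, candidate_hash - stored)  # `-` yields the Hamming distance
            for name, stored in known_imagery.items()
            if candidate_hash - stored <= max_distance
        ]

    print(find_likely_copies("suspicious_profile.png"))
    ```

    A small Hamming distance suggests copied or lightly edited imagery; a web-scale reverse image search remains the primary tool, with local hashing useful for rechecking imagery already collected from an operation.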

  • Belongs to tactic stage: TA15

Incident descriptions given for this technique
I00070 Eli Lilly Clarifies It's Not Offering Free Insulin After Tweet From Fake Verified Account—As Chaos Unfolds On Twitter “Twitter Blue launched [November 2022], giving any users who pay $8 a month the ability to be verified on the site, a feature previously only available to public figures, government officials and journalists as a way to show they are who they claim to be.

“[A day after the launch], an account with the handle @EliLillyandCo labeled itself with the name “Eli Lilly and Company,” and by using the same logo as the company in its profile picture and with the verification checkmark, was indistinguishable from the real company (the picture has since been removed and the account has labeled itself as a parody profile).

The parody account tweeted “we are excited to announce insulin is free now.””


In this example an account impersonated the pharmaceutical company Eli Lilly (T0097.205: Business Persona, T0143.003: Impersonated Persona) by copying its name and profile picture (T0145.001: Copy Account Imagery) and by paying for verification.
I00076 Network of Social Media Accounts Impersonates U.S. Political Candidates, Leverages U.S. and Israeli Media in Support of Iranian Interests “Some Twitter accounts in the network [of inauthentic accounts attributed to Iran] impersonated Republican political candidates that ran for House of Representatives seats in the 2018 U.S. congressional midterms. These accounts appropriated the candidates' photographs and, in some cases, plagiarized tweets from the real individuals' accounts. Aside from impersonating real U.S. political candidates, the behavior and activity of these accounts resembled that of the others in the network.

“For example, the account @livengood_marla impersonated Marla Livengood, a 2018 candidate for California's 9th Congressional District, using a photograph of Livengood and a campaign banner for its profile and background pictures. The account began tweeting on Sept. 24, 2018, with its first tweet plagiarizing one from Livengood's official account earlier that month”

[...]

“In another example, the account @ButlerJineea impersonated Jineea Butler, a 2018 candidate for New York's 13th Congressional District, using a photograph of Butler for its profile picture and incorporating her campaign slogans into its background picture, as well as claiming in its Twitter bio to be a “US House candidate, NY-13” and linking to Butler's website, jineeabutlerforcongress[.]com.”


In this example actors impersonated existing political candidates (T0097.110: Member of Political Party Persona, T0143.003: Impersonated Persona), strengthening the impersonation by copying the legitimate accounts' imagery (T0145.001: Copy Account Imagery) and plagiarising their previous posts (T0084.002: Plagiarise Content).
I00086 #WeAreNotSafe Exposing How a Post-October 7th Disinformation Network Operates on Israeli Social Media “In the wake of the Hamas attack on October 7th, the Israel Defense Forces (IDF) Information Security Department revealed a campaign of Instagram accounts impersonating young, attractive Israeli women who were actively engaging Israeli soldiers, attempting to extract information through direct messages.

[...]

“Some profiles underwent a reverse-image search of their photos to ascertain their authenticity. Many of the images searched were found to be appropriated from genuine social media profiles or sites such as Pinterest. When this was the case, the account was marked as confirmed to be inauthentic. One innovative method involves using photos that are initially frames from videos, which allows for evading reverse searches in most cases. This is seen in Figure 4, where an image uploaded by an inauthentic account was a screenshot taken from a TikTok video.”


In this example accounts associated with an influence operation used account imagery showing “young, attractive Israeli women” (T0145.006: Attractive Person Account Imagery), with some of these assets taken from existing accounts not associated with the operation (T0145.001: Copy Account Imagery).
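
The evasion method quoted above (using video frames as profile pictures so that reverse image search finds no match) can sometimes be countered when an analyst suspects a specific source video. As a hedged sketch, assuming OpenCV alongside the Pillow/ImageHash stack from the earlier example, and with hypothetical file names:

```python
# Hedged sketch: test whether a profile picture is a frame from a suspected
# source video, reusing the perceptual-hash approach above. Assumes OpenCV
# (cv2), Pillow, and ImageHash; file names and the sampling rate are
# hypothetical.
import cv2
from PIL import Image
import imagehash

profile_hash = imagehash.phash(Image.open("suspicious_profile.png"))

cap = cv2.VideoCapture("suspected_source_video.mp4")
frame_index, matching_frames = 0, []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_index % 30 == 0:  # sample ~1 frame per second for 30 fps footage
        # OpenCV decodes frames as BGR arrays; convert to RGB before hashing.
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        frame_hash = imagehash.phash(Image.fromarray(rgb))
        if profile_hash - frame_hash <= 8:  # tunable Hamming-distance cutoff
            matching_frames.append(frame_index)
    frame_index += 1
cap.release()

print(matching_frames)  # frame indices that plausibly match the profile image
```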
I00087 Challenging Truth and Trust: A Global Inventory of Organized Social Media Manipulation “In 2017, Tanya O'Carroll, a technology and human rights adviser for Amnesty International, published an investigation of the political impact of bots and trolls in Mexico (O'Carroll, 2017). An article by the BBC describes a video showing the operation of a "troll farm" in Mexico, where people were tweeting in support of Enrique Peña Nieto of the PRI in 2012 (Martinez, 2018).

“According to a report published by El País, the main target of parties' online strategies are young people, including 14 million new voters who are expected to play a decisive role in the outcome of the July 2018 election (Peinado et al., 2018). Thus, one of the strategies employed by these bots was the use of profile photos of attractive people from other countries (Soloff, 2017).”


In this example accounts copied the profile pictures of attractive people from other countries (T0145.001: Copy Account Imagery, T0145.006: Attractive Person Account Imagery).
Counters | Response types
-------- | --------------

DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW