<td>Note: In each case, depending on the platform there may be a way to identify a fence-sitter. For example, online polls may have a neutral option or a "somewhat this-or-that" option, and may reveal who voted for that to all visitors. This information could be of use to data analysts. In TA08-11, the engagement level of victims could be identified to detect and respond to increasing engagement.</td>
<td>Original Comment: Shortcomings: intentional falsehood. Doesn't solve accuracy. Can't be mandatory. The technique should be framed in terms of "strategic inoculation": raising the standards of evidence people expect when consuming news.</td>
<td>Unsure I understood the original intention or what it applied to, so the techniques listed (10, 39, 43, 57, 61) reflect my interpretation: that we want to track ignorant agents who fall into the enemy's trap and impose a cost on financing/reposting/helping the adversary via public shaming or other means.</td>
<td></td>
<td>TA06 Develop Content</td>
<td>D01</td>
</tr>
<tr>
<td><ahref="detections/F00039.md">F00039</a></td>
<td>standards to track image/video deepfakes - industry</td>
<td></td>
<td></td>
<td>TA06 Develop Content</td>
<td>D01</td>
</tr>
<tr>
<td><ahref="detections/F00040.md">F00040</a></td>
<td>Unalterable metadata signature on image origins and provenance</td>
<td>Deplatform People: This technique needs to be a bit more specific to distinguish it from "account removal", DDoS, and other techniques that get more specific when applied to content. For example, other ways of deplatforming people include attacking their sources of funds, their allies, their followers, etc.</td>
<td>I assumed this was a transcript error. Otherwise, "Identify Susceptible Influences", as in the various methods of influence that may work against a victim, could also be a technique. It was not a transcript error, though: the original note says "influencers", as in find people of influence who might be targeted.</td>
<td>A developing methodology for identifying statistical differences in how social groups use language, and for quantifying how common those differences are within a larger population. In essence, it hypothesises how much affinity might exist for a specific group within a general population, based on the language its members employ.</td>
<td>Track Russian media and develop analytic methods.</td>
<td>To effectively counter Russian propaganda, it will be critical to track Russian influence efforts. The information requirements are varied and include the following: • Identify fake-news stories and their sources. • Understand narrative themes and content that pervade various Russian media sources. • Understand the broader Russian strategy that underlies tactical propaganda messaging.</td>
<td>Local influencers detected via Twitter networks are likely local influencers in other online and off-line channels as well. In addition, the content and themes gleaned from Russia and Russia-supporting populations, as well as anti-Russia activists, likely swirl in other online and off-line mediums as well.</td>
<td></td>
<td></td>
<td>D01</td>
</tr>
<tr>
<td><ahref="detections/F00072.md">F00072</a></td>
<td>network analysis to identify central users in the pro-Russia activist community.</td>
<td>It is possible that some of these are bots or trolls and could be flagged for suspension for violating Twitter’s terms of service.</td>
<td></td>
<td></td>
<td>D01</td>
</tr>
<tr>
<td><ahref="detections/F00073.md">F00073</a></td>
<td>collect intel/recon on black/covert content creators/manipulators</td>
<td>Players at the level of covert attribution, referred to as “black” in the grayscale of deniability, produce content on user-generated media, such as YouTube, but also add fear-mongering commentary to and amplify content produced by others and supply exploitable content to data dump websites. These activities are conducted by a network of trolls, bots, honeypots, and hackers.</td>
<td>brand ambassador programmes could be used with influencers across a variety of social media channels. It could also target other prominent experts, such as academics, business leaders, and other potentially prominent people. Authorities must ultimately take care in implementing such a programme given the risk that contact with U.S. or NATO authorities might damage influencer reputations. Engagements must consequently be made with care, and, if possible, government interlocutors should work through local NGOs.</td>
<td>Significant amounts of quality open-source information are now available and should be leveraged to build products and analysis prior to problem prioritisation in the areas of observation, attribution, and intent. Successfully distinguishing the grey zone campaign signal from the global noise requires action across the entirety of the national security community. Policy, process, and tools must all adapt and evolve to detect, discern, and act upon a new type of signal.</td>
<td>Target the audience connected to the "useful idiots" rather than the specific profiles, because the active presence of such sources complicates the targeting of Russian propaganda: it is often difficult to discriminate between authentic views and opinions on the internet and those disseminated by the Russian state.</td>
<td>Grey zone threats are challenging given that warning requires detection of a weak signal through global noise and across threat vectors and regional boundaries. Three interconnected grey zone elements characterise the nature of the activity. Temporality: the nature of grey zone threats truly requires a "big picture view" over long timescales and across regions and functional topics. Attribution: requiring an "almost certain" or "nearly certain" analytic assessment before acting costs time and analytic effort. Intent: judgement of adversarial intent to conduct grey zone activity. Indeed, the purpose of countering grey zone threats is to deter adversaries from fulfilling their intent to act. While attribution is one piece of the puzzle, closing the space around intent often means synthesising multiple relevant indicators and warnings, including the state's geopolitical ambitions, military ties, trade and investment, level of corruption, and media landscape, among others.</td>
<td>Develop an intelligence-based understanding of foreign actors’ motivations, psychologies, and societal and geopolitical contexts. Leverage artificial intelligence to identify patterns and infer competitors’ intent.</td>
<td>The United States has not adequately adapted its indicators and thresholds for warning policymakers to account for grey zone tactics. Competitors have undertaken a marked shift to slow-burn, deceptive, non-military, and indirect challenges to U.S. interests. Relative to traditional security indicators and warnings, these are more numerous and harder to detect, and they make it difficult for analysts to infer intent.</td>
<td>Revitalise an "active measures working group".</td>
<td>Recognise campaigns from weak signals, including rivals' intent, capability, interactive effects, and impact on U.S. interests... focus on the adversarial covert action aspects of campaigning.</td>
<td>"Grey zone" is second level of content producers and circulators, composed of outlets with uncertain attribution. This category covers conspiracy websites, far-right or far-left websites, news aggregators, and data dump websites</td>
<td>This might include working with relevant technology firms to ensure that contracted analytic support is available. Contracted support is reportedly valuable because the technology to monitor social media data is continually evolving; such firms can provide the expertise to help identify and analyse trends, and they can more effectively stay abreast of changing systems and develop new models as required.</td>