New version V1.6 of Red Framework to expand roster of non-content assets and improve interoperability with Meta Online Operations Kill Chain

Stephen Campbell 2024-11-19 16:02:48 -05:00
parent 4f14d5f2c9
commit 964938bd15
207 changed files with 5628 additions and 1073 deletions

File diff suppressed because it is too large.


@@ -1,44 +1,77 @@
P01,P01,P02,P02,P02,P03,P03,P03,P03,P04,P01,P02,P02,P02,P03,P03
TA01,TA02,TA05,TA06,TA07,TA08,TA09,TA10,TA11,TA12,TA13,TA14,TA15,TA16,TA17,TA18
T0073,T0002,T0016,T0015,T0029,T0020,T0114,T0017,T0059,T0132,T0072,T0003,T0007,T0097,T0039,T0047
T0074,T0066,T0018,T0023,T0043,T0042,T0114.001,T0017.001,T0060,T0132.001,T0072.001,T0004,T0010,T0097.100,T0049,T0048
T0074.001,T0075,T0101,T0023.001,T0043.001,T0044,T0114.002,T0057,T0128,T0132.002,T0072.002,T0022,T0013,T0097.101,T0049.001,T0048.001
T0074.002,T0075.001,T0102,T0023.002,T0043.002,T0045,T0115,T0057.001,T0128.001,T0132.003,T0072.003,T0022.001,T0014,T0097.102,T0049.002,T0048.002
T0074.003,T0076,T0102.001,T0084,T0103,T0046,T0115.001,T0057.002,T0128.002,T0133,T0072.004,T0022.002,T0014.001,T0097.103,T0049.003,T0048.003
T0074.004,T0077,T0102.002,T0084.001,T0103.001,,T0115.002,T0061,T0128.003,T0133.001,T0072.005,T0040,T0014.002,T0097.104,T0049.004,T0048.004
,T0078,T0102.003,T0084.002,T0103.002,,T0115.003,T0126,T0128.004,T0133.002,T0080,T0068,T0065,T0097.105,T0049.005,T0123
,T0079,,T0084.003,T0104,,T0116,T0126.001,T0128.005,T0133.003,T0080.001,T0082,T0090,T0097.106,T0049.006,T0123.001
,T0135,,T0084.004,T0104.001,,T0116.001,T0126.002,T0129,T0133.004,T0080.002,T0083,T0090.001,T0097.107,T0049.007,T0123.002
,T0135.001,,T0085,T0104.002,,T0117,T0127,T0129.001,T0133.005,T0080.003,,T0090.002,T0097.108,T0049.008,T0123.003
,T0135.002,,T0085.001,T0104.003,,,T0127.001,T0129.002,T0134,T0080.004,,T0090.003,T0097.109,T0118,T0123.004
,T0135.003,,T0085.003,T0104.004,,,T0127.002,T0129.003,T0134.001,T0080.005,,T0090.004,T0097.110,T0119,T0124
,T0135.004,,T0085.004,T0104.005,,,,T0129.004,T0134.002,T0081,,T0091,T0097.111,T0119.001,T0124.001
,T0136,,T0085.005,T0104.006,,,,T0129.005,,T0081.001,,T0091.001,T0097.112,T0119.002,T0124.002
,T0136.001,,T0085.006,T0105,,,,T0129.006,,T0081.002,,T0091.002,T0097.200,T0119.003,T0124.003
,T0136.002,,T0085.007,T0105.001,,,,T0129.007,,T0081.003,,T0091.003,T0097.201,T0120,T0125
,T0136.003,,T0085.008,T0105.002,,,,T0129.008,,T0081.004,,T0092,T0097.202,T0120.001,
,T0136.004,,T0086,T0105.003,,,,T0129.009,,T0081.005,,T0092.001,T0097.203,T0120.002,
,T0136.005,,T0086.001,T0106,,,,T0129.010,,T0081.006,,T0092.002,T0097.204,T0121,
,T0136.006,,T0086.002,T0106.001,,,,T0130,,T0081.007,,T0092.003,T0097.205,T0121.001,
,T0136.007,,T0086.003,T0107,,,,T0130.001,,T0081.008,,T0093,T0097.206,T0122,
,T0136.008,,T0086.004,T0108,,,,T0130.002,,,,T0093.001,T0097.207,,
,T0137,,T0087,T0109,,,,T0130.003,,,,T0093.002,T0097.208,,
,T0137.001,,T0087.001,T0110,,,,T0130.004,,,,T0094,T0098,,
,T0137.002,,T0087.002,T0111,,,,T0130.005,,,,T0094.001,T0098.001,,
,T0137.003,,T0088,T0111.001,,,,T0131,,,,T0094.002,T0098.002,,
,T0137.004,,T0088.001,T0111.002,,,,T0131.001,,,,T0095,T0100,,
,T0137.005,,T0088.002,T0111.003,,,,T0131.002,,,,T0096,T0100.001,,
,T0137.006,,T0089,T0112,,,,,,,,T0096.001,T0100.002,,
,T0138,,T0089.001,,,,,,,,,T0096.002,T0100.003,,
,T0138.001,,T0089.003,,,,,,,,,T0113,T0143,,
,T0138.002,,,,,,,,,,,T0141,T0143.001,,
,T0138.003,,,,,,,,,,,T0141.001,T0143.002,,
,T0139,,,,,,,,,,,T0141.002,T0143.003,,
,T0139.001,,,,,,,,,,,T0145,T0143.004,,
,T0139.002,,,,,,,,,,,T0145.001,T0144,,
,T0139.003,,,,,,,,,,,T0145.002,T0144.001,,
,T0140,,,,,,,,,,,T0145.003,T0144.002,,
,T0140.001,,,,,,,,,,,T0145.004,,,
,T0140.002,,,,,,,,,,,T0145.005,,,
,T0140.003,,,,,,,,,,,T0145.006,,,
,,,,,,,,,,,,T0145.007,,,
T0073,T0002,T0016,T0015,T0029,T0020,T0114,T0017,T0059,T0132,T0072,T0003,T0010,T0097,T0039,T0047
T0074,T0066,T0018,T0015.001,T0107,T0042,T0114.001,T0017.001,T0060,T0132.001,T0072.001,T0004,T0014,T0097.100,T0049,T0048
T0074.001,T0075,T0101,T0015.002,T0109,T0044,T0114.002,T0057,T0128,T0132.002,T0072.002,T0022,T0014.001,T0097.101,T0049.001,T0048.001
T0074.002,T0075.001,T0102,T0023,T0110,T0045,T0115,T0057.001,T0128.001,T0132.003,T0072.003,T0022.001,T0014.002,T0097.102,T0049.002,T0048.002
T0074.003,T0076,T0102.001,T0023.001,T0111,T0046,T0115.001,T0057.002,T0128.002,T0133,T0072.004,T0022.002,T0065,T0097.103,T0049.003,T0048.003
T0074.004,T0077,T0102.002,T0023.002,T0111.001,,T0115.002,T0061,T0128.003,T0133.001,T0072.005,T0040,T0091,T0097.104,T0049.004,T0048.004
,T0078,T0102.003,T0084,T0111.002,,T0115.003,T0126,T0128.004,T0133.002,T0080,T0068,T0091.001,T0097.105,T0049.005,T0123
,T0079,,T0084.001,T0111.003,,T0116,T0126.001,T0128.005,T0133.003,T0080.001,T0082,T0091.002,T0097.106,T0049.006,T0123.001
,T0135,,T0084.002,T0151,,T0116.001,T0126.002,T0129,T0133.004,T0080.002,T0083,T0091.003,T0097.107,T0049.007,T0123.002
,T0135.001,,T0084.003,T0151.001,,T0117,T0127,T0129.001,T0133.005,T0080.003,,T0092,T0097.108,T0049.008,T0123.003
,T0135.002,,T0084.004,T0151.002,,,T0127.001,T0129.002,T0134,T0080.004,,T0092.001,T0097.109,T0118,T0123.004
,T0135.003,,T0085,T0151.003,,,T0127.002,T0129.003,T0134.001,T0080.005,,T0092.002,T0097.110,T0119,T0124
,T0135.004,,T0085.001,T0151.004,,,,T0129.004,T0134.002,T0081,,T0092.003,T0097.111,T0119.001,T0124.001
,T0136,,T0085.003,T0151.005,,,,T0129.005,,T0081.001,,T0093,T0097.112,T0119.002,T0124.002
,T0136.001,,T0085.004,T0151.006,,,,T0129.006,,T0081.002,,T0093.001,T0097.200,T0119.003,T0124.003
,T0136.002,,T0085.005,T0151.007,,,,T0129.007,,T0081.003,,T0093.002,T0097.201,T0120,T0125
,T0136.003,,T0085.006,T0151.008,,,,T0129.009,,T0081.004,,T0094,T0097.202,T0120.001,
,T0136.004,,T0085.007,T0151.009,,,,T0129.010,,T0081.005,,T0094.001,T0097.203,T0120.002,
,T0136.005,,T0085.008,T0151.010,,,,T0130,,T0081.006,,T0094.002,T0097.204,T0121,
,T0136.006,,T0086,T0151.011,,,,T0130.001,,T0081.007,,T0095,T0097.205,T0121.001,
,T0136.007,,T0086.001,T0151.012,,,,T0130.002,,T0081.008,,T0096,T0097.206,T0122,
,T0136.008,,T0086.002,T0151.013,,,,T0130.003,,,,T0096.001,T0097.207,,
,T0137,,T0086.003,T0151.014,,,,T0130.004,,,,T0096.002,T0097.208,,
,T0137.001,,T0086.004,T0151.015,,,,T0130.005,,,,T0113,T0098,,
,T0137.002,,T0087,T0151.016,,,,T0131,,,,T0145,T0098.001,,
,T0137.003,,T0087.001,T0151.017,,,,T0131.001,,,,T0145.001,T0098.002,,
,T0137.004,,T0087.002,T0152,,,,T0131.002,,,,T0145.002,T0100,,
,T0137.005,,T0088,T0152.001,,,,,,,,T0145.003,T0100.001,,
,T0137.006,,T0088.001,T0152.002,,,,,,,,T0145.004,T0100.002,,
,T0138,,T0088.002,T0152.003,,,,,,,,T0145.005,T0100.003,,
,T0138.001,,T0089,T0152.004,,,,,,,,T0145.006,T0143,,
,T0138.002,,T0089.001,T0152.005,,,,,,,,T0145.007,T0143.001,,
,T0138.003,,T0089.003,T0152.006,,,,,,,,,T0143.002,,
,T0139,,T0146,T0152.007,,,,,,,,,T0143.003,,
,T0139.001,,T0146.001,T0152.008,,,,,,,,,T0143.004,,
,T0139.002,,T0146.002,T0152.009,,,,,,,,,T0144,,
,T0139.003,,T0146.003,T0152.010,,,,,,,,,T0144.001,,
,T0140,,T0146.004,T0152.011,,,,,,,,,T0144.002,,
,T0140.001,,T0146.005,T0152.012,,,,,,,,,,,
,T0140.002,,T0146.006,T0153,,,,,,,,,,,
,T0140.003,,T0146.007,T0153.001,,,,,,,,,,,
,,,T0147,T0153.002,,,,,,,,,,,
,,,T0147.001,T0153.003,,,,,,,,,,,
,,,T0147.002,T0153.004,,,,,,,,,,,
,,,T0147.003,T0153.005,,,,,,,,,,,
,,,T0147.004,T0153.006,,,,,,,,,,,
,,,T0148,T0153.007,,,,,,,,,,,
,,,T0148.001,T0154,,,,,,,,,,,
,,,T0148.002,T0154.001,,,,,,,,,,,
,,,T0148.003,T0154.002,,,,,,,,,,,
,,,T0148.004,T0155,,,,,,,,,,,
,,,T0148.005,T0155.001,,,,,,,,,,,
,,,T0148.006,T0155.002,,,,,,,,,,,
,,,T0148.007,T0155.003,,,,,,,,,,,
,,,T0148.008,T0155.004,,,,,,,,,,,
,,,T0148.009,T0155.005,,,,,,,,,,,
,,,T0149,T0155.006,,,,,,,,,,,
,,,T0149.001,T0155.007,,,,,,,,,,,
,,,T0149.002,,,,,,,,,,,,
,,,T0149.003,,,,,,,,,,,,
,,,T0149.004,,,,,,,,,,,,
,,,T0149.005,,,,,,,,,,,,
,,,T0149.006,,,,,,,,,,,,
,,,T0149.007,,,,,,,,,,,,
,,,T0149.008,,,,,,,,,,,,
,,,T0149.009,,,,,,,,,,,,
,,,T0150,,,,,,,,,,,,
,,,T0150.001,,,,,,,,,,,,
,,,T0150.002,,,,,,,,,,,,
,,,T0150.003,,,,,,,,,,,,
,,,T0150.004,,,,,,,,,,,,
,,,T0150.005,,,,,,,,,,,,
,,,T0150.006,,,,,,,,,,,,
,,,T0150.007,,,,,,,,,,,,
,,,T0150.008,,,,,,,,,,,,
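
For readers working with this matrix programmatically, the layout is column-oriented: the first row carries phase IDs, the second row tactic IDs, and each subsequent row lists technique IDs under their tactic's column, padded with blank cells. A minimal sketch of one way such a file could be loaded — the `techniques_by_tactic` helper and the toy two-column excerpt are illustrative assumptions, not part of the framework:

```python
import csv
import io

# Toy two-column excerpt mimicking the matrix layout above (illustrative,
# not the full v1.6 framework): row 1 = phase IDs, row 2 = tactic IDs,
# later rows = technique IDs per tactic column, blanks as padding.
MATRIX_CSV = """\
P01,P02
TA01,TA02
T0073,T0016
T0074,
"""

def techniques_by_tactic(text):
    """Map each tactic ID to the non-empty technique IDs in its column."""
    rows = list(csv.reader(io.StringIO(text)))
    tactics, body = rows[1], rows[2:]
    return {
        tactic: [row[col] for row in body if col < len(row) and row[col]]
        for col, tactic in enumerate(tactics)
    }

print(techniques_by_tactic(MATRIX_CSV))
# {'TA01': ['T0073', 'T0074'], 'TA02': ['T0016']}
```

The same column-wise read applies to the full matrix; only the number of tactic columns and the row depth differ.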


@@ -24,7 +24,6 @@
| Counters these Techniques |
| ------------------------- |
| [T0007 Create Inauthentic Social Media Pages and Groups](../../generated_pages/techniques/T0007.md) |


@@ -23,7 +23,6 @@
| Counters these Techniques |
| ------------------------- |
| [T0007 Create Inauthentic Social Media Pages and Groups](../../generated_pages/techniques/T0007.md) |


@@ -23,7 +23,6 @@
| Counters these Techniques |
| ------------------------- |
| [T0043 Chat Apps](../../generated_pages/techniques/T0043.md) |

File diff suppressed because it is too large.


@@ -29,7 +29,6 @@ The report adds that although officially the Russian government asserted its neu
| Technique | Description given for this incident |
| --------- | ------------------------- |
| [T0007 Create Inauthentic Social Media Pages and Groups](../../generated_pages/techniques/T0007.md) | IT00000011 Fake FB groups + dark content |
| [T0010 Cultivate Ignorant Agents](../../generated_pages/techniques/T0010.md) | IT00000016 cultivate, manipulate, exploit useful idiots |
| [T0018 Purchase Targeted Advertisements](../../generated_pages/techniques/T0018.md) | IT00000010 Targeted FB paid ads |
| [T0029 Online Polls](../../generated_pages/techniques/T0029.md) | IT00000013 manipulate social media "online polls"? |


@@ -20,10 +20,9 @@
| Technique | Description given for this incident |
| --------- | ------------------------- |
| [T0007 Create Inauthentic Social Media Pages and Groups](../../generated_pages/techniques/T0007.md) | IT00000029 Fake twitter profiles to amplify |
| [T0015 Create Hashtags and Search Artefacts](../../generated_pages/techniques/T0015.md) | IT00000027 Create and use hashtag |
| [T0039 Bait Influencer](../../generated_pages/techniques/T0039.md) | IT00000030 bait journalists/media/politicians |
| [T0043 Chat Apps](../../generated_pages/techniques/T0043.md) | IT00000025 Use SMS/text messages |
| [T0151.004 Chat Platform](../../generated_pages/techniques/T0151.004.md) | IT00000025 Use SMS/text messages |
DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW


@@ -20,7 +20,6 @@
| Technique | Description given for this incident |
| --------- | ------------------------- |
| [T0007 Create Inauthentic Social Media Pages and Groups](../../generated_pages/techniques/T0007.md) | IT00000039 FB pages |
| [T0045 Use Fake Experts](../../generated_pages/techniques/T0045.md) | IT00000036 Using "expert" |


@@ -20,7 +20,6 @@
| Technique | Description given for this incident |
| --------- | ------------------------- |
| [T0007 Create Inauthentic Social Media Pages and Groups](../../generated_pages/techniques/T0007.md) | IT00000045 FB pages/groups/profiles |
| [T0010 Cultivate Ignorant Agents](../../generated_pages/techniques/T0010.md) | IT00000044 cultivate, manipulate, exploit useful idiots (Alex Jones... drives conspiracy theories; false flags, crisis actors) |
| [T0020 Trial Content](../../generated_pages/techniques/T0020.md) | IT00000048 4Chan/8Chan - trial content |
| [T0039 Bait Influencer](../../generated_pages/techniques/T0039.md) | IT00000049 journalist/media baiting |


@@ -20,7 +20,6 @@
| Technique | Description given for this incident |
| --------- | ------------------------- |
| [T0007 Create Inauthentic Social Media Pages and Groups](../../generated_pages/techniques/T0007.md) | IT00000058 Fake FB groups + dark content |
| [T0010 Cultivate Ignorant Agents](../../generated_pages/techniques/T0010.md) | IT00000063 cultivate, manipulate, exploit useful idiots |
| [T0016 Create Clickbait](../../generated_pages/techniques/T0016.md) | IT00000073 Click-bait (economic actors) fake news sites (ie: Denver Guardian; Macedonian teens) |
| [T0018 Purchase Targeted Advertisements](../../generated_pages/techniques/T0018.md) | IT00000057 Targeted FB paid ads |


@@ -20,7 +20,6 @@
| Technique | Description given for this incident |
| --------- | ------------------------- |
| [T0007 Create Inauthentic Social Media Pages and Groups](../../generated_pages/techniques/T0007.md) | IT00000078 Fake FB groups/pages/profiles + dark content |
DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW


@@ -20,7 +20,6 @@
| Technique | Description given for this incident |
| --------- | ------------------------- |
| [T0007 Create Inauthentic Social Media Pages and Groups](../../generated_pages/techniques/T0007.md) | IT00000092 Fake FB groups/pages/profiles |
| [T0010 Cultivate Ignorant Agents](../../generated_pages/techniques/T0010.md) | IT00000104 cultivate, manipulate, exploit useful idiots (Alex Jones... drives conspiracy theories) |
| [T0020 Trial Content](../../generated_pages/techniques/T0020.md) | IT00000102 4Chan/8Chan - trial content |
| [T0046 Use Search Engine Optimisation](../../generated_pages/techniques/T0046.md) | IT00000103 SEO optimisation/manipulation ("key words") |


@@ -21,7 +21,6 @@ While there is history to Iran's information/influence operations, starting wi
| Technique | Description given for this incident |
| --------- | ------------------------- |
| [T0007 Create Inauthentic Social Media Pages and Groups](../../generated_pages/techniques/T0007.md) | IT00000171 Fake FB groups/pages/profiles + dark content (non-paid advertising) |
| [T0022 Leverage Conspiracy Theory Narratives](../../generated_pages/techniques/T0022.md) | IT00000174 Memes... anti-Israel/USA/West, conspiracy narratives |
| [T0046 Use Search Engine Optimisation](../../generated_pages/techniques/T0046.md) | IT00000172 SEO optimisation/manipulation ("key words") |


@@ -21,9 +21,9 @@
| Technique | Description given for this incident |
| --------- | ------------------------- |
| [T0104.002 Dating App](../../generated_pages/techniques/T0104.002.md) | IT00000214 _"In the days leading up to the UK's [2017] general election, youths looking for love online encountered a whole new kind of Tinder nightmare. A group of young activists built a Tinder chatbot to co-opt profiles and persuade swing voters to support Labour. The bot accounts sent 30,000-40,000 messages to targeted 18-25 year olds in battleground constituencies like Dudley North, which Labour ended up winning by only 22 votes._<br /> <br />_"Tinder is a dating app where users swipe right to indicate attraction and interest in a potential partner. If both people swipe right on each other's profile, a dialogue box becomes available for them to privately chat. After meeting their crowdfunding goal of only £500, the team built a tool which took over and operated the accounts of recruited Tinder-users. By upgrading the profiles to Tinder Premium, the team was able to place bots in any contested constituency across the UK. Once planted, the bots swiped right on all users in the attempt to get the largest number of matches and inquire into their voting intentions."_ <br /> <br />This incident matches T0104.002: Dating App, as users of Tinder were targeted in an attempt to persuade users to vote for a particular party in the upcoming election, rather than for the purpose of connecting those who were authentically interested in dating each other. |
| [T0141.001 Acquire Compromised Account](../../generated_pages/techniques/T0141.001.md) | IT00000240 <i>“In the days leading up to the UK's [2019] general election, youths looking for love online encountered a whole new kind of Tinder nightmare. A group of young activists built a Tinder chatbot to co-opt profiles and persuade swing voters to support Labour. The bot accounts sent 30,000-40,000 messages to targeted 18-25 year olds in battleground constituencies like Dudley North, which Labour ended up winning by only 22 votes. [...]<br><br> “The activists maintain that the project was meant to foster democratic engagement. But screenshots of the bots' activity expose a harsher reality. Images of conversations between real users and these bots, posted on i-D, Mashable, as well as on Fowler and Goodman's public Twitter accounts, show that the bots did not identify themselves as automated accounts, instead posing as the user whose profile they had taken over. While conducting research for this story, it turned out that a number of [the reporter's friends] living in Oxford had interacted with the bot in the lead up to the election and had no idea that it was not a real person.”</i><br><br> In this example people offered up their real accounts for the automation of political messaging; the actors convinced the users to give up access to their accounts to use in the operation (T0141.001: Acquire Compromised Account). The actors maintained the accounts' existing persona, and presented themselves as potential romantic suitors for legitimate platform users (T0097.109: Romantic Suitor Persona, T0143.003: Impersonated Persona). |
| [T0143.003 Impersonated Persona](../../generated_pages/techniques/T0143.003.md) | IT00000242 <i>“In the days leading up to the UK's [2019] general election, youths looking for love online encountered a whole new kind of Tinder nightmare. A group of young activists built a Tinder chatbot to co-opt profiles and persuade swing voters to support Labour. The bot accounts sent 30,000-40,000 messages to targeted 18-25 year olds in battleground constituencies like Dudley North, which Labour ended up winning by only 22 votes. [...]<br><br> “The activists maintain that the project was meant to foster democratic engagement. But screenshots of the bots' activity expose a harsher reality. Images of conversations between real users and these bots, posted on i-D, Mashable, as well as on Fowler and Goodman's public Twitter accounts, show that the bots did not identify themselves as automated accounts, instead posing as the user whose profile they had taken over. While conducting research for this story, it turned out that a number of [the reporter's friends] living in Oxford had interacted with the bot in the lead up to the election and had no idea that it was not a real person.”</i><br><br> In this example people offered up their real accounts for the automation of political messaging; the actors convinced the users to give up access to their accounts to use in the operation (T0141.001: Acquire Compromised Account). The actors maintained the accounts' existing persona, and presented themselves as potential romantic suitors for legitimate platform users (T0097.109: Romantic Suitor Persona, T0143.003: Impersonated Persona). |
| [T0143.003 Impersonated Persona](../../generated_pages/techniques/T0143.003.md) | IT00000242 <i>“In the days leading up to the UK's [2019] general election, youths looking for love online encountered a whole new kind of Tinder nightmare. A group of young activists built a Tinder chatbot to co-opt profiles and persuade swing voters to support Labour. The bot accounts sent 30,000-40,000 messages to targeted 18-25 year olds in battleground constituencies like Dudley North, which Labour ended up winning by only 22 votes. [...]<br><br> “The activists maintain that the project was meant to foster democratic engagement. But screenshots of the bots' activity expose a harsher reality. Images of conversations between real users and these bots, posted on i-D, Mashable, as well as on Fowler and Goodman's public Twitter accounts, show that the bots did not identify themselves as automated accounts, instead posing as the user whose profile they had taken over. While conducting research for this story, it turned out that a number of [the reporter's friends] living in Oxford had interacted with the bot in the lead up to the election and had no idea that it was not a real person.”</i><br><br> In this example people offered up their real accounts for the automation of political messaging; the actors convinced the users to give up access to their accounts to use in the operation. The actors maintained the accounts' existing persona, and presented themselves as potential romantic suitors for legitimate platform users (T0097.109: Romantic Suitor Persona, T0143.003: Impersonated Persona, T0146: Account, T0150.007: Rented, T0151.017: Dating Platform). |
| [T0150.007 Rented](../../generated_pages/techniques/T0150.007.md) | IT00000240 <i>“In the days leading up to the UK's [2019] general election, youths looking for love online encountered a whole new kind of Tinder nightmare. A group of young activists built a Tinder chatbot to co-opt profiles and persuade swing voters to support Labour. The bot accounts sent 30,000-40,000 messages to targeted 18-25 year olds in battleground constituencies like Dudley North, which Labour ended up winning by only 22 votes. [...]<br><br> “The activists maintain that the project was meant to foster democratic engagement. But screenshots of the bots' activity expose a harsher reality. Images of conversations between real users and these bots, posted on i-D, Mashable, as well as on Fowler and Goodman's public Twitter accounts, show that the bots did not identify themselves as automated accounts, instead posing as the user whose profile they had taken over. While conducting research for this story, it turned out that a number of [the reporter's friends] living in Oxford had interacted with the bot in the lead up to the election and had no idea that it was not a real person.”</i><br><br> In this example people offered up their real accounts for the automation of political messaging; the actors convinced the users to give up access to their accounts to use in the operation. The actors maintained the accounts' existing persona, and presented themselves as potential romantic suitors for legitimate platform users (T0097.109: Romantic Suitor Persona, T0143.003: Impersonated Persona, T0146: Account, T0150.007: Rented, T0151.017: Dating Platform). |
| [T0151.017 Dating Platform](../../generated_pages/techniques/T0151.017.md) | IT00000214 _"In the days leading up to the UK's [2017] general election, youths looking for love online encountered a whole new kind of Tinder nightmare. A group of young activists built a Tinder chatbot to co-opt profiles and persuade swing voters to support Labour. The bot accounts sent 30,000-40,000 messages to targeted 18-25 year olds in battleground constituencies like Dudley North, which Labour ended up winning by only 22 votes._<br /> <br />_"Tinder is a dating app where users swipe right to indicate attraction and interest in a potential partner. If both people swipe right on each other's profile, a dialogue box becomes available for them to privately chat. After meeting their crowdfunding goal of only £500, the team built a tool which took over and operated the accounts of recruited Tinder-users. By upgrading the profiles to Tinder Premium, the team was able to place bots in any contested constituency across the UK. Once planted, the bots swiped right on all users in the attempt to get the largest number of matches and inquire into their voting intentions."_ <br /> <br />This incident matches T0151.017: Dating Platform, as users of Tinder were targeted in an attempt to persuade users to vote for a particular party in the upcoming election, rather than for the purpose of connecting those who were authentically interested in dating each other. |
DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW


@@ -21,7 +21,7 @@
| Technique | Description given for this incident |
| --------- | ------------------------- |
| [T0141.001 Acquire Compromised Account](../../generated_pages/techniques/T0141.001.md) | IT00000215 _"Overall, narratives promoted in the five operations appear to represent a concerted effort to discredit the ruling political coalition, widen existing domestic political divisions and project an image of coalition disunity in Poland. In each incident, content was primarily disseminated via Twitter, Facebook, and/or Instagram accounts belonging to Polish politicians, all of whom have publicly claimed their accounts were compromised at the times the posts were made."_ <br /> <br />This example demonstrates how threat actors can use _T0141.001: Acquire Compromised Account_ to distribute inauthentic content while exploiting the legitimate account holder's persona. |
| [T0097.110 Party Official Persona](../../generated_pages/techniques/T0097.110.md) | IT00000215 _"Overall, narratives promoted in the five operations appear to represent a concerted effort to discredit the ruling political coalition, widen existing domestic political divisions and project an image of coalition disunity in Poland. In each incident, content was primarily disseminated via Twitter, Facebook, and/or Instagram accounts belonging to Polish politicians, all of whom have publicly claimed their accounts were compromised at the times the posts were made."_ <br /> <br />This example demonstrates how threat actors can use compromised accounts to distribute inauthentic content while exploiting the legitimate account holder's persona (T0097.110: Party Official Persona, T0143.003: Impersonated Persona, T0146: Account, T0150.005: Compromised). |
DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW


@@ -21,7 +21,7 @@
| Technique | Description given for this incident |
| --------- | ------------------------- |
| [T0141.002 Acquire Compromised Website](../../generated_pages/techniques/T0141.002.md) | IT00000216 _"In the early hours of 24 May 2017, a news story appeared on the website of Qatar's official news agency, QNA, reporting that the country's emir, Sheikh Tamim bin Hamad al-Thani, had made an astonishing speech."_ <br /> <br />_"[…]_ <br /> <br />_"Qatar claimed that the QNA had been hacked. And they said the hack was designed to deliberately spread fake news about the country's leader and its foreign policies. The Qataris specifically blamed UAE, an allegation later repeated by a Washington Post report which cited US intelligence sources. The UAE categorically denied those reports._ <br /> <br />_"But the story of the emir's speech unleashed a media free-for-all. Within minutes, Saudi and UAE-owned TV networks - Al Arabiya and Sky News Arabia - picked up on the comments attributed to al-Thani. Both networks accused Qatar of funding extremist groups and of destabilising the region."_ <br /> <br />This incident demonstrates how threat actors used _T0141.002: Acquire Compromised Website_ to allow for an inauthentic narrative to be given a level of credibility which caused significant political fallout. |
| [T0150.005 Compromised](../../generated_pages/techniques/T0150.005.md) | IT00000216 _"In the early hours of 24 May 2017, a news story appeared on the website of Qatar's official news agency, QNA, reporting that the country's emir, Sheikh Tamim bin Hamad al-Thani, had made an astonishing speech."_ <br /> <br />_"[…]_ <br /> <br />_"Qatar claimed that the QNA had been hacked. And they said the hack was designed to deliberately spread fake news about the country's leader and its foreign policies. The Qataris specifically blamed UAE, an allegation later repeated by a Washington Post report which cited US intelligence sources. The UAE categorically denied those reports._ <br /> <br />_"But the story of the emir's speech unleashed a media free-for-all. Within minutes, Saudi and UAE-owned TV networks - Al Arabiya and Sky News Arabia - picked up on the comments attributed to al-Thani. Both networks accused Qatar of funding extremist groups and of destabilising the region."_ <br /> <br />This incident demonstrates how threat actors used a compromised website to allow for an inauthentic narrative to be given a level of credibility which caused significant political fallout (T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0152.004: Website, T0150.005: Compromised). |
DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW


@@ -21,10 +21,10 @@
| Technique | Description given for this incident |
| --------- | ------------------------- |
| [T0043.001 Use Encrypted Chat Apps](../../generated_pages/techniques/T0043.001.md) | IT00000219 <i>“While reports of [...] deepfake calls targeting private companies are luckily still rare, LastPass itself experienced a deepfake attempt earlier today that we are sharing with the larger community to help raise awareness that this tactic is spreading and all companies should be on the alert. In our case, an employee received a series of calls, texts, and at least one voicemail featuring an audio deepfake from a threat actor impersonating our CEO via WhatsApp. As the attempted communication was outside of normal business communication channels and due to the employee's suspicion regarding the presence of many of the hallmarks of a social engineering attempt (such as forced urgency), our employee rightly ignored the messages and reported the incident to our internal security team so that we could take steps to both mitigate the threat and raise awareness of the tactic both internally and externally.”</i><br><br> In this example, attackers impersonated the CEO of LastPass (T0097.100: Individual Persona, T0143.003: Impersonated Persona), targeting one of its employees over WhatsApp (T0043.001: Use Encrypted Chat Apps) using deepfaked audio (T0088.001: Develop AI-Generated Audio (Deepfakes)). |
| [T0088.001 Develop AI-Generated Audio (Deepfakes)](../../generated_pages/techniques/T0088.001.md) | IT00000220 <i>“While reports of [...] deepfake calls targeting private companies are luckily still rare, LastPass itself experienced a deepfake attempt earlier today that we are sharing with the larger community to help raise awareness that this tactic is spreading and all companies should be on the alert. In our case, an employee received a series of calls, texts, and at least one voicemail featuring an audio deepfake from a threat actor impersonating our CEO via WhatsApp. As the attempted communication was outside of normal business communication channels and due to the employee's suspicion regarding the presence of many of the hallmarks of a social engineering attempt (such as forced urgency), our employee rightly ignored the messages and reported the incident to our internal security team so that we could take steps to both mitigate the threat and raise awareness of the tactic both internally and externally.”</i><br><br> In this example, attackers impersonated the CEO of LastPass (T0097.100: Individual Persona, T0143.003: Impersonated Persona), targeting one of its employees over WhatsApp (T0043.001: Use Encrypted Chat Apps) using deepfaked audio (T0088.001: Develop AI-Generated Audio (Deepfakes)). |
| [T0097.100 Individual Persona](../../generated_pages/techniques/T0097.100.md) | IT00000221 <i>“While reports of [...] deepfake calls targeting private companies are luckily still rare, LastPass itself experienced a deepfake attempt earlier today that we are sharing with the larger community to help raise awareness that this tactic is spreading and all companies should be on the alert. In our case, an employee received a series of calls, texts, and at least one voicemail featuring an audio deepfake from a threat actor impersonating our CEO via WhatsApp. As the attempted communication was outside of normal business communication channels and due to the employee's suspicion regarding the presence of many of the hallmarks of a social engineering attempt (such as forced urgency), our employee rightly ignored the messages and reported the incident to our internal security team so that we could take steps to both mitigate the threat and raise awareness of the tactic both internally and externally.”</i><br><br> In this example, attackers impersonated the CEO of LastPass (T0097.100: Individual Persona, T0143.003: Impersonated Persona), targeting one of its employees over WhatsApp (T0043.001: Use Encrypted Chat Apps) using deepfaked audio (T0088.001: Develop AI-Generated Audio (Deepfakes)). |
| [T0143.003 Impersonated Persona](../../generated_pages/techniques/T0143.003.md) | IT00000222 <i>“While reports of [...] deepfake calls targeting private companies are luckily still rare, LastPass itself experienced a deepfake attempt earlier today that we are sharing with the larger community to help raise awareness that this tactic is spreading and all companies should be on the alert. In our case, an employee received a series of calls, texts, and at least one voicemail featuring an audio deepfake from a threat actor impersonating our CEO via WhatsApp. As the attempted communication was outside of normal business communication channels and due to the employee's suspicion regarding the presence of many of the hallmarks of a social engineering attempt (such as forced urgency), our employee rightly ignored the messages and reported the incident to our internal security team so that we could take steps to both mitigate the threat and raise awareness of the tactic both internally and externally.”</i><br><br> In this example, attackers impersonated the CEO of LastPass (T0097.100: Individual Persona, T0143.003: Impersonated Persona), targeting one of its employees over WhatsApp (T0043.001: Use Encrypted Chat Apps) using deepfaked audio (T0088.001: Develop AI-Generated Audio (Deepfakes)). |
| [T0088.001 Develop AI-Generated Audio (Deepfakes)](../../generated_pages/techniques/T0088.001.md) | IT00000220 <i>“While reports of [...] deepfake calls targeting private companies are luckily still rare, LastPass itself experienced a deepfake attempt earlier today that we are sharing with the larger community to help raise awareness that this tactic is spreading and all companies should be on the alert. In our case, an employee received a series of calls, texts, and at least one voicemail featuring an audio deepfake from a threat actor impersonating our CEO via WhatsApp. As the attempted communication was outside of normal business communication channels and due to the employee's suspicion regarding the presence of many of the hallmarks of a social engineering attempt (such as forced urgency), our employee rightly ignored the messages and reported the incident to our internal security team so that we could take steps to both mitigate the threat and raise awareness of the tactic both internally and externally.”</i><br><br> In this example, attackers impersonated the CEO of LastPass (T0097.100: Individual Persona, T0143.003: Impersonated Persona), targeting one of its employees over WhatsApp (T0151.004: Chat Platform, T0155.007: Encrypted Communication Channel) using deepfaked audio (T0088.001: Develop AI-Generated Audio (Deepfakes)). |
| [T0097.100 Individual Persona](../../generated_pages/techniques/T0097.100.md) | IT00000221 <i>“While reports of [...] deepfake calls targeting private companies are luckily still rare, LastPass itself experienced a deepfake attempt earlier today that we are sharing with the larger community to help raise awareness that this tactic is spreading and all companies should be on the alert. In our case, an employee received a series of calls, texts, and at least one voicemail featuring an audio deepfake from a threat actor impersonating our CEO via WhatsApp. As the attempted communication was outside of normal business communication channels and due to the employee's suspicion regarding the presence of many of the hallmarks of a social engineering attempt (such as forced urgency), our employee rightly ignored the messages and reported the incident to our internal security team so that we could take steps to both mitigate the threat and raise awareness of the tactic both internally and externally.”</i><br><br> In this example, attackers impersonated the CEO of LastPass (T0097.100: Individual Persona, T0143.003: Impersonated Persona), targeting one of its employees over WhatsApp (T0151.004: Chat Platform, T0155.007: Encrypted Communication Channel) using deepfaked audio (T0088.001: Develop AI-Generated Audio (Deepfakes)). |
| [T0143.003 Impersonated Persona](../../generated_pages/techniques/T0143.003.md) | IT00000222 <i>“While reports of [...] deepfake calls targeting private companies are luckily still rare, LastPass itself experienced a deepfake attempt earlier today that we are sharing with the larger community to help raise awareness that this tactic is spreading and all companies should be on the alert. In our case, an employee received a series of calls, texts, and at least one voicemail featuring an audio deepfake from a threat actor impersonating our CEO via WhatsApp. As the attempted communication was outside of normal business communication channels and due to the employee's suspicion regarding the presence of many of the hallmarks of a social engineering attempt (such as forced urgency), our employee rightly ignored the messages and reported the incident to our internal security team so that we could take steps to both mitigate the threat and raise awareness of the tactic both internally and externally.”</i><br><br>In this example, attackers created an account on WhatsApp which impersonated the CEO of LastPass (T0097.100: Individual Persona, T0143.003: Impersonated Persona, T0146: Account, T0151.004: Chat Platform, T0155.007: Encrypted Communication Channel). They used this asset to target an employee using deepfaked audio (T0088.001: Develop AI-Generated Audio (Deepfakes)). |
| [T0151.004 Chat Platform](../../generated_pages/techniques/T0151.004.md) | IT00000219 <i>“While reports of [...] deepfake calls targeting private companies are luckily still rare, LastPass itself experienced a deepfake attempt earlier today that we are sharing with the larger community to help raise awareness that this tactic is spreading and all companies should be on the alert. In our case, an employee received a series of calls, texts, and at least one voicemail featuring an audio deepfake from a threat actor impersonating our CEO via WhatsApp. As the attempted communication was outside of normal business communication channels and due to the employee's suspicion regarding the presence of many of the hallmarks of a social engineering attempt (such as forced urgency), our employee rightly ignored the messages and reported the incident to our internal security team so that we could take steps to both mitigate the threat and raise awareness of the tactic both internally and externally.”</i><br><br>In this example, attackers created an account on WhatsApp which impersonated the CEO of LastPass (T0097.100: Individual Persona, T0143.003: Impersonated Persona, T0146: Account, T0151.004: Chat Platform, T0155.007: Encrypted Communication Channel). They used this asset to target an employee using deepfaked audio (T0088.001: Develop AI-Generated Audio (Deepfakes)). |
DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW


@@ -24,9 +24,7 @@
| [T0097.100 Individual Persona](../../generated_pages/techniques/T0097.100.md) | IT00000231 <I>“[Iranian state-sponsored cyber espionage actor] APT42's cloud operations attack lifecycle can be described in details as follows:<br> <br>- “Social engineering schemes involving decoys and trust building, which includes masquerading as legitimate NGOs and conducting ongoing correspondence with the target, sometimes lasting several weeks. <br>- The threat actor masqueraded as well-known international organizations in the legal and NGO fields and sent emails from domains typosquatting the original NGO domains, for example aspenlnstitute[.]org. <br>- The Aspen Institute became aware of this spoofed domain and collaborated with industry partners, including blocking it in SafeBrowsing, thus protecting users of Google Chrome and additional browsers. <br>- To increase their credibility, APT42 impersonated high-ranking personnel working at the aforementioned organizations when creating the email personas. <br>- APT42 enhanced their campaign credibility by using decoy material inviting targets to legitimate and relevant events and conferences. In one instance, the decoy material was hosted on an attacker-controlled SharePoint folder, accessible only after the victim entered their credentials. Mandiant did not identify malicious elements in the files, suggesting they were used solely to gain the victim's trust.”</I><br><br> In this example APT42, an Iranian state-sponsored cyber espionage actor, created a domain impersonating the existing NGO The Aspen Institute (T0143.003: Impersonated Persona, T0097.207: NGO Persona). They increased the perceived legitimacy of the impersonation by also impersonating high-ranking employees of the NGO (T0097.100: Individual Persona, T0143.003: Impersonated Persona). |
| [T0097.103 Activist Persona](../../generated_pages/techniques/T0097.103.md) | IT00000227 <I>“In March 2023, [Iranian state-sponsored cyber espionage actor] APT42 sent a spear-phishing email with a fake Google Meet invitation, allegedly sent on behalf of Mona Louri, a likely fake persona leveraged by APT42, claiming to be a human rights activist and researcher. Upon entry, the user was presented with a fake Google Meet page and asked to enter their credentials, which were subsequently sent to the attackers.”</i><br><br>In this example APT42, an Iranian state-sponsored cyber espionage actor, created an account which presented as a human rights activist (T0097.103: Activist Persona) and researcher (T0097.107: Researcher Persona). The analysts assert that the persona was likely fabricated (T0143.002: Fabricated Persona). |
| [T0097.107 Researcher Persona](../../generated_pages/techniques/T0097.107.md) | IT00000228 <I>“In March 2023, [Iranian state-sponsored cyber espionage actor] APT42 sent a spear-phishing email with a fake Google Meet invitation, allegedly sent on behalf of Mona Louri, a likely fake persona leveraged by APT42, claiming to be a human rights activist and researcher. Upon entry, the user was presented with a fake Google Meet page and asked to enter their credentials, which were subsequently sent to the attackers.”</i><br><br>In this example APT42, an Iranian state-sponsored cyber espionage actor, created an account which presented as a human rights activist (T0097.103: Activist Persona) and researcher (T0097.107: Researcher Persona). The analysts assert that the persona was likely fabricated (T0143.002: Fabricated Persona). |
| [T0097.202 News Outlet Persona](../../generated_pages/techniques/T0097.202.md) | IT00000223 <i>“Mandiant identified at least three clusters of infrastructure used by [Iranian state-sponsored cyber espionage actor] APT42 to harvest credentials from targets in the policy and government sectors, media organizations and journalists, and NGOs and activists. The three clusters employ similar tactics, techniques and procedures (TTPs) to target victim credentials (spear-phishing emails), but use slightly varied domains, masquerading patterns, decoys, and themes.<br><br> Cluster A: Posing as News Outlets and NGOs: <br>- Suspected Targeting: credentials of journalists, researchers, and geopolitical entities in regions of interest to Iran. <br>- Masquerading as: The Washington Post (U.S.), The Economist (UK), The Jerusalem Post (IL), Khaleej Times (UAE), Azadliq (Azerbaijan), and more news outlets and NGOs. This often involves the use of typosquatted domains like washinqtonpost[.]press. <br><br>“Mandiant did not observe APT42 target or compromise these organizations, but rather impersonate them.”</I><br><br> In this example APT42, an Iranian state-sponsored cyber espionage actor, impersonated existing news organisations and NGOs (T0097.202: News Outlet Persona, T0097.207: NGO Persona, T0143.003: Impersonated Persona) in attempts to steal credentials from targets (T0141.001: Acquire Compromised Account), using elements of influence operations to facilitate their cyber attacks. |
| [T0097.207 NGO Persona](../../generated_pages/techniques/T0097.207.md) | IT00000232 <I>“[Iranian state-sponsored cyber espionage actor] APT42's cloud operations attack lifecycle can be described in details as follows:<br> <br>- “Social engineering schemes involving decoys and trust building, which includes masquerading as legitimate NGOs and conducting ongoing correspondence with the target, sometimes lasting several weeks. <br>- The threat actor masqueraded as well-known international organizations in the legal and NGO fields and sent emails from domains typosquatting the original NGO domains, for example aspenlnstitute[.]org. <br>- The Aspen Institute became aware of this spoofed domain and collaborated with industry partners, including blocking it in SafeBrowsing, thus protecting users of Google Chrome and additional browsers. <br>- To increase their credibility, APT42 impersonated high-ranking personnel working at the aforementioned organizations when creating the email personas. <br>- APT42 enhanced their campaign credibility by using decoy material inviting targets to legitimate and relevant events and conferences. In one instance, the decoy material was hosted on an attacker-controlled SharePoint folder, accessible only after the victim entered their credentials. Mandiant did not identify malicious elements in the files, suggesting they were used solely to gain the victim's trust.”</I><br><br> In this example APT42, an Iranian state-sponsored cyber espionage actor, created a domain impersonating the existing NGO The Aspen Institute (T0143.003: Impersonated Persona, T0097.207: NGO Persona). They increased the perceived legitimacy of the impersonation by also impersonating high-ranking employees of the NGO (T0097.100: Individual Persona, T0143.003: Impersonated Persona). |
| [T0141.001 Acquire Compromised Account](../../generated_pages/techniques/T0141.001.md) | IT00000226 <i>“Mandiant identified at least three clusters of infrastructure used by [Iranian state-sponsored cyber espionage actor] APT42 to harvest credentials from targets in the policy and government sectors, media organizations and journalists, and NGOs and activists. The three clusters employ similar tactics, techniques and procedures (TTPs) to target victim credentials (spear-phishing emails), but use slightly varied domains, masquerading patterns, decoys, and themes.<br><br> Cluster A: Posing as News Outlets and NGOs: <br>- Suspected Targeting: credentials of journalists, researchers, and geopolitical entities in regions of interest to Iran. <br>- Masquerading as: The Washington Post (U.S.), The Economist (UK), The Jerusalem Post (IL), Khaleej Times (UAE), Azadliq (Azerbaijan), and more news outlets and NGOs. This often involves the use of typosquatted domains like washinqtonpost[.]press. <br><br>“Mandiant did not observe APT42 target or compromise these organizations, but rather impersonate them.”</I><br><br> In this example APT42, an Iranian state-sponsored cyber espionage actor, impersonated existing news organisations and NGOs (T0097.202: News Outlet Persona, T0097.207: NGO Persona, T0143.003: Impersonated Persona) in attempts to steal credentials from targets (T0141.001: Acquire Compromised Account), using elements of influence operations to facilitate their cyber attacks. |
| [T0143.002 Fabricated Persona](../../generated_pages/techniques/T0143.002.md) | IT00000229 <I>“In March 2023, [Iranian state-sponsored cyber espionage actor] APT42 sent a spear-phishing email with a fake Google Meet invitation, allegedly sent on behalf of Mona Louri, a likely fake persona leveraged by APT42, claiming to be a human rights activist and researcher. Upon entry, the user was presented with a fake Google Meet page and asked to enter their credentials, which were subsequently sent to the attackers.”</i><br><br>In this example APT42, an Iranian state-sponsored cyber espionage actor, created an account which presented as a human rights activist (T0097.103: Activist Persona) and researcher (T0097.107: Researcher Persona). The analysts assert that the persona was likely fabricated (T0143.002: Fabricated Persona). |
| [T0143.003 Impersonated Persona](../../generated_pages/techniques/T0143.003.md) | IT00000230 <I>“[Iranian state-sponsored cyber espionage actor] APT42's cloud operations attack lifecycle can be described in details as follows:<br> <br>- “Social engineering schemes involving decoys and trust building, which includes masquerading as legitimate NGOs and conducting ongoing correspondence with the target, sometimes lasting several weeks. <br>- The threat actor masqueraded as well-known international organizations in the legal and NGO fields and sent emails from domains typosquatting the original NGO domains, for example aspenlnstitute[.]org. <br>- The Aspen Institute became aware of this spoofed domain and collaborated with industry partners, including blocking it in SafeBrowsing, thus protecting users of Google Chrome and additional browsers. <br>- To increase their credibility, APT42 impersonated high-ranking personnel working at the aforementioned organizations when creating the email personas. <br>- APT42 enhanced their campaign credibility by using decoy material inviting targets to legitimate and relevant events and conferences. In one instance, the decoy material was hosted on an attacker-controlled SharePoint folder, accessible only after the victim entered their credentials. Mandiant did not identify malicious elements in the files, suggesting they were used solely to gain the victim's trust.”</I><br><br> In this example APT42, an Iranian state-sponsored cyber espionage actor, created a domain impersonating the existing NGO The Aspen Institute (T0143.003: Impersonated Persona, T0097.207: NGO Persona). They increased the perceived legitimacy of the impersonation by also impersonating high-ranking employees of the NGO (T0097.100: Individual Persona, T0143.003: Impersonated Persona). |


@@ -22,12 +22,12 @@
| Technique | Description given for this incident |
| --------- | ------------------------- |
| [T0085.004 Develop Document](../../generated_pages/techniques/T0085.004.md) | IT00000324 <i>“On August 16, 2022, pro-Kremlin Telegram channel Joker DPR (Джокер ДНР) published a forged letter allegedly written by Ukrainian Foreign Minister Dmytro Kuleba. In the letter, Kuleba supposedly asked relevant Polish authorities to rename Belwederska Street in Warsaw — the location of the Russian embassy building — as Stepan Bandera Street, in honor of the far-right nationalist who led the Ukrainian Insurgent Army during WWII.<br><br> [...]<br><br> The letter is not dated, and Dmytro Kuleba's signature seems to be copied from a publicly available letter signed by him in 2021.”</i><br><br> In this example the Telegram channel Joker DPR published a forged letter (T0085.004: Develop Document) in which they impersonated the Ukrainian Minister of Foreign Affairs (T0097.111: Government Official Persona, T0143.003: Impersonated Persona), using Ministry letterhead (T0097.206: Government Institution Persona, T0143.003: Impersonated Persona). |
| [T0097.101 Local Persona](../../generated_pages/techniques/T0097.101.md) | IT00000238 <i>“The August 17 [2022] Telegram post [which contained a falsified letter from the Ukrainian Minister of Foreign Affairs asking Poland to rename Belwederska Street in Warsaw — the location of the Russian embassy building — as Stepan Bandera Street, in honor of the far-right nationalist who led the Ukrainian Insurgent Army during WWII] also contained screenshots of Facebook posts that appeared on two Facebook accounts belonging to Polish nationals Piotr Górka, an expert in the history of the Polish Air Force, and Dariusz Walusiak, a Polish historian and documentary maker. The Górka post suggested that he fully supported the Polish government's decision to change Belwederska Street to Stepan Bandera Street.<br><br> “In a statement to the DFRLab, Górka said his account was accessed without his consent. “This is not my post loaded to my Facebook page,” he explained. “My site was hacked, some days ago.” At the time of publishing, Piotr Górka's post and his Facebook account were no longer accessible.<br><br> “The post on Górka's Facebook page was shared by Dariusz Walusiak's Facebook account; the account also reposted it on the Facebook walls of more than twenty other Facebook users, including Adam Kalita, currently working at Krakow branch of the Institute of National Remembrance; Jan Kasprzyk, head of the Office for War Veterans and Victims of Oppression; and Alicja Kondraciuk, a Polish public figure living in Krakow.<br><br> “Walusiak's Facebook account is also no longer accessible. Given his work on Polish history and identity, it seems highly unlikely he would support the Bandera measure; the DFRLab has also reached out to him for comment.<br><br> “The fact that Joker DPR's Telegram post included screenshots of their Facebook posts raises the strong possibility that both Facebook accounts were compromised, and that hackers planted false statements on their pages that would seem out of character for them in order to gain further attention to the forged documents.”</I><br><br> In this example, threat actors used compromised accounts (T0141.001: Acquire Compromised Account) of Polish historians who have enough relevant knowledge to plausibly weigh in on the forged letter's narrative (T0143.003: Impersonated Persona, T0097.101: Local Persona, T0097.108: Expert Persona). <br><br> This matches T0097.108: Expert Persona because the impersonation exploited Górka's and Walusiak's existing personas as experts in Polish history. |
| [T0097.108 Expert Persona](../../generated_pages/techniques/T0097.108.md) | IT00000239 <i>“The August 17 [2022] Telegram post [which contained a falsified letter from the Ukrainian Minister of Foreign Affairs asking Poland to rename Belwederska Street in Warsaw — the location of the Russian embassy building — as Stepan Bandera Street, in honor of the far-right nationalist who led the Ukrainian Insurgent Army during WWII] also contained screenshots of Facebook posts that appeared on two Facebook accounts belonging to Polish nationals Piotr Górka, an expert in the history of the Polish Air Force, and Dariusz Walusiak, a Polish historian and documentary maker. The Górka post suggested that he fully supported the Polish government's decision to change Belwederska Street to Stepan Bandera Street.<br><br> “In a statement to the DFRLab, Górka said his account was accessed without his consent. “This is not my post loaded to my Facebook page,” he explained. “My site was hacked, some days ago.” At the time of publishing, Piotr Górka's post and his Facebook account were no longer accessible.<br><br> “The post on Górka's Facebook page was shared by Dariusz Walusiak's Facebook account; the account also reposted it on the Facebook walls of more than twenty other Facebook users, including Adam Kalita, currently working at Krakow branch of the Institute of National Remembrance; Jan Kasprzyk, head of the Office for War Veterans and Victims of Oppression; and Alicja Kondraciuk, a Polish public figure living in Krakow.<br><br> “Walusiak's Facebook account is also no longer accessible. Given his work on Polish history and identity, it seems highly unlikely he would support the Bandera measure; the DFRLab has also reached out to him for comment.<br><br> “The fact that Joker DPR's Telegram post included screenshots of their Facebook posts raises the strong possibility that both Facebook accounts were compromised, and that hackers planted false statements on their pages that would seem out of character for them in order to gain further attention to the forged documents.”</I><br><br> In this example, threat actors used compromised accounts (T0141.001: Acquire Compromised Account) of Polish historians who have enough relevant knowledge to plausibly weigh in on the forged letter's narrative (T0143.003: Impersonated Persona, T0097.101: Local Persona, T0097.108: Expert Persona). <br><br> This matches T0097.108: Expert Persona because the impersonation exploited Górka's and Walusiak's existing personas as experts in Polish history. |
| [T0097.101 Local Persona](../../generated_pages/techniques/T0097.101.md) | IT00000238 <i>“The August 17 [2022] Telegram post [which contained a falsified letter from the Ukrainian Minister of Foreign Affairs asking Poland to rename Belwederska Street in Warsaw — the location of the Russian embassy building — as Stepan Bandera Street, in honor of the far-right nationalist who led the Ukrainian Insurgent Army during WWII] also contained screenshots of Facebook posts that appeared on two Facebook accounts belonging to Polish nationals Piotr Górka, an expert in the history of the Polish Air Force, and Dariusz Walusiak, a Polish historian and documentary maker. The Górka post suggested that he fully supported the Polish government's decision to change Belwederska Street to Stepan Bandera Street.<br><br> “In a statement to the DFRLab, Górka said his account was accessed without his consent. “This is not my post loaded to my Facebook page,” he explained. “My site was hacked, some days ago.” At the time of publishing, Piotr Górka's post and his Facebook account were no longer accessible.<br><br> “The post on Górka's Facebook page was shared by Dariusz Walusiak's Facebook account; the account also reposted it on the Facebook walls of more than twenty other Facebook users, including Adam Kalita, currently working at Krakow branch of the Institute of National Remembrance; Jan Kasprzyk, head of the Office for War Veterans and Victims of Oppression; and Alicja Kondraciuk, a Polish public figure living in Krakow.<br><br> “Walusiak's Facebook account is also no longer accessible. Given his work on Polish history and identity, it seems highly unlikely he would support the Bandera measure; the DFRLab has also reached out to him for comment.<br><br> “The fact that Joker DPR's Telegram post included screenshots of their Facebook posts raises the strong possibility that both Facebook accounts were compromised, and that hackers planted false statements on their pages that would seem out of character for them in order to gain further attention to the forged documents.”</I><br><br> In this example, threat actors used compromised accounts of Polish historians who have enough relevant knowledge to plausibly weigh in on the forged letter's narrative (T0143.003: Impersonated Persona, T0097.101: Local Persona, T0097.108: Expert Persona, T0146: Account, T0150.005: Compromised, T0151.001: Social Media Platform). <br><br> This matches T0097.108: Expert Persona because the impersonation exploited Górka's and Walusiak's existing personas as experts in Polish history. |
| [T0097.108 Expert Persona](../../generated_pages/techniques/T0097.108.md) | IT00000239 <i>“The August 17 [2022] Telegram post [which contained a falsified letter from the Ukrainian Minister of Foreign Affairs asking Poland to rename Belwederska Street in Warsaw — the location of the Russian embassy building — as Stepan Bandera Street, in honor of the far-right nationalist who led the Ukrainian Insurgent Army during WWII] also contained screenshots of Facebook posts that appeared on two Facebook accounts belonging to Polish nationals Piotr Górka, an expert in the history of the Polish Air Force, and Dariusz Walusiak, a Polish historian and documentary maker. The Górka post suggested that he fully supported the Polish government's decision to change Belwederska Street to Stepan Bandera Street.<br><br> “In a statement to the DFRLab, Górka said his account was accessed without his consent. “This is not my post loaded to my Facebook page,” he explained. “My site was hacked, some days ago.” At the time of publishing, Piotr Górka's post and his Facebook account were no longer accessible.<br><br> “The post on Górka's Facebook page was shared by Dariusz Walusiak's Facebook account; the account also reposted it on the Facebook walls of more than twenty other Facebook users, including Adam Kalita, currently working at Krakow branch of the Institute of National Remembrance; Jan Kasprzyk, head of the Office for War Veterans and Victims of Oppression; and Alicja Kondraciuk, a Polish public figure living in Krakow.<br><br> “Walusiak's Facebook account is also no longer accessible. Given his work on Polish history and identity, it seems highly unlikely he would support the Bandera measure; the DFRLab has also reached out to him for comment.<br><br> “The fact that Joker DPR's Telegram post included screenshots of their Facebook posts raises the strong possibility that both Facebook accounts were compromised, and that hackers planted false statements on their pages that would seem out of character for them in order to gain further attention to the forged documents.”</I><br><br> In this example, threat actors used compromised accounts of Polish historians who have enough relevant knowledge to plausibly weigh in on the forged letter's narrative (T0143.003: Impersonated Persona, T0097.101: Local Persona, T0097.108: Expert Persona, T0146: Account, T0150.005: Compromised, T0151.001: Social Media Platform). <br><br> This matches T0097.108: Expert Persona because the impersonation exploited Górka's and Walusiak's existing personas as experts in Polish history. |
| [T0097.111 Government Official Persona](../../generated_pages/techniques/T0097.111.md) | IT00000327 <i>“On August 16, 2022, pro-Kremlin Telegram channel Joker DPR (Джокер ДНР) published a forged letter allegedly written by Ukrainian Foreign Minister Dmytro Kuleba. In the letter, Kuleba supposedly asked relevant Polish authorities to rename Belwederska Street in Warsaw — the location of the Russian embassy building — as Stepan Bandera Street, in honor of the far-right nationalist who led the Ukrainian Insurgent Army during WWII.<br><br> [...]<br><br> The letter is not dated, and Dmytro Kuleba's signature seems to be copied from a publicly available letter signed by him in 2021.”</i><br><br> In this example the Telegram channel Joker DPR published a forged letter (T0085.004: Develop Document) in which they impersonated the Ukrainian Minister of Foreign Affairs (T0097.111: Government Official Persona, T0143.003: Impersonated Persona), using Ministry letterhead (T0097.206: Government Institution Persona, T0143.003: Impersonated Persona). |
| [T0097.206 Government Institution Persona](../../generated_pages/techniques/T0097.206.md) | IT00000326 <i>“On August 16, 2022, pro-Kremlin Telegram channel Joker DPR (Джокер ДНР) published a forged letter allegedly written by Ukrainian Foreign Minister Dmytro Kuleba. In the letter, Kuleba supposedly asked relevant Polish authorities to rename Belwederska Street in Warsaw — the location of the Russian embassy building — as Stepan Bandera Street, in honor of the far-right nationalist who led the Ukrainian Insurgent Army during WWII.<br><br> [...]<br><br> The letter is not dated, and Dmytro Kuleba's signature seems to be copied from a publicly available letter signed by him in 2021.”</i><br><br> In this example the Telegram channel Joker DPR published a forged letter (T0085.004: Develop Document) in which they impersonated the Ukrainian Minister of Foreign Affairs (T0097.111: Government Official Persona, T0143.003: Impersonated Persona), using Ministry letterhead (T0097.206: Government Institution Persona, T0143.003: Impersonated Persona). |
| [T0141.001 Acquire Compromised Account](../../generated_pages/techniques/T0141.001.md) | IT00000236 <i>“The August 17 [2022] Telegram post [which contained a falsified letter from the Ukrainian Minister of Foreign Affairs asking Poland to rename Belwederska Street in Warsaw — the location of the Russian embassy building — as Stepan Bandera Street, in honor of the far-right nationalist who led the Ukrainian Insurgent Army during WWII] also contained screenshots of Facebook posts that appeared on two Facebook accounts belonging to Polish nationals Piotr Górka, an expert in the history of the Polish Air Force, and Dariusz Walusiak, a Polish historian and documentary maker. The Górka post suggested that he fully supported the Polish government's decision to change Belwederska Street to Stepan Bandera Street.<br><br> “In a statement to the DFRLab, Górka said his account was accessed without his consent. “This is not my post loaded to my Facebook page,” he explained. “My site was hacked, some days ago.” At the time of publishing, Piotr Górka's post and his Facebook account were no longer accessible.<br><br> “The post on Górka's Facebook page was shared by Dariusz Walusiak's Facebook account; the account also reposted it on the Facebook walls of more than twenty other Facebook users, including Adam Kalita, currently working at Krakow branch of the Institute of National Remembrance; Jan Kasprzyk, head of the Office for War Veterans and Victims of Oppression; and Alicja Kondraciuk, a Polish public figure living in Krakow.<br><br> “Walusiak's Facebook account is also no longer accessible. 
Given his work on Polish history and identity, it seems highly unlikely he would support the Bandera measure; the DFRLab has also reached out to him for comment.<br><br> “The fact that Joker DPR's Telegram post included screenshots of their Facebook posts raises the strong possibility that both Facebook accounts were compromised, and that hackers planted false statements on their pages that would seem out of character for them in order to gain further attention to the forged documents.”</i><br><br> In this example, threat actors used compromised accounts (T0141.001: Acquire Compromised Account) of Polish historians who have enough relevant knowledge to plausibly weigh in on the forged letters narrative (T0143.003: Impersonated Persona, T0097.101: Local Persona, T0097.108: Expert Persona). <br><br> This matches T0097.108: Expert Persona because the impersonation exploited Górka and Walusiak's existing personas as experts in Polish history. |
| [T0143.003 Impersonated Persona](../../generated_pages/techniques/T0143.003.md) | IT00000237 <i>“The August 17 [2022] Telegram post [which contained a falsified letter from the Ukrainian Minister of Foreign Affairs asking Poland to rename Belwederska Street in Warsaw — the location of the Russian embassy building — as Stepan Bandera Street, in honor of the far-right nationalist who led the Ukrainian Insurgent Army during WWII] also contained screenshots of Facebook posts that appeared on two Facebook accounts belonging to Polish nationals Piotr Górka, an expert in the history of the Polish Air Force, and Dariusz Walusiak, a Polish historian and documentary maker. The Górka post suggested that he fully supported the Polish government's decision to change Belwederska Street to Stepan Bandera Street.<br><br> “In a statement to the DFRLab, Górka said his account was accessed without his consent. “This is not my post loaded to my Facebook page,” he explained. “My site was hacked, some days ago.” At the time of publishing, Piotr Górka's post and his Facebook account were no longer accessible.<br><br> “The post on Górka's Facebook page was shared by Dariusz Walusiak's Facebook account; the account also reposted it on the Facebook walls of more than twenty other Facebook users, including Adam Kalita, currently working at Krakow branch of the Institute of National Remembrance; Jan Kasprzyk, head of the Office for War Veterans and Victims of Oppression; and Alicja Kondraciuk, a Polish public figure living in Krakow.<br><br> “Walusiak's Facebook account is also no longer accessible. 
Given his work on Polish history and identity, it seems highly unlikely he would support the Bandera measure; the DFRLab has also reached out to him for comment.<br><br> “The fact that Joker DPR's Telegram post included screenshots of their Facebook posts raises the strong possibility that both Facebook accounts were compromised, and that hackers planted false statements on their pages that would seem out of character for them in order to gain further attention to the forged documents.”</i><br><br> In this example, threat actors used compromised accounts of Polish historians who have enough relevant knowledge to plausibly weigh in on the forged letters narrative (T0143.003: Impersonated Persona, T0097.101: Local Persona, T0097.108: Expert Persona, T0146: Account, T0150.005: Compromised, T0151.001: Social Media Platform). <br><br> This matches T0097.108: Expert Persona because the impersonation exploited Górka and Walusiak's existing personas as experts in Polish history. |
| [T0150.005 Compromised](../../generated_pages/techniques/T0150.005.md) | IT00000236 <i>“The August 17 [2022] Telegram post [which contained a falsified letter from the Ukrainian Minister of Foreign Affairs asking Poland to rename Belwederska Street in Warsaw — the location of the Russian embassy building — as Stepan Bandera Street, in honor of the far-right nationalist who led the Ukrainian Insurgent Army during WWII] also contained screenshots of Facebook posts that appeared on two Facebook accounts belonging to Polish nationals Piotr Górka, an expert in the history of the Polish Air Force, and Dariusz Walusiak, a Polish historian and documentary maker. The Górka post suggested that he fully supported the Polish government's decision to change Belwederska Street to Stepan Bandera Street.<br><br> “In a statement to the DFRLab, Górka said his account was accessed without his consent. “This is not my post loaded to my Facebook page,” he explained. “My site was hacked, some days ago.” At the time of publishing, Piotr Górka's post and his Facebook account were no longer accessible.<br><br> “The post on Górka's Facebook page was shared by Dariusz Walusiak's Facebook account; the account also reposted it on the Facebook walls of more than twenty other Facebook users, including Adam Kalita, currently working at Krakow branch of the Institute of National Remembrance; Jan Kasprzyk, head of the Office for War Veterans and Victims of Oppression; and Alicja Kondraciuk, a Polish public figure living in Krakow.<br><br> “Walusiak's Facebook account is also no longer accessible. 
Given his work on Polish history and identity, it seems highly unlikely he would support the Bandera measure; the DFRLab has also reached out to him for comment.<br><br> “The fact that Joker DPR's Telegram post included screenshots of their Facebook posts raises the strong possibility that both Facebook accounts were compromised, and that hackers planted false statements on their pages that would seem out of character for them in order to gain further attention to the forged documents.”</i><br><br> In this example, threat actors used compromised accounts of Polish historians who have enough relevant knowledge to plausibly weigh in on the forged letters narrative (T0143.003: Impersonated Persona, T0097.101: Local Persona, T0097.108: Expert Persona, T0146: Account, T0150.005: Compromised, T0151.001: Social Media Platform). <br><br> This matches T0097.108: Expert Persona because the impersonation exploited Górka and Walusiak's existing personas as experts in Polish history. |
DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW

| Technique | Description given for this incident |
| --------- | ------------------------- |
| [T0097.204 Think Tank Persona](../../generated_pages/techniques/T0097.204.md) | IT00000244 <i>“The creator of Geopolitika[.]ru is Aleksandr Dugin, who was sanctioned by the United States Department of Treasury in 2015 for his role in the Eurasian Youth Union “for being responsible for or complicit in actions or policies that threaten the peace, security, stability, or sovereignty or territorial integrity of Ukraine.”<br><br> [...]<br><br> “Currently, the website geopolika[.]ru redirects directly to another partner website, Katehon.<br><br> “Katehon poses itself as a think tank focused on geopolitics in an English edition of its website. In contrast, in Russian, it states its aim to develop “ideological, political, diplomatic, economic and military strategy for Russia of the future” with a special role of religion. The president of Katehon's supervisory board is Konstantin Malofeev, a Russian millionaire with connections to the Russian orthodox church and presidential administration, who founded Tsargrad TV, a known source of disinformation. Malofeev was sanctioned by the U.S. Department of Treasury and the European Union in 2014 for material support and financial backing of Russian-backed separatists in eastern Ukraine. Another known figure from the board is Sergei Glaziev, former advisor to Putin in 2012–2019. Dugin is also on the board in the Russian edition of the website, whereas he is omitted in English.”</i><br><br> In this example a domain managed by an actor previously sanctioned by the US Department of the Treasury has been reconfigured to redirect to another website, Katehon (T0149.004: Redirecting Domain, T0150.004: Repurposed).<br><br> Katehon presents itself as a geopolitical think tank in English, but does not maintain this persona when presenting itself to a Russian-speaking audience (T0097.204: Think Tank Persona, T0152.004: Website, T0155.004: Geoblocked). |
| [T0149.004 Redirecting Domain](../../generated_pages/techniques/T0149.004.md) | IT00000243 <i>“The creator of Geopolitika[.]ru is Aleksandr Dugin, who was sanctioned by the United States Department of Treasury in 2015 for his role in the Eurasian Youth Union “for being responsible for or complicit in actions or policies that threaten the peace, security, stability, or sovereignty or territorial integrity of Ukraine.”<br><br> [...]<br><br> “Currently, the website geopolika[.]ru redirects directly to another partner website, Katehon.<br><br> “Katehon poses itself as a think tank focused on geopolitics in an English edition of its website. In contrast, in Russian, it states its aim to develop “ideological, political, diplomatic, economic and military strategy for Russia of the future” with a special role of religion. The president of Katehon's supervisory board is Konstantin Malofeev, a Russian millionaire with connections to the Russian orthodox church and presidential administration, who founded Tsargrad TV, a known source of disinformation. Malofeev was sanctioned by the U.S. Department of Treasury and the European Union in 2014 for material support and financial backing of Russian-backed separatists in eastern Ukraine. Another known figure from the board is Sergei Glaziev, former advisor to Putin in 2012–2019. Dugin is also on the board in the Russian edition of the website, whereas he is omitted in English.”</i><br><br> In this example a domain managed by an actor previously sanctioned by the US Department of the Treasury has been reconfigured to redirect to another website, Katehon (T0149.004: Redirecting Domain, T0150.004: Repurposed).<br><br> Katehon presents itself as a geopolitical think tank in English, but does not maintain this persona when presenting itself to a Russian-speaking audience (T0097.204: Think Tank Persona, T0152.004: Website, T0155.004: Geoblocked). |
| [T0150.004 Repurposed](../../generated_pages/techniques/T0150.004.md) | IT00000345 <i>“The creator of Geopolitika[.]ru is Aleksandr Dugin, who was sanctioned by the United States Department of Treasury in 2015 for his role in the Eurasian Youth Union “for being responsible for or complicit in actions or policies that threaten the peace, security, stability, or sovereignty or territorial integrity of Ukraine.”<br><br> [...]<br><br> “Currently, the website geopolika[.]ru redirects directly to another partner website, Katehon.<br><br> “Katehon poses itself as a think tank focused on geopolitics in an English edition of its website. In contrast, in Russian, it states its aim to develop “ideological, political, diplomatic, economic and military strategy for Russia of the future” with a special role of religion. The president of Katehon's supervisory board is Konstantin Malofeev, a Russian millionaire with connections to the Russian orthodox church and presidential administration, who founded Tsargrad TV, a known source of disinformation. Malofeev was sanctioned by the U.S. Department of Treasury and the European Union in 2014 for material support and financial backing of Russian-backed separatists in eastern Ukraine. Another known figure from the board is Sergei Glaziev, former advisor to Putin in 2012–2019. Dugin is also on the board in the Russian edition of the website, whereas he is omitted in English.”</i><br><br> In this example a domain managed by an actor previously sanctioned by the US Department of the Treasury has been reconfigured to redirect to another website, Katehon (T0149.004: Redirecting Domain, T0150.004: Repurposed).<br><br> Katehon presents itself as a geopolitical think tank in English, but does not maintain this persona when presenting itself to a Russian-speaking audience (T0097.204: Think Tank Persona, T0152.004: Website, T0155.004: Geoblocked). |
| [T0155.004 Geoblocked](../../generated_pages/techniques/T0155.004.md) | IT00000346 <i>“The creator of Geopolitika[.]ru is Aleksandr Dugin, who was sanctioned by the United States Department of Treasury in 2015 for his role in the Eurasian Youth Union “for being responsible for or complicit in actions or policies that threaten the peace, security, stability, or sovereignty or territorial integrity of Ukraine.”<br><br> [...]<br><br> “Currently, the website geopolika[.]ru redirects directly to another partner website, Katehon.<br><br> “Katehon poses itself as a think tank focused on geopolitics in an English edition of its website. In contrast, in Russian, it states its aim to develop “ideological, political, diplomatic, economic and military strategy for Russia of the future” with a special role of religion. The president of Katehon's supervisory board is Konstantin Malofeev, a Russian millionaire with connections to the Russian orthodox church and presidential administration, who founded Tsargrad TV, a known source of disinformation. Malofeev was sanctioned by the U.S. Department of Treasury and the European Union in 2014 for material support and financial backing of Russian-backed separatists in eastern Ukraine. Another known figure from the board is Sergei Glaziev, former advisor to Putin in 2012–2019. Dugin is also on the board in the Russian edition of the website, whereas he is omitted in English.”</i><br><br> In this example a domain managed by an actor previously sanctioned by the US Department of the Treasury has been reconfigured to redirect to another website, Katehon (T0149.004: Redirecting Domain, T0150.004: Repurposed).<br><br> Katehon presents itself as a geopolitical think tank in English, but does not maintain this persona when presenting itself to a Russian-speaking audience (T0097.204: Think Tank Persona, T0152.004: Website, T0155.004: Geoblocked). |
DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW
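Note: the T0149.004 behaviour described above (geopolika[.]ru forwarding visitors to Katehon) can be documented by tracing HTTP `Location` headers hop by hop. A minimal sketch of that tracing logic, assuming a caller-supplied `fetch` function; the domain names below are placeholders, not from the incident:

```python
from urllib.parse import urljoin

def follow_redirects(url, fetch, max_hops=10):
    """Trace an HTTP redirect chain starting at `url`.

    `fetch(url)` must return (status_code, location_header_or_None);
    in practice it would wrap an HTTP client with automatic redirects
    disabled. Returns the list of URLs visited, ending at the final
    destination.
    """
    chain = [url]
    for _ in range(max_hops):  # cap hops to avoid redirect loops
        status, location = fetch(url)
        if not (300 <= status < 400 and location):
            break  # not a redirect: the chain is complete
        url = urljoin(url, location)  # Location may be a relative URL
        chain.append(url)
    return chain

# Stubbed responses standing in for live requests (hypothetical domains)
responses = {
    "http://old-site.example/": (301, "http://partner-site.example/"),
    "http://partner-site.example/": (200, None),
}
print(follow_redirects("http://old-site.example/", responses.__getitem__))
```

Recording the full chain rather than a single observation makes the relationship between the redirecting domain and its destination reproducible in incident notes.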

| Technique | Description given for this incident |
| --------- | ------------------------- |
| [T0097.111 Government Official Persona](../../generated_pages/techniques/T0097.111.md) | IT00000337 <i>“After the European Union banned Kremlin-backed media outlets and social media giants demoted their posts for peddling falsehoods about the war in Ukraine, Moscow has turned to its cadre of diplomats, government spokespeople and ministers — many of whom have extensive followings on social media — to promote disinformation about the conflict in Eastern Europe, according to four EU and United States officials.”</i><br><br>In this example authentic Russian government officials used their own accounts to promote false narratives (T0143.001: Authentic Persona, T0097.111: Government Official Persona).<br><br>The use of accounts managed by authentic government officials and diplomats to spread false narratives makes it harder for platforms to enforce content moderation, because of the political ramifications they may face for censoring elected officials (T0131: Exploit TOS/Content Moderation). For example, Twitter previously argued that official channels of world leaders are not removed due to the high public interest associated with their activities. |
| [T0097.203 Fact Checking Organisation Persona](../../generated_pages/techniques/T0097.203.md) | IT00000295 <i>“[Russia's social media] reach isn't the same as Russian state media, but they are trying to recreate what RT and Sputnik had done," said one EU official involved in tracking Russian disinformation. "It's a coordinated effort that goes beyond social media and involves specific websites."<br><br> “Central to that wider online playbook is a Telegram channel called Warfakes and an affiliated website. Since the beginning of the conflict, that social media channel has garnered more than 725,000 members and repeatedly shares alleged fact-checks aimed at debunking Ukrainian narratives, using language similar to Western-style fact-checking outlets.”</i><br><br> In this example a Telegram channel (T0151.004: Chat Platform, T0155.007: Encrypted Communication Channel) was established which presented itself as a source of fact checks (T0097.203: Fact Checking Organisation Persona). |
| [T0131 Exploit TOS/Content Moderation](../../generated_pages/techniques/T0131.md) | IT00000338 <i>“After the European Union banned Kremlin-backed media outlets and social media giants demoted their posts for peddling falsehoods about the war in Ukraine, Moscow has turned to its cadre of diplomats, government spokespeople and ministers — many of whom have extensive followings on social media — to promote disinformation about the conflict in Eastern Europe, according to four EU and United States officials.”</i><br><br>In this example authentic Russian government officials used their own accounts to promote false narratives (T0143.001: Authentic Persona, T0097.111: Government Official Persona).<br><br>The use of accounts managed by authentic government officials and diplomats to spread false narratives makes it harder for platforms to enforce content moderation, because of the political ramifications they may face for censoring elected officials (T0131: Exploit TOS/Content Moderation). For example, Twitter previously argued that official channels of world leaders are not removed due to the high public interest associated with their activities. |
| [T0143.001 Authentic Persona](../../generated_pages/techniques/T0143.001.md) | IT00000336 <i>“After the European Union banned Kremlin-backed media outlets and social media giants demoted their posts for peddling falsehoods about the war in Ukraine, Moscow has turned to its cadre of diplomats, government spokespeople and ministers — many of whom have extensive followings on social media — to promote disinformation about the conflict in Eastern Europe, according to four EU and United States officials.”</i><br><br>In this example authentic Russian government officials used their own accounts to promote false narratives (T0143.001: Authentic Persona, T0097.111: Government Official Persona).<br><br>The use of accounts managed by authentic government officials and diplomats to spread false narratives makes it harder for platforms to enforce content moderation, because of the political ramifications they may face for censoring elected officials (T0131: Exploit TOS/Content Moderation). For example, Twitter previously argued that official channels of world leaders are not removed due to the high public interest associated with their activities. |
| [T0151.004 Chat Platform](../../generated_pages/techniques/T0151.004.md) | IT00000294 <i>“[Russia's social media] reach isn't the same as Russian state media, but they are trying to recreate what RT and Sputnik had done," said one EU official involved in tracking Russian disinformation. "It's a coordinated effort that goes beyond social media and involves specific websites."<br><br> “Central to that wider online playbook is a Telegram channel called Warfakes and an affiliated website. Since the beginning of the conflict, that social media channel has garnered more than 725,000 members and repeatedly shares alleged fact-checks aimed at debunking Ukrainian narratives, using language similar to Western-style fact-checking outlets.”</i><br><br> In this example a Telegram channel (T0151.004: Chat Platform, T0155.007: Encrypted Communication Channel) was established which presented itself as a source of fact checks (T0097.203: Fact Checking Organisation Persona). |
DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW


@@ -21,9 +21,9 @@
| Technique | Description given for this incident |
| --------- | ------------------------- |
| [T0097.111 Government Official Persona](../../generated_pages/techniques/T0097.111.md) | IT00000343 <i>“On October 23, Canadas Foreign Ministry said it had discovered a disinformation campaign, likely tied to China, aimed at discrediting dozens of Canadian politicians, including Prime Minister Justin Trudeau.<br><br> “The ministry said the campaign took place in August and September. It used new and hijacked social media accounts to bulk-post messages targeting Canadian politicians (T0141.001: Acquire Compromised Account).<br><br> “A Chinese Embassy in Canada spokesperson dismissed Canadas accusation as baseless.<br><br> ““Canada was a downright liar and disseminator of false information… Beijing has never meddled in another nations domestic affairs.”<br><br> “A Chinese Embassy in Canada spokesperson dismissed Canadas accusation as baseless.<br><br> “That is false.<br><br> “The Canadian government's report is based on an investigation conducted by its Rapid Response Mechanism cyber intelligence unit in cooperation with the social media platforms.<br><br> “The investigation exposed Chinas disinformation campaign dubbed “Spamouflage” -- for its tactic of using “a network of new or hijacked social media accounts that posts and increases the number of propaganda messages across multiple social media platforms including Facebook, X/Twitter, Instagram, YouTube, Medium, Reddit, TikTok, and LinkedIn.””</i><br><br> In this case a network of accounts attributed to China were identified operating on multiple platforms. The report was dismissed as false information by an official in the Chinese Embassy in Canada (T0143.001: Authentic Persona, T0097.111: Government Official Persona, T0129.006: Deny Involvement). |
| [T0129.006 Deny Involvement](../../generated_pages/techniques/T0129.006.md) | IT00000344 <i>“On October 23, Canadas Foreign Ministry said it had discovered a disinformation campaign, likely tied to China, aimed at discrediting dozens of Canadian politicians, including Prime Minister Justin Trudeau.<br><br> “The ministry said the campaign took place in August and September. It used new and hijacked social media accounts to bulk-post messages targeting Canadian politicians (T0141.001: Acquire Compromised Account).<br><br> “A Chinese Embassy in Canada spokesperson dismissed Canadas accusation as baseless.<br><br> ““Canada was a downright liar and disseminator of false information… Beijing has never meddled in another nations domestic affairs.”<br><br> “A Chinese Embassy in Canada spokesperson dismissed Canadas accusation as baseless.<br><br> “That is false.<br><br> “The Canadian government's report is based on an investigation conducted by its Rapid Response Mechanism cyber intelligence unit in cooperation with the social media platforms.<br><br> “The investigation exposed Chinas disinformation campaign dubbed “Spamouflage” -- for its tactic of using “a network of new or hijacked social media accounts that posts and increases the number of propaganda messages across multiple social media platforms including Facebook, X/Twitter, Instagram, YouTube, Medium, Reddit, TikTok, and LinkedIn.””</i><br><br> In this case a network of accounts attributed to China were identified operating on multiple platforms. The report was dismissed as false information by an official in the Chinese Embassy in Canada (T0143.001: Authentic Persona, T0097.111: Government Official Persona, T0129.006: Deny Involvement). |
| [T0143.001 Authentic Persona](../../generated_pages/techniques/T0143.001.md) | IT00000342 <i>“On October 23, Canadas Foreign Ministry said it had discovered a disinformation campaign, likely tied to China, aimed at discrediting dozens of Canadian politicians, including Prime Minister Justin Trudeau.<br><br> “The ministry said the campaign took place in August and September. It used new and hijacked social media accounts to bulk-post messages targeting Canadian politicians (T0141.001: Acquire Compromised Account).<br><br> “A Chinese Embassy in Canada spokesperson dismissed Canadas accusation as baseless.<br><br> ““Canada was a downright liar and disseminator of false information… Beijing has never meddled in another nations domestic affairs.”<br><br> “A Chinese Embassy in Canada spokesperson dismissed Canadas accusation as baseless.<br><br> “That is false.<br><br> “The Canadian government's report is based on an investigation conducted by its Rapid Response Mechanism cyber intelligence unit in cooperation with the social media platforms.<br><br> “The investigation exposed Chinas disinformation campaign dubbed “Spamouflage” -- for its tactic of using “a network of new or hijacked social media accounts that posts and increases the number of propaganda messages across multiple social media platforms including Facebook, X/Twitter, Instagram, YouTube, Medium, Reddit, TikTok, and LinkedIn.””</i><br><br> In this case a network of accounts attributed to China were identified operating on multiple platforms. The report was dismissed as false information by an official in the Chinese Embassy in Canada (T0143.001: Authentic Persona, T0097.111: Government Official Persona, T0129.006: Deny Involvement). |
| [T0097.111 Government Official Persona](../../generated_pages/techniques/T0097.111.md) | IT00000343 <i>“On October 23, Canadas Foreign Ministry said it had discovered a disinformation campaign, likely tied to China, aimed at discrediting dozens of Canadian politicians, including Prime Minister Justin Trudeau.<br><br> “The ministry said the campaign took place in August and September. It used new and hijacked social media accounts to bulk-post messages targeting Canadian politicians (T0146: Account, T0150.001: Newly Created, T0150.005: Compromised).<br><br> “A Chinese Embassy in Canada spokesperson dismissed Canadas accusation as baseless.<br><br> ““Canada was a downright liar and disseminator of false information… Beijing has never meddled in another nations domestic affairs.”<br><br> “A Chinese Embassy in Canada spokesperson dismissed Canadas accusation as baseless.<br><br> “That is false.<br><br> “The Canadian government's report is based on an investigation conducted by its Rapid Response Mechanism cyber intelligence unit in cooperation with the social media platforms.<br><br> “The investigation exposed Chinas disinformation campaign dubbed “Spamouflage” -- for its tactic of using “a network of new or hijacked social media accounts that posts and increases the number of propaganda messages across multiple social media platforms including Facebook, X/Twitter, Instagram, YouTube, Medium, Reddit, TikTok, and LinkedIn.””</i><br><br> In this case a network of accounts attributed to China were identified operating on multiple platforms. The report was dismissed as false information by an official in the Chinese Embassy in Canada (T0143.001: Authentic Persona, T0097.111: Government Official Persona, T0129.006: Deny Involvement). |
| [T0129.006 Deny Involvement](../../generated_pages/techniques/T0129.006.md) | IT00000344 <i>“On October 23, Canadas Foreign Ministry said it had discovered a disinformation campaign, likely tied to China, aimed at discrediting dozens of Canadian politicians, including Prime Minister Justin Trudeau.<br><br> “The ministry said the campaign took place in August and September. It used new and hijacked social media accounts to bulk-post messages targeting Canadian politicians (T0146: Account, T0150.001: Newly Created, T0150.005: Compromised).<br><br> “A Chinese Embassy in Canada spokesperson dismissed Canadas accusation as baseless.<br><br> ““Canada was a downright liar and disseminator of false information… Beijing has never meddled in another nations domestic affairs.”<br><br> “A Chinese Embassy in Canada spokesperson dismissed Canadas accusation as baseless.<br><br> “That is false.<br><br> “The Canadian government's report is based on an investigation conducted by its Rapid Response Mechanism cyber intelligence unit in cooperation with the social media platforms.<br><br> “The investigation exposed Chinas disinformation campaign dubbed “Spamouflage” -- for its tactic of using “a network of new or hijacked social media accounts that posts and increases the number of propaganda messages across multiple social media platforms including Facebook, X/Twitter, Instagram, YouTube, Medium, Reddit, TikTok, and LinkedIn.””</i><br><br> In this case a network of accounts attributed to China were identified operating on multiple platforms. The report was dismissed as false information by an official in the Chinese Embassy in Canada (T0143.001: Authentic Persona, T0097.111: Government Official Persona, T0129.006: Deny Involvement). |
| [T0143.001 Authentic Persona](../../generated_pages/techniques/T0143.001.md) | IT00000342 <i>“On October 23, Canadas Foreign Ministry said it had discovered a disinformation campaign, likely tied to China, aimed at discrediting dozens of Canadian politicians, including Prime Minister Justin Trudeau.<br><br> “The ministry said the campaign took place in August and September. It used new and hijacked social media accounts to bulk-post messages targeting Canadian politicians (T0146: Account, T0150.001: Newly Created, T0150.005: Compromised).<br><br> “A Chinese Embassy in Canada spokesperson dismissed Canadas accusation as baseless.<br><br> ““Canada was a downright liar and disseminator of false information… Beijing has never meddled in another nations domestic affairs.”<br><br> “A Chinese Embassy in Canada spokesperson dismissed Canadas accusation as baseless.<br><br> “That is false.<br><br> “The Canadian government's report is based on an investigation conducted by its Rapid Response Mechanism cyber intelligence unit in cooperation with the social media platforms.<br><br> “The investigation exposed Chinas disinformation campaign dubbed “Spamouflage” -- for its tactic of using “a network of new or hijacked social media accounts that posts and increases the number of propaganda messages across multiple social media platforms including Facebook, X/Twitter, Instagram, YouTube, Medium, Reddit, TikTok, and LinkedIn.””</i><br><br> In this case a network of accounts attributed to China were identified operating on multiple platforms. The report was dismissed as false information by an official in the Chinese Embassy in Canada (T0143.001: Authentic Persona, T0097.111: Government Official Persona, T0129.006: Deny Involvement). |
DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW


@@ -0,0 +1,33 @@
# Incident I00096: China ramps up use of AI misinformation
* **Summary:** <i>The Chinese government is ramping up its use of content generated by artificial intelligence (AI) as it seeks to further its geopolitical goals through misinformation, including targeting the upcoming US presidential election, according to intelligence published by the Microsoft Threat Analysis Centre (MTAC).</i>
* **Incident type:**
* **Year started:**
* **Countries:**
* **Found via:**
* **Date added:**
| Reference | Pub Date | Authors | Org | Archive |
| --------- | -------- | ------- | --- | ------- |
| Technique | Description given for this incident |
| --------- | ------------------------- |
| [T0087.001 Develop AI-Generated Videos (Deepfakes)](../../generated_pages/techniques/T0087.001.md) |  IT00000352 The Microsoft Threat Analysis Centre (MTAC) published a report documenting the use of AI by pro-Chinese threat actors:<br><br><i>On 13 January, Spamouflage [(a Pro-Chinese Communist Party actor)] posted audio clips to YouTube of independent candidate [for Taiwans Jan 2024 presidential election] Terry Gou who also founded electronics giant Foxconn in which Gou endorsed another candidate in the race. This clip was almost certainly AI-generated, and it was swiftly removed by YouTube. A fake letter purporting to be from Gou, endorsing the same candidate, had already circulated Gou had of course made no such endorsement.</i><br><br>Here Spamoflage used an account on YouTube to post AI Generated audio impersonating an electoral candidate (T0146: Account, T0152.006: Video Platform, T0115: Post Content, T0088.001: Develop AI-Generated Audio (Deepfakes), T0143.003: Impersonated Persona, T0097.110: Party Official Persona).<br><br><i>Spamouflage also exploited AI-powered video platform CapCut which is owned by TikTok backers ByteDance to generate fake news anchors which were used in a variety of campaigns targeting the various presidential candidates in Taiwan.</i><br><br>Spamoflage created accounts on CapCut, which it used to create AI-generated videos of fabricated news anchors (T0146: Account, T0154.002: AI Media Platform, T0087.001: Develop AI-Generated Video (Deepfakes), T0143.002: Fabricated Persona, T0097.102: Journalist Persona). |
| [T0088.001 Develop AI-Generated Audio (Deepfakes)](../../generated_pages/techniques/T0088.001.md) |  IT00000347 The Microsoft Threat Analysis Centre (MTAC) published a report documenting the use of AI by pro-Chinese threat actors:<br><br><i>On 13 January, Spamouflage [(a Pro-Chinese Communist Party actor)] posted audio clips to YouTube of independent candidate [for Taiwans Jan 2024 presidential election] Terry Gou who also founded electronics giant Foxconn in which Gou endorsed another candidate in the race. This clip was almost certainly AI-generated, and it was swiftly removed by YouTube. A fake letter purporting to be from Gou, endorsing the same candidate, had already circulated Gou had of course made no such endorsement.</i><br><br>Here Spamoflage used an account on YouTube to post AI Generated audio impersonating an electoral candidate (T0146: Account, T0152.006: Video Platform, T0115: Post Content, T0088.001: Develop AI-Generated Audio (Deepfakes), T0143.003: Impersonated Persona, T0097.110: Party Official Persona).<br><br><i>Spamouflage also exploited AI-powered video platform CapCut which is owned by TikTok backers ByteDance to generate fake news anchors which were used in a variety of campaigns targeting the various presidential candidates in Taiwan.</i><br><br>Spamoflage created accounts on CapCut, which it used to create AI-generated videos of fabricated news anchors (T0146: Account, T0154.002: AI Media Platform, T0087.001: Develop AI-Generated Video (Deepfakes), T0143.002: Fabricated Persona, T0097.102: Journalist Persona). |
| [T0097.102 Journalist Persona](../../generated_pages/techniques/T0097.102.md) |  IT00000354 The Microsoft Threat Analysis Centre (MTAC) published a report documenting the use of AI by pro-Chinese threat actors:<br><br><i>On 13 January, Spamouflage [(a Pro-Chinese Communist Party actor)] posted audio clips to YouTube of independent candidate [for Taiwans Jan 2024 presidential election] Terry Gou who also founded electronics giant Foxconn in which Gou endorsed another candidate in the race. This clip was almost certainly AI-generated, and it was swiftly removed by YouTube. A fake letter purporting to be from Gou, endorsing the same candidate, had already circulated Gou had of course made no such endorsement.</i><br><br>Here Spamoflage used an account on YouTube to post AI Generated audio impersonating an electoral candidate (T0146: Account, T0152.006: Video Platform, T0115: Post Content, T0088.001: Develop AI-Generated Audio (Deepfakes), T0143.003: Impersonated Persona, T0097.110: Party Official Persona).<br><br><i>Spamouflage also exploited AI-powered video platform CapCut which is owned by TikTok backers ByteDance to generate fake news anchors which were used in a variety of campaigns targeting the various presidential candidates in Taiwan.</i><br><br>Spamoflage created accounts on CapCut, which it used to create AI-generated videos of fabricated news anchors (T0146: Account, T0154.002: AI Media Platform, T0087.001: Develop AI-Generated Video (Deepfakes), T0143.002: Fabricated Persona, T0097.102: Journalist Persona). |
| [T0097.110 Party Official Persona](../../generated_pages/techniques/T0097.110.md) |  IT00000348 The Microsoft Threat Analysis Centre (MTAC) published a report documenting the use of AI by pro-Chinese threat actors:<br><br><i>On 13 January, Spamouflage [(a Pro-Chinese Communist Party actor)] posted audio clips to YouTube of independent candidate [for Taiwans Jan 2024 presidential election] Terry Gou who also founded electronics giant Foxconn in which Gou endorsed another candidate in the race. This clip was almost certainly AI-generated, and it was swiftly removed by YouTube. A fake letter purporting to be from Gou, endorsing the same candidate, had already circulated Gou had of course made no such endorsement.</i><br><br>Here Spamoflage used an account on YouTube to post AI Generated audio impersonating an electoral candidate (T0146: Account, T0152.006: Video Platform, T0115: Post Content, T0088.001: Develop AI-Generated Audio (Deepfakes), T0143.003: Impersonated Persona, T0097.110: Party Official Persona).<br><br><i>Spamouflage also exploited AI-powered video platform CapCut which is owned by TikTok backers ByteDance to generate fake news anchors which were used in a variety of campaigns targeting the various presidential candidates in Taiwan.</i><br><br>Spamoflage created accounts on CapCut, which it used to create AI-generated videos of fabricated news anchors (T0146: Account, T0154.002: AI Media Platform, T0087.001: Develop AI-Generated Video (Deepfakes), T0143.002: Fabricated Persona, T0097.102: Journalist Persona). |
| [T0115 Post Content](../../generated_pages/techniques/T0115.md) |  IT00000350 The Microsoft Threat Analysis Centre (MTAC) published a report documenting the use of AI by pro-Chinese threat actors:<br><br><i>On 13 January, Spamouflage [(a Pro-Chinese Communist Party actor)] posted audio clips to YouTube of independent candidate [for Taiwans Jan 2024 presidential election] Terry Gou who also founded electronics giant Foxconn in which Gou endorsed another candidate in the race. This clip was almost certainly AI-generated, and it was swiftly removed by YouTube. A fake letter purporting to be from Gou, endorsing the same candidate, had already circulated Gou had of course made no such endorsement.</i><br><br>Here Spamoflage used an account on YouTube to post AI Generated audio impersonating an electoral candidate (T0146: Account, T0152.006: Video Platform, T0115: Post Content, T0088.001: Develop AI-Generated Audio (Deepfakes), T0143.003: Impersonated Persona, T0097.110: Party Official Persona).<br><br><i>Spamouflage also exploited AI-powered video platform CapCut which is owned by TikTok backers ByteDance to generate fake news anchors which were used in a variety of campaigns targeting the various presidential candidates in Taiwan.</i><br><br>Spamoflage created accounts on CapCut, which it used to create AI-generated videos of fabricated news anchors (T0146: Account, T0154.002: AI Media Platform, T0087.001: Develop AI-Generated Video (Deepfakes), T0143.002: Fabricated Persona, T0097.102: Journalist Persona). |
| [T0143.002 Fabricated Persona](../../generated_pages/techniques/T0143.002.md) |  IT00000353 The Microsoft Threat Analysis Centre (MTAC) published a report documenting the use of AI by pro-Chinese threat actors:<br><br><i>On 13 January, Spamouflage [(a Pro-Chinese Communist Party actor)] posted audio clips to YouTube of independent candidate [for Taiwans Jan 2024 presidential election] Terry Gou who also founded electronics giant Foxconn in which Gou endorsed another candidate in the race. This clip was almost certainly AI-generated, and it was swiftly removed by YouTube. A fake letter purporting to be from Gou, endorsing the same candidate, had already circulated Gou had of course made no such endorsement.</i><br><br>Here Spamoflage used an account on YouTube to post AI Generated audio impersonating an electoral candidate (T0146: Account, T0152.006: Video Platform, T0115: Post Content, T0088.001: Develop AI-Generated Audio (Deepfakes), T0143.003: Impersonated Persona, T0097.110: Party Official Persona).<br><br><i>Spamouflage also exploited AI-powered video platform CapCut which is owned by TikTok backers ByteDance to generate fake news anchors which were used in a variety of campaigns targeting the various presidential candidates in Taiwan.</i><br><br>Spamoflage created accounts on CapCut, which it used to create AI-generated videos of fabricated news anchors (T0146: Account, T0154.002: AI Media Platform, T0087.001: Develop AI-Generated Video (Deepfakes), T0143.002: Fabricated Persona, T0097.102: Journalist Persona). |
| [T0152.006 Video Platform](../../generated_pages/techniques/T0152.006.md) |  IT00000349 The Microsoft Threat Analysis Centre (MTAC) published a report documenting the use of AI by pro-Chinese threat actors:<br><br><i>On 13 January, Spamouflage [(a Pro-Chinese Communist Party actor)] posted audio clips to YouTube of independent candidate [for Taiwans Jan 2024 presidential election] Terry Gou who also founded electronics giant Foxconn in which Gou endorsed another candidate in the race. This clip was almost certainly AI-generated, and it was swiftly removed by YouTube. A fake letter purporting to be from Gou, endorsing the same candidate, had already circulated Gou had of course made no such endorsement.</i><br><br>Here Spamoflage used an account on YouTube to post AI Generated audio impersonating an electoral candidate (T0146: Account, T0152.006: Video Platform, T0115: Post Content, T0088.001: Develop AI-Generated Audio (Deepfakes), T0143.003: Impersonated Persona, T0097.110: Party Official Persona).<br><br><i>Spamouflage also exploited AI-powered video platform CapCut which is owned by TikTok backers ByteDance to generate fake news anchors which were used in a variety of campaigns targeting the various presidential candidates in Taiwan.</i><br><br>Spamoflage created accounts on CapCut, which it used to create AI-generated videos of fabricated news anchors (T0146: Account, T0154.002: AI Media Platform, T0087.001: Develop AI-Generated Video (Deepfakes), T0143.002: Fabricated Persona, T0097.102: Journalist Persona). |
| [T0154.002 AI Media Platform](../../generated_pages/techniques/T0154.002.md) |  IT00000351 The Microsoft Threat Analysis Centre (MTAC) published a report documenting the use of AI by pro-Chinese threat actors:<br><br><i>On 13 January, Spamouflage [(a Pro-Chinese Communist Party actor)] posted audio clips to YouTube of independent candidate [for Taiwans Jan 2024 presidential election] Terry Gou who also founded electronics giant Foxconn in which Gou endorsed another candidate in the race. This clip was almost certainly AI-generated, and it was swiftly removed by YouTube. A fake letter purporting to be from Gou, endorsing the same candidate, had already circulated Gou had of course made no such endorsement.</i><br><br>Here Spamoflage used an account on YouTube to post AI Generated audio impersonating an electoral candidate (T0146: Account, T0152.006: Video Platform, T0115: Post Content, T0088.001: Develop AI-Generated Audio (Deepfakes), T0143.003: Impersonated Persona, T0097.110: Party Official Persona).<br><br><i>Spamouflage also exploited AI-powered video platform CapCut which is owned by TikTok backers ByteDance to generate fake news anchors which were used in a variety of campaigns targeting the various presidential candidates in Taiwan.</i><br><br>Spamoflage created accounts on CapCut, which it used to create AI-generated videos of fabricated news anchors (T0146: Account, T0154.002: AI Media Platform, T0087.001: Develop AI-Generated Video (Deepfakes), T0143.002: Fabricated Persona, T0097.102: Journalist Persona). |
DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW


@@ -0,0 +1,30 @@
# Incident I00097: Report: Not Just Algorithms
* **Summary:** <i>Many of the systems and elements that platforms build into their products create safety risks for end-users. However, only a very modest selection have been identified for regulatory scrutiny. As the [Australian] government reviews the Basic Online Safety Expectations and Online Safety Act [in 2024], the role of all systems and elements in creating risks need to be comprehensively addressed.<br><br>This report explores the role of four systems (recommender systems, content moderation systems, ad approval systems and ad management systems) in creating risks around eating disorders.</i>
* **Incident type:**
* **Year started:**
* **Countries:**
* **Found via:**
* **Date added:**
| Reference | Pub Date | Authors | Org | Archive |
| --------- | -------- | ------- | --- | ------- |
| Technique | Description given for this incident |
| --------- | ------------------------- |
| [T0018 Purchase Targeted Advertisements](../../generated_pages/techniques/T0018.md) | IT00000358 <i>This report explores the role of four systems (recommender systems, content moderation systems, ad approval systems and ad management systems) in creating risks around eating disorders.<br><br>[...]<br><br>Ad approval systems can create risks. We created 12 fake ads that promoted dangerous weight loss techniques and behaviours. We tested to see if these ads would be approved to run, and they were. This means dangerous behaviours can be promoted in paid-for advertising. (Requests to run ads were withdrawn after approval or rejection, so no dangerous advertising was published as a result of this experiment.)<br><br>Specifically: On TikTok, 100% of the ads were approved to run; On Facebook, 83% of the ads were approved to run; On Google, 75% of the ads were approved to run.<br><br>Ad management systems can create risks. We investigated how platforms allow advertisers to target users, and found that it is possible to target people who may be interested in pro-eating disorder content.<br><br>Specifically: On TikTok: End-users who interact with pro-eating disorder content on TikTok, download advertisers' eating disorder apps or visit their websites can be targeted; On Meta: End-users who interact with pro-eating disorder content on Meta, download advertisers' eating disorder apps or visit their websites can be targeted; On X: End-users who follow pro-eating disorder accounts, or look like them, can be targeted; On Google: End-users who search specific words or combinations of words (including pro-eating disorder words), watch pro-eating disorder YouTube channels and probably those who download eating disorder and mental health apps can be targeted.</i><br><br>Advertising platforms managed by TikTok, Facebook, and Google approved adverts to be displayed on their platforms. These platforms enabled users to deliver targeted advertising to potentially vulnerable platform users (T0018: Purchase Targeted Advertisements, T0153.005: Online Advertising Platform). |
| [T0151.001 Social Media Platform](../../generated_pages/techniques/T0151.001.md) |  IT00000355 <i>This report explores the role of four systems (recommender systems, content moderation systems, ad approval systems and ad management systems) in creating risks around eating disorders.<br><br>[...]<br><br>Content recommender systems can create risks. We created and primed fake accounts for 16-year old Australians and found that some recommender systems will promote pro-eating disorder content to children.<br><br>Specifically: On TikTok, 0% of the content recommended was classified as pro-eating disorder content; On Instagram, 23% of the content recommended was classified as pro-eating disorder content; On X, 67% of content recommended was classified as pro-eating disorder content (and disturbingly, another 13% displayed self-harm imagery).</i><br><br>Content recommendation algorithms developed by Instagram (T0151.001: Social Media Platform, T0153.006: Content Recommendation Algorithm) and X (T0151.008: Microblogging Platform, T0153.006: Content Recommendation Algorithm) promoted harmful content to an account presenting as a 16 year old Australian. |
| [T0151.008 Microblogging Platform](../../generated_pages/techniques/T0151.008.md) |  IT00000357 <i>This report explores the role of four systems (recommender systems, content moderation systems, ad approval systems and ad management systems) in creating risks around eating disorders.<br><br>[...]<br><br>Content recommender systems can create risks. We created and primed fake accounts for 16-year old Australians and found that some recommender systems will promote pro-eating disorder content to children.<br><br>Specifically: On TikTok, 0% of the content recommended was classified as pro-eating disorder content; On Instagram, 23% of the content recommended was classified as pro-eating disorder content; On X, 67% of content recommended was classified as pro-eating disorder content (and disturbingly, another 13% displayed self-harm imagery).</i><br><br>Content recommendation algorithms developed by Instagram (T0151.001: Social Media Platform, T0153.006: Content Recommendation Algorithm) and X (T0151.008: Microblogging Platform, T0153.006: Content Recommendation Algorithm) promoted harmful content to an account presenting as a 16 year old Australian. |
| [T0153.005 Online Advertising Platform](../../generated_pages/techniques/T0153.005.md) | IT00000359 <i>This report explores the role of four systems (recommender systems, content moderation systems, ad approval systems and ad management systems) in creating risks around eating disorders.<br><br>[...]<br><br>Ad approval systems can create risks. We created 12 fake ads that promoted dangerous weight loss techniques and behaviours. We tested to see if these ads would be approved to run, and they were. This means dangerous behaviours can be promoted in paid-for advertising. (Requests to run ads were withdrawn after approval or rejection, so no dangerous advertising was published as a result of this experiment.)<br><br>Specifically: On TikTok, 100% of the ads were approved to run; On Facebook, 83% of the ads were approved to run; On Google, 75% of the ads were approved to run.<br><br>Ad management systems can create risks. We investigated how platforms allow advertisers to target users, and found that it is possible to target people who may be interested in pro-eating disorder content.<br><br>Specifically: On TikTok: End-users who interact with pro-eating disorder content on TikTok, download advertisers' eating disorder apps or visit their websites can be targeted; On Meta: End-users who interact with pro-eating disorder content on Meta, download advertisers' eating disorder apps or visit their websites can be targeted; On X: End-users who follow pro-eating disorder accounts, or look like them, can be targeted; On Google: End-users who search specific words or combinations of words (including pro-eating disorder words), watch pro-eating disorder YouTube channels and probably those who download eating disorder and mental health apps can be targeted.</i><br><br>Advertising platforms managed by TikTok, Facebook, and Google approved adverts to be displayed on their platforms. These platforms enabled users to deliver targeted advertising to potentially vulnerable platform users (T0018: Purchase Targeted Advertisements, T0153.005: Online Advertising Platform). |
| [T0153.006 Content Recommendation Algorithm](../../generated_pages/techniques/T0153.006.md) |  IT00000356 <i>This report explores the role of four systems (recommender systems, content moderation systems, ad approval systems and ad management systems) in creating risks around eating disorders.<br><br>[...]<br><br>Content recommender systems can create risks. We created and primed fake accounts for 16-year old Australians and found that some recommender systems will promote pro-eating disorder content to children.<br><br>Specifically: On TikTok, 0% of the content recommended was classified as pro-eating disorder content; On Instagram, 23% of the content recommended was classified as pro-eating disorder content; On X, 67% of content recommended was classified as pro-eating disorder content (and disturbingly, another 13% displayed self-harm imagery).</i><br><br>Content recommendation algorithms developed by Instagram (T0151.001: Social Media Platform, T0153.006: Content Recommendation Algorithm) and X (T0151.008: Microblogging Platform, T0153.006: Content Recommendation Algorithm) promoted harmful content to an account presenting as a 16 year old Australian. |
DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW

# Incident I00098: Gaming The System: How Extremists Exploit Gaming Sites And What Can Be Done To Counter Them
* **Summary:** <i>Extremist actors are exploiting online gaming sites to disseminate violent ideologies, network with likeminded people, and perpetrate real-world harm. The sites where these actors operate include both those offering online video games and “gaming-adjacent” platforms, which host content and discussions related to gaming.<br><br>This report draws on existing literature; fresh interviews with gamers, gaming company executives, and experts; and findings from a multinational survey of gamers conducted in January 2023.<br><br>Online gaming is an enormous industry unto itself. In 2022, the global video game industry generated revenue of almost $200 billion and provided entertainment to more than three billion consumers around the world. The largest technology companies—including Microsoft, Amazon, Apple, Facebook, and Google—are heavily invested in gaming as owners of major game studios and their blockbuster titles, dominant distribution platforms, and popular gaming-adjacent sites. <br><br>A growing body of evidence shows that bad actors exploit basic features of video games and adjacent platforms to channel hate-based rhetoric, network with potential sympathizers, and mobilize for action—sometimes with deadly consequences. The relative ease with which extremists have been able to manipulate gaming spaces points to the need for urgent action by industry actors to avoid further harm. Although some gaming companies have made recent investments in content moderation technologies and systems, most companies are still far behind in terms of adequately governing and mitigating abuse of their platforms. <br><br>This call to address extremist exploitation became more urgent in April 2023 in the wake of media reports that the large gaming-adjacent platform, Discord, had been used by a young U.S. air national guardsman for the reckless and allegedly illegal sharing of top-secret military documents, which then were spread to other online sites. 
Separately, Microsoft's president, Brad Smith, revealed in an interview in April that his company had detected Russian operatives seeking to infiltrate discussion groups on Discord made up of people who play Microsoft's popular online game Minecraft. <br><br>Yet another reason to pay attention to the ways gaming spaces have been misused is that the technologies that help make video games so appealing are poised to become far more common. Immersive and interactive features of games are precursors of the "metaverse" platforms that Meta (formerly Facebook) and other companies are trying to develop. Heeding the popularity of gaming, these companies are pouring billions of dollars into the creation of a fully immersive 3-D Internet. Addressing the extremist exploitation of gaming spaces today will better prepare the industry to usher in new technologies while preventing harm to individuals and societies. </i>
* **Incident Type:**
* **Year started:**
* **Countries:** ,
* **Found via:**
* **Date added:**
| Reference | Pub Date | Authors | Org | Archive |
| --------- | -------- | ------- | --- | ------- |
| Technique | Description given for this incident |
| --------- | ------------------------- |
| [T0147.001 Game](../../generated_pages/techniques/T0147.001.md) |  IT00000360 This report looks at how extremists exploit games and gaming-adjacent platforms:<br><br><i>Ethnic Cleansing, a first-person shooter game created by the U.S. white supremacist organization National Alliance in 2002, was advertised as follows: “[T]he Race War has already begun. Your character, Will, runs through a ghetto blasting away at various blacks and spics to attempt to gain entrance to the subway system...where the jews have hidden to avoid the carnage. Then you get to blow away jews as they scream Oy Vey! on your way to the command center.”<br><br>While games like Ethnic Cleansing —and others of a similar genocidal variety, including titles like Shoot the Blacks, Hatred, and Muslim Massacre —generate some press attention and controversy, they tend to be ignored by the large majority of gamers, who consider them “crude” and “badly designed.” Nevertheless, these games are still available in some corners of the Internet and have been downloaded by hundreds of thousands of users. <br><br>A second strategy involves the modification (“modding”) of existing video games to twist the narrative into an extremist message or fantasy. One example is the Islamic State terrorist group’s modding of the first-person shooter video game, ARMA III, to make Islamic fighters the heroic protagonists rather than the villains. With their powerful immersive quality, these video games have, in some instances, been effective at inspiring and allegedly training extremists to perpetrate real-world attacks.<br><br>Third, extremist actors have used in-game chat functions to open lines of communication with ideological sympathizers and potential recruits. 
Although the lack of in-game communication data has made it impossible for academic researchers to track the presence of extremists in a systematic way, there is enough anecdotal evidence—including evidence obtained from police investigation files—to infer that in-game chatrooms can and do function as “radicalization funnels” in at least some cases.</i><br><br>White supremacists created a game aligned with their ideology (T0147.001: Game). The Islamic State terrorist group created a mod of the game ARMA 3 (T0151.015: Online Game Platform, T0147.002: Game Mod). Extremists also use communication features available in online games to recruit new members. |
| [T0147.002 Game Mod](../../generated_pages/techniques/T0147.002.md) |  IT00000362 This report looks at how extremists exploit games and gaming-adjacent platforms:<br><br><i>Ethnic Cleansing, a first-person shooter game created by the U.S. white supremacist organization National Alliance in 2002, was advertised as follows: “[T]he Race War has already begun. Your character, Will, runs through a ghetto blasting away at various blacks and spics to attempt to gain entrance to the subway system...where the jews have hidden to avoid the carnage. Then you get to blow away jews as they scream Oy Vey! on your way to the command center.”<br><br>While games like Ethnic Cleansing —and others of a similar genocidal variety, including titles like Shoot the Blacks, Hatred, and Muslim Massacre —generate some press attention and controversy, they tend to be ignored by the large majority of gamers, who consider them “crude” and “badly designed.” Nevertheless, these games are still available in some corners of the Internet and have been downloaded by hundreds of thousands of users. <br><br>A second strategy involves the modification (“modding”) of existing video games to twist the narrative into an extremist message or fantasy. One example is the Islamic State terrorist group’s modding of the first-person shooter video game, ARMA III, to make Islamic fighters the heroic protagonists rather than the villains. With their powerful immersive quality, these video games have, in some instances, been effective at inspiring and allegedly training extremists to perpetrate real-world attacks.<br><br>Third, extremist actors have used in-game chat functions to open lines of communication with ideological sympathizers and potential recruits. 
Although the lack of in-game communication data has made it impossible for academic researchers to track the presence of extremists in a systematic way, there is enough anecdotal evidence—including evidence obtained from police investigation files—to infer that in-game chatrooms can and do function as “radicalization funnels” in at least some cases.</i><br><br>White supremacists created a game aligned with their ideology (T0147.001: Game). The Islamic State terrorist group created a mod of the game ARMA 3 (T0151.015: Online Game Platform, T0147.002: Game Mod). Extremists also use communication features available in online games to recruit new members. |
| [T0151.015 Online Game Platform](../../generated_pages/techniques/T0151.015.md) |  IT00000361 This report looks at how extremists exploit games and gaming-adjacent platforms:<br><br><i>Ethnic Cleansing, a first-person shooter game created by the U.S. white supremacist organization National Alliance in 2002, was advertised as follows: “[T]he Race War has already begun. Your character, Will, runs through a ghetto blasting away at various blacks and spics to attempt to gain entrance to the subway system...where the jews have hidden to avoid the carnage. Then you get to blow away jews as they scream Oy Vey! on your way to the command center.”<br><br>While games like Ethnic Cleansing —and others of a similar genocidal variety, including titles like Shoot the Blacks, Hatred, and Muslim Massacre —generate some press attention and controversy, they tend to be ignored by the large majority of gamers, who consider them “crude” and “badly designed.” Nevertheless, these games are still available in some corners of the Internet and have been downloaded by hundreds of thousands of users. <br><br>A second strategy involves the modification (“modding”) of existing video games to twist the narrative into an extremist message or fantasy. One example is the Islamic State terrorist group’s modding of the first-person shooter video game, ARMA III, to make Islamic fighters the heroic protagonists rather than the villains. With their powerful immersive quality, these video games have, in some instances, been effective at inspiring and allegedly training extremists to perpetrate real-world attacks.<br><br>Third, extremist actors have used in-game chat functions to open lines of communication with ideological sympathizers and potential recruits. 
Although the lack of in-game communication data has made it impossible for academic researchers to track the presence of extremists in a systematic way, there is enough anecdotal evidence—including evidence obtained from police investigation files—to infer that in-game chatrooms can and do function as “radicalization funnels” in at least some cases.</i><br><br>White supremacists created a game aligned with their ideology (T0147.001: Game). The Islamic State terrorist group created a mod of the game ARMA 3 (T0151.015: Online Game Platform, T0147.002: Game Mod). Extremists also use communication features available in online games to recruit new members. |
DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW

# Incident I00099: More Women Are Facing The Reality Of Deepfakes, And They’re Ruining Lives
* **Summary:** <i>Last October, British writer Helen was alerted to a series of deepfakes on a porn site that appear to show her engaging in extreme acts of sexual violence. That night, the images replayed themselves over and over in horrific nightmares and she was gripped by an all-consuming feeling of dread. “It’s like you’re in a tunnel, going further and further into this enclosed space, where there’s no light,” she tells Vogue. This feeling pervaded Helen’s life. Whenever she left the house, she felt exposed. On runs, she experienced panic attacks. Helen still has no idea who did this to her.</i>
* **Incident Type:**
* **Year started:**
* **Countries:** ,
* **Found via:**
* **Date added:**
| Reference | Pub Date | Authors | Org | Archive |
| --------- | -------- | ------- | --- | ------- |
| Technique | Description given for this incident |
| --------- | ------------------------- |
| [T0086.002 Develop AI-Generated Images (Deepfakes)](../../generated_pages/techniques/T0086.002.md) |  IT00000364 <i>Last October, British writer Helen was alerted to a series of deepfakes on a porn site that appear to show her engaging in extreme acts of sexual violence. That night, the images replayed themselves over and over in horrific nightmares and she was gripped by an all-consuming feeling of dread. “It’s like you’re in a tunnel, going further and further into this enclosed space, where there’s no light,” she tells Vogue. This feeling pervaded Helen’s life. Whenever she left the house, she felt exposed. On runs, she experienced panic attacks. Helen still has no idea who did this to her.<br><br>[...]<br><br>Amnesty International has been investigating the effects of abuse against women on Twitter, specifically in relation to how they act online thereafter. According to the charity, abuse creates what they’ve called “the silencing effect” whereby women feel discouraged from participating online. The same can be said for victims of deepfakes.<br><br>Helen has never been afraid to use her voice, writing deeply personal accounts of postnatal depression. But the deepfakes created a feeling of shame so strong she thought she’d be carrying this “dirty secret” forever, and so she stopped writing.<br><br>[...]<br><br>Meanwhile, deepfake communities are thriving. There are now dedicated sites, user-friendly apps and organised request procedures. Some sites allow you to commission custom deepfakes for £25, while on others you can upload a woman’s image and a bot will strip her naked.<br><br>“This violation is not something that should be normalised,” says Gibi, an ASMR artist with 3.13 million YouTube subscribers. Gibi has given up trying to keep tabs on the deepfakes of her. 
For Gibi, the most egregious part of all of this is the fact that people are “profiting off my face, doing something that I didn’t consent to, like my suffering is your livelihood.” She’s even been approached by a company offering to remove the deepfakes — for £500 a video. This has to end. But how?</i><br><br>A website hosting pornographic content provided users the ability to create deepfake content (T0154.002: AI Media Platform, T0086.002: Develop AI-Generated Images (Deepfakes)). <br><br>Another website enabled users to commission custom deepfakes (T0152.004: Website, T0148.004: Payment Processing Capability, T0086.002: Develop AI-Generated Images (Deepfakes), T0155.005: Paid Access). |
| [T0152.004 Website](../../generated_pages/techniques/T0152.004.md) |  IT00000365 <i>Last October, British writer Helen was alerted to a series of deepfakes on a porn site that appear to show her engaging in extreme acts of sexual violence. That night, the images replayed themselves over and over in horrific nightmares and she was gripped by an all-consuming feeling of dread. “It’s like you’re in a tunnel, going further and further into this enclosed space, where there’s no light,” she tells Vogue. This feeling pervaded Helen’s life. Whenever she left the house, she felt exposed. On runs, she experienced panic attacks. Helen still has no idea who did this to her.<br><br>[...]<br><br>Amnesty International has been investigating the effects of abuse against women on Twitter, specifically in relation to how they act online thereafter. According to the charity, abuse creates what they’ve called “the silencing effect” whereby women feel discouraged from participating online. The same can be said for victims of deepfakes.<br><br>Helen has never been afraid to use her voice, writing deeply personal accounts of postnatal depression. But the deepfakes created a feeling of shame so strong she thought she’d be carrying this “dirty secret” forever, and so she stopped writing.<br><br>[...]<br><br>Meanwhile, deepfake communities are thriving. There are now dedicated sites, user-friendly apps and organised request procedures. Some sites allow you to commission custom deepfakes for £25, while on others you can upload a woman’s image and a bot will strip her naked.<br><br>“This violation is not something that should be normalised,” says Gibi, an ASMR artist with 3.13 million YouTube subscribers. Gibi has given up trying to keep tabs on the deepfakes of her. 
For Gibi, the most egregious part of all of this is the fact that people are “profiting off my face, doing something that I didn’t consent to, like my suffering is your livelihood.” She’s even been approached by a company offering to remove the deepfakes — for £500 a video. This has to end. But how?</i><br><br>A website hosting pornographic content provided users the ability to create deepfake content (T0154.002: AI Media Platform, T0086.002: Develop AI-Generated Images (Deepfakes)). <br><br>Another website enabled users to commission custom deepfakes (T0152.004: Website, T0148.004: Payment Processing Capability, T0086.002: Develop AI-Generated Images (Deepfakes), T0155.005: Paid Access). |
| [T0154.002 AI Media Platform](../../generated_pages/techniques/T0154.002.md) |  IT00000363 <i>Last October, British writer Helen was alerted to a series of deepfakes on a porn site that appear to show her engaging in extreme acts of sexual violence. That night, the images replayed themselves over and over in horrific nightmares and she was gripped by an all-consuming feeling of dread. “It’s like you’re in a tunnel, going further and further into this enclosed space, where there’s no light,” she tells Vogue. This feeling pervaded Helen’s life. Whenever she left the house, she felt exposed. On runs, she experienced panic attacks. Helen still has no idea who did this to her.<br><br>[...]<br><br>Amnesty International has been investigating the effects of abuse against women on Twitter, specifically in relation to how they act online thereafter. According to the charity, abuse creates what they’ve called “the silencing effect” whereby women feel discouraged from participating online. The same can be said for victims of deepfakes.<br><br>Helen has never been afraid to use her voice, writing deeply personal accounts of postnatal depression. But the deepfakes created a feeling of shame so strong she thought she’d be carrying this “dirty secret” forever, and so she stopped writing.<br><br>[...]<br><br>Meanwhile, deepfake communities are thriving. There are now dedicated sites, user-friendly apps and organised request procedures. Some sites allow you to commission custom deepfakes for £25, while on others you can upload a woman’s image and a bot will strip her naked.<br><br>“This violation is not something that should be normalised,” says Gibi, an ASMR artist with 3.13 million YouTube subscribers. Gibi has given up trying to keep tabs on the deepfakes of her. 
For Gibi, the most egregious part of all of this is the fact that people are “profiting off my face, doing something that I didn’t consent to, like my suffering is your livelihood.” She’s even been approached by a company offering to remove the deepfakes — for £500 a video. This has to end. But how?</i><br><br>A website hosting pornographic content provided users the ability to create deepfake content (T0154.002: AI Media Platform, T0086.002: Develop AI-Generated Images (Deepfakes)). <br><br>Another website enabled users to commission custom deepfakes (T0152.004: Website, T0148.004: Payment Processing Capability, T0086.002: Develop AI-Generated Images (Deepfakes), T0155.005: Paid Access). |
| [T0155.005 Paid Access](../../generated_pages/techniques/T0155.005.md) |  IT00000366 <i>Last October, British writer Helen was alerted to a series of deepfakes on a porn site that appear to show her engaging in extreme acts of sexual violence. That night, the images replayed themselves over and over in horrific nightmares and she was gripped by an all-consuming feeling of dread. “It’s like you’re in a tunnel, going further and further into this enclosed space, where there’s no light,” she tells Vogue. This feeling pervaded Helen’s life. Whenever she left the house, she felt exposed. On runs, she experienced panic attacks. Helen still has no idea who did this to her.<br><br>[...]<br><br>Amnesty International has been investigating the effects of abuse against women on Twitter, specifically in relation to how they act online thereafter. According to the charity, abuse creates what they’ve called “the silencing effect” whereby women feel discouraged from participating online. The same can be said for victims of deepfakes.<br><br>Helen has never been afraid to use her voice, writing deeply personal accounts of postnatal depression. But the deepfakes created a feeling of shame so strong she thought she’d be carrying this “dirty secret” forever, and so she stopped writing.<br><br>[...]<br><br>Meanwhile, deepfake communities are thriving. There are now dedicated sites, user-friendly apps and organised request procedures. Some sites allow you to commission custom deepfakes for £25, while on others you can upload a woman’s image and a bot will strip her naked.<br><br>“This violation is not something that should be normalised,” says Gibi, an ASMR artist with 3.13 million YouTube subscribers. Gibi has given up trying to keep tabs on the deepfakes of her. 
For Gibi, the most egregious part of all of this is the fact that people are “profiting off my face, doing something that I didn’t consent to, like my suffering is your livelihood.” She’s even been approached by a company offering to remove the deepfakes — for £500 a video. This has to end. But how?</i><br><br>A website hosting pornographic content provided users the ability to create deepfake content (T0154.002: AI Media Platform, T0086.002: Develop AI-Generated Images (Deepfakes)). <br><br>Another website enabled users to commission custom deepfakes (T0152.004: Website, T0148.004: Payment Processing Capability, T0086.002: Develop AI-Generated Images (Deepfakes), T0155.005: Paid Access). |
DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW

# Incident I00100: Why ThisPersonDoesNotExist (and its copycats) need to be restricted
* **Summary:** <i>You might have heard about the recent viral sensation, ThisPersonDoesNotExist.com, a website, launched two weeks ago, that uses Nvidia’s publicly available artificial intelligence technology to draw an invented, photo-realistic human being with each refresh. The tech is impressive and artistically evocative. It’s also irresponsible and needs to be restricted immediately.</i>
* **Incident Type:**
* **Year started:**
* **Countries:** ,
* **Found via:**
* **Date added:**
| Reference | Pub Date | Authors | Org | Archive |
| --------- | -------- | ------- | --- | ------- |
| Technique | Description given for this incident |
| --------- | ------------------------- |
| [T0086.002 Develop AI-Generated Images (Deepfakes)](../../generated_pages/techniques/T0086.002.md) |  IT00000369 <i>You might have heard about the recent viral sensation, ThisPersonDoesNotExist.com, a website, launched two weeks ago, that uses Nvidia’s publicly available artificial intelligence technology to draw an invented, photo-realistic human being with each refresh. The tech is impressive and artistically evocative. It’s also irresponsible and needs to be restricted immediately.<br><br>[...]<br><br>Prior to this technology, scammers faced three major risks when using fake photos. Each of these risks had the potential to put them out of business, or in jail.<br><br>Risk #1: Someone recognizes the photo. While the odds of this are long-shot, it does happen.<br><br>Risk #2: Someone reverse image searches the photo with a service like TinEye or Google Image Search and finds that it’s been posted elsewhere. Reverse image search is one of the top anti-fraud measures recommended by consumer protection advocates.<br><br>Risk #3: If the crime is successful, law enforcement uses the fake photo to figure out the scammer’s identity after the fact. Perhaps the scammer used an old classmate’s photo. Perhaps their personal account follows the Instagram member they pilfered. And so on: people make mistakes.<br><br>The problem with AI-generated photos is that they carry none of these risks. No one will recognize a human who’s never existed before. Google Image Search will return 0 results, possibly instilling a false sense of security in the searcher. And AI-generated photos don’t give law enforcement much to work with.</i><br><br>ThisPersonDoesNotExist is an online platform which, when visited, produces AI-generated images of people’s faces (T0146.006: Open Access Platform, T0154.002: AI Media Platform, T0086.002: Develop AI-Generated Images (Deepfakes)). |
| [T0146.006 Open Access Platform](../../generated_pages/techniques/T0146.006.md) |  IT00000367 <i>You might have heard about the recent viral sensation, ThisPersonDoesNotExist.com, a website, launched two weeks ago, that uses Nvidia’s publicly available artificial intelligence technology to draw an invented, photo-realistic human being with each refresh. The tech is impressive and artistically evocative. It’s also irresponsible and needs to be restricted immediately.<br><br>[...]<br><br>Prior to this technology, scammers faced three major risks when using fake photos. Each of these risks had the potential to put them out of business, or in jail.<br><br>Risk #1: Someone recognizes the photo. While the odds of this are long-shot, it does happen.<br><br>Risk #2: Someone reverse image searches the photo with a service like TinEye or Google Image Search and finds that it’s been posted elsewhere. Reverse image search is one of the top anti-fraud measures recommended by consumer protection advocates.<br><br>Risk #3: If the crime is successful, law enforcement uses the fake photo to figure out the scammer’s identity after the fact. Perhaps the scammer used an old classmate’s photo. Perhaps their personal account follows the Instagram member they pilfered. And so on: people make mistakes.<br><br>The problem with AI-generated photos is that they carry none of these risks. No one will recognize a human who’s never existed before. Google Image Search will return 0 results, possibly instilling a false sense of security in the searcher. And AI-generated photos don’t give law enforcement much to work with.</i><br><br>ThisPersonDoesNotExist is an online platform which, when visited, produces AI-generated images of people’s faces (T0146.006: Open Access Platform, T0154.002: AI Media Platform, T0086.002: Develop AI-Generated Images (Deepfakes)). |
| [T0154.002 AI Media Platform](../../generated_pages/techniques/T0154.002.md) |  IT00000368 <i>You might have heard about the recent viral sensation, ThisPersonDoesNotExist.com, a website, launched two weeks ago, that uses Nvidia’s publicly available artificial intelligence technology to draw an invented, photo-realistic human being with each refresh. The tech is impressive and artistically evocative. It’s also irresponsible and needs to be restricted immediately.<br><br>[...]<br><br>Prior to this technology, scammers faced three major risks when using fake photos. Each of these risks had the potential to put them out of business, or in jail.<br><br>Risk #1: Someone recognizes the photo. While the odds of this are long-shot, it does happen.<br><br>Risk #2: Someone reverse image searches the photo with a service like TinEye or Google Image Search and finds that it’s been posted elsewhere. Reverse image search is one of the top anti-fraud measures recommended by consumer protection advocates.<br><br>Risk #3: If the crime is successful, law enforcement uses the fake photo to figure out the scammer’s identity after the fact. Perhaps the scammer used an old classmate’s photo. Perhaps their personal account follows the Instagram member they pilfered. And so on: people make mistakes.<br><br>The problem with AI-generated photos is that they carry none of these risks. No one will recognize a human who’s never existed before. Google Image Search will return 0 results, possibly instilling a false sense of security in the searcher. And AI-generated photos don’t give law enforcement much to work with.</i><br><br>ThisPersonDoesNotExist is an online platform which, when visited, produces AI-generated images of people’s faces (T0146.006: Open Access Platform, T0154.002: AI Media Platform, T0086.002: Develop AI-Generated Images (Deepfakes)). |
DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW

# Incident I00101: Pro-Putin Disinformation Warriors Take War of Aggression to Reddit
* **Summary:** <i>I open my Reddit inbox to a new message: “You have been permanently banned from participating in r/ABoringDystopia because your comment violates this community’s rules,” it states.<br><br>My crime? Pointing out that Russia’s acts of mass rape, torture, and murder against Ukrainians, along with the forcible deportation and re-educating of children, was a funny way for Russia to encourage them to be “neutral” rather than pro-Western. “Your submission has been removed as it appears to be misinformation or misleading” stated a further, public comment from the moderator team.<br><br>Russian rape, torture, and murder in Ukraine is of course, extremely well-documented. Meanwhile, a comment in response saying “Ukraine has done this too” has been allowed to remain, as has a further comment claiming that Ukraine and Russia are “as close as you can get” as nations (a Russian propaganda favorite is to describe them as “brotherly”). These comments clearly disseminating actual disinformation remained on the widely-seen post which had 1,300 upvotes.<br><br>The board, r/ABoringDystopia, “a subreddit for chronicling how Advanced Capitalist Society is not only dystopic but also incredibly boring”, has 781,000 members and while currently primarily focused on Israel and Gaza, recent posts about Ukraine have titles such as “Ukraine death toll spirals, NATO demands more must be willing to die”.<br><br>This is just one of several Reddit communities that have apparently had moderator teams taken over by people pushing a clear anti-Ukraine agenda, with a concurrent, visible spike in pro-Russia propaganda in certain communities in recent months, on what the company calls the “front page of the internet”.</i>
* **Incident Type:**
* **Year started:**
* **Countries:** ,
* **Found via:**
* **Date added:**
| Reference | Pub Date | Authors | Org | Archive |
| --------- | -------- | ------- | --- | ------- |
| Technique | Description given for this incident |
| --------- | ------------------------- |
| [T0124 Suppress Opposition](../../generated_pages/techniques/T0124.md) |  IT00000373 This report looks at changes in content posted to communities on Reddit (called Subreddits) after teams of voluntary moderators are replaced with what appear to be pro-Russian voices:<br><br><i>The r/antiwar subreddit appears to be a very recent takeover target. With 12,900 members it is not the largest community on Reddit, but this does place it squarely within the top 5% of all communities in terms of membership.<br><br>Three months ago a new moderator team was instated by subreddit head u/democracy101. Any posts documenting Russian aggression in Ukraine are now swiftly removed, while the board has been flooded with posts about how Ukraine is losing, or how American “neocons wrecked” the country.<br><br>The pinned post from moderator u/n0ahbody proclaims: “People who call for an end to Russian aggressions but not the Western aggressions Russia is reacting to don’t really want peace.” This user takes the view that any negative opinion about Russia is “shaped by what the fanatically Russophobic MSM wants you to think,” and that the United States “is not threatened by its neighbors. Russia is.”<br><br>When u/n0ahbody took over the sub, the user posted a triumphant and vitriolic diatribe in another pro-Russia subreddit with some 33,500 members, r/EndlessWar. “We are making progress. We are purging the sub of all NAFO and NAFO-adjacent elements. Hundreds of them have been banned over the last 24 hours for various rule infractions, for being NAFO or NAFO-adjacent,” the user said, referencing the grassroots, pro-Ukrainian North Atlantic Fella Organization (NAFO) meme movement.<br><br>Several former users have reported they have indeed been banned from r/antiwar since the change in moderators. “If this subreddit cannot be explicitly against the invasion of Ukraine it will never truly be anti-war,” wrote one user, Halcyon_Rein, in the antiwar subreddit on September 6. 
They then edited the post to say, “Edit: btw, I got f**king banned for this 💀💀💀”</i><br><br>A community hosted on Reddit was taken over by new moderators (T0151.011: Community Sub-Forum, T0150.005: Compromised). These moderators removed content posted to the community which favoured Ukraine over Russia (T0146.004: Administrator Account, T0151.011: Community Sub-Forum, T0124: Suppress Opposition). |
| [T0146.004 Administrator Account](../../generated_pages/techniques/T0146.004.md) |  IT00000372 This report looks at changes content posted to communities on Reddit (called Subreddits) after teams of voluntary moderators are replaced with what appear to be pro-Russian voices:<br><br><i>The r/antiwar subreddit appears to be a very recent takeover target. With 12,900 members it is not the largest community on Reddit, but this does place it squarely within the top 5% of all communities in terms of membership.<br><br>Three months ago a new moderator team was instated by subreddit head u/democracy101. Any posts documenting Russian aggression in Ukraine are now swiftly removed, while the board has been flooded with posts about how Ukraine is losing, or how American “neocons wrecked” the country.<br><br>The pinned post from moderator u/n0ahbody proclaims: “People who call for an end to Russian aggressions but not the Western aggressions Russia is reacting to dont really want peace.” This user takes the view that any negative opinion about Russia is “shaped by what the fanatically Russophobic MSM wants you to think,” and that the United States is not threatened by its neighbors. Russia is.”<br><br>When u/n0ahbody took over the sub, the user posted a triumphant and vitriolic diatribe in another pro-Russia subreddit with some 33,500 members, r/EndlessWar. “We are making progress. We are purging the sub of all NAFO and NAFO-adjacent elements. Hundreds of them have been banned over the last 24 hours for various rule infractions, for being NAFO or NAFO-adjacent,” the user said, referencing the grassroots, pro-Ukrainian North Atlantic Fella Organization (NAFO) meme movement.<br><br>Several former users have reported they have indeed been banned from r/antiwar since the change in moderators. “If this subreddit cannot be explicitly against the invasion of Ukraine it will never truly be anti-war,” wrote one user Halcyon_Rein, in the antiwar subreddit on September 6. 
They then edited the post to say, “Edit: btw, I got f**king banned for this 💀💀💀”</i><br><br>A community hosted on Reddit was taken over by new moderators (T0151.011: Community Sub-Forum, T0150.005: Compromised). These moderators removed content posted to the community which favoured Ukraine over Russia (T0146.004: Administrator Account, T0151.011: Community Sub-Forum, T0124: Suppress Opposition). |
| [T0150.005 Compromised](../../generated_pages/techniques/T0150.005.md) |  IT00000371 This report looks at changes content posted to communities on Reddit (called Subreddits) after teams of voluntary moderators are replaced with what appear to be pro-Russian voices:<br><br><i>The r/antiwar subreddit appears to be a very recent takeover target. With 12,900 members it is not the largest community on Reddit, but this does place it squarely within the top 5% of all communities in terms of membership.<br><br>Three months ago a new moderator team was instated by subreddit head u/democracy101. Any posts documenting Russian aggression in Ukraine are now swiftly removed, while the board has been flooded with posts about how Ukraine is losing, or how American “neocons wrecked” the country.<br><br>The pinned post from moderator u/n0ahbody proclaims: “People who call for an end to Russian aggressions but not the Western aggressions Russia is reacting to dont really want peace.” This user takes the view that any negative opinion about Russia is “shaped by what the fanatically Russophobic MSM wants you to think,” and that the United States is not threatened by its neighbors. Russia is.”<br><br>When u/n0ahbody took over the sub, the user posted a triumphant and vitriolic diatribe in another pro-Russia subreddit with some 33,500 members, r/EndlessWar. “We are making progress. We are purging the sub of all NAFO and NAFO-adjacent elements. Hundreds of them have been banned over the last 24 hours for various rule infractions, for being NAFO or NAFO-adjacent,” the user said, referencing the grassroots, pro-Ukrainian North Atlantic Fella Organization (NAFO) meme movement.<br><br>Several former users have reported they have indeed been banned from r/antiwar since the change in moderators. “If this subreddit cannot be explicitly against the invasion of Ukraine it will never truly be anti-war,” wrote one user Halcyon_Rein, in the antiwar subreddit on September 6. 
They then edited the post to say, “Edit: btw, I got f**king banned for this 💀💀💀”</i><br><br>A community hosted on Reddit was taken over by new moderators (T0151.011: Community Sub-Forum, T0150.005: Compromised). These moderators removed content posted to the community which favoured Ukraine over Russia (T0146.004: Administrator Account, T0151.011: Community Sub-Forum, T0124: Suppress Opposition). |
| [T0151.011 Community Sub-Forum](../../generated_pages/techniques/T0151.011.md) |  IT00000370 This report looks at changes content posted to communities on Reddit (called Subreddits) after teams of voluntary moderators are replaced with what appear to be pro-Russian voices:<br><br><i>The r/antiwar subreddit appears to be a very recent takeover target. With 12,900 members it is not the largest community on Reddit, but this does place it squarely within the top 5% of all communities in terms of membership.<br><br>Three months ago a new moderator team was instated by subreddit head u/democracy101. Any posts documenting Russian aggression in Ukraine are now swiftly removed, while the board has been flooded with posts about how Ukraine is losing, or how American “neocons wrecked” the country.<br><br>The pinned post from moderator u/n0ahbody proclaims: “People who call for an end to Russian aggressions but not the Western aggressions Russia is reacting to dont really want peace.” This user takes the view that any negative opinion about Russia is “shaped by what the fanatically Russophobic MSM wants you to think,” and that the United States is not threatened by its neighbors. Russia is.”<br><br>When u/n0ahbody took over the sub, the user posted a triumphant and vitriolic diatribe in another pro-Russia subreddit with some 33,500 members, r/EndlessWar. “We are making progress. We are purging the sub of all NAFO and NAFO-adjacent elements. Hundreds of them have been banned over the last 24 hours for various rule infractions, for being NAFO or NAFO-adjacent,” the user said, referencing the grassroots, pro-Ukrainian North Atlantic Fella Organization (NAFO) meme movement.<br><br>Several former users have reported they have indeed been banned from r/antiwar since the change in moderators. “If this subreddit cannot be explicitly against the invasion of Ukraine it will never truly be anti-war,” wrote one user Halcyon_Rein, in the antiwar subreddit on September 6. 
They then edited the post to say, “Edit: btw, I got f**king banned for this 💀💀💀”</i><br><br>A community hosted on Reddit was taken over by new moderators (T0151.011: Community Sub-Forum, T0150.005: Compromised). These moderators removed content posted to the community which favoured Ukraine over Russia (T0146.004: Administrator Account, T0151.011: Community Sub-Forum, T0124: Suppress Opposition). |
DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW

# Incident I00102: Ignore The Poway Synagogue Shooter’s Manifesto: Pay Attention To 8chan’s /pol/ Board
* **Summary:** <i>On April 27, 2019, at around 11:30 a.m. local time, a young man with a semi-automatic rifle walked into the Chabad of Poway Synagogue in Poway, California. He opened fire, killing one worshipper and wounding three others. In the hours since the shooting, a manifesto, believed to be written by the shooter, began circulating online. Evidence has also surfaced that, like the Christchurch Mosque shooter, this killer began his rampage with a post on 8chan’s /pol/ board.<br><br>Although both of these attacks may seem different, since they targeted worshippers of different faiths, both shooters were united by the same fascist ideology. They were also both radicalized in the same place: 8chan’s /pol/ board.</i>
* **Incident type:**
* **Year started:**
* **Countries:**
* **Found via:**
* **Date added:**
| Reference | Pub Date | Authors | Org | Archive |
| --------- | -------- | ------- | --- | ------- |
| Technique | Description given for this incident |
| --------- | ------------------------- |
| [T0084.004 Appropriate Content](../../generated_pages/techniques/T0084.004.md) |  IT00000377 <i>On April 27, 2019, at around 11:30 a.m. local time, a young man with a semi-automatic rifle walked into the Chabad of Poway Synagogue in Poway, California. He opened fire, killing one worshipper and wounding three others. In the hours since the shooting, a manifesto, believed to be written by the shooter, began circulating online. Evidence has also surfaced that, like the Christchurch Mosque shooter, this killer began his rampage with a post on 8chan’s /pol/ board.<br><br>Although both of these attacks may seem different, since they targeted worshippers of different faiths, both shooters were united by the same fascist ideology. They were also both radicalized in the same place: 8chan’s /pol/ board.<br><br>This has been corroborated by posts on the board itself, where “anons,” as the posters call themselves, recirculated the shooter’s since-deleted post. In it, the alleged shooter claims to have been “lurking” on the site for a year and a half. He includes a link to a livestream of his rampage — which thankfully does not appear to have worked — and he also includes a pastebin link to his manifesto.<br><br>The very first response to his announcement was another anon cheering him on and telling him to “get the high score,” AKA, kill a huge number of people.</i><br><br>Before carrying out a mass shooting, the shooter posted a thread to 8chan’s /pol/ board. The post directed users to a variety of different platforms (T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0122: Direct Users to Alternative Platforms); a Facebook account on which the shooter attempted to livestream the shooting (T0146: Account, T0151.001: Social Media Platform); and a manifesto they had written hosted on pastebin (T0146.006: Open Access Platform, T0152.005: Paste Platform, T0115: Post Content) and uploaded to the file sharing platform Mediafire (T0152.010: File Hosting Platform, T0085.004: Develop Document).<br><br>The report looks deeper into 8chan’s /pol/ board:<br><br><i>8chan is a large website, which includes a number of different discussion groups about everything from anime to left-wing politics. /pol/ is one particularly active board on the website, and it is best described as a gathering place for extremely online neo-Nazis.<br><br>[...]<br><br>I’ve browsed /pol/ on an almost daily basis since the Christchurch shooting. It has not been difficult to find calls for violence. On Monday, March 25 of this year, I ran across evidence of anons translating the Christchurch shooter’s manifesto into other languages in an attempt to inspire more shootings across the globe.<br><br>This tactic can work, and today’s shooting is proof. The Poway Synagogue shooter directly cited the Christchurch shooter as his inspiration, saying he decided to carry out his attack roughly two weeks after that shooting. On /pol/, many anons refer to the Christchurch shooter, Brenton Tarrant, as “Saint Tarrant,” complete with medieval-inspired iconography.</i><br><br>Manifestos posted to 8chan are translated and reshared by other platform users (T0101: Create Localised Content, T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0084.004: Appropriate Content).<br><br><i>When I began looking through /pol/ right after the Poway Synagogue shooting, I came across several claims that the shootings had been a “false flag” aimed at making the message board look bad.<br><br>When Bellingcat tweeted out a warning about shitposting and the shooter’s manifesto, in the immediate wake of the attack, probable anons even commented on the Tweet in an attempt to deny that a channer had been behind the attack.<br><br>This is a recognizable pattern that occurs in the wake of any crimes committed by members of the board. While the initial response to the Christchurch shooter’s massacre thread was riotous glee, in the days after the shooting many anons began to claim the attack had been a false flag. This actually sparked significant division and debate between the members of /pol/. In the below image, a user mocks other anons for being unable to “believe something in your favor is real.” Another anon responds, “As the evidence comes out, its [sic] quite clear that this was a false flag.”<br><br>In his manifesto, the Poway Synagogue shooter even weighed in on this debate, accusing other anons who called the Christchurch and Tree of Life Synagogue shootings “false flags” to merely have been scared: “They can’t fathom that there are brave White men alive who have the willpower and courage it takes to say, ‘Fuck my life—I’m willing to sacrifice everything for the benefit of my race.’”</i><br><br>Platform users deny that their platform has been used by mass shooters to publish their manifestos (T0129.006: Deny Involvement). |
| [T0085.004 Develop Document](../../generated_pages/techniques/T0085.004.md) |  IT00000378 <i>On April 27, 2019, at around 11:30 a.m. local time, a young man with a semi-automatic rifle walked into the Chabad of Poway Synagogue in Poway, California. He opened fire, killing one worshipper and wounding three others. In the hours since the shooting, a manifesto, believed to be written by the shooter, began circulating online. Evidence has also surfaced that, like the Christchurch Mosque shooter, this killer began his rampage with a post on 8chan’s /pol/ board.<br><br>Although both of these attacks may seem different, since they targeted worshippers of different faiths, both shooters were united by the same fascist ideology. They were also both radicalized in the same place: 8chan’s /pol/ board.<br><br>This has been corroborated by posts on the board itself, where “anons,” as the posters call themselves, recirculated the shooter’s since-deleted post. In it, the alleged shooter claims to have been “lurking” on the site for a year and a half. He includes a link to a livestream of his rampage — which thankfully does not appear to have worked — and he also includes a pastebin link to his manifesto.<br><br>The very first response to his announcement was another anon cheering him on and telling him to “get the high score,” AKA, kill a huge number of people.</i><br><br>Before carrying out a mass shooting, the shooter posted a thread to 8chan’s /pol/ board. The post directed users to a variety of different platforms (T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0122: Direct Users to Alternative Platforms); a Facebook account on which the shooter attempted to livestream the shooting (T0146: Account, T0151.001: Social Media Platform); and a manifesto they had written hosted on pastebin (T0146.006: Open Access Platform, T0152.005: Paste Platform, T0115: Post Content) and uploaded to the file sharing platform Mediafire (T0152.010: File Hosting Platform, T0085.004: Develop Document).<br><br>The report looks deeper into 8chan’s /pol/ board:<br><br><i>8chan is a large website, which includes a number of different discussion groups about everything from anime to left-wing politics. /pol/ is one particularly active board on the website, and it is best described as a gathering place for extremely online neo-Nazis.<br><br>[...]<br><br>I’ve browsed /pol/ on an almost daily basis since the Christchurch shooting. It has not been difficult to find calls for violence. On Monday, March 25 of this year, I ran across evidence of anons translating the Christchurch shooter’s manifesto into other languages in an attempt to inspire more shootings across the globe.<br><br>This tactic can work, and today’s shooting is proof. The Poway Synagogue shooter directly cited the Christchurch shooter as his inspiration, saying he decided to carry out his attack roughly two weeks after that shooting. On /pol/, many anons refer to the Christchurch shooter, Brenton Tarrant, as “Saint Tarrant,” complete with medieval-inspired iconography.</i><br><br>Manifestos posted to 8chan are translated and reshared by other platform users (T0101: Create Localised Content, T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0084.004: Appropriate Content).<br><br><i>When I began looking through /pol/ right after the Poway Synagogue shooting, I came across several claims that the shootings had been a “false flag” aimed at making the message board look bad.<br><br>When Bellingcat tweeted out a warning about shitposting and the shooter’s manifesto, in the immediate wake of the attack, probable anons even commented on the Tweet in an attempt to deny that a channer had been behind the attack.<br><br>This is a recognizable pattern that occurs in the wake of any crimes committed by members of the board. While the initial response to the Christchurch shooter’s massacre thread was riotous glee, in the days after the shooting many anons began to claim the attack had been a false flag. This actually sparked significant division and debate between the members of /pol/. In the below image, a user mocks other anons for being unable to “believe something in your favor is real.” Another anon responds, “As the evidence comes out, its [sic] quite clear that this was a false flag.”<br><br>In his manifesto, the Poway Synagogue shooter even weighed in on this debate, accusing other anons who called the Christchurch and Tree of Life Synagogue shootings “false flags” to merely have been scared: “They can’t fathom that there are brave White men alive who have the willpower and courage it takes to say, ‘Fuck my life—I’m willing to sacrifice everything for the benefit of my race.’”</i><br><br>Platform users deny that their platform has been used by mass shooters to publish their manifestos (T0129.006: Deny Involvement). |
| [T0101 Create Localised Content](../../generated_pages/techniques/T0101.md) |  IT00000375 <i>On April 27, 2019, at around 11:30 a.m. local time, a young man with a semi-automatic rifle walked into the Chabad of Poway Synagogue in Poway, California. He opened fire, killing one worshipper and wounding three others. In the hours since the shooting, a manifesto, believed to be written by the shooter, began circulating online. Evidence has also surfaced that, like the Christchurch Mosque shooter, this killer began his rampage with a post on 8chan’s /pol/ board.<br><br>Although both of these attacks may seem different, since they targeted worshippers of different faiths, both shooters were united by the same fascist ideology. They were also both radicalized in the same place: 8chan’s /pol/ board.<br><br>This has been corroborated by posts on the board itself, where “anons,” as the posters call themselves, recirculated the shooter’s since-deleted post. In it, the alleged shooter claims to have been “lurking” on the site for a year and a half. He includes a link to a livestream of his rampage — which thankfully does not appear to have worked — and he also includes a pastebin link to his manifesto.<br><br>The very first response to his announcement was another anon cheering him on and telling him to “get the high score,” AKA, kill a huge number of people.</i><br><br>Before carrying out a mass shooting, the shooter posted a thread to 8chan’s /pol/ board. The post directed users to a variety of different platforms (T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0122: Direct Users to Alternative Platforms); a Facebook account on which the shooter attempted to livestream the shooting (T0146: Account, T0151.001: Social Media Platform); and a manifesto they had written hosted on pastebin (T0146.006: Open Access Platform, T0152.005: Paste Platform, T0115: Post Content) and uploaded to the file sharing platform Mediafire (T0152.010: File Hosting Platform, T0085.004: Develop Document).<br><br>The report looks deeper into 8chan’s /pol/ board:<br><br><i>8chan is a large website, which includes a number of different discussion groups about everything from anime to left-wing politics. /pol/ is one particularly active board on the website, and it is best described as a gathering place for extremely online neo-Nazis.<br><br>[...]<br><br>I’ve browsed /pol/ on an almost daily basis since the Christchurch shooting. It has not been difficult to find calls for violence. On Monday, March 25 of this year, I ran across evidence of anons translating the Christchurch shooter’s manifesto into other languages in an attempt to inspire more shootings across the globe.<br><br>This tactic can work, and today’s shooting is proof. The Poway Synagogue shooter directly cited the Christchurch shooter as his inspiration, saying he decided to carry out his attack roughly two weeks after that shooting. On /pol/, many anons refer to the Christchurch shooter, Brenton Tarrant, as “Saint Tarrant,” complete with medieval-inspired iconography.</i><br><br>Manifestos posted to 8chan are translated and reshared by other platform users (T0101: Create Localised Content, T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0084.004: Appropriate Content).<br><br><i>When I began looking through /pol/ right after the Poway Synagogue shooting, I came across several claims that the shootings had been a “false flag” aimed at making the message board look bad.<br><br>When Bellingcat tweeted out a warning about shitposting and the shooter’s manifesto, in the immediate wake of the attack, probable anons even commented on the Tweet in an attempt to deny that a channer had been behind the attack.<br><br>This is a recognizable pattern that occurs in the wake of any crimes committed by members of the board. While the initial response to the Christchurch shooter’s massacre thread was riotous glee, in the days after the shooting many anons began to claim the attack had been a false flag. This actually sparked significant division and debate between the members of /pol/. In the below image, a user mocks other anons for being unable to “believe something in your favor is real.” Another anon responds, “As the evidence comes out, its [sic] quite clear that this was a false flag.”<br><br>In his manifesto, the Poway Synagogue shooter even weighed in on this debate, accusing other anons who called the Christchurch and Tree of Life Synagogue shootings “false flags” to merely have been scared: “They can’t fathom that there are brave White men alive who have the willpower and courage it takes to say, ‘Fuck my life—I’m willing to sacrifice everything for the benefit of my race.’”</i><br><br>Platform users deny that their platform has been used by mass shooters to publish their manifestos (T0129.006: Deny Involvement). |
| [T0122 Direct Users to Alternative Platforms](../../generated_pages/techniques/T0122.md) |  IT00000382 <i>On April 27, 2019, at around 11:30 a.m. local time, a young man with a semi-automatic rifle walked into the Chabad of Poway Synagogue in Poway, California. He opened fire, killing one worshipper and wounding three others. In the hours since the shooting, a manifesto, believed to be written by the shooter, began circulating online. Evidence has also surfaced that, like the Christchurch Mosque shooter, this killer began his rampage with a post on 8chan’s /pol/ board.<br><br>Although both of these attacks may seem different, since they targeted worshippers of different faiths, both shooters were united by the same fascist ideology. They were also both radicalized in the same place: 8chan’s /pol/ board.<br><br>This has been corroborated by posts on the board itself, where “anons,” as the posters call themselves, recirculated the shooter’s since-deleted post. In it, the alleged shooter claims to have been “lurking” on the site for a year and a half. He includes a link to a livestream of his rampage — which thankfully does not appear to have worked — and he also includes a pastebin link to his manifesto.<br><br>The very first response to his announcement was another anon cheering him on and telling him to “get the high score,” AKA, kill a huge number of people.</i><br><br>Before carrying out a mass shooting, the shooter posted a thread to 8chan’s /pol/ board. The post directed users to a variety of different platforms (T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0122: Direct Users to Alternative Platforms); a Facebook account on which the shooter attempted to livestream the shooting (T0146: Account, T0151.001: Social Media Platform); and a manifesto they had written hosted on pastebin (T0146.006: Open Access Platform, T0152.005: Paste Platform, T0115: Post Content) and uploaded to the file sharing platform Mediafire (T0152.010: File Hosting Platform, T0085.004: Develop Document).<br><br>The report looks deeper into 8chan’s /pol/ board:<br><br><i>8chan is a large website, which includes a number of different discussion groups about everything from anime to left-wing politics. /pol/ is one particularly active board on the website, and it is best described as a gathering place for extremely online neo-Nazis.<br><br>[...]<br><br>I’ve browsed /pol/ on an almost daily basis since the Christchurch shooting. It has not been difficult to find calls for violence. On Monday, March 25 of this year, I ran across evidence of anons translating the Christchurch shooter’s manifesto into other languages in an attempt to inspire more shootings across the globe.<br><br>This tactic can work, and today’s shooting is proof. The Poway Synagogue shooter directly cited the Christchurch shooter as his inspiration, saying he decided to carry out his attack roughly two weeks after that shooting. On /pol/, many anons refer to the Christchurch shooter, Brenton Tarrant, as “Saint Tarrant,” complete with medieval-inspired iconography.</i><br><br>Manifestos posted to 8chan are translated and reshared by other platform users (T0101: Create Localised Content, T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0084.004: Appropriate Content).<br><br><i>When I began looking through /pol/ right after the Poway Synagogue shooting, I came across several claims that the shootings had been a “false flag” aimed at making the message board look bad.<br><br>When Bellingcat tweeted out a warning about shitposting and the shooter’s manifesto, in the immediate wake of the attack, probable anons even commented on the Tweet in an attempt to deny that a channer had been behind the attack.<br><br>This is a recognizable pattern that occurs in the wake of any crimes committed by members of the board. While the initial response to the Christchurch shooter’s massacre thread was riotous glee, in the days after the shooting many anons began to claim the attack had been a false flag. This actually sparked significant division and debate between the members of /pol/. In the below image, a user mocks other anons for being unable to “believe something in your favor is real.” Another anon responds, “As the evidence comes out, its [sic] quite clear that this was a false flag.”<br><br>In his manifesto, the Poway Synagogue shooter even weighed in on this debate, accusing other anons who called the Christchurch and Tree of Life Synagogue shootings “false flags” to merely have been scared: “They can’t fathom that there are brave White men alive who have the willpower and courage it takes to say, ‘Fuck my life—I’m willing to sacrifice everything for the benefit of my race.’”</i><br><br>Platform users deny that their platform has been used by mass shooters to publish their manifestos (T0129.006: Deny Involvement). |
| [T0129.006 Deny Involvement](../../generated_pages/techniques/T0129.006.md) |  IT00000383 <i>On April 27, 2019, at around 11:30 a.m. local time, a young man with a semi-automatic rifle walked into the Chabad of Poway Synagogue in Poway, California. He opened fire, killing one worshipper and wounding three others. In the hours since the shooting, a manifesto, believed to be written by the shooter, began circulating online. Evidence has also surfaced that, like the Christchurch Mosque shooter, this killer began his rampage with a post on 8chan’s /pol/ board.<br><br>Although both of these attacks may seem different, since they targeted worshippers of different faiths, both shooters were united by the same fascist ideology. They were also both radicalized in the same place: 8chan’s /pol/ board.<br><br>This has been corroborated by posts on the board itself, where “anons,” as the posters call themselves, recirculated the shooter’s since-deleted post. In it, the alleged shooter claims to have been “lurking” on the site for a year and a half. He includes a link to a livestream of his rampage — which thankfully does not appear to have worked — and he also includes a pastebin link to his manifesto.<br><br>The very first response to his announcement was another anon cheering him on and telling him to “get the high score,” AKA, kill a huge number of people.</i><br><br>Before carrying out a mass shooting, the shooter posted a thread to 8chan’s /pol/ board. The post directed users to a variety of different platforms (T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0122: Direct Users to Alternative Platforms); a Facebook account on which the shooter attempted to livestream the shooting (T0146: Account, T0151.001: Social Media Platform); and a manifesto they had written hosted on pastebin (T0146.006: Open Access Platform, T0152.005: Paste Platform, T0115: Post Content) and uploaded to the file sharing platform Mediafire (T0152.010: File Hosting Platform, T0085.004: Develop Document).<br><br>The report looks deeper into 8chan’s /pol/ board:<br><br><i>8chan is a large website, which includes a number of different discussion groups about everything from anime to left-wing politics. /pol/ is one particularly active board on the website, and it is best described as a gathering place for extremely online neo-Nazis.<br><br>[...]<br><br>I’ve browsed /pol/ on an almost daily basis since the Christchurch shooting. It has not been difficult to find calls for violence. On Monday, March 25 of this year, I ran across evidence of anons translating the Christchurch shooter’s manifesto into other languages in an attempt to inspire more shootings across the globe.<br><br>This tactic can work, and today’s shooting is proof. The Poway Synagogue shooter directly cited the Christchurch shooter as his inspiration, saying he decided to carry out his attack roughly two weeks after that shooting. On /pol/, many anons refer to the Christchurch shooter, Brenton Tarrant, as “Saint Tarrant,” complete with medieval-inspired iconography.</i><br><br>Manifestos posted to 8chan are translated and reshared by other platform users (T0101: Create Localised Content, T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0084.004: Appropriate Content).<br><br><i>When I began looking through /pol/ right after the Poway Synagogue shooting, I came across several claims that the shootings had been a “false flag” aimed at making the message board look bad.<br><br>When Bellingcat tweeted out a warning about shitposting and the shooter’s manifesto, in the immediate wake of the attack, probable anons even commented on the Tweet in an attempt to deny that a channer had been behind the attack.<br><br>This is a recognizable pattern that occurs in the wake of any crimes committed by members of the board. While the initial response to the Christchurch shooter’s massacre thread was riotous glee, in the days after the shooting many anons began to claim the attack had been a false flag. This actually sparked significant division and debate between the members of /pol/. In the below image, a user mocks other anons for being unable to “believe something in your favor is real.” Another anon responds, “As the evidence comes out, its [sic] quite clear that this was a false flag.”<br><br>In his manifesto, the Poway Synagogue shooter even weighed in on this debate, accusing other anons who called the Christchurch and Tree of Life Synagogue shootings “false flags” to merely have been scared: “They can’t fathom that there are brave White men alive who have the willpower and courage it takes to say, ‘Fuck my life—I’m willing to sacrifice everything for the benefit of my race.’”</i><br><br>Platform users deny that their platform has been used by mass shooters to publish their manifestos (T0129.006: Deny Involvement). |
| [T0146.006 Open Access Platform](../../generated_pages/techniques/T0146.006.md) |  IT00000376 <i>On April 27, 2019, at around 11:30 a.m. local time, a young man with a semi-automatic rifle walked into the Chabad of Poway Synagogue in Poway, California. He opened fire, killing one worshipper and wounding three others. In the hours since the shooting, a manifesto, believed to be written by the shooter, began circulating online. Evidence has also surfaced that, like the Christchurch Mosque shooter, this killer began his rampage with a post on 8chan's /pol/ board.<br><br>Although both of these attacks may seem different, since they targeted worshippers of different faiths, both shooters were united by the same fascist ideology. They were also both radicalized in the same place: 8chan's /pol/ board.<br><br>This has been corroborated by posts on the board itself, where “anons,” as the posters call themselves, recirculated the shooter's since-deleted post. In it, the alleged shooter claims to have been “lurking” on the site for a year and a half. He includes a link to a livestream of his rampage — which thankfully does not appear to have worked — and he also includes a pastebin link to his manifesto.<br><br>The very first response to his announcement was another anon cheering him on and telling him to “get the high score,” AKA, kill a huge number of people.</i><br><br>Before carrying out a mass shooting, the shooter posted a thread to 8chan's /pol/ board. The post directed users to a variety of different platforms (T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0122: Direct Users to Alternative Platforms); a Facebook account on which the shooter attempted to livestream the shooting (T0146: Account, T0151.001: Social Media Platform); and a manifesto they had written hosted on pastebin (T0146.006: Open Access Platform, T0152.005: Paste Platform, T0115: Post Content) and uploaded to the file sharing platform Mediafire (T0152.010: File Hosting Platform, T0085.004: Develop Document).<br><br>The report looks deeper into 8chan's /pol/ board:<br><br><i>8chan is a large website, which includes a number of different discussion groups about everything from anime to left-wing politics. /pol/ is one particularly active board on the website, and it is best described as a gathering place for extremely online neo-Nazis.<br><br>[...]<br><br>I've browsed /pol/ on an almost daily basis since the Christchurch shooting. It has not been difficult to find calls for violence. On Monday, March 25 of this year, I ran across evidence of anons translating the Christchurch shooter's manifesto into other languages in an attempt to inspire more shootings across the globe.<br><br>This tactic can work, and today's shooting is proof. The Poway Synagogue shooter directly cited the Christchurch shooter as his inspiration, saying he decided to carry out his attack roughly two weeks after that shooting. On /pol/, many anons refer to the Christchurch shooter, Brenton Tarrant, as “Saint Tarrant,” complete with medieval-inspired iconography.</i><br><br>Manifestos posted to 8chan are translated and reshared by other platform users (T0101: Create Localised Content, T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0084.004: Appropriate Content).<br><br><i>When I began looking through /pol/ right after the Poway Synagogue shooting, I came across several claims that the shootings had been a “false flag” aimed at making the message board look bad.<br><br>When Bellingcat tweeted out a warning about shitposting and the shooter's manifesto, in the immediate wake of the attack, probable anons even commented on the Tweet in an attempt to deny that a channer had been behind the attack.<br><br>This is a recognizable pattern that occurs in the wake of any crimes committed by members of the board. While the initial response to the Christchurch shooter's massacre thread was riotous glee, in the days after the shooting many anons began to claim the attack had been a false flag. This actually sparked significant division and debate between the members of /pol/. In the below image, a user mocks other anons for being unable to “believe something in your favor is real.” Another anon responds, “As the evidence comes out, its [sic] quite clear that this was a false flag.”<br><br>In his manifesto, the Poway Synagogue shooter even weighed in on this debate, accusing other anons who called the Christchurch and Tree of Life Synagogue shootings “false flags” to merely have been scared: “They can't fathom that there are brave White men alive who have the willpower and courage it takes to say, 'Fuck my life—I'm willing to sacrifice everything for the benefit of my race.'”</i><br><br>Platform users deny that their platform has been used by mass shooters to publish their manifestos (T0129.006: Deny Involvement). |
| [T0151.001 Social Media Platform](../../generated_pages/techniques/T0151.001.md) |  IT00000381 <i>On April 27, 2019, at around 11:30 a.m. local time, a young man with a semi-automatic rifle walked into the Chabad of Poway Synagogue in Poway, California. He opened fire, killing one worshipper and wounding three others. In the hours since the shooting, a manifesto, believed to be written by the shooter, began circulating online. Evidence has also surfaced that, like the Christchurch Mosque shooter, this killer began his rampage with a post on 8chan's /pol/ board.<br><br>Although both of these attacks may seem different, since they targeted worshippers of different faiths, both shooters were united by the same fascist ideology. They were also both radicalized in the same place: 8chan's /pol/ board.<br><br>This has been corroborated by posts on the board itself, where “anons,” as the posters call themselves, recirculated the shooter's since-deleted post. In it, the alleged shooter claims to have been “lurking” on the site for a year and a half. He includes a link to a livestream of his rampage — which thankfully does not appear to have worked — and he also includes a pastebin link to his manifesto.<br><br>The very first response to his announcement was another anon cheering him on and telling him to “get the high score,” AKA, kill a huge number of people.</i><br><br>Before carrying out a mass shooting, the shooter posted a thread to 8chan's /pol/ board. The post directed users to a variety of different platforms (T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0122: Direct Users to Alternative Platforms); a Facebook account on which the shooter attempted to livestream the shooting (T0146: Account, T0151.001: Social Media Platform); and a manifesto they had written hosted on pastebin (T0146.006: Open Access Platform, T0152.005: Paste Platform, T0115: Post Content) and uploaded to the file sharing platform Mediafire (T0152.010: File Hosting Platform, T0085.004: Develop Document).<br><br>The report looks deeper into 8chan's /pol/ board:<br><br><i>8chan is a large website, which includes a number of different discussion groups about everything from anime to left-wing politics. /pol/ is one particularly active board on the website, and it is best described as a gathering place for extremely online neo-Nazis.<br><br>[...]<br><br>I've browsed /pol/ on an almost daily basis since the Christchurch shooting. It has not been difficult to find calls for violence. On Monday, March 25 of this year, I ran across evidence of anons translating the Christchurch shooter's manifesto into other languages in an attempt to inspire more shootings across the globe.<br><br>This tactic can work, and today's shooting is proof. The Poway Synagogue shooter directly cited the Christchurch shooter as his inspiration, saying he decided to carry out his attack roughly two weeks after that shooting. On /pol/, many anons refer to the Christchurch shooter, Brenton Tarrant, as “Saint Tarrant,” complete with medieval-inspired iconography.</i><br><br>Manifestos posted to 8chan are translated and reshared by other platform users (T0101: Create Localised Content, T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0084.004: Appropriate Content).<br><br><i>When I began looking through /pol/ right after the Poway Synagogue shooting, I came across several claims that the shootings had been a “false flag” aimed at making the message board look bad.<br><br>When Bellingcat tweeted out a warning about shitposting and the shooter's manifesto, in the immediate wake of the attack, probable anons even commented on the Tweet in an attempt to deny that a channer had been behind the attack.<br><br>This is a recognizable pattern that occurs in the wake of any crimes committed by members of the board. While the initial response to the Christchurch shooter's massacre thread was riotous glee, in the days after the shooting many anons began to claim the attack had been a false flag. This actually sparked significant division and debate between the members of /pol/. In the below image, a user mocks other anons for being unable to “believe something in your favor is real.” Another anon responds, “As the evidence comes out, its [sic] quite clear that this was a false flag.”<br><br>In his manifesto, the Poway Synagogue shooter even weighed in on this debate, accusing other anons who called the Christchurch and Tree of Life Synagogue shootings “false flags” to merely have been scared: “They can't fathom that there are brave White men alive who have the willpower and courage it takes to say, 'Fuck my life—I'm willing to sacrifice everything for the benefit of my race.'”</i><br><br>Platform users deny that their platform has been used by mass shooters to publish their manifestos (T0129.006: Deny Involvement). |
| [T0151.012 Image Board Platform](../../generated_pages/techniques/T0151.012.md) |  IT00000374 <i>On April 27, 2019, at around 11:30 a.m. local time, a young man with a semi-automatic rifle walked into the Chabad of Poway Synagogue in Poway, California. He opened fire, killing one worshipper and wounding three others. In the hours since the shooting, a manifesto, believed to be written by the shooter, began circulating online. Evidence has also surfaced that, like the Christchurch Mosque shooter, this killer began his rampage with a post on 8chan's /pol/ board.<br><br>Although both of these attacks may seem different, since they targeted worshippers of different faiths, both shooters were united by the same fascist ideology. They were also both radicalized in the same place: 8chan's /pol/ board.<br><br>This has been corroborated by posts on the board itself, where “anons,” as the posters call themselves, recirculated the shooter's since-deleted post. In it, the alleged shooter claims to have been “lurking” on the site for a year and a half. He includes a link to a livestream of his rampage — which thankfully does not appear to have worked — and he also includes a pastebin link to his manifesto.<br><br>The very first response to his announcement was another anon cheering him on and telling him to “get the high score,” AKA, kill a huge number of people.</i><br><br>Before carrying out a mass shooting, the shooter posted a thread to 8chan's /pol/ board. The post directed users to a variety of different platforms (T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0122: Direct Users to Alternative Platforms); a Facebook account on which the shooter attempted to livestream the shooting (T0146: Account, T0151.001: Social Media Platform); and a manifesto they had written hosted on pastebin (T0146.006: Open Access Platform, T0152.005: Paste Platform, T0115: Post Content) and uploaded to the file sharing platform Mediafire (T0152.010: File Hosting Platform, T0085.004: Develop Document).<br><br>The report looks deeper into 8chan's /pol/ board:<br><br><i>8chan is a large website, which includes a number of different discussion groups about everything from anime to left-wing politics. /pol/ is one particularly active board on the website, and it is best described as a gathering place for extremely online neo-Nazis.<br><br>[...]<br><br>I've browsed /pol/ on an almost daily basis since the Christchurch shooting. It has not been difficult to find calls for violence. On Monday, March 25 of this year, I ran across evidence of anons translating the Christchurch shooter's manifesto into other languages in an attempt to inspire more shootings across the globe.<br><br>This tactic can work, and today's shooting is proof. The Poway Synagogue shooter directly cited the Christchurch shooter as his inspiration, saying he decided to carry out his attack roughly two weeks after that shooting. On /pol/, many anons refer to the Christchurch shooter, Brenton Tarrant, as “Saint Tarrant,” complete with medieval-inspired iconography.</i><br><br>Manifestos posted to 8chan are translated and reshared by other platform users (T0101: Create Localised Content, T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0084.004: Appropriate Content).<br><br><i>When I began looking through /pol/ right after the Poway Synagogue shooting, I came across several claims that the shootings had been a “false flag” aimed at making the message board look bad.<br><br>When Bellingcat tweeted out a warning about shitposting and the shooter's manifesto, in the immediate wake of the attack, probable anons even commented on the Tweet in an attempt to deny that a channer had been behind the attack.<br><br>This is a recognizable pattern that occurs in the wake of any crimes committed by members of the board. While the initial response to the Christchurch shooter's massacre thread was riotous glee, in the days after the shooting many anons began to claim the attack had been a false flag. This actually sparked significant division and debate between the members of /pol/. In the below image, a user mocks other anons for being unable to “believe something in your favor is real.” Another anon responds, “As the evidence comes out, its [sic] quite clear that this was a false flag.”<br><br>In his manifesto, the Poway Synagogue shooter even weighed in on this debate, accusing other anons who called the Christchurch and Tree of Life Synagogue shootings “false flags” to merely have been scared: “They can't fathom that there are brave White men alive who have the willpower and courage it takes to say, 'Fuck my life—I'm willing to sacrifice everything for the benefit of my race.'”</i><br><br>Platform users deny that their platform has been used by mass shooters to publish their manifestos (T0129.006: Deny Involvement). |
| [T0152.005 Paste Platform](../../generated_pages/techniques/T0152.005.md) |  IT00000380 <i>On April 27, 2019, at around 11:30 a.m. local time, a young man with a semi-automatic rifle walked into the Chabad of Poway Synagogue in Poway, California. He opened fire, killing one worshipper and wounding three others. In the hours since the shooting, a manifesto, believed to be written by the shooter, began circulating online. Evidence has also surfaced that, like the Christchurch Mosque shooter, this killer began his rampage with a post on 8chan's /pol/ board.<br><br>Although both of these attacks may seem different, since they targeted worshippers of different faiths, both shooters were united by the same fascist ideology. They were also both radicalized in the same place: 8chan's /pol/ board.<br><br>This has been corroborated by posts on the board itself, where “anons,” as the posters call themselves, recirculated the shooter's since-deleted post. In it, the alleged shooter claims to have been “lurking” on the site for a year and a half. He includes a link to a livestream of his rampage — which thankfully does not appear to have worked — and he also includes a pastebin link to his manifesto.<br><br>The very first response to his announcement was another anon cheering him on and telling him to “get the high score,” AKA, kill a huge number of people.</i><br><br>Before carrying out a mass shooting, the shooter posted a thread to 8chan's /pol/ board. The post directed users to a variety of different platforms (T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0122: Direct Users to Alternative Platforms); a Facebook account on which the shooter attempted to livestream the shooting (T0146: Account, T0151.001: Social Media Platform); and a manifesto they had written hosted on pastebin (T0146.006: Open Access Platform, T0152.005: Paste Platform, T0115: Post Content) and uploaded to the file sharing platform Mediafire (T0152.010: File Hosting Platform, T0085.004: Develop Document).<br><br>The report looks deeper into 8chan's /pol/ board:<br><br><i>8chan is a large website, which includes a number of different discussion groups about everything from anime to left-wing politics. /pol/ is one particularly active board on the website, and it is best described as a gathering place for extremely online neo-Nazis.<br><br>[...]<br><br>I've browsed /pol/ on an almost daily basis since the Christchurch shooting. It has not been difficult to find calls for violence. On Monday, March 25 of this year, I ran across evidence of anons translating the Christchurch shooter's manifesto into other languages in an attempt to inspire more shootings across the globe.<br><br>This tactic can work, and today's shooting is proof. The Poway Synagogue shooter directly cited the Christchurch shooter as his inspiration, saying he decided to carry out his attack roughly two weeks after that shooting. On /pol/, many anons refer to the Christchurch shooter, Brenton Tarrant, as “Saint Tarrant,” complete with medieval-inspired iconography.</i><br><br>Manifestos posted to 8chan are translated and reshared by other platform users (T0101: Create Localised Content, T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0084.004: Appropriate Content).<br><br><i>When I began looking through /pol/ right after the Poway Synagogue shooting, I came across several claims that the shootings had been a “false flag” aimed at making the message board look bad.<br><br>When Bellingcat tweeted out a warning about shitposting and the shooter's manifesto, in the immediate wake of the attack, probable anons even commented on the Tweet in an attempt to deny that a channer had been behind the attack.<br><br>This is a recognizable pattern that occurs in the wake of any crimes committed by members of the board. While the initial response to the Christchurch shooter's massacre thread was riotous glee, in the days after the shooting many anons began to claim the attack had been a false flag. This actually sparked significant division and debate between the members of /pol/. In the below image, a user mocks other anons for being unable to “believe something in your favor is real.” Another anon responds, “As the evidence comes out, its [sic] quite clear that this was a false flag.”<br><br>In his manifesto, the Poway Synagogue shooter even weighed in on this debate, accusing other anons who called the Christchurch and Tree of Life Synagogue shootings “false flags” to merely have been scared: “They can't fathom that there are brave White men alive who have the willpower and courage it takes to say, 'Fuck my life—I'm willing to sacrifice everything for the benefit of my race.'”</i><br><br>Platform users deny that their platform has been used by mass shooters to publish their manifestos (T0129.006: Deny Involvement). |
| [T0152.010 File Hosting Platform](../../generated_pages/techniques/T0152.010.md) |  IT00000379 <i>On April 27, 2019, at around 11:30 a.m. local time, a young man with a semi-automatic rifle walked into the Chabad of Poway Synagogue in Poway, California. He opened fire, killing one worshipper and wounding three others. In the hours since the shooting, a manifesto, believed to be written by the shooter, began circulating online. Evidence has also surfaced that, like the Christchurch Mosque shooter, this killer began his rampage with a post on 8chan's /pol/ board.<br><br>Although both of these attacks may seem different, since they targeted worshippers of different faiths, both shooters were united by the same fascist ideology. They were also both radicalized in the same place: 8chan's /pol/ board.<br><br>This has been corroborated by posts on the board itself, where “anons,” as the posters call themselves, recirculated the shooter's since-deleted post. In it, the alleged shooter claims to have been “lurking” on the site for a year and a half. He includes a link to a livestream of his rampage — which thankfully does not appear to have worked — and he also includes a pastebin link to his manifesto.<br><br>The very first response to his announcement was another anon cheering him on and telling him to “get the high score,” AKA, kill a huge number of people.</i><br><br>Before carrying out a mass shooting, the shooter posted a thread to 8chan's /pol/ board. The post directed users to a variety of different platforms (T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0122: Direct Users to Alternative Platforms); a Facebook account on which the shooter attempted to livestream the shooting (T0146: Account, T0151.001: Social Media Platform); and a manifesto they had written hosted on pastebin (T0146.006: Open Access Platform, T0152.005: Paste Platform, T0115: Post Content) and uploaded to the file sharing platform Mediafire (T0152.010: File Hosting Platform, T0085.004: Develop Document).<br><br>The report looks deeper into 8chan's /pol/ board:<br><br><i>8chan is a large website, which includes a number of different discussion groups about everything from anime to left-wing politics. /pol/ is one particularly active board on the website, and it is best described as a gathering place for extremely online neo-Nazis.<br><br>[...]<br><br>I've browsed /pol/ on an almost daily basis since the Christchurch shooting. It has not been difficult to find calls for violence. On Monday, March 25 of this year, I ran across evidence of anons translating the Christchurch shooter's manifesto into other languages in an attempt to inspire more shootings across the globe.<br><br>This tactic can work, and today's shooting is proof. The Poway Synagogue shooter directly cited the Christchurch shooter as his inspiration, saying he decided to carry out his attack roughly two weeks after that shooting. On /pol/, many anons refer to the Christchurch shooter, Brenton Tarrant, as “Saint Tarrant,” complete with medieval-inspired iconography.</i><br><br>Manifestos posted to 8chan are translated and reshared by other platform users (T0101: Create Localised Content, T0146.006: Open Access Platform, T0151.012: Image Board Platform, T0115: Post Content, T0084.004: Appropriate Content).<br><br><i>When I began looking through /pol/ right after the Poway Synagogue shooting, I came across several claims that the shootings had been a “false flag” aimed at making the message board look bad.<br><br>When Bellingcat tweeted out a warning about shitposting and the shooter's manifesto, in the immediate wake of the attack, probable anons even commented on the Tweet in an attempt to deny that a channer had been behind the attack.<br><br>This is a recognizable pattern that occurs in the wake of any crimes committed by members of the board. While the initial response to the Christchurch shooter's massacre thread was riotous glee, in the days after the shooting many anons began to claim the attack had been a false flag. This actually sparked significant division and debate between the members of /pol/. In the below image, a user mocks other anons for being unable to “believe something in your favor is real.” Another anon responds, “As the evidence comes out, its [sic] quite clear that this was a false flag.”<br><br>In his manifesto, the Poway Synagogue shooter even weighed in on this debate, accusing other anons who called the Christchurch and Tree of Life Synagogue shootings “false flags” to merely have been scared: “They can't fathom that there are brave White men alive who have the willpower and courage it takes to say, 'Fuck my life—I'm willing to sacrifice everything for the benefit of my race.'”</i><br><br>Platform users deny that their platform has been used by mass shooters to publish their manifestos (T0129.006: Deny Involvement). |
DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW

# Incident I00103: The racist AI deepfake that fooled and divided a community
* **Summary:** <i>When an audio clip appeared to show a local school principal making derogatory comments, it went viral online, sparked death threats against the educator and sent ripples through a suburb outside the city of Baltimore. But it was soon exposed as a fake, manipulated by artificial intelligence - so why do people still believe its real?</i>
* **Incident type:**
* **Year started:**
* **Countries:** ,
* **Found via:**
* **Date added:**
| Reference | Pub Date | Authors | Org | Archive |
| --------- | -------- | ------- | --- | ------- |
| Technique | Description given for this incident |
| --------- | ------------------------- |
| [T0088.001 Develop AI-Generated Audio (Deepfakes)](../../generated_pages/techniques/T0088.001.md) |  IT00000385 <i>“I seriously don't understand why I have to constantly put up with these dumbasses here every day.”<br><br>So began what appeared to be a long tirade from the principal of Pikesville High School, punctuated with racist, antisemitic and offensive tropes. It sounded like it had been secretly recorded.<br><br>The speaker went on to bemoan “ungrateful black kids” and Jewish people in the community.<br><br>The clip, first posted in [January 2024], went viral nationally. But it really struck a nerve in the peaceful, leafy suburb of Pikesville, which has large black and Jewish populations, and in the nearby city of Baltimore, Maryland. Principal Eric Eiswert was put on paid administrative leave pending an investigation.<br><br>[...]<br><br>But what those sharing the clip didn't realise at the time was that another bombshell was about to drop: the clip was an AI-generated fake.<br><br>[...]<br><br>[In April 2024], Baltimore Police Chief Robert McCullough confirmed they now had “conclusive evidence that the recording was not authentic”.<br><br>And they believed they knew who made the fake.<br><br>Police charged 31-year-old Dazhon Darien, the school's athletics director, with several counts related to the fake video. Charges included theft, retaliating against a witness and stalking.<br><br>He was arrested at the airport, where police say he was planning to fly to Houston, Texas.<br><br>Police say that Mr Darien had been under investigation by Principal Eiswert over an alleged theft of $1,916 (£1,460) from the school. They also allege there had been “work performance challenges” and his contract was likely not to be renewed.<br><br>Their theory was that by creating the deepfake recording, he hoped to discredit the principal before he could be fired.<br><br>Investigators say they traced an email used to send the original video to a server connected to Mr Darien, and allege that he used Baltimore County Public Schools' computer network to access AI tools. He is due to stand trial in December 2024.</i><br><br>By associating Mr Darien with the server used to email the original AI-generated audio, investigators link Darien to the fabricated content (T0149.005: Server, T0088.001: AI Generated Audio (Deepfakes)). They also assert that Darien used computers owned by the school to access platforms used to generate the audio (T0146: Account, T0154.002: AI Media Platform). |
| [T0146 Account](../../generated_pages/techniques/T0146.md) |  IT00000387 <i>“I seriously don't understand why I have to constantly put up with these dumbasses here every day.”<br><br>So began what appeared to be a long tirade from the principal of Pikesville High School, punctuated with racist, antisemitic and offensive tropes. It sounded like it had been secretly recorded.<br><br>The speaker went on to bemoan “ungrateful black kids” and Jewish people in the community.<br><br>The clip, first posted in [January 2024], went viral nationally. But it really struck a nerve in the peaceful, leafy suburb of Pikesville, which has large black and Jewish populations, and in the nearby city of Baltimore, Maryland. Principal Eric Eiswert was put on paid administrative leave pending an investigation.<br><br>[...]<br><br>But what those sharing the clip didn't realise at the time was that another bombshell was about to drop: the clip was an AI-generated fake.<br><br>[...]<br><br>[In April 2024], Baltimore Police Chief Robert McCullough confirmed they now had “conclusive evidence that the recording was not authentic”.<br><br>And they believed they knew who made the fake.<br><br>Police charged 31-year-old Dazhon Darien, the school's athletics director, with several counts related to the fake video. Charges included theft, retaliating against a witness and stalking.<br><br>He was arrested at the airport, where police say he was planning to fly to Houston, Texas.<br><br>Police say that Mr Darien had been under investigation by Principal Eiswert over an alleged theft of $1,916 (£1,460) from the school. They also allege there had been “work performance challenges” and his contract was likely not to be renewed.<br><br>Their theory was that by creating the deepfake recording, he hoped to discredit the principal before he could be fired.<br><br>Investigators say they traced an email used to send the original video to a server connected to Mr Darien, and allege that he used Baltimore County Public Schools' computer network to access AI tools. He is due to stand trial in December 2024.</i><br><br>By associating Mr Darien with the server used to email the original AI-generated audio, investigators link Darien to the fabricated content (T0149.005: Server, T0088.001: AI Generated Audio (Deepfakes)). They also assert that Darien used computers owned by the school to access platforms used to generate the audio (T0146: Account, T0154.002: AI Media Platform). |
| [T0149.005 Server](../../generated_pages/techniques/T0149.005.md) |  IT00000384 <i>“I seriously don't understand why I have to constantly put up with these dumbasses here every day.”<br><br>So began what appeared to be a long tirade from the principal of Pikesville High School, punctuated with racist, antisemitic and offensive tropes. It sounded like it had been secretly recorded.<br><br>The speaker went on to bemoan “ungrateful black kids” and Jewish people in the community.<br><br>The clip, first posted in [January 2024], went viral nationally. But it really struck a nerve in the peaceful, leafy suburb of Pikesville, which has large black and Jewish populations, and in the nearby city of Baltimore, Maryland. Principal Eric Eiswert was put on paid administrative leave pending an investigation.<br><br>[...]<br><br>But what those sharing the clip didn't realise at the time was that another bombshell was about to drop: the clip was an AI-generated fake.<br><br>[...]<br><br>[In April 2024], Baltimore Police Chief Robert McCullough confirmed they now had “conclusive evidence that the recording was not authentic”.<br><br>And they believed they knew who made the fake.<br><br>Police charged 31-year-old Dazhon Darien, the school's athletics director, with several counts related to the fake video. Charges included theft, retaliating against a witness and stalking.<br><br>He was arrested at the airport, where police say he was planning to fly to Houston, Texas.<br><br>Police say that Mr Darien had been under investigation by Principal Eiswert over an alleged theft of $1,916 (£1,460) from the school. They also allege there had been “work performance challenges” and his contract was likely not to be renewed.<br><br>Their theory was that by creating the deepfake recording, he hoped to discredit the principal before he could be fired.<br><br>Investigators say they traced an email used to send the original video to a server connected to Mr Darien, and allege that he used Baltimore County Public Schools' computer network to access AI tools. He is due to stand trial in December 2024.</i><br><br>By associating Mr Darien with the server used to email the original AI-generated audio, investigators link Darien to the fabricated content (T0149.005: Server, T0088.001: AI Generated Audio (Deepfakes)). They also assert that Darien used computers owned by the school to access platforms used to generate the audio (T0146: Account, T0154.002: AI Media Platform). |
| [T0154.002 AI Media Platform](../../generated_pages/techniques/T0154.002.md) |  IT00000386 <i>“I seriously don't understand why I have to constantly put up with these dumbasses here every day.”<br><br>So began what appeared to be a long tirade from the principal of Pikesville High School, punctuated with racist, antisemitic and offensive tropes. It sounded like it had been secretly recorded.<br><br>The speaker went on to bemoan “ungrateful black kids” and Jewish people in the community.<br><br>The clip, first posted in [January 2024], went viral nationally. But it really struck a nerve in the peaceful, leafy suburb of Pikesville, which has large black and Jewish populations, and in the nearby city of Baltimore, Maryland. Principal Eric Eiswert was put on paid administrative leave pending an investigation.<br><br>[...]<br><br>But what those sharing the clip didn't realise at the time was that another bombshell was about to drop: the clip was an AI-generated fake.<br><br>[...]<br><br>[In April 2024], Baltimore Police Chief Robert McCullough confirmed they now had “conclusive evidence that the recording was not authentic”.<br><br>And they believed they knew who made the fake.<br><br>Police charged 31-year-old Dazhon Darien, the school's athletics director, with several counts related to the fake video. Charges included theft, retaliating against a witness and stalking.<br><br>He was arrested at the airport, where police say he was planning to fly to Houston, Texas.<br><br>Police say that Mr Darien had been under investigation by Principal Eiswert over an alleged theft of $1,916 (£1,460) from the school. They also allege there had been “work performance challenges” and his contract was likely not to be renewed.<br><br>Their theory was that by creating the deepfake recording, he hoped to discredit the principal before he could be fired.<br><br>Investigators say they traced an email used to send the original video to a server connected to Mr Darien, and allege that he used Baltimore County Public Schools' computer network to access AI tools. He is due to stand trial in December 2024.</i><br><br>By associating Mr Darien with the server used to email the original AI-generated audio, investigators link Darien to the fabricated content (T0149.005: Server, T0088.001: AI Generated Audio (Deepfakes)). They also assert that Darien used computers owned by the school to access platforms used to generate the audio (T0146: Account, T0154.002: AI Media Platform). |
DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW

# Incident I00104: Macron Campaign Hit With “Massive and Coordinated” Hacking Attack
* **Summary:** <i>A massive trove of documents purporting to contain thousands of emails and other files from the [2017 presidential] campaign of Emmanuel Macron—the French centrist candidate squaring off against right-wing nationalist Marine Le Pen—was posted on the internet Friday afternoon. The Macron campaign says that at least some of the documents are fake. The document dump came just over a day before voting is set to begin in the final round of the election and mere hours before candidates are legally required to stop campaigning.</i>
* **Incident Type:**
* **Year started:**
* **Countries:** ,
* **Found via:**
* **Date added:**

| Reference | Pub Date | Authors | Org | Archive |
| --------- | -------- | ------- | --- | ------- |

| Technique | Description given for this incident |
| --------- | ------------------------- |
| [T0115 Post Content](../../generated_pages/techniques/T0115.md) |  IT00000391 <i>A massive trove of documents purporting to contain thousands of emails and other files from the [2017 presidential] campaign of Emmanuel Macron—the French centrist candidate squaring off against right-wing nationalist Marine Le Pen—was posted on the internet Friday afternoon. The Macron campaign says that at least some of the documents are fake. The document dump came just over a day before voting is set to begin in the final round of the election and mere hours before candidates are legally required to stop campaigning.<br><br>At about 2:35 p.m. ET, a post appeared on the 4chan online message board announcing the leak. The documents appear to include emails, internal memos, and screenshots of purported banking records.<br><br>“In this pastebin are links to torrents of emails between Macron, his team and other officials, politicians as well as original documents and photos,” the anonymous 4chan poster wrote. “This was passed on to me today so now I am giving it to you, the people. The leak is massvie [sic] and released in the hopes that the human search engine here will be able to start sifting through the contents and figure out exactly what we have here.”<br><br>The Macron campaign issued a statement Friday night saying it was the victim of a “massive and coordinated” hacking attack. That campaign said the leak included some fake documents that were intended “to sow doubt and misinformation.”</i><br><br>Actors posted to 4chan a link (T0151.012: Image Board Platform, T0146.006: Open Access Platform, T0115: Post Content, T0122: Direct Users to Alternative Platforms) to text content hosted on Pastebin (T0152.005: Paste Platform, T0146.006: Open Access Platform, T0115: Post Content), which contained links to download stolen and fabricated documents. |
| [T0122 Direct Users to Alternative Platforms](../../generated_pages/techniques/T0122.md) |  IT00000392 <i>A massive trove of documents purporting to contain thousands of emails and other files from the [2017 presidential] campaign of Emmanuel Macron—the French centrist candidate squaring off against right-wing nationalist Marine Le Pen—was posted on the internet Friday afternoon. The Macron campaign says that at least some of the documents are fake. The document dump came just over a day before voting is set to begin in the final round of the election and mere hours before candidates are legally required to stop campaigning.<br><br>At about 2:35 p.m. ET, a post appeared on the 4chan online message board announcing the leak. The documents appear to include emails, internal memos, and screenshots of purported banking records.<br><br>“In this pastebin are links to torrents of emails between Macron, his team and other officials, politicians as well as original documents and photos,” the anonymous 4chan poster wrote. “This was passed on to me today so now I am giving it to you, the people. The leak is massvie [sic] and released in the hopes that the human search engine here will be able to start sifting through the contents and figure out exactly what we have here.”<br><br>The Macron campaign issued a statement Friday night saying it was the victim of a “massive and coordinated” hacking attack. That campaign said the leak included some fake documents that were intended “to sow doubt and misinformation.”</i><br><br>Actors posted to 4chan a link (T0151.012: Image Board Platform, T0146.006: Open Access Platform, T0115: Post Content, T0122: Direct Users to Alternative Platforms) to text content hosted on Pastebin (T0152.005: Paste Platform, T0146.006: Open Access Platform, T0115: Post Content), which contained links to download stolen and fabricated documents. |
| [T0146.007 Automated Account](../../generated_pages/techniques/T0146.007.md) |  IT00000389 <i>A massive trove of documents purporting to contain thousands of emails and other files from the [2017 presidential] campaign of Emmanuel Macron—the French centrist candidate squaring off against right-wing nationalist Marine Le Pen—was posted on the internet Friday afternoon. The Macron campaign says that at least some of the documents are fake. The document dump came just over a day before voting is set to begin in the final round of the election and mere hours before candidates are legally required to stop campaigning.<br><br>At about 2:35 p.m. ET, a post appeared on the 4chan online message board announcing the leak. The documents appear to include emails, internal memos, and screenshots of purported banking records.<br><br>“In this pastebin are links to torrents of emails between Macron, his team and other officials, politicians as well as original documents and photos,” the anonymous 4chan poster wrote. “This was passed on to me today so now I am giving it to you, the people. The leak is massvie [sic] and released in the hopes that the human search engine here will be able to start sifting through the contents and figure out exactly what we have here.”<br><br>The Macron campaign issued a statement Friday night saying it was the victim of a “massive and coordinated” hacking attack. That campaign said the leak included some fake documents that were intended “to sow doubt and misinformation.”</i><br><br>Actors posted to 4chan a link (T0151.012: Image Board Platform, T0146.006: Open Access Platform, T0115: Post Content, T0122: Direct Users to Alternative Platforms) to text content hosted on Pastebin (T0152.005: Paste Platform, T0146.006: Open Access Platform, T0115: Post Content), which contained links to download stolen and fabricated documents. |
| [T0151.012 Image Board Platform](../../generated_pages/techniques/T0151.012.md) |  IT00000388 <i>A massive trove of documents purporting to contain thousands of emails and other files from the [2017 presidential] campaign of Emmanuel Macron—the French centrist candidate squaring off against right-wing nationalist Marine Le Pen—was posted on the internet Friday afternoon. The Macron campaign says that at least some of the documents are fake. The document dump came just over a day before voting is set to begin in the final round of the election and mere hours before candidates are legally required to stop campaigning.<br><br>At about 2:35 p.m. ET, a post appeared on the 4chan online message board announcing the leak. The documents appear to include emails, internal memos, and screenshots of purported banking records.<br><br>“In this pastebin are links to torrents of emails between Macron, his team and other officials, politicians as well as original documents and photos,” the anonymous 4chan poster wrote. “This was passed on to me today so now I am giving it to you, the people. The leak is massvie [sic] and released in the hopes that the human search engine here will be able to start sifting through the contents and figure out exactly what we have here.”<br><br>The Macron campaign issued a statement Friday night saying it was the victim of a “massive and coordinated” hacking attack. That campaign said the leak included some fake documents that were intended “to sow doubt and misinformation.”</i><br><br>Actors posted to 4chan a link (T0151.012: Image Board Platform, T0146.006: Open Access Platform, T0115: Post Content, T0122: Direct Users to Alternative Platforms) to text content hosted on Pastebin (T0152.005: Paste Platform, T0146.006: Open Access Platform, T0115: Post Content), which contained links to download stolen and fabricated documents. |
| [T0152.005 Paste Platform](../../generated_pages/techniques/T0152.005.md) |  IT00000390 <i>A massive trove of documents purporting to contain thousands of emails and other files from the [2017 presidential] campaign of Emmanuel Macron—the French centrist candidate squaring off against right-wing nationalist Marine Le Pen—was posted on the internet Friday afternoon. The Macron campaign says that at least some of the documents are fake. The document dump came just over a day before voting is set to begin in the final round of the election and mere hours before candidates are legally required to stop campaigning.<br><br>At about 2:35 p.m. ET, a post appeared on the 4chan online message board announcing the leak. The documents appear to include emails, internal memos, and screenshots of purported banking records.<br><br>“In this pastebin are links to torrents of emails between Macron, his team and other officials, politicians as well as original documents and photos,” the anonymous 4chan poster wrote. “This was passed on to me today so now I am giving it to you, the people. The leak is massvie [sic] and released in the hopes that the human search engine here will be able to start sifting through the contents and figure out exactly what we have here.”<br><br>The Macron campaign issued a statement Friday night saying it was the victim of a “massive and coordinated” hacking attack. That campaign said the leak included some fake documents that were intended “to sow doubt and misinformation.”</i><br><br>Actors posted to 4chan a link (T0151.012: Image Board Platform, T0146.006: Open Access Platform, T0115: Post Content, T0122: Direct Users to Alternative Platforms) to text content hosted on Pastebin (T0152.005: Paste Platform, T0146.006: Open Access Platform, T0115: Post Content), which contained links to download stolen and fabricated documents. |
DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW

# Incident I00105: Gaming the System: The Use of Gaming-Adjacent Communication, Game and Mod Platforms by Extremist Actors
* **Summary:** <i>Since the live-streamed racist and antisemitic attacks in Christchurch, New Zealand, and Halle, Germany, in 2019 and details of the perpetrators' profiles on gaming platforms became known, a discussion about the use of gaming platforms and content by extremist actors has intensified. However, despite this increased interest, reliable research results about the field are still scarce. Nevertheless, there is clear evidence that extremist actors are active in these spaces. It is, therefore, essential that we better understand extremist content, groups, and channels on these platforms.<br><br>The RadiGaMe project (radicalisation on gaming platforms and messenger services), implemented by a German Research Alliance consisting of eight research institutions, civil security institutions and civil society organisations, seeks to contribute to closing this important research gap. The Centre Technology and Society (ZTG) at the Technical University of Berlin, the Peace Research Institute Frankfurt (PRIF), the Institute for Democracy and Civil Society Jena (IDZ), the Ludwig Maximilian University of Munich (LMU), the Ruhr University Bochum (RUB), the Centre for Applied Research on Deradicalisation Berlin (modus|zad), the Berlin Criminal Investigation Department (LKA 53) and Munich Innovation Labs (MIL) are involved in the project. It is supported by four additional institutions from the fields of civil security, prevention work and democracy promotion. In the first stage of the project, we conducted an initial exploration of 20 gaming and gaming-adjacent digital platforms to gain insights into their functionality, relevance for extremist actors, access options for researchers, and the type of extremist and fringe content posted in these digital spaces. This Insight reports the results of this exploration for eleven platforms across three categories: Communication platforms, gaming platforms, and platforms used to disseminate modifications of existing (popular) digital games (so-called mods).</i>
* **Incident Type:**
* **Year started:**
* **Countries:** ,
* **Found via:**
* **Date added:**

| Reference | Pub Date | Authors | Org | Archive |
| --------- | -------- | ------- | --- | ------- |

| Technique | Description given for this incident |
| --------- | ------------------------- |
| [T0147.001 Game](../../generated_pages/techniques/T0147.001.md) |  IT00000397 <i>In this report, researchers look at online platforms commonly used by people who play videogames, looking at how these platforms can contribute to radicalisation of gamers:<br><br>Indie DB [is a platform that serves] to present indie games, which are titles from independent, small developer teams, which can be discussed and downloaded [Indie DB]. <br><br>[...]<br><br>[On Indie DB we] found antisemitic, Islamist, sexist and other discriminatory content during the exploration. Both games and comments were located that made positive references to National Socialism. Radicalised users seem to use the opportunities to network, create groups, and communicate in forums. In addition, a number of member profiles with graphic propaganda content could be located. We found several games with propagandistic content advertised on the platform, which caused apparently radicalised users to form a fan community around those games. For example, there are titles such as the antisemitic Fursan Al-Aqsa: The Knights of the Al-Aqsa Mosque, which has won the Best Hardcore Game award at the Game Connection America 2024 Game Development Awards and which has received antisemitic praise on its review page. In the game, players need to target members of the Israeli Defence Forces, and attacks by Palestinian terrorist groups such as Hamas or Lions' Den can be re-enacted. These include bomb attacks, beheadings and the re-enactment of the terrorist attack in Israel on 7 October 2023. We, therefore, deem Indie DB to be relevant for further analyses of extremist activities in digital gaming spaces.</i><br><br>Indie DB is an online software delivery platform on which users can create groups, and participate in discussion forums (T0152.009: Software Delivery Platform, T0151.002: Online Community Group, T0151.009: Legacy Online Forum Platform). The platform hosted games which allowed players to reenact terrorist attacks (T0147.001: Game). |
| [T0147.002 Game Mod](../../generated_pages/techniques/T0147.002.md) |  IT00000398 In this report, researchers look at online platforms commonly used by people who play videogames, looking at how these platforms can contribute to radicalisation of gamers:<br><br><i>Gamebanana and Mod DB are so-called modding platforms that allow users to post their modifications of existing (popular) games. In the process of modding, highly radicalised content can be inserted into games that did not originally contain it. All of these platforms also have communication functions and customisable profiles.<br><br>[...]<br><br>During the explorations, several modifications with hateful themes were located, including right-wing extremist, racist, antisemitic and Islamist content. This includes mods that make it possible to play as terrorists or National Socialists. So-called “skins” (textures that change the appearance of models in the game) for characters from first-person shooters are particularly popular and contain references to National Socialism or Islamist terrorist organisations. Although some of this content could be justified with reference to historical accuracy and realism, the user profiles of the creators and commentators often reveal political motivations. Names with neo-Nazi codes or the use of avatars showing members of the Wehrmacht or the Waffen SS, for example, indicate a certain degree of positive appreciation or fascination with right-wing ideology, as do affirmations in the comment columns.<br><br>Mod DB in particular has attracted public attention in the past. For example, a mod for the game Half-Life 2 made it possible to play a school shooting with the weapons used during the attacks at Columbine High School (1999) and Virginia Polytechnic Institute and State University (2007). Antisemitic memes and jokes are shared in several groups on the platform. It seems as if users partially connect with each other because of shared political views. There were also indications that Islamist and right-wing extremist users network on the basis of shared views on women, Jews or homosexuals. In addition to relevant usernames and avatars, we found profiles featuring picture galleries, backgrounds and banners dedicated to the SS. Extremist propaganda and radicalisation processes on modding platforms have not been explored yet, but our exploration suggests these digital spaces to be highly relevant for our field.</i><br><br>Mod DB is a platform which allows users to upload mods for games, which other users can download (T0152.009: Software Delivery Platform, T0147.002: Game Mod). |
| [T0151.002 Online Community Group](../../generated_pages/techniques/T0151.002.md) |  IT00000395 <i>In this report, researchers look at online platforms commonly used by people who play videogames, looking at how these platforms can contribute to radicalisation of gamers:<br><br>Indie DB [is a platform that serves] to present indie games, which are titles from independent, small developer teams, which can be discussed and downloaded [Indie DB]. <br><br>[...]<br><br>[On Indie DB we] found antisemitic, Islamist, sexist and other discriminatory content during the exploration. Both games and comments were located that made positive references to National Socialism. Radicalised users seem to use the opportunities to network, create groups, and communicate in forums. In addition, a number of member profiles with graphic propaganda content could be located. We found several games with propagandistic content advertised on the platform, which caused apparently radicalised users to form a fan community around those games. For example, there are titles such as the antisemitic Fursan Al-Aqsa: The Knights of the Al-Aqsa Mosque, which has won the Best Hardcore Game award at the Game Connection America 2024 Game Development Awards and which has received antisemitic praise on its review page. In the game, players need to target members of the Israeli Defence Forces, and attacks by Palestinian terrorist groups such as Hamas or Lions' Den can be re-enacted. These include bomb attacks, beheadings and the re-enactment of the terrorist attack in Israel on 7 October 2023. We, therefore, deem Indie DB to be relevant for further analyses of extremist activities in digital gaming spaces.</i><br><br>Indie DB is an online software delivery platform on which users can create groups, and participate in discussion forums (T0152.009: Software Delivery Platform, T0151.002: Online Community Group, T0151.009: Legacy Online Forum Platform). The platform hosted games which allowed players to reenact terrorist attacks (T0147.001: Game). |
| [T0151.009 Legacy Online Forum Platform](../../generated_pages/techniques/T0151.009.md) |  IT00000394 In this report, researchers look at online platforms commonly used by people who play videogames, looking at how these platforms can contribute to radicalisation of gamers:<br><br><i>Gamer Uprising Forums (GUF) [is an online discussion platform using the classic forum structure] aimed directly at gamers. It is run by US Neo-Nazi Andrew Anglin and explicitly targets politically right-wing gamers. This forum mainly includes antisemitic, sexist, and racist topics, but also posts on related issues such as esotericism, conspiracy narratives, pro-Russian propaganda, alternative medicine, Christian religion, content related to the incel- and manosphere, lists of criminal offences committed by non-white people, links to right-wing news sites, homophobia and trans-hostility, troll guides, anti-leftism, ableism and much more. Most noticeable were the high number of antisemitic references. For example, there is a thread with hundreds of machine-generated images, most of which feature openly antisemitic content and popular antisemitic references. Many users chose explicitly antisemitic avatars. Some of the usernames also provide clues to the users' ideologies and profiles feature swastikas as a type of progress bar and indicator of the user's activity in the forum.<br><br>The GUF's front page contains an overview of the forum, user statistics, and so-called “announcements”. In addition to advice-like references, these feature various expressions of hateful ideologies. At the time of the exploration, the following could be read there: “Jews are the problem!”, “Women should be raped”, “The Jews are going to be required to return stolen property”, “Immigrants will have to be physically removed”, “Console gaming is for n******” and “Anger is a womanly emotion”. New users have to prove themselves in an area for newcomers referred to in imageboard slang as the “Newfag Barn”. Only when the newcomers' posts have received a substantial number of likes from established users, are they allowed to post in other parts of the forum. It can be assumed that this will also lead to competitions to outdo each other in posting extreme content. However, it is always possible to view all posts and content on the site. In any case, it can be assumed that the platform hardly addresses milieus that are not already radicalised or at risk of radicalisation and is therefore deemed relevant for radicalisation research. However, the number of registered users is low (typical for radicalised milieus) and, hence, the platform may only be of interest when studying a small group of highly radicalised individuals.</i><br><br>Gamer Uprising Forum is a legacy online forum, with access gated behind approval of existing platform users (T0155.003: Approval Gated, T0151.009: Legacy Online Forum Platform). |
| [T0152.009 Software Delivery Platform](../../generated_pages/techniques/T0152.009.md) |  IT00000396 <i>In this report, researchers look at online platforms commonly used by people who play videogames, looking at how these platforms can contribute to radicalisation of gamers:<br><br>Indie DB [is a platform that serves] to present indie games, which are titles from independent, small developer teams, which can be discussed and downloaded [Indie DB]. <br><br>[...]<br><br>[On Indie DB we] found antisemitic, Islamist, sexist and other discriminatory content during the exploration. Both games and comments were located that made positive references to National Socialism. Radicalised users seem to use the opportunities to network, create groups, and communicate in forums. In addition, a number of member profiles with graphic propaganda content could be located. We found several games with propagandistic content advertised on the platform, which caused apparently radicalised users to form a fan community around those games. For example, there are titles such as the antisemitic Fursan Al-Aqsa: The Knights of the Al-Aqsa Mosque, which has won the Best Hardcore Game award at the Game Connection America 2024 Game Development Awards and which has received antisemitic praise on its review page. In the game, players need to target members of the Israeli Defence Forces, and attacks by Palestinian terrorist groups such as Hamas or Lions' Den can be re-enacted. These include bomb attacks, beheadings and the re-enactment of the terrorist attack in Israel on 7 October 2023. We, therefore, deem Indie DB to be relevant for further analyses of extremist activities in digital gaming spaces.</i><br><br>Indie DB is an online software delivery platform on which users can create groups, and participate in discussion forums (T0152.009: Software Delivery Platform, T0151.002: Online Community Group, T0151.009: Legacy Online Forum Platform). The platform hosted games which allowed players to reenact terrorist attacks (T0147.001: Game). |
| [T0152.009 Software Delivery Platform](../../generated_pages/techniques/T0152.009.md) |  IT00000399 In this report, researchers look at online platforms commonly used by people who play videogames, looking at how these platforms can contribute to radicalisation of gamers:<br><br><i>Gamebanana and Mod DB are so-called modding platforms that allow users to post their modifications of existing (popular) games. In the process of modding, highly radicalised content can be inserted into games that did not originally contain it. All of these platforms also have communication functions and customisable profiles.<br><br>[...]<br><br>During the explorations, several modifications with hateful themes were located, including right-wing extremist, racist, antisemitic and Islamist content. This includes mods that make it possible to play as terrorists or National Socialists. So-called “skins” (textures that change the appearance of models in the game) for characters from first-person shooters are particularly popular and contain references to National Socialism or Islamist terrorist organisations. Although some of this content could be justified with reference to historical accuracy and realism, the user profiles of the creators and commentators often reveal political motivations. Names with neo-Nazi codes or the use of avatars showing members of the Wehrmacht or the Waffen SS, for example, indicate a certain degree of positive appreciation or fascination with right-wing ideology, as do affirmations in the comment columns.<br><br>Mod DB in particular has attracted public attention in the past. For example, a mod for the game Half-Life 2 made it possible to play a school shooting with the weapons used during the attacks at Columbine High School (1999) and Virginia Polytechnic Institute and State University (2007). Antisemitic memes and jokes are shared in several groups on the platform. It seems as if users partially connect with each other because of shared political views. There were also indications that Islamist and right-wing extremist users network on the basis of shared views on women, Jews or homosexuals. In addition to relevant usernames and avatars, we found profiles featuring picture galleries, backgrounds and banners dedicated to the SS. Extremist propaganda and radicalisation processes on modding platforms have not been explored yet, but our exploration suggests these digital spaces to be highly relevant for our field.</i><br><br>Mod DB is a platform which allows users to upload mods for games, which other users can download (T0152.009: Software Delivery Platform, T0147.002: Game Mod). |
| [T0155.003 Approval Gated](../../generated_pages/techniques/T0155.003.md) |  IT00000393 In this report, researchers look at online platforms commonly used by people who play videogames, looking at how these platforms can contribute to radicalisation of gamers:<br><br><i>Gamer Uprising Forums (GUF) [is an online discussion platform using the classic forum structure] aimed directly at gamers. It is run by US Neo-Nazi Andrew Anglin and explicitly targets politically right-wing gamers. This forum mainly includes antisemitic, sexist, and racist topics, but also posts on related issues such as esotericism, conspiracy narratives, pro-Russian propaganda, alternative medicine, Christian religion, content related to the incel- and manosphere, lists of criminal offences committed by non-white people, links to right-wing news sites, homophobia and trans-hostility, troll guides, anti-leftism, ableism and much more. Most noticeable were the high number of antisemitic references. For example, there is a thread with hundreds of machine-generated images, most of which feature openly antisemitic content and popular antisemitic references. Many users chose explicitly antisemitic avatars. Some of the usernames also provide clues to the users' ideologies and profiles feature swastikas as a type of progress bar and indicator of the user's activity in the forum.<br><br>The GUF's front page contains an overview of the forum, user statistics, and so-called “announcements”. In addition to advice-like references, these feature various expressions of hateful ideologies. At the time of the exploration, the following could be read there: “Jews are the problem!”, “Women should be raped”, “The Jews are going to be required to return stolen property”, “Immigrants will have to be physically removed”, “Console gaming is for n******” and “Anger is a womanly emotion”. New users have to prove themselves in an area for newcomers referred to in imageboard slang as the “Newfag Barn”. Only when the newcomers' posts have received a substantial number of likes from established users, are they allowed to post in other parts of the forum. It can be assumed that this will also lead to competitions to outdo each other in posting extreme content. However, it is always possible to view all posts and content on the site. In any case, it can be assumed that the platform hardly addresses milieus that are not already radicalised or at risk of radicalisation and is therefore deemed relevant for radicalisation research. However, the number of registered users is low (typical for radicalised milieus) and, hence, the platform may only be of interest when studying a small group of highly radicalised individuals.</i><br><br>Gamer Uprising Forum is a legacy online forum, with access gated behind approval of existing platform users (T0155.003: Approval Gated, T0151.009: Legacy Online Forum Platform). |
DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW
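Note: the approval gate described in the row above — newcomers may only post in a designated area until their posts collect enough likes from established users — can be modelled as a small state machine. This is a hypothetical analyst sketch, not part of the incident record; the class name and the like threshold are illustrative assumptions, since the report only says "a substantial number of likes".

```python
# Hypothetical sketch of the approval-gating mechanism described above.
# APPROVAL_THRESHOLD is an assumption: the report only says "a substantial
# number of likes" from established users.
APPROVAL_THRESHOLD = 10


class ForumUser:
    def __init__(self, name: str):
        self.name = name
        self.likes_from_established = 0  # likes on posts in the newcomer area

    @property
    def approved(self) -> bool:
        # A newcomer becomes an established user once their posts have
        # collected enough likes from existing members.
        return self.likes_from_established >= APPROVAL_THRESHOLD

    def can_post(self, area: str) -> bool:
        # Newcomers are confined to the newcomer area; approved users
        # may post anywhere on the forum.
        return self.approved or area == "newcomer_area"


user = ForumUser("new_user")
assert user.can_post("newcomer_area") and not user.can_post("general")
user.likes_from_established = 12
assert user.can_post("general")
```

Modelling the gate this way makes the researchers' observation concrete: the only path out of the restricted area runs through approval by already-radicalised members, which is what creates the incentive to post increasingly extreme content.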


@@ -0,0 +1,31 @@
# Incident I00106: Facebook Is Being Flooded With Gross AI-Generated Images of Hurricane Helene Devastation
* **Summary:** <i>As families desperately seek to find missing loved ones and communities grapple with immeasurable losses of both life and property in the wake of [2024's] Hurricane Helene, AI slop scammers appear to be capitalizing on the moment for personal gain.</i>
* **Incident type:**
* **Year started:**
* **Countries:**
* **Found via:**
* **Date added:**
| Reference | Pub Date | Authors | Org | Archive |
| --------- | -------- | ------- | --- | ------- |
| Technique | Description given for this incident |
| --------- | ------------------------- |
| [T0068 Respond to Breaking News Event or Active Crisis](../../generated_pages/techniques/T0068.md) |  IT00000403 <i>As families desperately seek to find missing loved ones and communities grapple with immeasurable losses of both life and property in the wake of [2024's] Hurricane Helene, AI slop scammers appear to be capitalizing on the moment for personal gain.<br><br>A Facebook account called "Coastal Views" usually shares calmer AI imagery of nature-filled beachside scenes. The account's banner image showcases a signpost reading "OBX Live," OBX being shorthand for North Carolina's Outer Banks islands.<br><br>But starting this weekend, the account shifted its approach dramatically, as first flagged by a social media user on X.<br><br>Instead of posting "photos" of leaping dolphins and sandy beaches, the account suddenly started publishing images of flooded mountain neighborhoods, submerged houses, and dogs sitting on top of roofs.<br><br>But instead of spreading vital information to those affected by the natural disaster, or at the very least sharing real photos of the destruction, the account is seemingly trying to use AI to cash in on all the attention the hurricane has been getting.<br><br>The account links to an Etsy page for a business called "OuterBanks2023," where somebody who goes by "Alexandr" sells AI-generated prints of horses touching snouts with sea turtles, Santa running down the shoreline with a reindeer, and sunsets over ocean waves.</i><br><br>A Facebook page which presented itself as being associated with North Carolina, and which had previously posted AI-generated imagery, changed to posting AI-generated images of hurricane damage after Hurricane Helene hit the state (T0151.003: Online Community Page, T0151.001: Social Media Platform, T0115: Post Content, T0086.002: Develop AI-Generated Images (Deepfakes), T0068: Respond to Breaking News Event or Active Crisis).<br><br>The account included links (T0122: Direct Users to Alternative Platforms) to an account on Etsy, which sold prints of AI-generated images (T0146: Account, T0148.007: eCommerce Platform). |
| [T0086.002 Develop AI-Generated Images (Deepfakes)](../../generated_pages/techniques/T0086.002.md) |  IT00000402 <i>As families desperately seek to find missing loved ones and communities grapple with immeasurable losses of both life and property in the wake of [2024's] Hurricane Helene, AI slop scammers appear to be capitalizing on the moment for personal gain.<br><br>A Facebook account called "Coastal Views" usually shares calmer AI imagery of nature-filled beachside scenes. The account's banner image showcases a signpost reading "OBX Live," OBX being shorthand for North Carolina's Outer Banks islands.<br><br>But starting this weekend, the account shifted its approach dramatically, as first flagged by a social media user on X.<br><br>Instead of posting "photos" of leaping dolphins and sandy beaches, the account suddenly started publishing images of flooded mountain neighborhoods, submerged houses, and dogs sitting on top of roofs.<br><br>But instead of spreading vital information to those affected by the natural disaster, or at the very least sharing real photos of the destruction, the account is seemingly trying to use AI to cash in on all the attention the hurricane has been getting.<br><br>The account links to an Etsy page for a business called "OuterBanks2023," where somebody who goes by "Alexandr" sells AI-generated prints of horses touching snouts with sea turtles, Santa running down the shoreline with a reindeer, and sunsets over ocean waves.</i><br><br>A Facebook page which presented itself as being associated with North Carolina, and which had previously posted AI-generated imagery, changed to posting AI-generated images of hurricane damage after Hurricane Helene hit the state (T0151.003: Online Community Page, T0151.001: Social Media Platform, T0115: Post Content, T0086.002: Develop AI-Generated Images (Deepfakes), T0068: Respond to Breaking News Event or Active Crisis).<br><br>The account included links (T0122: Direct Users to Alternative Platforms) to an account on Etsy, which sold prints of AI-generated images (T0146: Account, T0148.007: eCommerce Platform). |
| [T0122 Direct Users to Alternative Platforms](../../generated_pages/techniques/T0122.md) |  IT00000404 <i>As families desperately seek to find missing loved ones and communities grapple with immeasurable losses of both life and property in the wake of [2024's] Hurricane Helene, AI slop scammers appear to be capitalizing on the moment for personal gain.<br><br>A Facebook account called "Coastal Views" usually shares calmer AI imagery of nature-filled beachside scenes. The account's banner image showcases a signpost reading "OBX Live," OBX being shorthand for North Carolina's Outer Banks islands.<br><br>But starting this weekend, the account shifted its approach dramatically, as first flagged by a social media user on X.<br><br>Instead of posting "photos" of leaping dolphins and sandy beaches, the account suddenly started publishing images of flooded mountain neighborhoods, submerged houses, and dogs sitting on top of roofs.<br><br>But instead of spreading vital information to those affected by the natural disaster, or at the very least sharing real photos of the destruction, the account is seemingly trying to use AI to cash in on all the attention the hurricane has been getting.<br><br>The account links to an Etsy page for a business called "OuterBanks2023," where somebody who goes by "Alexandr" sells AI-generated prints of horses touching snouts with sea turtles, Santa running down the shoreline with a reindeer, and sunsets over ocean waves.</i><br><br>A Facebook page which presented itself as being associated with North Carolina, and which had previously posted AI-generated imagery, changed to posting AI-generated images of hurricane damage after Hurricane Helene hit the state (T0151.003: Online Community Page, T0151.001: Social Media Platform, T0115: Post Content, T0086.002: Develop AI-Generated Images (Deepfakes), T0068: Respond to Breaking News Event or Active Crisis).<br><br>The account included links (T0122: Direct Users to Alternative Platforms) to an account on Etsy, which sold prints of AI-generated images (T0146: Account, T0148.007: eCommerce Platform). |
| [T0148.007 eCommerce Platform](../../generated_pages/techniques/T0148.007.md) |  IT00000405 <i>As families desperately seek to find missing loved ones and communities grapple with immeasurable losses of both life and property in the wake of [2024's] Hurricane Helene, AI slop scammers appear to be capitalizing on the moment for personal gain.<br><br>A Facebook account called "Coastal Views" usually shares calmer AI imagery of nature-filled beachside scenes. The account's banner image showcases a signpost reading "OBX Live," OBX being shorthand for North Carolina's Outer Banks islands.<br><br>But starting this weekend, the account shifted its approach dramatically, as first flagged by a social media user on X.<br><br>Instead of posting "photos" of leaping dolphins and sandy beaches, the account suddenly started publishing images of flooded mountain neighborhoods, submerged houses, and dogs sitting on top of roofs.<br><br>But instead of spreading vital information to those affected by the natural disaster, or at the very least sharing real photos of the destruction, the account is seemingly trying to use AI to cash in on all the attention the hurricane has been getting.<br><br>The account links to an Etsy page for a business called "OuterBanks2023," where somebody who goes by "Alexandr" sells AI-generated prints of horses touching snouts with sea turtles, Santa running down the shoreline with a reindeer, and sunsets over ocean waves.</i><br><br>A Facebook page which presented itself as being associated with North Carolina, and which had previously posted AI-generated imagery, changed to posting AI-generated images of hurricane damage after Hurricane Helene hit the state (T0151.003: Online Community Page, T0151.001: Social Media Platform, T0115: Post Content, T0086.002: Develop AI-Generated Images (Deepfakes), T0068: Respond to Breaking News Event or Active Crisis).<br><br>The account included links (T0122: Direct Users to Alternative Platforms) to an account on Etsy, which sold prints of AI-generated images (T0146: Account, T0148.007: eCommerce Platform). |
| [T0151.001 Social Media Platform](../../generated_pages/techniques/T0151.001.md) |  IT00000400 <i>As families desperately seek to find missing loved ones and communities grapple with immeasurable losses of both life and property in the wake of [2024's] Hurricane Helene, AI slop scammers appear to be capitalizing on the moment for personal gain.<br><br>A Facebook account called "Coastal Views" usually shares calmer AI imagery of nature-filled beachside scenes. The account's banner image showcases a signpost reading "OBX Live," OBX being shorthand for North Carolina's Outer Banks islands.<br><br>But starting this weekend, the account shifted its approach dramatically, as first flagged by a social media user on X.<br><br>Instead of posting "photos" of leaping dolphins and sandy beaches, the account suddenly started publishing images of flooded mountain neighborhoods, submerged houses, and dogs sitting on top of roofs.<br><br>But instead of spreading vital information to those affected by the natural disaster, or at the very least sharing real photos of the destruction, the account is seemingly trying to use AI to cash in on all the attention the hurricane has been getting.<br><br>The account links to an Etsy page for a business called "OuterBanks2023," where somebody who goes by "Alexandr" sells AI-generated prints of horses touching snouts with sea turtles, Santa running down the shoreline with a reindeer, and sunsets over ocean waves.</i><br><br>A Facebook page which presented itself as being associated with North Carolina, and which had previously posted AI-generated imagery, changed to posting AI-generated images of hurricane damage after Hurricane Helene hit the state (T0151.003: Online Community Page, T0151.001: Social Media Platform, T0115: Post Content, T0086.002: Develop AI-Generated Images (Deepfakes), T0068: Respond to Breaking News Event or Active Crisis).<br><br>The account included links (T0122: Direct Users to Alternative Platforms) to an account on Etsy, which sold prints of AI-generated images (T0146: Account, T0148.007: eCommerce Platform). |
| [T0151.003 Online Community Page](../../generated_pages/techniques/T0151.003.md) |  IT00000401 <i>As families desperately seek to find missing loved ones and communities grapple with immeasurable losses of both life and property in the wake of [2024's] Hurricane Helene, AI slop scammers appear to be capitalizing on the moment for personal gain.<br><br>A Facebook account called "Coastal Views" usually shares calmer AI imagery of nature-filled beachside scenes. The account's banner image showcases a signpost reading "OBX Live," OBX being shorthand for North Carolina's Outer Banks islands.<br><br>But starting this weekend, the account shifted its approach dramatically, as first flagged by a social media user on X.<br><br>Instead of posting "photos" of leaping dolphins and sandy beaches, the account suddenly started publishing images of flooded mountain neighborhoods, submerged houses, and dogs sitting on top of roofs.<br><br>But instead of spreading vital information to those affected by the natural disaster, or at the very least sharing real photos of the destruction, the account is seemingly trying to use AI to cash in on all the attention the hurricane has been getting.<br><br>The account links to an Etsy page for a business called "OuterBanks2023," where somebody who goes by "Alexandr" sells AI-generated prints of horses touching snouts with sea turtles, Santa running down the shoreline with a reindeer, and sunsets over ocean waves.</i><br><br>A Facebook page which presented itself as being associated with North Carolina, and which had previously posted AI-generated imagery, changed to posting AI-generated images of hurricane damage after Hurricane Helene hit the state (T0151.003: Online Community Page, T0151.001: Social Media Platform, T0115: Post Content, T0086.002: Develop AI-Generated Images (Deepfakes), T0068: Respond to Breaking News Event or Active Crisis).<br><br>The account included links (T0122: Direct Users to Alternative Platforms) to an account on Etsy, which sold prints of AI-generated images (T0146: Account, T0148.007: eCommerce Platform). |
DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW


@@ -0,0 +1,29 @@
# Incident I00107: The Lies Russia Tells Itself
* **Summary:** <i>In early September [2024], the infamous Russian disinformation project known as Doppelganger hit the news again. Doppelganger—a scheme to disseminate fake articles, videos, and polls about polarizing political and cultural issues in the United States, as well as in France, Germany, and Ukraine—was first exposed in 2022 and widely covered in the Western press. The project cloned entire news organizations' websites—complete with logos and the bylines of actual journalists—and planted its own fake stories, memes, and cartoons, luring casual Internet users to the sites via social media posts, often automated ones.<br><br>Tech companies and research labs had carefully traced, documented, and often removed some of Doppelganger's online footprints, and even exposed the private Moscow firm mostly responsible for the campaigns: the Social Design Agency. But the disinformation campaigns persisted, and on September 4, in a move to counter them, the U.S. Department of Justice announced that it had seized 32 Internet domains behind the Doppelganger campaign—and published an unprecedented 277-page FBI affidavit that included 190 pages of internal SDA documents likely sourced by American spies. Then, 12 days later, the German daily Süddeutsche Zeitung reported that, in late August, it had received, from an anonymous source, large amounts of authentic internal SDA documents. A day before the FBI released its affidavit and the accompanying files—some of which overlapped with the leaked ones—Süddeutsche Zeitung asked me to comment on the leak for its investigation, because I have researched and written about disinformation and political warfare for almost ten years. I inquired whether its source might allow me to have the entire 2.4 gigabytes of leaked SDA documents, and the source agreed.</i>
* **Incident type:**
* **Year started:**
* **Countries:**
* **Found via:**
* **Date added:**
| Reference | Pub Date | Authors | Org | Archive |
| --------- | -------- | ------- | --- | ------- |
| Technique | Description given for this incident |
| --------- | ------------------------- |
| [T0097.202 News Outlet Persona](../../generated_pages/techniques/T0097.202.md) |  IT00000409 The Moscow firm Social Design Agency (SDA) has been attributed as being behind a Russian disinformation project known as Doppelganger:<br><br><i>The SDA's deception work first surfaced in 2022, likely almost immediately after Doppelganger got off the ground. In April of that year, Meta, the parent company of Facebook and Instagram, disclosed in a quarterly report that it had removed from its platforms “a network of about 200 accounts operated from Russia.” By August 2022, German investigative journalists revealed that they had discovered forgeries of about 30 news sites, including many of the country's biggest media outlets—Frankfurter Allgemeine, Der Spiegel, and Bild—but also Britain's Daily Mail and France's 20 Minutes. The sites had deceptive URLs such as www-dailymail-co-uk.dailymail.top.</i><br><br>As part of the SDA's work, they created many websites which impersonated existing media outlets. These sites used domain impersonation tactics to increase the perceived legitimacy of their impersonations (T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0152.003: Website Hosting Platform, T0149.003: Lookalike Domain). |
| [T0143.003 Impersonated Persona](../../generated_pages/techniques/T0143.003.md) |  IT00000408 The Moscow firm Social Design Agency (SDA) has been attributed as being behind a Russian disinformation project known as Doppelganger:<br><br><i>The SDA's deception work first surfaced in 2022, likely almost immediately after Doppelganger got off the ground. In April of that year, Meta, the parent company of Facebook and Instagram, disclosed in a quarterly report that it had removed from its platforms “a network of about 200 accounts operated from Russia.” By August 2022, German investigative journalists revealed that they had discovered forgeries of about 30 news sites, including many of the country's biggest media outlets—Frankfurter Allgemeine, Der Spiegel, and Bild—but also Britain's Daily Mail and France's 20 Minutes. The sites had deceptive URLs such as www-dailymail-co-uk.dailymail.top.</i><br><br>As part of the SDA's work, they created many websites which impersonated existing media outlets. These sites used domain impersonation tactics to increase the perceived legitimacy of their impersonations (T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0152.003: Website Hosting Platform, T0149.003: Lookalike Domain). |
| [T0149.003 Lookalike Domain](../../generated_pages/techniques/T0149.003.md) |  IT00000407 The Moscow firm Social Design Agency (SDA) has been attributed as being behind a Russian disinformation project known as Doppelganger:<br><br><i>The SDA's deception work first surfaced in 2022, likely almost immediately after Doppelganger got off the ground. In April of that year, Meta, the parent company of Facebook and Instagram, disclosed in a quarterly report that it had removed from its platforms “a network of about 200 accounts operated from Russia.” By August 2022, German investigative journalists revealed that they had discovered forgeries of about 30 news sites, including many of the country's biggest media outlets—Frankfurter Allgemeine, Der Spiegel, and Bild—but also Britain's Daily Mail and France's 20 Minutes. The sites had deceptive URLs such as www-dailymail-co-uk.dailymail.top.</i><br><br>As part of the SDA's work, they created many websites which impersonated existing media outlets. These sites used domain impersonation tactics to increase the perceived legitimacy of their impersonations (T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0152.003: Website Hosting Platform, T0149.003: Lookalike Domain). |
| [T0152.003 Website Hosting Platform](../../generated_pages/techniques/T0152.003.md) |  IT00000406 The Moscow firm Social Design Agency (SDA) has been attributed as being behind a Russian disinformation project known as Doppelganger:<br><br><i>The SDA's deception work first surfaced in 2022, likely almost immediately after Doppelganger got off the ground. In April of that year, Meta, the parent company of Facebook and Instagram, disclosed in a quarterly report that it had removed from its platforms “a network of about 200 accounts operated from Russia.” By August 2022, German investigative journalists revealed that they had discovered forgeries of about 30 news sites, including many of the country's biggest media outlets—Frankfurter Allgemeine, Der Spiegel, and Bild—but also Britain's Daily Mail and France's 20 Minutes. The sites had deceptive URLs such as www-dailymail-co-uk.dailymail.top.</i><br><br>As part of the SDA's work, they created many websites which impersonated existing media outlets. These sites used domain impersonation tactics to increase the perceived legitimacy of their impersonations (T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0152.003: Website Hosting Platform, T0149.003: Lookalike Domain). |
DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW
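Note: the lookalike domains described above (e.g. www-dailymail-co-uk.dailymail.top) flatten a legitimate hostname into hyphenated labels under an unrelated registrable domain. A minimal detection sketch follows; this is a hypothetical analyst note, the function names are illustrative, and a real pipeline would also resolve registrable domains against the Public Suffix List rather than a plain suffix check.

```python
# Hypothetical sketch: flag Doppelganger-style lookalike domains, where a
# legitimate hostname's labels are flattened with hyphens under an
# unrelated registrable domain (e.g. www-dailymail-co-uk.dailymail.top).

def flatten(hostname: str) -> str:
    """Normalise a hostname by treating '.' and '-' as the same separator."""
    return hostname.lower().replace("-", ".")


def is_lookalike(candidate: str, legitimate: str) -> bool:
    """True when the legitimate hostname's labels appear inside the
    candidate, but the candidate is not actually under that domain."""
    cand = flatten(candidate)
    legit = flatten(legitimate)
    # The impersonated name is embedded in the candidate, yet the candidate
    # does not end with the legitimate domain itself.
    return legit in cand and not candidate.lower().endswith(legitimate.lower())


print(is_lookalike("www-dailymail-co-uk.dailymail.top", "www.dailymail.co.uk"))  # True
print(is_lookalike("www.dailymail.co.uk", "www.dailymail.co.uk"))                # False
```

Treating hyphens and dots as interchangeable captures the specific trick reported here: the cloned URL preserves the victim outlet's label sequence for visual plausibility while the actual registrable domain (dailymail.top) belongs to the operators.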


@@ -0,0 +1,33 @@
# Incident I00108: How you thought you support the animals and you ended up funding white supremacists
* **Summary:** <i>In cooperation with French media Le Monde (in English here), the EU DisinfoLab helped expose a French white supremacist network that uses deceptive Facebook pages to attract visitors on their website to generate revenue from online advertisements, and sell racist products as a means to support their activities.<br><br>We uncovered a network of at least 16 Facebook pages hiding their white nationalist/white supremacist and racist views behind misleading names, calling users to support policemen, firefighters, or armed forces, and sometimes even victims of terrorism. On Facebook alone, these pages collectively reach an audience of 375,000 people.<br><br>This extremist network, called Suavelos, is also very active on other platforms such as Twitter, YouTube, VKontakte, Minds and Telegram, promoting hate speech against non-white populations. It also organises summer camps for white populations, admitting to the use of a non-for-profit organisation to hide its illegal agenda, which facilitates its ability to be able to collect and redistribute money.<br><br>[...]<br><br>In parallel, payment and monetisation companies like PayPal, Google Double Click, Taboola or Tipeee also helped this network to raise funding. These platforms have failed to scrutinise their users and have been hijacked by Suavelos for at least one year. In doing so, the platforms have allowed this group to pursue its illegal activities.</i>
* **Incident type:**
* **Year started:**
* **Countries:**
* **Found via:**
* **Date added:**
| Reference | Pub Date | Authors | Org | Archive |
| --------- | -------- | ------- | --- | ------- |
| Technique | Description given for this incident |
| --------- | ------------------------- |
| [T0092 Build Network](../../generated_pages/techniques/T0092.md) |  IT00000410 This article examines the white nationalist group Suavelos' use of Facebook to draw visitors to its website without overtly revealing its racist ideology:<br><br><i>Suavelos uses Facebook and other platforms to amplify its message. In order to bypass the platform's community standards and keep their public pages active, Facebook pages such as “I support the police” are a good vehicle to spread a specific agenda without claiming to be racist. In looking back at this Facebook page, we followed Facebook's algorithm for related pages and found suggested Facebook pages<br><br>[...]<br><br>This amplification strategy on Facebook is successful, as according to SimilarWeb figures, it attracts around 111,000 visits every month on the Suavelos.eu website.<br><br>[...]<br><br>Revenue through online advertisements can be achieved by different platforms through targeted advertisements, like Google Adsense or Doubleclick, or related and similar sponsored content, such as Taboola. Accordingly, Suavelos.eu uses both of these websites to display advertisements and consequently receives funding from such advertisements.<br><br>Once visitors are on the website supporting its advertisement revenue, Suavelos' goal is to then turn these visitors into regular members of Suavelos' network through donations or fees, or have them continue to support Suavelos.</i><br><br>Suavelos created a variety of pages on Facebook which presented as centring on prosocial causes. Facebook's algorithm helped direct users to these pages (T0092: Build Network, T0151.001: Social Media Platform, T0153.006: Content Recommendation Algorithm, T0151.003: Online Community Page, T0143.208: Social Cause Persona).<br><br>Suavelos used these pages to generate traffic for their WordPress site (T0122: Direct Users to Alternative Platforms, T0152.003: Website Hosting Platform, T0152.004: Website), which used accounts on a variety of online advertising platforms to host adverts (T0146: Account, T0153.005: Online Advertising Platform). |
| [T0122 Direct Users to Alternative Platforms](../../generated_pages/techniques/T0122.md) |  IT00000415 This article examines the white nationalist group Suavelos' use of Facebook to draw visitors to its website without overtly revealing its racist ideology:<br><br><i>Suavelos uses Facebook and other platforms to amplify its message. In order to bypass the platform's community standards and keep their public pages active, Facebook pages such as “I support the police” are a good vehicle to spread a specific agenda without claiming to be racist. In looking back at this Facebook page, we followed Facebook's algorithm for related pages and found suggested Facebook pages<br><br>[...]<br><br>This amplification strategy on Facebook is successful, as according to SimilarWeb figures, it attracts around 111,000 visits every month on the Suavelos.eu website.<br><br>[...]<br><br>Revenue through online advertisements can be achieved by different platforms through targeted advertisements, like Google Adsense or Doubleclick, or related and similar sponsored content, such as Taboola. Accordingly, Suavelos.eu uses both of these websites to display advertisements and consequently receives funding from such advertisements.<br><br>Once visitors are on the website supporting its advertisement revenue, Suavelos' goal is to then turn these visitors into regular members of Suavelos' network through donations or fees, or have them continue to support Suavelos.</i><br><br>Suavelos created a variety of pages on Facebook which presented as centring on prosocial causes. Facebook's algorithm helped direct users to these pages (T0092: Build Network, T0151.001: Social Media Platform, T0153.006: Content Recommendation Algorithm, T0151.003: Online Community Page, T0143.208: Social Cause Persona).<br><br>Suavelos used these pages to generate traffic for their WordPress site (T0122: Direct Users to Alternative Platforms, T0152.003: Website Hosting Platform, T0152.004: Website), which used accounts on a variety of online advertising platforms to host adverts (T0146: Account, T0153.005: Online Advertising Platform). |
| [T0151.001 Social Media Platform](../../generated_pages/techniques/T0151.001.md) |  IT00000411 This article examines the white nationalist group Suavelos' use of Facebook to draw visitors to its website without overtly revealing its racist ideology:<br><br><i>Suavelos uses Facebook and other platforms to amplify its message. In order to bypass the platform's community standards and keep their public pages active, Facebook pages such as “I support the police” are a good vehicle to spread a specific agenda without claiming to be racist. In looking back at this Facebook page, we followed Facebook's algorithm for related pages and found suggested Facebook pages<br><br>[...]<br><br>This amplification strategy on Facebook is successful, as according to SimilarWeb figures, it attracts around 111,000 visits every month on the Suavelos.eu website.<br><br>[...]<br><br>Revenue through online advertisements can be achieved by different platforms through targeted advertisements, like Google Adsense or Doubleclick, or related and similar sponsored content, such as Taboola. Accordingly, Suavelos.eu uses both of these websites to display advertisements and consequently receives funding from such advertisements.<br><br>Once visitors are on the website supporting its advertisement revenue, Suavelos' goal is to then turn these visitors into regular members of Suavelos' network through donations or fees, or have them continue to support Suavelos.</i><br><br>Suavelos created a variety of pages on Facebook which presented as centring on prosocial causes. Facebook's algorithm helped direct users to these pages (T0092: Build Network, T0151.001: Social Media Platform, T0153.006: Content Recommendation Algorithm, T0151.003: Online Community Page, T0143.208: Social Cause Persona).<br><br>Suavelos used these pages to generate traffic for their WordPress site (T0122: Direct Users to Alternative Platforms, T0152.003: Website Hosting Platform, T0152.004: Website), which used accounts on a variety of online advertising platforms to host adverts (T0146: Account, T0153.005: Online Advertising Platform). |
| [T0151.003 Online Community Page](../../generated_pages/techniques/T0151.003.md) |  IT00000413 This article examines the white nationalist group Suavelos' use of Facebook to draw visitors to its website without overtly revealing its racist ideology:<br><br><i>Suavelos uses Facebook and other platforms to amplify its message. In order to bypass the platform's community standards and keep their public pages active, Facebook pages such as “I support the police” are a good vehicle to spread a specific agenda without claiming to be racist. In looking back at this Facebook page, we followed Facebook's algorithm for related pages and found suggested Facebook pages<br><br>[...]<br><br>This amplification strategy on Facebook is successful, as according to SimilarWeb figures, it attracts around 111,000 visits every month on the Suavelos.eu website.<br><br>[...]<br><br>Revenue through online advertisements can be achieved by different platforms through targeted advertisements, like Google Adsense or Doubleclick, or related and similar sponsored content, such as Taboola. Accordingly, Suavelos.eu uses both of these websites to display advertisements and consequently receives funding from such advertisements.<br><br>Once visitors are on the website supporting its advertisement revenue, Suavelos' goal is to then turn these visitors into regular members of Suavelos' network through donations or fees, or have them continue to support Suavelos.</i><br><br>Suavelos created a variety of pages on Facebook which presented as centring on prosocial causes. Facebook's algorithm helped direct users to these pages (T0092: Build Network, T0151.001: Social Media Platform, T0153.006: Content Recommendation Algorithm, T0151.003: Online Community Page, T0143.208: Social Cause Persona).<br><br>Suavelos used these pages to generate traffic for their WordPress site (T0122: Direct Users to Alternative Platforms, T0152.003: Website Hosting Platform, T0152.004: Website), which used accounts on a variety of online advertising platforms to host adverts (T0146: Account, T0153.005: Online Advertising Platform). |
| [T0152.003 Website Hosting Platform](../../generated_pages/techniques/T0152.003.md) |  IT00000416 This article examines the white nationalist group Suavelos' use of Facebook to draw visitors to its website without overtly revealing its racist ideology:<br><br><i>Suavelos uses Facebook and other platforms to amplify its message. In order to bypass the platform's community standards and keep their public pages active, Facebook pages such as “I support the police” are a good vehicle to spread a specific agenda without claiming to be racist. In looking back at this Facebook page, we followed Facebook's algorithm for related pages and found suggested Facebook pages<br><br>[...]<br><br>This amplification strategy on Facebook is successful, as according to SimilarWeb figures, it attracts around 111,000 visits every month on the Suavelos.eu website.<br><br>[...]<br><br>Revenue through online advertisements can be achieved by different platforms through targeted advertisements, like Google Adsense or Doubleclick, or related and similar sponsored content, such as Taboola. Accordingly, Suavelos.eu uses both of these websites to display advertisements and consequently receives funding from such advertisements.<br><br>Once visitors are on the website supporting its advertisement revenue, Suavelos' goal is to then turn these visitors into regular members of Suavelos' network through donations or fees, or have them continue to support Suavelos.</i><br><br>Suavelos created a variety of pages on Facebook which presented as centring on prosocial causes. Facebook's algorithm helped direct users to these pages (T0092: Build Network, T0151.001: Social Media Platform, T0153.006: Content Recommendation Algorithm, T0151.003: Online Community Page, T0143.208: Social Cause Persona).<br><br>Suavelos used these pages to generate traffic for their WordPress site (T0122: Direct Users to Alternative Platforms, T0152.003: Website Hosting Platform, T0152.004: Website), which used accounts on a variety of online advertising platforms to host adverts (T0146: Account, T0153.005: Online Advertising Platform). |
| [T0152.004 Website](../../generated_pages/techniques/T0152.004.md) |  IT00000417 <i>This article examines the white nationalist group Suavelos' use of Facebook to draw visitors to its website without overtly revealing their racist ideology:<br><br>Suavelos uses Facebook and other platforms to amplify its message. In order to bypass the platform's community standards and keep their public pages active, Facebook pages such as “I support the police” are a good vehicle to spread a specific agenda without claiming to be racist. In looking back at this Facebook page, we followed Facebook's algorithm for related pages and found suggested Facebook pages<br><br>[...]<br><br>This amplification strategy on Facebook is successful, as according to SimilarWeb figures, it attracts around 111,000 visits every month on the Suavelos.eu website.<br><br>[...]<br><br>Revenue through online advertisements can be achieved by different platforms through targeted advertisements, like Google Adsense or Doubleclick, or related and similar sponsored content, such as Taboola. Accordingly, Suavelos.eu uses both of these websites to display advertisements and consequently receives funding from such advertisements.<br><br>Once visitors are on the website supporting its advertisement revenue, Suavelos' goal is to then turn these visitors into regular members of Suavelos' network through donations or fees, or have them continue to support Suavelos. </i><br><br>Suavelos created a variety of pages on Facebook which presented as centring on prosocial causes. 
Facebook's algorithm helped direct users to these pages (T0092: Build Network, T0151.001: Social Media Platform, T0153.006: Content Recommendation Algorithm, T0151.003: Online Community Page, T0143.208: Social Cause Persona).<br><br>Suavelos used these pages to generate traffic for their WordPress site (T0122: Direct Users to Alternative Platforms, T0152.003: Website Hosting Platform, T0152.004: Website), which used accounts on a variety of online advertising platforms to host adverts (T0146: Account, T0153.005: Online Advertising Platform). |
| [T0153.005 Online Advertising Platform](../../generated_pages/techniques/T0153.005.md) |  IT00000418 <i>This article examines the white nationalist group Suavelos' use of Facebook to draw visitors to its website without overtly revealing their racist ideology:<br><br>Suavelos uses Facebook and other platforms to amplify its message. In order to bypass the platform's community standards and keep their public pages active, Facebook pages such as “I support the police” are a good vehicle to spread a specific agenda without claiming to be racist. In looking back at this Facebook page, we followed Facebook's algorithm for related pages and found suggested Facebook pages<br><br>[...]<br><br>This amplification strategy on Facebook is successful, as according to SimilarWeb figures, it attracts around 111,000 visits every month on the Suavelos.eu website.<br><br>[...]<br><br>Revenue through online advertisements can be achieved by different platforms through targeted advertisements, like Google Adsense or Doubleclick, or related and similar sponsored content, such as Taboola. Accordingly, Suavelos.eu uses both of these websites to display advertisements and consequently receives funding from such advertisements.<br><br>Once visitors are on the website supporting its advertisement revenue, Suavelos' goal is to then turn these visitors into regular members of Suavelos' network through donations or fees, or have them continue to support Suavelos. </i><br><br>Suavelos created a variety of pages on Facebook which presented as centring on prosocial causes. 
Facebook's algorithm helped direct users to these pages (T0092: Build Network, T0151.001: Social Media Platform, T0153.006: Content Recommendation Algorithm, T0151.003: Online Community Page, T0143.208: Social Cause Persona).<br><br>Suavelos used these pages to generate traffic for their WordPress site (T0122: Direct Users to Alternative Platforms, T0152.003: Website Hosting Platform, T0152.004: Website), which used accounts on a variety of online advertising platforms to host adverts (T0146: Account, T0153.005: Online Advertising Platform). |
| [T0153.006 Content Recommendation Algorithm](../../generated_pages/techniques/T0153.006.md) |  IT00000412 <i>This article examines the white nationalist group Suavelos' use of Facebook to draw visitors to its website without overtly revealing their racist ideology:<br><br>Suavelos uses Facebook and other platforms to amplify its message. In order to bypass the platform's community standards and keep their public pages active, Facebook pages such as “I support the police” are a good vehicle to spread a specific agenda without claiming to be racist. In looking back at this Facebook page, we followed Facebook's algorithm for related pages and found suggested Facebook pages<br><br>[...]<br><br>This amplification strategy on Facebook is successful, as according to SimilarWeb figures, it attracts around 111,000 visits every month on the Suavelos.eu website.<br><br>[...]<br><br>Revenue through online advertisements can be achieved by different platforms through targeted advertisements, like Google Adsense or Doubleclick, or related and similar sponsored content, such as Taboola. Accordingly, Suavelos.eu uses both of these websites to display advertisements and consequently receives funding from such advertisements.<br><br>Once visitors are on the website supporting its advertisement revenue, Suavelos' goal is to then turn these visitors into regular members of Suavelos' network through donations or fees, or have them continue to support Suavelos. </i><br><br>Suavelos created a variety of pages on Facebook which presented as centring on prosocial causes. 
Facebook's algorithm helped direct users to these pages (T0092: Build Network, T0151.001: Social Media Platform, T0153.006: Content Recommendation Algorithm, T0151.003: Online Community Page, T0143.208: Social Cause Persona).<br><br>Suavelos used these pages to generate traffic for their WordPress site (T0122: Direct Users to Alternative Platforms, T0152.003: Website Hosting Platform, T0152.004: Website), which used accounts on a variety of online advertising platforms to host adverts (T0146: Account, T0153.005: Online Advertising Platform). |
DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW


@@ -0,0 +1,41 @@
# Incident I00109: Coordinated Facebook Pages Designed to Fund a White Supremacist Agenda
* **Summary:** <i>[In this report EU Disinfo Lab] uncovered a network of 16 French Facebook pages hiding their white nationalist/white supremacist and racist views behind misleading names, calling users to support policemen, firefighters, or armed forces, and sometimes even victims of terrorism. <br><br>This extremist network, called Suavelos, is also very active on other platforms such as Twitter, YouTube, VKontakte, Minds, Telegram, promoting hate speech against non-white populations. This extremist network also organises summer camps for white populations, admitting to the use of a not-for-profit organisation to hide its illegal agenda, which facilitates its ability to collect and redistribute money. <br><br>[EU Disinfo Lab] also uncovered some of their funding sources, from web advertisements on their websites to online shops and donations.</i>
* **Incident type:**
* **Year started:**
* **Countries:**
* **Found via:**
* **Date added:**
| Reference | Pub Date | Authors | Org | Archive |
| --------- | -------- | ------- | --- | ------- |
| Technique | Description given for this incident |
| --------- | ------------------------- |
| [T0061 Sell Merchandise](../../generated_pages/techniques/T0061.md) |  IT00000430 This report examines the white nationalist group Suavelos' use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at technical indicators associated with the Suavelos website, and attributions which can be made as a consequence:<i><br><br>[The Google AdSense tag set up on Suavelos.eu was also found on the following domains, indicating that they are controlled by the same actor;] Alabastro.eu: an online shop to buy “white nationalists” t-shirts [and] ARPAC.eu: the website of a registered non-profit organisation advocating to lift regulation on gun control in France.<br><br>Other domains attributed to Suavelos (T0149.001: Domain) reveal a website set up to sell merchandise (T0152.004: Website, T0148.004: Payment Processing Capability, T0061: Sell Merchandise), and a website hosting a registered French non-profit (T0152.004: Website, T0097.207: NGO Persona).<br><br>To learn more about the suavelos.eu domain, we collected the following data: The domain is hosted on OVH; The owner's identity is protected; The IP Address of the server is 94.23.253.173, which is shared with 20 other domains. <br><br>The relatively low number of websites hosted on this IP address could indicate that they all belong to the same people, and are hosted on the same private server.</i><br><br>Suavelos registered a domain using the web hosting provider OVH (T0149.001: Domain, T0152.003: Website Hosting Platform, T0150.006: Purchased). The site's IP address reveals a server hosting other domains potentially owned by the actors (T0149.005: Server, T0149.006: IP Address). |
| [T0097.207 NGO Persona](../../generated_pages/techniques/T0097.207.md) |  IT00000431 This report examines the white nationalist group Suavelos' use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at technical indicators associated with the Suavelos website, and attributions which can be made as a consequence:<i><br><br>[The Google AdSense tag set up on Suavelos.eu was also found on the following domains, indicating that they are controlled by the same actor;] Alabastro.eu: an online shop to buy “white nationalists” t-shirts [and] ARPAC.eu: the website of a registered non-profit organisation advocating to lift regulation on gun control in France.<br><br>Other domains attributed to Suavelos (T0149.001: Domain) reveal a website set up to sell merchandise (T0152.004: Website, T0148.004: Payment Processing Capability, T0061: Sell Merchandise), and a website hosting a registered French non-profit (T0152.004: Website, T0097.207: NGO Persona).<br><br>To learn more about the suavelos.eu domain, we collected the following data: The domain is hosted on OVH; The owner's identity is protected; The IP Address of the server is 94.23.253.173, which is shared with 20 other domains. <br><br>The relatively low number of websites hosted on this IP address could indicate that they all belong to the same people, and are hosted on the same private server.</i><br><br>Suavelos registered a domain using the web hosting provider OVH (T0149.001: Domain, T0152.003: Website Hosting Platform, T0150.006: Purchased). The site's IP address reveals a server hosting other domains potentially owned by the actors (T0149.005: Server, T0149.006: IP Address). |
| [T0146 Account](../../generated_pages/techniques/T0146.md) |  IT00000424 This report examines the white nationalist group Suavelos' use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at the Suavelos website, and the content it links out to.<br><br><i>In going back to Suavelos' main page, we also found: A link to a page on a web shop: alabastro.eu; A link to a page to donate money to the founders through Tipeee and to the website through PayPal; [and] a link to a private forum that gathers 3.000 members: oppidum.suavelos.eu;</i><br><br>Suavelos linked out to an online store which it controlled (T0152.004: Website, T0148.004: Payment Processing Capability), and to accounts on payment processing platforms PayPal and Tipeee (T0146: Account, T0148.003: Payment Processing Platform). <br><br>The Suavelos website also hosted a private forum (T0151.009: Legacy Online Forum Platform, T0155: Gated Asset), and linked out to a variety of assets it controlled on other online platforms: accounts on Twitter (T0146: Account, T0151.008: Microblogging Platform), YouTube (T0146: Account, T0152.006: Video Platform), Instagram and VKontakte (T0146: Account, T0151.001: Social Media Platform). |
| [T0148.003 Payment Processing Platform](../../generated_pages/techniques/T0148.003.md) |  IT00000421 This report examines the white nationalist group Suavelos' use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at the Suavelos website, and the content it links out to.<br><br><i>In going back to Suavelos' main page, we also found: A link to a page on a web shop: alabastro.eu; A link to a page to donate money to the founders through Tipeee and to the website through PayPal; [and] a link to a private forum that gathers 3.000 members: oppidum.suavelos.eu;</i><br><br>Suavelos linked out to an online store which it controlled (T0152.004: Website, T0148.004: Payment Processing Capability), and to accounts on payment processing platforms PayPal and Tipeee (T0146: Account, T0148.003: Payment Processing Platform). <br><br>The Suavelos website also hosted a private forum (T0151.009: Legacy Online Forum Platform, T0155: Gated Asset), and linked out to a variety of assets it controlled on other online platforms: accounts on Twitter (T0146: Account, T0151.008: Microblogging Platform), YouTube (T0146: Account, T0152.006: Video Platform), Instagram and VKontakte (T0146: Account, T0151.001: Social Media Platform). |
| [T0148.004 Payment Processing Capability](../../generated_pages/techniques/T0148.004.md) |  IT00000420 This report examines the white nationalist group Suavelos' use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at the Suavelos website, and the content it links out to.<br><br><i>In going back to Suavelos' main page, we also found: A link to a page on a web shop: alabastro.eu; A link to a page to donate money to the founders through Tipeee and to the website through PayPal; [and] a link to a private forum that gathers 3.000 members: oppidum.suavelos.eu;</i><br><br>Suavelos linked out to an online store which it controlled (T0152.004: Website, T0148.004: Payment Processing Capability), and to accounts on payment processing platforms PayPal and Tipeee (T0146: Account, T0148.003: Payment Processing Platform). <br><br>The Suavelos website also hosted a private forum (T0151.009: Legacy Online Forum Platform, T0155: Gated Asset), and linked out to a variety of assets it controlled on other online platforms: accounts on Twitter (T0146: Account, T0151.008: Microblogging Platform), YouTube (T0146: Account, T0152.006: Video Platform), Instagram and VKontakte (T0146: Account, T0151.001: Social Media Platform). |
| [T0148.004 Payment Processing Capability](../../generated_pages/techniques/T0148.004.md) |  IT00000429 This report examines the white nationalist group Suavelos' use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at technical indicators associated with the Suavelos website, and attributions which can be made as a consequence:<i><br><br>[The Google AdSense tag set up on Suavelos.eu was also found on the following domains, indicating that they are controlled by the same actor;] Alabastro.eu: an online shop to buy “white nationalists” t-shirts [and] ARPAC.eu: the website of a registered non-profit organisation advocating to lift regulation on gun control in France.<br><br>Other domains attributed to Suavelos (T0149.001: Domain) reveal a website set up to sell merchandise (T0152.004: Website, T0148.004: Payment Processing Capability, T0061: Sell Merchandise), and a website hosting a registered French non-profit (T0152.004: Website, T0097.207: NGO Persona).<br><br>To learn more about the suavelos.eu domain, we collected the following data: The domain is hosted on OVH; The owner's identity is protected; The IP Address of the server is 94.23.253.173, which is shared with 20 other domains. <br><br>The relatively low number of websites hosted on this IP address could indicate that they all belong to the same people, and are hosted on the same private server.</i><br><br>Suavelos registered a domain using the web hosting provider OVH (T0149.001: Domain, T0152.003: Website Hosting Platform, T0150.006: Purchased). The site's IP address reveals a server hosting other domains potentially owned by the actors (T0149.005: Server, T0149.006: IP Address). |
| [T0149.001 Domain](../../generated_pages/techniques/T0149.001.md) |  IT00000428 This report examines the white nationalist group Suavelos' use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at technical indicators associated with the Suavelos website, and attributions which can be made as a consequence:<i><br><br>[The Google AdSense tag set up on Suavelos.eu was also found on the following domains, indicating that they are controlled by the same actor;] Alabastro.eu: an online shop to buy “white nationalists” t-shirts [and] ARPAC.eu: the website of a registered non-profit organisation advocating to lift regulation on gun control in France.<br><br>Other domains attributed to Suavelos (T0149.001: Domain) reveal a website set up to sell merchandise (T0152.004: Website, T0148.004: Payment Processing Capability, T0061: Sell Merchandise), and a website hosting a registered French non-profit (T0152.004: Website, T0097.207: NGO Persona).<br><br>To learn more about the suavelos.eu domain, we collected the following data: The domain is hosted on OVH; The owner's identity is protected; The IP Address of the server is 94.23.253.173, which is shared with 20 other domains. <br><br>The relatively low number of websites hosted on this IP address could indicate that they all belong to the same people, and are hosted on the same private server.</i><br><br>Suavelos registered a domain using the web hosting provider OVH (T0149.001: Domain, T0152.003: Website Hosting Platform, T0150.006: Purchased). The site's IP address reveals a server hosting other domains potentially owned by the actors (T0149.005: Server, T0149.006: IP Address). |
| [T0149.005 Server](../../generated_pages/techniques/T0149.005.md) |  IT00000433 This report examines the white nationalist group Suavelos' use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at technical indicators associated with the Suavelos website, and attributions which can be made as a consequence:<i><br><br>[The Google AdSense tag set up on Suavelos.eu was also found on the following domains, indicating that they are controlled by the same actor;] Alabastro.eu: an online shop to buy “white nationalists” t-shirts [and] ARPAC.eu: the website of a registered non-profit organisation advocating to lift regulation on gun control in France.<br><br>Other domains attributed to Suavelos (T0149.001: Domain) reveal a website set up to sell merchandise (T0152.004: Website, T0148.004: Payment Processing Capability, T0061: Sell Merchandise), and a website hosting a registered French non-profit (T0152.004: Website, T0097.207: NGO Persona).<br><br>To learn more about the suavelos.eu domain, we collected the following data: The domain is hosted on OVH; The owner's identity is protected; The IP Address of the server is 94.23.253.173, which is shared with 20 other domains. <br><br>The relatively low number of websites hosted on this IP address could indicate that they all belong to the same people, and are hosted on the same private server.</i><br><br>Suavelos registered a domain using the web hosting provider OVH (T0149.001: Domain, T0152.003: Website Hosting Platform, T0150.006: Purchased). The site's IP address reveals a server hosting other domains potentially owned by the actors (T0149.005: Server, T0149.006: IP Address). |
| [T0149.006 IP Address](../../generated_pages/techniques/T0149.006.md) |  IT00000434 This report examines the white nationalist group Suavelos' use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at technical indicators associated with the Suavelos website, and attributions which can be made as a consequence:<i><br><br>[The Google AdSense tag set up on Suavelos.eu was also found on the following domains, indicating that they are controlled by the same actor;] Alabastro.eu: an online shop to buy “white nationalists” t-shirts [and] ARPAC.eu: the website of a registered non-profit organisation advocating to lift regulation on gun control in France.<br><br>Other domains attributed to Suavelos (T0149.001: Domain) reveal a website set up to sell merchandise (T0152.004: Website, T0148.004: Payment Processing Capability, T0061: Sell Merchandise), and a website hosting a registered French non-profit (T0152.004: Website, T0097.207: NGO Persona).<br><br>To learn more about the suavelos.eu domain, we collected the following data: The domain is hosted on OVH; The owner's identity is protected; The IP Address of the server is 94.23.253.173, which is shared with 20 other domains. <br><br>The relatively low number of websites hosted on this IP address could indicate that they all belong to the same people, and are hosted on the same private server.</i><br><br>Suavelos registered a domain using the web hosting provider OVH (T0149.001: Domain, T0152.003: Website Hosting Platform, T0150.006: Purchased). The site's IP address reveals a server hosting other domains potentially owned by the actors (T0149.005: Server, T0149.006: IP Address). |
| [T0150.006 Purchased](../../generated_pages/techniques/T0150.006.md) |  IT00000432 This report examines the white nationalist group Suavelos' use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at technical indicators associated with the Suavelos website, and attributions which can be made as a consequence:<i><br><br>[The Google AdSense tag set up on Suavelos.eu was also found on the following domains, indicating that they are controlled by the same actor;] Alabastro.eu: an online shop to buy “white nationalists” t-shirts [and] ARPAC.eu: the website of a registered non-profit organisation advocating to lift regulation on gun control in France.<br><br>Other domains attributed to Suavelos (T0149.001: Domain) reveal a website set up to sell merchandise (T0152.004: Website, T0148.004: Payment Processing Capability, T0061: Sell Merchandise), and a website hosting a registered French non-profit (T0152.004: Website, T0097.207: NGO Persona).<br><br>To learn more about the suavelos.eu domain, we collected the following data: The domain is hosted on OVH; The owner's identity is protected; The IP Address of the server is 94.23.253.173, which is shared with 20 other domains. <br><br>The relatively low number of websites hosted on this IP address could indicate that they all belong to the same people, and are hosted on the same private server.</i><br><br>Suavelos registered a domain using the web hosting provider OVH (T0149.001: Domain, T0152.003: Website Hosting Platform, T0150.006: Purchased). The site's IP address reveals a server hosting other domains potentially owned by the actors (T0149.005: Server, T0149.006: IP Address). |
| [T0151.001 Social Media Platform](../../generated_pages/techniques/T0151.001.md) |  IT00000427 This report examines the white nationalist group Suavelos' use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at the Suavelos website, and the content it links out to.<br><br><i>In going back to Suavelos' main page, we also found: A link to a page on a web shop: alabastro.eu; A link to a page to donate money to the founders through Tipeee and to the website through PayPal; [and] a link to a private forum that gathers 3.000 members: oppidum.suavelos.eu;</i><br><br>Suavelos linked out to an online store which it controlled (T0152.004: Website, T0148.004: Payment Processing Capability), and to accounts on payment processing platforms PayPal and Tipeee (T0146: Account, T0148.003: Payment Processing Platform). <br><br>The Suavelos website also hosted a private forum (T0151.009: Legacy Online Forum Platform, T0155: Gated Asset), and linked out to a variety of assets it controlled on other online platforms: accounts on Twitter (T0146: Account, T0151.008: Microblogging Platform), YouTube (T0146: Account, T0152.006: Video Platform), Instagram and VKontakte (T0146: Account, T0151.001: Social Media Platform). |
| [T0151.008 Microblogging Platform](../../generated_pages/techniques/T0151.008.md) |  IT00000425 This report examines the white nationalist group Suavelos' use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at the Suavelos website, and the content it links out to.<br><br><i>In going back to Suavelos' main page, we also found: A link to a page on a web shop: alabastro.eu; A link to a page to donate money to the founders through Tipeee and to the website through PayPal; [and] a link to a private forum that gathers 3.000 members: oppidum.suavelos.eu;</i><br><br>Suavelos linked out to an online store which it controlled (T0152.004: Website, T0148.004: Payment Processing Capability), and to accounts on payment processing platforms PayPal and Tipeee (T0146: Account, T0148.003: Payment Processing Platform). <br><br>The Suavelos website also hosted a private forum (T0151.009: Legacy Online Forum Platform, T0155: Gated Asset), and linked out to a variety of assets it controlled on other online platforms: accounts on Twitter (T0146: Account, T0151.008: Microblogging Platform), YouTube (T0146: Account, T0152.006: Video Platform), Instagram and VKontakte (T0146: Account, T0151.001: Social Media Platform). |
| [T0151.009 Legacy Online Forum Platform](../../generated_pages/techniques/T0151.009.md) |  IT00000422 This report examines the white nationalist group Suavelos' use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at the Suavelos website, and the content it links out to.<br><br><i>In going back to Suavelos' main page, we also found: A link to a page on a web shop: alabastro.eu; A link to a page to donate money to the founders through Tipeee and to the website through PayPal; [and] a link to a private forum that gathers 3.000 members: oppidum.suavelos.eu;</i><br><br>Suavelos linked out to an online store which it controlled (T0152.004: Website, T0148.004: Payment Processing Capability), and to accounts on payment processing platforms PayPal and Tipeee (T0146: Account, T0148.003: Payment Processing Platform). <br><br>The Suavelos website also hosted a private forum (T0151.009: Legacy Online Forum Platform, T0155: Gated Asset), and linked out to a variety of assets it controlled on other online platforms: accounts on Twitter (T0146: Account, T0151.008: Microblogging Platform), YouTube (T0146: Account, T0152.006: Video Platform), Instagram and VKontakte (T0146: Account, T0151.001: Social Media Platform). |
| [T0152.004 Website](../../generated_pages/techniques/T0152.004.md) |  IT00000419 This report examines the white nationalist group Suavelos' use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at the Suavelos website, and the content it links out to.<br><br><i>In going back to Suavelos' main page, we also found: A link to a page on a web shop: alabastro.eu; A link to a page to donate money to the founders through Tipeee and to the website through PayPal; [and] a link to a private forum that gathers 3.000 members: oppidum.suavelos.eu;</i><br><br>Suavelos linked out to an online store which it controlled (T0152.004: Website, T0148.004: Payment Processing Capability), and to accounts on payment processing platforms PayPal and Tipeee (T0146: Account, T0148.003: Payment Processing Platform). <br><br>The Suavelos website also hosted a private forum (T0151.009: Legacy Online Forum Platform, T0155: Gated Asset), and linked out to a variety of assets it controlled on other online platforms: accounts on Twitter (T0146: Account, T0151.008: Microblogging Platform), YouTube (T0146: Account, T0152.006: Video Platform), Instagram and VKontakte (T0146: Account, T0151.001: Social Media Platform). |
| [T0152.006 Video Platform](../../generated_pages/techniques/T0152.006.md) |  IT00000426 This report examines the white nationalist group Suavelos' use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at the Suavelos website, and the content it links out to.<br><br><i>In going back to Suavelos' main page, we also found: A link to a page on a web shop: alabastro.eu; A link to a page to donate money to the founders through Tipeee and to the website through PayPal; [and] a link to a private forum that gathers 3.000 members: oppidum.suavelos.eu;</i><br><br>Suavelos linked out to an online store which it controlled (T0152.004: Website, T0148.004: Payment Processing Capability), and to accounts on payment processing platforms PayPal and Tipeee (T0146: Account, T0148.003: Payment Processing Platform). <br><br>The Suavelos website also hosted a private forum (T0151.009: Legacy Online Forum Platform, T0155: Gated Asset), and linked out to a variety of assets it controlled on other online platforms: accounts on Twitter (T0146: Account, T0151.008: Microblogging Platform), YouTube (T0146: Account, T0152.006: Video Platform), Instagram and VKontakte (T0146: Account, T0151.001: Social Media Platform). |
| [T0155 Gated Asset](../../generated_pages/techniques/T0155.md) |  IT00000423 This report examines the white nationalist group Suavelos' use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at the Suavelos website, and the content it links out to.<br><br><i>In going back to Suavelos' main page, we also found: A link to a page on a web shop: alabastro.eu; A link to a page to donate money to the founders through Tipeee and to the website through PayPal; [and] a link to a private forum that gathers 3.000 members: oppidum.suavelos.eu;</i><br><br>Suavelos linked out to an online store which it controlled (T0152.004: Website, T0148.004: Payment Processing Capability), and to accounts on payment processing platforms PayPal and Tipeee (T0146: Account, T0148.003: Payment Processing Platform). <br><br>The Suavelos website also hosted a private forum (T0151.009: Legacy Online Forum Platform, T0155: Gated Asset), and linked out to a variety of assets it controlled on other online platforms: accounts on Twitter (T0146: Account, T0151.008: Microblogging Platform), YouTube (T0146: Account, T0152.006: Video Platform), Instagram and VKontakte (T0146: Account, T0151.001: Social Media Platform). |
DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW
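Note: the technique rows above describe attributing several domains to one actor via a shared Google AdSense tag. As a minimal, hypothetical sketch of how that correlation could be automated, assuming each domain's HTML has already been fetched (the domain names and publisher ID below are fabricated for illustration):

```python
import re
from collections import defaultdict

# AdSense publisher IDs appear in page source in forms such as
# "google_ad_client: 'ca-pub-…'" or data-ad-client attributes.
ADSENSE_ID = re.compile(r"ca-pub-\d{10,16}")

def extract_adsense_ids(html: str) -> set[str]:
    """Return every AdSense publisher ID found in a page's HTML."""
    return set(ADSENSE_ID.findall(html))

def group_domains_by_publisher(pages: dict[str, str]) -> dict[str, set[str]]:
    """Map each publisher ID to the domains whose pages contain it.

    `pages` maps a domain name to that domain's raw HTML. A publisher ID
    shared by two or more domains is a (weak) signal of common control.
    """
    groups = defaultdict(set)
    for domain, html in pages.items():
        for pub_id in extract_adsense_ids(html):
            groups[pub_id].add(domain)
    return {pub_id: domains for pub_id, domains in groups.items() if len(domains) > 1}

# Fabricated HTML snippets standing in for fetched pages:
pages = {
    "example-a.eu": "<script>google_ad_client: 'ca-pub-1234567890123456'</script>",
    "example-b.eu": "<ins class='adsbygoogle' data-ad-client='ca-pub-1234567890123456'></ins>",
    "example-c.eu": "<p>No ad tags here.</p>",
}
shared = group_domains_by_publisher(pages)
```

A shared tag alone is not proof of common ownership; as in the report, it should be corroborated with other indicators (shared hosting IP, registration data).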


@@ -0,0 +1,34 @@
# Incident I00110: How COVID-19 conspiracists and extremists use crowdfunding platforms to fund their activities
* **Summary:** <i>According to the European Commission “Crowdfunding is an emerging alternative form of financing that connects those who can give, lend or invest money directly with those who need financing for a specific project. It usually refers to public online calls to contribute finance to specific projects.” During COVID-19 lockdowns, these platforms saw a significant increase in their traffic.<br><br>Our previous investigation regarding a French white supremacist network already called these services to our attention. This extremist network was using a French crowdfunding platform called Tipeee as a way to monetise polarising content and disinformation. We investigated more deeply how these platforms address disinformation on their services. For this blogpost, we had a closer look at Kickstarter, Patreon, GoFundMe, Indiegogo and Tipeee.</i>
* **Incident type**:
* **Year started:**
* **Countries:** ,
* **Found via:**
* **Date added:**
| Reference | Pub Date | Authors | Org | Archive |
| --------- | -------- | ------- | --- | ------- |
| Technique | Description given for this incident |
| --------- | ------------------------- |
| [T0017 Conduct Fundraising](../../generated_pages/techniques/T0017.md) |  IT00000435 The EU Disinfo Lab produced a report into disinformation published on crowdfunding platforms:<br><br><i>More worrisome is the direct monetisation of disinformation happening on crowdfunding platforms: on Kickstarter, we found a user openly raising money for a documentary project suggesting that COVID-19 is a conspiracy.</i><br><br>A Kickstarter user attempted to use the platform to fund production of a documentary (T0017: Conduct Fundraising, T0087: Develop Video-Based Content, T0146: Account, T0148.006: Crowdfunding Platform).<br><br><i>On Patreon, we found several instances of direct monetisation of COVID-19 disinformation, including posts promoting a device allegedly protecting against COVID-19 and 5G, as well as posts related to the “Plandemic” conspiracy video, which gained attention on YouTube before being removed by the platform.<br><br>We also found an account called “Stranger than fiction” entirely dedicated to disinformation, which openly states that their content was “Banned by screwtube and fakebook, our videos have been viewed over a billion times.”</i><br><br>The “Stranger than fiction” user presented itself as an alternative news source which had been banned from other platforms (T0146: Account, T0097.202: News Outlet Persona, T0121.001: Bypass Content Blocking, T0152.012: Subscription Service Platform).<br><br><i>On the US-based crowdfunding platform IndieGogo, EU DisinfoLab found a successful crowdfunding campaign of €133.903 for a book called Revolution Q. This book, now also available on Amazon, claims to be “Written for both newcomers and long-time QAnon fans alike, this book is a treasure-trove of information designed to help everyone weather The Storm.”</i><br><br>An IndieGogo account was used to gather funds to produce a book on QAnon (T0017: Conduct Fundraising, T0085.005: Develop Book, T0146: Account, T0148.006: Crowdfunding Platform), with the book later sold on Amazon marketplace (T0148.007: eCommerce Platform). |
| [T0085.005 Develop Book](../../generated_pages/techniques/T0085.005.md) | IT00000442 The EU Disinfo Lab produced a report into disinformation published on crowdfunding platforms:<br><br><i>More worrisome is the direct monetisation of disinformation happening on crowdfunding platforms: on Kickstarter, we found a user openly raising money for a documentary project suggesting that COVID-19 is a conspiracy.</i><br><br>A Kickstarter user attempted to use the platform to fund production of a documentary (T0017: Conduct Fundraising, T0087: Develop Video-Based Content, T0146: Account, T0148.006: Crowdfunding Platform).<br><br><i>On Patreon, we found several instances of direct monetisation of COVID-19 disinformation, including posts promoting a device allegedly protecting against COVID-19 and 5G, as well as posts related to the “Plandemic” conspiracy video, which gained attention on YouTube before being removed by the platform.<br><br>We also found an account called “Stranger than fiction” entirely dedicated to disinformation, which openly states that their content was “Banned by screwtube and fakebook, our videos have been viewed over a billion times.”</i><br><br>The “Stranger than fiction” user presented itself as an alternative news source which had been banned from other platforms (T0146: Account, T0097.202: News Outlet Persona, T0121.001: Bypass Content Blocking, T0152.012: Subscription Service Platform).<br><br><i>On the US-based crowdfunding platform IndieGogo, EU DisinfoLab found a successful crowdfunding campaign of €133.903 for a book called Revolution Q. This book, now also available on Amazon, claims to be “Written for both newcomers and long-time QAnon fans alike, this book is a treasure-trove of information designed to help everyone weather The Storm.”</i><br><br>An IndieGogo account was used to gather funds to produce a book on QAnon (T0017: Conduct Fundraising, T0085.005: Develop Book, T0146: Account, T0148.006: Crowdfunding Platform), with the book later sold on Amazon marketplace (T0148.007: eCommerce Platform). |
| [T0087 Develop Video-Based Content](../../generated_pages/techniques/T0087.md) |  IT00000436 The EU Disinfo Lab produced a report into disinformation published on crowdfunding platforms:<br><br><i>More worrisome is the direct monetisation of disinformation happening on crowdfunding platforms: on Kickstarter, we found a user openly raising money for a documentary project suggesting that COVID-19 is a conspiracy.</i><br><br>A Kickstarter user attempted to use the platform to fund production of a documentary (T0017: Conduct Fundraising, T0087: Develop Video-Based Content, T0146: Account, T0148.006: Crowdfunding Platform).<br><br><i>On Patreon, we found several instances of direct monetisation of COVID-19 disinformation, including posts promoting a device allegedly protecting against COVID-19 and 5G, as well as posts related to the “Plandemic” conspiracy video, which gained attention on YouTube before being removed by the platform.<br><br>We also found an account called “Stranger than fiction” entirely dedicated to disinformation, which openly states that their content was “Banned by screwtube and fakebook, our videos have been viewed over a billion times.”</i><br><br>The “Stranger than fiction” user presented itself as an alternative news source which had been banned from other platforms (T0146: Account, T0097.202: News Outlet Persona, T0121.001: Bypass Content Blocking, T0152.012: Subscription Service Platform).<br><br><i>On the US-based crowdfunding platform IndieGogo, EU DisinfoLab found a successful crowdfunding campaign of €133.903 for a book called Revolution Q. This book, now also available on Amazon, claims to be “Written for both newcomers and long-time QAnon fans alike, this book is a treasure-trove of information designed to help everyone weather The Storm.”</i><br><br>An IndieGogo account was used to gather funds to produce a book on QAnon (T0017: Conduct Fundraising, T0085.005: Develop Book, T0146: Account, T0148.006: Crowdfunding Platform), with the book later sold on Amazon marketplace (T0148.007: eCommerce Platform). |
| [T0097.202 News Outlet Persona](../../generated_pages/techniques/T0097.202.md) | IT00000439 The EU Disinfo Lab produced a report into disinformation published on crowdfunding platforms:<br><br><i>More worrisome is the direct monetisation of disinformation happening on crowdfunding platforms: on Kickstarter, we found a user openly raising money for a documentary project suggesting that COVID-19 is a conspiracy.</i><br><br>A Kickstarter user attempted to use the platform to fund production of a documentary (T0017: Conduct Fundraising, T0087: Develop Video-Based Content, T0146: Account, T0148.006: Crowdfunding Platform).<br><br><i>On Patreon, we found several instances of direct monetisation of COVID-19 disinformation, including posts promoting a device allegedly protecting against COVID-19 and 5G, as well as posts related to the “Plandemic” conspiracy video, which gained attention on YouTube before being removed by the platform.<br><br>We also found an account called “Stranger than fiction” entirely dedicated to disinformation, which openly states that their content was “Banned by screwtube and fakebook, our videos have been viewed over a billion times.”</i><br><br>The “Stranger than fiction” user presented itself as an alternative news source which had been banned from other platforms (T0146: Account, T0097.202: News Outlet Persona, T0121.001: Bypass Content Blocking, T0152.012: Subscription Service Platform).<br><br><i>On the US-based crowdfunding platform IndieGogo, EU DisinfoLab found a successful crowdfunding campaign of €133.903 for a book called Revolution Q. This book, now also available on Amazon, claims to be “Written for both newcomers and long-time QAnon fans alike, this book is a treasure-trove of information designed to help everyone weather The Storm.”</i><br><br>An IndieGogo account was used to gather funds to produce a book on QAnon (T0017: Conduct Fundraising, T0085.005: Develop Book, T0146: Account, T0148.006: Crowdfunding Platform), with the book later sold on Amazon marketplace (T0148.007: eCommerce Platform). |
| [T0121.001 Bypass Content Blocking](../../generated_pages/techniques/T0121.001.md) |  IT00000440 The EU Disinfo Lab produced a report into disinformation published on crowdfunding platforms:<br><br><i>More worrisome is the direct monetisation of disinformation happening on crowdfunding platforms: on Kickstarter, we found a user openly raising money for a documentary project suggesting that COVID-19 is a conspiracy.</i><br><br>A Kickstarter user attempted to use the platform to fund production of a documentary (T0017: Conduct Fundraising, T0087: Develop Video-Based Content, T0146: Account, T0148.006: Crowdfunding Platform).<br><br><i>On Patreon, we found several instances of direct monetisation of COVID-19 disinformation, including posts promoting a device allegedly protecting against COVID-19 and 5G, as well as posts related to the “Plandemic” conspiracy video, which gained attention on YouTube before being removed by the platform.<br><br>We also found an account called “Stranger than fiction” entirely dedicated to disinformation, which openly states that their content was “Banned by screwtube and fakebook, our videos have been viewed over a billion times.”</i><br><br>The “Stranger than fiction” user presented itself as an alternative news source which had been banned from other platforms (T0146: Account, T0097.202: News Outlet Persona, T0121.001: Bypass Content Blocking, T0152.012: Subscription Service Platform).<br><br><i>On the US-based crowdfunding platform IndieGogo, EU DisinfoLab found a successful crowdfunding campaign of €133.903 for a book called Revolution Q. This book, now also available on Amazon, claims to be “Written for both newcomers and long-time QAnon fans alike, this book is a treasure-trove of information designed to help everyone weather The Storm.”</i><br><br>An IndieGogo account was used to gather funds to produce a book on QAnon (T0017: Conduct Fundraising, T0085.005: Develop Book, T0146: Account, T0148.006: Crowdfunding Platform), with the book later sold on Amazon marketplace (T0148.007: eCommerce Platform). |
| [T0146 Account](../../generated_pages/techniques/T0146.md) | IT00000438 The EU Disinfo Lab produced a report into disinformation published on crowdfunding platforms:<br><br><i>More worrisome is the direct monetisation of disinformation happening on crowdfunding platforms: on Kickstarter, we found a user openly raising money for a documentary project suggesting that COVID-19 is a conspiracy.</i><br><br>A Kickstarter user attempted to use the platform to fund production of a documentary (T0017: Conduct Fundraising, T0087: Develop Video-Based Content, T0146: Account, T0148.006: Crowdfunding Platform).<br><br><i>On Patreon, we found several instances of direct monetisation of COVID-19 disinformation, including posts promoting a device allegedly protecting against COVID-19 and 5G, as well as posts related to the “Plandemic” conspiracy video, which gained attention on YouTube before being removed by the platform.<br><br>We also found an account called “Stranger than fiction” entirely dedicated to disinformation, which openly states that their content was “Banned by screwtube and fakebook, our videos have been viewed over a billion times.”</i><br><br>The “Stranger than fiction” user presented itself as an alternative news source which had been banned from other platforms (T0146: Account, T0097.202: News Outlet Persona, T0121.001: Bypass Content Blocking, T0152.012: Subscription Service Platform).<br><br><i>On the US-based crowdfunding platform IndieGogo, EU DisinfoLab found a successful crowdfunding campaign of €133.903 for a book called Revolution Q. This book, now also available on Amazon, claims to be “Written for both newcomers and long-time QAnon fans alike, this book is a treasure-trove of information designed to help everyone weather The Storm.”</i><br><br>An IndieGogo account was used to gather funds to produce a book on QAnon (T0017: Conduct Fundraising, T0085.005: Develop Book, T0146: Account, T0148.006: Crowdfunding Platform), with the book later sold on Amazon marketplace (T0148.007: eCommerce Platform). |
| [T0148.006 Crowdfunding Platform](../../generated_pages/techniques/T0148.006.md) |  IT00000437 The EU Disinfo Lab produced a report into disinformation published on crowdfunding platforms:<br><br><i>More worrisome is the direct monetisation of disinformation happening on crowdfunding platforms: on Kickstarter, we found a user openly raising money for a documentary project suggesting that COVID-19 is a conspiracy.</i><br><br>A Kickstarter user attempted to use the platform to fund production of a documentary (T0017: Conduct Fundraising, T0087: Develop Video-Based Content, T0146: Account, T0148.006: Crowdfunding Platform).<br><br><i>On Patreon, we found several instances of direct monetisation of COVID-19 disinformation, including posts promoting a device allegedly protecting against COVID-19 and 5G, as well as posts related to the “Plandemic” conspiracy video, which gained attention on YouTube before being removed by the platform.<br><br>We also found an account called “Stranger than fiction” entirely dedicated to disinformation, which openly states that their content was “Banned by screwtube and fakebook, our videos have been viewed over a billion times.”</i><br><br>The “Stranger than fiction” user presented itself as an alternative news source which had been banned from other platforms (T0146: Account, T0097.202: News Outlet Persona, T0121.001: Bypass Content Blocking, T0152.012: Subscription Service Platform).<br><br><i>On the US-based crowdfunding platform IndieGogo, EU DisinfoLab found a successful crowdfunding campaign of €133.903 for a book called Revolution Q. This book, now also available on Amazon, claims to be “Written for both newcomers and long-time QAnon fans alike, this book is a treasure-trove of information designed to help everyone weather The Storm.”</i><br><br>An IndieGogo account was used to gather funds to produce a book on QAnon (T0017: Conduct Fundraising, T0085.005: Develop Book, T0146: Account, T0148.006: Crowdfunding Platform), with the book later sold on Amazon marketplace (T0148.007: eCommerce Platform). |
| [T0148.007 eCommerce Platform](../../generated_pages/techniques/T0148.007.md) |  IT00000443 The EU Disinfo Lab produced a report into disinformation published on crowdfunding platforms:<br><br><i>More worrisome is the direct monetisation of disinformation happening on crowdfunding platforms: on Kickstarter, we found a user openly raising money for a documentary project suggesting that COVID-19 is a conspiracy.</i><br><br>A Kickstarter user attempted to use the platform to fund production of a documentary (T0017: Conduct Fundraising, T0087: Develop Video-Based Content, T0146: Account, T0148.006: Crowdfunding Platform).<br><br><i>On Patreon, we found several instances of direct monetisation of COVID-19 disinformation, including posts promoting a device allegedly protecting against COVID-19 and 5G, as well as posts related to the “Plandemic” conspiracy video, which gained attention on YouTube before being removed by the platform.<br><br>We also found an account called “Stranger than fiction” entirely dedicated to disinformation, which openly states that their content was “Banned by screwtube and fakebook, our videos have been viewed over a billion times.”</i><br><br>The “Stranger than fiction” user presented itself as an alternative news source which had been banned from other platforms (T0146: Account, T0097.202: News Outlet Persona, T0121.001: Bypass Content Blocking, T0152.012: Subscription Service Platform).<br><br><i>On the US-based crowdfunding platform IndieGogo, EU DisinfoLab found a successful crowdfunding campaign of €133.903 for a book called Revolution Q. This book, now also available on Amazon, claims to be “Written for both newcomers and long-time QAnon fans alike, this book is a treasure-trove of information designed to help everyone weather The Storm.”</i><br><br>An IndieGogo account was used to gather funds to produce a book on QAnon (T0017: Conduct Fundraising, T0085.005: Develop Book, T0146: Account, T0148.006: Crowdfunding Platform), with the book later sold on Amazon marketplace (T0148.007: eCommerce Platform). |
| [T0152.012 Subscription Service Platform](../../generated_pages/techniques/T0152.012.md) | IT00000441 The EU Disinfo Lab produced a report into disinformation published on crowdfunding platforms:<br><br><i>More worrisome is the direct monetisation of disinformation happening on crowdfunding platforms: on Kickstarter, we found a user openly raising money for a documentary project suggesting that COVID-19 is a conspiracy.</i><br><br>A Kickstarter user attempted to use the platform to fund production of a documentary (T0017: Conduct Fundraising, T0087: Develop Video-Based Content, T0146: Account, T0148.006: Crowdfunding Platform).<br><br><i>On Patreon, we found several instances of direct monetisation of COVID-19 disinformation, including posts promoting a device allegedly protecting against COVID-19 and 5G, as well as posts related to the “Plandemic” conspiracy video, which gained attention on YouTube before being removed by the platform.<br><br>We also found an account called “Stranger than fiction” entirely dedicated to disinformation, which openly states that their content was “Banned by screwtube and fakebook, our videos have been viewed over a billion times.”</i><br><br>The “Stranger than fiction” user presented itself as an alternative news source which had been banned from other platforms (T0146: Account, T0097.202: News Outlet Persona, T0121.001: Bypass Content Blocking, T0152.012: Subscription Service Platform).<br><br><i>On the US-based crowdfunding platform IndieGogo, EU DisinfoLab found a successful crowdfunding campaign of €133.903 for a book called Revolution Q. This book, now also available on Amazon, claims to be “Written for both newcomers and long-time QAnon fans alike, this book is a treasure-trove of information designed to help everyone weather The Storm.”</i><br><br>An IndieGogo account was used to gather funds to produce a book on QAnon (T0017: Conduct Fundraising, T0085.005: Develop Book, T0146: Account, T0148.006: Crowdfunding Platform), with the book later sold on Amazon marketplace (T0148.007: eCommerce Platform). |
DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW


@@ -0,0 +1,32 @@
# Incident I00111: Patreon Is Bankrolling Climate Change Deniers While We All Burn
* **Summary:** <i>As large parts of the U.S. Northwest and Canada are experiencing a historic “heat dome,” extreme droughts, and wildfires, climate change deniers are making thousands of dollars on Patreon by spreading conspiracies to their followers about an impending ice age.<br><br>A new report provided exclusively to VICE News by Advance Democracy, a nonprofit that conducts public-interest research and investigations, identified at least half a dozen major influencers who are spreading the false claim about an impending ice age that will fix the Earth’s global warming crisis. And they’re making thousands of dollars every month by doing so.</i>
* **Incident type**:
* **Year started:**
* **Countries:** ,
* **Found via:**
* **Date added:**
| Reference | Pub Date | Authors | Org | Archive |
| --------- | -------- | ------- | --- | ------- |
| Technique | Description given for this incident |
| --------- | ------------------------- |
| [T0085 Develop Text-Based Content](../../generated_pages/techniques/T0085.md) |  IT00000444 In this article VICE News discusses a report produced by Advance Democracy on people who use Patreon to spread the false claim that an impending ice age will reverse the harms of the ongoing climate crisis:<br><br><i>“The spread of climate misinformation is prolific on social media, as well as on sites like Patreon, where users are actually financially compensated through the platform for spreading falsehoods,” Daniel Jones, president of Advance Democracy, told VICE News.<br><br>“Companies hosting and promoting climate misinformation have a responsibility to take action to reduce dangerous misinformation, as falsehoods about climate science are every bit as dangerous as lies about vaccinations and disinformation about our elections.”<br><br>Patreon did not respond to VICE News’ request for comment on the report’s findings.<br><br>One of the biggest accounts spreading climate conspiracies is ADAPT 2030, which is run by David DuByne, who has 1,100 followers on Patreon. He is currently making over $3,500 every month from his subscribers.<br><br>[The science DuByne relies on does not support his hypothesis. However,] this has not prevented DuByne and many others from preying on people’s fears about climate change to spread conspiracies about an impending ice age, which they say will miraculously fix all of earth’s climate problems.<br><br>DuByne offers seven different membership levels for supporters, beginning at just $1 per month.<br><br>The most expensive costs $100 a month, and gives patrons “a private 20-minute call with David DuByne once per month, to discuss your particular preparedness issues or concerns.” So far just two people are paying this amount.<br><br>The researchers also found at least eight other accounts on Patreon that have spread climate change conspiracy theories as part of wider conspiracy sharing, including baseless claims about COVID-19 and the legitimacy of Joe Biden’s presidency. Some of these accounts are earning over $600 per month.</i><br><br>David DuByne created an account on Patreon, which he uses to post text, videos, and podcasts for his subscribers to discuss (T0085: Develop Text-Based Content, T0087: Develop Video-Based Content, T0088: Develop Audio-Based Content, T0146: Account, T0115: Post Content, T0152.012: Subscription Service Platform, T0151.014: Comments Section, T0155.006: Subscription Access). |
| [T0087 Develop Video-Based Content](../../generated_pages/techniques/T0087.md) |  IT00000445 In this article VICE News discusses a report produced by Advance Democracy on people who use Patreon to spread the false claim that an impending ice age will reverse the harms of the ongoing climate crisis:<br><br><i>“The spread of climate misinformation is prolific on social media, as well as on sites like Patreon, where users are actually financially compensated through the platform for spreading falsehoods,” Daniel Jones, president of Advance Democracy, told VICE News.<br><br>“Companies hosting and promoting climate misinformation have a responsibility to take action to reduce dangerous misinformation, as falsehoods about climate science are every bit as dangerous as lies about vaccinations and disinformation about our elections.”<br><br>Patreon did not respond to VICE News’ request for comment on the report’s findings.<br><br>One of the biggest accounts spreading climate conspiracies is ADAPT 2030, which is run by David DuByne, who has 1,100 followers on Patreon. He is currently making over $3,500 every month from his subscribers.<br><br>[The science DuByne relies on does not support his hypothesis. However,] this has not prevented DuByne and many others from preying on people’s fears about climate change to spread conspiracies about an impending ice age, which they say will miraculously fix all of earth’s climate problems.<br><br>DuByne offers seven different membership levels for supporters, beginning at just $1 per month.<br><br>The most expensive costs $100 a month, and gives patrons “a private 20-minute call with David DuByne once per month, to discuss your particular preparedness issues or concerns.” So far just two people are paying this amount.<br><br>The researchers also found at least eight other accounts on Patreon that have spread climate change conspiracy theories as part of wider conspiracy sharing, including baseless claims about COVID-19 and the legitimacy of Joe Biden’s presidency. Some of these accounts are earning over $600 per month.</i><br><br>David DuByne created an account on Patreon, which he uses to post text, videos, and podcasts for his subscribers to discuss (T0085: Develop Text-Based Content, T0087: Develop Video-Based Content, T0088: Develop Audio-Based Content, T0146: Account, T0115: Post Content, T0152.012: Subscription Service Platform, T0151.014: Comments Section, T0155.006: Subscription Access). |
| [T0088 Develop Audio-Based Content](../../generated_pages/techniques/T0088.md) |  IT00000446 In this article VICE News discusses a report produced by Advance Democracy on people who use Patreon to spread the false claim that an impending ice age will reverse the harms of the ongoing climate crisis:<br><br><i>“The spread of climate misinformation is prolific on social media, as well as on sites like Patreon, where users are actually financially compensated through the platform for spreading falsehoods,” Daniel Jones, president of Advance Democracy, told VICE News.<br><br>“Companies hosting and promoting climate misinformation have a responsibility to take action to reduce dangerous misinformation, as falsehoods about climate science are every bit as dangerous as lies about vaccinations and disinformation about our elections.”<br><br>Patreon did not respond to VICE News’ request for comment on the report’s findings.<br><br>One of the biggest accounts spreading climate conspiracies is ADAPT 2030, which is run by David DuByne, who has 1,100 followers on Patreon. He is currently making over $3,500 every month from his subscribers.<br><br>[The science DuByne relies on does not support his hypothesis. However,] this has not prevented DuByne and many others from preying on people’s fears about climate change to spread conspiracies about an impending ice age, which they say will miraculously fix all of earth’s climate problems.<br><br>DuByne offers seven different membership levels for supporters, beginning at just $1 per month.<br><br>The most expensive costs $100 a month, and gives patrons “a private 20-minute call with David DuByne once per month, to discuss your particular preparedness issues or concerns.” So far just two people are paying this amount.<br><br>The researchers also found at least eight other accounts on Patreon that have spread climate change conspiracy theories as part of wider conspiracy sharing, including baseless claims about COVID-19 and the legitimacy of Joe Biden’s presidency. Some of these accounts are earning over $600 per month.</i><br><br>David DuByne created an account on Patreon, which he uses to post text, videos, and podcasts for his subscribers to discuss (T0085: Develop Text-Based Content, T0087: Develop Video-Based Content, T0088: Develop Audio-Based Content, T0146: Account, T0115: Post Content, T0152.012: Subscription Service Platform, T0151.014: Comments Section, T0155.006: Subscription Access). |
| [T0146 Account](../../generated_pages/techniques/T0146.md) |  IT00000447 In this article VICE News discusses a report produced by Advance Democracy on people who use Patreon to spread the false claim that an impending ice age will reverse the harms of the ongoing climate crisis:<br><br><i>“The spread of climate misinformation is prolific on social media, as well as on sites like Patreon, where users are actually financially compensated through the platform for spreading falsehoods,” Daniel Jones, president of Advance Democracy, told VICE News.<br><br>“Companies hosting and promoting climate misinformation have a responsibility to take action to reduce dangerous misinformation, as falsehoods about climate science are every bit as dangerous as lies about vaccinations and disinformation about our elections.”<br><br>Patreon did not respond to VICE News’ request for comment on the report’s findings.<br><br>One of the biggest accounts spreading climate conspiracies is ADAPT 2030, which is run by David DuByne, who has 1,100 followers on Patreon. He is currently making over $3,500 every month from his subscribers.<br><br>[The science DuByne relies on does not support his hypothesis. However,] this has not prevented DuByne and many others from preying on people’s fears about climate change to spread conspiracies about an impending ice age, which they say will miraculously fix all of earth’s climate problems.<br><br>DuByne offers seven different membership levels for supporters, beginning at just $1 per month.<br><br>The most expensive costs $100 a month, and gives patrons “a private 20-minute call with David DuByne once per month, to discuss your particular preparedness issues or concerns.” So far just two people are paying this amount.<br><br>The researchers also found at least eight other accounts on Patreon that have spread climate change conspiracy theories as part of wider conspiracy sharing, including baseless claims about COVID-19 and the legitimacy of Joe Biden’s presidency. Some of these accounts are earning over $600 per month.</i><br><br>David DuByne created an account on Patreon, which he uses to post text, videos, and podcasts for his subscribers to discuss (T0085: Develop Text-Based Content, T0087: Develop Video-Based Content, T0088: Develop Audio-Based Content, T0146: Account, T0115: Post Content, T0152.012: Subscription Service Platform, T0151.014: Comments Section, T0155.006: Subscription Access). |
| [T0151.014 Comments Section](../../generated_pages/techniques/T0151.014.md) |  IT00000449 In this article VICE News discusses a report produced by Advance Democracy on people who use Patreon to spread the false claim that an impending ice age will reverse the harms of the ongoing climate crisis:<br><br><i>“The spread of climate misinformation is prolific on social media, as well as on sites like Patreon, where users are actually financially compensated through the platform for spreading falsehoods,” Daniel Jones, president of Advance Democracy, told VICE News.<br><br>“Companies hosting and promoting climate misinformation have a responsibility to take action to reduce dangerous misinformation, as falsehoods about climate science are every bit as dangerous as lies about vaccinations and disinformation about our elections.”<br><br>Patreon did not respond to VICE News’ request for comment on the report’s findings.<br><br>One of the biggest accounts spreading climate conspiracies is ADAPT 2030, which is run by David DuByne, who has 1,100 followers on Patreon. He is currently making over $3,500 every month from his subscribers.<br><br>[The science DuByne relies on does not support his hypothesis. However,] this has not prevented DuByne and many others from preying on people’s fears about climate change to spread conspiracies about an impending ice age, which they say will miraculously fix all of earth’s climate problems.<br><br>DuByne offers seven different membership levels for supporters, beginning at just $1 per month.<br><br>The most expensive costs $100 a month, and gives patrons “a private 20-minute call with David DuByne once per month, to discuss your particular preparedness issues or concerns.” So far just two people are paying this amount.<br><br>The researchers also found at least eight other accounts on Patreon that have spread climate change conspiracy theories as part of wider conspiracy sharing, including baseless claims about COVID-19 and the legitimacy of Joe Biden’s presidency. Some of these accounts are earning over $600 per month.</i><br><br>David DuByne created an account on Patreon, which he uses to post text, videos, and podcasts for his subscribers to discuss (T0085: Develop Text-Based Content, T0087: Develop Video-Based Content, T0088: Develop Audio-Based Content, T0146: Account, T0115: Post Content, T0152.012: Subscription Service Platform, T0151.014: Comments Section, T0155.006: Subscription Access). |
| [T0152.012 Subscription Service Platform](../../generated_pages/techniques/T0152.012.md) |  IT00000448 In this article VICE News discusses a report produced by Advance Democracy on people who use Patreon to spread the false claim that an impending ice age will reverse the harms of the ongoing climate crisis:<br><br><i>“The spread of climate misinformation is prolific on social media, as well as on sites like Patreon, where users are actually financially compensated through the platform for spreading falsehoods,” Daniel Jones, president of Advance Democracy, told VICE News.<br><br>“Companies hosting and promoting climate misinformation have a responsibility to take action to reduce dangerous misinformation, as falsehoods about climate science are every bit as dangerous as lies about vaccinations and disinformation about our elections.”<br><br>Patreon did not respond to VICE News request for comment on the reports findings.<br><br>One of the biggest accounts spreading climate conspiracies is ADAPT 2030, which is run by David DuByne, who has 1,100 followers on Patreon. He is currently making over $3,500 every month from his subscribers.<br><br>[The science DuByne relies on does not support his hypothesis. 
However,] this has not prevented DuByne and many others from preying on peoples fears about climate change to spread conspiracies about an impending ice age, which they say will miraculously fix all of earths climate problems.<br><br>DuByne offers seven different membership levels for supporters, beginning at just $1 per month.<br><br>The most expensive costs $100 a month, and gives patrons “a private 20-minute call with David DuByne once per month, to discuss your particular preparedness issues or concerns.” So far just two people are paying this amount.<br><br>The researchers also found at least eight other accounts on Patreon that have spread climate change conspiracy theories as part of wider conspiracy sharing, including baseless claims about COVID-19 and the legitimacy of Joe Bidens presidency. Some of these accounts are earning over $600 per month.</i><br><br>David DuByne created an account on Patreon, which he uses to post text, videos, and podcasts for his subscribers to discuss (T0085: Develop Text-Based Content, T0087: Develop Video-Based Content, T0088: Develop Audio-Based Content, T0146: Account, T0115: Post Content, T0152.012: Subscription Service Platform, T0151.014: Comments Section, T0155.006: Subscription Access). |
| [T0155.006 Subscription Access](../../generated_pages/techniques/T0155.006.md) |  IT00000450 In this article VICE News discusses a report produced by Advance Democracy on people who use Patreon to spread the false claim that an impending ice age will reverse the harms of the ongoing climate crisis:<br><br><i>“The spread of climate misinformation is prolific on social media, as well as on sites like Patreon, where users are actually financially compensated through the platform for spreading falsehoods,” Daniel Jones, president of Advance Democracy, told VICE News.<br><br>“Companies hosting and promoting climate misinformation have a responsibility to take action to reduce dangerous misinformation, as falsehoods about climate science are every bit as dangerous as lies about vaccinations and disinformation about our elections.”<br><br>Patreon did not respond to VICE News request for comment on the reports findings.<br><br>One of the biggest accounts spreading climate conspiracies is ADAPT 2030, which is run by David DuByne, who has 1,100 followers on Patreon. He is currently making over $3,500 every month from his subscribers.<br><br>[The science DuByne relies on does not support his hypothesis. 
However,] this has not prevented DuByne and many others from preying on peoples fears about climate change to spread conspiracies about an impending ice age, which they say will miraculously fix all of earths climate problems.<br><br>DuByne offers seven different membership levels for supporters, beginning at just $1 per month.<br><br>The most expensive costs $100 a month, and gives patrons “a private 20-minute call with David DuByne once per month, to discuss your particular preparedness issues or concerns.” So far just two people are paying this amount.<br><br>The researchers also found at least eight other accounts on Patreon that have spread climate change conspiracy theories as part of wider conspiracy sharing, including baseless claims about COVID-19 and the legitimacy of Joe Bidens presidency. Some of these accounts are earning over $600 per month.</i><br><br>David DuByne created an account on Patreon, which he uses to post text, videos, and podcasts for his subscribers to discuss (T0085: Develop Text-Based Content, T0087: Develop Video-Based Content, T0088: Develop Audio-Based Content, T0146: Account, T0115: Post Content, T0152.012: Subscription Service Platform, T0151.014: Comments Section, T0155.006: Subscription Access). |
DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW

View File

@ -0,0 +1,28 @@
# Incident I00112: Patreon allows disinformation and conspiracies to be monetised in Spain
* **Summary:** <i>Despite the recent measures taken by Patreon to combat QAnon-related disinformation, EU DisinfoLab has found that content linked to these theories continues to circulate and be monetised on the crowdfunding platform in Spain.<br><br>Recent measures taken by Patreon are exclusively targeting QAnon creators, but do not cover disinformation in a broader sense. We observed that other conspiratorial content on COVID-19 denialism or 5G is also being monetised.<br><br>Some users claim explicitly to use Patreon as a secondary channel to evade “censorship” by other platforms such as YouTube. Therefore, Patreon seems to become a safe haven for disinformation and extreme content that have been removed for violating the policies of other platforms.</i>
* **Incident type:**
* **Year started:**
* **Countries:** ,
* **Found via:**
* **Date added:**
| Reference | Pub Date | Authors | Org | Archive |
| --------- | -------- | ------- | --- | ------- |
| Technique | Description given for this incident |
| --------- | ------------------------- |
| [T0121.001 Bypass Content Blocking](../../generated_pages/techniques/T0121.001.md) |  IT00000452 In this report EU DisinfoLab identified 17 Spanish accounts that monetise and/or spread QAnon content or other conspiracy theories on Patreon. Content produced by these accounts goes against Patreon's stated policy of removing creators that “advance disinformation promoting the QAnon conspiracy theory”. EU DisinfoLab found:<br><br><i>In most cases, the creators monetise the content directly on Patreon (posts are only accessible for people sponsoring the creators) but there are also cases of indirect monetization (monetization through links leading to other platforms), an aspect that was flagged and analysed by EU DisinfoLab in the mentioned previous report.<br><br>Some creators display links that redirects users to other platforms such as YouTube or LBRY where they can monetise their content. Some even offer almost all of their videos for free by redirecting to their YouTube channel.<br><br>Another modus operandi is for the creators to advertise on Patreon that they are looking for financing through PayPal or provide the author's email to explore other financing alternatives.<br><br>[...]<br><br>Sometimes the content offered for a fee on Patreon is freely accessible on other platforms. Creators openly explain that they seek voluntary donation on Patreon, but that their creations will be public on YouTube. This means that the model of these platforms does not always involve a direct monetisation of the content. Creators who have built a strong reputation previously on other platforms can use Patreon as a platform to get some sponsorship which is not related to the content and give them more freedom to create.<br><br>Some users explicitly claim to use Patreon as a secondary channel to evade “censorship” by other platforms such as YouTube. 
Patreon seems to be perceived as a safe haven for disinformation and fringe content that has been suppressed for violating the policies of other platforms. For example, Jorge Guerra points out how Patreon acts as a back-up channel to go to in case of censorship by YouTube. Meanwhile, Alfa Mind openly claims to use Patreon to publish content that is not allowed on YouTube. “Exclusive access to videos that are prohibited on YouTube. These videos are only accessible on Patreon, as their content is very explicit and shocking”, are offered to the patrons who pay €3 per month.</i><br><br>In spite of Patreon's stated policy, actors use accounts on their platform to generate revenue or donations for their content, and to provide a space to host content which was removed from other platforms (T0146: Account, T0152.012: Subscription Service Platform, T0121.001: Bypass Content Blocking).<br><br>Some actors were observed accepting donations via PayPal (T0146: Account, T0148.003: Payment Processing Platform). |
| [T0148.003 Payment Processing Platform](../../generated_pages/techniques/T0148.003.md) |  IT00000451 In this report EU DisinfoLab identified 17 Spanish accounts that monetise and/or spread QAnon content or other conspiracy theories on Patreon. Content produced by these accounts goes against Patreon's stated policy of removing creators that “advance disinformation promoting the QAnon conspiracy theory”. EU DisinfoLab found:<br><br><i>In most cases, the creators monetise the content directly on Patreon (posts are only accessible for people sponsoring the creators) but there are also cases of indirect monetization (monetization through links leading to other platforms), an aspect that was flagged and analysed by EU DisinfoLab in the mentioned previous report.<br><br>Some creators display links that redirects users to other platforms such as YouTube or LBRY where they can monetise their content. Some even offer almost all of their videos for free by redirecting to their YouTube channel.<br><br>Another modus operandi is for the creators to advertise on Patreon that they are looking for financing through PayPal or provide the author's email to explore other financing alternatives.<br><br>[...]<br><br>Sometimes the content offered for a fee on Patreon is freely accessible on other platforms. Creators openly explain that they seek voluntary donation on Patreon, but that their creations will be public on YouTube. This means that the model of these platforms does not always involve a direct monetisation of the content. Creators who have built a strong reputation previously on other platforms can use Patreon as a platform to get some sponsorship which is not related to the content and give them more freedom to create.<br><br>Some users explicitly claim to use Patreon as a secondary channel to evade “censorship” by other platforms such as YouTube. 
Patreon seems to be perceived as a safe haven for disinformation and fringe content that has been suppressed for violating the policies of other platforms. For example, Jorge Guerra points out how Patreon acts as a back-up channel to go to in case of censorship by YouTube. Meanwhile, Alfa Mind openly claims to use Patreon to publish content that is not allowed on YouTube. “Exclusive access to videos that are prohibited on YouTube. These videos are only accessible on Patreon, as their content is very explicit and shocking”, are offered to the patrons who pay €3 per month.</i><br><br>In spite of Patreon's stated policy, actors use accounts on their platform to generate revenue or donations for their content, and to provide a space to host content which was removed from other platforms (T0146: Account, T0152.012: Subscription Service Platform, T0121.001: Bypass Content Blocking).<br><br>Some actors were observed accepting donations via PayPal (T0146: Account, T0148.003: Payment Processing Platform). |
| [T0152.012 Subscription Service Platform](../../generated_pages/techniques/T0152.012.md) |  IT00000453 In this report EU DisinfoLab identified 17 Spanish accounts that monetise and/or spread QAnon content or other conspiracy theories on Patreon. Content produced by these accounts goes against Patreon's stated policy of removing creators that “advance disinformation promoting the QAnon conspiracy theory”. EU DisinfoLab found:<br><br><i>In most cases, the creators monetise the content directly on Patreon (posts are only accessible for people sponsoring the creators) but there are also cases of indirect monetization (monetization through links leading to other platforms), an aspect that was flagged and analysed by EU DisinfoLab in the mentioned previous report.<br><br>Some creators display links that redirects users to other platforms such as YouTube or LBRY where they can monetise their content. Some even offer almost all of their videos for free by redirecting to their YouTube channel.<br><br>Another modus operandi is for the creators to advertise on Patreon that they are looking for financing through PayPal or provide the author's email to explore other financing alternatives.<br><br>[...]<br><br>Sometimes the content offered for a fee on Patreon is freely accessible on other platforms. Creators openly explain that they seek voluntary donation on Patreon, but that their creations will be public on YouTube. This means that the model of these platforms does not always involve a direct monetisation of the content. Creators who have built a strong reputation previously on other platforms can use Patreon as a platform to get some sponsorship which is not related to the content and give them more freedom to create.<br><br>Some users explicitly claim to use Patreon as a secondary channel to evade “censorship” by other platforms such as YouTube. 
Patreon seems to be perceived as a safe haven for disinformation and fringe content that has been suppressed for violating the policies of other platforms. For example, Jorge Guerra points out how Patreon acts as a back-up channel to go to in case of censorship by YouTube. Meanwhile, Alfa Mind openly claims to use Patreon to publish content that is not allowed on YouTube. “Exclusive access to videos that are prohibited on YouTube. These videos are only accessible on Patreon, as their content is very explicit and shocking”, are offered to the patrons who pay €3 per month.</i><br><br>In spite of Patreon's stated policy, actors use accounts on their platform to generate revenue or donations for their content, and to provide a space to host content which was removed from other platforms (T0146: Account, T0152.012: Subscription Service Platform, T0121.001: Bypass Content Blocking).<br><br>Some actors were observed accepting donations via PayPal (T0146: Account, T0148.003: Payment Processing Platform). |
DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW

View File

@ -0,0 +1,35 @@
# Incident I00113: Inside the Shadowy World of Disinformation for Hire in Kenya
* **Summary:** <i>This research examines how Kenyan journalists, judges, and other members of civil society are facing coordinated disinformation and harassment campaigns on Twitter — and that Twitter is doing very little to stop it. The research provides a grim window into the booming and shadowy industry of Twitter influencers for political hire here in Kenya. <br><br>We conducted the research over the course of three months using tools like Sprinklr, Gephi and Trendinalia. We also interviewed influencers who participated in the campaigns, and collected a vast trove of screenshots, memes, and other evidence. In total, our research uncovered at least nine different disinformation campaigns consisting of over 23,000 tweets and 3,700 participating accounts.</i>
* **Incident type:**
* **Year started:**
* **Countries:** ,
* **Found via:**
* **Date added:**
| Reference | Pub Date | Authors | Org | Archive |
| --------- | -------- | ------- | --- | ------- |
| Technique | Description given for this incident |
| --------- | ------------------------- |
| [T0121 Manipulate Platform Algorithm](../../generated_pages/techniques/T0121.md) |  IT00000459 Researchers at Mozilla examined influence operations targeting Kenyan citizens on Twitter in 2021, providing “a grim window into the booming and shadowy industry of Twitter influencers for political hire here in Kenya”, and giving insight into the operations' operationalisation:<br><br><i>In our interviews with one of the influencers, they informed us of the agile tactics they use to organize and avoid detection. For example, when it's time to carry out the campaign the influencers would be added to a Whatsapp group. Here, they received direction about what to post, the hashtags to use, which tweets to engage with and who to target. Synchronizing the tweets was also incredibly important for them. It's what enables them to achieve their goal of trending on Twitter and gain amplification.<br><br>[...]<br><br>They revealed to us that those participating in the exercise are paid roughly between $10 and $15 to participate in three campaigns per day. Each campaign execution involves tweeting about the hashtags of the day until it appears on the trending section of Twitter. Additionally, some individuals have managed to reach retainer level and get paid about $250 per month. Their job is to make sure the campaigns are executed on a day-by-day basis with different hashtags.</i><br><br>An M-PESA account (T0148.002: Bank Account, T0148.001: Online Banking Platform) was used to pay campaign participants.<br><br>Participants were organised in WhatsApp groups (T0129.005: Coordinate on Encrypted/Closed Networks, T0151.007: Chat Broadcast Group, T0151.004: Chat Platform), in which they planned how to get campaign content trending on Twitter (T0121: Manipulate Platform Algorithm, T0151.008: Microblogging Platform). |
| [T0129.005 Coordinate on Encrypted/Closed Networks](../../generated_pages/techniques/T0129.005.md) |  IT00000456 Researchers at Mozilla examined influence operations targeting Kenyan citizens on Twitter in 2021, providing “a grim window into the booming and shadowy industry of Twitter influencers for political hire here in Kenya”, and giving insight into the operations' operationalisation:<br><br><i>In our interviews with one of the influencers, they informed us of the agile tactics they use to organize and avoid detection. For example, when it's time to carry out the campaign the influencers would be added to a Whatsapp group. Here, they received direction about what to post, the hashtags to use, which tweets to engage with and who to target. Synchronizing the tweets was also incredibly important for them. It's what enables them to achieve their goal of trending on Twitter and gain amplification.<br><br>[...]<br><br>They revealed to us that those participating in the exercise are paid roughly between $10 and $15 to participate in three campaigns per day. Each campaign execution involves tweeting about the hashtags of the day until it appears on the trending section of Twitter. Additionally, some individuals have managed to reach retainer level and get paid about $250 per month. Their job is to make sure the campaigns are executed on a day-by-day basis with different hashtags.</i><br><br>An M-PESA account (T0148.002: Bank Account, T0148.001: Online Banking Platform) was used to pay campaign participants.<br><br>Participants were organised in WhatsApp groups (T0129.005: Coordinate on Encrypted/Closed Networks, T0151.007: Chat Broadcast Group, T0151.004: Chat Platform), in which they planned how to get campaign content trending on Twitter (T0121: Manipulate Platform Algorithm, T0151.008: Microblogging Platform). |
| [T0146.003 Verified Account](../../generated_pages/techniques/T0146.003.md) |  IT00000461 Researchers at Mozilla examined influence operations targeting Kenyan citizens on Twitter in 2021, providing <i>“a grim window into the booming and shadowy industry of Twitter influencers for political hire here in Kenya”.</i> The report touches upon how actors gained access to Twitter accounts, and what personas they presented:<br><br><i>Verified accounts are complicit. One influencer we spoke to mentioned that the people who own coveted “blue check” accounts will often rent them out for disinformation campaigns. These verified accounts can improve the campaign's chances of trending. Says one interviewee: “The owner of the account usually receives a cut of the campaign loot”.<br><br>[...]<br><br>Many of the accounts we examined appear to give an aura of authenticity, but in reality they are not authentic. Simply looking at their date of creation won't give you a hint as to their purpose. We had to dig deeper. The profile pictures and content of some of the accounts gave us the answers we were looking for. A common tactic these accounts utilize is using suggestive pictures of women to bait men into following them, or at least pay attention. In terms of content, many of these accounts tweeted off the same hashtags for days on end and will constantly retweet a specific set of accounts.</i><br><br>Actors participating in this operation rented verified Twitter accounts (in 2021 a checkmark on Twitter verified a user's identity), which were repurposed and given updated account imagery (T0146.003: Verified Account, T0150.007: Rented, T0150.004: Repurposed, T0145.006: Attractive Person Account Imagery). |
| [T0148.001 Online Banking Platform](../../generated_pages/techniques/T0148.001.md) |  IT00000455 Researchers at Mozilla examined influence operations targeting Kenyan citizens on Twitter in 2021, providing “a grim window into the booming and shadowy industry of Twitter influencers for political hire here in Kenya”, and giving insight into the operations' operationalisation:<br><br><i>In our interviews with one of the influencers, they informed us of the agile tactics they use to organize and avoid detection. For example, when it's time to carry out the campaign the influencers would be added to a Whatsapp group. Here, they received direction about what to post, the hashtags to use, which tweets to engage with and who to target. Synchronizing the tweets was also incredibly important for them. It's what enables them to achieve their goal of trending on Twitter and gain amplification.<br><br>[...]<br><br>They revealed to us that those participating in the exercise are paid roughly between $10 and $15 to participate in three campaigns per day. Each campaign execution involves tweeting about the hashtags of the day until it appears on the trending section of Twitter. Additionally, some individuals have managed to reach retainer level and get paid about $250 per month. Their job is to make sure the campaigns are executed on a day-by-day basis with different hashtags.</i><br><br>An M-PESA account (T0148.002: Bank Account, T0148.001: Online Banking Platform) was used to pay campaign participants.<br><br>Participants were organised in WhatsApp groups (T0129.005: Coordinate on Encrypted/Closed Networks, T0151.007: Chat Broadcast Group, T0151.004: Chat Platform), in which they planned how to get campaign content trending on Twitter (T0121: Manipulate Platform Algorithm, T0151.008: Microblogging Platform). |
| [T0148.002 Bank Account](../../generated_pages/techniques/T0148.002.md) |  IT00000454 Researchers at Mozilla examined influence operations targeting Kenyan citizens on Twitter in 2021, providing “a grim window into the booming and shadowy industry of Twitter influencers for political hire here in Kenya”, and giving insight into the operations' operationalisation:<br><br><i>In our interviews with one of the influencers, they informed us of the agile tactics they use to organize and avoid detection. For example, when it's time to carry out the campaign the influencers would be added to a Whatsapp group. Here, they received direction about what to post, the hashtags to use, which tweets to engage with and who to target. Synchronizing the tweets was also incredibly important for them. It's what enables them to achieve their goal of trending on Twitter and gain amplification.<br><br>[...]<br><br>They revealed to us that those participating in the exercise are paid roughly between $10 and $15 to participate in three campaigns per day. Each campaign execution involves tweeting about the hashtags of the day until it appears on the trending section of Twitter. Additionally, some individuals have managed to reach retainer level and get paid about $250 per month. Their job is to make sure the campaigns are executed on a day-by-day basis with different hashtags.</i><br><br>An M-PESA account (T0148.002: Bank Account, T0148.001: Online Banking Platform) was used to pay campaign participants.<br><br>Participants were organised in WhatsApp groups (T0129.005: Coordinate on Encrypted/Closed Networks, T0151.007: Chat Broadcast Group, T0151.004: Chat Platform), in which they planned how to get campaign content trending on Twitter (T0121: Manipulate Platform Algorithm, T0151.008: Microblogging Platform). |
| [T0150.004 Repurposed](../../generated_pages/techniques/T0150.004.md) |  IT00000463 Researchers at Mozilla examined influence operations targeting Kenyan citizens on Twitter in 2021, providing <i>“a grim window into the booming and shadowy industry of Twitter influencers for political hire here in Kenya”.</i> The report touches upon how actors gained access to Twitter accounts, and what personas they presented:<br><br><i>Verified accounts are complicit. One influencer we spoke to mentioned that the people who own coveted “blue check” accounts will often rent them out for disinformation campaigns. These verified accounts can improve the campaign's chances of trending. Says one interviewee: “The owner of the account usually receives a cut of the campaign loot”.<br><br>[...]<br><br>Many of the accounts we examined appear to give an aura of authenticity, but in reality they are not authentic. Simply looking at their date of creation won't give you a hint as to their purpose. We had to dig deeper. The profile pictures and content of some of the accounts gave us the answers we were looking for. A common tactic these accounts utilize is using suggestive pictures of women to bait men into following them, or at least pay attention. In terms of content, many of these accounts tweeted off the same hashtags for days on end and will constantly retweet a specific set of accounts.</i><br><br>Actors participating in this operation rented verified Twitter accounts (in 2021 a checkmark on Twitter verified a user's identity), which were repurposed and given updated account imagery (T0146.003: Verified Account, T0150.007: Rented, T0150.004: Repurposed, T0145.006: Attractive Person Account Imagery). |
| [T0150.007 Rented](../../generated_pages/techniques/T0150.007.md) |  IT00000462 Researchers at Mozilla examined influence operations targeting Kenyan citizens on Twitter in 2021, providing <i>“a grim window into the booming and shadowy industry of Twitter influencers for political hire here in Kenya”.</i> The report touches upon how actors gained access to Twitter accounts, and what personas they presented:<br><br><i>Verified accounts are complicit. One influencer we spoke to mentioned that the people who own coveted “blue check” accounts will often rent them out for disinformation campaigns. These verified accounts can improve the campaign's chances of trending. Says one interviewee: “The owner of the account usually receives a cut of the campaign loot”.<br><br>[...]<br><br>Many of the accounts we examined appear to give an aura of authenticity, but in reality they are not authentic. Simply looking at their date of creation won't give you a hint as to their purpose. We had to dig deeper. The profile pictures and content of some of the accounts gave us the answers we were looking for. A common tactic these accounts utilize is using suggestive pictures of women to bait men into following them, or at least pay attention. In terms of content, many of these accounts tweeted off the same hashtags for days on end and will constantly retweet a specific set of accounts.</i><br><br>Actors participating in this operation rented verified Twitter accounts (in 2021 a checkmark on Twitter verified a user's identity), which were repurposed and given updated account imagery (T0146.003: Verified Account, T0150.007: Rented, T0150.004: Repurposed, T0145.006: Attractive Person Account Imagery). |
| [T0151.004 Chat Platform](../../generated_pages/techniques/T0151.004.md) |  IT00000458 Researchers at Mozilla examined influence operations targeting Kenyan citizens on Twitter in 2021, providing “a grim window into the booming and shadowy industry of Twitter influencers for political hire here in Kenya”, and giving insight into the operations' operationalisation:<br><br><i>In our interviews with one of the influencers, they informed us of the agile tactics they use to organize and avoid detection. For example, when it's time to carry out the campaign the influencers would be added to a Whatsapp group. Here, they received direction about what to post, the hashtags to use, which tweets to engage with and who to target. Synchronizing the tweets was also incredibly important for them. It's what enables them to achieve their goal of trending on Twitter and gain amplification.<br><br>[...]<br><br>They revealed to us that those participating in the exercise are paid roughly between $10 and $15 to participate in three campaigns per day. Each campaign execution involves tweeting about the hashtags of the day until it appears on the trending section of Twitter. Additionally, some individuals have managed to reach retainer level and get paid about $250 per month. Their job is to make sure the campaigns are executed on a day-by-day basis with different hashtags.</i><br><br>An M-PESA account (T0148.002: Bank Account, T0148.001: Online Banking Platform) was used to pay campaign participants.<br><br>Participants were organised in WhatsApp groups (T0129.005: Coordinate on Encrypted/Closed Networks, T0151.007: Chat Broadcast Group, T0151.004: Chat Platform), in which they planned how to get campaign content trending on Twitter (T0121: Manipulate Platform Algorithm, T0151.008: Microblogging Platform). |
| [T0151.007 Chat Broadcast Group](../../generated_pages/techniques/T0151.007.md) |  IT00000457 Researchers at Mozilla examined influence operations targeting Kenyan citizens on Twitter in 2021, providing “a grim window into the booming and shadowy industry of Twitter influencers for political hire here in Kenya”, and giving insight into the operations' operationalisation:<br><br><i>In our interviews with one of the influencers, they informed us of the agile tactics they use to organize and avoid detection. For example, when it's time to carry out the campaign the influencers would be added to a Whatsapp group. Here, they received direction about what to post, the hashtags to use, which tweets to engage with and who to target. Synchronizing the tweets was also incredibly important for them. It's what enables them to achieve their goal of trending on Twitter and gain amplification.<br><br>[...]<br><br>They revealed to us that those participating in the exercise are paid roughly between $10 and $15 to participate in three campaigns per day. Each campaign execution involves tweeting about the hashtags of the day until it appears on the trending section of Twitter. Additionally, some individuals have managed to reach retainer level and get paid about $250 per month. Their job is to make sure the campaigns are executed on a day-by-day basis with different hashtags.</i><br><br>An M-PESA account (T0148.002: Bank Account, T0148.001: Online Banking Platform) was used to pay campaign participants.<br><br>Participants were organised in WhatsApp groups (T0129.005: Coordinate on Encrypted/Closed Networks, T0151.007: Chat Broadcast Group, T0151.004: Chat Platform), in which they planned how to get campaign content trending on Twitter (T0121: Manipulate Platform Algorithm, T0151.008: Microblogging Platform). |
| [T0151.008 Microblogging Platform](../../generated_pages/techniques/T0151.008.md) |  IT00000460 Researchers at Mozilla examined influence operations targeting Kenyan citizens on Twitter in 2021, providing “a grim window into the booming and shadowy industry of Twitter influencers for political hire here in Kenya”, and giving insight into the operations' operationalisation:<br><br><i>In our interviews with one of the influencers, they informed us of the agile tactics they use to organize and avoid detection. For example, when it's time to carry out the campaign the influencers would be added to a Whatsapp group. Here, they received direction about what to post, the hashtags to use, which tweets to engage with and who to target. Synchronizing the tweets was also incredibly important for them. It's what enables them to achieve their goal of trending on Twitter and gain amplification.<br><br>[...]<br><br>They revealed to us that those participating in the exercise are paid roughly between $10 and $15 to participate in three campaigns per day. Each campaign execution involves tweeting about the hashtags of the day until it appears on the trending section of Twitter. Additionally, some individuals have managed to reach retainer level and get paid about $250 per month. Their job is to make sure the campaigns are executed on a day-by-day basis with different hashtags.</i><br><br>An M-PESA account (T0148.002: Bank Account, T0148.001: Online Banking Platform) was used to pay campaign participants.<br><br>Participants were organised in WhatsApp groups (T0129.005: Coordinate on Encrypted/Closed Networks, T0151.007: Chat Broadcast Group, T0151.004: Chat Platform), in which they planned how to get campaign content trending on Twitter (T0121: Manipulate Platform Algorithm, T0151.008: Microblogging Platform). |
DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW
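Note: the pay rates quoted in the Mozilla interviews lend themselves to a quick back-of-the-envelope check. The sketch below uses only the reported figures ($10 to $15 per campaign, three campaigns a day, a $250/month retainer); the 30-day month is an assumption for illustration.

```python
# Rough earnings model for the pay rates reported in the Mozilla research.
# All figures are approximate ranges quoted by interviewees, not audited data.
PER_CAMPAIGN_USD = (10, 15)   # paid per campaign execution
CAMPAIGNS_PER_DAY = 3
RETAINER_USD_MONTH = 250

def daily_range(per_campaign=PER_CAMPAIGN_USD, campaigns=CAMPAIGNS_PER_DAY):
    """Daily earnings range for a per-campaign participant."""
    low, high = per_campaign
    return low * campaigns, high * campaigns

def monthly_range(days_active=30):
    """Monthly range, assuming participation every day (an assumption)."""
    low, high = daily_range()
    return low * days_active, high * days_active

low, high = daily_range()
m_low, m_high = monthly_range()
# The $250/month retainer sits well below the per-campaign ceiling,
# which the interviewees frame as trading income for steady work.
print(low, high, m_low, m_high, RETAINER_USD_MONTH)
```

This only sanity-checks the quoted numbers; it makes no claim about actual participation rates.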

File diff suppressed because one or more lines are too long


@@ -0,0 +1,27 @@
# Incident I00115: How Facebook shapes your feed
* **Summary:** <i>Facebook's news feed algorithm has been blamed for fanning sectarian hatred, steering users toward extremism and conspiracy theories, and incentivizing politicians to take more divisive stands. It's in the spotlight thanks to waves of revelations from the Facebook Papers and testimony from whistleblower Frances Haugen, who argues it's at the core of the company's problems.<br><br>But how exactly does it work, and what makes it so influential?<br><br>While the phrase “the algorithm” has taken on sinister, even mythical overtones, it is, at its most basic level, a system that decides a post's position on the news feed based on predictions about each user's preferences and tendencies. The details of its design determine what sorts of content thrive on the world's largest social network, and what types languish — which in turn shapes the posts we all create, and the ways we interact on its platform.<br><br>Facebook doesn't release comprehensive data on the actual proportions of posts in any given user's feed, or on Facebook as a whole. And each user's feed is highly personalized to their behaviors. But a combination of internal Facebook documents, publicly available information and conversations with Facebook insiders offers a glimpse into how different approaches to the algorithm can dramatically alter the categories of content that tend to flourish.</i>
* **Incident type:**
* **Year started:**
* **Countries:** ,
* **Found via:**
* **Date added:**
| Reference | Pub Date | Authors | Org | Archive |
| --------- | -------- | ------- | --- | ------- |
| Technique | Description given for this incident |
| --------- | ------------------------- |
| [T0151.001 Social Media Platform](../../generated_pages/techniques/T0151.001.md) |  IT00000471 This 2021 report by The Washington Post explains the mechanics of Facebook's algorithm (T0151.001: Social Media Platform, T0153.006: Content Recommendation Algorithm):<br><br><i>In its early years, Facebook's algorithm prioritized signals such as likes, clicks and comments to decide which posts to amplify. Publishers, brands and individual users soon learned how to craft posts and headlines designed to induce likes and clicks, giving rise to what came to be known as “clickbait.” By 2013, upstart publishers such as Upworthy and ViralNova were amassing tens of millions of readers with articles designed specifically to game Facebook's news feed algorithm.<br><br>Facebook realized that users were growing wary of misleading teaser headlines, and the company recalibrated its algorithm in 2014 and 2015 to downgrade clickbait and focus on new metrics, such as the amount of time a user spent reading a story or watching a video, and incorporating surveys on what content users found most valuable. Around the same time, its executives identified video as a business priority, and used the algorithm to boost “native” videos shared directly to Facebook. By the mid-2010s, the news feed had tilted toward slick, professionally produced content, especially videos that would hold people's attention.<br><br>In 2016, however, Facebook executives grew worried about a decline in “original sharing.” Users were spending so much time passively watching and reading that they weren't interacting with each other as much. Young people in particular shifted their personal conversations to rivals such as Snapchat that offered more intimacy.<br><br>Once again, Facebook found its answer in the algorithm: It developed a new set of goal metrics that it called “meaningful social interactions,” designed to show users more posts from friends and family, and fewer from big publishers and brands. In particular, the algorithm began to give outsize weight to posts that sparked lots of comments and replies.<br><br>The downside of this approach was that the posts that sparked the most comments tended to be the ones that made people angry or offended them, the documents show. Facebook became an angrier, more polarizing place. It didn't help that, starting in 2017, the algorithm had assigned reaction emoji — including the angry emoji — five times the weight of a simple “like,” according to company documents.<br><br>[...]<br><br>Internal documents show Facebook researchers found that, for the most politically oriented 1 million American users, nearly 90 percent of the content that Facebook shows them is about politics and social issues. Those groups also received the most misinformation, especially a set of users associated with mostly right-leaning content, who were shown one misinformation post out of every 40, according to a document from June 2020.<br><br>One takeaway is that Facebook's algorithm isn't a runaway train. The company may not directly control what any given user posts, but by choosing which types of posts will be seen, it sculpts the information landscape according to its business priorities. Some within the company would like to see Facebook use the algorithm to explicitly promote certain values, such as democracy and civil discourse. Others have suggested that it develop and prioritize new metrics that align with users' values, as with a 2020 experiment in which the algorithm was trained to predict what posts they would find “good for the world” and “bad for the world,” and optimize for the former.</i> |
| [T0153.006 Content Recommendation Algorithm](../../generated_pages/techniques/T0153.006.md) |  IT00000472 This 2021 report by The Washington Post explains the mechanics of Facebook's algorithm (T0151.001: Social Media Platform, T0153.006: Content Recommendation Algorithm):<br><br><i>In its early years, Facebook's algorithm prioritized signals such as likes, clicks and comments to decide which posts to amplify. Publishers, brands and individual users soon learned how to craft posts and headlines designed to induce likes and clicks, giving rise to what came to be known as “clickbait.” By 2013, upstart publishers such as Upworthy and ViralNova were amassing tens of millions of readers with articles designed specifically to game Facebook's news feed algorithm.<br><br>Facebook realized that users were growing wary of misleading teaser headlines, and the company recalibrated its algorithm in 2014 and 2015 to downgrade clickbait and focus on new metrics, such as the amount of time a user spent reading a story or watching a video, and incorporating surveys on what content users found most valuable. Around the same time, its executives identified video as a business priority, and used the algorithm to boost “native” videos shared directly to Facebook. By the mid-2010s, the news feed had tilted toward slick, professionally produced content, especially videos that would hold people's attention.<br><br>In 2016, however, Facebook executives grew worried about a decline in “original sharing.” Users were spending so much time passively watching and reading that they weren't interacting with each other as much. Young people in particular shifted their personal conversations to rivals such as Snapchat that offered more intimacy.<br><br>Once again, Facebook found its answer in the algorithm: It developed a new set of goal metrics that it called “meaningful social interactions,” designed to show users more posts from friends and family, and fewer from big publishers and brands. In particular, the algorithm began to give outsize weight to posts that sparked lots of comments and replies.<br><br>The downside of this approach was that the posts that sparked the most comments tended to be the ones that made people angry or offended them, the documents show. Facebook became an angrier, more polarizing place. It didn't help that, starting in 2017, the algorithm had assigned reaction emoji — including the angry emoji — five times the weight of a simple “like,” according to company documents.<br><br>[...]<br><br>Internal documents show Facebook researchers found that, for the most politically oriented 1 million American users, nearly 90 percent of the content that Facebook shows them is about politics and social issues. Those groups also received the most misinformation, especially a set of users associated with mostly right-leaning content, who were shown one misinformation post out of every 40, according to a document from June 2020.<br><br>One takeaway is that Facebook's algorithm isn't a runaway train. The company may not directly control what any given user posts, but by choosing which types of posts will be seen, it sculpts the information landscape according to its business priorities. Some within the company would like to see Facebook use the algorithm to explicitly promote certain values, such as democracy and civil discourse. Others have suggested that it develop and prioritize new metrics that align with users' values, as with a 2020 experiment in which the algorithm was trained to predict what posts they would find “good for the world” and “bad for the world,” and optimize for the former.</i> |
DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW
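Note: the ranking mechanic the report describes, engagement-weighted scoring in which reaction emoji counted for five times a plain like from 2017, can be illustrated with a minimal sketch. Only the 5x reaction multiplier comes from the article; the other weights and the sample posts are hypothetical.

```python
# Minimal illustration of engagement-weighted feed ranking.
# Only the 5x weight on reaction emoji (vs. a plain like) is taken from the
# Washington Post report; all other weights and both sample posts are invented.
WEIGHTS = {"likes": 1, "reactions": 5, "comments": 2, "shares": 3}

def engagement_score(post):
    """Sum each engagement signal multiplied by its weight."""
    return sum(WEIGHTS[k] * post.get(k, 0) for k in WEIGHTS)

posts = [
    {"id": "calm",  "likes": 120, "reactions": 2,  "comments": 5,  "shares": 1},
    {"id": "angry", "likes": 40,  "reactions": 30, "comments": 25, "shares": 4},
]

ranked = sorted(posts, key=engagement_score, reverse=True)
# The post provoking reactions and comment threads outranks the one with
# three times as many plain likes - the dynamic the internal documents describe.
print([p["id"] for p in ranked])  # → ['angry', 'calm']
```

The design point is that the ranking function never inspects content at all; shifting a single weight is enough to change which category of post dominates the feed.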


@@ -0,0 +1,33 @@
# Incident I00116: Blue-tick scammers target consumers who complain on X
* **Summary:** <i>Consumers who complain of poor customer service on X are being targeted by scammers after the social media platform formerly known as Twitter changed its account verification process.</i>
* **Incident type:**
* **Year started:**
* **Countries:** ,
* **Found via:**
* **Date added:**
| Reference | Pub Date | Authors | Org | Archive |
| --------- | -------- | ------- | --- | ------- |
| Technique | Description given for this incident |
| --------- | ------------------------- |
| [T0097.205 Business Persona](../../generated_pages/techniques/T0097.205.md) |  IT00000476 <i>Consumers who complain of poor customer service on X are being targeted by scammers after the social media platform formerly known as Twitter changed its account verification process.<br><br>Bank customers and airline passengers are among those at risk of phishing scams when they complain to companies via X. Fraudsters, masquerading as customer service agents, respond under fake X handles and trick victims into disclosing their bank details to get a promised refund.<br><br>They typically win the trust of victims by displaying the blue checkmark icon, which until this year denoted accounts that had been officially verified by X.<br><br>Changes introduced this year allow the icon to be bought by anyone who pays an £11 monthly fee for the site's subscription service, renamed this month from Twitter Blue to X Premium. Businesses that pay £950 a month receive a gold tick. X's terms and conditions do not state whether subscriber accounts are pre-vetted.<br><br>Andrew Thomas was contacted by a scam account after posting a complaint to the travel platform Booking.com. “I'd been trying since April to get a refund after our holiday flights were cancelled and finally resorted to X,” he said.<br><br>“I received a response asking me to follow them, and DM [direct message] them with a contact number. They then called me via WhatsApp asking for my reference number so they could investigate. Later they called back to say that I would be refunded via their payment partner for which I'd need to download an app.”<br><br>Thomas became suspicious and checked the X profile. “It looked like the real thing, but I noticed that there was an unexpected hyphen in the Twitter handle and that it had only joined X in July 2023,” he said.</i><br><br>In this example a newly created paid account on X was used to direct users to other platforms (T0146.002: Paid Account, T0146.003: Verified Account, T0146.005: Lookalike Account ID, T0097.205: Business Persona, T0122: Direct Users to Alternative Platforms, T0143.003: Impersonated Persona, T0151.008: Microblogging Platform, T0150.001: Newly Created). |
| [T0122 Direct Users to Alternative Platforms](../../generated_pages/techniques/T0122.md) |  IT00000477 <i>Consumers who complain of poor customer service on X are being targeted by scammers after the social media platform formerly known as Twitter changed its account verification process.<br><br>Bank customers and airline passengers are among those at risk of phishing scams when they complain to companies via X. Fraudsters, masquerading as customer service agents, respond under fake X handles and trick victims into disclosing their bank details to get a promised refund.<br><br>They typically win the trust of victims by displaying the blue checkmark icon, which until this year denoted accounts that had been officially verified by X.<br><br>Changes introduced this year allow the icon to be bought by anyone who pays an £11 monthly fee for the site's subscription service, renamed this month from Twitter Blue to X Premium. Businesses that pay £950 a month receive a gold tick. X's terms and conditions do not state whether subscriber accounts are pre-vetted.<br><br>Andrew Thomas was contacted by a scam account after posting a complaint to the travel platform Booking.com. “I'd been trying since April to get a refund after our holiday flights were cancelled and finally resorted to X,” he said.<br><br>“I received a response asking me to follow them, and DM [direct message] them with a contact number. They then called me via WhatsApp asking for my reference number so they could investigate. Later they called back to say that I would be refunded via their payment partner for which I'd need to download an app.”<br><br>Thomas became suspicious and checked the X profile. “It looked like the real thing, but I noticed that there was an unexpected hyphen in the Twitter handle and that it had only joined X in July 2023,” he said.</i><br><br>In this example a newly created paid account on X was used to direct users to other platforms (T0146.002: Paid Account, T0146.003: Verified Account, T0146.005: Lookalike Account ID, T0097.205: Business Persona, T0122: Direct Users to Alternative Platforms, T0143.003: Impersonated Persona, T0151.008: Microblogging Platform, T0150.001: Newly Created). |
| [T0143.003 Impersonated Persona](../../generated_pages/techniques/T0143.003.md) |  IT00000478 <i>Consumers who complain of poor customer service on X are being targeted by scammers after the social media platform formerly known as Twitter changed its account verification process.<br><br>Bank customers and airline passengers are among those at risk of phishing scams when they complain to companies via X. Fraudsters, masquerading as customer service agents, respond under fake X handles and trick victims into disclosing their bank details to get a promised refund.<br><br>They typically win the trust of victims by displaying the blue checkmark icon, which until this year denoted accounts that had been officially verified by X.<br><br>Changes introduced this year allow the icon to be bought by anyone who pays an £11 monthly fee for the site's subscription service, renamed this month from Twitter Blue to X Premium. Businesses that pay £950 a month receive a gold tick. X's terms and conditions do not state whether subscriber accounts are pre-vetted.<br><br>Andrew Thomas was contacted by a scam account after posting a complaint to the travel platform Booking.com. “I'd been trying since April to get a refund after our holiday flights were cancelled and finally resorted to X,” he said.<br><br>“I received a response asking me to follow them, and DM [direct message] them with a contact number. They then called me via WhatsApp asking for my reference number so they could investigate. Later they called back to say that I would be refunded via their payment partner for which I'd need to download an app.”<br><br>Thomas became suspicious and checked the X profile. “It looked like the real thing, but I noticed that there was an unexpected hyphen in the Twitter handle and that it had only joined X in July 2023,” he said.</i><br><br>In this example a newly created paid account on X was used to direct users to other platforms (T0146.002: Paid Account, T0146.003: Verified Account, T0146.005: Lookalike Account ID, T0097.205: Business Persona, T0122: Direct Users to Alternative Platforms, T0143.003: Impersonated Persona, T0151.008: Microblogging Platform, T0150.001: Newly Created). |
| [T0146.002 Paid Account](../../generated_pages/techniques/T0146.002.md) |  IT00000473 <i>Consumers who complain of poor customer service on X are being targeted by scammers after the social media platform formerly known as Twitter changed its account verification process.<br><br>Bank customers and airline passengers are among those at risk of phishing scams when they complain to companies via X. Fraudsters, masquerading as customer service agents, respond under fake X handles and trick victims into disclosing their bank details to get a promised refund.<br><br>They typically win the trust of victims by displaying the blue checkmark icon, which until this year denoted accounts that had been officially verified by X.<br><br>Changes introduced this year allow the icon to be bought by anyone who pays an £11 monthly fee for the site's subscription service, renamed this month from Twitter Blue to X Premium. Businesses that pay £950 a month receive a gold tick. X's terms and conditions do not state whether subscriber accounts are pre-vetted.<br><br>Andrew Thomas was contacted by a scam account after posting a complaint to the travel platform Booking.com. “I'd been trying since April to get a refund after our holiday flights were cancelled and finally resorted to X,” he said.<br><br>“I received a response asking me to follow them, and DM [direct message] them with a contact number. They then called me via WhatsApp asking for my reference number so they could investigate. Later they called back to say that I would be refunded via their payment partner for which I'd need to download an app.”<br><br>Thomas became suspicious and checked the X profile. “It looked like the real thing, but I noticed that there was an unexpected hyphen in the Twitter handle and that it had only joined X in July 2023,” he said.</i><br><br>In this example a newly created paid account on X was used to direct users to other platforms (T0146.002: Paid Account, T0146.003: Verified Account, T0146.005: Lookalike Account ID, T0097.205: Business Persona, T0122: Direct Users to Alternative Platforms, T0143.003: Impersonated Persona, T0151.008: Microblogging Platform, T0150.001: Newly Created). |
| [T0146.003 Verified Account](../../generated_pages/techniques/T0146.003.md) |  IT00000474 <i>Consumers who complain of poor customer service on X are being targeted by scammers after the social media platform formerly known as Twitter changed its account verification process.<br><br>Bank customers and airline passengers are among those at risk of phishing scams when they complain to companies via X. Fraudsters, masquerading as customer service agents, respond under fake X handles and trick victims into disclosing their bank details to get a promised refund.<br><br>They typically win the trust of victims by displaying the blue checkmark icon, which until this year denoted accounts that had been officially verified by X.<br><br>Changes introduced this year allow the icon to be bought by anyone who pays an £11 monthly fee for the site's subscription service, renamed this month from Twitter Blue to X Premium. Businesses that pay £950 a month receive a gold tick. X's terms and conditions do not state whether subscriber accounts are pre-vetted.<br><br>Andrew Thomas was contacted by a scam account after posting a complaint to the travel platform Booking.com. “I'd been trying since April to get a refund after our holiday flights were cancelled and finally resorted to X,” he said.<br><br>“I received a response asking me to follow them, and DM [direct message] them with a contact number. They then called me via WhatsApp asking for my reference number so they could investigate. Later they called back to say that I would be refunded via their payment partner for which I'd need to download an app.”<br><br>Thomas became suspicious and checked the X profile. “It looked like the real thing, but I noticed that there was an unexpected hyphen in the Twitter handle and that it had only joined X in July 2023,” he said.</i><br><br>In this example a newly created paid account on X was used to direct users to other platforms (T0146.002: Paid Account, T0146.003: Verified Account, T0146.005: Lookalike Account ID, T0097.205: Business Persona, T0122: Direct Users to Alternative Platforms, T0143.003: Impersonated Persona, T0151.008: Microblogging Platform, T0150.001: Newly Created). |
| [T0146.005 Lookalike Account ID](../../generated_pages/techniques/T0146.005.md) |  IT00000475 <i>Consumers who complain of poor customer service on X are being targeted by scammers after the social media platform formerly known as Twitter changed its account verification process.<br><br>Bank customers and airline passengers are among those at risk of phishing scams when they complain to companies via X. Fraudsters, masquerading as customer service agents, respond under fake X handles and trick victims into disclosing their bank details to get a promised refund.<br><br>They typically win the trust of victims by displaying the blue checkmark icon, which until this year denoted accounts that had been officially verified by X.<br><br>Changes introduced this year allow the icon to be bought by anyone who pays an £11 monthly fee for the site's subscription service, renamed this month from Twitter Blue to X Premium. Businesses that pay £950 a month receive a gold tick. X's terms and conditions do not state whether subscriber accounts are pre-vetted.<br><br>Andrew Thomas was contacted by a scam account after posting a complaint to the travel platform Booking.com. “I'd been trying since April to get a refund after our holiday flights were cancelled and finally resorted to X,” he said.<br><br>“I received a response asking me to follow them, and DM [direct message] them with a contact number. They then called me via WhatsApp asking for my reference number so they could investigate. Later they called back to say that I would be refunded via their payment partner for which I'd need to download an app.”<br><br>Thomas became suspicious and checked the X profile. “It looked like the real thing, but I noticed that there was an unexpected hyphen in the Twitter handle and that it had only joined X in July 2023,” he said.</i><br><br>In this example a newly created paid account on X was used to direct users to other platforms (T0146.002: Paid Account, T0146.003: Verified Account, T0146.005: Lookalike Account ID, T0097.205: Business Persona, T0122: Direct Users to Alternative Platforms, T0143.003: Impersonated Persona, T0151.008: Microblogging Platform, T0150.001: Newly Created). |
| [T0150.001 Newly Created](../../generated_pages/techniques/T0150.001.md) |  IT00000480 <i>Consumers who complain of poor customer service on X are being targeted by scammers after the social media platform formerly known as Twitter changed its account verification process.<br><br>Bank customers and airline passengers are among those at risk of phishing scams when they complain to companies via X. Fraudsters, masquerading as customer service agents, respond under fake X handles and trick victims into disclosing their bank details to get a promised refund.<br><br>They typically win the trust of victims by displaying the blue checkmark icon, which until this year denoted accounts that had been officially verified by X.<br><br>Changes introduced this year allow the icon to be bought by anyone who pays an £11 monthly fee for the site's subscription service, renamed this month from Twitter Blue to X Premium. Businesses that pay £950 a month receive a gold tick. X's terms and conditions do not state whether subscriber accounts are pre-vetted.<br><br>Andrew Thomas was contacted by a scam account after posting a complaint to the travel platform Booking.com. “I'd been trying since April to get a refund after our holiday flights were cancelled and finally resorted to X,” he said.<br><br>“I received a response asking me to follow them, and DM [direct message] them with a contact number. They then called me via WhatsApp asking for my reference number so they could investigate. Later they called back to say that I would be refunded via their payment partner for which I'd need to download an app.”<br><br>Thomas became suspicious and checked the X profile. “It looked like the real thing, but I noticed that there was an unexpected hyphen in the Twitter handle and that it had only joined X in July 2023,” he said.</i><br><br>In this example a newly created paid account on X was used to direct users to other platforms (T0146.002: Paid Account, T0146.003: Verified Account, T0146.005: Lookalike Account ID, T0097.205: Business Persona, T0122: Direct Users to Alternative Platforms, T0143.003: Impersonated Persona, T0151.008: Microblogging Platform, T0150.001: Newly Created). |
| [T0151.008 Microblogging Platform](../../generated_pages/techniques/T0151.008.md) |  IT00000479 <i>Consumers who complain of poor customer service on X are being targeted by scammers after the social media platform formerly known as Twitter changed its account verification process.<br><br>Bank customers and airline passengers are among those at risk of phishing scams when they complain to companies via X. Fraudsters, masquerading as customer service agents, respond under fake X handles and trick victims into disclosing their bank details to get a promised refund.<br><br>They typically win the trust of victims by displaying the blue checkmark icon, which until this year denoted accounts that had been officially verified by X.<br><br>Changes introduced this year allow the icon to be bought by anyone who pays an £11 monthly fee for the site's subscription service, renamed this month from Twitter Blue to X Premium. Businesses that pay £950 a month receive a gold tick. X's terms and conditions do not state whether subscriber accounts are pre-vetted.<br><br>Andrew Thomas was contacted by a scam account after posting a complaint to the travel platform Booking.com. “I'd been trying since April to get a refund after our holiday flights were cancelled and finally resorted to X,” he said.<br><br>“I received a response asking me to follow them, and DM [direct message] them with a contact number. They then called me via WhatsApp asking for my reference number so they could investigate. Later they called back to say that I would be refunded via their payment partner for which I'd need to download an app.”<br><br>Thomas became suspicious and checked the X profile. “It looked like the real thing, but I noticed that there was an unexpected hyphen in the Twitter handle and that it had only joined X in July 2023,” he said.</i><br><br>In this example a newly created paid account on X was used to direct users to other platforms (T0146.002: Paid Account, T0146.003: Verified Account, T0146.005: Lookalike Account ID, T0097.205: Business Persona, T0122: Direct Users to Alternative Platforms, T0143.003: Impersonated Persona, T0151.008: Microblogging Platform, T0150.001: Newly Created). |
DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW
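Note: the two tells Thomas spotted, a hyphenated near-variant of the real handle and a very recent join date, generalise to a simple screening heuristic. A minimal sketch, assuming hypothetical handles and an arbitrary 180-day age threshold (neither drawn from X's actual systems):

```python
from datetime import date

def looks_like_impersonation(handle, created, official_handle,
                             min_age_days=180, today=date(2023, 8, 15)):
    """Flag an account whose handle is a punctuation-variant of an official
    handle and whose account is newly created. Heuristic sketch only."""
    strip = lambda h: h.lower().replace("-", "").replace("_", "").replace(".", "")
    # Same letters as the official handle once separators are removed,
    # but not literally the official handle itself:
    near_variant = strip(handle) == strip(official_handle) \
        and handle.lower() != official_handle.lower()
    too_new = (today - created).days < min_age_days
    return near_variant and too_new

# Hypothetical lookalike of a real support handle, joined July 2023:
print(looks_like_impersonation("booking-com", date(2023, 7, 1), "bookingcom"))
```

Real screening would also weigh paid-verification status and follower history; this captures only the two signals quoted in the incident.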


@@ -0,0 +1,25 @@
# Incident I00117: The El Paso Shooting and the Gamification of Terror
* **Summary:** <i>On August 3, 2019, at around 11am local time, initial police reports indicated that a gunman had walked into an El Paso Wal-Mart and opened fire. As of the publication of this article, at least eighteen people have died and several others have been injured. One victim was a four-month old infant. As we've seen with two other mass shootings this year, the killer announced the start of his rampage on 8chan's /pol/ board.</i>
* **Incident type:**
* **Year started:**
* **Countries:** ,
* **Found via:**
* **Date added:**
| Reference | Pub Date | Authors | Org | Archive |
| --------- | -------- | ------- | --- | ------- |
| Technique | Description given for this incident |
| --------- | ------------------------- |
DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW


@@ -0,0 +1,31 @@
# Incident I00118: War Thunder players are once again leaking sensitive military technology information on a video game forum
* **Summary:** <i>In an effort to prove that the developers behind a popular multiplayer vehicle combat game had made a mistake, a player went ahead and published classified British military documents about one of the real-life tanks featured in the game.<br><br>This truly bizarre turn of events recently occurred in the public forum for War Thunder, a free-to-play multiplayer combat sim featuring modern land, air, and sea craft. Getting a small detail wrong on a piece of equipment might not be a big deal for the average gamer, but for the War Thunder crowd it sure as hell is. With 25,000 devoted players, the game very much bills itself as the military vehicle combat simulator.<br><br>A player, who identified himself as a British tank commander, claimed that the game's developers at Gaijin Entertainment had inaccurately represented the Challenger 2 main battle tank used by the British military.</i>
* **Incident type:**
* **Year started:**
* **Countries:** ,
* **Found via:**
* **Date added:**
| Reference | Pub Date | Authors | Org | Archive |
| --------- | -------- | ------- | --- | ------- |
| Technique | Description given for this incident |
| --------- | ------------------------- |
| [T0089.001 Obtain Authentic Documents](../../generated_pages/techniques/T0089.001.md) | IT00000481 <i>In an effort to prove that the developers behind a popular multiplayer vehicle combat game had made a mistake, a player went ahead and published classified British military documents about one of the real-life tanks featured in the game.<br><br>This truly bizarre turn of events recently occurred in the public forum for War Thunder, a free-to-play multiplayer combat sim featuring modern land, air, and sea craft. Getting a small detail wrong on a piece of equipment might not be a big deal for the average gamer, but for the War Thunder crowd it sure as hell is. With 25,000 devoted players, the game very much bills itself as the military vehicle combat simulator.<br><br>A player, who identified himself as a British tank commander, claimed that the game’s developers at Gaijin Entertainment had inaccurately represented the Challenger 2 main battle tank used by the British military.<br><br>The self-described tank commander’s bio listed his location as Tidworth Camp in Wiltshire, England, according to the UK Defense Journal, which reported that the base is home to the Royal Tank Regiment, which fields Challenger 2 tanks.<br><br>The player, who went by the handle Pyrophoric, reportedly shared an image on the War Thunder forum of the tank’s specs that were pulled from the Challenger 2’s Army Equipment Support Publication, which is essentially a technical manual.<br><br>[...]<br><br>A moderator for the forum, whose handle is “Templar_”, explained that the developer had removed the material after they received confirmation from the Ministry of Defense that the document is still in fact classified.</i><br><br>A user of War Thunder’s forums posted confidential documents to win an argument (T0089.001: Obtain Authentic Documents, T0146: Account, T0097.105: Military Personnel Persona, T0115: Post Content, T0143.001: Authentic Persona, T0151.009: Legacy Online Forum Platform). |
| [T0097.105 Military Personnel Persona](../../generated_pages/techniques/T0097.105.md) | IT00000483 <i>In an effort to prove that the developers behind a popular multiplayer vehicle combat game had made a mistake, a player went ahead and published classified British military documents about one of the real-life tanks featured in the game.<br><br>This truly bizarre turn of events recently occurred in the public forum for War Thunder, a free-to-play multiplayer combat sim featuring modern land, air, and sea craft. Getting a small detail wrong on a piece of equipment might not be a big deal for the average gamer, but for the War Thunder crowd it sure as hell is. With 25,000 devoted players, the game very much bills itself as the military vehicle combat simulator.<br><br>A player, who identified himself as a British tank commander, claimed that the game’s developers at Gaijin Entertainment had inaccurately represented the Challenger 2 main battle tank used by the British military.<br><br>The self-described tank commander’s bio listed his location as Tidworth Camp in Wiltshire, England, according to the UK Defense Journal, which reported that the base is home to the Royal Tank Regiment, which fields Challenger 2 tanks.<br><br>The player, who went by the handle Pyrophoric, reportedly shared an image on the War Thunder forum of the tank’s specs that were pulled from the Challenger 2’s Army Equipment Support Publication, which is essentially a technical manual.<br><br>[...]<br><br>A moderator for the forum, whose handle is “Templar_”, explained that the developer had removed the material after they received confirmation from the Ministry of Defense that the document is still in fact classified.</i><br><br>A user of War Thunder’s forums posted confidential documents to win an argument (T0089.001: Obtain Authentic Documents, T0146: Account, T0097.105: Military Personnel Persona, T0115: Post Content, T0143.001: Authentic Persona, T0151.009: Legacy Online Forum Platform). |
| [T0115 Post Content](../../generated_pages/techniques/T0115.md) | IT00000484 <i>In an effort to prove that the developers behind a popular multiplayer vehicle combat game had made a mistake, a player went ahead and published classified British military documents about one of the real-life tanks featured in the game.<br><br>This truly bizarre turn of events recently occurred in the public forum for War Thunder, a free-to-play multiplayer combat sim featuring modern land, air, and sea craft. Getting a small detail wrong on a piece of equipment might not be a big deal for the average gamer, but for the War Thunder crowd it sure as hell is. With 25,000 devoted players, the game very much bills itself as the military vehicle combat simulator.<br><br>A player, who identified himself as a British tank commander, claimed that the game’s developers at Gaijin Entertainment had inaccurately represented the Challenger 2 main battle tank used by the British military.<br><br>The self-described tank commander’s bio listed his location as Tidworth Camp in Wiltshire, England, according to the UK Defense Journal, which reported that the base is home to the Royal Tank Regiment, which fields Challenger 2 tanks.<br><br>The player, who went by the handle Pyrophoric, reportedly shared an image on the War Thunder forum of the tank’s specs that were pulled from the Challenger 2’s Army Equipment Support Publication, which is essentially a technical manual.<br><br>[...]<br><br>A moderator for the forum, whose handle is “Templar_”, explained that the developer had removed the material after they received confirmation from the Ministry of Defense that the document is still in fact classified.</i><br><br>A user of War Thunder’s forums posted confidential documents to win an argument (T0089.001: Obtain Authentic Documents, T0146: Account, T0097.105: Military Personnel Persona, T0115: Post Content, T0143.001: Authentic Persona, T0151.009: Legacy Online Forum Platform). |
| [T0143.001 Authentic Persona](../../generated_pages/techniques/T0143.001.md) | IT00000485 <i>In an effort to prove that the developers behind a popular multiplayer vehicle combat game had made a mistake, a player went ahead and published classified British military documents about one of the real-life tanks featured in the game.<br><br>This truly bizarre turn of events recently occurred in the public forum for War Thunder, a free-to-play multiplayer combat sim featuring modern land, air, and sea craft. Getting a small detail wrong on a piece of equipment might not be a big deal for the average gamer, but for the War Thunder crowd it sure as hell is. With 25,000 devoted players, the game very much bills itself as the military vehicle combat simulator.<br><br>A player, who identified himself as a British tank commander, claimed that the game’s developers at Gaijin Entertainment had inaccurately represented the Challenger 2 main battle tank used by the British military.<br><br>The self-described tank commander’s bio listed his location as Tidworth Camp in Wiltshire, England, according to the UK Defense Journal, which reported that the base is home to the Royal Tank Regiment, which fields Challenger 2 tanks.<br><br>The player, who went by the handle Pyrophoric, reportedly shared an image on the War Thunder forum of the tank’s specs that were pulled from the Challenger 2’s Army Equipment Support Publication, which is essentially a technical manual.<br><br>[...]<br><br>A moderator for the forum, whose handle is “Templar_”, explained that the developer had removed the material after they received confirmation from the Ministry of Defense that the document is still in fact classified.</i><br><br>A user of War Thunder’s forums posted confidential documents to win an argument (T0089.001: Obtain Authentic Documents, T0146: Account, T0097.105: Military Personnel Persona, T0115: Post Content, T0143.001: Authentic Persona, T0151.009: Legacy Online Forum Platform). |
| [T0146 Account](../../generated_pages/techniques/T0146.md) | IT00000482 <i>In an effort to prove that the developers behind a popular multiplayer vehicle combat game had made a mistake, a player went ahead and published classified British military documents about one of the real-life tanks featured in the game.<br><br>This truly bizarre turn of events recently occurred in the public forum for War Thunder, a free-to-play multiplayer combat sim featuring modern land, air, and sea craft. Getting a small detail wrong on a piece of equipment might not be a big deal for the average gamer, but for the War Thunder crowd it sure as hell is. With 25,000 devoted players, the game very much bills itself as the military vehicle combat simulator.<br><br>A player, who identified himself as a British tank commander, claimed that the game’s developers at Gaijin Entertainment had inaccurately represented the Challenger 2 main battle tank used by the British military.<br><br>The self-described tank commander’s bio listed his location as Tidworth Camp in Wiltshire, England, according to the UK Defense Journal, which reported that the base is home to the Royal Tank Regiment, which fields Challenger 2 tanks.<br><br>The player, who went by the handle Pyrophoric, reportedly shared an image on the War Thunder forum of the tank’s specs that were pulled from the Challenger 2’s Army Equipment Support Publication, which is essentially a technical manual.<br><br>[...]<br><br>A moderator for the forum, whose handle is “Templar_”, explained that the developer had removed the material after they received confirmation from the Ministry of Defense that the document is still in fact classified.</i><br><br>A user of War Thunder’s forums posted confidential documents to win an argument (T0089.001: Obtain Authentic Documents, T0146: Account, T0097.105: Military Personnel Persona, T0115: Post Content, T0143.001: Authentic Persona, T0151.009: Legacy Online Forum Platform). |
| [T0151.009 Legacy Online Forum Platform](../../generated_pages/techniques/T0151.009.md) | IT00000486 <i>In an effort to prove that the developers behind a popular multiplayer vehicle combat game had made a mistake, a player went ahead and published classified British military documents about one of the real-life tanks featured in the game.<br><br>This truly bizarre turn of events recently occurred in the public forum for War Thunder, a free-to-play multiplayer combat sim featuring modern land, air, and sea craft. Getting a small detail wrong on a piece of equipment might not be a big deal for the average gamer, but for the War Thunder crowd it sure as hell is. With 25,000 devoted players, the game very much bills itself as the military vehicle combat simulator.<br><br>A player, who identified himself as a British tank commander, claimed that the game’s developers at Gaijin Entertainment had inaccurately represented the Challenger 2 main battle tank used by the British military.<br><br>The self-described tank commander’s bio listed his location as Tidworth Camp in Wiltshire, England, according to the UK Defense Journal, which reported that the base is home to the Royal Tank Regiment, which fields Challenger 2 tanks.<br><br>The player, who went by the handle Pyrophoric, reportedly shared an image on the War Thunder forum of the tank’s specs that were pulled from the Challenger 2’s Army Equipment Support Publication, which is essentially a technical manual.<br><br>[...]<br><br>A moderator for the forum, whose handle is “Templar_”, explained that the developer had removed the material after they received confirmation from the Ministry of Defense that the document is still in fact classified.</i><br><br>A user of War Thunder’s forums posted confidential documents to win an argument (T0089.001: Obtain Authentic Documents, T0146: Account, T0097.105: Military Personnel Persona, T0115: Post Content, T0143.001: Authentic Persona, T0151.009: Legacy Online Forum Platform). |
DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW


@@ -0,0 +1,33 @@
# Incident I00119: Independent journalist publishes Trump campaign document hacked by Iran despite election interference concerns
* **Summary:** <i>An American journalist who runs an independent newsletter published a document [on 26 Sep 2024] that appears to have been stolen from Donald Trump’s presidential campaign — the first public posting of a file that is believed to be part of a dossier that federal officials say is part of an Iranian effort to manipulate the [2024] U.S. election.</i>
* **Incident Type:**
* **Year started:**
* **Countries:**
* **Found via:**
* **Date added:**
| Reference | Pub Date | Authors | Org | Archive |
| --------- | -------- | ------- | --- | ------- |
| Technique | Description given for this incident |
| --------- | ------------------------- |
| [T0089 Obtain Private Documents](../../generated_pages/techniques/T0089.md) | IT00000489 <i>An American journalist who runs an independent newsletter published a document [on 26 Sep 2024] that appears to have been stolen from Donald Trump’s presidential campaign — the first public posting of a file that is believed to be part of a dossier that federal officials say is part of an Iranian effort to manipulate the [2024] U.S. election.<br><br>The PDF document is a 271-page opposition research file on former President Donald Trump’s running mate, Sen. JD Vance, R-Ohio.<br><br>For more than two months, hackers who the U.S. says are tied to Iran have tried to persuade the American media to cover files they stole. No outlets took the bait.<br><br>But on Thursday, reporter Ken Klippenstein, who self-publishes on Substack after he left The Intercept this year, published one of the files.<br><br>[...]<br><br>Reporters who have received the documents describe the same pattern: An AOL account emails them files, signed by a person using the name “Robert,” who is reluctant to speak to their identity or reasons for wanting the documents to receive coverage.<br><br>NBC News was not part of the Robert persona’s direct outreach, but it has viewed its correspondence with a reporter at another publication.<br><br>One of the emails from the Robert persona previously viewed by NBC News included three large PDF files, each corresponding to Trump’s three reported finalists for vice president. The Vance file appears to be the one Klippenstein hosts on his site.</i><br><br>In this example hackers attributed to Iran used the Robert persona to email journalists hacked documents (T0146: Account, T0097.100: Individual Persona, T0153.001: Email Platform).<br><br>The journalist Ken Klippenstein used his existing blog on Substack to host a link to download the document (T0089: Obtain Private Documents, T0097.102: Journalist Persona, T0115: Post Content, T0143.001: Authentic Persona, T0152.001: Blogging Platform, T0152.002: Blog, T0150.003: Pre-Existing). |
| [T0097.100 Individual Persona](../../generated_pages/techniques/T0097.100.md) | IT00000488 <i>An American journalist who runs an independent newsletter published a document [on 26 Sep 2024] that appears to have been stolen from Donald Trump’s presidential campaign — the first public posting of a file that is believed to be part of a dossier that federal officials say is part of an Iranian effort to manipulate the [2024] U.S. election.<br><br>The PDF document is a 271-page opposition research file on former President Donald Trump’s running mate, Sen. JD Vance, R-Ohio.<br><br>For more than two months, hackers who the U.S. says are tied to Iran have tried to persuade the American media to cover files they stole. No outlets took the bait.<br><br>But on Thursday, reporter Ken Klippenstein, who self-publishes on Substack after he left The Intercept this year, published one of the files.<br><br>[...]<br><br>Reporters who have received the documents describe the same pattern: An AOL account emails them files, signed by a person using the name “Robert,” who is reluctant to speak to their identity or reasons for wanting the documents to receive coverage.<br><br>NBC News was not part of the Robert persona’s direct outreach, but it has viewed its correspondence with a reporter at another publication.<br><br>One of the emails from the Robert persona previously viewed by NBC News included three large PDF files, each corresponding to Trump’s three reported finalists for vice president. The Vance file appears to be the one Klippenstein hosts on his site.</i><br><br>In this example hackers attributed to Iran used the Robert persona to email journalists hacked documents (T0146: Account, T0097.100: Individual Persona, T0153.001: Email Platform).<br><br>The journalist Ken Klippenstein used his existing blog on Substack to host a link to download the document (T0089: Obtain Private Documents, T0097.102: Journalist Persona, T0115: Post Content, T0143.001: Authentic Persona, T0152.001: Blogging Platform, T0152.002: Blog, T0150.003: Pre-Existing). |
| [T0097.102 Journalist Persona](../../generated_pages/techniques/T0097.102.md) | IT00000490 <i>An American journalist who runs an independent newsletter published a document [on 26 Sep 2024] that appears to have been stolen from Donald Trump’s presidential campaign — the first public posting of a file that is believed to be part of a dossier that federal officials say is part of an Iranian effort to manipulate the [2024] U.S. election.<br><br>The PDF document is a 271-page opposition research file on former President Donald Trump’s running mate, Sen. JD Vance, R-Ohio.<br><br>For more than two months, hackers who the U.S. says are tied to Iran have tried to persuade the American media to cover files they stole. No outlets took the bait.<br><br>But on Thursday, reporter Ken Klippenstein, who self-publishes on Substack after he left The Intercept this year, published one of the files.<br><br>[...]<br><br>Reporters who have received the documents describe the same pattern: An AOL account emails them files, signed by a person using the name “Robert,” who is reluctant to speak to their identity or reasons for wanting the documents to receive coverage.<br><br>NBC News was not part of the Robert persona’s direct outreach, but it has viewed its correspondence with a reporter at another publication.<br><br>One of the emails from the Robert persona previously viewed by NBC News included three large PDF files, each corresponding to Trump’s three reported finalists for vice president. The Vance file appears to be the one Klippenstein hosts on his site.</i><br><br>In this example hackers attributed to Iran used the Robert persona to email journalists hacked documents (T0146: Account, T0097.100: Individual Persona, T0153.001: Email Platform).<br><br>The journalist Ken Klippenstein used his existing blog on Substack to host a link to download the document (T0089: Obtain Private Documents, T0097.102: Journalist Persona, T0115: Post Content, T0143.001: Authentic Persona, T0152.001: Blogging Platform, T0152.002: Blog, T0150.003: Pre-Existing). |
| [T0143.001 Authentic Persona](../../generated_pages/techniques/T0143.001.md) | IT00000491 <i>An American journalist who runs an independent newsletter published a document [on 26 Sep 2024] that appears to have been stolen from Donald Trump’s presidential campaign — the first public posting of a file that is believed to be part of a dossier that federal officials say is part of an Iranian effort to manipulate the [2024] U.S. election.<br><br>The PDF document is a 271-page opposition research file on former President Donald Trump’s running mate, Sen. JD Vance, R-Ohio.<br><br>For more than two months, hackers who the U.S. says are tied to Iran have tried to persuade the American media to cover files they stole. No outlets took the bait.<br><br>But on Thursday, reporter Ken Klippenstein, who self-publishes on Substack after he left The Intercept this year, published one of the files.<br><br>[...]<br><br>Reporters who have received the documents describe the same pattern: An AOL account emails them files, signed by a person using the name “Robert,” who is reluctant to speak to their identity or reasons for wanting the documents to receive coverage.<br><br>NBC News was not part of the Robert persona’s direct outreach, but it has viewed its correspondence with a reporter at another publication.<br><br>One of the emails from the Robert persona previously viewed by NBC News included three large PDF files, each corresponding to Trump’s three reported finalists for vice president. The Vance file appears to be the one Klippenstein hosts on his site.</i><br><br>In this example hackers attributed to Iran used the Robert persona to email journalists hacked documents (T0146: Account, T0097.100: Individual Persona, T0153.001: Email Platform).<br><br>The journalist Ken Klippenstein used his existing blog on Substack to host a link to download the document (T0089: Obtain Private Documents, T0097.102: Journalist Persona, T0115: Post Content, T0143.001: Authentic Persona, T0152.001: Blogging Platform, T0152.002: Blog, T0150.003: Pre-Existing). |
| [T0150.003 Pre-Existing](../../generated_pages/techniques/T0150.003.md) | IT00000494 <i>An American journalist who runs an independent newsletter published a document [on 26 Sep 2024] that appears to have been stolen from Donald Trump’s presidential campaign — the first public posting of a file that is believed to be part of a dossier that federal officials say is part of an Iranian effort to manipulate the [2024] U.S. election.<br><br>The PDF document is a 271-page opposition research file on former President Donald Trump’s running mate, Sen. JD Vance, R-Ohio.<br><br>For more than two months, hackers who the U.S. says are tied to Iran have tried to persuade the American media to cover files they stole. No outlets took the bait.<br><br>But on Thursday, reporter Ken Klippenstein, who self-publishes on Substack after he left The Intercept this year, published one of the files.<br><br>[...]<br><br>Reporters who have received the documents describe the same pattern: An AOL account emails them files, signed by a person using the name “Robert,” who is reluctant to speak to their identity or reasons for wanting the documents to receive coverage.<br><br>NBC News was not part of the Robert persona’s direct outreach, but it has viewed its correspondence with a reporter at another publication.<br><br>One of the emails from the Robert persona previously viewed by NBC News included three large PDF files, each corresponding to Trump’s three reported finalists for vice president. The Vance file appears to be the one Klippenstein hosts on his site.</i><br><br>In this example hackers attributed to Iran used the Robert persona to email journalists hacked documents (T0146: Account, T0097.100: Individual Persona, T0153.001: Email Platform).<br><br>The journalist Ken Klippenstein used his existing blog on Substack to host a link to download the document (T0089: Obtain Private Documents, T0097.102: Journalist Persona, T0115: Post Content, T0143.001: Authentic Persona, T0152.001: Blogging Platform, T0152.002: Blog, T0150.003: Pre-Existing). |
| [T0152.001 Blogging Platform](../../generated_pages/techniques/T0152.001.md) | IT00000492 <i>An American journalist who runs an independent newsletter published a document [on 26 Sep 2024] that appears to have been stolen from Donald Trump’s presidential campaign — the first public posting of a file that is believed to be part of a dossier that federal officials say is part of an Iranian effort to manipulate the [2024] U.S. election.<br><br>The PDF document is a 271-page opposition research file on former President Donald Trump’s running mate, Sen. JD Vance, R-Ohio.<br><br>For more than two months, hackers who the U.S. says are tied to Iran have tried to persuade the American media to cover files they stole. No outlets took the bait.<br><br>But on Thursday, reporter Ken Klippenstein, who self-publishes on Substack after he left The Intercept this year, published one of the files.<br><br>[...]<br><br>Reporters who have received the documents describe the same pattern: An AOL account emails them files, signed by a person using the name “Robert,” who is reluctant to speak to their identity or reasons for wanting the documents to receive coverage.<br><br>NBC News was not part of the Robert persona’s direct outreach, but it has viewed its correspondence with a reporter at another publication.<br><br>One of the emails from the Robert persona previously viewed by NBC News included three large PDF files, each corresponding to Trump’s three reported finalists for vice president. The Vance file appears to be the one Klippenstein hosts on his site.</i><br><br>In this example hackers attributed to Iran used the Robert persona to email journalists hacked documents (T0146: Account, T0097.100: Individual Persona, T0153.001: Email Platform).<br><br>The journalist Ken Klippenstein used his existing blog on Substack to host a link to download the document (T0089: Obtain Private Documents, T0097.102: Journalist Persona, T0115: Post Content, T0143.001: Authentic Persona, T0152.001: Blogging Platform, T0152.002: Blog, T0150.003: Pre-Existing). |
| [T0152.002 Blog](../../generated_pages/techniques/T0152.002.md) | IT00000493 <i>An American journalist who runs an independent newsletter published a document [on 26 Sep 2024] that appears to have been stolen from Donald Trump’s presidential campaign — the first public posting of a file that is believed to be part of a dossier that federal officials say is part of an Iranian effort to manipulate the [2024] U.S. election.<br><br>The PDF document is a 271-page opposition research file on former President Donald Trump’s running mate, Sen. JD Vance, R-Ohio.<br><br>For more than two months, hackers who the U.S. says are tied to Iran have tried to persuade the American media to cover files they stole. No outlets took the bait.<br><br>But on Thursday, reporter Ken Klippenstein, who self-publishes on Substack after he left The Intercept this year, published one of the files.<br><br>[...]<br><br>Reporters who have received the documents describe the same pattern: An AOL account emails them files, signed by a person using the name “Robert,” who is reluctant to speak to their identity or reasons for wanting the documents to receive coverage.<br><br>NBC News was not part of the Robert persona’s direct outreach, but it has viewed its correspondence with a reporter at another publication.<br><br>One of the emails from the Robert persona previously viewed by NBC News included three large PDF files, each corresponding to Trump’s three reported finalists for vice president. The Vance file appears to be the one Klippenstein hosts on his site.</i><br><br>In this example hackers attributed to Iran used the Robert persona to email journalists hacked documents (T0146: Account, T0097.100: Individual Persona, T0153.001: Email Platform).<br><br>The journalist Ken Klippenstein used his existing blog on Substack to host a link to download the document (T0089: Obtain Private Documents, T0097.102: Journalist Persona, T0115: Post Content, T0143.001: Authentic Persona, T0152.001: Blogging Platform, T0152.002: Blog, T0150.003: Pre-Existing). |
| [T0153.001 Email Platform](../../generated_pages/techniques/T0153.001.md) | IT00000487 <i>An American journalist who runs an independent newsletter published a document [on 26 Sep 2024] that appears to have been stolen from Donald Trump’s presidential campaign — the first public posting of a file that is believed to be part of a dossier that federal officials say is part of an Iranian effort to manipulate the [2024] U.S. election.<br><br>The PDF document is a 271-page opposition research file on former President Donald Trump’s running mate, Sen. JD Vance, R-Ohio.<br><br>For more than two months, hackers who the U.S. says are tied to Iran have tried to persuade the American media to cover files they stole. No outlets took the bait.<br><br>But on Thursday, reporter Ken Klippenstein, who self-publishes on Substack after he left The Intercept this year, published one of the files.<br><br>[...]<br><br>Reporters who have received the documents describe the same pattern: An AOL account emails them files, signed by a person using the name “Robert,” who is reluctant to speak to their identity or reasons for wanting the documents to receive coverage.<br><br>NBC News was not part of the Robert persona’s direct outreach, but it has viewed its correspondence with a reporter at another publication.<br><br>One of the emails from the Robert persona previously viewed by NBC News included three large PDF files, each corresponding to Trump’s three reported finalists for vice president. The Vance file appears to be the one Klippenstein hosts on his site.</i><br><br>In this example hackers attributed to Iran used the Robert persona to email journalists hacked documents (T0146: Account, T0097.100: Individual Persona, T0153.001: Email Platform).<br><br>The journalist Ken Klippenstein used his existing blog on Substack to host a link to download the document (T0089: Obtain Private Documents, T0097.102: Journalist Persona, T0115: Post Content, T0143.001: Authentic Persona, T0152.001: Blogging Platform, T0152.002: Blog, T0150.003: Pre-Existing). |
DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW


@@ -0,0 +1,30 @@
# Incident I00120: factcheckUK or fakecheckUK? Reinventing the political faction as the impartial factchecker
* **Summary:** <i>The evening of the 19th November 2019 [Ahead of the United Kingdom’s 2019 general election] saw the first of three Leaders’ Debates on ITV, starting at 8pm and lasting for an hour. Current Prime Minister and leader of the Conservatives, Boris Johnson faced off against Labour party leader, Jeremy Corbyn. Plenty of people will have been watching the debate live, but a good proportion were “watching” (er, “twitching”?) via Twitter. This is something I’ve done in the past for certain shows. In some cases I just can’t watch or listen, but I can read, and in other cases, the commentary is far more interesting and entertaining than the show itself will ever be. This, for me, is just such a case. But very quickly, all eyes turned upon a modestly sized account with the handle @CCHQPress. That’s short for Conservative Campaign Headquarters Press. According to their (current!) Twitter bio, they are based in Westminster and they provide “snippets of news and commentary from CCHQ” to their 75k followers.<br><br>That is, until a few minutes into the debate.<br><br>All at once, like a person throwing off their street clothes to reveal some sinister new identity underneath, @CCHQPress abruptly shed its name, blue Conservative logo, Boris Johnson banner, and bio description. Moments later, it had entirely reinvented itself.<br><br>The purple banner was emblazoned with white font that read “✓ factcheckUK [with a “FROM CCHQ” subheading]”.<br><br>The matching profile picture was a white tick in a purple circle. The bio was updated to: “Fact checking Labour from CCHQ”. And the name now read factcheckUK, with the customary Twitter blue (or white depending on your phone settings!) validation tick still after it.</i>
* **Incident Type:**
* **Year started:**
* **Countries:**
* **Found via:**
* **Date added:**
| Reference | Pub Date | Authors | Org | Archive |
| --------- | -------- | ------- | --- | ------- |
| Technique | Description given for this incident |
| --------- | ------------------------- |
| [T0097.203 Fact Checking Organisation Persona](../../generated_pages/techniques/T0097.203.md) | IT00000497 Ahead of the 2019 UK Election, during a leaders’ debate, the Conservative party rebranded their “Conservative Campaign Headquarters Press” account to “factcheckUK”:<br><br><i>The evening of the 19th November 2019 saw the first of three Leaders’ Debates on ITV, starting at 8pm and lasting for an hour. Current Prime Minister and leader of the Conservatives, Boris Johnson faced off against Labour party leader, Jeremy Corbyn. Plenty of people will have been watching the debate live, but a good proportion were “watching” (er, “twitching”?) via Twitter. This is something I’ve done in the past for certain shows. In some cases I just can’t watch or listen, but I can read, and in other cases, the commentary is far more interesting and entertaining than the show itself will ever be. This, for me, is just such a case. But very quickly, all eyes turned upon a modestly sized account with the handle @CCHQPress. That’s short for Conservative Campaign Headquarters Press. According to their (current!) Twitter bio, they are based in Westminster and they provide “snippets of news and commentary from CCHQ” to their 75k followers.<br><br>That is, until a few minutes into the debate.<br><br>All at once, like a person throwing off their street clothes to reveal some sinister new identity underneath, @CCHQPress abruptly shed its name, blue Conservative logo, Boris Johnson banner, and bio description. Moments later, it had entirely reinvented itself.<br><br>The purple banner was emblazoned with white font that read “✓ factcheckUK [with a “FROM CCHQ” subheading]”.<br><br>The matching profile picture was a white tick in a purple circle. The bio was updated to: “Fact checking Labour from CCHQ”. And the name now read factcheckUK, with the customary Twitter blue (or white depending on your phone settings!) validation tick still after it.</i><br><br>In this example an existing verified social media account on Twitter was repurposed to inauthentically present itself as a Fact Checking service (T0151.008: Microblogging Platform, T0150.003: Pre-Existing, T0146.003: Verified Account, T0097.203: Fact Checking Organisation Persona, T0143.002: Fabricated Persona). |
| [T0143.002 Fabricated Persona](../../generated_pages/techniques/T0143.002.md) | IT00000498 Ahead of the 2019 UK Election, during a leaders’ debate, the Conservative party rebranded their “Conservative Campaign Headquarters Press” account to “factcheckUK”:<br><br><i>The evening of the 19th November 2019 saw the first of three Leaders’ Debates on ITV, starting at 8pm and lasting for an hour. Current Prime Minister and leader of the Conservatives, Boris Johnson faced off against Labour party leader, Jeremy Corbyn. Plenty of people will have been watching the debate live, but a good proportion were “watching” (er, “twitching”?) via Twitter. This is something I’ve done in the past for certain shows. In some cases I just can’t watch or listen, but I can read, and in other cases, the commentary is far more interesting and entertaining than the show itself will ever be. This, for me, is just such a case. But very quickly, all eyes turned upon a modestly sized account with the handle @CCHQPress. That’s short for Conservative Campaign Headquarters Press. According to their (current!) Twitter bio, they are based in Westminster and they provide “snippets of news and commentary from CCHQ” to their 75k followers.<br><br>That is, until a few minutes into the debate.<br><br>All at once, like a person throwing off their street clothes to reveal some sinister new identity underneath, @CCHQPress abruptly shed its name, blue Conservative logo, Boris Johnson banner, and bio description. Moments later, it had entirely reinvented itself.<br><br>The purple banner was emblazoned with white font that read “✓ factcheckUK [with a “FROM CCHQ” subheading]”.<br><br>The matching profile picture was a white tick in a purple circle. The bio was updated to: “Fact checking Labour from CCHQ”. And the name now read factcheckUK, with the customary Twitter blue (or white depending on your phone settings!) validation tick still after it.</i><br><br>In this example an existing verified social media account on Twitter was repurposed to inauthentically present itself as a Fact Checking service (T0151.008: Microblogging Platform, T0150.003: Pre-Existing, T0146.003: Verified Account, T0097.203: Fact Checking Organisation Persona, T0143.002: Fabricated Persona). |
| [T0146.003 Verified Account](../../generated_pages/techniques/T0146.003.md) |  IT00000499 Ahead of the 2019 UK Election during a leaders' debate, the Conservative party rebranded their “Conservative Campaign Headquarters Press” account to “FactCheckUK”:<br><br><i>The evening of the 19th November 2019 saw the first of three Leaders' Debates on ITV, starting at 8pm and lasting for an hour. Current Prime Minister and leader of the Conservatives, Boris Johnson faced off against Labour party leader, Jeremy Corbyn. Plenty of people will have been watching the debate live, but a good proportion were watching (er, “twitching”?) via Twitter. This is something I've done in the past for certain shows. In some cases I just can't watch or listen, but I can read, and in other cases, the commentary is far more interesting and entertaining than the show itself will ever be. This, for me, is just such a case. But very quickly, all eyes turned upon a modestly sized account with the handle @CCHQPress. That's short for Conservative Campaign Headquarters Press. According to their (current!) Twitter bio, they are based in Westminster and they provide snippets of news and commentary from CCHQ to their 75k followers.<br><br>That is, until a few minutes into the debate.<br><br>All at once, like a person throwing off their street clothes to reveal some sinister new identity underneath, @CCHQPress abruptly shed its name, blue Conservative logo, Boris Johnson banner, and bio description. Moments later, it had entirely reinvented itself.<br><br>The purple banner was emblazoned with white font that read “✓ factcheckUK [with a “FROM CCQH” subheading]”.<br><br>The matching profile picture was a white tick in a purple circle. The bio was updated to: “Fact checking Labour from CCHQ”. And the name now read factcheckUK, with the customary Twitter blue (or white depending on your phone settings!) validation tick still after it</i><br><br>In this example an existing verified social media account on Twitter was repurposed to inauthentically present itself as a Fact Checking service (T0151.008: Microblogging Platform, T0150.003: Pre-Existing, T0146.003: Verified Account, T0097.203: Fact Checking Organisation Persona, T0143.002: Fabricated Persona). |
| [T0150.003 Pre-Existing](../../generated_pages/techniques/T0150.003.md) |  IT00000496 Ahead of the 2019 UK Election during a leaders' debate, the Conservative party rebranded their “Conservative Campaign Headquarters Press” account to “FactCheckUK”:<br><br><i>The evening of the 19th November 2019 saw the first of three Leaders' Debates on ITV, starting at 8pm and lasting for an hour. Current Prime Minister and leader of the Conservatives, Boris Johnson faced off against Labour party leader, Jeremy Corbyn. Plenty of people will have been watching the debate live, but a good proportion were watching (er, “twitching”?) via Twitter. This is something I've done in the past for certain shows. In some cases I just can't watch or listen, but I can read, and in other cases, the commentary is far more interesting and entertaining than the show itself will ever be. This, for me, is just such a case. But very quickly, all eyes turned upon a modestly sized account with the handle @CCHQPress. That's short for Conservative Campaign Headquarters Press. According to their (current!) Twitter bio, they are based in Westminster and they provide snippets of news and commentary from CCHQ to their 75k followers.<br><br>That is, until a few minutes into the debate.<br><br>All at once, like a person throwing off their street clothes to reveal some sinister new identity underneath, @CCHQPress abruptly shed its name, blue Conservative logo, Boris Johnson banner, and bio description. Moments later, it had entirely reinvented itself.<br><br>The purple banner was emblazoned with white font that read “✓ factcheckUK [with a “FROM CCQH” subheading]”.<br><br>The matching profile picture was a white tick in a purple circle. The bio was updated to: “Fact checking Labour from CCHQ”. And the name now read factcheckUK, with the customary Twitter blue (or white depending on your phone settings!) validation tick still after it</i><br><br>In this example an existing verified social media account on Twitter was repurposed to inauthentically present itself as a Fact Checking service (T0151.008: Microblogging Platform, T0150.003: Pre-Existing, T0146.003: Verified Account, T0097.203: Fact Checking Organisation Persona, T0143.002: Fabricated Persona). |
| [T0151.008 Microblogging Platform](../../generated_pages/techniques/T0151.008.md) |  IT00000495 Ahead of the 2019 UK Election during a leaders' debate, the Conservative party rebranded their “Conservative Campaign Headquarters Press” account to “FactCheckUK”:<br><br><i>The evening of the 19th November 2019 saw the first of three Leaders' Debates on ITV, starting at 8pm and lasting for an hour. Current Prime Minister and leader of the Conservatives, Boris Johnson faced off against Labour party leader, Jeremy Corbyn. Plenty of people will have been watching the debate live, but a good proportion were watching (er, “twitching”?) via Twitter. This is something I've done in the past for certain shows. In some cases I just can't watch or listen, but I can read, and in other cases, the commentary is far more interesting and entertaining than the show itself will ever be. This, for me, is just such a case. But very quickly, all eyes turned upon a modestly sized account with the handle @CCHQPress. That's short for Conservative Campaign Headquarters Press. According to their (current!) Twitter bio, they are based in Westminster and they provide snippets of news and commentary from CCHQ to their 75k followers.<br><br>That is, until a few minutes into the debate.<br><br>All at once, like a person throwing off their street clothes to reveal some sinister new identity underneath, @CCHQPress abruptly shed its name, blue Conservative logo, Boris Johnson banner, and bio description. Moments later, it had entirely reinvented itself.<br><br>The purple banner was emblazoned with white font that read “✓ factcheckUK [with a “FROM CCQH” subheading]”.<br><br>The matching profile picture was a white tick in a purple circle. The bio was updated to: “Fact checking Labour from CCHQ”. And the name now read factcheckUK, with the customary Twitter blue (or white depending on your phone settings!) validation tick still after it</i><br><br>In this example an existing verified social media account on Twitter was repurposed to inauthentically present itself as a Fact Checking service (T0151.008: Microblogging Platform, T0150.003: Pre-Existing, T0146.003: Verified Account, T0097.203: Fact Checking Organisation Persona, T0143.002: Fabricated Persona). |
DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW


@ -0,0 +1,28 @@
# Incident I00121: Operation Overload: how pro-Russian actors flood newsrooms with fake content and seek to divert their efforts
* **Summary:** <i>This report exposes a large-scale, cross-country, multi-platform disinformation campaign designed to spread pro-Russian propaganda in the West, with clear indicators of foreign interference and information manipulation (FIMI). The narratives promoted by the actors are aligned with Russian interests, which is a hallmark of FIMI. At the time of writing, this operation is still ongoing.</i>
* **Incident Type:**
* **Year started:**
* **Countries:** ,
* **Found via:**
* **Date added:**
| Reference | Pub Date | Authors | Org | Archive |
| --------- | -------- | ------- | --- | ------- |
| Technique | Description given for this incident |
| --------- | ------------------------- |
| [T0143.002 Fabricated Persona](../../generated_pages/techniques/T0143.002.md) |  IT00000501 <i>The unique aspect of Operation Overload is a barrage of emails sent to newsrooms and fact-checkers across Europe. The authors of these messages urge recipients to verify content allegedly found online. The email subject lines often include an incitement to verify the claims briefly described in the message body. This is followed by a short list of links directing recipients to posts on Telegram, X, or known pro-Russian websites, including Pravda and Sputnik.<br><br>We have collected 221 emails sent to 20 organisations. The organisations mostly received identical emails urging them to fact-check specific false stories, which demonstrates that the emails were sent as part of a larger coordinated campaign.<br><br>[...]<br><br>The authors of the emails do not hide their intention to see the fake content widely spread. In February 2024, a journalist at the German outlet CORRECTIV engaged with the sender of one of the emails, providing feedback on the narratives which were originally sent. CORRECTIV received a response from the same Gmail address, initially expressing respect and trust in CORRECTIV's assessment, while asking: “is it possible for your work to be seen by as many people as possible?”, thereby clearly stating the goal of the operation.<br><br>[...]<br><br>All the emails come from authors posing as concerned citizens. All emails are sent with Gmail accounts, which is typical for personal use. This makes it challenging to identify the individuals behind these emails, as anyone can open a Gmail account for free. The email headers indicate that the messages were sent from the Gmail interface, not from a personal client which would disclose the sender's IP address.</i><br><br>In this example, threat actors used Gmail accounts (T0146.001: Free Account, T0097.100: Individual Persona, T0143.002: Fabricated Persona, T0153.001: Email Platform) to target journalists and fact-checkers, with the apparent goal of having them amplify operation narratives through fact checks. |
| [T0146.001 Free Account](../../generated_pages/techniques/T0146.001.md) |  IT00000500 <i>The unique aspect of Operation Overload is a barrage of emails sent to newsrooms and fact-checkers across Europe. The authors of these messages urge recipients to verify content allegedly found online. The email subject lines often include an incitement to verify the claims briefly described in the message body. This is followed by a short list of links directing recipients to posts on Telegram, X, or known pro-Russian websites, including Pravda and Sputnik.<br><br>We have collected 221 emails sent to 20 organisations. The organisations mostly received identical emails urging them to fact-check specific false stories, which demonstrates that the emails were sent as part of a larger coordinated campaign.<br><br>[...]<br><br>The authors of the emails do not hide their intention to see the fake content widely spread. In February 2024, a journalist at the German outlet CORRECTIV engaged with the sender of one of the emails, providing feedback on the narratives which were originally sent. CORRECTIV received a response from the same Gmail address, initially expressing respect and trust in CORRECTIV's assessment, while asking: “is it possible for your work to be seen by as many people as possible?”, thereby clearly stating the goal of the operation.<br><br>[...]<br><br>All the emails come from authors posing as concerned citizens. All emails are sent with Gmail accounts, which is typical for personal use. This makes it challenging to identify the individuals behind these emails, as anyone can open a Gmail account for free. The email headers indicate that the messages were sent from the Gmail interface, not from a personal client which would disclose the sender's IP address.</i><br><br>In this example, threat actors used Gmail accounts (T0146.001: Free Account, T0097.100: Individual Persona, T0143.002: Fabricated Persona, T0153.001: Email Platform) to target journalists and fact-checkers, with the apparent goal of having them amplify operation narratives through fact checks. |
| [T0153.001 Email Platform](../../generated_pages/techniques/T0153.001.md) |  IT00000502 <i>The unique aspect of Operation Overload is a barrage of emails sent to newsrooms and fact-checkers across Europe. The authors of these messages urge recipients to verify content allegedly found online. The email subject lines often include an incitement to verify the claims briefly described in the message body. This is followed by a short list of links directing recipients to posts on Telegram, X, or known pro-Russian websites, including Pravda and Sputnik.<br><br>We have collected 221 emails sent to 20 organisations. The organisations mostly received identical emails urging them to fact-check specific false stories, which demonstrates that the emails were sent as part of a larger coordinated campaign.<br><br>[...]<br><br>The authors of the emails do not hide their intention to see the fake content widely spread. In February 2024, a journalist at the German outlet CORRECTIV engaged with the sender of one of the emails, providing feedback on the narratives which were originally sent. CORRECTIV received a response from the same Gmail address, initially expressing respect and trust in CORRECTIV's assessment, while asking: “is it possible for your work to be seen by as many people as possible?”, thereby clearly stating the goal of the operation.<br><br>[...]<br><br>All the emails come from authors posing as concerned citizens. All emails are sent with Gmail accounts, which is typical for personal use. This makes it challenging to identify the individuals behind these emails, as anyone can open a Gmail account for free. The email headers indicate that the messages were sent from the Gmail interface, not from a personal client which would disclose the sender's IP address.</i><br><br>In this example, threat actors used Gmail accounts (T0146.001: Free Account, T0097.100: Individual Persona, T0143.002: Fabricated Persona, T0153.001: Email Platform) to target journalists and fact-checkers, with the apparent goal of having them amplify operation narratives through fact checks. |
DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW
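As a note on the header-analysis point quoted in the incident above (Gmail-interface submissions not disclosing the sender's IP), a minimal sketch of how an analyst might pull candidate originating IPs out of `Received` headers with Python's standard library. The sample message and all addresses here are hypothetical; a real investigation would load the original `.eml` file received by the newsroom.

```python
import email
import re
from email import policy

# Hypothetical raw message for illustration only.
RAW = """\
Received: from mail-sor-f41.google.com ([209.85.220.41])
        by mx.example.org with ESMTPS id abc123
        for <newsroom@example.org>; Mon, 1 Jul 2024 10:00:00 +0000
From: Concerned Citizen <someone@gmail.com>
To: newsroom@example.org
Subject: Please verify this claim

Can you fact-check this?
"""

def originating_ips(raw_message: str) -> list[str]:
    """Extract IPv4 addresses found in Received headers, top hop first."""
    msg = email.message_from_string(raw_message, policy=policy.default)
    ips = []
    for hop in msg.get_all("Received", []):
        ips.extend(re.findall(r"\b(?:\d{1,3}\.){3}\d{1,3}\b", hop))
    return ips

# Messages submitted via Gmail's web interface typically show only Google
# infrastructure addresses in the earliest hop, whereas a desktop client
# submitting over SMTP can leak the sender's own IP there.
print(originating_ips(RAW))
```

This only surfaces candidates; attributing a hop to the actual sender still requires reading the full `Received` chain bottom-up and knowing which relays belong to the provider.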


@ -0,0 +1,32 @@
# Incident I00122: The Extreme Right on Discord
* **Summary:** <i>This briefing is part of ISD's Gaming and Extremism Series exploring the role online gaming plays in the strategy of far-right extremists in the UK and globally. This is part of a broader programme on the Future of Extremism being delivered by ISD in the second half of 2021, charting the transformational shifts in the extremist threat landscape two decades on from 9/11, and the policy strategies required to counter the next generation of extremist threats. It provides a snapshot overview of the extreme right's use of Discord.</i>
* **Incident Type:**
* **Year started:**
* **Countries:** ,
* **Found via:**
* **Date added:**
| Reference | Pub Date | Authors | Org | Archive |
| --------- | -------- | ------- | --- | ------- |
| Technique | Description given for this incident |
| --------- | ------------------------- |
| [T0048 Harass](../../generated_pages/techniques/T0048.md) |  IT00000508 Discord is an example of a T0151.004: Chat Platform, which allows users to create their own T0151.005: Chat Community Server. The Institute for Strategic Dialogue (ISD) conducted an investigation into the extreme right's usage of Discord servers:<br><br><i>Discord is a free service accessible via phones and computers. It allows users to talk to each other in real time via voice, text or video chat and emerged in 2015 as a platform designed to assist gamers in communicating with each other while playing video games. The popularity of the platform has surged in recent years, and it is currently estimated to have 140 million monthly active users.<br><br>Chatrooms - known as “servers” - in the platform can be created by anyone, and they are used for a range of purposes that extend far beyond gaming. Such purposes include the discussion of extreme right-wing ideologies and the planning of offline extremist activity. Ahead of the far-right Unite the Right rally in Charlottesville, Virginia, in August 2017, organisers used Discord to plan and promote events and posted swastikas and praised Hitler in chat rooms with names like “National Socialist Army” and “Führer's Gas Chamber”.</i><br><br>In this example a Discord server was used to organise the 2017 Charlottesville Unite the Right rally. Chat rooms in the server were used to discuss different topics related to the rally (T0057: Organise Events, T0126.002: Facilitate Logistics or Support for Attendance, T0151.004: Chat Platform, T0151.005: Chat Community Server, T0151.006: Chat Room).<br><br><i>Another primary activity engaged in the servers analysed are raids against other servers associated with political opponents, and in particular those that appear to be pro-LGBTQ.<br><br>Raids are a phenomenon in which a small group of users will join a Discord server with the sole purpose of spamming the host with offensive or incendiary messages and content with the aim of upsetting local users or having the host server banned by Discord. On two servers examined here, raiding was their primary function.<br><br>Among servers devoted to this activity, specific channels were often created to host links to servers that users were then encouraged to raid. Users are encouraged to be as offensive as possible with the aim of upsetting or angering users on the raided server, and channels often had content banks of offensive memes and content to be shared on raided servers.<br><br>The use of raids demonstrates the gamified nature of extremist activity on Discord, where use of the platform and harassment of political opponents is itself turned into a type of real-life video game designed to strengthen in-group affiliation. This combined with the broader extremist activity identified in these channels suggests that the combative activity of raiding could provide a pathway for younger people to become more engaged with extremist activity.</i><br><br>Discord servers were used by members of the extreme right to coordinate harassment of targeted communities (T0048: Harass, T0049.005: Conduct Swarming, T0151.004: Chat Platform, T0151.005: Chat Community Server). |
| [T0049.005 Conduct Swarming](../../generated_pages/techniques/T0049.005.md) |  IT00000509 Discord is an example of a T0151.004: Chat Platform, which allows users to create their own T0151.005: Chat Community Server. The Institute for Strategic Dialogue (ISD) conducted an investigation into the extreme right's usage of Discord servers:<br><br><i>Discord is a free service accessible via phones and computers. It allows users to talk to each other in real time via voice, text or video chat and emerged in 2015 as a platform designed to assist gamers in communicating with each other while playing video games. The popularity of the platform has surged in recent years, and it is currently estimated to have 140 million monthly active users.<br><br>Chatrooms - known as “servers” - in the platform can be created by anyone, and they are used for a range of purposes that extend far beyond gaming. Such purposes include the discussion of extreme right-wing ideologies and the planning of offline extremist activity. Ahead of the far-right Unite the Right rally in Charlottesville, Virginia, in August 2017, organisers used Discord to plan and promote events and posted swastikas and praised Hitler in chat rooms with names like “National Socialist Army” and “Führer's Gas Chamber”.</i><br><br>In this example a Discord server was used to organise the 2017 Charlottesville Unite the Right rally. Chat rooms in the server were used to discuss different topics related to the rally (T0057: Organise Events, T0126.002: Facilitate Logistics or Support for Attendance, T0151.004: Chat Platform, T0151.005: Chat Community Server, T0151.006: Chat Room).<br><br><i>Another primary activity engaged in the servers analysed are raids against other servers associated with political opponents, and in particular those that appear to be pro-LGBTQ.<br><br>Raids are a phenomenon in which a small group of users will join a Discord server with the sole purpose of spamming the host with offensive or incendiary messages and content with the aim of upsetting local users or having the host server banned by Discord. On two servers examined here, raiding was their primary function.<br><br>Among servers devoted to this activity, specific channels were often created to host links to servers that users were then encouraged to raid. Users are encouraged to be as offensive as possible with the aim of upsetting or angering users on the raided server, and channels often had content banks of offensive memes and content to be shared on raided servers.<br><br>The use of raids demonstrates the gamified nature of extremist activity on Discord, where use of the platform and harassment of political opponents is itself turned into a type of real-life video game designed to strengthen in-group affiliation. This combined with the broader extremist activity identified in these channels suggests that the combative activity of raiding could provide a pathway for younger people to become more engaged with extremist activity.</i><br><br>Discord servers were used by members of the extreme right to coordinate harassment of targeted communities (T0048: Harass, T0049.005: Conduct Swarming, T0151.004: Chat Platform, T0151.005: Chat Community Server). |
| [T0057 Organise Events](../../generated_pages/techniques/T0057.md) |  IT00000506 Discord is an example of a T0151.004: Chat Platform, which allows users to create their own T0151.005: Chat Community Server. The Institute for Strategic Dialogue (ISD) conducted an investigation into the extreme right's usage of Discord servers:<br><br><i>Discord is a free service accessible via phones and computers. It allows users to talk to each other in real time via voice, text or video chat and emerged in 2015 as a platform designed to assist gamers in communicating with each other while playing video games. The popularity of the platform has surged in recent years, and it is currently estimated to have 140 million monthly active users.<br><br>Chatrooms - known as “servers” - in the platform can be created by anyone, and they are used for a range of purposes that extend far beyond gaming. Such purposes include the discussion of extreme right-wing ideologies and the planning of offline extremist activity. Ahead of the far-right Unite the Right rally in Charlottesville, Virginia, in August 2017, organisers used Discord to plan and promote events and posted swastikas and praised Hitler in chat rooms with names like “National Socialist Army” and “Führer's Gas Chamber”.</i><br><br>In this example a Discord server was used to organise the 2017 Charlottesville Unite the Right rally. Chat rooms in the server were used to discuss different topics related to the rally (T0057: Organise Events, T0126.002: Facilitate Logistics or Support for Attendance, T0151.004: Chat Platform, T0151.005: Chat Community Server, T0151.006: Chat Room).<br><br><i>Another primary activity engaged in the servers analysed are raids against other servers associated with political opponents, and in particular those that appear to be pro-LGBTQ.<br><br>Raids are a phenomenon in which a small group of users will join a Discord server with the sole purpose of spamming the host with offensive or incendiary messages and content with the aim of upsetting local users or having the host server banned by Discord. On two servers examined here, raiding was their primary function.<br><br>Among servers devoted to this activity, specific channels were often created to host links to servers that users were then encouraged to raid. Users are encouraged to be as offensive as possible with the aim of upsetting or angering users on the raided server, and channels often had content banks of offensive memes and content to be shared on raided servers.<br><br>The use of raids demonstrates the gamified nature of extremist activity on Discord, where use of the platform and harassment of political opponents is itself turned into a type of real-life video game designed to strengthen in-group affiliation. This combined with the broader extremist activity identified in these channels suggests that the combative activity of raiding could provide a pathway for younger people to become more engaged with extremist activity.</i><br><br>Discord servers were used by members of the extreme right to coordinate harassment of targeted communities (T0048: Harass, T0049.005: Conduct Swarming, T0151.004: Chat Platform, T0151.005: Chat Community Server). |
| [T0126.002 Facilitate Logistics or Support for Attendance](../../generated_pages/techniques/T0126.002.md) |  IT00000507 Discord is an example of a T0151.004: Chat Platform, which allows users to create their own T0151.005: Chat Community Server. The Institute for Strategic Dialogue (ISD) conducted an investigation into the extreme right's usage of Discord servers:<br><br><i>Discord is a free service accessible via phones and computers. It allows users to talk to each other in real time via voice, text or video chat and emerged in 2015 as a platform designed to assist gamers in communicating with each other while playing video games. The popularity of the platform has surged in recent years, and it is currently estimated to have 140 million monthly active users.<br><br>Chatrooms - known as “servers” - in the platform can be created by anyone, and they are used for a range of purposes that extend far beyond gaming. Such purposes include the discussion of extreme right-wing ideologies and the planning of offline extremist activity. Ahead of the far-right Unite the Right rally in Charlottesville, Virginia, in August 2017, organisers used Discord to plan and promote events and posted swastikas and praised Hitler in chat rooms with names like “National Socialist Army” and “Führer's Gas Chamber”.</i><br><br>In this example a Discord server was used to organise the 2017 Charlottesville Unite the Right rally. Chat rooms in the server were used to discuss different topics related to the rally (T0057: Organise Events, T0126.002: Facilitate Logistics or Support for Attendance, T0151.004: Chat Platform, T0151.005: Chat Community Server, T0151.006: Chat Room).<br><br><i>Another primary activity engaged in the servers analysed are raids against other servers associated with political opponents, and in particular those that appear to be pro-LGBTQ.<br><br>Raids are a phenomenon in which a small group of users will join a Discord server with the sole purpose of spamming the host with offensive or incendiary messages and content with the aim of upsetting local users or having the host server banned by Discord. On two servers examined here, raiding was their primary function.<br><br>Among servers devoted to this activity, specific channels were often created to host links to servers that users were then encouraged to raid. Users are encouraged to be as offensive as possible with the aim of upsetting or angering users on the raided server, and channels often had content banks of offensive memes and content to be shared on raided servers.<br><br>The use of raids demonstrates the gamified nature of extremist activity on Discord, where use of the platform and harassment of political opponents is itself turned into a type of real-life video game designed to strengthen in-group affiliation. This combined with the broader extremist activity identified in these channels suggests that the combative activity of raiding could provide a pathway for younger people to become more engaged with extremist activity.</i><br><br>Discord servers were used by members of the extreme right to coordinate harassment of targeted communities (T0048: Harass, T0049.005: Conduct Swarming, T0151.004: Chat Platform, T0151.005: Chat Community Server). |
| [T0151.004 Chat Platform](../../generated_pages/techniques/T0151.004.md) |  IT00000503 Discord is an example of a T0151.004: Chat Platform, which allows users to create their own T0151.005: Chat Community Server. The Institute for Strategic Dialogue (ISD) conducted an investigation into the extreme right's usage of Discord servers:<br><br><i>Discord is a free service accessible via phones and computers. It allows users to talk to each other in real time via voice, text or video chat and emerged in 2015 as a platform designed to assist gamers in communicating with each other while playing video games. The popularity of the platform has surged in recent years, and it is currently estimated to have 140 million monthly active users.<br><br>Chatrooms - known as “servers” - in the platform can be created by anyone, and they are used for a range of purposes that extend far beyond gaming. Such purposes include the discussion of extreme right-wing ideologies and the planning of offline extremist activity. Ahead of the far-right Unite the Right rally in Charlottesville, Virginia, in August 2017, organisers used Discord to plan and promote events and posted swastikas and praised Hitler in chat rooms with names like “National Socialist Army” and “Führer's Gas Chamber”.</i><br><br>In this example a Discord server was used to organise the 2017 Charlottesville Unite the Right rally. Chat rooms in the server were used to discuss different topics related to the rally (T0057: Organise Events, T0126.002: Facilitate Logistics or Support for Attendance, T0151.004: Chat Platform, T0151.005: Chat Community Server, T0151.006: Chat Room).<br><br><i>Another primary activity engaged in the servers analysed are raids against other servers associated with political opponents, and in particular those that appear to be pro-LGBTQ.<br><br>Raids are a phenomenon in which a small group of users will join a Discord server with the sole purpose of spamming the host with offensive or incendiary messages and content with the aim of upsetting local users or having the host server banned by Discord. On two servers examined here, raiding was their primary function.<br><br>Among servers devoted to this activity, specific channels were often created to host links to servers that users were then encouraged to raid. Users are encouraged to be as offensive as possible with the aim of upsetting or angering users on the raided server, and channels often had content banks of offensive memes and content to be shared on raided servers.<br><br>The use of raids demonstrates the gamified nature of extremist activity on Discord, where use of the platform and harassment of political opponents is itself turned into a type of real-life video game designed to strengthen in-group affiliation. This combined with the broader extremist activity identified in these channels suggests that the combative activity of raiding could provide a pathway for younger people to become more engaged with extremist activity.</i><br><br>Discord servers were used by members of the extreme right to coordinate harassment of targeted communities (T0048: Harass, T0049.005: Conduct Swarming, T0151.004: Chat Platform, T0151.005: Chat Community Server). |
| [T0151.005 Chat Community Server](../../generated_pages/techniques/T0151.005.md) |  IT00000504 Discord is an example of a T0151.004: Chat Platform, which allows users to create their own T0151.005: Chat Community Server. The Institute for Strategic Dialogue (ISD) conducted an investigation into the extreme right's usage of Discord servers:<br><br><i>Discord is a free service accessible via phones and computers. It allows users to talk to each other in real time via voice, text or video chat and emerged in 2015 as a platform designed to assist gamers in communicating with each other while playing video games. The popularity of the platform has surged in recent years, and it is currently estimated to have 140 million monthly active users.<br><br>Chatrooms - known as “servers” - in the platform can be created by anyone, and they are used for a range of purposes that extend far beyond gaming. Such purposes include the discussion of extreme right-wing ideologies and the planning of offline extremist activity. Ahead of the far-right Unite the Right rally in Charlottesville, Virginia, in August 2017, organisers used Discord to plan and promote events and posted swastikas and praised Hitler in chat rooms with names like “National Socialist Army” and “Führer's Gas Chamber”.</i><br><br>In this example a Discord server was used to organise the 2017 Charlottesville Unite the Right rally. Chat rooms in the server were used to discuss different topics related to the rally (T0057: Organise Events, T0126.002: Facilitate Logistics or Support for Attendance, T0151.004: Chat Platform, T0151.005: Chat Community Server, T0151.006: Chat Room).<br><br><i>Another primary activity engaged in the servers analysed are raids against other servers associated with political opponents, and in particular those that appear to be pro-LGBTQ.<br><br>Raids are a phenomenon in which a small group of users will join a Discord server with the sole purpose of spamming the host with offensive or incendiary messages and content with the aim of upsetting local users or having the host server banned by Discord. On two servers examined here, raiding was their primary function.<br><br>Among servers devoted to this activity, specific channels were often created to host links to servers that users were then encouraged to raid. Users are encouraged to be as offensive as possible with the aim of upsetting or angering users on the raided server, and channels often had content banks of offensive memes and content to be shared on raided servers.<br><br>The use of raids demonstrates the gamified nature of extremist activity on Discord, where use of the platform and harassment of political opponents is itself turned into a type of real-life video game designed to strengthen in-group affiliation. This combined with the broader extremist activity identified in these channels suggests that the combative activity of raiding could provide a pathway for younger people to become more engaged with extremist activity.</i><br><br>Discord servers were used by members of the extreme right to coordinate harassment of targeted communities (T0048: Harass, T0049.005: Conduct Swarming, T0151.004: Chat Platform, T0151.005: Chat Community Server). |
| [T0151.006 Chat Room](../../generated_pages/techniques/T0151.006.md) |  IT00000505 Discord is an example of a T0151.004: Chat Platform, which allows users to create their own T0151.005: Chat Community Server. The Institute for Strategic Dialog (ISD) conducted an investigation into the extreme rights usage of Discord servers:<br><br><i>Discord is a free service accessible via phones and computers. It allows users to talk to each other in real time via voice, text or video chat and emerged in 2015 as a platform designed to assist gamers in communicating with each other while playing video games. The popularity of the platform has surged in recent years, and it is currently estimated to have 140 million monthly active users.<br><br>Chatrooms - known as servers - in the platform can be created by anyone, and they are used for a range of purposes that extend far beyond gaming. Such purposes include the discussion of extreme right-wing ideologies and the planning of offline extremist activity. Ahead of the far-right Unite the Right rally in Charlottesville, Virginia, in August 2017, organisers used Discord to plan and promote events and posted swastikas and praised Hitler in chat rooms with names like “National Socialist Army” and “Führers Gas Chamber”.</i><br><br>In this example a Discord server was used to organise the 2017 Charlottesville Unite the Right rally. Chat rooms in the server were used to discuss different topics related to the rally (T0057: Organise Events, T0126.002: Facilitate Logistics or Support for Attendance, T0151.004: Chat Platform, T0151.005: Chat Community Server, T0151.006: Chat Room).<br><br><i>Another primary activity engaged in the servers analysed are raids against other servers associated with political opponents, and in particular those that appear to be pro-LGBTQ. 
Raids are a phenomenon in which a small group of users will join a Discord server with the sole purpose of spamming the host with offensive or incendiary messages and content with the aim of upsetting local users or having the host server banned by Discord. On two servers examined here, raiding was their primary function.<br><br>Among servers devoted to this activity, specific channels were often created to host links to servers that users were then encouraged to raid. Users are encouraged to be as offensive as possible with the aim of upsetting or angering users on the raided server, and channels often had content banks of offensive memes and content to be shared on raided servers.<br><br>The use of raids demonstrates the gamified nature of extremist activity on Discord, where use of the platform and harassment of political opponents is itself turned into a type of real-life video game designed to strengthen in-group affiliation. This combined with the broader extremist activity identified in these channels suggests that the combative activity of raiding could provide a pathway for younger people to become more engaged with extremist activity.</i><br><br>Discord servers were used by members of the extreme right to coordinate harassment of targeted communities (T0048: Harass, T0049.005: Conduct Swarming, T0151.004: Chat Platform, T0151.005: Chat Community Server). |
DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW

View File

@@ -0,0 +1,34 @@
# Incident I00123: The Extreme Right on Steam
* **Summary:** <i>This briefing is part of ISDs Gaming and Extremism Series exploring the role online gaming plays in the strategy of far-right extremists in the UK and globally. This is part of a broader programme on the Future of Extremism being delivered by ISD in the second half of 2021, charting the transformational shifts in the extremist threat landscape two decades on from 9/11, and the policy strategies required to counter the next generation of extremist threats. It provides a snapshot overview of the extreme rights use of Steam.</i>
* **Incident type:**
* **Year started:**
* **Countries:** ,
* **Found via:**
* **Date added:**
| Reference | Pub Date | Authors | Org | Archive |
| --------- | -------- | ------- | --- | ------- |
| Technique | Description given for this incident |
| --------- | ------------------------- |
| [T0048 Harass](../../generated_pages/techniques/T0048.md) |  IT00000515 ISD conducted an investigation into the usage of social groups on Steam. Steam is an online platform used to buy and sell digital games, and includes the Steam community feature, which “allows users to find friends and join groups and discussion forums, while also offering in-game voice and text chat”. Actors have used Steams social capabilities to enable online harm campaigns:<br><br><i>One function of these Steam groups is the organisation of raids coordinated trolling activity against their political opponents. An example of this can be seen in a white power music group sharing a link to an Israeli Steam group, encouraging other members to “help me raid this juden [German word for Jew] group”. The comments section of said target group show that neo-Nazi and antisemitic comments were consistently posted in the group just two minutes after the instruction had been posted in the extremist group, highlighting the swiftness with which racially motivated harassment can be directed online.</i><br><br>Threat actors used social groups on Steam to organise harassment of targets (T0152.009: Software Delivery Platform, T0151.002: Online Community Group, T0049.005: Conduct Swarming, T0048: Harass). |
| [T0049.005 Conduct Swarming](../../generated_pages/techniques/T0049.005.md) |  IT00000514 ISD conducted an investigation into the usage of social groups on Steam. Steam is an online platform used to buy and sell digital games, and includes the Steam community feature, which “allows users to find friends and join groups and discussion forums, while also offering in-game voice and text chat”. Actors have used Steams social capabilities to enable online harm campaigns:<br><br><i>One function of these Steam groups is the organisation of raids coordinated trolling activity against their political opponents. An example of this can be seen in a white power music group sharing a link to an Israeli Steam group, encouraging other members to “help me raid this juden [German word for Jew] group”. The comments section of said target group show that neo-Nazi and antisemitic comments were consistently posted in the group just two minutes after the instruction had been posted in the extremist group, highlighting the swiftness with which racially motivated harassment can be directed online.</i><br><br>Threat actors used social groups on Steam to organise harassment of targets (T0152.009: Software Delivery Platform, T0151.002: Online Community Group, T0049.005: Conduct Swarming, T0048: Harass). |
| [T0122 Direct Users to Alternative Platforms](../../generated_pages/techniques/T0122.md) |  IT00000518 ISD conducted an investigation into the usage of social groups on Steam. Steam is an online platform used to buy and sell digital games, and includes the Steam community feature, which “allows users to find friends and join groups and discussion forums, while also offering in-game voice and text chat”. Actors have used Steams social capabilities to enable online harm campaigns:<br><br><i>A number of groups were observed encouraging members to join conversations on outside platforms. These include links to Telegram channels connected to white supremacist marches, and media outlets, forums and Discord servers run by neo-Nazis. <br><br>[...]<br><br>This off-ramping activity demonstrates how rather than sitting in isolation, Steam fits into the wider extreme right wing online ecosystem, with Steam groups acting as hubs for communities and organizations which span multiple platforms. Accordingly, although the platform appears to fill a specific role in the building and strengthening of communities with similar hobbies and interests, it is suggested that analysis seeking to determine the risk of these communities should focus on their activity across platforms</i><br><br>Social Groups on Steam were used to drive new people to other neo-Nazi controlled community assets (T0122: Direct Users to Alternative Platforms, T0152.009: Software Delivery Platform, T0151.002: Online Community Group). |
| [T0151.002 Online Community Group](../../generated_pages/techniques/T0151.002.md) |  IT00000511 Analysis of communities on the gaming platform Steam showed that groups who are known to have engaged in acts of terrorism used Steam to host social communities (T0152.009: Software Delivery Platform, T0151.002: Online Community Group):<br><br><i>The first is a Finnish-language group which was set up to promote the Nordic Resistance Movement (NRM). NRM are the only group in the sample examined by ISD known to have engaged in terrorist attacks. Swedish members of the group conducted a series of bombings in Gothenburg in 2016 and 2017, and several Finnish members are under investigation in relation to both violent attacks and murder.<br><br>The NRM Steam group does not host content related to gaming, and instead seems to act as a hub for the movement. The groups overview section contains a link to the official NRM website, and users are encouraged to find like-minded people to join the group. The group is relatively small, with 87 members, but at the time of writing, it appeared to be active and in use. Interestingly, although the group is in Finnish language, it has members in common with the English language channels identified in this analysis. This suggests that Steam may help facilitate international exchange between right-wing extremists.</i> |
| [T0151.002 Online Community Group](../../generated_pages/techniques/T0151.002.md) |  IT00000513 ISD conducted an investigation into the usage of social groups on Steam. Steam is an online platform used to buy and sell digital games, and includes the Steam community feature, which “allows users to find friends and join groups and discussion forums, while also offering in-game voice and text chat”. Actors have used Steams social capabilities to enable online harm campaigns:<br><br><i>One function of these Steam groups is the organisation of raids coordinated trolling activity against their political opponents. An example of this can be seen in a white power music group sharing a link to an Israeli Steam group, encouraging other members to “help me raid this juden [German word for Jew] group”. The comments section of said target group show that neo-Nazi and antisemitic comments were consistently posted in the group just two minutes after the instruction had been posted in the extremist group, highlighting the swiftness with which racially motivated harassment can be directed online.</i><br><br>Threat actors used social groups on Steam to organise harassment of targets (T0152.009: Software Delivery Platform, T0151.002: Online Community Group, T0049.005: Conduct Swarming, T0048: Harass). |
| [T0151.002 Online Community Group](../../generated_pages/techniques/T0151.002.md) |  IT00000517 ISD conducted an investigation into the usage of social groups on Steam. Steam is an online platform used to buy and sell digital games, and includes the Steam community feature, which “allows users to find friends and join groups and discussion forums, while also offering in-game voice and text chat”. Actors have used Steams social capabilities to enable online harm campaigns:<br><br><i>A number of groups were observed encouraging members to join conversations on outside platforms. These include links to Telegram channels connected to white supremacist marches, and media outlets, forums and Discord servers run by neo-Nazis. <br><br>[...]<br><br>This off-ramping activity demonstrates how rather than sitting in isolation, Steam fits into the wider extreme right wing online ecosystem, with Steam groups acting as hubs for communities and organizations which span multiple platforms. Accordingly, although the platform appears to fill a specific role in the building and strengthening of communities with similar hobbies and interests, it is suggested that analysis seeking to determine the risk of these communities should focus on their activity across platforms</i><br><br>Social Groups on Steam were used to drive new people to other neo-Nazi controlled community assets (T0122: Direct Users to Alternative Platforms, T0152.009: Software Delivery Platform, T0151.002: Online Community Group). |
| [T0152.009 Software Delivery Platform](../../generated_pages/techniques/T0152.009.md) |  IT00000510 Analysis of communities on the gaming platform Steam showed that groups who are known to have engaged in acts of terrorism used Steam to host social communities (T0152.009: Software Delivery Platform, T0151.002: Online Community Group):<br><br><i>The first is a Finnish-language group which was set up to promote the Nordic Resistance Movement (NRM). NRM are the only group in the sample examined by ISD known to have engaged in terrorist attacks. Swedish members of the group conducted a series of bombings in Gothenburg in 2016 and 2017, and several Finnish members are under investigation in relation to both violent attacks and murder.<br><br>The NRM Steam group does not host content related to gaming, and instead seems to act as a hub for the movement. The groups overview section contains a link to the official NRM website, and users are encouraged to find like-minded people to join the group. The group is relatively small, with 87 members, but at the time of writing, it appeared to be active and in use. Interestingly, although the group is in Finnish language, it has members in common with the English language channels identified in this analysis. This suggests that Steam may help facilitate international exchange between right-wing extremists.</i> |
| [T0152.009 Software Delivery Platform](../../generated_pages/techniques/T0152.009.md) |  IT00000512 ISD conducted an investigation into the usage of social groups on Steam. Steam is an online platform used to buy and sell digital games, and includes the Steam community feature, which “allows users to find friends and join groups and discussion forums, while also offering in-game voice and text chat”. Actors have used Steams social capabilities to enable online harm campaigns:<br><br><i>One function of these Steam groups is the organisation of raids coordinated trolling activity against their political opponents. An example of this can be seen in a white power music group sharing a link to an Israeli Steam group, encouraging other members to “help me raid this juden [German word for Jew] group”. The comments section of said target group show that neo-Nazi and antisemitic comments were consistently posted in the group just two minutes after the instruction had been posted in the extremist group, highlighting the swiftness with which racially motivated harassment can be directed online.</i><br><br>Threat actors used social groups on Steam to organise harassment of targets (T0152.009: Software Delivery Platform, T0151.002: Online Community Group, T0049.005: Conduct Swarming, T0048: Harass). |
| [T0152.009 Software Delivery Platform](../../generated_pages/techniques/T0152.009.md) |  IT00000516 ISD conducted an investigation into the usage of social groups on Steam. Steam is an online platform used to buy and sell digital games, and includes the Steam community feature, which “allows users to find friends and join groups and discussion forums, while also offering in-game voice and text chat”. Actors have used Steams social capabilities to enable online harm campaigns:<br><br><i>A number of groups were observed encouraging members to join conversations on outside platforms. These include links to Telegram channels connected to white supremacist marches, and media outlets, forums and Discord servers run by neo-Nazis. <br><br>[...]<br><br>This off-ramping activity demonstrates how rather than sitting in isolation, Steam fits into the wider extreme right wing online ecosystem, with Steam groups acting as hubs for communities and organizations which span multiple platforms. Accordingly, although the platform appears to fill a specific role in the building and strengthening of communities with similar hobbies and interests, it is suggested that analysis seeking to determine the risk of these communities should focus on their activity across platforms</i><br><br>Social Groups on Steam were used to drive new people to other neo-Nazi controlled community assets (T0122: Direct Users to Alternative Platforms, T0152.009: Software Delivery Platform, T0151.002: Online Community Group). |
DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW

View File

@@ -0,0 +1,26 @@
# Incident I00124: Malign foreign interference and information influence on video game platforms: understanding the adversarial playbook
* **Summary:** <i>The report presents a mapping of the computer game domain and an analysis of threats and vulnerabilities in relation to improper information influence such as the spread of disinformation from a foreign power. It is based on known cases of undue influence in or through computer games or game-related platforms, based on a review of available literature.<br><br>The report identifies more than 40 influence techniques that have successfully targeted the gaming domain and which, by extension, could be used for improper information influence by threat actors from foreign powers. [Translated from original in Swedish]</i>
* **Incident type:**
* **Year started:**
* **Countries:** ,
* **Found via:**
* **Date added:**
| Reference | Pub Date | Authors | Org | Archive |
| --------- | -------- | ------- | --- | ------- |
| Technique | Description given for this incident |
| --------- | ------------------------- |
| [T0152.008 Live Streaming Platform](../../generated_pages/techniques/T0152.008.md) |  IT00000519 This report “Malign foreign interference and information influence on video game platforms: understanding the adversarial playbook” looks at influence operations in relation to gaming. Part of the report looks at the use of gaming platforms, including DLive; a streaming platform (T0152.008: Live Streaming Platform); <br><br><i>“Like Twitch and YouTube, DLive is a video streaming service that enables users (also known as "streamers," or "content creators") to record themselves talking, playing video games, and other activities [...] DLive is built on blockchain technology, using its own currency directly through it rather than relying on advertising revenue.”<br><br>[...]<br><br>The emergence of blockchain technology has also created opportunities for reduced censorship. Due to the decentralised nature of blockchain platforms, content deletion from numerous servers takes longer than centralised systems. While DLive has community guidelines that forbid harassment or hate speech, it also allegedly provides users with protection from deplatforming, a practice where tech companies prevent individuals or groups from using their websites (Cohen, 2020). DLive's lack of content moderation and deplatforming has attracted far-right extremists and fringe streamers who have been barred from mainstream social media platforms like YouTube (Cohen, 2020; Gais & Edison Hayden, 2020). PewDiePie, one of YouTube's most popular content creators with nearly 94 million subscribers, moved exclusively to DLive in 2019. Although financial factors played a significant role in PewDiePie's decision, some have been drawn to DLive as a consequence of being deplatformed from other video streaming services (Gais & Edison Hayden, 2020).<br><br>According to recent findings from ISD, extremist groups have taken advantage of the relative lack of content moderation. 
The platform has been used to spread racist, sexist, and homophobic content, as well as conspiracy theories that would likely be banned on other platforms (Thomas, 2021). DLive is also known to have played a role in the events leading up to the January 6th Capitol insurrection, with far-right extremists livestreaming the event and receiving donations from viewers (Lakhani, 2021, p. 9). In response to the storming of the Capitol, DLive has implemented stricter content moderation policies, including demonetisation and the banning of influential figures associated with far-right extremism (ibid, p. 18). The findings of ISD's analysis of DLive indicates that these actions reduced the "safe harbor" that extremists had previously enjoyed on DLive (Thomas, 2021). However, some claim that extremism still has a foothold on the platform despite these efforts to remove it (Schlegel, 2021b).</i> |
DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW

View File

@@ -0,0 +1,31 @@
# Incident I00125: The Agency
* **Summary:** <i>From a nondescript office building in St. Petersburg, Russia, an army of well-paid “trolls” has tried to wreak havoc all around the Internet — and in real-life American communities.</i>
* **Incident type:**
* **Year started:**
* **Countries:** ,
* **Found via:**
* **Date added:**
| Reference | Pub Date | Authors | Org | Archive |
| --------- | -------- | ------- | --- | ------- |
| Technique | Description given for this incident |
| --------- | ------------------------- |
| [T0087 Develop Video-Based Content](../../generated_pages/techniques/T0087.md) |  IT00000524 In 2014 threat actors attributed to Russia spread the false narrative that a local chemical plant had leaked toxic fumes. This report discusses aspects of the operation:<br><br><i>[The chemical plant leak] hoax was just one in a wave of similar attacks during the second half of last year. On Dec. 13, two months after a handful of Ebola cases in the United States touched off a minor media panic, many of the same Twitter accounts used to spread the Columbian Chemicals hoax began to post about an outbreak of Ebola in Atlanta. [...] Again, the attention to detail was remarkable, suggesting a tremendous amount of effort. A YouTube video showed a team of hazmat-suited medical workers transporting a victim from the airport. Beyoncés recent single “7/11” played in the background, an apparent attempt to establish the videos contemporaneity. A truck in the parking lot sported the logo of the Hartsfield-Jackson Atlanta International Airport.</i><br><br>Accounts which previously presented as Louisiana locals were repurposed for use in a different campaign, this time presenting as locals to Atlanta, a place over 500 miles away from Louisiana and in a different timezone (T0146: Account, T0097.101: Local Persona, T0143.002: Fabricated Persona, T0151.008: Microblogging Platform, T0150.004: Repurposed). <br><br>A video was created which appeared to support the campaigns narrative (T0087: Develop Video-Based Content), with great attention given to small details which made the video appear more legitimate. |
| [T0097.101 Local Persona](../../generated_pages/techniques/T0097.101.md) | In 2014 threat actors attributed to Russia spread the false narrative that a local chemical plant had leaked toxic fumes. This report discusses aspects of the operation:<br><br><i>[The chemical plant leak] hoax was just one in a wave of similar attacks during the second half of last year. On Dec. 13, two months after a handful of Ebola cases in the United States touched off a minor media panic, many of the same Twitter accounts used to spread the Columbian Chemicals hoax began to post about an outbreak of Ebola in Atlanta. [...] Again, the attention to detail was remarkable, suggesting a tremendous amount of effort. A YouTube video showed a team of hazmat-suited medical workers transporting a victim from the airport. Beyoncés recent single “7/11” played in the background, an apparent attempt to establish the videos contemporaneity. A truck in the parking lot sported the logo of the Hartsfield-Jackson Atlanta International Airport.</i><br><br>Accounts which previously presented as Louisiana locals were repurposed for use in a different campaign, this time presenting as locals to Atlanta, a place over 500 miles away from Louisiana and in a different timezone (T0146: Account, T0097.101: Local Persona, T0143.002: Fabricated Persona, T0151.008: Microblogging Platform, T0150.004: Repurposed). <br><br>A video was created which appeared to support the campaigns narrative (T0087: Develop Video-Based Content), with great attention given to small details which made the video appear more legitimate. |
| [T0143.002 Fabricated Persona](../../generated_pages/techniques/T0143.002.md) |  IT00000521 In 2014 threat actors attributed to Russia spread the false narrative that a local chemical plant had leaked toxic fumes. This report discusses aspects of the operation:<br><br><i>[The chemical plant leak] hoax was just one in a wave of similar attacks during the second half of last year. On Dec. 13, two months after a handful of Ebola cases in the United States touched off a minor media panic, many of the same Twitter accounts used to spread the Columbian Chemicals hoax began to post about an outbreak of Ebola in Atlanta. [...] Again, the attention to detail was remarkable, suggesting a tremendous amount of effort. A YouTube video showed a team of hazmat-suited medical workers transporting a victim from the airport. Beyoncés recent single “7/11” played in the background, an apparent attempt to establish the videos contemporaneity. A truck in the parking lot sported the logo of the Hartsfield-Jackson Atlanta International Airport.</i><br><br>Accounts which previously presented as Louisiana locals were repurposed for use in a different campaign, this time presenting as locals to Atlanta, a place over 500 miles away from Louisiana and in a different timezone (T0146: Account, T0097.101: Local Persona, T0143.002: Fabricated Persona, T0151.008: Microblogging Platform, T0150.004: Repurposed). <br><br>A video was created which appeared to support the campaigns narrative (T0087: Develop Video-Based Content), with great attention given to small details which made the video appear more legitimate. |
| [T0146 Account](../../generated_pages/techniques/T0146.md) |  IT00000520 In 2014 threat actors attributed to Russia spread the false narrative that a local chemical plant had leaked toxic fumes. This report discusses aspects of the operation:<br><br><i>[The chemical plant leak] hoax was just one in a wave of similar attacks during the second half of last year. On Dec. 13, two months after a handful of Ebola cases in the United States touched off a minor media panic, many of the same Twitter accounts used to spread the Columbian Chemicals hoax began to post about an outbreak of Ebola in Atlanta. [...] Again, the attention to detail was remarkable, suggesting a tremendous amount of effort. A YouTube video showed a team of hazmat-suited medical workers transporting a victim from the airport. Beyoncés recent single “7/11” played in the background, an apparent attempt to establish the videos contemporaneity. A truck in the parking lot sported the logo of the Hartsfield-Jackson Atlanta International Airport.</i><br><br>Accounts which previously presented as Louisiana locals were repurposed for use in a different campaign, this time presenting as locals to Atlanta, a place over 500 miles away from Louisiana and in a different timezone (T0146: Account, T0097.101: Local Persona, T0143.002: Fabricated Persona, T0151.008: Microblogging Platform, T0150.004: Repurposed). <br><br>A video was created which appeared to support the campaigns narrative (T0087: Develop Video-Based Content), with great attention given to small details which made the video appear more legitimate. |
| [T0150.004 Repurposed](../../generated_pages/techniques/T0150.004.md) |  IT00000523 In 2014 threat actors attributed to Russia spread the false narrative that a local chemical plant had leaked toxic fumes. This report discusses aspects of the operation:<br><br><i>[The chemical plant leak] hoax was just one in a wave of similar attacks during the second half of last year. On Dec. 13, two months after a handful of Ebola cases in the United States touched off a minor media panic, many of the same Twitter accounts used to spread the Columbian Chemicals hoax began to post about an outbreak of Ebola in Atlanta. [...] Again, the attention to detail was remarkable, suggesting a tremendous amount of effort. A YouTube video showed a team of hazmat-suited medical workers transporting a victim from the airport. Beyoncés recent single “7/11” played in the background, an apparent attempt to establish the videos contemporaneity. A truck in the parking lot sported the logo of the Hartsfield-Jackson Atlanta International Airport.</i><br><br>Accounts which previously presented as Louisiana locals were repurposed for use in a different campaign, this time presenting as locals to Atlanta, a place over 500 miles away from Louisiana and in a different timezone (T0146: Account, T0097.101: Local Persona, T0143.002: Fabricated Persona, T0151.008: Microblogging Platform, T0150.004: Repurposed). <br><br>A video was created which appeared to support the campaigns narrative (T0087: Develop Video-Based Content), with great attention given to small details which made the video appear more legitimate. |
| [T0151.008 Microblogging Platform](../../generated_pages/techniques/T0151.008.md) |  IT00000522 In 2014 threat actors attributed to Russia spread the false narrative that a local chemical plant had leaked toxic fumes. This report discusses aspects of the operation:<br><br><i>[The chemical plant leak] hoax was just one in a wave of similar attacks during the second half of last year. On Dec. 13, two months after a handful of Ebola cases in the United States touched off a minor media panic, many of the same Twitter accounts used to spread the Columbian Chemicals hoax began to post about an outbreak of Ebola in Atlanta. [...] Again, the attention to detail was remarkable, suggesting a tremendous amount of effort. A YouTube video showed a team of hazmat-suited medical workers transporting a victim from the airport. Beyoncés recent single “7/11” played in the background, an apparent attempt to establish the videos contemporaneity. A truck in the parking lot sported the logo of the Hartsfield-Jackson Atlanta International Airport.</i><br><br>Accounts which previously presented as Louisiana locals were repurposed for use in a different campaign, this time presenting as locals to Atlanta, a place over 500 miles away from Louisiana and in a different timezone (T0146: Account, T0097.101: Local Persona, T0143.002: Fabricated Persona, T0151.008: Microblogging Platform, T0150.004: Repurposed). <br><br>A video was created which appeared to support the campaigns narrative (T0087: Develop Video-Based Content), with great attention given to small details which made the video appear more legitimate. |
DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW
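Analyst note: one practical detection angle for T0150.004 Repurposed is that an account re-tasked to a new locale often keeps its old posting rhythm. A minimal sketch of that heuristic, using hypothetical timestamps and an illustrative overlap threshold (not taken from the report):

```python
from collections import Counter
from datetime import datetime

def active_hours(timestamps, top_n=8):
    """Return the account's top-N most active UTC hours."""
    hours = Counter(datetime.fromisoformat(t).hour for t in timestamps)
    return {h for h, _ in hours.most_common(top_n)}

def likely_repurposed(old_posts, new_posts, min_overlap=0.5):
    """Flag an account whose claimed locale changed between campaigns
    but whose posting rhythm did not: a high overlap in active hours
    across the two windows is consistent with the same operator."""
    old, new = active_hours(old_posts), active_hours(new_posts)
    overlap = len(old & new) / max(len(old | new), 1)
    return overlap >= min_overlap

# Hypothetical data: "Louisiana"-persona posts, then later "Atlanta"-persona posts
louisiana = [f"2014-09-11T{h:02d}:05:00" for h in (6, 7, 8, 9)]
atlanta = [f"2014-12-13T{h:02d}:40:00" for h in (6, 7, 8, 10)]
print(likely_repurposed(louisiana, atlanta))  # True: posting rhythm unchanged
```

This only surfaces candidates for manual review; legitimate accounts also relocate, so the threshold and window sizes would need tuning against real data.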

# Incident I00126: Charming Kitten Updates POWERSTAR with an InterPlanetary Twist
* **Summary:** <i>Volexity often uncovers spear-phishing campaigns from Charming Kitten against its own customers. These observed spear-phishing attacks are typically aimed at credential harvesting rather than deploying malware. However, in a recently detected spear-phishing campaign, Volexity discovered that Charming Kitten was attempting to distribute an updated version of one of their backdoors, which Volexity calls POWERSTAR (also known as CharmPower).<br><br>This new version of POWERSTAR was analyzed by the Volexity team and led to the discovery that Charming Kitten has been evolving their malware alongside their spear-phishing techniques. Notably, there have been improved operational security measures placed in the malware to make it more difficult to analyze and collect intelligence. Fortunately, Volexity had all the necessary pieces and was able to fully analyze this new POWERSTAR variant.<br><br>Volexity found the latest POWERSTAR variant to be more complex and assesses that it is likely supported by a custom server-side component, which automates simple actions for the malware operator. It is also notable that this latest version of the malware has a variety of interesting features, including the use of the InterPlanetary File System (IPFS), as well as remotely hosting its decryption function and configuration details on publicly accessible cloud hosting.<br><br>This blog post discusses Charming Kitten's spear-phishing activity, but it largely focuses on detection and analysis of the new variant of the POWERSTAR backdoor.</i>
* **Incident type:**
* **Year started:**
* **Countries:**
* **Found via:**
* **Date added:**
| Reference | Pub Date | Authors | Org | Archive |
| --------- | -------- | ------- | --- | ------- |
| Technique | Description given for this incident |
| --------- | ------------------------- |
| [T0085.004 Develop Document](../../generated_pages/techniques/T0085.004.md) |  IT00000530 <i>The target of the recently observed [highly targeted spearphishing attack by “Charming Kitten”, a hacker group attributed to Iran] had published an article related to Iran. The publicity appears to have garnered the attention of Charming Kitten, who subsequently created an email address to impersonate a reporter of an Israeli media organization in order to send the target an email. Prior to sending malware to the target, the attacker simply asked if the target would be open to reviewing a document they had written related to US foreign policy. The target agreed to do so, since this was not an unusual request; they are frequently asked by journalists to review opinion pieces relating to their field of work.<br><br>In an effort to further gain the target's confidence, Charming Kitten continued the interaction with another benign email containing a list of questions, to which the target then responded with answers. After multiple days of benign and seemingly legitimate interaction, Charming Kitten finally sent a “draft report”; this was the first time anything overtly malicious occurred. The “draft report” was, in fact, a password-protected RAR file containing a malicious LNK file. The password for the RAR file was provided in a subsequent email.</i><br><br>In this example, threat actors created an email address on a domain which impersonated an existing Israeli news organisation, posing as a reporter who worked there (T0097.102: Journalist Persona, T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0149.003: Lookalike Domain, T0149.002: Email Domain) in order to convince the target to download a document containing malware (T0085.004: Develop Document, T0147.003: Malware). |
| [T0097.102 Journalist Persona](../../generated_pages/techniques/T0097.102.md) |  IT00000525 <i>The target of the recently observed [highly targeted spearphishing attack by “Charming Kitten”, a hacker group attributed to Iran] had published an article related to Iran. The publicity appears to have garnered the attention of Charming Kitten, who subsequently created an email address to impersonate a reporter of an Israeli media organization in order to send the target an email. Prior to sending malware to the target, the attacker simply asked if the target would be open to reviewing a document they had written related to US foreign policy. The target agreed to do so, since this was not an unusual request; they are frequently asked by journalists to review opinion pieces relating to their field of work.<br><br>In an effort to further gain the target's confidence, Charming Kitten continued the interaction with another benign email containing a list of questions, to which the target then responded with answers. After multiple days of benign and seemingly legitimate interaction, Charming Kitten finally sent a “draft report”; this was the first time anything overtly malicious occurred. The “draft report” was, in fact, a password-protected RAR file containing a malicious LNK file. The password for the RAR file was provided in a subsequent email.</i><br><br>In this example, threat actors created an email address on a domain which impersonated an existing Israeli news organisation, posing as a reporter who worked there (T0097.102: Journalist Persona, T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0149.003: Lookalike Domain, T0149.002: Email Domain) in order to convince the target to download a document containing malware (T0085.004: Develop Document, T0147.003: Malware). |
| [T0097.202 News Outlet Persona](../../generated_pages/techniques/T0097.202.md) |  IT00000526 <i>The target of the recently observed [highly targeted spearphishing attack by “Charming Kitten”, a hacker group attributed to Iran] had published an article related to Iran. The publicity appears to have garnered the attention of Charming Kitten, who subsequently created an email address to impersonate a reporter of an Israeli media organization in order to send the target an email. Prior to sending malware to the target, the attacker simply asked if the target would be open to reviewing a document they had written related to US foreign policy. The target agreed to do so, since this was not an unusual request; they are frequently asked by journalists to review opinion pieces relating to their field of work.<br><br>In an effort to further gain the target's confidence, Charming Kitten continued the interaction with another benign email containing a list of questions, to which the target then responded with answers. After multiple days of benign and seemingly legitimate interaction, Charming Kitten finally sent a “draft report”; this was the first time anything overtly malicious occurred. The “draft report” was, in fact, a password-protected RAR file containing a malicious LNK file. The password for the RAR file was provided in a subsequent email.</i><br><br>In this example, threat actors created an email address on a domain which impersonated an existing Israeli news organisation, posing as a reporter who worked there (T0097.102: Journalist Persona, T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0149.003: Lookalike Domain, T0149.002: Email Domain) in order to convince the target to download a document containing malware (T0085.004: Develop Document, T0147.003: Malware). |
| [T0143.003 Impersonated Persona](../../generated_pages/techniques/T0143.003.md) |  IT00000527 <i>The target of the recently observed [highly targeted spearphishing attack by “Charming Kitten”, a hacker group attributed to Iran] had published an article related to Iran. The publicity appears to have garnered the attention of Charming Kitten, who subsequently created an email address to impersonate a reporter of an Israeli media organization in order to send the target an email. Prior to sending malware to the target, the attacker simply asked if the target would be open to reviewing a document they had written related to US foreign policy. The target agreed to do so, since this was not an unusual request; they are frequently asked by journalists to review opinion pieces relating to their field of work.<br><br>In an effort to further gain the target's confidence, Charming Kitten continued the interaction with another benign email containing a list of questions, to which the target then responded with answers. After multiple days of benign and seemingly legitimate interaction, Charming Kitten finally sent a “draft report”; this was the first time anything overtly malicious occurred. The “draft report” was, in fact, a password-protected RAR file containing a malicious LNK file. The password for the RAR file was provided in a subsequent email.</i><br><br>In this example, threat actors created an email address on a domain which impersonated an existing Israeli news organisation, posing as a reporter who worked there (T0097.102: Journalist Persona, T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0149.003: Lookalike Domain, T0149.002: Email Domain) in order to convince the target to download a document containing malware (T0085.004: Develop Document, T0147.003: Malware). |
| [T0147.003 Malware](../../generated_pages/techniques/T0147.003.md) |  IT00000531 <i>The target of the recently observed [highly targeted spearphishing attack by “Charming Kitten”, a hacker group attributed to Iran] had published an article related to Iran. The publicity appears to have garnered the attention of Charming Kitten, who subsequently created an email address to impersonate a reporter of an Israeli media organization in order to send the target an email. Prior to sending malware to the target, the attacker simply asked if the target would be open to reviewing a document they had written related to US foreign policy. The target agreed to do so, since this was not an unusual request; they are frequently asked by journalists to review opinion pieces relating to their field of work.<br><br>In an effort to further gain the target's confidence, Charming Kitten continued the interaction with another benign email containing a list of questions, to which the target then responded with answers. After multiple days of benign and seemingly legitimate interaction, Charming Kitten finally sent a “draft report”; this was the first time anything overtly malicious occurred. The “draft report” was, in fact, a password-protected RAR file containing a malicious LNK file. The password for the RAR file was provided in a subsequent email.</i><br><br>In this example, threat actors created an email address on a domain which impersonated an existing Israeli news organisation, posing as a reporter who worked there (T0097.102: Journalist Persona, T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0149.003: Lookalike Domain, T0149.002: Email Domain) in order to convince the target to download a document containing malware (T0085.004: Develop Document, T0147.003: Malware). |
| [T0149.002 Email Domain](../../generated_pages/techniques/T0149.002.md) |  IT00000529 <i>The target of the recently observed [highly targeted spearphishing attack by “Charming Kitten”, a hacker group attributed to Iran] had published an article related to Iran. The publicity appears to have garnered the attention of Charming Kitten, who subsequently created an email address to impersonate a reporter of an Israeli media organization in order to send the target an email. Prior to sending malware to the target, the attacker simply asked if the target would be open to reviewing a document they had written related to US foreign policy. The target agreed to do so, since this was not an unusual request; they are frequently asked by journalists to review opinion pieces relating to their field of work.<br><br>In an effort to further gain the target's confidence, Charming Kitten continued the interaction with another benign email containing a list of questions, to which the target then responded with answers. After multiple days of benign and seemingly legitimate interaction, Charming Kitten finally sent a “draft report”; this was the first time anything overtly malicious occurred. The “draft report” was, in fact, a password-protected RAR file containing a malicious LNK file. The password for the RAR file was provided in a subsequent email.</i><br><br>In this example, threat actors created an email address on a domain which impersonated an existing Israeli news organisation, posing as a reporter who worked there (T0097.102: Journalist Persona, T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0149.003: Lookalike Domain, T0149.002: Email Domain) in order to convince the target to download a document containing malware (T0085.004: Develop Document, T0147.003: Malware). |
| [T0149.003 Lookalike Domain](../../generated_pages/techniques/T0149.003.md) |  IT00000528 <i>The target of the recently observed [highly targeted spearphishing attack by “Charming Kitten”, a hacker group attributed to Iran] had published an article related to Iran. The publicity appears to have garnered the attention of Charming Kitten, who subsequently created an email address to impersonate a reporter of an Israeli media organization in order to send the target an email. Prior to sending malware to the target, the attacker simply asked if the target would be open to reviewing a document they had written related to US foreign policy. The target agreed to do so, since this was not an unusual request; they are frequently asked by journalists to review opinion pieces relating to their field of work.<br><br>In an effort to further gain the target's confidence, Charming Kitten continued the interaction with another benign email containing a list of questions, to which the target then responded with answers. After multiple days of benign and seemingly legitimate interaction, Charming Kitten finally sent a “draft report”; this was the first time anything overtly malicious occurred. The “draft report” was, in fact, a password-protected RAR file containing a malicious LNK file. The password for the RAR file was provided in a subsequent email.</i><br><br>In this example, threat actors created an email address on a domain which impersonated an existing Israeli news organisation, posing as a reporter who worked there (T0097.102: Journalist Persona, T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0149.003: Lookalike Domain, T0149.002: Email Domain) in order to convince the target to download a document containing malware (T0085.004: Develop Document, T0147.003: Malware). |
DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW
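Analyst note: one simple triage heuristic for T0149.003 Lookalike Domain is edit-distance similarity between a sender's email domain and a watchlist of legitimate domains. A stdlib-only sketch; the watchlist entries, example domain, and threshold below are hypothetical placeholders, not taken from the incident:

```python
from difflib import SequenceMatcher

# Hypothetical watchlist of legitimate news-outlet domains
LEGITIMATE = ["haaretz.com", "timesofisrael.com", "jpost.com"]

def lookalike_of(domain, watchlist=LEGITIMATE, threshold=0.85):
    """Return the legitimate domain this one most resembles, when the
    similarity is high but not an exact match (exact match = legitimate)."""
    best, score = None, 0.0
    for legit in watchlist:
        s = SequenceMatcher(None, domain.lower(), legit).ratio()
        if s > score:
            best, score = legit, s
    if best and domain.lower() != best and score >= threshold:
        return best
    return None

print(lookalike_of("haarets.com"))  # haaretz.com (one-character typosquat)
print(lookalike_of("haaretz.com"))  # None (exact match, legitimate)
```

In production this would be combined with other signals (registration date, registrar, MX records); string similarity alone misses lookalikes built from different TLDs or added words.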

# Incident I00127: Iranian APTs Dress Up as Hacktivists for Disruption, Influence Ops
* **Summary:** <i>Iran has taken a page from the Russian playbook: Passing off military groups as civilians for the sake of PR and plausible deniability.</i>
* **Incident type:**
* **Year started:**
* **Countries:**
* **Found via:**
* **Date added:**
| Reference | Pub Date | Authors | Org | Archive |
| --------- | -------- | ------- | --- | ------- |
| Technique | Description given for this incident |
| --------- | ------------------------- |
| [T0097.104 Hacktivist Persona](../../generated_pages/techniques/T0097.104.md) |  IT00000532 <i>Iranian state-backed advanced persistent threat (APT) groups have been masquerading as hacktivists, claiming attacks against Israeli critical infrastructure and air defense systems.<br><br>[...]<br><br>What's clearer are the benefits of the model itself: creating a layer of plausible deniability for the state, and the impression among the public that their attacks are grassroots-inspired. While this deniability has always been a key driver with state-sponsored cyberattacks, researchers characterized this instance as noteworthy for the effort behind the charade.<br><br>"We've seen a lot of hacktivist activity that seems to be nation-states trying to have that 'deniable' capability," Adam Meyers, CrowdStrike senior vice president for counter adversary operations said in a press conference this week. "And so these groups continue to maintain activity, moving from what was traditionally website defacements and DDoS attacks, into a lot of hack and leak operations."<br><br>To sell the persona, faketivists like to adopt the aesthetic, rhetoric, tactics, techniques, and procedures (TTPs), and sometimes the actual names and iconography associated with legitimate hacktivist outfits. Keen eyes will spot that they typically arise just after major geopolitical events, without an established history of activity, in alignment with the interests of their government sponsors.<br><br>Oftentimes, it's difficult to separate the faketivists from the hacktivists, as each might promote and support the activities of the other.</i><br><br>In this example analysts from CrowdStrike assert that hacker groups took on the persona of hacktivists to disguise the state-backed nature of their cyber attack campaign (T0097.104: Hacktivist Persona). At times state-backed hacktivists will impersonate existing hacktivist organisations (T0097.104: Hacktivist Persona, T0143.003: Impersonated Persona). |
| [T0143.003 Impersonated Persona](../../generated_pages/techniques/T0143.003.md) |  IT00000533 <i>Iranian state-backed advanced persistent threat (APT) groups have been masquerading as hacktivists, claiming attacks against Israeli critical infrastructure and air defense systems.<br><br>[...]<br><br>What's clearer are the benefits of the model itself: creating a layer of plausible deniability for the state, and the impression among the public that their attacks are grassroots-inspired. While this deniability has always been a key driver with state-sponsored cyberattacks, researchers characterized this instance as noteworthy for the effort behind the charade.<br><br>"We've seen a lot of hacktivist activity that seems to be nation-states trying to have that 'deniable' capability," Adam Meyers, CrowdStrike senior vice president for counter adversary operations said in a press conference this week. "And so these groups continue to maintain activity, moving from what was traditionally website defacements and DDoS attacks, into a lot of hack and leak operations."<br><br>To sell the persona, faketivists like to adopt the aesthetic, rhetoric, tactics, techniques, and procedures (TTPs), and sometimes the actual names and iconography associated with legitimate hacktivist outfits. Keen eyes will spot that they typically arise just after major geopolitical events, without an established history of activity, in alignment with the interests of their government sponsors.<br><br>Oftentimes, it's difficult to separate the faketivists from the hacktivists, as each might promote and support the activities of the other.</i><br><br>In this example analysts from CrowdStrike assert that hacker groups took on the persona of hacktivists to disguise the state-backed nature of their cyber attack campaign (T0097.104: Hacktivist Persona). At times state-backed hacktivists will impersonate existing hacktivist organisations (T0097.104: Hacktivist Persona, T0143.003: Impersonated Persona). |
DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW
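Analyst note: the timing pattern CrowdStrike describes, a "hacktivist" brand that appears just after a major geopolitical event with no established history of activity, can be screened for mechanically. A minimal sketch; the field names, dates, and 30-day window are illustrative assumptions:

```python
from datetime import date

def faketivist_signals(created, first_event, prior_posts, window_days=30):
    """Collect weak signals that a self-declared hacktivist persona may be
    a state-backed 'faketivist': creation shortly after a geopolitical
    event, and no activity before its first claimed attack."""
    signals = []
    delta = (created - first_event).days
    if 0 <= delta <= window_days:
        signals.append(f"account created within {window_days} days of event")
    if prior_posts == 0:
        signals.append("no activity history before first claimed attack")
    return signals

# Hypothetical persona: created 11 days after an event, with no prior posts
sigs = faketivist_signals(created=date(2023, 10, 18),
                          first_event=date(2023, 10, 7),
                          prior_posts=0)
print(sigs)
```

These are indicators, not proof; genuine grassroots groups also form in reaction to events, so such signals only prioritise personas for deeper attribution work.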

# Incident I00128: #TrollTracker: Outward Influence Operation From Iran
* **Summary:** <i>Facebook removed 783 assets — pages, groups, and accounts — that the company assessed to be associated with an Iran-based network for “coordinated inauthentic behavior.” The network targeted users in more than a dozen countries and posted content in at least eight different languages.<br><br>All but 97 pages were previously removed by a mixture of automated systems, spam detectors, or varied platform integrity team. Before the takedown, Facebook shared the remaining 97 active pages with @DFRLab.</i>
* **Incident type:**
* **Year started:**
* **Countries:**
* **Found via:**
* **Date added:**
| Reference | Pub Date | Authors | Org | Archive |
| --------- | -------- | ------- | --- | ------- |
| Technique | Description given for this incident |
| --------- | ------------------------- |
| [T0097.208 Social Cause Persona](../../generated_pages/techniques/T0097.208.md) |  IT00000534 <i>[Meta removed a network of assets for coordinated inauthentic behaviour. One page] in the network, @StopMEK, was promoting views against the People's Mujahedin of Iran (MEK), the largest and most active political opposition group against the Islamic Republic of Iran Leadership.<br><br>The content on the page drew narratives showing parallels between the Islamic State of Iraq and Syria (ISIS) and the MEK.<br><br>Apart from images and memes, the @StopMEK page shared a link to an archived report on how the United States was monitoring the MEK's movement in Iran in the mid-1990s. The file was embedded as a QR code on one of the page's images.</i><br><br>In this example, a Facebook page presented itself as focusing on a political cause (T0097.208: Social Cause Persona, T0151.001: Social Media Platform, T0151.002: Online Community Group). Within the page it embedded a QR code (T0122: Direct Users to Alternative Platforms, T0153.004: QR Code), which took users to a document hosted on another website (T0152.004: Website). |
| [T0122 Direct Users to Alternative Platforms](../../generated_pages/techniques/T0122.md) |  IT00000537 <i>[Meta removed a network of assets for coordinated inauthentic behaviour. One page] in the network, @StopMEK, was promoting views against the People's Mujahedin of Iran (MEK), the largest and most active political opposition group against the Islamic Republic of Iran Leadership.<br><br>The content on the page drew narratives showing parallels between the Islamic State of Iraq and Syria (ISIS) and the MEK.<br><br>Apart from images and memes, the @StopMEK page shared a link to an archived report on how the United States was monitoring the MEK's movement in Iran in the mid-1990s. The file was embedded as a QR code on one of the page's images.</i><br><br>In this example, a Facebook page presented itself as focusing on a political cause (T0097.208: Social Cause Persona, T0151.001: Social Media Platform, T0151.002: Online Community Group). Within the page it embedded a QR code (T0122: Direct Users to Alternative Platforms, T0153.004: QR Code), which took users to a document hosted on another website (T0152.004: Website). |
| [T0151.001 Social Media Platform](../../generated_pages/techniques/T0151.001.md) |  IT00000535 <i>[Meta removed a network of assets for coordinated inauthentic behaviour. One page] in the network, @StopMEK, was promoting views against the People's Mujahedin of Iran (MEK), the largest and most active political opposition group against the Islamic Republic of Iran Leadership.<br><br>The content on the page drew narratives showing parallels between the Islamic State of Iraq and Syria (ISIS) and the MEK.<br><br>Apart from images and memes, the @StopMEK page shared a link to an archived report on how the United States was monitoring the MEK's movement in Iran in the mid-1990s. The file was embedded as a QR code on one of the page's images.</i><br><br>In this example, a Facebook page presented itself as focusing on a political cause (T0097.208: Social Cause Persona, T0151.001: Social Media Platform, T0151.002: Online Community Group). Within the page it embedded a QR code (T0122: Direct Users to Alternative Platforms, T0153.004: QR Code), which took users to a document hosted on another website (T0152.004: Website). |
| [T0151.002 Online Community Group](../../generated_pages/techniques/T0151.002.md) |  IT00000536 <i>[Meta removed a network of assets for coordinated inauthentic behaviour. One page] in the network, @StopMEK, was promoting views against the People's Mujahedin of Iran (MEK), the largest and most active political opposition group against the Islamic Republic of Iran Leadership.<br><br>The content on the page drew narratives showing parallels between the Islamic State of Iraq and Syria (ISIS) and the MEK.<br><br>Apart from images and memes, the @StopMEK page shared a link to an archived report on how the United States was monitoring the MEK's movement in Iran in the mid-1990s. The file was embedded as a QR code on one of the page's images.</i><br><br>In this example, a Facebook page presented itself as focusing on a political cause (T0097.208: Social Cause Persona, T0151.001: Social Media Platform, T0151.002: Online Community Group). Within the page it embedded a QR code (T0122: Direct Users to Alternative Platforms, T0153.004: QR Code), which took users to a document hosted on another website (T0152.004: Website). |
| [T0152.004 Website](../../generated_pages/techniques/T0152.004.md) |  IT00000539 <i>[Meta removed a network of assets for coordinated inauthentic behaviour. One page] in the network, @StopMEK, was promoting views against the People's Mujahedin of Iran (MEK), the largest and most active political opposition group against the Islamic Republic of Iran Leadership.<br><br>The content on the page drew narratives showing parallels between the Islamic State of Iraq and Syria (ISIS) and the MEK.<br><br>Apart from images and memes, the @StopMEK page shared a link to an archived report on how the United States was monitoring the MEK's movement in Iran in the mid-1990s. The file was embedded as a QR code on one of the page's images.</i><br><br>In this example, a Facebook page presented itself as focusing on a political cause (T0097.208: Social Cause Persona, T0151.001: Social Media Platform, T0151.002: Online Community Group). Within the page it embedded a QR code (T0122: Direct Users to Alternative Platforms, T0153.004: QR Code), which took users to a document hosted on another website (T0152.004: Website). |
| [T0153.004 QR Code](../../generated_pages/techniques/T0153.004.md) |  IT00000538 <i>[Meta removed a network of assets for coordinated inauthentic behaviour. One page] in the network, @StopMEK, was promoting views against the People's Mujahedin of Iran (MEK), the largest and most active political opposition group against the Islamic Republic of Iran Leadership.<br><br>The content on the page drew narratives showing parallels between the Islamic State of Iraq and Syria (ISIS) and the MEK.<br><br>Apart from images and memes, the @StopMEK page shared a link to an archived report on how the United States was monitoring the MEK's movement in Iran in the mid-1990s. The file was embedded as a QR code on one of the page's images.</i><br><br>In this example, a Facebook page presented itself as focusing on a political cause (T0097.208: Social Cause Persona, T0151.001: Social Media Platform, T0151.002: Online Community Group). Within the page it embedded a QR code (T0122: Direct Users to Alternative Platforms, T0153.004: QR Code), which took users to a document hosted on another website (T0152.004: Website). |
DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW
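Analyst note: for T0122 / T0153.004, once a QR code's payload has been decoded (with an external decoder library or device), triage reduces to checking whether the payload sends users off the platform they saw it on. A stdlib sketch with hypothetical URLs and domains:

```python
from urllib.parse import urlparse

def off_platform(payload_url, host_platform_domains):
    """True when a decoded QR/link payload directs users away from the
    platform it was posted on (T0122), e.g. a Facebook page image whose
    QR code resolves to an external website."""
    domain = urlparse(payload_url).netloc.lower()
    return not any(domain == d or domain.endswith("." + d)
                   for d in host_platform_domains)

# Hypothetical payload decoded from a page image
print(off_platform("https://archive.example/report.pdf",
                   ["facebook.com", "fb.com"]))  # True
```

Redirection chains (URL shorteners, interstitial pages) would need to be resolved before this check, since operators often hide the final destination behind one.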

# Incident I00129: Teen who hacked Joe Biden and Bill Gates' Twitter accounts sentenced to three years in prison
* **Summary:** <i>An 18-year-old hacker who pulled off a huge breach in 2020, infiltrating several high profile Twitter accounts to solicit bitcoin transactions, has agreed to serve three years in prison for his actions.</i>
* **Incident type:**
* **Year started:**
* **Countries:**
* **Found via:**
* **Date added:**
| Reference | Pub Date | Authors | Org | Archive |
| --------- | -------- | ------- | --- | ------- |
| Technique | Description given for this incident |
| --------- | ------------------------- |
| [T0143.003 Impersonated Persona](../../generated_pages/techniques/T0143.003.md) |  IT00000544 <i>An 18-year-old hacker who pulled off a huge breach in 2020, infiltrating several high profile Twitter accounts to solicit bitcoin transactions, has agreed to serve three years in prison for his actions.<br><br>Graham Ivan Clark, of Florida, was 17 years old at the time of the hack in July, during which he took over a number of major accounts including those of Joe Biden, Bill Gates and Kim Kardashian West.<br><br>Once he accessed them, Clark tweeted a link to a bitcoin address and wrote “all bitcoin sent to our address below will be sent back to you doubled!” According to court documents, Clark made more than $100,000 from the scheme, which his lawyers say he has since returned.<br><br>Clark was able to access the accounts after convincing an employee at Twitter he worked in the company's information technology department, according to the Tampa Bay Times.</i><br><br>In this example, a threat actor gained access to Twitter's customer service portal through social engineering (T0146.004: Administrator Account, T0150.005: Compromised, T0151.008: Microblogging Platform), which they used to take over accounts of public figures (T0146.003: Verified Account, T0143.003: Impersonated Persona, T0150.005: Compromised, T0151.008: Microblogging Platform).<br><br>The threat actor used these compromised accounts to trick their followers into sending bitcoin to their wallet (T0148.009: Cryptocurrency Wallet). |
| [T0146.004 Administrator Account](../../generated_pages/techniques/T0146.004.md) |  IT00000543 <i>An 18-year-old hacker who pulled off a huge breach in 2020, infiltrating several high profile Twitter accounts to solicit bitcoin transactions, has agreed to serve three years in prison for his actions.<br><br>Graham Ivan Clark, of Florida, was 17 years old at the time of the hack in July, during which he took over a number of major accounts including those of Joe Biden, Bill Gates and Kim Kardashian West.<br><br>Once he accessed them, Clark tweeted a link to a bitcoin address and wrote “all bitcoin sent to our address below will be sent back to you doubled!” According to court documents, Clark made more than $100,000 from the scheme, which his lawyers say he has since returned.<br><br>Clark was able to access the accounts after convincing an employee at Twitter he worked in the company's information technology department, according to the Tampa Bay Times.</i><br><br>In this example, a threat actor gained access to Twitter's customer service portal through social engineering (T0146.004: Administrator Account, T0150.005: Compromised, T0151.008: Microblogging Platform), which they used to take over accounts of public figures (T0146.003: Verified Account, T0143.003: Impersonated Persona, T0150.005: Compromised, T0151.008: Microblogging Platform).<br><br>The threat actor used these compromised accounts to trick their followers into sending bitcoin to their wallet (T0148.009: Cryptocurrency Wallet). |
| [T0146.005 Lookalike Account ID](../../generated_pages/techniques/T0146.005.md) |  IT00000540 <i>An 18-year-old hacker who pulled off a huge breach in 2020, infiltrating several high profile Twitter accounts to solicit bitcoin transactions, has agreed to serve three years in prison for his actions.<br><br>Graham Ivan Clark, of Florida, was 17 years old at the time of the hack in July, during which he took over a number of major accounts including those of Joe Biden, Bill Gates and Kim Kardashian West.<br><br>Once he accessed them, Clark tweeted a link to a bitcoin address and wrote “all bitcoin sent to our address below will be sent back to you doubled!” According to court documents, Clark made more than $100,000 from the scheme, which his lawyers say he has since returned.<br><br>Clark was able to access the accounts after convincing an employee at Twitter he worked in the company's information technology department, according to the Tampa Bay Times.</i><br><br>In this example, a threat actor gained access to Twitter's customer service portal through social engineering (T0146.004: Administrator Account, T0150.005: Compromised, T0151.008: Microblogging Platform), which they used to take over accounts of public figures (T0146.003: Verified Account, T0143.003: Impersonated Persona, T0150.005: Compromised, T0151.008: Microblogging Platform).<br><br>The threat actor used these compromised accounts to trick their followers into sending bitcoin to their wallet (T0148.009: Cryptocurrency Wallet). |
| [T0148.009 Cryptocurrency Wallet](../../generated_pages/techniques/T0148.009.md) |  IT00000546 <i>An 18-year-old hacker who pulled off a huge breach in 2020, infiltrating several high profile Twitter accounts to solicit bitcoin transactions, has agreed to serve three years in prison for his actions.<br><br>Graham Ivan Clark, of Florida, was 17 years old at the time of the hack in July, during which he took over a number of major accounts including those of Joe Biden, Bill Gates and Kim Kardashian West.<br><br>Once he accessed them, Clark tweeted a link to a bitcoin address and wrote “all bitcoin sent to our address below will be sent back to you doubled!” According to court documents, Clark made more than $100,000 from the scheme, which his lawyers say he has since returned.<br><br>Clark was able to access the accounts after convincing an employee at Twitter he worked in the company's information technology department, according to the Tampa Bay Times.</i><br><br>In this example, a threat actor gained access to Twitter's customer service portal through social engineering (T0146.004: Administrator Account, T0150.005: Compromised, T0151.008: Microblogging Platform), which they used to take over accounts of public figures (T0146.003: Verified Account, T0143.003: Impersonated Persona, T0150.005: Compromised, T0151.008: Microblogging Platform).<br><br>The threat actor used these compromised accounts to trick their followers into sending bitcoin to their wallet (T0148.009: Cryptocurrency Wallet). |
| [T0150.005 Compromised](../../generated_pages/techniques/T0150.005.md) |  IT00000541 <i>An 18-year-old hacker who pulled off a huge breach in 2020, infiltrating several high profile Twitter accounts to solicit bitcoin transactions, has agreed to serve three years in prison for his actions.<br><br>Graham Ivan Clark, of Florida, was 17 years old at the time of the hack in July, during which he took over a number of major accounts including those of Joe Biden, Bill Gates and Kim Kardashian West.<br><br>Once he accessed them, Clark tweeted a link to a bitcoin address and wrote “all bitcoin sent to our address below will be sent back to you doubled!” According to court documents, Clark made more than $100,000 from the scheme, which his lawyers say he has since returned.<br><br>Clark was able to access the accounts after convincing an employee at Twitter he worked in the company's information technology department, according to the Tampa Bay Times.</i><br><br>In this example, a threat actor gained access to Twitter's customer service portal through social engineering (T0146.004: Administrator Account, T0150.005: Compromised, T0151.008: Microblogging Platform), which they used to take over accounts of public figures (T0146.003: Verified Account, T0143.003: Impersonated Persona, T0150.005: Compromised, T0151.008: Microblogging Platform).<br><br>The threat actor used these compromised accounts to trick their followers into sending bitcoin to their wallet (T0148.009: Cryptocurrency Wallet). |
| [T0150.005 Compromised](../../generated_pages/techniques/T0150.005.md) |  IT00000545 <i>An 18-year-old hacker who pulled off a huge breach in 2020, infiltrating several high profile Twitter accounts to solicit bitcoin transactions, has agreed to serve three years in prison for his actions.<br><br>Graham Ivan Clark, of Florida, was 17 years old at the time of the hack in July, during which he took over a number of major accounts including those of Joe Biden, Bill Gates and Kim Kardashian West.<br><br>Once he accessed them, Clark tweeted a link to a bitcoin address and wrote “all bitcoin sent to our address below will be sent back to you doubled!” According to court documents, Clark made more than $100,000 from the scheme, which his lawyers say he has since returned.<br><br>Clark was able to access the accounts after convincing an employee at Twitter he worked in the companys information technology department, according to the Tampa Bay Times.</i><br><br>In this example a threat actor gained access to Twitters customer service portal through social engineering (T0146.004: Administrator Account, T0150.005: Compromised, T0151.008: Microblogging Platform), which they used to take over accounts of public figures (T0146.003: Verified Account, T0143.003: Impersonated Persona, T0150.005: Compromised, T0151.008: Microblogging Platform).<br><br>The threat actor used these compromised accounts to trick their followers into sending bitcoin to their wallet (T0148.009: Cryptocurrency Wallet). |
| [T0151.008 Microblogging Platform](../../generated_pages/techniques/T0151.008.md) |  IT00000542 <i>An 18-year-old hacker who pulled off a huge breach in 2020, infiltrating several high profile Twitter accounts to solicit bitcoin transactions, has agreed to serve three years in prison for his actions.<br><br>Graham Ivan Clark, of Florida, was 17 years old at the time of the hack in July, during which he took over a number of major accounts including those of Joe Biden, Bill Gates and Kim Kardashian West.<br><br>Once he accessed them, Clark tweeted a link to a bitcoin address and wrote “all bitcoin sent to our address below will be sent back to you doubled!” According to court documents, Clark made more than $100,000 from the scheme, which his lawyers say he has since returned.<br><br>Clark was able to access the accounts after convincing an employee at Twitter he worked in the companys information technology department, according to the Tampa Bay Times.</i><br><br>In this example a threat actor gained access to Twitters customer service portal through social engineering (T0146.004: Administrator Account, T0150.005: Compromised, T0151.008: Microblogging Platform), which they used to take over accounts of public figures (T0146.003: Verified Account, T0143.003: Impersonated Persona, T0150.005: Compromised, T0151.008: Microblogging Platform).<br><br>The threat actor used these compromised accounts to trick their followers into sending bitcoin to their wallet (T0148.009: Cryptocurrency Wallet). |
DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW

View File

@ -770,272 +770,272 @@
<td></td>
</tr>
<tr>
<td><a href="incidents/.md"></a></td>
<td></td>
<td><a href="incidents/I00096.md">I00096</a></td>
<td>China ramps up use of AI misinformation</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td><a href="incidents/.md"></a></td>
<td></td>
<td><a href="incidents/I00097.md">I00097</a></td>
<td>Report: Not Just Algorithms</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td><a href="incidents/.md"></a></td>
<td></td>
<td><a href="incidents/I00098.md">I00098</a></td>
<td>Gaming The System: How Extremists Exploit Gaming Sites And What Can Be Done To Counter Them</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td><a href="incidents/.md"></a></td>
<td></td>
<td><a href="incidents/I00099.md">I00099</a></td>
<td>More Women Are Facing The Reality Of Deepfakes, And They're Ruining Lives</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td><a href="incidents/.md"></a></td>
<td></td>
<td><a href="incidents/I00100.md">I00100</a></td>
<td>Why ThisPersonDoesNotExist (and its copycats) need to be restricted</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td><a href="incidents/.md"></a></td>
<td></td>
<td><a href="incidents/I00101.md">I00101</a></td>
<td>Pro-Putin Disinformation Warriors Take War of Aggression to Reddit</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td><a href="incidents/.md"></a></td>
<td></td>
<td><a href="incidents/I00102.md">I00102</a></td>
<td>Ignore The Poway Synagogue Shooter's Manifesto: Pay Attention To 8chan's /pol/ Board</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td><a href="incidents/.md"></a></td>
<td></td>
<td><a href="incidents/I00103.md">I00103</a></td>
<td>The racist AI deepfake that fooled and divided a community</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td><a href="incidents/.md"></a></td>
<td></td>
<td><a href="incidents/I00104.md">I00104</a></td>
<td>Macron Campaign Hit With “Massive and Coordinated” Hacking Attack</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td><a href="incidents/.md"></a></td>
<td></td>
<td><a href="incidents/I00105.md">I00105</a></td>
<td>Gaming the System: The Use of Gaming-Adjacent Communication, Game and Mod Platforms by Extremist Actors</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td><a href="incidents/.md"></a></td>
<td></td>
<td><a href="incidents/I00106.md">I00106</a></td>
<td>Facebook Is Being Flooded With Gross AI-Generated Images of Hurricane Helene Devastation</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td><a href="incidents/.md"></a></td>
<td></td>
<td><a href="incidents/I00107.md">I00107</a></td>
<td>The Lies Russia Tells Itself</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td><a href="incidents/.md"></a></td>
<td></td>
<td><a href="incidents/I00108.md">I00108</a></td>
<td>How you thought you support the animals and you ended up funding white supremacists</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td><a href="incidents/.md"></a></td>
<td></td>
<td><a href="incidents/I00109.md">I00109</a></td>
<td>Coordinated Facebook Pages Designed to Fund a White Supremacist Agenda</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td><a href="incidents/.md"></a></td>
<td></td>
<td><a href="incidents/I00110.md">I00110</a></td>
<td>How COVID-19 conspiracists and extremists use crowdfunding platforms to fund their activities</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td><a href="incidents/.md"></a></td>
<td></td>
<td><a href="incidents/I00111.md">I00111</a></td>
<td>Patreon Is Bankrolling Climate Change Deniers While We All Burn</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td><a href="incidents/.md"></a></td>
<td></td>
<td><a href="incidents/I00112.md">I00112</a></td>
<td>Patreon allows disinformation and conspiracies to be monetised in Spain</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td><a href="incidents/.md"></a></td>
<td></td>
<td><a href="incidents/I00113.md">I00113</a></td>
<td>Inside the Shadowy World of Disinformation for Hire in Kenya</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td><a href="incidents/.md"></a></td>
<td></td>
<td><a href="incidents/I00114.md">I00114</a></td>
<td>Carol's Journey: What Facebook knew about how it radicalized users</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td><a href="incidents/.md"></a></td>
<td></td>
<td><a href="incidents/I00115.md">I00115</a></td>
<td>How Facebook shapes your feed</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td><a href="incidents/.md"></a></td>
<td></td>
<td><a href="incidents/I00116.md">I00116</a></td>
<td>Blue-tick scammers target consumers who complain on X</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td><a href="incidents/.md"></a></td>
<td></td>
<td><a href="incidents/I00117.md">I00117</a></td>
<td>The El Paso Shooting and the Gamification of Terror</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td><a href="incidents/.md"></a></td>
<td></td>
<td><a href="incidents/I00118.md">I00118</a></td>
<td>War Thunder players are once again leaking sensitive military technology information on a video game forum</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td><a href="incidents/.md"></a></td>
<td></td>
<td><a href="incidents/I00119.md">I00119</a></td>
<td>Independent journalist publishes Trump campaign document hacked by Iran despite election interference concerns</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td><a href="incidents/.md"></a></td>
<td></td>
<td><a href="incidents/I00120.md">I00120</a></td>
<td>factcheckUK or fakecheckUK? Reinventing the political faction as the impartial factchecker</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td><a href="incidents/.md"></a></td>
<td></td>
<td><a href="incidents/I00121.md">I00121</a></td>
<td>Operation Overload: how pro-Russian actors flood newsrooms with fake content and seek to divert their efforts</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td><a href="incidents/.md"></a></td>
<td></td>
<td><a href="incidents/I00122.md">I00122</a></td>
<td>The Extreme Right on Discord</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td><a href="incidents/.md"></a></td>
<td></td>
<td><a href="incidents/I00123.md">I00123</a></td>
<td>The Extreme Right on Steam</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td><a href="incidents/.md"></a></td>
<td></td>
<td><a href="incidents/I00124.md">I00124</a></td>
<td>Malign foreign interference and information influence on video game platforms: understanding the adversarial playbook</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td><a href="incidents/.md"></a></td>
<td></td>
<td><a href="incidents/I00125.md">I00125</a></td>
<td>The Agency</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td><a href="incidents/.md"></a></td>
<td></td>
<td><a href="incidents/I00126.md">I00126</a></td>
<td>Charming Kitten Updates POWERSTAR with an InterPlanetary Twist</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td><a href="incidents/.md"></a></td>
<td></td>
<td><a href="incidents/I00127.md">I00127</a></td>
<td>Iranian APTs Dress Up as Hacktivists for Disruption, Influence Ops</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td><a href="incidents/.md"></a></td>
<td></td>
<td><a href="incidents/I00128.md">I00128</a></td>
<td>#TrollTracker: Outward Influence Operation From Iran</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td><a href="incidents/.md"></a></td>
<td></td>
<td><a href="incidents/I00129.md">I00129</a></td>
<td>Teen who hacked Joe Biden and Bill Gates' Twitter accounts sentenced to three years in prison</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>

View File

@ -17,6 +17,8 @@
| Techniques |
| ---------- |
| [T0015 Create Hashtags and Search Artefacts](../../generated_pages/techniques/T0015.md) |
| [T0015.001 Use Existing Hashtag](../../generated_pages/techniques/T0015.001.md) |
| [T0015.002 Create New Hashtag](../../generated_pages/techniques/T0015.002.md) |
| [T0023 Distort Facts](../../generated_pages/techniques/T0023.md) |
| [T0023.001 Reframe Context](../../generated_pages/techniques/T0023.001.md) |
| [T0023.002 Edit Open-Source Content](../../generated_pages/techniques/T0023.002.md) |
@ -47,6 +49,48 @@
| [T0089 Obtain Private Documents](../../generated_pages/techniques/T0089.md) |
| [T0089.001 Obtain Authentic Documents](../../generated_pages/techniques/T0089.001.md) |
| [T0089.003 Alter Authentic Documents](../../generated_pages/techniques/T0089.003.md) |
| [T0146 Account](../../generated_pages/techniques/T0146.md) |
| [T0146.001 Free Account](../../generated_pages/techniques/T0146.001.md) |
| [T0146.002 Paid Account](../../generated_pages/techniques/T0146.002.md) |
| [T0146.003 Verified Account](../../generated_pages/techniques/T0146.003.md) |
| [T0146.004 Administrator Account](../../generated_pages/techniques/T0146.004.md) |
| [T0146.005 Lookalike Account ID](../../generated_pages/techniques/T0146.005.md) |
| [T0146.006 Open Access Platform](../../generated_pages/techniques/T0146.006.md) |
| [T0146.007 Automated Account](../../generated_pages/techniques/T0146.007.md) |
| [T0147 Software](../../generated_pages/techniques/T0147.md) |
| [T0147.001 Game](../../generated_pages/techniques/T0147.001.md) |
| [T0147.002 Game Mod](../../generated_pages/techniques/T0147.002.md) |
| [T0147.003 Malware](../../generated_pages/techniques/T0147.003.md) |
| [T0147.004 Mobile App](../../generated_pages/techniques/T0147.004.md) |
| [T0148 Financial Instrument](../../generated_pages/techniques/T0148.md) |
| [T0148.001 Online Banking Platform](../../generated_pages/techniques/T0148.001.md) |
| [T0148.002 Bank Account](../../generated_pages/techniques/T0148.002.md) |
| [T0148.003 Payment Processing Platform](../../generated_pages/techniques/T0148.003.md) |
| [T0148.004 Payment Processing Capability](../../generated_pages/techniques/T0148.004.md) |
| [T0148.005 Subscription Processing Capability](../../generated_pages/techniques/T0148.005.md) |
| [T0148.006 Crowdfunding Platform](../../generated_pages/techniques/T0148.006.md) |
| [T0148.007 eCommerce Platform](../../generated_pages/techniques/T0148.007.md) |
| [T0148.008 Cryptocurrency Exchange Platform](../../generated_pages/techniques/T0148.008.md) |
| [T0148.009 Cryptocurrency Wallet](../../generated_pages/techniques/T0148.009.md) |
| [T0149 Online Infrastructure](../../generated_pages/techniques/T0149.md) |
| [T0149.001 Domain](../../generated_pages/techniques/T0149.001.md) |
| [T0149.002 Email Domain](../../generated_pages/techniques/T0149.002.md) |
| [T0149.003 Lookalike Domain](../../generated_pages/techniques/T0149.003.md) |
| [T0149.004 Redirecting Domain](../../generated_pages/techniques/T0149.004.md) |
| [T0149.005 Server](../../generated_pages/techniques/T0149.005.md) |
| [T0149.006 IP Address](../../generated_pages/techniques/T0149.006.md) |
| [T0149.007 VPN](../../generated_pages/techniques/T0149.007.md) |
| [T0149.008 Proxy IP Address](../../generated_pages/techniques/T0149.008.md) |
| [T0149.009 Internet Connected Physical Asset](../../generated_pages/techniques/T0149.009.md) |
| [T0150 Asset Origin](../../generated_pages/techniques/T0150.md) |
| [T0150.001 Newly Created](../../generated_pages/techniques/T0150.001.md) |
| [T0150.002 Dormant](../../generated_pages/techniques/T0150.002.md) |
| [T0150.003 Pre-Existing](../../generated_pages/techniques/T0150.003.md) |
| [T0150.004 Repurposed](../../generated_pages/techniques/T0150.004.md) |
| [T0150.005 Compromised](../../generated_pages/techniques/T0150.005.md) |
| [T0150.006 Purchased](../../generated_pages/techniques/T0150.006.md) |
| [T0150.007 Rented](../../generated_pages/techniques/T0150.007.md) |
| [T0150.008 Bulk Created](../../generated_pages/techniques/T0150.008.md) |

View File

@ -15,34 +15,63 @@
| Techniques |
| ---------- |
| [T0029 Online Polls](../../generated_pages/techniques/T0029.md) |
| [T0043 Chat Apps](../../generated_pages/techniques/T0043.md) |
| [T0043.001 Use Encrypted Chat Apps](../../generated_pages/techniques/T0043.001.md) |
| [T0043.002 Use Unencrypted Chats Apps](../../generated_pages/techniques/T0043.002.md) |
| [T0103 Livestream](../../generated_pages/techniques/T0103.md) |
| [T0103.001 Video Livestream](../../generated_pages/techniques/T0103.001.md) |
| [T0103.002 Audio Livestream](../../generated_pages/techniques/T0103.002.md) |
| [T0104 Social Networks](../../generated_pages/techniques/T0104.md) |
| [T0104.001 Mainstream Social Networks](../../generated_pages/techniques/T0104.001.md) |
| [T0104.002 Dating App](../../generated_pages/techniques/T0104.002.md) |
| [T0104.003 Private/Closed Social Networks](../../generated_pages/techniques/T0104.003.md) |
| [T0104.004 Interest-Based Networks](../../generated_pages/techniques/T0104.004.md) |
| [T0104.005 Use Hashtags](../../generated_pages/techniques/T0104.005.md) |
| [T0104.006 Create Dedicated Hashtag](../../generated_pages/techniques/T0104.006.md) |
| [T0105 Media Sharing Networks](../../generated_pages/techniques/T0105.md) |
| [T0105.001 Photo Sharing](../../generated_pages/techniques/T0105.001.md) |
| [T0105.002 Video Sharing](../../generated_pages/techniques/T0105.002.md) |
| [T0105.003 Audio Sharing](../../generated_pages/techniques/T0105.003.md) |
| [T0106 Discussion Forums](../../generated_pages/techniques/T0106.md) |
| [T0106.001 Anonymous Message Boards](../../generated_pages/techniques/T0106.001.md) |
| [T0107 Bookmarking and Content Curation](../../generated_pages/techniques/T0107.md) |
| [T0108 Blogging and Publishing Networks](../../generated_pages/techniques/T0108.md) |
| [T0109 Consumer Review Networks](../../generated_pages/techniques/T0109.md) |
| [T0110 Formal Diplomatic Channels](../../generated_pages/techniques/T0110.md) |
| [T0111 Traditional Media](../../generated_pages/techniques/T0111.md) |
| [T0111.001 TV](../../generated_pages/techniques/T0111.001.md) |
| [T0111.002 Newspaper](../../generated_pages/techniques/T0111.002.md) |
| [T0111.003 Radio](../../generated_pages/techniques/T0111.003.md) |
| [T0112 Email](../../generated_pages/techniques/T0112.md) |
| [T0151 Digital Community Hosting Asset](../../generated_pages/techniques/T0151.md) |
| [T0151.001 Social Media Platform](../../generated_pages/techniques/T0151.001.md) |
| [T0151.002 Online Community Group](../../generated_pages/techniques/T0151.002.md) |
| [T0151.003 Online Community Page](../../generated_pages/techniques/T0151.003.md) |
| [T0151.004 Chat Platform](../../generated_pages/techniques/T0151.004.md) |
| [T0151.005 Chat Community Server](../../generated_pages/techniques/T0151.005.md) |
| [T0151.006 Chat Room](../../generated_pages/techniques/T0151.006.md) |
| [T0151.007 Chat Broadcast Group](../../generated_pages/techniques/T0151.007.md) |
| [T0151.008 Microblogging Platform](../../generated_pages/techniques/T0151.008.md) |
| [T0151.009 Legacy Online Forum Platform](../../generated_pages/techniques/T0151.009.md) |
| [T0151.010 Community Forum Platform](../../generated_pages/techniques/T0151.010.md) |
| [T0151.011 Community Sub-Forum](../../generated_pages/techniques/T0151.011.md) |
| [T0151.012 Image Board Platform](../../generated_pages/techniques/T0151.012.md) |
| [T0151.013 Question and Answer Platform](../../generated_pages/techniques/T0151.013.md) |
| [T0151.014 Comments Section](../../generated_pages/techniques/T0151.014.md) |
| [T0151.015 Online Game Platform](../../generated_pages/techniques/T0151.015.md) |
| [T0151.016 Online Game Session](../../generated_pages/techniques/T0151.016.md) |
| [T0151.017 Dating Platform](../../generated_pages/techniques/T0151.017.md) |
| [T0152 Digital Content Hosting Asset](../../generated_pages/techniques/T0152.md) |
| [T0152.001 Blogging Platform](../../generated_pages/techniques/T0152.001.md) |
| [T0152.002 Blog](../../generated_pages/techniques/T0152.002.md) |
| [T0152.003 Website Hosting Platform](../../generated_pages/techniques/T0152.003.md) |
| [T0152.004 Website](../../generated_pages/techniques/T0152.004.md) |
| [T0152.005 Paste Platform](../../generated_pages/techniques/T0152.005.md) |
| [T0152.006 Video Platform](../../generated_pages/techniques/T0152.006.md) |
| [T0152.007 Audio Platform](../../generated_pages/techniques/T0152.007.md) |
| [T0152.008 Live Streaming Platform](../../generated_pages/techniques/T0152.008.md) |
| [T0152.009 Software Delivery Platform](../../generated_pages/techniques/T0152.009.md) |
| [T0152.010 File Hosting Platform](../../generated_pages/techniques/T0152.010.md) |
| [T0152.011 Wiki Platform](../../generated_pages/techniques/T0152.011.md) |
| [T0152.012 Subscription Service Platform](../../generated_pages/techniques/T0152.012.md) |
| [T0153 Digital Content Delivery Asset](../../generated_pages/techniques/T0153.md) |
| [T0153.001 Email Platform](../../generated_pages/techniques/T0153.001.md) |
| [T0153.002 Link Shortening Platform](../../generated_pages/techniques/T0153.002.md) |
| [T0153.003 Shortened Link](../../generated_pages/techniques/T0153.003.md) |
| [T0153.004 QR Code](../../generated_pages/techniques/T0153.004.md) |
| [T0153.005 Online Advertising Platform](../../generated_pages/techniques/T0153.005.md) |
| [T0153.006 Content Recommendation Algorithm](../../generated_pages/techniques/T0153.006.md) |
| [T0153.007 Direct Messaging](../../generated_pages/techniques/T0153.007.md) |
| [T0154 Digital Content Creation Asset](../../generated_pages/techniques/T0154.md) |
| [T0154.001 AI LLM Platform](../../generated_pages/techniques/T0154.001.md) |
| [T0154.002 AI Media Platform](../../generated_pages/techniques/T0154.002.md) |
| [T0155 Gated Asset](../../generated_pages/techniques/T0155.md) |
| [T0155.001 Password Gated](../../generated_pages/techniques/T0155.001.md) |
| [T0155.002 Invite Gated](../../generated_pages/techniques/T0155.002.md) |
| [T0155.003 Approval Gated](../../generated_pages/techniques/T0155.003.md) |
| [T0155.004 Geoblocked](../../generated_pages/techniques/T0155.004.md) |
| [T0155.005 Paid Access](../../generated_pages/techniques/T0155.005.md) |
| [T0155.006 Subscription Access](../../generated_pages/techniques/T0155.006.md) |
| [T0155.007 Encrypted Communication Channel](../../generated_pages/techniques/T0155.007.md) |

View File

@ -35,7 +35,6 @@
| [T0129.005 Coordinate on Encrypted/Closed Networks](../../generated_pages/techniques/T0129.005.md) |
| [T0129.006 Deny Involvement](../../generated_pages/techniques/T0129.006.md) |
| [T0129.007 Delete Accounts/Account Activity](../../generated_pages/techniques/T0129.007.md) |
| [T0129.008 Redirect URLs](../../generated_pages/techniques/T0129.008.md) |
| [T0129.009 Remove Post Origins](../../generated_pages/techniques/T0129.009.md) |
| [T0129.010 Misattribute Activity](../../generated_pages/techniques/T0129.010.md) |
| [T0130 Conceal Infrastructure](../../generated_pages/techniques/T0130.md) |

View File

@ -24,18 +24,11 @@ This Tactic was previously called Establish Social Assets.
| Techniques |
| ---------- |
| [T0007 Create Inauthentic Social Media Pages and Groups](../../generated_pages/techniques/T0007.md) |
| [T0010 Cultivate Ignorant Agents](../../generated_pages/techniques/T0010.md) |
| [T0013 Create Inauthentic Websites](../../generated_pages/techniques/T0013.md) |
| [T0014 Prepare Fundraising Campaigns](../../generated_pages/techniques/T0014.md) |
| [T0014.001 Raise Funds from Malign Actors](../../generated_pages/techniques/T0014.001.md) |
| [T0014.002 Raise Funds from Ignorant Agents](../../generated_pages/techniques/T0014.002.md) |
| [T0065 Prepare Physical Broadcast Capabilities](../../generated_pages/techniques/T0065.md) |
| [T0090 Create Inauthentic Accounts](../../generated_pages/techniques/T0090.md) |
| [T0090.001 Create Anonymous Accounts](../../generated_pages/techniques/T0090.001.md) |
| [T0090.002 Create Cyborg Accounts](../../generated_pages/techniques/T0090.002.md) |
| [T0090.003 Create Bot Accounts](../../generated_pages/techniques/T0090.003.md) |
| [T0090.004 Create Sockpuppet Accounts](../../generated_pages/techniques/T0090.004.md) |
| [T0091 Recruit Malign Actors](../../generated_pages/techniques/T0091.md) |
| [T0091.001 Recruit Contractors](../../generated_pages/techniques/T0091.001.md) |
| [T0091.002 Recruit Partisans](../../generated_pages/techniques/T0091.002.md) |
@ -55,9 +48,6 @@ This Tactic was previously called Establish Social Assets.
| [T0096.001 Create Content Farms](../../generated_pages/techniques/T0096.001.md) |
| [T0096.002 Outsource Content Creation to External Organisations](../../generated_pages/techniques/T0096.002.md) |
| [T0113 Employ Commercial Analytic Firms](../../generated_pages/techniques/T0113.md) |
| [T0141 Acquire Compromised Asset](../../generated_pages/techniques/T0141.md) |
| [T0141.001 Acquire Compromised Account](../../generated_pages/techniques/T0141.001.md) |
| [T0141.002 Acquire Compromised Website](../../generated_pages/techniques/T0141.002.md) |
| [T0145 Establish Account Imagery](../../generated_pages/techniques/T0145.md) |
| [T0145.001 Copy Account Imagery](../../generated_pages/techniques/T0145.001.md) |
| [T0145.002 AI-Generated Account Imagery](../../generated_pages/techniques/T0145.002.md) |

View File

@ -0,0 +1,17 @@
# Technique T0015.001: Use Existing Hashtag
* **Summary**: Use a dedicated, existing hashtag for the campaign/incident. This Technique covers behaviours previously documented by T0104.005: Use Hashtags, which has since been deprecated.
* **Belongs to tactic stage**: TA06
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| Counters | Response types |
| -------- | -------------- |
DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW
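Note: as an illustrative sketch (not part of the framework itself), hashtag "search artefacts" like those this Technique describes can be pulled out of post text and counted across a collection, which is one way an analyst might spot reuse of an existing campaign hashtag. The `#VaccinateUS` tag below is borrowed from incident I00002 purely as a stand-in example; the regex and helper names are assumptions, not framework tooling.

```python
import re
from collections import Counter

# Hashtags are the searchable artefact: a "#" followed by word characters.
HASHTAG_RE = re.compile(r"#(\w+)")

def extract_hashtags(post: str) -> list[str]:
    """Return lowercased hashtags found in a single post."""
    return [tag.lower() for tag in HASHTAG_RE.findall(post)]

posts = [
    "Join the movement #VaccinateUS #health",
    "More on this at #VaccinateUS",
]

# Count how often each hashtag is reused across the collection;
# repeated use of one tag suggests a campaign-level artefact.
counts = Counter(tag for post in posts for tag in extract_hashtags(post))
```

With the two sample posts above, `counts` reports `#VaccinateUS` twice and `#health` once, showing the repeated tag standing out from incidental ones.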

View File

@ -0,0 +1,17 @@
# Technique T0015.002: Create New Hashtag
* **Summary**: Create a campaign/incident specific hashtag. This Technique covers behaviours previously documented by T0104.006: Create Dedicated Hashtag, which has since been deprecated.
* **Belongs to tactic stage**: TA06
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| Counters | Response types |
| -------- | -------------- |
DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW

View File

@ -8,6 +8,7 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00002 #VaccinateUS](../../generated_pages/incidents/I00002.md) | Promote "funding" campaign |
| [I00110 How COVID-19 conspiracists and extremists use crowdfunding platforms to fund their activities](../../generated_pages/incidents/I00110.md) | The EU DisinfoLab produced a report into disinformation published on crowdfunding platforms:<br><br><i>More worrisome is the direct monetisation of disinformation happening on crowdfunding platforms: on Kickstarter, we found a user openly raising money for a documentary project suggesting that COVID-19 is a conspiracy.</i><br><br>A Kickstarter user attempted to use the platform to fund production of a documentary (T0017: Conduct Fundraising, T0087: Develop Video-Based Content, T0146: Account, T0148.006: Crowdfunding Platform).<br><br><i>On Patreon, we found several instances of direct monetisation of COVID-19 disinformation, including posts promoting a device allegedly protecting against COVID-19 and 5G, as well as posts related to the “Plandemic” conspiracy video, which gained attention on YouTube before being removed by the platform.<br><br>We also found an account called “Stranger than fiction” entirely dedicated to disinformation, which openly states that their content was “Banned by screwtube and fakebook, our videos have been viewed over a billion times.”</i><br><br>The “Stranger than fiction” user presented itself as an alternative news source which had been banned from other platforms (T0146: Account, T0097.202: News Outlet Persona, T0121.001: Bypass Content Blocking, T0152.012: Subscription Service Platform).<br><br><i>On the US-based crowdfunding platform IndieGogo, EU DisinfoLab found a successful crowdfunding campaign of €133.903 for a book called Revolution Q. This book, now also available on Amazon, claims to be “Written for both newcomers and long-time QAnon fans alike, this book is a treasure-trove of information designed to help everyone weather The Storm.”</i><br><br>An IndieGogo account was used to gather funds to produce a book on QAnon (T0017: Conduct Fundraising, T0085.005: Develop Book, T0146: Account, T0148.006: Crowdfunding Platform), with the book later sold on Amazon marketplace (T0148.007: eCommerce Platform). |

View File

@ -10,6 +10,7 @@
| [I00002 #VaccinateUS](../../generated_pages/incidents/I00002.md) | buy FB targeted ads |
| [I00005 Brexit vote](../../generated_pages/incidents/I00005.md) | Targeted FB paid ads |
| [I00017 US presidential elections](../../generated_pages/incidents/I00017.md) | Targeted FB paid ads |
| [I00097 Report: Not Just Algorithms](../../generated_pages/incidents/I00097.md) | <i>This report explores the role of four systems (recommender systems, content moderation systems, ad approval systems and ad management systems) in creating risks around eating disorders.<br><br>[...]<br><br>Ad approval systems can create risks. We created 12 fake ads that promoted dangerous weight loss techniques and behaviours. We tested to see if these ads would be approved to run, and they were. This means dangerous behaviours can be promoted in paid-for advertising. (Requests to run ads were withdrawn after approval or rejection, so no dangerous advertising was published as a result of this experiment.)<br><br>Specifically: On TikTok, 100% of the ads were approved to run; On Facebook, 83% of the ads were approved to run; On Google, 75% of the ads were approved to run.<br><br>Ad management systems can create risks. We investigated how platforms allow advertisers to target users, and found that it is possible to target people who may be interested in pro-eating disorder content.<br><br>Specifically: On TikTok: End-users who interact with pro-eating disorder content on TikTok, download advertisers' eating disorder apps or visit their websites can be targeted; On Meta: End-users who interact with pro-eating disorder content on Meta, download advertisers' eating disorder apps or visit their websites can be targeted; On X: End-users who follow pro-eating disorder accounts, or look like them, can be targeted; On Google: End-users who search specific words or combinations of words (including pro-eating disorder words), watch pro-eating disorder YouTube channels and probably those who download eating disorder and mental health apps can be targeted.</i><br><br>Advertising platforms managed by TikTok, Facebook, and Google approved adverts to be displayed on their platforms. These platforms enabled users to deliver targeted advertising to potentially vulnerable platform users (T0018: Purchase Targeted Advertisements, T0153.005: Online Advertising Platform). |


@ -8,6 +8,8 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00033 China 50cent Army](../../generated_pages/incidents/I00033.md) | cow online opinion leaders into submission, muzzling social media as a political force |
| [I00122 The Extreme Right on Discord](../../generated_pages/incidents/I00122.md) | Discord is an example of a T0151.004: Chat Platform, which allows users to create their own T0151.005: Chat Community Server. The Institute for Strategic Dialogue (ISD) conducted an investigation into the extreme right's usage of Discord servers:<br><br><i>Discord is a free service accessible via phones and computers. It allows users to talk to each other in real time via voice, text or video chat and emerged in 2015 as a platform designed to assist gamers in communicating with each other while playing video games. The popularity of the platform has surged in recent years, and it is currently estimated to have 140 million monthly active users.<br><br>Chatrooms, known as servers, in the platform can be created by anyone, and they are used for a range of purposes that extend far beyond gaming. Such purposes include the discussion of extreme right-wing ideologies and the planning of offline extremist activity. Ahead of the far-right Unite the Right rally in Charlottesville, Virginia, in August 2017, organisers used Discord to plan and promote events and posted swastikas and praised Hitler in chat rooms with names like “National Socialist Army” and “Führer's Gas Chamber”.</i><br><br>In this example a Discord server was used to organise the 2017 Charlottesville Unite the Right rally. Chat rooms in the server were used to discuss different topics related to the rally (T0057: Organise Events, T0126.002: Facilitate Logistics or Support for Attendance, T0151.004: Chat Platform, T0151.005: Chat Community Server, T0151.006: Chat Room).<br><br><i>Another primary activity engaged in on the servers analysed is raids against other servers associated with political opponents, and in particular those that appear to be pro-LGBTQ. Raids are a phenomenon in which a small group of users will join a Discord server with the sole purpose of spamming the host with offensive or incendiary messages and content with the aim of upsetting local users or having the host server banned by Discord. On two servers examined here, raiding was their primary function.<br><br>Among servers devoted to this activity, specific channels were often created to host links to servers that users were then encouraged to raid. Users are encouraged to be as offensive as possible with the aim of upsetting or angering users on the raided server, and channels often had content banks of offensive memes and content to be shared on raided servers.<br><br>The use of raids demonstrates the gamified nature of extremist activity on Discord, where use of the platform and harassment of political opponents is itself turned into a type of real-life video game designed to strengthen in-group affiliation. This combined with the broader extremist activity identified in these channels suggests that the combative activity of raiding could provide a pathway for younger people to become more engaged with extremist activity.</i><br><br>Discord servers were used by members of the extreme right to coordinate harassment of targeted communities (T0048: Harass, T0049.005: Conduct Swarming, T0151.004: Chat Platform, T0151.005: Chat Community Server). |
| [I00123 The Extreme Right on Steam](../../generated_pages/incidents/I00123.md) | ISD conducted an investigation into the usage of social groups on Steam. Steam is an online platform used to buy and sell digital games, and includes the Steam community feature, which “allows users to find friends and join groups and discussion forums, while also offering in-game voice and text chat”. Actors have used Steam's social capabilities to enable online harm campaigns:<br><br><i>One function of these Steam groups is the organisation of raids: coordinated trolling activity against their political opponents. An example of this can be seen in a white power music group sharing a link to an Israeli Steam group, encouraging other members to “help me raid this juden [German word for Jew] group”. The comments section of said target group shows that neo-Nazi and antisemitic comments were consistently posted in the group just two minutes after the instruction had been posted in the extremist group, highlighting the swiftness with which racially motivated harassment can be directed online.</i><br><br>Threat actors used social groups on Steam to organise harassment of targets (T0152.009: Software Delivery Platform, T0151.002: Online Community Group, T0049.005: Conduct Swarming, T0048: Harass). |


@ -7,6 +7,8 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00122 The Extreme Right on Discord](../../generated_pages/incidents/I00122.md) | Discord is an example of a T0151.004: Chat Platform, which allows users to create their own T0151.005: Chat Community Server. The Institute for Strategic Dialogue (ISD) conducted an investigation into the extreme right's usage of Discord servers:<br><br><i>Discord is a free service accessible via phones and computers. It allows users to talk to each other in real time via voice, text or video chat and emerged in 2015 as a platform designed to assist gamers in communicating with each other while playing video games. The popularity of the platform has surged in recent years, and it is currently estimated to have 140 million monthly active users.<br><br>Chatrooms, known as servers, in the platform can be created by anyone, and they are used for a range of purposes that extend far beyond gaming. Such purposes include the discussion of extreme right-wing ideologies and the planning of offline extremist activity. Ahead of the far-right Unite the Right rally in Charlottesville, Virginia, in August 2017, organisers used Discord to plan and promote events and posted swastikas and praised Hitler in chat rooms with names like “National Socialist Army” and “Führer's Gas Chamber”.</i><br><br>In this example a Discord server was used to organise the 2017 Charlottesville Unite the Right rally. Chat rooms in the server were used to discuss different topics related to the rally (T0057: Organise Events, T0126.002: Facilitate Logistics or Support for Attendance, T0151.004: Chat Platform, T0151.005: Chat Community Server, T0151.006: Chat Room).<br><br><i>Another primary activity engaged in on the servers analysed is raids against other servers associated with political opponents, and in particular those that appear to be pro-LGBTQ. Raids are a phenomenon in which a small group of users will join a Discord server with the sole purpose of spamming the host with offensive or incendiary messages and content with the aim of upsetting local users or having the host server banned by Discord. On two servers examined here, raiding was their primary function.<br><br>Among servers devoted to this activity, specific channels were often created to host links to servers that users were then encouraged to raid. Users are encouraged to be as offensive as possible with the aim of upsetting or angering users on the raided server, and channels often had content banks of offensive memes and content to be shared on raided servers.<br><br>The use of raids demonstrates the gamified nature of extremist activity on Discord, where use of the platform and harassment of political opponents is itself turned into a type of real-life video game designed to strengthen in-group affiliation. This combined with the broader extremist activity identified in these channels suggests that the combative activity of raiding could provide a pathway for younger people to become more engaged with extremist activity.</i><br><br>Discord servers were used by members of the extreme right to coordinate harassment of targeted communities (T0048: Harass, T0049.005: Conduct Swarming, T0151.004: Chat Platform, T0151.005: Chat Community Server). |
| [I00123 The Extreme Right on Steam](../../generated_pages/incidents/I00123.md) | ISD conducted an investigation into the usage of social groups on Steam. Steam is an online platform used to buy and sell digital games, and includes the Steam community feature, which “allows users to find friends and join groups and discussion forums, while also offering in-game voice and text chat”. Actors have used Steam's social capabilities to enable online harm campaigns:<br><br><i>One function of these Steam groups is the organisation of raids: coordinated trolling activity against their political opponents. An example of this can be seen in a white power music group sharing a link to an Israeli Steam group, encouraging other members to “help me raid this juden [German word for Jew] group”. The comments section of said target group shows that neo-Nazi and antisemitic comments were consistently posted in the group just two minutes after the instruction had been posted in the extremist group, highlighting the swiftness with which racially motivated harassment can be directed online.</i><br><br>Threat actors used social groups on Steam to organise harassment of targets (T0152.009: Software Delivery Platform, T0151.002: Online Community Group, T0049.005: Conduct Swarming, T0048: Harass). |


@ -11,6 +11,7 @@
| [I00017 US presidential elections](../../generated_pages/incidents/I00017.md) | Digital to physical "organize+promote" rallies & events |
| [I00032 Kavanaugh](../../generated_pages/incidents/I00032.md) | Digital to physical "organize+promote" rallies & events? |
| [I00053 China Huawei CFO Arrest](../../generated_pages/incidents/I00053.md) | Events coordinated and promoted across media platforms, Extend digital the physical space… gatherings ie: support for Meng outside courthouse |
| [I00122 The Extreme Right on Discord](../../generated_pages/incidents/I00122.md) | Discord is an example of a T0151.004: Chat Platform, which allows users to create their own T0151.005: Chat Community Server. The Institute for Strategic Dialogue (ISD) conducted an investigation into the extreme right's usage of Discord servers:<br><br><i>Discord is a free service accessible via phones and computers. It allows users to talk to each other in real time via voice, text or video chat and emerged in 2015 as a platform designed to assist gamers in communicating with each other while playing video games. The popularity of the platform has surged in recent years, and it is currently estimated to have 140 million monthly active users.<br><br>Chatrooms, known as servers, in the platform can be created by anyone, and they are used for a range of purposes that extend far beyond gaming. Such purposes include the discussion of extreme right-wing ideologies and the planning of offline extremist activity. Ahead of the far-right Unite the Right rally in Charlottesville, Virginia, in August 2017, organisers used Discord to plan and promote events and posted swastikas and praised Hitler in chat rooms with names like “National Socialist Army” and “Führer's Gas Chamber”.</i><br><br>In this example a Discord server was used to organise the 2017 Charlottesville Unite the Right rally. Chat rooms in the server were used to discuss different topics related to the rally (T0057: Organise Events, T0126.002: Facilitate Logistics or Support for Attendance, T0151.004: Chat Platform, T0151.005: Chat Community Server, T0151.006: Chat Room).<br><br><i>Another primary activity engaged in on the servers analysed is raids against other servers associated with political opponents, and in particular those that appear to be pro-LGBTQ. Raids are a phenomenon in which a small group of users will join a Discord server with the sole purpose of spamming the host with offensive or incendiary messages and content with the aim of upsetting local users or having the host server banned by Discord. On two servers examined here, raiding was their primary function.<br><br>Among servers devoted to this activity, specific channels were often created to host links to servers that users were then encouraged to raid. Users are encouraged to be as offensive as possible with the aim of upsetting or angering users on the raided server, and channels often had content banks of offensive memes and content to be shared on raided servers.<br><br>The use of raids demonstrates the gamified nature of extremist activity on Discord, where use of the platform and harassment of political opponents is itself turned into a type of real-life video game designed to strengthen in-group affiliation. This combined with the broader extremist activity identified in these channels suggests that the combative activity of raiding could provide a pathway for younger people to become more engaged with extremist activity.</i><br><br>Discord servers were used by members of the extreme right to coordinate harassment of targeted communities (T0048: Harass, T0049.005: Conduct Swarming, T0151.004: Chat Platform, T0151.005: Chat Community Server). |


@ -7,6 +7,7 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00109 Coordinated Facebook Pages Designed to Fund a White Supremacist Agenda](../../generated_pages/incidents/I00109.md) | This report examines the white nationalist group Suavelos' use of Facebook to draw visitors to its website without overtly revealing their racist ideology. This section of the report looks at technical indicators associated with the Suavelos website, and attributions which can be made as a consequence:<i><br><br>[The Google AdSense tag set up on Suavelos.eu was also found on the following domains, indicating that they are controlled by the same actor;] Alabastro.eu: an online shop to buy “white nationalists” t-shirts [and] ARPAC.eu: the website of a registered non-profit organisation advocating to lift regulation on gun control in France.<br><br>Other domains attributed to Suavelos (T0149.001: Domain) reveal a website set up to sell merchandise (T0152.004: Website, T0148.004: Payment Processing Capability, T0061: Sell Merchandise), and a website hosting a registered French non-profit (T0152.004: Website, T0097.207: NGO Persona).<br><br>To learn more about the suavelos.eu domain, we collected the following data: The domain is hosted on OVH; The owner's identity is protected; The IP Address of the server is 94.23.253.173, which is shared with 20 other domains. <br><br>The relatively low number of websites hosted on this IP address could indicate that they all belong to the same people, and are hosted on the same private server.</i><br><br>Suavelos registered a domain using the web hosting provider OVH (T0149.001: Domain, T0152.003: Website Hosting Platform, T0150.006: Purchased). The site's IP address reveals a server hosting other domains potentially owned by the actors (T0149.005: Server, T0149.006: IP Address). |


@ -7,6 +7,7 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00106 Facebook Is Being Flooded With Gross AI-Generated Images of Hurricane Helene Devastation](../../generated_pages/incidents/I00106.md) | <i>As families desperately seek to find missing loved ones and communities grapple with immeasurable losses of both life and property in the wake of [2024's] Hurricane Helene, AI slop scammers appear to be capitalizing on the moment for personal gain.<br><br>A Facebook account called "Coastal Views" usually shares calmer AI imagery of nature-filled beachside scenes. The account's banner image showcases a signpost reading "OBX Live," OBX being shorthand for North Carolina's Outer Banks islands.<br><br>But starting this weekend, the account shifted its approach dramatically, as first flagged by a social media user on X.<br><br>Instead of posting "photos" of leaping dolphins and sandy beaches, the account suddenly started publishing images of flooded mountain neighborhoods, submerged houses, and dogs sitting on top of roofs.<br><br>But instead of spreading vital information to those affected by the natural disaster, or at the very least sharing real photos of the destruction, the account is seemingly trying to use AI to cash in on all the attention the hurricane has been getting.<br><br>The account links to an Etsy page for a business called "OuterBanks2023," where somebody who goes by "Alexandr" sells AI-generated prints of horses touching snouts with sea turtles, Santa running down the shoreline with a reindeer, and sunsets over ocean waves.</i><br><br>A Facebook page which presented itself as being associated with North Carolina, and which had posted AI-generated images of nature scenes, changed to posting AI-generated images of hurricane damage after Hurricane Helene hit North Carolina (T0151.003: Online Community Page, T0151.001: Social Media Platform, T0115: Post Content, T0086.002: Develop AI-Generated Images (Deepfakes), T0068: Respond to Breaking News Event or Active Crisis).<br><br>The account included links (T0122: Direct Users to Alternative Platforms) to an account on Etsy, which sold prints of AI-generated images (T0146: Account, T0148.007: eCommerce Platform). |

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long


@ -7,6 +7,7 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00110 How COVID-19 conspiracists and extremists use crowdfunding platforms to fund their activities](../../generated_pages/incidents/I00110.md) | The EU Disinfo Lab produced a report into disinformation published on crowdfunding platforms:<br><br><i>More worrisome is the direct monetisation of disinformation happening on crowdfunding platforms: on Kickstarter, we found a user openly raising money for a documentary project suggesting that COVID-19 is a conspiracy.</i><br><br>A Kickstarter user attempted to use the platform to fund production of a documentary (T0017: Conduct Fundraising, T0087: Develop Video-Based Content, T0146: Account, T0148.006: Crowdfunding Platform).<br><br><i>On Patreon, we found several instances of direct monetisation of COVID-19 disinformation, including posts promoting a device allegedly protecting against COVID-19 and 5G, as well as posts related to the “Plandemic” conspiracy video, which gained attention on YouTube before being removed by the platform.<br><br>We also found an account called “Stranger than fiction” entirely dedicated to disinformation, which openly states that their content was “Banned by screwtube and fakebook, our videos have been viewed over a billion times.”</i><br><br>The “Stranger than fiction” user presented itself as an alternative news source which had been banned from other platforms (T0146: Account, T0097.202: News Outlet Persona, T0121.001: Bypass Content Blocking, T0152.012: Subscription Service Platform).<br><br><i>On the US-based crowdfunding platform IndieGogo, EU DisinfoLab found a successful crowdfunding campaign of €133.903 for a book called Revolution Q. This book, now also available on Amazon, claims to be “Written for both newcomers and long-time QAnon fans alike, this book is a treasure-trove of information designed to help everyone weather The Storm.”</i><br><br>An IndieGogo account was used to gather funds to produce a book on QAnon (T0017: Conduct Fundraising, T0085.005: Develop Book, T0146: Account, T0148.006: Crowdfunding Platform), with the book later sold on Amazon marketplace (T0148.007: eCommerce Platform). |


@ -7,6 +7,7 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00111 Patreon Is Bankrolling Climate Change Deniers While We All Burn](../../generated_pages/incidents/I00111.md) | In this article VICE News discusses a report produced by Advance Democracy on people who use Patreon to spread the false claim that an impending ice age will reverse the harms of the ongoing climate crisis:<br><br><i>“The spread of climate misinformation is prolific on social media, as well as on sites like Patreon, where users are actually financially compensated through the platform for spreading falsehoods,” Daniel Jones, president of Advance Democracy, told VICE News.<br><br>“Companies hosting and promoting climate misinformation have a responsibility to take action to reduce dangerous misinformation, as falsehoods about climate science are every bit as dangerous as lies about vaccinations and disinformation about our elections.”<br><br>Patreon did not respond to VICE News' request for comment on the report's findings.<br><br>One of the biggest accounts spreading climate conspiracies is ADAPT 2030, which is run by David DuByne, who has 1,100 followers on Patreon. He is currently making over $3,500 every month from his subscribers.<br><br>[The science DuByne relies on does not support his hypothesis. However,] this has not prevented DuByne and many others from preying on people's fears about climate change to spread conspiracies about an impending ice age, which they say will miraculously fix all of earth's climate problems.<br><br>DuByne offers seven different membership levels for supporters, beginning at just $1 per month.<br><br>The most expensive costs $100 a month, and gives patrons “a private 20-minute call with David DuByne once per month, to discuss your particular preparedness issues or concerns.” So far just two people are paying this amount.<br><br>The researchers also found at least eight other accounts on Patreon that have spread climate change conspiracy theories as part of wider conspiracy sharing, including baseless claims about COVID-19 and the legitimacy of Joe Biden's presidency. Some of these accounts are earning over $600 per month.</i><br><br>David DuByne created an account on Patreon, which he uses to post text, videos, and podcasts for his subscribers to discuss (T0085: Develop Text-Based Content, T0087: Develop Video-Based Content, T0088: Develop Audio-Based Content, T0146: Account, T0115: Post Content, T0152.012: Subscription Service Platform, T0151.014: Comments Section, T0155.006: Subscription Access). |


@ -7,6 +7,9 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00099 More Women Are Facing The Reality Of Deepfakes, And They're Ruining Lives](../../generated_pages/incidents/I00099.md) | <i>Last October, British writer Helen was alerted to a series of deepfakes on a porn site that appear to show her engaging in extreme acts of sexual violence. That night, the images replayed themselves over and over in horrific nightmares and she was gripped by an all-consuming feeling of dread. “It's like you're in a tunnel, going further and further into this enclosed space, where there's no light,” she tells Vogue. This feeling pervaded Helen's life. Whenever she left the house, she felt exposed. On runs, she experienced panic attacks. Helen still has no idea who did this to her.<br><br>[...]<br><br>Amnesty International has been investigating the effects of abuse against women on Twitter, specifically in relation to how they act online thereafter. According to the charity, abuse creates what they've called “the silencing effect” whereby women feel discouraged from participating online. The same can be said for victims of deepfakes.<br><br>Helen has never been afraid to use her voice, writing deeply personal accounts of postnatal depression. But the deepfakes created a feeling of shame so strong she thought she'd be carrying this “dirty secret” forever, and so she stopped writing.<br><br>[...]<br><br>Meanwhile, deepfake communities are thriving. There are now dedicated sites, user-friendly apps and organised request procedures. Some sites allow you to commission custom deepfakes for £25, while on others you can upload a woman's image and a bot will strip her naked.<br><br>“This violation is not something that should be normalised,” says Gibi, an ASMR artist with 3.13 million YouTube subscribers. Gibi has given up trying to keep tabs on the deepfakes of her. For Gibi, the most egregious part of all of this is the fact that people are “profiting off my face, doing something that I didn't consent to, like my suffering is your livelihood.” She's even been approached by a company offering to remove the deepfakes — for £500 a video. This has to end. But how?</i><br><br>A website hosting pornographic content provided users the ability to create deepfake content (T0154.002: AI Media Platform, T0086.002: Develop AI-Generated Images (Deepfakes)). <br><br>Another website enabled users to commission custom deepfakes (T0152.004: Website, T0148.004: Payment Processing Capability, T0086.002: Develop AI-Generated Images (Deepfakes), T0155.005: Paid Access). |
| [I00100 Why ThisPersonDoesNotExist (and its copycats) need to be restricted](../../generated_pages/incidents/I00100.md) | <i>You might have heard about the recent viral sensation, ThisPersonDoesNotExist.com, a website, launched two weeks ago, that uses Nvidia's publicly available artificial intelligence technology to draw an invented, photo-realistic human being with each refresh. The tech is impressive and artistically evocative. It's also irresponsible and needs to be restricted immediately.<br><br>[...]<br><br>Prior to this technology, scammers faced three major risks when using fake photos. Each of these risks had the potential to put them out of business, or in jail.<br><br>Risk #1: Someone recognizes the photo. While the odds of this are long-shot, it does happen.<br><br>Risk #2: Someone reverse image searches the photo with a service like TinEye or Google Image Search and finds that it's been posted elsewhere. Reverse image search is one of the top anti-fraud measures recommended by consumer protection advocates.<br><br>Risk #3: If the crime is successful, law enforcement uses the fake photo to figure out the scammer's identity after the fact. Perhaps the scammer used an old classmate's photo. Perhaps their personal account follows the Instagram member they pilfered. And so on: people make mistakes.<br><br>The problem with AI-generated photos is that they carry none of these risks. No one will recognize a human who's never existed before. Google Image Search will return 0 results, possibly instilling a false sense of security in the searcher. And AI-generated photos don't give law enforcement much to work with.</i><br><br>ThisPersonDoesNotExist is an online platform which, when visited, produces AI-generated images of people's faces (T0146.006: Open Access Platform, T0154.002: AI Media Platform, T0086.002: Develop AI-Generated Images (Deepfakes)). |
| [I00106 Facebook Is Being Flooded With Gross AI-Generated Images of Hurricane Helene Devastation](../../generated_pages/incidents/I00106.md) | <i>As families desperately seek to find missing loved ones and communities grapple with immeasurable losses of both life and property in the wake of [2024's] Hurricane Helene, AI slop scammers appear to be capitalizing on the moment for personal gain.<br><br>A Facebook account called "Coastal Views" usually shares calmer AI imagery of nature-filled beachside scenes. The account's banner image showcases a signpost reading "OBX Live," OBX being shorthand for North Carolina's Outer Banks islands.<br><br>But starting this weekend, the account shifted its approach dramatically, as first flagged by a social media user on X.<br><br>Instead of posting "photos" of leaping dolphins and sandy beaches, the account suddenly started publishing images of flooded mountain neighborhoods, submerged houses, and dogs sitting on top of roofs.<br><br>But instead of spreading vital information to those affected by the natural disaster, or at the very least sharing real photos of the destruction, the account is seemingly trying to use AI to cash in on all the attention the hurricane has been getting.<br><br>The account links to an Etsy page for a business called "OuterBanks2023," where somebody who goes by "Alexandr" sells AI-generated prints of horses touching snouts with sea turtles, Santa running down the shoreline with a reindeer, and sunsets over ocean waves.</i><br><br>A Facebook page which presented itself as being associated with North Carolina, and which had posted AI-generated images of nature scenes, changed to posting AI-generated images of hurricane damage after Hurricane Helene hit North Carolina (T0151.003: Online Community Page, T0151.001: Social Media Platform, T0115: Post Content, T0086.002: Develop AI-Generated Images (Deepfakes), T0068: Respond to Breaking News Event or Active Crisis).<br><br>The account included links (T0122: Direct Users to Alternative Platforms) to an account on Etsy, which sold prints of AI-generated images (T0146: Account, T0148.007: eCommerce Platform). |


@ -7,6 +7,7 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00096 China ramps up use of AI misinformation](../../generated_pages/incidents/I00096.md) | The Microsoft Threat Analysis Centre (MTAC) published a report documenting the use of AI by pro-Chinese threat actors:<br><br><i>On 13 January, Spamouflage [(a Pro-Chinese Communist Party actor)] posted audio clips to YouTube of independent candidate [for Taiwan's Jan 2024 presidential election] Terry Gou (who also founded electronics giant Foxconn) in which Gou endorsed another candidate in the race. This clip was almost certainly AI-generated, and it was swiftly removed by YouTube. A fake letter purporting to be from Gou, endorsing the same candidate, had already circulated; Gou had of course made no such endorsement.</i><br><br>Here Spamouflage used an account on YouTube to post AI-generated audio impersonating an electoral candidate (T0146: Account, T0152.006: Video Platform, T0115: Post Content, T0088.001: Develop AI-Generated Audio (Deepfakes), T0143.003: Impersonated Persona, T0097.110: Party Official Persona).<br><br><i>Spamouflage also exploited AI-powered video platform CapCut, which is owned by TikTok backer ByteDance, to generate fake news anchors which were used in a variety of campaigns targeting the various presidential candidates in Taiwan.</i><br><br>Spamouflage created accounts on CapCut, which it used to create AI-generated videos of fabricated news anchors (T0146: Account, T0154.002: AI Media Platform, T0087.001: Develop AI-Generated Video (Deepfakes), T0143.002: Fabricated Persona, T0097.102: Journalist Persona). |


@ -7,6 +7,9 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00110 How COVID-19 conspiracists and extremists use crowdfunding platforms to fund their activities](../../generated_pages/incidents/I00110.md) | The EU Disinfo Lab produced a report into disinformation published on crowdfunding platforms:<br><br><i>More worrisome is the direct monetisation of disinformation happening on crowdfunding platforms: on Kickstarter, we found a user openly raising money for a documentary project suggesting that COVID-19 is a conspiracy.</i><br><br>A Kickstarter user attempted to use the platform to fund production of a documentary (T0017: Conduct Fundraising, T0087: Develop Video-Based Content, T0146: Account, T0148.006: Crowdfunding Platform).<br><br><i>On Patreon, we found several instances of direct monetisation of COVID-19 disinformation, including posts promoting a device allegedly protecting against COVID-19 and 5G, as well as posts related to the “Plandemic” conspiracy video, which gained attention on YouTube before being removed by the platform.<br><br>We also found an account called “Stranger than fiction” entirely dedicated to disinformation, which openly states that their content was “Banned by screwtube and fakebook, our videos have been viewed over a billion times.”</i><br><br>The “Stranger than fiction” user presented itself as an alternative news source which had been banned from other platforms (T0146: Account, T0097.202: News Outlet Persona, T0121.001: Bypass Content Blocking, T0152.012: Subscription Service Platform).<br><br><i>On the US-based crowdfunding platform IndieGogo, EU DisinfoLab found a successful crowdfunding campaign of €133.903 for a book called Revolution Q. This book, now also available on Amazon, claims to be “Written for both newcomers and long-time QAnon fans alike, this book is a treasure-trove of information designed to help everyone weather The Storm.”</i><br><br>An IndieGogo account was used to gather funds to produce a book on QAnon (T0017: Conduct Fundraising, T0085.005: Develop Book, T0146: Account, T0148.006: Crowdfunding Platform), with the book later sold on Amazon marketplace (T0148.007: eCommerce Platform). |
| [I00111 Patreon Is Bankrolling Climate Change Deniers While We All Burn](../../generated_pages/incidents/I00111.md) | In this article VICE News discusses a report produced by Advance Democracy on people who use Patreon to spread the false claim that an impending ice age will reverse the harms of the ongoing climate crisis:<br><br><i>“The spread of climate misinformation is prolific on social media, as well as on sites like Patreon, where users are actually financially compensated through the platform for spreading falsehoods,” Daniel Jones, president of Advance Democracy, told VICE News.<br><br>“Companies hosting and promoting climate misinformation have a responsibility to take action to reduce dangerous misinformation, as falsehoods about climate science are every bit as dangerous as lies about vaccinations and disinformation about our elections.”<br><br>Patreon did not respond to VICE News' request for comment on the report's findings.<br><br>One of the biggest accounts spreading climate conspiracies is ADAPT 2030, which is run by David DuByne, who has 1,100 followers on Patreon. He is currently making over $3,500 every month from his subscribers.<br><br>[The science DuByne relies on does not support his hypothesis. However,] this has not prevented DuByne and many others from preying on people's fears about climate change to spread conspiracies about an impending ice age, which they say will miraculously fix all of earth's climate problems.<br><br>DuByne offers seven different membership levels for supporters, beginning at just $1 per month.<br><br>The most expensive costs $100 a month, and gives patrons “a private 20-minute call with David DuByne once per month, to discuss your particular preparedness issues or concerns.” So far just two people are paying this amount.<br><br>The researchers also found at least eight other accounts on Patreon that have spread climate change conspiracy theories as part of wider conspiracy sharing, including baseless claims about COVID-19 and the legitimacy of Joe Biden's presidency. Some of these accounts are earning over $600 per month.</i><br><br>David DuByne created an account on Patreon, which he uses to post text, videos, and podcasts for his subscribers to discuss (T0085: Develop Text-Based Content, T0087: Develop Video-Based Content, T0088: Develop Audio-Based Content, T0146: Account, T0115: Post Content, T0152.012: Subscription Service Platform, T0151.014: Comments Section, T0155.006: Subscription Access). |
| [I00125 The Agency](../../generated_pages/incidents/I00125.md) | In 2014 threat actors attributed to Russia spread the false narrative that a local chemical plant had leaked toxic fumes. This report discusses aspects of the operation:<br><br><i>[The chemical plant leak] hoax was just one in a wave of similar attacks during the second half of last year. On Dec. 13, two months after a handful of Ebola cases in the United States touched off a minor media panic, many of the same Twitter accounts used to spread the Columbian Chemicals hoax began to post about an outbreak of Ebola in Atlanta. [...] Again, the attention to detail was remarkable, suggesting a tremendous amount of effort. A YouTube video showed a team of hazmat-suited medical workers transporting a victim from the airport. Beyoncé's recent single “7/11” played in the background, an apparent attempt to establish the video's contemporaneity. A truck in the parking lot sported the logo of the Hartsfield-Jackson Atlanta International Airport.</i><br><br>Accounts which previously presented as Louisiana locals were repurposed for use in a different campaign, this time presenting as locals to Atlanta, a place over 500 miles away from Louisiana and in a different timezone (T0146: Account, T0097.101: Local Persona, T0143.002: Fabricated Persona, T0151.008: Microblogging Platform, T0150.004: Repurposed). <br><br>A video was created which appeared to support the campaign's narrative (T0087: Develop Video-Based Content), with great attention given to small details which made the video appear more legitimate. |

View File

@@ -7,7 +7,9 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00068 Attempted Audio Deepfake Call Targets LastPass Employee](../../generated_pages/incidents/I00068.md) | <i>“While reports of [...] deepfake calls targeting private companies are luckily still rare, LastPass itself experienced a deepfake attempt earlier today that we are sharing with the larger community to help raise awareness that this tactic is spreading and all companies should be on the alert. In our case, an employee received a series of calls, texts, and at least one voicemail featuring an audio deepfake from a threat actor impersonating our CEO via WhatsApp. As the attempted communication was outside of normal business communication channels and due to the employee's suspicion regarding the presence of many of the hallmarks of a social engineering attempt (such as forced urgency), our employee rightly ignored the messages and reported the incident to our internal security team so that we could take steps to both mitigate the threat and raise awareness of the tactic both internally and externally.”</i><br><br> In this example attackers impersonated the CEO of LastPass (T0097.100: Individual Persona, T0143.003: Impersonated Persona), targeting one of its employees over WhatsApp (T0043.001: Use Encrypted Chat Apps) using deepfaked audio (T0088.001: Develop AI-Generated Audio (Deepfakes)). |
| [I00068 Attempted Audio Deepfake Call Targets LastPass Employee](../../generated_pages/incidents/I00068.md) | <i>“While reports of [...] deepfake calls targeting private companies are luckily still rare, LastPass itself experienced a deepfake attempt earlier today that we are sharing with the larger community to help raise awareness that this tactic is spreading and all companies should be on the alert. In our case, an employee received a series of calls, texts, and at least one voicemail featuring an audio deepfake from a threat actor impersonating our CEO via WhatsApp. As the attempted communication was outside of normal business communication channels and due to the employee's suspicion regarding the presence of many of the hallmarks of a social engineering attempt (such as forced urgency), our employee rightly ignored the messages and reported the incident to our internal security team so that we could take steps to both mitigate the threat and raise awareness of the tactic both internally and externally.”</i><br><br> In this example attackers impersonated the CEO of LastPass (T0097.100: Individual Persona, T0143.003: Impersonated Persona), targeting one of its employees over WhatsApp (T0151.004: Chat Platform, T0155.007: Encrypted Communication Channel) using deepfaked audio (T0088.001: Develop AI-Generated Audio (Deepfakes)). |
| [I00096 China ramps up use of AI misinformation](../../generated_pages/incidents/I00096.md) | The Microsoft Threat Analysis Centre (MTAC) published a report documenting the use of AI by pro-Chinese threat actors:<br><br><i>On 13 January, Spamouflage [(a Pro-Chinese Communist Party actor)] posted audio clips to YouTube of independent candidate [for Taiwan's Jan 2024 presidential election] Terry Gou, who also founded electronics giant Foxconn, in which Gou endorsed another candidate in the race. This clip was almost certainly AI-generated, and it was swiftly removed by YouTube. A fake letter purporting to be from Gou, endorsing the same candidate, had already circulated; Gou had, of course, made no such endorsement.</i><br><br>Here Spamouflage used an account on YouTube to post AI-generated audio impersonating an electoral candidate (T0146: Account, T0152.006: Video Platform, T0115: Post Content, T0088.001: Develop AI-Generated Audio (Deepfakes), T0143.003: Impersonated Persona, T0097.110: Party Official Persona).<br><br><i>Spamouflage also exploited AI-powered video platform CapCut, which is owned by TikTok backer ByteDance, to generate fake news anchors which were used in a variety of campaigns targeting the various presidential candidates in Taiwan.</i><br><br>Spamouflage created accounts on CapCut, which it used to create AI-generated videos of fabricated news anchors (T0146: Account, T0154.002: AI Media Platform, T0087.001: Develop AI-Generated Video (Deepfakes), T0143.002: Fabricated Persona, T0097.102: Journalist Persona). |
| [I00103 The racist AI deepfake that fooled and divided a community](../../generated_pages/incidents/I00103.md) | <i>“I seriously don't understand why I have to constantly put up with these dumbasses here every day.”<br><br>So began what appeared to be a long tirade from the principal of Pikesville High School, punctuated with racist, antisemitic and offensive tropes. It sounded like it had been secretly recorded.<br><br>The speaker went on to bemoan “ungrateful black kids” and Jewish people in the community.<br><br>The clip, first posted in [January 2024], went viral nationally. But it really struck a nerve in the peaceful, leafy suburb of Pikesville, which has large black and Jewish populations, and in the nearby city of Baltimore, Maryland. Principal Eric Eiswert was put on paid administrative leave pending an investigation.<br><br>[...]<br><br>But what those sharing the clip didn't realise at the time was that another bombshell was about to drop: the clip was an AI-generated fake.<br><br>[...]<br><br>[In April 2024], Baltimore Police Chief Robert McCullough confirmed they now had “conclusive evidence that the recording was not authentic”.<br><br>And they believed they knew who made the fake.<br><br>Police charged 31-year-old Dazhon Darien, the school's athletics director, with several counts related to the fake video. Charges included theft, retaliating against a witness and stalking.<br><br>He was arrested at the airport, where police say he was planning to fly to Houston, Texas.<br><br>Police say that Mr Darien had been under investigation by Principal Eiswert over an alleged theft of $1,916 (£1,460) from the school. They also allege there had been “work performance challenges” and his contract was likely not to be renewed.<br><br>Their theory was that by creating the deepfake recording, he hoped to discredit the principal before he could be fired.<br><br>Investigators say they traced an email used to send the original video to a server connected to Mr Darien, and allege that he used Baltimore County Public Schools' computer network to access AI tools. He is due to stand trial in December 2024.</i><br><br>By associating Mr Darien with the server used to email the original AI-generated audio, investigators link Darien to the fabricated content (T0149.005: Server, T0088.001: Develop AI-Generated Audio (Deepfakes)). They also assert that Darien used computers owned by the school to access platforms used to generate the audio (T0146: Account, T0154.002: AI Media Platform). |

View File

@@ -7,6 +7,7 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00111 Patreon Is Bankrolling Climate Change Deniers While We All Burn](../../generated_pages/incidents/I00111.md) | In this article VICE News discusses a report produced by Advance Democracy on people who use Patreon to spread the false claim that an impending ice age will reverse the harms of the ongoing climate crisis:<br><br><i>“The spread of climate misinformation is prolific on social media, as well as on sites like Patreon, where users are actually financially compensated through the platform for spreading falsehoods,” Daniel Jones, president of Advance Democracy, told VICE News.<br><br>“Companies hosting and promoting climate misinformation have a responsibility to take action to reduce dangerous misinformation, as falsehoods about climate science are every bit as dangerous as lies about vaccinations and disinformation about our elections.”<br><br>Patreon did not respond to VICE News' request for comment on the report's findings.<br><br>One of the biggest accounts spreading climate conspiracies is ADAPT 2030, which is run by David DuByne, who has 1,100 followers on Patreon. He is currently making over $3,500 every month from his subscribers.<br><br>[The science DuByne relies on does not support his hypothesis. However,] this has not prevented DuByne and many others from preying on people's fears about climate change to spread conspiracies about an impending ice age, which they say will miraculously fix all of earth's climate problems.<br><br>DuByne offers seven different membership levels for supporters, beginning at just $1 per month.<br><br>The most expensive costs $100 a month, and gives patrons “a private 20-minute call with David DuByne once per month, to discuss your particular preparedness issues or concerns.” So far just two people are paying this amount.<br><br>The researchers also found at least eight other accounts on Patreon that have spread climate change conspiracy theories as part of wider conspiracy sharing, including baseless claims about COVID-19 and the legitimacy of Joe Biden's presidency. Some of these accounts are earning over $600 per month.</i><br><br>David DuByne created an account on Patreon, which he uses to post text, videos, and podcasts for his subscribers to discuss (T0085: Develop Text-Based Content, T0087: Develop Video-Based Content, T0088: Develop Audio-Based Content, T0146: Account, T0115: Post Content, T0152.012: Subscription Service Platform, T0151.014: Comments Section, T0155.006: Subscription Access). |

View File

@@ -7,6 +7,7 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00118 War Thunder players are once again leaking sensitive military technology information on a video game forum](../../generated_pages/incidents/I00118.md) | <i>In an effort to prove that the developers behind a popular multiplayer vehicle combat game had made a mistake, a player went ahead and published classified British military documents about one of the real-life tanks featured in the game.<br><br>This truly bizarre turn of events recently occurred in the public forum for War Thunder, a free-to-play multiplayer combat sim featuring modern land, air, and sea craft. Getting a small detail wrong on a piece of equipment might not be a big deal for the average gamer, but for the War Thunder crowd it sure as hell is. With 25,000 devoted players, the game very much bills itself as the military vehicle combat simulator.<br><br>A player, who identified himself as a British tank commander, claimed that the game's developers at Gaijin Entertainment had inaccurately represented the Challenger 2 main battle tank used by the British military.<br><br>The self-described tank commander's bio listed his location as Tidworth Camp in Wiltshire, England, according to the UK Defense Journal, which reported that the base is home to the Royal Tank Regiment, which fields Challenger 2 tanks.<br><br>The player, who went by the handle Pyrophoric, reportedly shared an image on the War Thunder forum of the tank's specs that were pulled from the Challenger 2's Army Equipment Support Publication, which is essentially a technical manual.<br><br>[...]<br><br>A moderator for the forum, whose handle is “Templar_”, explained that the developer had removed the material after they received confirmation from the Ministry of Defense that the document is still in fact classified.</i><br><br>A user of War Thunder's forums posted confidential documents to win an argument (T0089.001: Obtain Authentic Documents, T0146: Account, T0097.105: Military Personnel Persona, T0115: Post Content, T0143.001: Authentic Persona, T0151.009: Legacy Online Forum Platform). |

View File

@@ -7,6 +7,7 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00119 Independent journalist publishes Trump campaign document hacked by Iran despite election interference concerns](../../generated_pages/incidents/I00119.md) | <i>An American journalist who runs an independent newsletter published a document [on 26 Sep 2024] that appears to have been stolen from Donald Trump's presidential campaign — the first public posting of a file that is believed to be part of a dossier that federal officials say is part of an Iranian effort to manipulate the [2024] U.S. election.<br><br>The PDF document is a 271-page opposition research file on former President Donald Trump's running mate, Sen. JD Vance, R-Ohio.<br><br>For more than two months, hackers who the U.S. says are tied to Iran have tried to persuade the American media to cover files they stole. No outlets took the bait.<br><br>But on Thursday, reporter Ken Klippenstein, who self-publishes on Substack after he left The Intercept this year, published one of the files.<br><br>[...]<br><br>Reporters who have received the documents describe the same pattern: An AOL account emails them files, signed by a person using the name “Robert,” who is reluctant to speak to their identity or reasons for wanting the documents to receive coverage.<br><br>NBC News was not part of the Robert persona's direct outreach, but it has viewed its correspondence with a reporter at another publication.<br><br> One of the emails from the Robert persona previously viewed by NBC News included three large PDF files, each corresponding to Trump's three reported finalists for vice president. The Vance file appears to be the one Klippenstein hosts on his site.</i><br><br>In this example hackers attributed to Iran used the Robert persona to email journalists hacked documents (T0146: Account, T0097.100: Individual Persona, T0153.001: Email Platform).<br><br>The journalist Ken Klippenstein used his existing blog on Substack to host a link to download the document (T0089: Obtain Private Documents, T0097.102: Journalist Persona, T0115: Post Content, T0143.001: Authentic Persona, T0152.001: Blogging Platform, T0152.002: Blog, T0150.003: Pre-Existing). |

View File

@@ -7,6 +7,7 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00108 How you thought you support the animals and you ended up funding white supremacists](../../generated_pages/incidents/I00108.md) | <i>This article examines the white nationalist group Suavelos' use of Facebook to draw visitors to its website without overtly revealing their racist ideology:<br><br>Suavelos uses Facebook and other platforms to amplify its message. In order to bypass the platform's community standards and keep their public pages active, Facebook pages such as “I support the police” are a good vehicle to spread a specific agenda without claiming to be racist. In looking back at this Facebook page, we followed Facebook's algorithm for related pages and found suggested Facebook pages<br><br>[...]<br><br>This amplification strategy on Facebook is successful, as according to SimilarWeb figures, it attracts around 111,000 visits every month on the Suavelos.eu website.<br><br>[...]<br><br>Revenue through online advertisements can be achieved by different platforms through targeted advertisements, like Google Adsense or Doubleclick, or related and similar sponsored content, such as Taboola. Accordingly, Suavelos.eu uses both of these websites to display advertisements and consequently receives funding from such advertisements.<br><br>Once visitors are on the website supporting its advertisement revenue, Suavelos' goal is to then turn these visitors into regular members of the Suavelos network through donations or fees, or have them continue to support Suavelos. </i><br><br>Suavelos created a variety of pages on Facebook which presented as centring on prosocial causes. Facebook's algorithm helped direct users to these pages (T0092: Build Network, T0151.001: Social Media Platform, T0153.006: Content Recommendation Algorithm, T0151.003: Online Community Page, T0143.208: Social Cause Persona).<br><br>Suavelos used these pages to generate traffic for their WordPress site (T0122: Direct Users to Alternative Platforms, T0152.003: Website Hosting Platform, T0152.004: Website), which used accounts on a variety of online advertising platforms to host adverts (T0146: Account, T0153.005: Online Advertising Platform). |

View File

@@ -7,8 +7,9 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00068 Attempted Audio Deepfake Call Targets LastPass Employee](../../generated_pages/incidents/I00068.md) | <i>“While reports of [...] deepfake calls targeting private companies are luckily still rare, LastPass itself experienced a deepfake attempt earlier today that we are sharing with the larger community to help raise awareness that this tactic is spreading and all companies should be on the alert. In our case, an employee received a series of calls, texts, and at least one voicemail featuring an audio deepfake from a threat actor impersonating our CEO via WhatsApp. As the attempted communication was outside of normal business communication channels and due to the employee's suspicion regarding the presence of many of the hallmarks of a social engineering attempt (such as forced urgency), our employee rightly ignored the messages and reported the incident to our internal security team so that we could take steps to both mitigate the threat and raise awareness of the tactic both internally and externally.”</i><br><br> In this example attackers impersonated the CEO of LastPass (T0097.100: Individual Persona, T0143.003: Impersonated Persona), targeting one of its employees over WhatsApp (T0043.001: Use Encrypted Chat Apps) using deepfaked audio (T0088.001: Develop AI-Generated Audio (Deepfakes)). |
| [I00068 Attempted Audio Deepfake Call Targets LastPass Employee](../../generated_pages/incidents/I00068.md) | <i>“While reports of [...] deepfake calls targeting private companies are luckily still rare, LastPass itself experienced a deepfake attempt earlier today that we are sharing with the larger community to help raise awareness that this tactic is spreading and all companies should be on the alert. In our case, an employee received a series of calls, texts, and at least one voicemail featuring an audio deepfake from a threat actor impersonating our CEO via WhatsApp. As the attempted communication was outside of normal business communication channels and due to the employee's suspicion regarding the presence of many of the hallmarks of a social engineering attempt (such as forced urgency), our employee rightly ignored the messages and reported the incident to our internal security team so that we could take steps to both mitigate the threat and raise awareness of the tactic both internally and externally.”</i><br><br> In this example attackers impersonated the CEO of LastPass (T0097.100: Individual Persona, T0143.003: Impersonated Persona), targeting one of its employees over WhatsApp (T0151.004: Chat Platform, T0155.007: Encrypted Communication Channel) using deepfaked audio (T0088.001: Develop AI-Generated Audio (Deepfakes)). |
| [I00069 Uncharmed: Untangling Iran's APT42 Operations](../../generated_pages/incidents/I00069.md) | <I>“[Iranian state-sponsored cyber espionage actor] APT42's cloud operations attack lifecycle can be described in details as follows:<br> <br>- “Social engineering schemes involving decoys and trust building, which includes masquerading as legitimate NGOs and conducting ongoing correspondence with the target, sometimes lasting several weeks. <br>- The threat actor masqueraded as well-known international organizations in the legal and NGO fields and sent emails from domains typosquatting the original NGO domains, for example aspenlnstitute[.]org. <br>- The Aspen Institute became aware of this spoofed domain and collaborated with industry partners, including blocking it in SafeBrowsing, thus protecting users of Google Chrome and additional browsers. <br>- To increase their credibility, APT42 impersonated high-ranking personnel working at the aforementioned organizations when creating the email personas. <br>- APT42 enhanced their campaign credibility by using decoy material inviting targets to legitimate and relevant events and conferences. In one instance, the decoy material was hosted on an attacker-controlled SharePoint folder, accessible only after the victim entered their credentials. Mandiant did not identify malicious elements in the files, suggesting they were used solely to gain the victim's trust.”</I><br><br> In this example APT42, an Iranian state-sponsored cyber espionage actor, created a domain impersonating the existing NGO The Aspen Institute (T0143.003: Impersonated Persona, T0097.207: NGO Persona). They increased the perceived legitimacy of the impersonation by also impersonating high-ranking employees of the NGO (T0097.100: Individual Persona, T0143.003: Impersonated Persona). |
| [I00119 Independent journalist publishes Trump campaign document hacked by Iran despite election interference concerns](../../generated_pages/incidents/I00119.md) | <i>An American journalist who runs an independent newsletter published a document [on 26 Sep 2024] that appears to have been stolen from Donald Trump's presidential campaign — the first public posting of a file that is believed to be part of a dossier that federal officials say is part of an Iranian effort to manipulate the [2024] U.S. election.<br><br>The PDF document is a 271-page opposition research file on former President Donald Trump's running mate, Sen. JD Vance, R-Ohio.<br><br>For more than two months, hackers who the U.S. says are tied to Iran have tried to persuade the American media to cover files they stole. No outlets took the bait.<br><br>But on Thursday, reporter Ken Klippenstein, who self-publishes on Substack after he left The Intercept this year, published one of the files.<br><br>[...]<br><br>Reporters who have received the documents describe the same pattern: An AOL account emails them files, signed by a person using the name “Robert,” who is reluctant to speak to their identity or reasons for wanting the documents to receive coverage.<br><br>NBC News was not part of the Robert persona's direct outreach, but it has viewed its correspondence with a reporter at another publication.<br><br> One of the emails from the Robert persona previously viewed by NBC News included three large PDF files, each corresponding to Trump's three reported finalists for vice president. The Vance file appears to be the one Klippenstein hosts on his site.</i><br><br>In this example hackers attributed to Iran used the Robert persona to email journalists hacked documents (T0146: Account, T0097.100: Individual Persona, T0153.001: Email Platform).<br><br>The journalist Ken Klippenstein used his existing blog on Substack to host a link to download the document (T0089: Obtain Private Documents, T0097.102: Journalist Persona, T0115: Post Content, T0143.001: Authentic Persona, T0152.001: Blogging Platform, T0152.002: Blog, T0150.003: Pre-Existing). |

View File

@@ -7,12 +7,13 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00071 Russia-aligned hacktivists stir up anti-Ukrainian sentiments in Poland](../../generated_pages/incidents/I00071.md) | <i>“The August 17 [2022] Telegram post [which contained a falsified letter from the Ukrainian Minister of Foreign Affairs asking Poland to rename Belwederska Street in Warsaw — the location of the Russian embassy building — as Stepan Bandera Street, in honor of the far-right nationalist who led the Ukrainian Insurgent Army during WWII] also contained screenshots of Facebook posts that appeared on two Facebook accounts belonging to Polish nationals Piotr Górka, an expert in the history of the Polish Air Force, and Dariusz Walusiak, a Polish historian and documentary maker. The Górka post suggested that he fully supported the Polish government's decision to change Belwederska Street to Stepan Bandera Street.<br><br> “In a statement to the DFRLab, Górka said his account was accessed without his consent. “This is not my post loaded to my Facebook page,” he explained. “My site was hacked, some days ago.” At the time of publishing, Piotr Górka's post and his Facebook account were no longer accessible.<br><br> “The post on Górka's Facebook page was shared by Dariusz Walusiak's Facebook account; the account also reposted it on the Facebook walls of more than twenty other Facebook users, including Adam Kalita, currently working at Krakow branch of the Institute of National Remembrance; Jan Kasprzyk, head of the Office for War Veterans and Victims of Oppression; and Alicja Kondraciuk, a Polish public figure living in Krakow.<br><br> “Walusiak's Facebook account is also no longer accessible. Given his work on Polish history and identity, it seems highly unlikely he would support the Bandera measure; the DFRLab has also reached out to him for comment.<br><br> “The fact that Joker DPR's Telegram post included screenshots of their Facebook posts raises the strong possibility that both Facebook accounts were compromised, and that hackers planted false statements on their pages that would seem out of character for them in order to gain further attention to the forged documents.”</i><br><br> In this example, threat actors used compromised accounts (T0141.001: Acquire Compromised Account) of Polish historians who have enough relevant knowledge to plausibly weigh in on the forged letter's narrative (T0143.003: Impersonated Persona, T0097.101: Local Persona, T0097.108: Expert Persona). <br><br> This matches T0097.108: Expert Persona because the impersonation exploited Górka and Walusiak's existing personas as experts in Polish history. |
| [I00071 Russia-aligned hacktivists stir up anti-Ukrainian sentiments in Poland](../../generated_pages/incidents/I00071.md) | <i>“The August 17 [2022] Telegram post [which contained a falsified letter from the Ukrainian Minister of Foreign Affairs asking Poland to rename Belwederska Street in Warsaw — the location of the Russian embassy building — as Stepan Bandera Street, in honor of the far-right nationalist who led the Ukrainian Insurgent Army during WWII] also contained screenshots of Facebook posts that appeared on two Facebook accounts belonging to Polish nationals Piotr Górka, an expert in the history of the Polish Air Force, and Dariusz Walusiak, a Polish historian and documentary maker. The Górka post suggested that he fully supported the Polish government's decision to change Belwederska Street to Stepan Bandera Street.<br><br> “In a statement to the DFRLab, Górka said his account was accessed without his consent. “This is not my post loaded to my Facebook page,” he explained. “My site was hacked, some days ago.” At the time of publishing, Piotr Górka's post and his Facebook account were no longer accessible.<br><br> “The post on Górka's Facebook page was shared by Dariusz Walusiak's Facebook account; the account also reposted it on the Facebook walls of more than twenty other Facebook users, including Adam Kalita, currently working at Krakow branch of the Institute of National Remembrance; Jan Kasprzyk, head of the Office for War Veterans and Victims of Oppression; and Alicja Kondraciuk, a Polish public figure living in Krakow.<br><br> “Walusiak's Facebook account is also no longer accessible. Given his work on Polish history and identity, it seems highly unlikely he would support the Bandera measure; the DFRLab has also reached out to him for comment.<br><br> “The fact that Joker DPR's Telegram post included screenshots of their Facebook posts raises the strong possibility that both Facebook accounts were compromised, and that hackers planted false statements on their pages that would seem out of character for them in order to gain further attention to the forged documents.”</i><br><br> In this example, threat actors used compromised accounts of Polish historians who have enough relevant knowledge to plausibly weigh in on the forged letter's narrative (T0143.003: Impersonated Persona, T0097.101: Local Persona, T0097.108: Expert Persona, T0146: Account, T0150.005: Compromised, T0151.001: Social Media Platform). <br><br> This matches T0097.108: Expert Persona because the impersonation exploited Górka and Walusiak's existing personas as experts in Polish history. |
| [I00076 Network of Social Media Accounts Impersonates U.S. Political Candidates, Leverages U.S. and Israeli Media in Support of Iranian Interests](../../generated_pages/incidents/I00076.md) | <i>“In addition to directly posting material on social media, we observed some personas in the network [of inauthentic accounts attributed to Iran] leverage legitimate print and online media outlets in the U.S. and Israel to promote Iranian interests via the submission of letters, guest columns, and blog posts that were then published. We also identified personas that we suspect were fabricated for the sole purpose of submitting such letters, but that do not appear to maintain accounts on social media. The personas claimed to be based in varying locations depending on the news outlets they were targeting for submission; for example, a persona that listed their location as Seattle, WA in a letter submitted to the Seattle Times subsequently claimed to be located in Baytown, TX in a letter submitted to The Baytown Sun. Other accounts in the network then posted links to some of these letters on social media.”</i><br><br> In this example actors fabricated individuals who lived in areas which were being targeted for influence through the use of letters to local papers (T0097.101: Local Persona, T0143.002: Fabricated Persona). |
| [I00078 Metas September 2020 Removal of Coordinated Inauthentic Behavior](../../generated_pages/incidents/I00078.md) | <i>“[Meta has] removed one Page, five Facebook accounts, one Group and three Instagram accounts for foreign or government interference which is coordinated inauthentic behavior on behalf of a foreign or government entity. This small network originated in Russia and focused primarily on Turkey and Europe, and also on the United States.<br><br> “This operation relied on fake accounts — some of which had been already detected and removed by our automated systems — to manage their Page and their Group, and to drive people to their site purporting to be an independent think-tank based primarily in Turkey. These accounts posed as locals based in Turkey, Canada and the US. They also recruited people to write for their website. This network had almost no following on our platforms when we removed it.”</i><br><br> Meta identified that a network of accounts originating in Russia was driving people off-platform to a site which presented itself as a think-tank (T0097.204: Think Tank Persona). Meta did not make an attribution about the authenticity of this off-site think tank, so neither T0143.001: Authentic Persona nor T0143.002: Fabricated Persona is used here.<br><br> Meta had access to technical data for accounts on its platform, and asserted that they were fabricated individuals posing as locals who recruited targets to write content for their website (T0097.101: Local Persona, T0097.106: Recruiter Persona, T0143.002: Fabricated Persona). |
| [I00081 Belarus KGB created fake accounts to criticize Poland during border crisis, Facebook parent company says](../../generated_pages/incidents/I00081.md) | <i>“Meta said it also removed 31 Facebook accounts, four groups, two events and four Instagram accounts that it believes originated in Poland and targeted Belarus and Iraq. Those allegedly fake accounts posed as Middle Eastern migrants posting about the border crisis. Meta did not link the accounts to a specific group.<br><br> ““These fake personas claimed to be sharing their own negative experiences of trying to get from Belarus to Poland and posted about migrants difficult lives in Europe,” Meta said. “They also posted about Polands strict anti-migrant policies and anti-migrant neo-Nazi activity in Poland. They also shared links to news articles criticizing the Belarusian governments handling of the border crisis and off-platform videos alleging migrant abuse in Europe.””</i><br><br> In this example accounts falsely presented themselves as having local insight into the border crisis narrative (T0097.101: Local Persona, T0143.002: Fabricated Persona). |
| [I00086 #WeAreNotSafe Exposing How a Post-October 7th Disinformation Network Operates on Israeli Social Media](../../generated_pages/incidents/I00086.md) | Accounts which were identified as part of “a sophisticated and extensive coordinated network orchestrating a disinformation campaign targeting Israeli digital spaces since October 7th, 2023” were presenting themselves as locals to Israel (T0097.101: Local Persona):<br><br><i>“Unlike usual low-effort fake accounts, these accounts meticulously mimic young Israelis. They stand out due to the extraordinary lengths taken to ensure their authenticity, from unique narratives to the content they produce to their seemingly authentic interactions.”</i> |
| [I00087 Challenging Truth and Trust: A Global Inventory of Organized Social Media Manipulation](../../generated_pages/incidents/I00087.md) | <i>“Another actor operating in China is the American-based company Devumi. Most of the Twitter accounts managed by Devumi resemble real people, and some are even associated with a kind of large-scale social identity theft. At least 55,000 of the accounts use the names, profile pictures, hometowns and other personal details of real Twitter users, including minors, according to The New York Times (Confessore et al., 2018).”</i><br><br>In this example accounts impersonated real locals while spreading operation narratives (T0143.003: Impersonated Persona, T0097.101: Local Persona). The impersonation included stealing the legitimate accounts' profile pictures (T0145.001: Copy Account Imagery). |
| [I00125 The Agency](../../generated_pages/incidents/I00125.md) | In 2014 threat actors attributed to Russia spread the false narrative that a local chemical plant had leaked toxic fumes. This report discusses aspects of the operation:<br><br><i>[The chemical plant leak] hoax was just one in a wave of similar attacks during the second half of last year. On Dec. 13, two months after a handful of Ebola cases in the United States touched off a minor media panic, many of the same Twitter accounts used to spread the Columbian Chemicals hoax began to post about an outbreak of Ebola in Atlanta. [...] Again, the attention to detail was remarkable, suggesting a tremendous amount of effort. A YouTube video showed a team of hazmat-suited medical workers transporting a victim from the airport. Beyoncés recent single “7/11” played in the background, an apparent attempt to establish the videos contemporaneity. A truck in the parking lot sported the logo of the Hartsfield-Jackson Atlanta International Airport.</i><br><br>Accounts which previously presented as Louisiana locals were repurposed for use in a different campaign, this time presenting as locals to Atlanta, a place over 500 miles away from Louisiana and in a different timezone (T0146: Account, T0097.101: Local Persona, T0143.002: Fabricated Persona, T0151.008: Microblogging Platform, T0150.004: Repurposed). <br><br>A video was created which appeared to support the campaigns narrative (T0087: Develop Video-Based Content), with great attention given to small details which made the video appear more legitimate. |
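The repurposed-persona pattern in the rows above (accounts presenting as Louisiana locals in one campaign and as Atlanta locals in the next, or as Seattle-based letter writers who later claim Baytown, TX) suggests a simple triage heuristic: flag accounts whose self-declared location changes across collections. A minimal sketch, using hypothetical post records rather than any dataset from these reports:

```python
from collections import defaultdict

# Hypothetical records of (account_handle, self-declared location).
# In practice these would come from archived profile snapshots or post metadata.
posts = [
    ("@BPLandry", "Louisiana"),
    ("@BPLandry", "Louisiana"),
    ("@BPLandry", "Atlanta, GA"),   # same account, new claimed locale
    ("@CajunMom", "Louisiana"),
]

def flag_relocated_personas(posts):
    """Return accounts that claimed more than one location over time."""
    locations = defaultdict(set)
    for handle, location in posts:
        locations[handle].add(location)
    return {h: sorted(locs) for h, locs in locations.items() if len(locs) > 1}

print(flag_relocated_personas(posts))
# → {'@BPLandry': ['Atlanta, GA', 'Louisiana']}
```

A location change is only a weak signal on its own (people do move); it becomes interesting when an account's claimed locale shifts in step with a new campaign's target geography, as in the Columbian Chemicals and Atlanta Ebola hoaxes.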


@ -10,6 +10,9 @@
| [I00076 Network of Social Media Accounts Impersonates U.S. Political Candidates, Leverages U.S. and Israeli Media in Support of Iranian Interests](../../generated_pages/incidents/I00076.md) | <i>“Accounts in the network [of inauthentic accounts attributed to Iran], under the guise of journalist personas, also solicited various individuals over Twitter for interviews and chats, including real journalists and politicians. The personas appear to have successfully conducted remote video and audio interviews with U.S. and UK-based individuals, including a prominent activist, a radio talk show host, and a former U.S. Government official, and subsequently posted the interviews on social media, showing only the individual being interviewed and not the interviewer. The interviewees expressed views that Iran would likely find favorable, discussing topics such as the February 2019 Warsaw summit, an attack on a military parade in the Iranian city of Ahvaz, and the killing of Jamal Khashoggi.<br><br> “The provenance of these interviews appear to have been misrepresented on at least one occasion, with one persona appearing to have falsely claimed to be operating on behalf of a mainstream news outlet; a remote video interview with a US-based activist about the Jamal Khashoggi killing was posted by an account adopting the persona of a journalist from the outlet Newsday, with the Newsday logo also appearing in the video. We did not identify any Newsday interview with the activist in question on this topic. In another instance, a persona posing as a journalist directed tweets containing audio of an interview conducted with a former U.S. 
Government official at real media personalities, calling on them to post about the interview.”</i><br><br> In this example actors fabricated journalists (T0097.102: Journalist Persona, T0143.002: Fabricated Persona) who worked at existing news outlets (T0097.202: News Outlet Persona, T0143.003: Impersonated Persona) in order to conduct interviews with targeted individuals. |
| [I00080 Hundreds Of Propaganda Accounts Targeting Iran And Qatar Have Been Removed From Facebook](../../generated_pages/incidents/I00080.md) | <i>“One example of a fake reporter account targeting Americans is “Jenny Powell,” a self-described Washington-based journalist, volunteer, and environmental activist. At first glance, Powell's Twitter timeline looks like it belongs to a young and eager reporter amplifying her interests. But her profile photo is a stock image, and many of her links go to the propaganda sites.<br><br> “Powell, who joined the platform just last month, shares links to stories from major US news media outlets, retweets local news about Washington, DC, and regularly promotes content from The Foreign Code and The Economy Club. Other fake journalist accounts behaved similarly to Powell and had generic descriptions. One of the accounts, for a fake Bruce Lopez in Louisiana, has a bio that describes him as a “Correspondent Traveler (noun) (linking verb) (noun/verb/adjective),” which appears to reveal the formula used to write Twitter bios for the accounts.”</i><br><br> The Jenny Powell account used in this influence operation presents as both a journalist and an activist (T0097.102: Journalist Persona, T0097.103: Activist Persona, T0143.002: Fabricated Persona). This example shows how threat actors can easily follow a template to present a fabricated persona to their target audience (T0144.002: Persona Template). |
| [I00082 Metas November 2021 Adversarial Threat Report ](../../generated_pages/incidents/I00082.md) | <I>“[Meta] removed 41 Facebook accounts, five Groups, and four Instagram accounts for violating our policy against coordinated inauthentic behavior. This activity originated in Belarus and primarily targeted audiences in the Middle East and Europe.<br><br> “The core of this activity began in October 2021, with some accounts created as recently as mid-November. The people behind it used newly-created fake accounts — many of which were detected and disabled by our automated systems soon after creation — to pose as journalists and activists from the European Union, particularly Poland and Lithuania. Some of the accounts used profile photos likely generated using artificial intelligence techniques like generative adversarial networks (GAN). These fictitious personas posted criticism of Poland in English, Polish, and Kurdish, including pictures and videos about Polish border guards allegedly violating migrants rights, and compared Polands treatment of migrants against other countries. They also posted to Groups focused on the welfare of migrants in Europe. A few accounts posted in Russian about relations between Belarus and the Baltic States.”</i><br><br> This example shows how accounts identified as participating in coordinated inauthentic behaviour were presenting themselves as journalists and activists while spreading operation narratives (T0097.102: Journalist Persona, T0097.103: Activist Persona).<br><br> Additionally, analysts at Meta identified accounts which were participating in coordinated inauthentic behaviour that had likely used AI-Generated images as their profile pictures (T0145.002: AI-Generated Account Imagery). |
| [I00096 China ramps up use of AI misinformation](../../generated_pages/incidents/I00096.md) | The Microsoft Threat Analysis Centre (MTAC) published a report documenting the use of AI by pro-Chinese threat actors:<br><br><i>On 13 January, Spamouflage [(a Pro-Chinese Communist Party actor)] posted audio clips to YouTube of independent candidate [for Taiwan's Jan 2024 presidential election] Terry Gou who also founded electronics giant Foxconn in which Gou endorsed another candidate in the race. This clip was almost certainly AI-generated, and it was swiftly removed by YouTube. A fake letter purporting to be from Gou, endorsing the same candidate, had already circulated Gou had of course made no such endorsement.</i><br><br>Here Spamouflage used an account on YouTube to post AI-generated audio impersonating an electoral candidate (T0146: Account, T0152.006: Video Platform, T0115: Post Content, T0088.001: Develop AI-Generated Audio (Deepfakes), T0143.003: Impersonated Persona, T0097.110: Party Official Persona).<br><br><i>Spamouflage also exploited AI-powered video platform CapCut which is owned by TikTok backers ByteDance to generate fake news anchors which were used in a variety of campaigns targeting the various presidential candidates in Taiwan.</i><br><br>Spamouflage created accounts on CapCut, which it used to create AI-generated videos of fabricated news anchors (T0146: Account, T0154.002: AI Media Platform, T0087.001: Develop AI-Generated Video (Deepfakes), T0143.002: Fabricated Persona, T0097.102: Journalist Persona). |
| [I00119 Independent journalist publishes Trump campaign document hacked by Iran despite election interference concerns](../../generated_pages/incidents/I00119.md) | <i>An American journalist who runs an independent newsletter published a document [on 26 Sep 2024] that appears to have been stolen from Donald Trumps presidential campaign — the first public posting of a file that is believed to be part of a dossier that federal officials say is part of an Iranian effort to manipulate the [2024] U.S. election.<br><br>The PDF document is a 271-page opposition research file on former President Donald Trumps running mate, Sen. JD Vance, R-Ohio.<br><br>For more than two months, hackers who the U.S. says are tied to Iran have tried to persuade the American media to cover files they stole. No outlets took the bait.<br><br>But on Thursday, reporter Ken Klippenstein, who self-publishes on Substack after he left The Intercept this year, published one of the files.<br><br>[...]<br><br>Reporters who have received the documents describe the same pattern: An AOL account emails them files, signed by a person using the name “Robert,” who is reluctant to speak to their identity or reasons for wanting the documents to receive coverage.<br><br>NBC News was not part of the Robert personas direct outreach, but it has viewed its correspondence with a reporter at another publication.<br><br> One of the emails from the Robert persona previously viewed by NBC News included three large PDF files, each corresponding to Trumps three reported finalists for vice president. 
The Vance file appears to be the one Klippenstein hosts on his site.</i><br><br>In this example hackers attributed to Iran used the Robert persona to email hacked documents to journalists (T0146: Account, T0097.100: Individual Persona, T0153.001: Email Platform).<br><br>The journalist Ken Klippenstein used his existing blog on Substack to host a link to download the document (T0089: Obtain Private Documents, T0097.102: Journalist Persona, T0115: Post Content, T0143.001: Authentic Persona, T0152.001: Blogging Platform, T0152.002: Blog, T0150.003: Pre-Existing). |
| [I00126 Charming Kitten Updates POWERSTAR with an InterPlanetary Twist](../../generated_pages/incidents/I00126.md) | <i>The target of the recently observed [highly targeted spearphishing attack by “Charming Kitten”, a hacker group attributed to Iran] had published an article related to Iran. The publicity appears to have garnered the attention of Charming Kitten, who subsequently created an email address to impersonate a reporter of an Israeli media organization in order to send the target an email. Prior to sending malware to the target, the attacker simply asked if the target would be open to reviewing a document they had written related to US foreign policy. The target agreed to do so, since this was not an unusual request; they are frequently asked by journalists to review opinion pieces relating to their field of work.<br><br>In an effort to further gain the targets confidence, Charming Kitten continued the interaction with another benign email containing a list of questions, to which the target then responded with answers. After multiple days of benign and seemingly legitimate interaction, Charming Kitten finally sent a “draft report”; this was the first time anything opaquely malicious occurred. The “draft report” was, in fact, a password-protected RAR file containing a malicious LNK file. The password for the RAR file was provided in a subsequent email.</i><br><br>In this example, threat actors created an email address on a domain which impersonated an existing Israeli news organisation impersonating a reporter who worked there (T0097.102: Journalist Persona, T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0149.003: Lookalike Domain, T0149.002: Email Domain) in order to convince the target to download a document containing malware (T0085.004: Develop Document, T0147.003: Malware). |
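The lookalike email domain in the Charming Kitten example (T0149.003: Lookalike Domain) is the kind of artifact that can be screened for with plain string similarity before deeper investigation. A minimal sketch using Python's standard library; the domain list and threshold are illustrative assumptions, not drawn from the report:

```python
import difflib

# Illustrative list of legitimate newsroom domains (not from the report).
LEGITIMATE_DOMAINS = ["haaretz.com", "timesofisrael.com", "newsday.com"]

def lookalike_candidates(domain, legit=LEGITIMATE_DOMAINS, threshold=0.8):
    """Return (legitimate_domain, similarity) pairs this domain resembles.

    High similarity to a known-good domain the sender does not control is
    a signal worth triaging, not proof of abuse.
    """
    matches = []
    for real in legit:
        ratio = difflib.SequenceMatcher(None, domain, real).ratio()
        if domain != real and ratio >= threshold:
            matches.append((real, round(ratio, 2)))
    return matches

print(lookalike_candidates("newsdey.com"))  # → [('newsday.com', 0.91)]
```

Edit-distance screening alone misses lookalikes built from added words (e.g. a hypothetical `newsday-press.com`), so production pipelines typically combine it with homoglyph normalisation and WHOIS registration checks.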


@ -7,6 +7,7 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00127 Iranian APTs Dress Up as Hacktivists for Disruption, Influence Ops](../../generated_pages/incidents/I00127.md) | <i>Iranian state-backed advanced persistent threat (APT) groups have been masquerading as hacktivists, claiming attacks against Israeli critical infrastructure and air defense systems.<br><br>[...]<br><br>What's clearer are the benefits of the model itself: creating a layer of plausible deniability for the state, and the impression among the public that their attacks are grassroots-inspired. While this deniability has always been a key driver with state-sponsored cyberattacks, researchers characterized this instance as noteworthy for the effort behind the charade.<br><br>"We've seen a lot of hacktivist activity that seems to be nation-states trying to have that 'deniable' capability," Adam Meyers, CrowdStrike senior vice president for counter adversary operations said in a press conference this week. "And so these groups continue to maintain activity, moving from what was traditionally website defacements and DDoS attacks, into a lot of hack and leak operations."<br><br>To sell the persona, faketivists like to adopt the aesthetic, rhetoric, tactics, techniques, and procedures (TTPs), and sometimes the actual names and iconography associated with legitimate hacktivist outfits. Keen eyes will spot that they typically arise just after major geopolitical events, without an established history of activity, in alignment with the interests of their government sponsors.<br><br>Oftentimes, it's difficult to separate the faketivists from the hacktivists, as each might promote and support the activities of the other.</i><br><br>In this example analysts from CrowdStrike assert that hacker groups took on the persona of hacktivists to disguise the state-backed nature of their cyber attack campaign (T0097.104: Hacktivist Persona). 
At times state-backed hacktivists will impersonate existing hacktivist organisations (T0097.104: Hacktivist Persona, T0143.003: Impersonated Persona). |


@ -7,6 +7,7 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00118 War Thunder players are once again leaking sensitive military technology information on a video game forum](../../generated_pages/incidents/I00118.md) | <i>In an effort to prove that the developers behind a popular multiplayer vehicle combat game had made a mistake, a player went ahead and published classified British military documents about one of the real-life tanks featured in the game.<br><br>This truly bizarre turn of events recently occurred in the public forum for War Thunder, a free-to-play multiplayer combat sim featuring modern land, air, and sea craft. Getting a small detail wrong on a piece of equipment might not be a big deal for the average gamer, but for the War Thunder crowd it sure as hell is. With 25,000 devoted players, the game very much bills itself as the military vehicle combat simulator.<br><br>A player, who identified himself as a British tank commander, claimed that the game's developers at Gaijin Entertainment had inaccurately represented the Challenger 2 main battle tank used by the British military.<br><br>The self-described tank commander's bio listed his location as Tidworth Camp in Wiltshire, England, according to the UK Defense Journal, which reported that the base is home to the Royal Tank Regiment, which fields Challenger 2 tanks.<br><br>The player, who went by the handle Pyrophoric, reportedly shared an image on the War Thunder forum of the tank's specs that were pulled from the Challenger 2's Army Equipment Support Publication, which is essentially a technical manual. 
<br><br>[...]<br><br>A moderator for the forum, whose handle is “Templar_”, explained that the developer had removed the material after they received confirmation from the Ministry of Defense that the document is still in fact classified.</i><br><br>A user of War Thunders forums posted confidential documents to win an argument (T0089.001: Obtain Authentic Documents, T0146: Account, T0097.105: Military Personnel Persona, T0115: Post Content, T0143.001: Authentic Persona, T0151.009: Legacy Online Forum Platform). |


@ -7,7 +7,7 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00071 Russia-aligned hacktivists stir up anti-Ukrainian sentiments in Poland](../../generated_pages/incidents/I00071.md) | <i>“The August 17 [2022] Telegram post [which contained a falsified letter from the Ukrainian Minister of Foreign Affairs asking Poland to rename Belwederska Street in Warsaw — the location of the Russian embassy building — as Stepan Bandera Street, in honor of the far-right nationalist who led the Ukrainian Insurgent Army during WWII] also contained screenshots of Facebook posts that appeared on two Facebook accounts belonging to Polish nationals Piotr Górka, an expert in the history of the Polish Air Force, and Dariusz Walusiak, a Polish historian and documentary maker. The Górka post suggested that he fully supported the Polish governments decision to change Belwederska Street to Stepan Bandera Street.<br><br> “In a statement to the DFRLab, Górka said his account was accessed without his consent. “This is not my post loaded to my Facebook page,” he explained. “My site was hacked, some days ago.” At the time of publishing, Piotr Górkas post and his Facebook account were no longer accessible.<br><br> “The post on Górkas Facebook page was shared by Dariusz Walusiaks Facebook account; the account also reposted it on the Facebook walls of more than twenty other Facebook users, including Adam Kalita, currently working at Krakow branch of the Institute of National Remembrance; Jan Kasprzyk, head of the Office for War Veterans and Victims of Oppression; and Alicja Kondraciuk, a Polish public figure living in Krakow.<br><br> “Walusiaks Facebook account is also no longer accessible. 
Given his work on Polish history and identity, it seems highly unlikely he would support the Bandera measure; the DFRLab has also reached out to him for comment.<br><br> “The fact that Joker DPRs Telegram post included screenshots of their Facebook posts raises the strong possibility that both Facebook accounts were compromised, and that hackers planted false statements on their pages that would seem out of character for them in order to gain further attention to the forged documents.”</I><br><br> In this example, threat actors used compromised accounts (T0141.001: Acquire Compromised Account) of Polish historians who have enough relevant knowledge to plausibly weigh in on the forged letters narrative (T0143.003: Impersonated Persona, T0097.101: Local Persona, T0097.108: Expert Persona). <br><br> This matches T0097.108: Expert Persona because the impersonation exploited Górka and Walusiaks existing personas as experts in Polish history. |
| [I00071 Russia-aligned hacktivists stir up anti-Ukrainian sentiments in Poland](../../generated_pages/incidents/I00071.md) | <i>“The August 17 [2022] Telegram post [which contained a falsified letter from the Ukrainian Minister of Foreign Affairs asking Poland to rename Belwederska Street in Warsaw — the location of the Russian embassy building — as Stepan Bandera Street, in honor of the far-right nationalist who led the Ukrainian Insurgent Army during WWII] also contained screenshots of Facebook posts that appeared on two Facebook accounts belonging to Polish nationals Piotr Górka, an expert in the history of the Polish Air Force, and Dariusz Walusiak, a Polish historian and documentary maker. The Górka post suggested that he fully supported the Polish governments decision to change Belwederska Street to Stepan Bandera Street.<br><br> “In a statement to the DFRLab, Górka said his account was accessed without his consent. “This is not my post loaded to my Facebook page,” he explained. “My site was hacked, some days ago.” At the time of publishing, Piotr Górkas post and his Facebook account were no longer accessible.<br><br> “The post on Górkas Facebook page was shared by Dariusz Walusiaks Facebook account; the account also reposted it on the Facebook walls of more than twenty other Facebook users, including Adam Kalita, currently working at Krakow branch of the Institute of National Remembrance; Jan Kasprzyk, head of the Office for War Veterans and Victims of Oppression; and Alicja Kondraciuk, a Polish public figure living in Krakow.<br><br> “Walusiaks Facebook account is also no longer accessible. 
Given his work on Polish history and identity, it seems highly unlikely he would support the Bandera measure; the DFRLab has also reached out to him for comment.<br><br> “The fact that Joker DPRs Telegram post included screenshots of their Facebook posts raises the strong possibility that both Facebook accounts were compromised, and that hackers planted false statements on their pages that would seem out of character for them in order to gain further attention to the forged documents.”</I><br><br> In this example, threat actors used compromised accounts of Polish historians who have enough relevant knowledge to plausibly weigh in on the forged letters narrative (T0143.003: Impersonated Persona, T0097.101: Local Persona, T0097.108: Expert Persona, T0146: Account, T0150.005: Compromised, T0151.001: Social Media Platform). <br><br> This matches T0097.108: Expert Persona because the impersonation exploited Górka and Walusiaks existing personas as experts in Polish history. |
| [I00079 Three thousand fake tanks](../../generated_pages/incidents/I00079.md) | <i>“The sixth [website to repost a confirmed false narrative investigated in this report] is an apparent think tank, the Center for Global Strategic Monitoring. This website describes itself, in English apparently written by a non-native speaker, as a “nonprofit and nonpartisan research and analysis institution dedicated to providing insights of the think tank community publications”. It does, indeed, publish think-tank reports on issues such as Turkey and US-China relations; however, the reports are the work of other think tanks, often unattributed (the two mentioned in this sentence were actually produced by the Brookings Institution, although the website makes no mention of the fact). It also fails to provide an address, or any other contact details other than an email, and its (long) list of experts includes entries apparently copied and pasted from other institutions. Thus, the “think tank” website which shared the fake story appears to be a fake itself.”</i> In this example a website which amplified a false narrative presented itself as a think tank (T0097.204: Think Tank Persona).<br><br> This is an entirely fabricated persona (T0143.002: Fabricated Persona); it republished content from other think tanks without attribution (T0084.002: Plagiarise Content) and fabricated experts (T0097.108: Expert Persona, T0143.002: Fabricated Persona) to make it more believable that they were a real think tank. |


@ -1,6 +1,6 @@
# Technique T0097.109: Romantic Suitor Persona
* **Summary**: A person with a romantic suitor persona presents themselves as seeking a romantic or physical connection with another person.<br><br>While presenting as seeking a romantic or physical connection is not an indication of inauthentic behaviour, threat actors can use dating apps, social media channels or dating websites to fabricate romantic suitors to lure targets they can blackmail, extract information from, deceive or trick into giving them money (T0143.002: Fabricated Persona, T0097.109: Romantic Suitor Persona).<br><br>Honeypotting in espionage and Pig Butchering in scamming are commonly associated with romantic suitor personas.<br><br><b>Associated Techniques and Sub-techniques</b><br><b>T0104.002: Dating App:</b> Analysts can use this sub-technique for tagging cases where an account has been identified as using a dating platform.
* **Summary**: A person with a romantic suitor persona presents themselves as seeking a romantic or physical connection with another person.<br><br>While presenting as seeking a romantic or physical connection is not an indication of inauthentic behaviour, threat actors can use dating apps, social media channels or dating websites to fabricate romantic suitors to lure targets they can blackmail, extract information from, deceive or trick into giving them money (T0143.002: Fabricated Persona, T0097.109: Romantic Suitor Persona).<br><br>Honeypotting in espionage and Pig Butchering in scamming are commonly associated with romantic suitor personas.<br><br><b>Associated Techniques and Sub-techniques</b><br><b>T0151.017: Dating Platform:</b> Analysts can use this sub-technique for tagging cases where an account has been identified as using a dating platform.
* **Belongs to tactic stage**: TA16


@ -7,8 +7,10 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00065 'Ghostwriter' Influence Campaign: Unknown Actors Leverage Website Compromises and Fabricated Content to Push Narratives Aligned With Russian Security Interests](../../generated_pages/incidents/I00065.md) | <i>“Overall, narratives promoted in the five operations appear to represent a concerted effort to discredit the ruling political coalition, widen existing domestic political divisions and project an image of coalition disunity in Poland. In each incident, content was primarily disseminated via Twitter, Facebook, and/or Instagram accounts belonging to Polish politicians, all of whom have publicly claimed their accounts were compromised at the times the posts were made.”</i><br><br>This example demonstrates how threat actors can use compromised accounts to distribute inauthentic content while exploiting the legitimate account holder's persona (T0097.110: Party Official Persona, T0143.003: Impersonated Persona, T0146: Account, T0150.005: Compromised). |
| [I00075 How Russia Meddles Abroad for Profit: Cash, Trolls and a Cult Leader](../../generated_pages/incidents/I00075.md) | <I>“In the campaign's final weeks, Pastor Mailhol said, the team of Russians made a request: Drop out of the race and support Mr. Rajoelina. He refused.<br><br> “The Russians made the same proposal to the history professor running for president, saying, “If you accept this deal you will have money” according to Ms. Rasamimanana, the professor's campaign manager.<br><br> When the professor refused, she said, the Russians created a fake Facebook page that mimicked his official page and posted an announcement on it that he was supporting Mr. Rajoelina.”</i><br><br> In this example actors created online accounts styled to look like official pages to trick targets into thinking that the presidential candidate had announced that they were dropping out of the election (T0097.110: Party Official Persona, T0143.003: Impersonated Persona). |
| [I00076 Network of Social Media Accounts Impersonates U.S. Political Candidates, Leverages U.S. and Israeli Media in Support of Iranian Interests](../../generated_pages/incidents/I00076.md) | <i>“Some Twitter accounts in the network [of inauthentic accounts attributed to Iran] impersonated Republican political candidates that ran for House of Representatives seats in the 2018 U.S. congressional midterms. These accounts appropriated the candidates' photographs and, in some cases, plagiarized tweets from the real individuals' accounts. Aside from impersonating real U.S. political candidates, the behavior and activity of these accounts resembled that of the others in the network.<br><br> “For example, the account @livengood_marla impersonated Marla Livengood, a 2018 candidate for California's 9th Congressional District, using a photograph of Livengood and a campaign banner for its profile and background pictures. The account began tweeting on Sept. 24, 2018, with its first tweet plagiarizing one from Livengood's official account earlier that month”<br><br> [...]<br><br> “In another example, the account @ButlerJineea impersonated Jineea Butler, a 2018 candidate for New York's 13th Congressional District, using a photograph of Butler for its profile picture and incorporating her campaign slogans into its background picture, as well as claiming in its Twitter bio to be a “US House candidate, NY-13” and linking to Butler's website, jineeabutlerforcongress[.]com.”</I><br><br> In this example actors impersonated existing political candidates (T0097.110: Party Official Persona, T0143.003: Impersonated Persona), strengthening the impersonation by copying the legitimate accounts' imagery (T0145.001: Copy Account Imagery) and copying their previous posts (T0084.002: Plagiarise Content). |
| [I00096 China ramps up use of AI misinformation](../../generated_pages/incidents/I00096.md) | The Microsoft Threat Analysis Centre (MTAC) published a report documenting the use of AI by pro-Chinese threat actors:<br><br><i>On 13 January, Spamouflage [(a Pro-Chinese Communist Party actor)] posted audio clips to YouTube of independent candidate [for Taiwan's Jan 2024 presidential election] Terry Gou who also founded electronics giant Foxconn in which Gou endorsed another candidate in the race. This clip was almost certainly AI-generated, and it was swiftly removed by YouTube. A fake letter purporting to be from Gou, endorsing the same candidate, had already circulated Gou had of course made no such endorsement.</i><br><br>Here Spamouflage used an account on YouTube to post AI-generated audio impersonating an electoral candidate (T0146: Account, T0152.006: Video Platform, T0115: Post Content, T0088.001: Develop AI-Generated Audio (Deepfakes), T0143.003: Impersonated Persona, T0097.110: Party Official Persona).<br><br><i>Spamouflage also exploited AI-powered video platform CapCut which is owned by TikTok backers ByteDance to generate fake news anchors which were used in a variety of campaigns targeting the various presidential candidates in Taiwan.</i><br><br>Spamouflage created accounts on CapCut, which it used to create AI-generated videos of fabricated news anchors (T0146: Account, T0154.002: AI Media Platform, T0087.001: Develop AI-Generated Video (Deepfakes), T0143.002: Fabricated Persona, T0097.102: Journalist Persona). |


@ -10,7 +10,7 @@
| [I00071 Russia-aligned hacktivists stir up anti-Ukrainian sentiments in Poland](../../generated_pages/incidents/I00071.md) | <i>“On August 16, 2022, pro-Kremlin Telegram channel Joker DPR (Джокер ДНР) published a forged letter allegedly written by Ukrainian Foreign Minister Dmytro Kuleba. In the letter, Kuleba supposedly asked relevant Polish authorities to rename Belwederska Street in Warsaw — the location of the Russian embassy building — as Stepan Bandera Street, in honor of the far-right nationalist who led the Ukrainian Insurgent Army during WWII.<br><br> [...]<br><br> The letter is not dated, and Dmytro Kuleba's signature seems to be copied from a publicly available letter signed by him in 2021.”</i><br><br> In this example the Telegram channel Joker DPR published a forged letter (T0085.004: Develop Document) in which they impersonated the Ukrainian Minister of Foreign Affairs (T0097.111: Government Official Persona, T0143.003: Impersonated Persona), using Ministry letterhead (T0097.206: Government Institution Persona, T0143.003: Impersonated Persona). |
| [I00084 Russia turns its diplomats into disinformation warriors](../../generated_pages/incidents/I00084.md) | <i>“After the European Union banned Kremlin-backed media outlets and social media giants demoted their posts for peddling falsehoods about the war in Ukraine, Moscow has turned to its cadre of diplomats, government spokespeople and ministers — many of whom have extensive followings on social media — to promote disinformation about the conflict in Eastern Europe, according to four EU and United States officials.”</i><br><br>In this example authentic Russian government officials used their own accounts to promote false narratives (T0143.001: Authentic Persona, T0097.111: Government Official Persona).<br><br>The use of accounts managed by authentic government officials and diplomats to spread false narratives makes it harder for platforms to enforce content moderation, because of the political ramifications they may face for censoring elected officials (T0131: Exploit TOS/Content Moderation). For example, Twitter previously argued that official channels of world leaders are not removed due to the high public interest associated with their activities. |
| [I00085 China's large-scale media push: Attempts to influence Swedish media](../../generated_pages/incidents/I00085.md) | <i>“Four media companies Svenska Dagbladet, Expressen, Sveriges Radio, and Sveriges Television stated that they had been contacted by the Chinese embassy on several occasions, and that they, for instance, had been criticized on their publications, both by letters and e-mails.<br><br> The media company Svenska Dagbladet, had been contacted on several occasions in the past two years, including via e-mails directly from the Chinese ambassador to Sweden. Several times, China and the Chinese ambassador had criticized the media company's publications regarding the conditions in China. Individual reporters also reported having been subjected to criticism.<br><br> The tabloid Expressen had received several letters and e-mails from the embassy, e-mails containing criticism and threatening formulations regarding the coverage of the Swedish book publisher Gui Minhai, who has been imprisoned in China since 2015. Formulations such as “media tyranny” could be found in the e-mails.”</i><br><br> In this case, the Chinese ambassador is using their official role (T0143.001: Authentic Persona, T0097.111: Government Official Persona) to try to influence Swedish press. A government official trying to interfere in other countries' media activities could be a violation of press freedom. In this specific case, the Chinese diplomats are trying to silence criticism against China (T0139.002: Silence). |
| [I00093 China Falsely Denies Disinformation Campaign Targeting Canada's Prime Minister](../../generated_pages/incidents/I00093.md) | <i>“On October 23, Canada's Foreign Ministry said it had discovered a disinformation campaign, likely tied to China, aimed at discrediting dozens of Canadian politicians, including Prime Minister Justin Trudeau.<br><br> “The ministry said the campaign took place in August and September. It used new and hijacked social media accounts to bulk-post messages targeting Canadian politicians (T0141.001: Acquire Compromised Account).<br><br> “A Chinese Embassy in Canada spokesperson dismissed Canada's accusation as baseless.<br><br> ““Canada was a downright liar and disseminator of false information… Beijing has never meddled in another nation's domestic affairs.”<br><br> “A Chinese Embassy in Canada spokesperson dismissed Canada's accusation as baseless.<br><br> “That is false.<br><br> “The Canadian government's report is based on an investigation conducted by its Rapid Response Mechanism cyber intelligence unit in cooperation with the social media platforms.<br><br> “The investigation exposed China's disinformation campaign dubbed “Spamouflage” -- for its tactic of using “a network of new or hijacked social media accounts that posts and increases the number of propaganda messages across multiple social media platforms including Facebook, X/Twitter, Instagram, YouTube, Medium, Reddit, TikTok, and LinkedIn.””</i><br><br> In this case a network of accounts attributed to China were identified operating on multiple platforms. The report was dismissed as false information by an official in the Chinese Embassy in Canada (T0143.001: Authentic Persona, T0097.111: Government Official Persona, T0129.006: Deny Involvement). |
| [I00093 China Falsely Denies Disinformation Campaign Targeting Canada's Prime Minister](../../generated_pages/incidents/I00093.md) | <i>“On October 23, Canada's Foreign Ministry said it had discovered a disinformation campaign, likely tied to China, aimed at discrediting dozens of Canadian politicians, including Prime Minister Justin Trudeau.<br><br> “The ministry said the campaign took place in August and September. It used new and hijacked social media accounts to bulk-post messages targeting Canadian politicians (T0146: Account, T0150.001: Newly Created, T0150.005: Compromised).<br><br> “A Chinese Embassy in Canada spokesperson dismissed Canada's accusation as baseless.<br><br> ““Canada was a downright liar and disseminator of false information… Beijing has never meddled in another nation's domestic affairs.”<br><br> “A Chinese Embassy in Canada spokesperson dismissed Canada's accusation as baseless.<br><br> “That is false.<br><br> “The Canadian government's report is based on an investigation conducted by its Rapid Response Mechanism cyber intelligence unit in cooperation with the social media platforms.<br><br> “The investigation exposed China's disinformation campaign dubbed “Spamouflage” -- for its tactic of using “a network of new or hijacked social media accounts that posts and increases the number of propaganda messages across multiple social media platforms including Facebook, X/Twitter, Instagram, YouTube, Medium, Reddit, TikTok, and LinkedIn.””</i><br><br> In this case a network of accounts attributed to China were identified operating on multiple platforms. The report was dismissed as false information by an official in the Chinese Embassy in Canada (T0143.001: Authentic Persona, T0097.111: Government Official Persona, T0129.006: Deny Involvement). |


@ -7,12 +7,14 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00069 Uncharmed: Untangling Iran's APT42 Operations](../../generated_pages/incidents/I00069.md) | <i>“Mandiant identified at least three clusters of infrastructure used by [Iranian state-sponsored cyber espionage actor] APT42 to harvest credentials from targets in the policy and government sectors, media organizations and journalists, and NGOs and activists. The three clusters employ similar tactics, techniques and procedures (TTPs) to target victim credentials (spear-phishing emails), but use slightly varied domains, masquerading patterns, decoys, and themes.<br><br> Cluster A: Posing as News Outlets and NGOs: <br>- Suspected Targeting: credentials of journalists, researchers, and geopolitical entities in regions of interest to Iran. <br>- Masquerading as: The Washington Post (U.S.), The Economist (UK), The Jerusalem Post (IL), Khaleej Times (UAE), Azadliq (Azerbaijan), and more news outlets and NGOs. This often involves the use of typosquatted domains like washinqtonpost[.]press. <br><br>“Mandiant did not observe APT42 target or compromise these organizations, but rather impersonate them.”</I><br><br> In this example APT42, an Iranian state-sponsored cyber espionage actor, impersonated existing news organisations and NGOs (T0097.202: News Outlet Persona, T0097.207: NGO Persona, T0143.003: Impersonated Persona) in attempts to steal credentials from targets (T0141.001: Acquire Compromised Account), using elements of influence operations to facilitate their cyber attacks. |
| [I00074 The Tactics & Tropes of the Internet Research Agency](../../generated_pages/incidents/I00074.md) | <i>“The Black Matters Facebook Page [operated by Russia's Internet Research Agency] explored several visual brand identities, moving from a plain logo to a gothic typeface on Jan 19th, 2016. On February 4th, 2016, the person who ran the Facebook Page announced the launch of the website, blackmattersus[.]com, emphasizing media distrust and a desire to build Black independent media; [“I DIDN'T BELIEVE THE MEDIA / SO I BECAME ONE”]”</i><br><br> In this example an asset controlled by Russia's Internet Research Agency began to present itself as a source of “Black independent media”, claiming that the media could not be trusted (T0097.208: Social Cause Persona, T0097.202: News Outlet Persona, T0143.002: Fabricated Persona). |
| [I00076 Network of Social Media Accounts Impersonates U.S. Political Candidates, Leverages U.S. and Israeli Media in Support of Iranian Interests](../../generated_pages/incidents/I00076.md) | <i>“Accounts in the network [of inauthentic accounts attributed to Iran], under the guise of journalist personas, also solicited various individuals over Twitter for interviews and chats, including real journalists and politicians. The personas appear to have successfully conducted remote video and audio interviews with U.S. and UK-based individuals, including a prominent activist, a radio talk show host, and a former U.S. Government official, and subsequently posted the interviews on social media, showing only the individual being interviewed and not the interviewer. The interviewees expressed views that Iran would likely find favorable, discussing topics such as the February 2019 Warsaw summit, an attack on a military parade in the Iranian city of Ahvaz, and the killing of Jamal Khashoggi.<br><br> “The provenance of these interviews appear to have been misrepresented on at least one occasion, with one persona appearing to have falsely claimed to be operating on behalf of a mainstream news outlet; a remote video interview with a US-based activist about the Jamal Khashoggi killing was posted by an account adopting the persona of a journalist from the outlet Newsday, with the Newsday logo also appearing in the video. We did not identify any Newsday interview with the activist in question on this topic. In another instance, a persona posing as a journalist directed tweets containing audio of an interview conducted with a former U.S. Government official at real media personalities, calling on them to post about the interview.”</i><br><br> In this example actors fabricated journalists (T0097.102: Journalist Persona, T0143.002: Fabricated Persona) who worked at existing news outlets (T0097.202: News Outlet Persona, T0143.003: Impersonated Persona) in order to conduct interviews with targeted individuals. |
| [I00077 Fronts & Friends: An Investigation into Two Twitter Networks Linked to Russian Actors](../../generated_pages/incidents/I00077.md) | <i>“Two accounts [in the second network of accounts taken down by Twitter] appear to have been operated by Oriental Review and the Strategic Culture Foundation, respectively. Oriental Review bills itself as an “open source site for free thinking”, though it trades in outlandish conspiracy theories and posts content bylined by fake people. Stanford Internet Observatory researchers and investigative journalists have previously noted the presence of content bylined by fake “reporter” personas tied to the GRU-linked front Inside Syria Media Center, posted on Oriental Review.”</i><br><br> In an effort to make Oriental Review's stories appear more credible, the threat actors created fake journalists and pretended they wrote the articles on their website (aka “bylined” them).<br><br> In DISARM terms, they fabricated journalists (T0143.002: Fabricated Persona, T0097.102: Journalist Persona), and then used these fabricated journalists to increase perceived legitimacy (T0097.202: News Outlet Persona, T0143.002: Fabricated Persona). |
| [I00079 Three thousand fake tanks](../../generated_pages/incidents/I00079.md) | <i>“On January 4 [2017], a little-known news site based in Donetsk, Ukraine published an article claiming that the United States was sending 3,600 tanks to Europe as part of “the NATO war preparation against Russia”.<br><br> “Like much fake news, this story started with a grain of truth: the US was about to reinforce its armored units in Europe. However, the article converted literally thousands of other vehicles — including hundreds of Humvees and trailers — into tanks, building the US force into something 20 times more powerful than it actually was.<br><br> “The story caught on online. Within three days it had been repeated by a dozen websites in the United States, Canada and Europe, and shared some 40,000 times. It was translated into Norwegian; quoted, unchallenged, by Russian state news agency RIA Novosti; and spread among Russian-language websites.<br><br> “It was also an obvious fake, as any Google news search would have revealed. Yet despite its evident falsehood, it spread widely, and not just in directly Kremlin-run media. Tracking the spread of this fake therefore shines a light on the wider question of how fake stories are dispersed.”</i><br><br> Russian state news agency RIA Novosti presents themselves as a news outlet (T0097.202: News Outlet Persona). RIA Novosti is a real news outlet (T0143.001: Authentic Persona), but it did not carry out the basic investigation into the veracity of the narrative it published that is implicitly expected of institutions presenting themselves as news outlets.<br><br> We can't know how or why this narrative ended up being published by RIA Novosti, but we know that it presented a distorted reality as authentic information (T0023: Distort Facts), claiming that the US was sending 3,600 tanks, instead of 3,600 vehicles which included ~180 tanks. |
| [I00094 A glimpse inside a Chinese influence campaign: How bogus news websites blur the line between true and false](../../generated_pages/incidents/I00094.md) | Researchers identified websites managed by a Chinese marketing firm which presented themselves as news organisations.<br><br> <i>“On its official website, the Chinese marketing firm boasted that they were in contact with news organizations across the globe, including one in South Korea called the “Chungcheng Times.” According to the joint team, this outlet is a fictional news organization created by the offending company. The Chinese company sought to disguise the site's true identity and purpose by altering the name attached to it by one character—making it very closely resemble the name of a legitimate outlet operating out of Chungchengbuk-do.<br><br> “The marketing firm also established a news organization under the Korean name “Gyeonggido Daily,” which closely resembles legitimate news outlets operating out of Gyeonggi province such as “Gyeonggi Daily,” “Daily Gyeonggi Newspaper,” and “Gyeonggi N Daily.” One of the fake news sites was named “Incheon Focus,” a title that could be easily mistaken for the legitimate local news outlet, “Focus Incheon.” Furthermore, the Chinese marketing company operated two fake news sites with names identical to two separate local news organizations, one of which ceased operations in December 2022.<br><br> “In total, fifteen out of eighteen Chinese fake news sites incorporated the correct names of real regions in their fake company names. “If the operators had created fake news sites similar to major news organizations based in Seoul, however, the intended deception would have easily been uncovered,” explained Song Tae-eun, an assistant professor in the Department of National Security & Unification Studies at the Korea National Diplomatic Academy, to The Readable. “There is also the possibility that they are using the regional areas as an attempt to form ties with the local community; that being the government, the private sector, and religious communities.””</i><br><br> The firm styled their news site to resemble existing local news outlets in their target region (T0097.201: Local Institution Persona, T0097.202: News Outlet Persona, T0143.003: Impersonated Persona). |
| [I00107 The Lies Russia Tells Itself](../../generated_pages/incidents/I00107.md) | <i>The Moscow firm Social Design Agency (SDA) has been attributed as being behind a Russian disinformation project known as Doppelganger:<br><br>The SDA's deception work first surfaced in 2022, likely almost immediately after Doppelganger got off the ground. In April of that year, Meta, the parent company of Facebook and Instagram, disclosed in a quarterly report that it had removed from its platforms “a network of about 200 accounts operated from Russia.” By August 2022, German investigative journalists revealed that they had discovered forgeries of about 30 news sites, including many of the country's biggest media outlets—Frankfurter Allgemeine, Der Spiegel, and Bild—but also Britain's Daily Mail and France's 20 Minutes. The sites had deceptive URLs such as www-dailymail-co-uk.dailymail.top. </i><br><br>As part of the SDA's work, they created many websites which impersonated existing media outlets. Sites used domain impersonation tactics to increase perceived legitimacy of their impersonations (T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0152.003: Website Hosting Platform, T0149.003: Lookalike Domain). |
| [I00110 How COVID-19 conspiracists and extremists use crowdfunding platforms to fund their activities](../../generated_pages/incidents/I00110.md) | The EU Disinfo Lab produced a report into disinformation published on crowdfunding platforms:<br><br><i>More worrisome is the direct monetisation of disinformation happening on crowdfunding platforms: on Kickstarter, we found a user openly raising money for a documentary project suggesting that COVID-19 is a conspiracy.</i><br><br>A Kickstarter user attempted to use the platform to fund production of a documentary (T0017: Conduct Fundraising, T0087: Develop Video-Based Content, T0146: Account, T0148.006: Crowdfunding Platform).<br><br><i>On Patreon, we found several instances of direct monetisation of COVID-19 disinformation, including posts promoting a device allegedly protecting against COVID-19 and 5G, as well as posts related to the “Plandemic” conspiracy video, which gained attention on YouTube before being removed by the platform.<br><br>We also found an account called “Stranger than fiction” entirely dedicated to disinformation, which openly states that their content was “Banned by screwtube and fakebook, our videos have been viewed over a billion times.”</i><br><br>The “Stranger than fiction” user presented itself as an alternative news source which had been banned from other platforms (T0146: Account, T0097.202: News Outlet Persona, T0121.001: Bypass Content Blocking, T0152.012: Subscription Service Platform).<br><br><i>On the US-based crowdfunding platform IndieGogo, EU DisinfoLab found a successful crowdfunding campaign of €133.903 for a book called Revolution Q. This book, now also available on Amazon, claims to be “Written for both newcomers and long-time QAnon fans alike, this book is a treasure-trove of information designed to help everyone weather The Storm.”</i><br><br>An IndieGogo account was used to gather funds to produce a book on QAnon (T0017: Conduct Fundraising, T0085.005: Develop Book, T0146: Account, T0148.006: Crowdfunding Platform), with the book later sold on Amazon marketplace (T0148.007: eCommerce Platform). |
| [I00126 Charming Kitten Updates POWERSTAR with an InterPlanetary Twist](../../generated_pages/incidents/I00126.md) | <i>The target of the recently observed [highly targeted spearphishing attack by “Charming Kitten”, a hacker group attributed to Iran] had published an article related to Iran. The publicity appears to have garnered the attention of Charming Kitten, who subsequently created an email address to impersonate a reporter of an Israeli media organization in order to send the target an email. Prior to sending malware to the target, the attacker simply asked if the target would be open to reviewing a document they had written related to US foreign policy. The target agreed to do so, since this was not an unusual request; they are frequently asked by journalists to review opinion pieces relating to their field of work.<br><br>In an effort to further gain the target's confidence, Charming Kitten continued the interaction with another benign email containing a list of questions, to which the target then responded with answers. After multiple days of benign and seemingly legitimate interaction, Charming Kitten finally sent a “draft report”; this was the first time anything opaquely malicious occurred. The “draft report” was, in fact, a password-protected RAR file containing a malicious LNK file. The password for the RAR file was provided in a subsequent email.</i><br><br>In this example, threat actors created an email address on a domain impersonating an existing Israeli news organisation, and posed as a reporter who worked there (T0097.102: Journalist Persona, T0097.202: News Outlet Persona, T0143.003: Impersonated Persona, T0149.003: Lookalike Domain, T0149.002: Email Domain) in order to convince the target to download a document containing malware (T0085.004: Develop Document, T0147.003: Malware). |
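Several of the incidents above hinge on lookalike or typosquatted domains (T0149.003), such as washinqtonpost[.]press or www-dailymail-co-uk.dailymail.top. A minimal sketch of how an analyst might flag such domains against a watchlist, using only Python's standard library; the `BRANDS` list, the `lookalike_score` function name, and the 0.85 threshold are illustrative assumptions, not part of the framework:

```python
from difflib import SequenceMatcher

# Hypothetical watchlist of legitimate brand labels an analyst cares about.
BRANDS = ["washingtonpost", "dailymail", "economist"]

def lookalike_score(domain, brands=BRANDS, threshold=0.85):
    """Flag domains whose labels closely resemble, or embed, a known brand.

    Returns (matched_brand, score) for the strongest match at or above
    `threshold`, else None. A sketch of lookalike-domain detection,
    not a production classifier.
    """
    best = None
    for label in domain.lower().split("."):
        for brand in brands:
            # Embedded brand, e.g. www-dailymail-co-uk.dailymail.top
            if brand in label:
                return (brand, 1.0)
            # Near-miss spelling, e.g. washinqtonpost vs washingtonpost
            score = SequenceMatcher(None, label, brand).ratio()
            if score >= threshold and (best is None or score > best[1]):
                best = (brand, score)
    return best
```

Note that exact legitimate domains (washingtonpost.com itself) would also score 1.0, so in practice a known-good allowlist would be checked before treating a hit as suspicious.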


@ -7,7 +7,8 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00084 Russia turns its diplomats into disinformation warriors](../../generated_pages/incidents/I00084.md) | <I>“[Russia's social media] reach isn't the same as Russian state media, but they are trying to recreate what RT and Sputnik had done," said one EU official involved in tracking Russian disinformation. "It's a coordinated effort that goes beyond social media and involves specific websites."<br><br> “Central to that wider online playbook is a Telegram channel called Warfakes and an affiliated website. Since the beginning of the conflict, that social media channel has garnered more than 725,000 members and repeatedly shares alleged fact-checks aimed at debunking Ukrainian narratives, using language similar to Western-style fact-checking outlets.”</i><br><br> In this example a Telegram channel (T0043.001: Use Encrypted Chat Apps) was established which presented itself as a source of fact checks (T0097.203: Fact Checking Organisation Persona). |
| [I00084 Russia turns its diplomats into disinformation warriors](../../generated_pages/incidents/I00084.md) | <I>“[Russia's social media] reach isn't the same as Russian state media, but they are trying to recreate what RT and Sputnik had done," said one EU official involved in tracking Russian disinformation. "It's a coordinated effort that goes beyond social media and involves specific websites."<br><br> “Central to that wider online playbook is a Telegram channel called Warfakes and an affiliated website. Since the beginning of the conflict, that social media channel has garnered more than 725,000 members and repeatedly shares alleged fact-checks aimed at debunking Ukrainian narratives, using language similar to Western-style fact-checking outlets.”</i><br><br> In this example a Telegram channel (T0151.004: Chat Platform, T0155.007: Encrypted Communication Channel) was established which presented itself as a source of fact checks (T0097.203: Fact Checking Organisation Persona). |
| [I00120 factcheckUK or fakecheckUK? Reinventing the political faction as the impartial factchecker](../../generated_pages/incidents/I00120.md) | Ahead of the 2019 UK Election during a leaders' debate, the Conservative party rebranded their “Conservative Campaign Headquarters Press” account to “FactCheckUK”:<br><br><i>The evening of the 19th November 2019 saw the first of three Leaders Debates on ITV, starting at 8pm and lasting for an hour. Current Prime Minister and leader of the Conservatives, Boris Johnson faced off against Labour party leader, Jeremy Corbyn. Plenty of people will have been watching the debate live, but a good proportion were watching (er, “twitching”?) via Twitter. This is something I've done in the past for certain shows. In some cases I just can't watch or listen, but I can read, and in other cases, the commentary is far more interesting and entertaining than the show itself will ever be. This, for me, is just such a case. But very quickly, all eyes turned upon a modestly sized account with the handle @CCHQPress. That's short for Conservative Campaign Headquarters Press. According to their (current!) Twitter bio, they are based in Westminster and they provide snippets of news and commentary from CCHQ to their 75k followers.<br><br>That is, until a few minutes into the debate.<br><br>All at once, like a person throwing off their street clothes to reveal some sinister new identity underneath, @CCHQPress abruptly shed its name, blue Conservative logo, Boris Johnson banner, and bio description. Moments later, it had entirely reinvented itself.<br><br>The purple banner was emblazoned with white font that read “✓ factcheckUK [with a “FROM CCQH” subheading]”.<br><br>The matching profile picture was a white tick in a purple circle. The bio was updated to: “Fact checking Labour from CCHQ”. And the name now read factcheckUK, with the customary Twitter blue (or white depending on your phone settings!) validation tick still after it</i><br><br>In this example an existing verified social media account on Twitter was repurposed to inauthentically present itself as a Fact Checking service (T0151.008: Microblogging Platform, T0150.003: Pre-Existing, T0146.003: Verified Account, T0097.203: Fact Checking Organisation Persona, T0143.002: Fabricated Persona). |


@ -7,7 +7,7 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00072 Behind the Dutch Terror Threat Video: The St. Petersburg "Troll Factory" Connection](../../generated_pages/incidents/I00072.md) | <I>“The creator of Geopolitika[.]ru is Aleksandr Dugin, who was sanctioned by the United States Department of Treasury in 2015 for his role in the Eurasian Youth Union “for being responsible for or complicit in actions or policies that threaten the peace, security, stability, or sovereignty or territorial integrity of Ukraine.”<br><br> [...]<br><br> “Currently, the website geopolika[.]ru redirects directly to another partner website, Katehon.<br><br> “Katehon poses itself as a think tank focused on geopolitics in an English edition of its website. In contrast, in Russian, it states its aim to develop “ideological, political, diplomatic, economic and military strategy for Russia of the future” with a special role of religion. The president of Katehon's supervisory board is Konstantin Malofeev, a Russian millionaire with connections to the Russian orthodox church and presidential administration, who founded Tsargrad TV, a known source of disinformation. Malofeev was sanctioned by the U.S. Department of Treasury and the European Union in 2014 for material support and financial backing of Russian-backed separatists in eastern Ukraine. Another known figure from the board is Sergei Glaziev, former advisor to Putin in 2012-2019. Dugin is also on the board in the Russian edition of the website, whereas he is omitted in English.”</i><br><br> In this example a website managed by an actor previously sanctioned by the US department of treasury has been configured to redirect to another website; Katehon (T0129.008: Redirect URLs).<br><br> Katehon presents itself as a geopolitical think tank in English (T0097.204: Think Tank Persona), but does not maintain this persona when presenting itself to a Russian speaking audience. |
| [I00072 Behind the Dutch Terror Threat Video: The St. Petersburg "Troll Factory" Connection](../../generated_pages/incidents/I00072.md) | <I>“The creator of Geopolitika[.]ru is Aleksandr Dugin, who was sanctioned by the United States Department of Treasury in 2015 for his role in the Eurasian Youth Union “for being responsible for or complicit in actions or policies that threaten the peace, security, stability, or sovereignty or territorial integrity of Ukraine.”<br><br> [...]<br><br> “Currently, the website geopolika[.]ru redirects directly to another partner website, Katehon.<br><br> “Katehon poses itself as a think tank focused on geopolitics in an English edition of its website. In contrast, in Russian, it states its aim to develop “ideological, political, diplomatic, economic and military strategy for Russia of the future” with a special role of religion. The president of Katehon's supervisory board is Konstantin Malofeev, a Russian millionaire with connections to the Russian orthodox church and presidential administration, who founded Tsargrad TV, a known source of disinformation. Malofeev was sanctioned by the U.S. Department of Treasury and the European Union in 2014 for material support and financial backing of Russian-backed separatists in eastern Ukraine. Another known figure from the board is Sergei Glaziev, former advisor to Putin in 2012–2019. Dugin is also on the board in the Russian edition of the website, whereas he is omitted in English.”</i><br><br>In this example a domain managed by an actor previously sanctioned by the US Department of Treasury has been reconfigured to redirect to another website, Katehon (T0149.004: Redirecting Domain, T0150.004: Repurposed).<br><br> Katehon presents itself as a geopolitical think tank in English, but does not maintain this persona when presenting itself to a Russian speaking audience (T0097.204: Think Tank Persona, T0152.004: Website, T0155.004: Geoblocked). |
| [I00074 The Tactics & Tropes of the Internet Research Agency](../../generated_pages/incidents/I00074.md) | <I>“[Russia's Internet Research Agency, the IRA] pushed narratives with longform blog content. They created media properties, websites designed to produce stories that would resonate with those targeted. It appears, based on the data set provided by Alphabet, that the IRA may have also expanded into think tank-style communiques. One such page, previously unattributed to the IRA but included in the Alphabet data, was GI Analytics, a geopolitics blog with an international masthead that included American authors. This page was promoted via AdWords and YouTube videos; it has strong ties to more traditional Russian propaganda networks, which will be discussed later in this analysis. GI Analytics wrote articles articulating nuanced academic positions on a variety of sophisticated topics. From the site's About page:<br><br> ““Our purpose and mission are to provide high-quality analysis at a time when we are faced with a multitude of crises, a collapsing global economy, imperialist wars, environmental disasters, corporate greed, terrorism, deceit, GMO food, a migration crisis and a crackdown on small farmers and ranchers.””</i><br><br> In this example Alphabet's technical indicators allowed them to assert that GI Analytics, which presented itself as a think tank, was a fabricated institution associated with Russia's Internet Research Agency (T0097.204: Think Tank Persona, T0143.002: Fabricated Persona). |
| [I00078 Meta's September 2020 Removal of Coordinated Inauthentic Behavior](../../generated_pages/incidents/I00078.md) | <i>“[Meta has] removed one Page, five Facebook accounts, one Group and three Instagram accounts for foreign or government interference which is coordinated inauthentic behavior on behalf of a foreign or government entity. This small network originated in Russia and focused primarily on Turkey and Europe, and also on the United States.<br><br> “This operation relied on fake accounts — some of which had been already detected and removed by our automated systems — to manage their Page and their Group, and to drive people to their site purporting to be an independent think-tank based primarily in Turkey. These accounts posed as locals based in Turkey, Canada and the US. They also recruited people to write for their website. This network had almost no following on our platforms when we removed it.”</i><br><br> Meta identified that a network of accounts originating in Russia was driving people off platform to a site which presented itself as a think-tank (T0097.204: Think Tank Persona). Meta did not make an attribution about the authenticity of this off-site think tank, so neither T0143.001: Authentic Persona nor T0143.002: Fabricated Persona is used here.<br><br> Meta had access to technical data for accounts on its platform, and asserted that they were fabricated individuals posing as locals who recruited targets to write content for their website (T0097.101: Local Persona, T0097.106: Recruiter Persona, T0143.002: Fabricated Persona). |
| [I00079 Three thousand fake tanks](../../generated_pages/incidents/I00079.md) | <i>“The sixth [website to repost a confirmed false narrative investigated in this report] is an apparent think tank, the Center for Global Strategic Monitoring. This website describes itself, in English apparently written by a non-native speaker, as a “nonprofit and nonpartisan research and analysis institution dedicated to providing insights of the think tank community publications”. It does, indeed, publish think-tank reports on issues such as Turkey and US-China relations; however, the reports are the work of other think tanks, often unattributed (the two mentioned in this sentence were actually produced by the Brookings Institution, although the website makes no mention of the fact). It also fails to provide an address, or any other contact details other than an email, and its (long) list of experts includes entries apparently copied and pasted from other institutions. Thus, the “think tank” website which shared the fake story appears to be a fake itself.”</i><br><br>In this example a website which amplified a false narrative presented itself as a think tank (T0097.204: Think Tank Persona).<br><br> This is an entirely fabricated persona (T0143.002: Fabricated Persona); it republished content from other think tanks without attribution (T0084.002: Plagiarise Content) and fabricated experts (T0097.108: Expert Persona, T0143.002: Fabricated Persona) to make it more believable that it was a real think tank. |
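The plagiarised think-tank content described in the last row above (T0084.002: Plagiarise Content) can be surfaced mechanically by measuring word n-gram overlap between a suspect article and known source material. A minimal sketch, not part of the DISARM framework; the shingle size, threshold, and sample strings are illustrative assumptions:

```python
def shingles(text: str, n: int = 5) -> set:
    """Return the set of word n-grams (shingles) in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(a: str, b: str, n: int = 5) -> float:
    """Jaccard similarity of two texts' shingle sets (0.0 to 1.0)."""
    sa, sb = shingles(a, n), shingles(b, n)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

# Hypothetical sample texts, not taken from the incident itself.
original = "a nonprofit and nonpartisan research and analysis institution dedicated to policy"
suspect = "a nonprofit and nonpartisan research and analysis institution dedicated to policy work"
unrelated = "quarterly earnings rose sharply across the manufacturing sector last year"

print(overlap(original, suspect))    # 0.875: near-verbatim copying
print(overlap(original, unrelated))  # 0.0: no shared 5-grams
```

In practice the comparison corpus would be crawled think-tank publications, and anything above a tuned threshold would be queued for human review rather than auto-flagged.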


@ -9,6 +9,7 @@
| -------- | -------------------- |
| [I00070 Eli Lilly Clarifies It's Not Offering Free Insulin After Tweet From Fake Verified Account—As Chaos Unfolds On Twitter](../../generated_pages/incidents/I00070.md) | <i>“Twitter Blue launched [November 2022], giving any users who pay $8 a month the ability to be verified on the site, a feature previously only available to public figures, government officials and journalists as a way to show they are who they claim to be.<br><br> “[A day after the launch], an account with the handle @EliLillyandCo labeled itself with the name “Eli Lilly and Company,” and by using the same logo as the company in its profile picture and with the verification checkmark, was indistinguishable from the real company (the picture has since been removed and the account has labeled itself as a parody profile).<br><br> The parody account tweeted “we are excited to announce insulin is free now.””</i><br><br> In this example an account impersonated the pharmaceutical company Eli Lilly (T0097.205: Business Persona, T0143.003: Impersonated Persona) by copying its name, profile picture (T0145.001: Copy Account Imagery), and paying for verification. |
| [I00095 Meta: Chinese disinformation network was behind London front company recruiting content creators](../../generated_pages/incidents/I00095.md) | <i>“A Chinese disinformation network operating fictitious employee personas across the internet used a front company in London to recruit content creators and translators around the world, according to Meta.<br><br> “The operation used a company called London New Europe Media, registered to an address on the upmarket Kensington High Street, that attempted to recruit real people to help it produce content. It is not clear how many people it ultimately recruited.<br><br> “London New Europe Media also “tried to engage individuals to record English-language videos scripted by the network,” in one case leading to a recording criticizing the United States being posted on YouTube, said Meta”.</i><br><br> In this example a front company was used (T0097.205: Business Persona) to enable actors to recruit targets for producing content (T0097.106: Recruiter Persona, T0143.002: Fabricated Persona). |
| [I00116 Blue-tick scammers target consumers who complain on X](../../generated_pages/incidents/I00116.md) | <i>Consumers who complain of poor customer service on X are being targeted by scammers after the social media platform formerly known as Twitter changed its account verification process.<br><br>Bank customers and airline passengers are among those at risk of phishing scams when they complain to companies via X. Fraudsters, masquerading as customer service agents, respond under fake X handles and trick victims into disclosing their bank details to get a promised refund.<br><br>They typically win the trust of victims by displaying the blue checkmark icon, which until this year denoted accounts that had been officially verified by X.<br><br>Changes introduced this year allow the icon to be bought by anyone who pays an £11 monthly fee for the site's subscription service, renamed this month from Twitter Blue to X Premium. Businesses that pay £950 a month receive a gold tick. X's terms and conditions do not state whether subscriber accounts are pre-vetted.<br><br>Andrew Thomas was contacted by a scam account after posting a complaint to the travel platform Booking.com. “I'd been trying since April to get a refund after our holiday flights were cancelled and finally resorted to X,” he said.<br><br>“I received a response asking me to follow them, and DM [direct message] them with a contact number. They then called me via WhatsApp asking for my reference number so they could investigate. Later they called back to say that I would be refunded via their payment partner for which I'd need to download an app.”<br><br>Thomas became suspicious and checked the X profile. “It looked like the real thing, but I noticed that there was an unexpected hyphen in the Twitter handle and that it had only joined X in July 2023,” he said.</i><br><br>In this example a newly created paid account on X, using a lookalike handle, was used to direct users to other platforms (T0146.002: Paid Account, T0146.003: Verified Account, T0146.005: Lookalike Account ID, T0097.205: Business Persona, T0122: Direct Users to Alternative Platforms, T0143.003: Impersonated Persona, T0151.008: Microblogging Platform, T0150.001: Newly Created). |
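Lookalike account IDs of the kind described in the last incident above (T0146.005: Lookalike Account ID) typically differ from the genuine handle by a single inserted character, such as the unexpected hyphen the victim noticed. A sketch of an edit-distance screen; the watch list and distance threshold are hypothetical, not drawn from any platform's actual tooling:

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via the classic row-by-row dynamic program."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

# Hypothetical watch list of genuine brand handles (lowercased, no separators).
KNOWN_HANDLES = {"bookingcom", "elilillyandco"}

def is_lookalike(handle: str, max_dist: int = 1) -> bool:
    """Flag handles within max_dist edits of a known brand handle,
    excluding exact matches (those are the genuine accounts)."""
    h = handle.lower()
    return any(0 < edit_distance(h, k) <= max_dist for k in KNOWN_HANDLES)

print(is_lookalike("booking-com"))  # True: one inserted hyphen
print(is_lookalike("bookingcom"))   # False: the genuine handle
```

Combining a screen like this with account-age metadata (the scam account had only joined X in July 2023) narrows results further, since T0150.001: Newly Created is itself an indicator.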


@ -8,6 +8,7 @@
| Incident | Descriptions given for this incident |
| -------- | -------------------- |
| [I00069 Uncharmed: Untangling Iran's APT42 Operations](../../generated_pages/incidents/I00069.md) | <I>“[Iranian state-sponsored cyber espionage actor] APT42's cloud operations attack lifecycle can be described in details as follows:<br> <br>- “Social engineering schemes involving decoys and trust building, which includes masquerading as legitimate NGOs and conducting ongoing correspondence with the target, sometimes lasting several weeks. <br>- The threat actor masqueraded as well-known international organizations in the legal and NGO fields and sent emails from domains typosquatting the original NGO domains, for example aspenlnstitute[.]org. <br>- The Aspen Institute became aware of this spoofed domain and collaborated with industry partners, including blocking it in SafeBrowsing, thus protecting users of Google Chrome and additional browsers. <br>- To increase their credibility, APT42 impersonated high-ranking personnel working at the aforementioned organizations when creating the email personas. <br>- APT42 enhanced their campaign credibility by using decoy material inviting targets to legitimate and relevant events and conferences. In one instance, the decoy material was hosted on an attacker-controlled SharePoint folder, accessible only after the victim entered their credentials. Mandiant did not identify malicious elements in the files, suggesting they were used solely to gain the victim's trust.”</I><br><br> In this example APT42, an Iranian state-sponsored cyber espionage actor, created a domain impersonating the existing NGO The Aspen Institute (T0143.003: Impersonated Persona, T0097.207: NGO Persona). They increased the perceived legitimacy of the impersonation by also impersonating high-ranking employees of the NGO (T0097.100: Individual Persona, T0143.003: Impersonated Persona). |
| [I00109 Coordinated Facebook Pages Designed to Fund a White Supremacist Agenda](../../generated_pages/incidents/I00109.md) | This report examines the white nationalist group Suavelos' use of Facebook to draw visitors to its website without overtly revealing its racist ideology. This section of the report looks at technical indicators associated with the Suavelos website, and attributions which can be made as a consequence:<i><br><br>[The Google AdSense tag set up on Suavelos.eu was also found on the following domains, indicating that they are controlled by the same actor;] Alabastro.eu: an online shop to buy “white nationalists” t-shirts [and] ARPAC.eu: the website of a registered non-profit organisation advocating to lift regulation on gun control in France.<br><br>Other domains attributed to Suavelos (T0149.001: Domain) reveal a website set up to sell merchandise (T0152.004: Website, T0148.004: Payment Processing Capability, T0061: Sell Merchandise), and a website hosting a registered French non-profit (T0152.004: Website, T0097.207: NGO Persona).<br><br>To learn more about the suavelos.eu domain, we collected the following data: The domain is hosted on OVH; The owner's identity is protected; The IP Address of the server is 94.23.253.173, which is shared with 20 other domains. <br><br>The relative low number of websites hosted on this IP address could indicate that they all belong to the same people, and are hosted on the same private server.</i><br><br>Suavelos registered a domain using the web hosting provider OVH (T0149.001: Domain, T0152.003: Website Hosting Platform, T0150.006: Purchased). The site's IP address reveals a server hosting other domains potentially owned by the actors (T0149.005: Server, T0149.006: IP Address). |
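The typosquatted domain in the APT42 incident above (aspenlnstitute[.]org, a lowercase L substituted for the i of aspeninstitute[.]org) can be caught by collapsing visually confusable characters to a canonical "skeleton" before comparison. A minimal sketch; the confusable map is a deliberately tiny assumption, and real confusable tables (e.g. Unicode TS #39) are far larger:

```python
# Tiny illustrative map of visually confusable ASCII characters.
CONFUSABLES = str.maketrans({"l": "i", "1": "i", "0": "o", "5": "s"})

def skeleton(domain: str) -> str:
    """Reduce a domain to a form in which confusable characters collapse."""
    return domain.lower().translate(CONFUSABLES)

# Hypothetical list of domains being defended, from the incident above.
PROTECTED = {"aspeninstitute.org"}

def is_typosquat(domain: str) -> bool:
    """True if the domain is not a protected domain but shares a skeleton
    with one, i.e. it is visually confusable with a defended name."""
    d = domain.lower()
    return d not in PROTECTED and skeleton(d) in {skeleton(p) for p in PROTECTED}

print(is_typosquat("aspenlnstitute.org"))   # True: l-for-i substitution
print(is_typosquat("aspeninstitute.org"))   # False: the genuine domain
```

Skeleton matching complements the edit-distance approach: it catches substitutions that are visually invisible (l/i, 0/o) even when the raw edit distance would also trip on many benign near-neighbours.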

