# Incident I00097: Report: Not Just Algorithms
* **Summary:** <i>Many of the systems and elements that platforms build into their products create safety risks for end-users. However, only a very modest selection have been identified for regulatory scrutiny. As the [Australian] government reviews the Basic Online Safety Expectations and Online Safety Act [in 2024], the role of all systems and elements in creating risks need to be comprehensively addressed.<br><br>This report explores the role of four systems (recommender systems, content moderation systems, ad approval systems and ad management systems) in creating risks around eating disorders.</i>
* **Incident type:** 
* **Year started:**
* **Countries:** 
* **Found via:**
* **Date added:** 

| Reference | Pub Date | Authors | Org | Archive |
| --------- | -------- | ------- | --- | ------- |
| [https://au.reset.tech/news/report-not-just-algorithms/](https://au.reset.tech/news/report-not-just-algorithms/) | 2024/03/24 | - | Reset Australia | [https://web.archive.org/web/20240527135516/https://au.reset.tech/news/report-not-just-algorithms/](https://web.archive.org/web/20240527135516/https://au.reset.tech/news/report-not-just-algorithms/) |

| Technique | Description given for this incident |
| --------- | ------------------------- |
| [T0018 Purchase Targeted Advertisements](../../generated_pages/techniques/T0018.md) | IT00000358 <i>This report explores the role of four systems (recommender systems, content moderation systems, ad approval systems and ad management systems) in creating risks around eating disorders.<br><br>[...]<br><br>Ad approval systems can create risks. We created 12 ‘fake’ ads that promoted dangerous weight loss techniques and behaviours. We tested to see if these ads would be approved to run, and they were. This means dangerous behaviours can be promoted in paid-for advertising. (Requests to run ads were withdrawn after approval or rejection, so no dangerous advertising was published as a result of this experiment.)<br><br>Specifically: On TikTok, 100% of the ads were approved to run; On Facebook, 83% of the ads were approved to run; On Google, 75% of the ads were approved to run.<br><br>Ad management systems can create risks. We investigated how platforms allow advertisers to target users, and found that it is possible to target people who may be interested in pro-eating disorder content.<br><br>Specifically: On TikTok: End-users who interact with pro-eating disorder content on TikTok, download advertisers’ eating disorder apps or visit their websites can be targeted; On Meta: End-users who interact with pro-eating disorder content on Meta, download advertisers’ eating disorder apps or visit their websites can be targeted; On X: End-users who follow pro-eating disorder accounts, or ‘look’ like them, can be targeted; On Google: End-users who search specific words or combinations of words (including pro-eating disorder words), watch pro-eating disorder YouTube channels and probably those who download eating disorder and mental health apps can be targeted.</i><br><br>Advertising platforms managed by TikTok, Facebook, and Google approved adverts to be displayed on their platforms. These platforms enabled advertisers to deliver targeted advertising to potentially vulnerable platform users (T0018: Purchase Targeted Advertisements, T0153.005: Online Advertising Platform). |
| [T0151.001 Social Media Platform](../../generated_pages/techniques/T0151.001.md) | IT00000355 <i>This report explores the role of four systems (recommender systems, content moderation systems, ad approval systems and ad management systems) in creating risks around eating disorders.<br><br>[...]<br><br>Content recommender systems can create risks. We created and primed ‘fake’ accounts for 16-year-old Australians and found that some recommender systems will promote pro-eating disorder content to children.<br><br>Specifically: On TikTok, 0% of the content recommended was classified as pro-eating disorder content; On Instagram, 23% of the content recommended was classified as pro-eating disorder content; On X, 67% of content recommended was classified as pro-eating disorder content (and disturbingly, another 13% displayed self-harm imagery).</i><br><br>Content recommendation algorithms developed by Instagram (T0151.001: Social Media Platform, T0153.006: Content Recommendation Algorithm) and X (T0151.008: Microblogging Platform, T0153.006: Content Recommendation Algorithm) promoted harmful content to an account presenting as a 16-year-old Australian. |
| [T0151.008 Microblogging Platform](../../generated_pages/techniques/T0151.008.md) | IT00000357 <i>This report explores the role of four systems (recommender systems, content moderation systems, ad approval systems and ad management systems) in creating risks around eating disorders.<br><br>[...]<br><br>Content recommender systems can create risks. We created and primed ‘fake’ accounts for 16-year-old Australians and found that some recommender systems will promote pro-eating disorder content to children.<br><br>Specifically: On TikTok, 0% of the content recommended was classified as pro-eating disorder content; On Instagram, 23% of the content recommended was classified as pro-eating disorder content; On X, 67% of content recommended was classified as pro-eating disorder content (and disturbingly, another 13% displayed self-harm imagery).</i><br><br>Content recommendation algorithms developed by Instagram (T0151.001: Social Media Platform, T0153.006: Content Recommendation Algorithm) and X (T0151.008: Microblogging Platform, T0153.006: Content Recommendation Algorithm) promoted harmful content to an account presenting as a 16-year-old Australian. |
| [T0153.005 Online Advertising Platform](../../generated_pages/techniques/T0153.005.md) | IT00000359 <i>This report explores the role of four systems (recommender systems, content moderation systems, ad approval systems and ad management systems) in creating risks around eating disorders.<br><br>[...]<br><br>Ad approval systems can create risks. We created 12 ‘fake’ ads that promoted dangerous weight loss techniques and behaviours. We tested to see if these ads would be approved to run, and they were. This means dangerous behaviours can be promoted in paid-for advertising. (Requests to run ads were withdrawn after approval or rejection, so no dangerous advertising was published as a result of this experiment.)<br><br>Specifically: On TikTok, 100% of the ads were approved to run; On Facebook, 83% of the ads were approved to run; On Google, 75% of the ads were approved to run.<br><br>Ad management systems can create risks. We investigated how platforms allow advertisers to target users, and found that it is possible to target people who may be interested in pro-eating disorder content.<br><br>Specifically: On TikTok: End-users who interact with pro-eating disorder content on TikTok, download advertisers’ eating disorder apps or visit their websites can be targeted; On Meta: End-users who interact with pro-eating disorder content on Meta, download advertisers’ eating disorder apps or visit their websites can be targeted; On X: End-users who follow pro-eating disorder accounts, or ‘look’ like them, can be targeted; On Google: End-users who search specific words or combinations of words (including pro-eating disorder words), watch pro-eating disorder YouTube channels and probably those who download eating disorder and mental health apps can be targeted.</i><br><br>Advertising platforms managed by TikTok, Facebook, and Google approved adverts to be displayed on their platforms. These platforms enabled advertisers to deliver targeted advertising to potentially vulnerable platform users (T0018: Purchase Targeted Advertisements, T0153.005: Online Advertising Platform). |
| [T0153.006 Content Recommendation Algorithm](../../generated_pages/techniques/T0153.006.md) | IT00000356 <i>This report explores the role of four systems (recommender systems, content moderation systems, ad approval systems and ad management systems) in creating risks around eating disorders.<br><br>[...]<br><br>Content recommender systems can create risks. We created and primed ‘fake’ accounts for 16-year-old Australians and found that some recommender systems will promote pro-eating disorder content to children.<br><br>Specifically: On TikTok, 0% of the content recommended was classified as pro-eating disorder content; On Instagram, 23% of the content recommended was classified as pro-eating disorder content; On X, 67% of content recommended was classified as pro-eating disorder content (and disturbingly, another 13% displayed self-harm imagery).</i><br><br>Content recommendation algorithms developed by Instagram (T0151.001: Social Media Platform, T0153.006: Content Recommendation Algorithm) and X (T0151.008: Microblogging Platform, T0153.006: Content Recommendation Algorithm) promoted harmful content to an account presenting as a 16-year-old Australian. |

DO NOT EDIT ABOVE THIS LINE - PLEASE ADD NOTES BELOW