Blog 2: Crafting Arguments
Published on:
War crimes: Do not delete
Even beyond war crimes, content should not simply disappear. It is essential to consider what should be preserved, where, and why, to ensure that valuable information and evidence are not lost.
Case Study reading:
AI: War crimes evidence erased by social media platforms
The BBC article shows how platforms like Meta and YouTube use AI to moderate content and remove videos and pictures they deem harmful. Much of the content being taken down is graphic footage from Ukraine and Syria documenting atrocities and war crimes. Yet instead of being archived or restricted, these videos and images simply vanish within minutes of being posted. This case highlights the tension between protecting users from traumatic material and preserving information that could serve as evidence in legal proceedings.
Argument
P1: Social media companies must moderate and remove violent or traumatic content to protect their users from harm.
P2: Automated systems are designed to flag and remove such material quickly so it does not reach people who could be harmed.
P3: As a result, videos documenting atrocities, including war crimes, are deleted from the public record and become unavailable to victims or investigators.
C1: Therefore, social media companies’ content removal policies compromise justice by deleting potential evidence of war crimes.
Fallacy
This argument risks a false dilemma. It assumes that platforms must either (1) remove all violent content to protect users or (2) allow such content to circulate freely. In reality, there are other options, such as archiving sensitive material behind restricted filters or keeping it in third-party databases so that researchers, lawyers, or courts can access it later if needed.
Rebuttal
P4: Graphic content can be harmful to the public, but this does not mean it should be permanently deleted.
P5: Independent organizations, such as Mnemonic, mentioned in the article, already show that this information can be preserved while still protecting everyday users from seeing the videos and images.
P6: By implementing “shields” and changing how posted content is saved, social media companies could maintain user safety, which they claim is their motive, while also remaining accountable for the information on their platforms.
R: Therefore, content removal policies should shift from outright deletion to preservation in some form of restricted-access archive.
Alternative Argument
C2: Content deemed harmful should not be deleted outright. Instead, it should be flagged as such but kept in a secure, stable database and made accessible when needed, since such content may relate not only to war crimes but also to other issues, such as bullying and harassment.
Recommendation
Social media companies should not rely solely on AI for moderation and content protection, as these algorithms are highly biased depending on the data they are trained on (and all the data available today carries some form of bias). Instead, content flagged as harmful should be restricted but preserved in secure archives. At the same time, platforms need to be transparent about how their moderation algorithms define harm.
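To make this recommendation more concrete, here is a minimal sketch in Python of what a “restrict and preserve” flow could look like. The harm score, the RestrictedArchive class, the role names, and the threshold are all hypothetical assumptions for illustration, not any platform’s actual system; a real implementation would need durable encrypted storage, proper identity checks, and legal review of who may access the archive.

```python
"""Minimal sketch of a "preserve, don't delete" moderation flow.

Everything here is hypothetical: the classifier score, the archive backend,
and the role names are illustrative assumptions, not a real platform API.
"""

from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional


@dataclass
class ArchivedItem:
    """A flagged post kept in a restricted archive instead of being deleted."""
    content_id: str
    media_bytes: bytes
    flag_reason: str                      # recorded so the moderation decision stays auditable
    flagged_at: datetime
    allowed_roles: tuple = ("investigator", "court", "researcher")


class RestrictedArchive:
    """In-memory stand-in for a secure, access-controlled archive."""

    def __init__(self) -> None:
        self._items: dict[str, ArchivedItem] = {}

    def preserve(self, item: ArchivedItem) -> None:
        self._items[item.content_id] = item

    def retrieve(self, content_id: str, requester_role: str) -> Optional[bytes]:
        item = self._items.get(content_id)
        if item is None or requester_role not in item.allowed_roles:
            return None                   # hidden from everyday users, but never destroyed
        return item.media_bytes


def moderate(content_id: str, media: bytes, harm_score: float,
             archive: RestrictedArchive, threshold: float = 0.8) -> str:
    """Restrict and archive content above the harm threshold instead of deleting it."""
    if harm_score >= threshold:
        archive.preserve(ArchivedItem(
            content_id=content_id,
            media_bytes=media,
            flag_reason=f"harm_score={harm_score:.2f} >= {threshold}",
            flagged_at=datetime.now(timezone.utc),
        ))
        return "restricted"               # removed from public feeds, kept as potential evidence
    return "published"
```

The point of the sketch is the stored flag_reason: if platforms record why something was restricted and keep the material retrievable by accountable parties, moderation can protect users without erasing evidence.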
Additional Concerns
Removing content deemed “harmful” is not always the right solution, as it can prevent the public from accessing firsthand evidence of what is happening in real time. Worse, there is little to no transparency about what data the AI systems used for moderation are trained on, or how they determine what counts as harmful. Platforms have claimed they could make this “harmful” footage accessible only to adults, but the question is: how do they verify that someone is an adult? In the UK, laws are pushing toward ID verification, and YouTube is already experimenting with AI-based behavior profiling to guess whether a user is underage, which raises its own ethical risks around privacy and individuality.
Even more concerning is that not all harmful content is treated equally. War crimes footage is quickly erased, as the article notes, while content linked to the growing problem of “adultization” remains untouched, even though there is evidence it could pose equal or greater harm to society. There is also the recent campaign by Collective Shout that led credit card companies to block users from buying Steam games deemed “unfit”. This selective enforcement of moderation shows that these algorithms are not neutral: they reflect the priorities and biases of the companies that train them, following a notion of right and wrong that is confined to a particular group of people.
Reflection
This assignment made me think about the importance of protecting users, but also about where content ultimately ends up. The article shows how AI moderation can quickly remove videos of war crimes, which may shield viewers from trauma but also erases evidence that could be crucial for justice. It made me think about broader implications: I have been seeing news about ID verification and about games being banned, restricted, or removed because some groups deem them problematic, while so much harmful content, like Discord servers spreading hate, remains online.
I personally do not think the solution to the internet’s current problems is simply to erase content and heavily moderate everything without holding users and platforms accountable for harmful or misleading material. A better approach could be restricting or flagging content that might be triggering, allowing users to choose whether or not to view it. When the focus is only on profit and avoiding problems without accountability, even content that users have purchased, like games, can disappear, and that material is lost forever. We are already seeing “internet archivists” become increasingly important because so much digital content has been lost over the years. Simply erasing everything deemed harmful would risk losing even more valuable content; it would be like burning libraries, erasing years of progress and information.