Blog 1: Ethical Frameworks

The ethical concerns behind tech regulation: should laws on fair use be changed?
Thoughts and discussion on how individual rights to ownership and creativity can be overridden by tech companies in the pursuit of technological development, often justified under the claim of “fair use.”

Case Study reading:
AI guzzled millions of books without permission. Authors are fighting back

Why this matters

The issue raised in the article highlights the growing need for stronger regulation of technology development. Today, large tech companies operate with little oversight, which blurs the boundaries of what they are permitted to do and often ends up harming individual rights.

The article specifically discusses how companies like OpenAI and Microsoft have used copyrighted material to train large language models without acknowledgement or compensation for authors and publishers. This reminded me of the case of Aaron Swartz, a young programmer and internet activist who faced 13 federal felony charges for downloading roughly 4.8 million academic articles from JSTOR. While Swartz’s actions technically involved breaching MIT’s network, the act of downloading the articles in itself was not explicitly illegal. Yet he was subjected to an overwhelming prosecution and eventually took his own life.

The contrast is baffling: one individual faced ruinous prosecution for accessing less than a terabyte of academic articles, while a corporation like Meta can download over 80 terabytes of copyrighted books and articles from LibGen and face no comparable consequences. Instead, it justifies its actions as “fair use.” Meanwhile, the public has little transparency into how this data is actually used.

Ethical Concerns

These cases raise significant ethical questions:

  • How much freedom should corporations have simply because they can afford it?
  • Where do we draw the line between advancing technology for the greater good and protecting the rights of individuals whose work greatly contributes to that progress?
  • What kinds of biases are embedded in the works used to train AI models, and how do those biases shape the outputs of chatbots and other tools?

Who does this affect, and why should we care?

At first, these issues may feel distant from everyday life, especially when they take the form of lawsuits between large corporations and professional organizations (the Authors Guild in this case), with no obvious connection to the average person. But as consumers of entertainment, books, journalism, and online information, we are affected even without being directly involved. If companies continue training AI on copyrighted works without consent or compensation, we risk a future where much of what we consume is no longer original but an amalgamation of past creations.

There is also the danger of bias in AI outputs. Since the data used to train these systems is not carefully curated, biases embedded in fictional works can be reproduced in the answers and decisions these tools make. From a feminist/care ethics perspective, this underlines the responsibility to consider vulnerable groups who may be harmed when biased assumptions are normalized in seemingly “objective” chatbot responses.

Lawmakers are another important stakeholder, though one less clearly mentioned in the article. Lawsuits like these, between tech companies and organizations or individuals, pressure them to clarify the legal boundary between innovation and intellectual property rights. Their role is to demand transparency from companies and to create regulations that protect individual creators without stalling technological progress.

The two central stakeholders in this article are:

  • Big Tech companies (OpenAI and Microsoft in this article, but more broadly all companies investing in AI), who argue that using copyrighted works falls under “fair use” and that restricting it would slow down development and harm humanity’s progress.
  • Authors and publishers, who feel their rights and livelihoods are threatened because they are not compensated for the use of their creative work.

Discussion of ethical frameworks

There are many ways to view the actions of both groups, and the ethical frameworks we have studied can help us better understand each position. Tech companies align with utilitarian reasoning but conflict with deontology, while authors and publishers align with deontological and natural law frameworks but conflict with utilitarian arguments.

  • Utilitarianism: This framework focuses on maximizing overall well-being and minimizing harm. From this perspective, tech companies’ actions can be justified if training AI leads to broad societal benefits, even if some individuals (like authors) are negatively affected. The argument is that the greater good outweighs the harm.

  • Deontology: This framework prioritizes respecting individual rights regardless of the outcome. Here, authors and publishers have the stronger case: even if AI brings positive advancements, that does not justify overriding intellectual property rights.

  • Natural Law: This framework focuses on actions that align with human nature, reason, and the common good. This perspective also supports authors and publishers. Their concern about AI-generated books or “cheap copies” flooding the market and killing human creativity is legitimate. Undermining imagination and originality goes against what natural law considers essential to human flourishing.

Conclusion

Together, utilitarianism and deontology explain the central tension in this case. A feminist/care ethics perspective strengthens this conclusion: tech companies’ lack of transparency shows a disregard for the individuals and communities most affected, while authors’ concerns highlight the need for responsibility and fairness in the relationships between creators and those who benefit from their work.

Overall, while both sides present compelling arguments, the balance tips toward the authors. Their stance aligns more consistently with ethical principles that respect rights, creativity, and care for individuals.

Final Thoughts

This exercise helped me see both sides of the discussion and recognize that each has strong arguments. It also made me realize that when development is justified in the name of the “greater good,” we should ask: the greater good for whom? Will the benefits of these technologies actually be accessible to everyone?

Another complex factor in this discussion is access. In some countries with restrictive governments, or for people with low incomes, unauthorized websites are often the only way to obtain information or entertainment that others take for granted. These lawsuits, however, risk framing the problem as the existence of such websites rather than the actions of the tech companies exploiting them. As a result, the people who rely on these websites out of necessity could lose access if the sites are shut down.

Ultimately, it is both important and necessary to regulate how tech companies use people’s work and creations. This article and exercise led me to the conclusion that “progress” is not truly justifiable if its cost is stifling creativity and undermining creators’ rights.