
Case Study in Computer Ethics: Artificial Intelligence

Author: Sophia

what's covered
In this tutorial, we will explore a case study on ethics and artificial intelligence. Specifically, this tutorial covers:

Table of Contents

  1. The Three Rules of Robotics
  2. Generative AI
  3. AI and Ethical Theory

1. The Three Rules of Robotics

Writer Isaac Asimov largely created the concept of robots as we know them in I, Robot, a 1950 collection of short stories that explores the philosophical and ethical dimensions of mechanical beings with human intelligence. (Note that the movie starring Will Smith has little to do with the book.)

The three rules, sometimes called “Asimov’s laws,” are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Unlike humans, robots are obligated by their programming to follow these three laws. Since they must follow these rules, it is easy to see how this approach leads naturally to a deontological understanding of ethics. However, if we look at the rules closely, we can see that what is being examined are the outcomes of an action. Could a robot drop a large weight on a human? The act of dropping is not itself problematic; failing to catch the weight before it crushes a human is what results in harm. This looks a lot like consequentialism. The laws of robotics might be best understood as an example of rule utilitarianism.
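The strict priority ordering built into the laws (First over Second over Third) is itself a programming idea. Below is a minimal Python sketch of that ordering, assuming a hypothetical Action record whose boolean fields summarize an action's predicted outcomes; it illustrates the rule structure, not a workable safety system.

    from dataclasses import dataclass

    # Hypothetical summary of an action's predicted outcomes. The fields are
    # invented for illustration; real "harm" cannot be reduced to booleans.
    @dataclass
    class Action:
        harms_human: bool            # the action would injure a human
        allows_harm: bool            # the action lets a human come to harm
        disobeys_human_order: bool   # the action ignores a human's order
        endangers_robot: bool        # the action risks the robot's existence

    def first_violated_law(a: Action):
        """Check the three laws in strict priority order; return the number
        of the first law the action violates, or None if it is permitted."""
        if a.harms_human or a.allows_harm:
            return 1  # First Law outranks everything else
        if a.disobeys_human_order:
            return 2  # Second Law, checked only after the First is satisfied
        if a.endangers_robot:
            return 3  # Third Law, subordinate to the other two
        return None

    # Dropping a weight and not catching it: the drop itself is harmless,
    # but the predicted outcome still violates the First Law.
    drop = Action(harms_human=False, allows_harm=True,
                  disobeys_human_order=False, endangers_robot=False)
    print(first_violated_law(drop))  # prints 1

Notice that even this toy version must predict outcomes (will the action let someone be harmed?) before any rule can be applied, which is exactly the consequentialist wrinkle noted above.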

More than 70 years after Asimov proposed the rules in his fiction, they remain a good starting point for thinking about how to program artificial intelligence and robots as those technologies approach the vision Asimov entertained in his book. What rules must they follow? What moral (and legal) obligations do their programmers have to ensure that robots or computers make ethically good decisions?


2. Generative AI

In the 2020s, tools like ChatGPT became widespread. As a result, many people now associate AI, an expansive term with many applications, with generative AI: any tool that can be “fed” large amounts of data and then respond to prompts by generating text, artwork, videos, or music, drawing on the examples in its data set.

The rapid adoption of these tools by millions of people was matched only by the eagerness of tech companies to build generative AI features into their software. This raised a number of concerns:

  1. Academic dishonesty, in having generative AI do homework and assignments.
  2. Copyright infringement, as generative AI reproduced recognizable chunks of copyrighted works without the author’s or artist’s permission.
  3. Replication of the style of known authors and artists without their consent.
  4. Deceptive use of AI in contexts beyond academic work, such as composing “personalized” emails to clients.
  5. Generation of “deep fake” videos that show real, recognizable people committing crimes or engaging in other activities that harm their reputations.
  6. Generation of realistic pornography featuring celebrities, child abuse imagery, or other mimicry of illegal photography or video.
For each of these, ask how the use of AI contributes to “injuring a human being” or “allowing a human being to come to harm.” The harm may range from public humiliation to loss of revenue to simply hurt feelings at being deceived.

think about it
Does responsibility for the uses of AI described above, which create measurable harm, belong only to the person who prompted the AI, or does it extend to the developers who create AI capable of being used in these ways? These are the ethical, even legal, issues that developers are currently grappling with.

Consider the ethics of building AI in terms of:

  • Using large sets of personal data
  • Using copyrighted materials as part of the language model
  • Generating faked images or videos featuring real people
  • Allowing the generation of illegal content such as realistic child abuse videos
  • Reinforcing stereotypes such as showing white men when the user asks for “pictures of doctors”
Asimov’s laws suggest that makers of robots have an obligation to program them in accordance with the laws and with ethics. By the same reasoning, the developers of AI would have a responsibility to ensure that their tools do no harm.

What limits should be programmed into AI generators? While no one is physically harmed by the creation of text or images with these tools, one can easily see how psychological distress could follow from discovering that faked images of yourself have spread across the Internet.


3. AI and Ethical Theory

As mentioned above, computer programs and robots are bound by rules, so actions taken by them would seem best evaluated from a deontological perspective. But, of course, the programs and robots do not program themselves. The technological creations we call generative AI are built with the ability to discern patterns in words or images and to produce the set of words or images most likely to correspond to a given prompt.

AI bots that produce a text response (like a poem to be given to one’s romantic partner) generally work on what is called a Large Language Model (LLM). When given the prompt to compose such a poem, the program draws on its data set to find the structure of lines and number of words most commonly used for such writings, along with the words and phrases most likely to occur, and builds hypothetical possibilities that fit “the rules” of composing such a poem. Similarly, an image generator may use Generative Adversarial Networking, in which candidate images are compared against exemplars from the data set; the candidate that is more like the exemplar is kept and used as the starting point for the next step in the process.
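To make the “most likely next word” idea concrete, here is a minimal Python sketch of a bigram model, the simplest possible version of the statistical machinery an LLM applies at a vastly larger scale. The tiny corpus is invented for illustration.

    import random
    from collections import Counter, defaultdict

    # Toy corpus (invented for illustration); real LLMs train on billions of words.
    corpus = "the moon rises and the moon glows and the stars shine".split()

    # Count how often each word follows each other word (bigram counts).
    following = defaultdict(Counter)
    for word, nxt in zip(corpus, corpus[1:]):
        following[word][nxt] += 1

    def next_word(word):
        """Pick a continuation, weighted by how often it followed `word`."""
        counts = following.get(word)
        if not counts:
            return None  # dead end in our tiny corpus
        choices, weights = zip(*counts.items())
        return random.choices(choices, weights=weights)[0]

    # Generate a short line starting from a prompt word.
    word, line = "the", ["the"]
    for _ in range(5):
        word = next_word(word)
        if word is None:
            break
        line.append(word)
    print(" ".join(line))  # e.g., "the moon glows and the stars"

In this toy model, “moon” follows “the” twice as often as “stars” does, purely because of the counts in the corpus; an LLM’s “knowledge” is, at bottom, the same kind of pattern, learned from its data set.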

EXAMPLE

If we ask a GAN to create a new image depicting a red barn in a snowy field, the program will reject candidate images that do not appear “barn-like” enough, then refine the surviving image until it also matches what a snowy field looks like in its data set. In both the LLM and GAN cases, the AI program does not “understand” what it has created; it has built a response that statistically matches, as closely as possible, what its data set tells it about the requested item.
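A real GAN trains two neural networks against each other, which is far too much machinery for a short example. The sketch below is a deliberate caricature of the compare-and-keep loop described above: the “images” are single numbers, the “exemplar” is a target value, and the “discriminator” simply measures distance to the exemplar. Technically this is closer to a hill-climbing search than a true GAN, but it captures the idea of keeping whichever candidate looks more exemplar-like.

    import random

    exemplar = 0.75  # stands in for "what a red barn in snow looks like"

    def discriminator_score(image):
        """Higher score means the candidate looks more like the exemplar."""
        return -abs(image - exemplar)

    best = random.random()  # start from a random "image"
    for _ in range(100):
        candidate = best + random.uniform(-0.1, 0.1)  # propose a variant
        # Keep whichever of the two candidates scores as more exemplar-like.
        if discriminator_score(candidate) > discriminator_score(best):
            best = candidate

    print(f"final image value {best:.3f} vs exemplar {exemplar}")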



Once we understand how important the data set is to the text or images a generative AI program produces, it is easier to see why programmers must consider the ethics of the software they are designing. Whether we use generative AI to create text or an image, both rely on collected data as the “source material” to be considered. For an LLM text generator, this might mean inputting the text of novels, short stories, magazine and newspaper articles, and transcriptions of published speeches. When the data set is large enough, the program can find patterns of word usage.

think about it
While the word “bird” is commonly followed by “house,” it is rarely followed by “trampoline.” Similarly, in a love sonnet, moon imagery is far more common than discussion of a black hole. But all of this depends on the texts used to build the data set. How should these data sets be constructed?

A programmer who is a deontologist would be morally obligated to follow a set of rules for creating the data set. Thus, in order to avoid using another’s property without compensation, she may limit the data set to works old enough that their copyright has expired. Using only those works would mean that the programmer did not engage in theft.
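That rule can be written directly into the data pipeline. Here is a minimal sketch, using a hypothetical catalog of works and a simplified “published before 1929” cutoff as a stand-in for the genuinely more complicated rules of copyright expiration:

    from dataclasses import dataclass

    @dataclass
    class Work:
        title: str
        author: str
        year: int

    # Hypothetical catalog entries, invented for illustration.
    catalog = [
        Work("Hamlet", "William Shakespeare", 1603),
        Work("A Tale of Two Cities", "Charles Dickens", 1859),
        Work("A Recent Bestseller", "A Living Author", 2019),
    ]

    # The deontologist's rule: include only works whose copyright has expired.
    PUBLIC_DOMAIN_CUTOFF = 1929  # simplified stand-in for real copyright law

    training_set = [w for w in catalog if w.year < PUBLIC_DOMAIN_CUTOFF]
    print([w.title for w in training_set])  # the 2019 work is excluded

The point is that the programmer’s ethical rule is not an afterthought; it is a line of code that determines what the model can ever learn from.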

EXAMPLE

A data set built on the complete works of William Shakespeare will yield a very different algorithm for predicting which words to use than one built on a more contemporary author like Stephen King.

By acting in keeping with deontological ethics, the programmer sets up a particular predictive model; her ethical standards make a very practical difference in the outcome of her work. A consequentialist might understand the same data set building in very different ethical terms, depending on which outcomes are being measured. He might, for example, exclude text from social media platforms so that racist, sexist, or nationalistic language does not enter the data set.

For both text and image generators, the use of AI involves a great deal of computing, which draws both a large amount of electricity and a large amount of water to cool the computers. The environmental impact of AI use is astonishingly large. By 2026, it is estimated that the data centers used to store the data sets and run these computing processes will use more electricity than all but six nations in the world, and that the fresh water used to cool the systems would be twelve times the amount of water used every day in the United States. The implications of creating and using generative AI would certainly lead a consequentialist to have concerns about the environmental impact of this work.

In some ways, generative AI fits nicely with a virtue ethics approach. For the virtue ethicist, we learn over time how to do the right thing in a given situation by learning from our mistakes. Both LLM and GAN approaches likewise build on their past results to produce better outcomes with each creation. It is less clear how a programmer should understand their role in building such systems. They may worry about the honesty of building such AI engines (in a manner similar to the deontologist’s concern above), about how the software will shape the habits of its users, or about whether use of these programs will make consumers less likely to rely on their own knowledge and skills. Philosopher Shannon Vallor has called for the use of virtue ethics in building a “technomoral” values approach for both those who create and those who use our technological systems.


learn more
UNESCO, an agency within the United Nations, has a team exploring these issues. You can find out more here: 
www.unesco.org/en/artificial-intelligence/recommendation-ethics

terms to know
Large Language Model (LLM)
A type of artificial intelligence that uses deep learning techniques to understand, generate, and manipulate human language based on vast amounts of text data.
Generative Adversarial Networking
A machine learning framework where two neural networks, a generator and a discriminator, compete to create and evaluate realistic data, improving each other over time.

summary
In this tutorial, we looked at computer ethics, in particular the ethics of machines capable of doing harm, such as robots and artificial intelligence. We began with the three rules of robotics, an ethical code that actually predates real robots, and then looked at a number of ethical issues raised by the widespread availability and use of generative AI. Each of the three major systems of ethics (deontology, consequentialism, and virtue ethics) raises questions for us to consider about AI and ethical theory in the programming and use of computer programs.

REFERENCES

Asimov, I. (1950). "Runaround." In I, Robot (The Isaac Asimov Collection ed., p. 40). Doubleday.
