
Modern Ethical Theories

Author: Sophia

what's covered
In this lesson, you will learn about the major ethical theories of recent times. Specifically, this lesson will cover:

Table of Contents

  1. Utilitarianism
  2. Deontology

1. Utilitarianism

hint
Although the concepts discussed in this tutorial are hundreds of years old, they are considered “modern” in the historical sense, which refers particularly to the Age of Enlightenment in Europe, when people turned away from monarchs and religious leaders as beacons of truth and attempted to found new concepts based on reason and empirical evidence, or evidence that can be observed and measured. More importantly, these concepts may be considered modern because they describe the ethical theories that still guide business decisions.

Although the ultimate aim of Aristotelian virtue ethics was eudaimonia, later philosophers began to question this notion of happiness. If happiness consists of leading a good life, what is good? More importantly, who decides what is good? Jeremy Bentham (1748–1832), a progressive British philosopher and jurist of the Enlightenment period, advocated for the rights of women, freedom of expression, the abolition of slavery and of the death penalty, and the decriminalization of homosexuality. He believed that the concept of good could be reduced to one simple instinct: the search for pleasure and the avoidance of pain. All human behavior could be explained by reference to this basic instinct, which Bentham saw as the key to unlocking the workings of the human mind. He created an ethical system based on it, called utilitarianism.

Utilitarianism is a consequentialist theory. In consequentialism, actions are judged solely by their consequences, without regard to character or motivation, and without any understanding of good and evil apart from an action’s capacity to create happiness or pain. Thus, in utilitarianism, it is the consequences of actions that determine whether those actions are right or wrong.

EXAMPLE

News companies make consequentialist decisions every day as they consider the impact of their reporting. For most of the 20th century, most newspapers would not report on suicides for fear of inspiring more; indeed, it is well-documented that suicides increase after a widely reported celebrity suicide. A utilitarian view would be that such stories should not be run, because whatever benefit the newspaper gained from running them was less than the harm the stories would cause. However, this practice has largely changed in the last 20 years.

During Bentham’s lifetime, revolutions occurred in the American colonies and in France, producing the Bill of Rights and the Déclaration des Droits de l’Homme (Declaration of the Rights of Man), both of which were based on liberty, equality, and self-determination. Karl Marx and Friedrich Engels published The Communist Manifesto in 1848. Revolutionary movements broke out that year in France, Italy, Austria, Poland, and elsewhere. In addition, the Industrial Revolution transformed Great Britain and eventually the rest of Europe from an agrarian (farm-based) society into an industrial one, in which steam and coal increased manufacturing production dramatically, changing the nature of work, property ownership, and family. This period also included advances in chemistry, astronomy, navigation, human anatomy, and immunology, among other sciences.

Given this historical context, it is understandable that Bentham used reason and science to explain human behavior. His ethical system was an attempt to quantify happiness and the good so they would meet the conditions of the scientific method. Ethics had to be empirical, quantifiable, verifiable, and reproducible across time and space. Just as science was beginning to understand the workings of cause and effect in the body, so ethics would explain the causal relationships of the mind. The fundamental unit of human action for him was utility—solid, certain, and factual.

Bentham’s fundamental axiom, which underlies utilitarianism, was that all social morals and government legislation should aim at producing the greatest happiness for the greatest number of people. Utilitarianism, therefore, emphasizes the consequences or ultimate purpose of an act rather than the character of the actor, the actor’s motivation, or the particular circumstances surrounding the act. In order to be scientific, notions of utility had to be:

  1. Universal, applying to all acts of human behavior.
  2. Objective, independent of individual emotions or perceptions.
  3. Rational, instead of based on any religious doctrine.
  4. Quantifiable, meaning it can be observed and measured.
Bentham was interested in reducing utility to a single index so that units of it could be assigned a numerical and even monetary value, which could then be regulated by law. This utility function measures in “utils” the value of a good, service, or proposed action relative to the utilitarian principle of the greater good—that is, increasing happiness or decreasing pain. Bentham thus created a “hedonic calculus” to measure the utility of proposed actions according to the conditions of intensity, duration, certainty, and the probability that a certain consequence would result. He intended utilitarianism to provide a reasoned basis for making judgments of value rather than relying on subjectivity, intuition, or opinion. The implications of such a system on law and public policy were profound and had a direct effect on his work with the British House of Commons, where he was commissioned by the Speaker to decide which bills would come up for debate and vote. Utilitarianism provided a way of determining the total amount of utility or value a proposal would produce relative to the harm or pain that might result for society.
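Bentham’s hedonic calculus can be sketched in code. The sketch below is purely illustrative (the scoring function, weights, and policy options are invented for this lesson, not drawn from Bentham): each possible outcome of an option is scored on intensity, duration, certainty, and probability, and the option whose outcomes sum to the most “utils” is chosen.

```python
# Toy hedonic calculus (illustrative only, not Bentham's actual arithmetic).
# Each outcome is scored on intensity, duration, certainty, and probability;
# pleasures score positive, pains negative.

def utils(intensity, duration, certainty, probability):
    """Expected pleasure (positive) or pain (negative) of one outcome."""
    return intensity * duration * certainty * probability

def best_option(options):
    """Pick the option whose summed outcome scores are highest."""
    return max(options, key=lambda name: sum(
        utils(*outcome) for outcome in options[name]))

# Hypothetical policy choice: each outcome is
# (intensity, duration, certainty, probability).
options = {
    "build park":    [(5, 10, 0.9, 0.8), (-2, 3, 0.5, 0.4)],
    "build highway": [(3, 20, 0.7, 0.6), (-4, 15, 0.8, 0.5)],
}
print(best_option(options))  # → build park
```

Note how the calculus judges only the numbers: nothing in the choice depends on the character or motives of the decision maker, which is exactly the point Bentham intended.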

Utilitarian theorists weigh which option will do the most good (and the least harm) to others.

In this way, consequentialism differs from Aristotelian and Confucian virtue ethics, which can accommodate a range of outcomes as long as the character of the actor is ennobled by virtue. For Bentham, character had nothing to do with the utility of an action. Everyone sought pleasure and avoided pain regardless of personality or morality. In fact, too much reliance on character might obscure decision making. Rather than making moral judgments, utilitarianism weighed acts based on their potential to produce the most good (pleasure) for the most people. It judged neither the good nor the people who benefitted. In Bentham’s mind, no longer would humanity depend on inaccurate and outdated moral codes. For him, utilitarianism reflected the reality of human relationships and was enacted in the world through legislative action.

Bentham’s protégé, John Stuart Mill (1806–1873), refined Bentham’s system by expanding it to include human rights. In so doing, Mill reworked Bentham’s utilitarianism to respond to critics who felt that Bentham’s utility function reduced humans to mathematical units. Mill believed in human rights; he thus believed the effort to achieve utility was unjustified if it coerced people into doing things they did not want to do. Likewise, the appeal to science as the arbiter of truth would prove just as futile, he believed, if it did not temper facts with compassion. “Human nature is not a machine to be built after a model and set to do exactly the work prescribed for it, but a tree, which requires to grow and develop itself on all sides, according to the tendency of the inward forces which make it a living thing,” he wrote (1859). Mill was interested in humanizing Bentham’s system by ensuring that everyone’s rights were protected, particularly the minority’s, not because rights were God-given but because that was the most direct path to truth. Therefore, he introduced the harm principle, which states that the “only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others. His own good, either physical or moral, is not a sufficient warrant.”

To be sure, there are limitations to Mill’s version of utilitarianism, just as there were with the original. For one, there has never been a satisfactory definition of “harm,” and what one person finds harmful, another may find beneficial. Mill defined harm as the setback of one’s interests, so that harm is always relative to the individual harmed. But what role, if any, should society play in defining what is harmful or in determining who is harmed by someone’s actions? For instance, is society culpable for not intervening in cases of suicide, euthanasia, and other self-destructive activities such as drug addiction? These issues have become part of the public debate in recent years and will likely remain so, as such actions are increasingly considered in a larger social context. People may also define intervention and coercion differently depending on where they fall on the political spectrum.

Considering the social implications of an individual action highlights another limitation of utilitarianism, and one that perhaps makes more sense to people today than it would to Bentham and Mill, namely, that it makes no provision for emotional or cognitive harm. If the harm is not measurable in physical terms, then it lacks significance. For example, if a reckless driver today irresponsibly exceeds the speed limit, crashes into a concrete abutment, and kills himself while totaling his vehicle (which he owns), utilitarianism would hold that in the absence of physical harm to others, no one suffers except the driver. You may not arrive at the same conclusion. Instead, you might hold that the driver’s survivors and friends, along with society as a whole, have suffered a loss. Arguably, everyone is diminished by the recklessness of his act.

think about it
To illustrate the concept of consequentialism, consider a hypothetical story told by Harvard psychologist Fiery Cushman. A man offends two volatile brothers with an insult. Jon wants to kill him: he shoots but misses. Matt intends only to scare the man but kills him by accident. In most countries (including the United States), Matt will face the more severe legal penalty, even though his intent was far less malicious. Applying utilitarian reasoning, can you say which brother bears greater guilt for his behavior? Are you satisfied with this assessment of responsibility? Why or why not?

terms to know
Empirical Evidence
Evidence that can be observed and measured.
Consequentialism
An ethical theory where actions are judged by their consequences.
Utilitarianism
A consequentialist ethical theory in which the right action is the one that brings the most pleasure (and does the least harm) to the greatest number of people.
Utility Function
An attempt to weigh the outcomes of a decision mathematically.
Harm Principle
The belief that humans have the right to do anything that does not harm others.

2. Deontology

Unlike Bentham and Mill, Immanuel Kant (1724–1804) was not concerned with consequences of one’s actions or the harm caused to one’s individual interests. Instead, he focused on motives and the willingness of individuals to act for the good of others, even though that action might result in personal loss. Doing something for the right reason was much more important to Kant than any particular outcome. Kant argued that humans have an inherent and unconditional duty to act ethically, which he called the categorical imperative.

In its initial form, Kant described his concept of the categorical imperative as follows: “Act only according to that maxim whereby you can, at the same time, will that it should become a universal law.” In other words, in considering any action, a person must consider what they would have all people do all the time.

EXAMPLE

A manager is tempted to tell a small lie to protect an employee who is habitually late but otherwise a good worker and colleague. The manager knows the employee is late because of family obligations. To a utilitarian, this would be a way of minimizing harm to both the office and the employee’s family by protecting his job. To a deontologist, it would be necessary to tell the truth, regardless of the consequences, because the manager would not want a work environment where everybody lied all the time.

A worker is being questioned by a manager, who is pointing at the empty desk adjacent to her own.

Kant’s categorical (or unconditional) imperative has practical applications for the study of ethics. The categorical imperative contains two major suppositions:

  1. One must act on the basis of goodwill rather than purely on self-interested motives that benefit oneself at the expense of others.
  2. One must never treat others as means toward ends benefitting oneself without considering them also as ends in themselves.

Kant held that observing the categorical imperative as one considers what actions to take would directly lead to ethical actions.
In Kant’s view, rationalism and empiricism prevented people from perceiving the truth about their own nature. What was that truth? What was sufficient to constitute it? Kant identified an a priori world of knowledge and understanding in which truth lay in the structures and categories of the mind that were beyond perception and reason. This was a radical concept for the times. Simply put, Kant believed that humans were born with a sense of right and wrong and needed only to follow those instincts.

In the end, Kant’s systematic analysis of knowing and understanding provided a much-needed counterweight to the logic of Enlightenment rationalism. The mental structures he proposed may even find some support in modern cognitive science. For instance, many researchers hold that humans are born with cognitive structures specialized for language acquisition and development. Even more surprising, there may be similar cognitive structures for morality, conscience, and moral decision making. So, it is quite possible that conscience, if not happiness, has a genetic component after all, although Kant himself did not believe the categories of the understanding or the a priori structures of the mind were biological.

From a Kantian perspective, it is clear that adherence to duty is what builds the framework for ethical acts. This is in direct contradiction of Bentham’s view of human nature as selfish and requiring an objective calculus for ethical action to result. Kant rejected the idea of such a calculus and believed, instead, that perceptions were organized into preexisting categories or structures of the mind. Compare his notion of an ordered and purposeful universe of laws with the similar logos, or logic, of the ancient Greeks. One of those laws included implementation of the categorical imperative to act ethically, in accordance with one's conscience. However, even though that imperative ought to be followed without exception, not everyone does so. In Kant’s moral teachings, individuals still had free will to accept or reject it.

There is a definite contrast between utilitarianism, even Mill’s version, and Kant’s system of ethics, known as deontology, in which duty, obligation, and good will are of the highest importance. (The word is derived from the Greek deon, meaning duty, and logos again, here meaning organization for the purpose of study.) An ethical decision requires people to observe only the rights and duties they owe to others, and, in the context of business, act on the basis of a primary motive to do what is right by all stakeholders. Kant was not concerned with utility or outcome—his was not a system directed toward results. The question for him was not how to attain happiness but how to become worthy of it.

big idea
In utilitarianism, “the ends justify the means.” This is in direct contrast to deontology, where you must do the right thing regardless of the consequences.

EXAMPLE

One case that often separates deontology from consequentialism is lying. For the deontologist, lying violates a moral duty to tell the truth; the nature of the lie is unimportant. This means a deontologist avoids even the “white lies” that spare people’s feelings or protect a surprise. In contrast, a consequentialist would weigh the pain caused by the truth against the minimal harm done by lying, and readily tell the “white lie” that Uncle Joe’s meatloaf is delicious or that nobody is planning a surprise party at work for Janice’s retirement.
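The contrast can be made concrete with a toy decision rule for the white-lie case. This is a deliberately simplified illustration, not a serious model of either theory: the harm scores are invented, and real consequentialist reasoning involves far more than comparing two numbers.

```python
# Toy contrast (purely illustrative) between the two decision rules
# applied to telling a white lie.

def consequentialist_choice(harm_of_truth, harm_of_lie):
    """Pick whichever act produces less total harm."""
    return "lie" if harm_of_lie < harm_of_truth else "tell truth"

def deontological_choice():
    """Lying violates a universal duty, so the answer never varies."""
    return "tell truth"

# Uncle Joe's meatloaf: the truth hurts feelings (harm 5),
# while the white lie harms almost no one (harm 1).
print(consequentialist_choice(harm_of_truth=5, harm_of_lie=1))  # → lie
print(deontological_choice())                                   # → tell truth
```

Note that the deontologist’s function takes no inputs at all, which is the point: the duty to tell the truth does not vary with circumstances or consequences.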

Rather like Aristotle and Confucius, Kant taught that the transcendent aspects of human nature, if followed, would lead people inevitably to treat others as ends rather than means. To be moral meant to renounce uninformed dogmatism and rationalism, abide by the categorical imperative, and embrace freedom, moral sense, and even divinity. This was not a lofty or unattainable goal in Kant’s mind, because these virtues constituted part of the systematic structuring of the human mind. It could be accomplished by living truthfully or, in today's terms, authentically. Such a feat transcended the logic of both rationalism and empiricism.

IN CONTEXT

Is it wrong to steal bread to feed hungry children? The answer depends on your ethical theory.
In utilitarianism, the greatest good or happiness for the largest number of people is the goal, and that which produces the greatest good is considered the most ethical, but there are two critical flaws:
  1. What is “good”? What is “happiness”? And who is one considering when one tries to define those terms? Can anyone really define a common “good” or a universal “happiness?”
  2. Nobody can know the future; a person can only predict possible or probable consequences.
The deontological model also has several critical flaws:
  1. It deals in absolutes, whether based on principles or on duties, but not everyone accepts the same principles and duties.
  2. It rejects consequences as a guide because they cannot be known before an action is taken, yet it asks a person to judge the action on its own merit and on the actor’s intention, which can never really be known either.
If you consider a classic question, each approach demonstrates its weaknesses. Recall the story of Jean Valjean in the Victor Hugo novel Les Misérables, who steals a loaf of bread to feed his sister’s starving children. Did Valjean behave ethically?

How would you answer? Would you focus on consequences, or intentions and principles and/or duty?

To focus on consequences, you would not look directly at the theft itself, but instead at the probable consequences: feeding starving children with minimal harm done to the victim of the theft. A utilitarian might argue the action is ethical as it produces the greatest good by its probable consequences: the children are no longer hungry. But you don’t really know that the children have no other options. Moreover, since Valjean is caught and imprisoned, the children lose a valuable family member who can help them, so judged by the actual (rather than imagined) consequences, stealing the bread led to more harm than good.

By contrast, if you focus on absolute principles and duty, you understand the rule first: to steal is wrong and unethical, regardless of situation or context or extenuating circumstances, because nobody would want a world in which all people stole all the time (per the categorical imperative). Now, observing the action itself (the theft of bread), you judge the act itself as unethical because it violates the principle and duty to do the right thing. But few would say that letting children starve is a good moral choice.

Neither system works perfectly, and while utilitarianism is too flexible, deontology is too inflexible. How can you resolve the two? This is not easily answered, and in reality, many people draw on both deontological and utilitarian principles as they make ethical decisions. You will continue to consider this question in subsequent chapters.

terms to know
Categorical Imperative
The ethical precept that moral choices should only be made if you would have all people behave that way all the time.
Deontology
An ethical theory that stresses adherence to moral duty and universal laws; this duty is performed without regard to consequences.

summary
In this lesson, you learned about two key ways to assess whether an action is ethical: by evaluating consequences or by evaluating the action itself in relation to principles, duty, and adherence to universal laws, regardless of the consequences. Neither system works perfectly, and while utilitarianism is too flexible, deontology is too inflexible. You will often observe, in the real world, elements of both coming to bear on ethical thinking and decision making.

Source: THIS TUTORIAL HAS BEEN ADAPTED FROM OPENSTAX "BUSINESS ETHICS". ACCESS FOR FREE AT OPENSTAX.ORG/BOOKS/BUSINESS-ETHICS/PAGES/1-INTRODUCTION. LICENSE: CREATIVE COMMONS ATTRIBUTION 4.0 INTERNATIONAL.

REFERENCES

Mill, J.S. (1859). On Liberty. Project Gutenberg. www.gutenberg.org/files/34901/34901-h/34901-h.htm

Terms to Know
Categorical Imperative

The ethical precept that moral choices should only be made if you would have all people behave that way all the time.

Consequentialism

An ethical theory where actions are judged by their consequences.

Deontology

An ethical theory that stresses adherence to moral duty and universal laws; this duty is performed without regard to consequences.

Empirical Evidence

Evidence that can be observed and measured.

Harm Principle

The belief that humans have the right to do anything that does not harm others.

Utilitarianism

A consequentialist ethical theory in which the right action is the one that brings the most pleasure (and does the least harm) to the greatest number of people.

Utility Function

An attempt to weigh the outcomes of a decision mathematically.