
What are Moral Theories?

In this article, we discuss the differences between moral theories and scientific theories, though neither is a ‘mere’ theory.
© Tim Dare, University of Auckland

Scientific theories are not ‘mere’ theories – untested, tentative, vague generalisations. They are, instead, comprehensive, supported by large bodies of converging evidence, based on repeated observations, usually integrating and generalising hypotheses and making consistently accurate predictions across a broad area of scientific inquiry.

Moral theories are not mere theories in the dismissive sense either. They too are based on repeated observations, are likely to integrate hypotheses, and attempt to explain and justify a range of moral or ethical judgements about particular cases.

But neither are moral theories quite like scientific theories. The data that scientific theories try to explain is provided by observation of the natural world. The data that moral theories try to explain is our considered moral judgements: judgements that have, we might say, survived the test of good logical and critical thinking. There is an obvious difference here: in the case of moral theories, our own considered judgements supply the data by which we assess the adequacy of the theories.

Still, even with this difference acknowledged, we do use moral theories, and often in very similar ways to their scientific counterparts.

According to ‘act utilitarianism’, for instance, the right action is the one that generates the most utility, understood by early proponents as the greatest happiness for the greatest number of people. The theory claims that right actions have a common property – their tendency to generate utility.

Act utilitarianism preserves a number of key considered judgements: it seems clear that the consequences of actions matter to their moral status; that utility, or happiness, is important; and that morality should have regard not just to a select few but to all those affected by an act.

And it can provide useful guidance: if we are deciding whether to build opera houses or football stadiums, utilitarianism can tell us not only which action is the right one but also – assuming we can assess the happiness each option would create – how to work that out.

Plausibly, however, we also have a considered moral judgement that it isn’t acceptable to treat someone merely as a resource to improve the lot of others, and while act utilitarianism preserves various other considered moral judgements, it struggles to preserve this one.

Suppose I find myself in hospital for a minor procedure at the same moment five people are injured in a serious bus accident. Now it might really be true that distributing my organs to save five people would create greater happiness than fixing me up and sending me home. Of course, the unhappiness created by my demise and dismemberment has to be figured in the utilitarian calculus, but it seems plausible that my family’s unhappiness would be outweighed by the happiness of the five recipients and their families. If so, act utilitarianism seems to imply that the surgeons ought to harvest my organs – precisely the conclusion our considered judgement rules out.

In response to act utilitarianism’s apparent conflict with the considered judgment that we should not use people as mere resources, moral theorists either amend the theory, calculating the utility not of particular acts but of general rules (yielding rule utilitarianism), or reject utilitarian theories altogether. In either case, they respond in much the way scientists do when a theory cannot account for the data.

An important alternative theory responds to the fact that utilitarianism makes moral rightness depend upon what would in fact make people happy. According to Immanuel Kant, the ultimate principle of morality must be capable of guiding us to the right action in any circumstances. He tells us to ask whether the motive we are considering acting from could become a universal law, a law to be followed by anyone in any situation. He is concerned with consistency and rationality, not consequences. It could not be moral to make a promise intending to break it, he thought, because breaking promises as a rule would be inconsistent with the very idea of a promise: rational people could not will that such a practice become a universal law.

But it is easy to imagine cases in which Kant’s absolute rules would run afoul of our considered moral judgments. Shouldn’t we break a promise if keeping it would have very bad consequences?

A further alternative theory downplays rules and principles, emphasising the importance of ‘practical wisdom’ in particular cases. The right action, according to Aristotelian virtue theory, is the action that would be chosen by an agent who has practical wisdom. But how are we to identify the person of practical wisdom? We can’t do so by seeing whether they choose the right action, since the right action is the action they choose! (If we could work out whether they had chosen the right action on some other ground – utility maximisation or respect for persons, perhaps – then that ground, not the choice of the person of practical wisdom, would be the test for right action.)

And so it goes, with moral theorists amending theories and proposing alternatives.

These are all ‘theories’ in our sense. They offer general accounts of what makes an action right, accounting for as many of our considered judgments as possible (just as scientists attempt to explain as many observations as possible), sometimes calling upon us to abandon some of those judgments (just as scientists might reject some observations or hypotheses as mistaken), and attempting to explain why a judgment in some past case is troubling and how we should judge in the future.

There is, however, one other important way in which moral theories are not like scientific theories.

In science, we assume that there is one right theory and that it will explain all of the data and probably lead us to abandon rival theories (though it may have incorporated significant parts of them). Moral theories do not seem to work quite like that. They are generated by our considered moral judgments and they reflect, for instance, the judgment that consequences matter; that recognition and respect for autonomous reasoning agents matter; that wisdom and judgment matter. Moral theories allow us to see the implications of the judgments that these (and other fundamental concerns) are each important in moral reasoning.

Perhaps, as a result, moral theories have in recent years become more concerned to accommodate the insights of what were once regarded as rival theories. It may be that this trend will lead to the emergence of some broader, unifying theory. If it does, it will be because that new unifying theory can accommodate the contributing intuitions better than narrower alternatives.
