Wednesday, 7 April 2021

Ethical Reasoning: A Philosophical-Psychological Exploration

by Douglas E. Chismar

Since Aristotle’s writing of the Nicomachean Ethics, philosophers have sought to understand the nature and scope of ethical reasoning. Some of the most insightful attempts have been those which worked to integrate the investigation of ethical questions with related topics in other areas of knowledge. Such related areas have included epistemology, metaphysics, and the social sciences. In this paper, we will consider attempts to understand the nature of ethical reasoning which bring psychological and philosophical issues into a common forum.

Psychology and philosophy have been veritable “bosom buddies,” particularly since the dawn of modern (post-medieval) philosophy. Modern philosophers, often beginning from an epistemological standpoint, have on many an occasion blundered unwittingly into doing primitive psychology. An example is Hume’s lengthy and detailed treatment of the emotions in the second Enquiry. Others have been openly enamored of a prominent psychological perspective, and have sought to remake philosophy accordingly. Thus in W.V. Quine’s Word and Object, behaviorism and epistemology become one. Hopefully, these two approaches do not exhaust the alternatives. Whatever approach one chooses, philosophers cannot afford to overlook the many insights afforded them by contemporary psychology. This is especially the case in regard to the study of ethical reasoning.

Moral or ethical reasoning (we shall use the terms synonymously) denotes the thinking processes which play a part in the making of moral decisions. Philosophers historically have made numerous attempts to define in some detail the nature of these processes. The study is made problematic by the fact that philosophers are concerned not only with describing how people actually do think, but also with how they ought to think. That is, the study is occupied with prescriptive as well as descriptive considerations. To define moral reasoning, for most philosophers, is to offer a normative theory which, when consistently applied, correctly sets the boundaries of morally acceptable conduct.[1] Once a theory has been defined, it is put to the test over a wide range of applications in search of counterexamples—instances in which the method of reasoning turns out to be flawed, leading to undesirable consequences. Thus utilitarian theories are challenged by cases in which the sacrifice of a minority appears to bring about the greatest happiness of the greatest number; Kantian deontological theories are tested by cases in which actions judged inherently wrong by the theory (e.g., lying) appear to actually be justified when alternative actions seem to lead to even worse consequences (not lying, and sacrificing a life). Moral theories which yield outcomes which are clearly contrary to the standard intuitions or widely accepted moral beliefs of one’s moral community are either rejected or modified to cover the adverse cases.

Essential to the process of testing moral theories, as we have described it, is the availability of a relatively unquestioned standard against which the outcomes of a theory can be tested. This standard may be revealed truth (the Bible), but for many philosophers it is simply a set of actions or qualities the normative acceptability of which is basically uncontroversial. Hence, a theory which allows, across the board, for arbitrary taking of life, stealing, or cheating is obviously unacceptable. Likewise, an approach which does not find a place of merit for such praiseworthy qualities as altruism or fairness is an approach destined for the ethical scrap pile. Only after a theory passes these initial, uncontroversial tests can it then be applied to more difficult ethical issues in which no standard or agreed-upon intuitions are available to guide the way.

The basic intuitions of a moral community are those which play the most central part in what are often referred to as “value systems”. Value theory is an important point of confluence of philosophy and psychology. Philosophers are concerned with identifying the most fundamental values, and the role they play in moral reasoning. Psychologists seek to describe the formation, maintenance, structuring and change of value systems, especially as values have impact upon behavior. We will discuss values and their relation to moral reasoning when treating “attitudes” in a later section.

An even more important juncture of philosophy and psychology has to do with defining the concept of “rationality”. As we shall observe in the next section, philosophers have often disagreed on what they view as “rational” procedure. One may mean simply being consistent, or one may go further to state the ends with which one must be consistent. Psychologists also discuss the concept of rationality, but generally extend its meaning beyond a purely cognitive sense to embrace the idea of a high or efficient level of individual functioning. How this expanded notion of rationality relates to the judgment of good and bad ethical reasoning will be a topic of interest in the latter portions of this paper. At this point, we note five important issues surrounding ethical reasoning and rationality:

  1. What does it mean to be “rational” in one’s moral reasoning?
  2. To what extent is reason (cognition) a determinant of the individual’s moral decisions? Are moral decisions the result of reasons, causes, or both?
  3. To what extent can an individual become more rational in his moral decision-making?
  4. To what extent is it desirable that moral decision making be a cognitive, rational process (e.g., in some cases, a warm heart might be preferable to a “cold, calculating mind”)?
  5. Can psychological characterizations of moral reasoning styles aid us in evaluating philosophically-constructed ethical theories?

In the following section, we will survey some of the attempts of philosophers and psychologists to answer these difficult questions. In order to highlight one important variable (relating to question No. 2), we arrange the surveyed theoretical approaches along a cognitive-noncognitive continuum. Highly cognitive approaches are those which stress that reasoning plays a significant role in the formation of values and beliefs, and in deciding verbal and behavioral outcomes. Noncognitive approaches are those which interpret moral decision making, and the process of moral reasoning in general, as largely the result of nonrational causes, whether internally generated or the product of environmental impingements. It turns out that both philosophers and psychologists have staked out a number of positions on the cognitive-noncognitive continuum.

I. Some Theoretical Approaches to Ethical Reasoning

A Noncognitive Psychological Approach

Psychology as a discipline began in a highly mentalistic fashion (perhaps as an offshoot of philosophical epistemology). Thus early psychological treatments were strongly cognitive in nature. A major turning point was the appearance of Sigmund Freud’s psychoanalytic perspective. According to Freud, most human behavior is to be explained as a result of the interplay of largely unconscious drives. He maintains “that mental processes are essentially unconscious, and that those which are conscious are merely isolated acts and parts of the whole psychic entity.”[2] Freud portrays the embattled ego, constrained by reality, impelled by the guilt-producing demands of the superego, and striving to hold down the thrusts of the libido, which is an overflowing well of biological energy.[3] According to the psychoanalytic perspective, reasoning processes exist primarily to fulfill the purpose of rationalization—the justification to the ego of the inevitable inner conflict taking place between the various drives and impulses. Rationalization often takes the form of “defense mechanisms,” by which inner tensions are at least temporarily released. Typical examples are aggression, regression, projection, withdrawal and repression.

One’s style of reasoning, then, is often but a post facto expression of inner events and conflicts. While it may serve as a useful indicator of certain unconscious events (as indirectly manifested), reasoning itself is ultimately but a facade, jabbering on about things which have little to do with what is really important to the individual. Reasoning is viewed as a function of more basic events, motivations and causes hidden in the personality structure.[4] Moral reasoning is especially suspect in that it is a tool of repressive societal moral systems—viewed by Freud in at least one stage of his career as a major cause of mental illness. Opposition of the socially-approved internalized moral norms to the flow of energy which constitutes the “id” leads to anxiety, guilt, “reaction formations,” etc. Needless to say, this view casts the activity of moral reasoning in an extremely morbid and skeptical light.

A Noncognitive Philosophical Approach

A somewhat similar model is offered by Charles L. Stevenson. Stevenson distinguishes between a “disagreement in belief” and a “disagreement in attitude.” These two kinds of disagreements can take place in every kind of discourse, including ethics. Because two kinds of disagreement are possible, our concept of ethical reasoning must somehow be expanded to include both:

If ethical arguments, as we encounter them in everyday life, involved disagreement in belief exclusively—whether the beliefs were about attitudes or about something else—then I should have no quarrel with the ordinary sort of naturalistic analysis. Normative judgments could be taken as scientific statements, and amenable to the usual scientific proof. But a moment’s attention will readily show that disagreement in belief has not the exclusive role that theory has so repeatedly ascribed to it. It must be readily granted that ethical arguments usually involve disagreement in belief; but they also involve disagreement in attitude. And the conspicuous role of disagreement in attitude is what we usually take, whether we realize it or not, as the distinguishing feature of ethical arguments.[5]

Accordingly, Stevenson arrives at a “working model” of moral terms which does justice to this heavily attitudinal character which he finds characterizing ethical discussions. ‘This is good’ is translated into ‘I approve of this; do so as well,’ while ‘This is bad’ becomes ‘I disapprove of this; do so as well.’[6] Stevenson acknowledges that this is a “crude” interpretation (and suggests some possible alterations), but adopts these as a sufficiently usable working model.

A great deal of Stevenson’s attention is devoted to the question of how ethical disagreements are to be resolved. Stuart Chase, in a review of Stevenson’s Ethics and Language, concludes that this amounts to the basic question, “how much can individuals be influenced by reason?”[7] Stevenson resolves this question into two separate ones, corresponding to the dual categories of disagreements in belief and disagreements in attitude. Disagreements in belief, he suggests, are highly amenable to resolution, essentially through appeal to the scientific method.[8] This may also lead to resolution of disagreement in attitudes, “due simply to the psychological fact that altered beliefs may cause altered attitudes.”[9] In this case, complete agreement on an ethical issue (a dispute about values) has been obtained, as both forms of disagreement are resolved.

Unfortunately, while one might hope that scientific and rational methods could solve all ethical disputes, such hopes do not find support in experience. Stevenson notes that “it is logically possible, at least, that two men should continue to disagree in attitude even though they had all their beliefs in common, and even though neither had made any logical or inductive error.”[10] Continuing disagreements (in attitude) are common, inasmuch as they are often due to “differences in temperament, or in early training, or in social status”—matters relatively closed to the sphere of rational discussion.[11] Given that this is the case, Stevenson pessimistically concludes that disagreements in ethical attitudes generally persist until non-rational methods for dealing with them are applied (e.g., impassioned, moving oratory). Thus, the task of the moralist is occasionally a cognitive, or rational one, but more often a noncognitive “persuasive” one. “Insofar as normative ethics draws from the sciences, in order to change attitudes via changing people’s beliefs, it draws from all the sciences; but a moralist’s peculiar aim—that of redirecting attitudes—is a type of activity, rather than knowledge, and falls within no science.”[12]

A Modified Noncognitive Approach

In Book III, Part I, section 1 of the Treatise, David Hume raises the question “whether ‘tis by means of our ideas or impressions we distinguish betwixt vice and virtue, and pronounce an action blameable and praise-worthy?”[13] Those holding that virtue “is nothing but a conformity to reason” and that there are “eternal fitnesses and unfitnesses of things, which are the same to every rational being that considers them” are those who “concur in the opinion, that morality, like truth, is discern’d merely by ideas.” Hume concludes, on the contrary, that as morals have an influence on actions, “they cannot be deriv’d from reason.” “Morals excite passions, and produce or prevent actions. Reason of itself is utterly impotent in this particular. The rules of morality, therefore, are not conclusions of our reason.”[14]

In the remainder of the Treatise, Hume attempts to explain how the difference between vice and virtue is to be traced to “some impression or sentiment they occasion.” He endeavors to define this “moral sense” as a “peculiar kind” of pleasure and pain, felt “only when a character is considered in general, without reference to our particular interest”[15] (though it may be the generalized result of particular, self-interested experiences in the past). In addition, he works to offer an account of the genesis of these sentiments, both from natural and artificial origins. Justice, for example, is of artificial origin, coming to be valued by men due to its learned utility, it being “requisite to the public interest, and to that of every individual.”[16] The sense of sympathy for others, on the other hand, is natural, insofar as “the minds of all men are similar in their feelings and operations.”[17] In the Enquiry, Hume discusses this category in great detail, noting those qualities which are “immediately agreeable to ourselves” and those “immediately agreeable to others.”[18] These qualities, as “agreeable” ones, differing from those perceived as “useful,” evoke immediately the peculiar kind of sentiments of easiness or satisfaction which eventually come to be designated “virtue and vice.” It is in this way that Hume attempts to offer an explanation, if not justification, for the presence of “standard intuitions” spoken of in the introduction to this paper.

To sum up, reasoning is involved with the discernment of right and wrong in only a mediate or indirect way.[19] Reason, as “the discovery of truth or falsehood,” has to do solely with “an agreement or disagreement either to the real relations of ideas, or to real existence and matters of fact;” these are “original facts and realities, complete in themselves, and implying no reference to other passions, volitions or actions.”[20] As applied to ethics, reason has but two functions. It (1) sometimes “excites a passion by informing us of the existence of something which is a proper object of it,” and (2) “it discovers the connection of causes and effects so as to afford us means of exerting any passion.”[21] Reason and judgment may be “the mediate cause of an action, by prompting or by directing a passion.”[22] Outside of these two roles, however, reason is “wholly inactive, and can never be the source of so active a principle as conscience, or a sense of morals.”[23]

Before moving on, we note that a major difference between Hume’s account, and that of Stevenson, is that Hume attempts to offer a rational explanation for the kinds of sentiments humans experience. Although ethical norms are founded on the basis of sentiment and not reasons, a “reason” can be given for the occurrence of the sentiments, based on the “usefulness or agreeableness” of their objects. These “meta-reasons” for ethical norms are employed when Hume argues against the “sensible knave” who would violate the conventional morality for his own profit. Stevenson, emphasizing individual differences in personality and upbringing, does not attempt to offer this kind of metatheoretical rationale.

A Cognitive Psychological Approach

Lawrence Kohlberg, an educator, psychologist and philosopher from Harvard University, has been particularly concerned to evolve a successful method of moral education capable of inducing moral character growth in individuals regardless of their present state of moral development. In order to accomplish this, he has constructed a developmental model and corresponding measuring instruments by which it is possible to determine an individual’s stage of development. Through strongly cognitive methods, Kohlberg seeks to bring subjects to a conscious awareness of how far they have advanced, and hopefully to further progress in their style of moral reasoning. Kohlberg’s method offers the hope of precision and controlled testing of moral education techniques; as a result, he has received much attention in the literature and has attracted an enthusiastic band of disciples. However, he has also not been without his critics. The following is his essential method, as it has evolved in the past twenty years.[24]

Kohlberg’s approach is generally described as a “cognitive-developmental” approach.[25] This is an excellent description, capturing the central motifs of his thought. On the one hand, it is a cognitive approach. Kohlberg is centrally committed to the importance of cognition (i.e., of thinking and reasoning processes) in moral decision-making. This does not entail that he accepts the classical notion of the “rational man”—i.e., the view that humans, like virtual computers, are eminently rational beings, scarcely swayed by feelings, motivations or other baser sorts of impulses. A century of psychology and generations of human experience have served to sufficiently dispel that notion. Yet Kohlberg does not swing to the other extreme with the noncognitivists (cf. supra). Reasoning is regarded by Kohlberg as an important, if not all-encompassing, determinant of human action. This renders the study of the modes of cognitive processing (in this case, about ethical matters) highly significant. This leads to the second primary motif of Kohlberg’s thought.

Influenced greatly by the developmental theory and research of Jean Piaget,[26] Kohlberg maintains that individuals advance through predictable stages of development in their ability to think and reason. Thus, certain essential cognitive capacities (e.g., the ability to make fine or subtle distinctions) are less present or even absent in the young child, but gradually come into play as the child develops both mentally and socially. Some, who suffer genetic impairments in learning abilities, may never develop these skills. Generalizing from these observations, Kohlberg holds that if one’s cognitive skills develop over time according to a fairly regular sequence of growth, then one’s moral reasoning abilities must likewise develop according to that same sequence. It becomes possible, then, to postulate stages of moral reasoning development and, presuming (as Kohlberg does) that moral reasoning plays a significant role in determining moral behavior, one can suggest general stages of moral development, as based on the stage at which one reasons. This completes the essential theoretical background of Kohlberg’s approach; it remains to note the techniques by which he measures and identifies these “stages,” and some of the proposed strategies for enhancing moral development which he has sought to test in a variety of moral education settings.

Identification of the stages of moral reasoning is made possible by the assumed link between styles of moral thinking and moral behavior outcomes. Kohlberg begins by searching out a short list of uncontroversially praiseworthy moral traits such as helping, sharing and resisting the temptation to cheat.[27] His next step is both a conceptual and empirical one. In terms of conceptual analysis, he considers approaches to thinking about ethics most likely to motivate one to maximize these morally praiseworthy behaviors in one’s own conduct patterns. In line with his own philosophical preferences, and Piaget’s cognitive analysis, he chooses theoretical reasoning approaches emphasizing justice, fairness and individual autonomy. Conjointly, through an empirical interview process, Kohlberg identifies regularities in the thinking patterns of those not so accustomed to maximizing these behaviors (e.g., juvenile delinquents, hardened criminals) and those who often exemplify them (Kohlberg’s enlightened colleagues? Those in the “helping professions”?). Collation of the results of these two approaches yields, for Kohlberg, a set of six stages, divided into three basic levels.[28] This schema can now be submitted to further testing, particularly for predictive accuracy (e.g., can one identify the hardened criminal by blind exposure to a sample of his moral reasoning or, vice versa, can one predict how a self-sacrificing missionary doctor will answer questions on a moral reasoning interview?).

Finally, assuming that Kohlberg’s six-stage schema has received a high degree of experimental confirmation (as claimed by many adherents, and contested by the critics),[29] it can now be employed as a measuring device to determine when instances of moral development have taken place. A variety of techniques can thus be tested against the Kohlberg scale to determine their efficacy in producing an increased facility in moral reasoning.

Two particular techniques employed quite often are those of the moral dilemma and moral role-playing (the two are often combined in one exercise). An individual or entire group is challenged to take positions in a difficult moral dilemma; their manner of reasoning about the problem is then analyzed, and they are invited to assess their development according to the Kohlberg scale. This is often referred to as a form of “values clarification,” inasmuch as the emphasis is not so much upon immediate change as upon gaining awareness of how one thinks at a particular time.

A Cognitive Philosophical Approach

Another strongly cognitive model is that of Immanuel Kant. In The Groundwork of the Metaphysics of Morals, Kant argues that reason, and not other causal factors, is the only true motivating force behind ethical decision-making. This is not true of decision-making in other action-oriented disciplines; hence this is often referred to as an argument for the “autonomy of ethics.” Kant argues that actions which have truly moral connotations are those which are motivated by duty, a uniquely rational form of practical necessitation.

The practical necessity of acting on this principle—that is, duty— is in no way based on feelings, impulses, and inclinations, but only on the relation of rational beings to one another, a relation in which the will of a rational being must always be regarded as making universal law, because otherwise he could not be conceived as an end in himself.[30]

Man, viewed as a rational being, is an end in himself, and thus “the maker of universal law;”[31] Kant maintains this in the context of a complex argument interconnecting the notions of man’s rationality, his freedom, and dignity. This is summed up well in the concept of autonomy, which implies both freedom and law-making, law-making then implying rationality. Man, as a highly dignified and rational creature, regulates himself through the operation of his own reason; it is reason (i.e., himself qua reasoner) and nothing else that binds him when he acts according to duty.

Such actions too need no recommendation from any subjective disposition or taste in order to meet with favour and approval; they need no immediate propensity or feeling for themselves; they exhibit the will which performs them as an object of immediate reverence; nor is anything other than reason required to impose them upon the will, nor to coax them from the will—which last would anyhow be a contradiction in the case of duties.[32]

Kant thus revolts against conceptions of ethical reasoning which ascribe significant causal roles to non-cognitive feelings, sentiments or attitudes; these are in essential contradiction to Kant’s view of the dignity and autonomy of man.

It follows equally that this dignity (or prerogative) of his above all mere things of nature carries with it the necessity of always choosing his maxims from the point of view of himself—and also of every other rational being—as a maker of law (and this is why they are called persons).[33]

In order to construct a corresponding method of ethical reasoning, Kant must find some way in which reason alone can function as a determinant or motivating force of action.[34] To do so, he constructs a theory of ethical reasoning founded on “conformity to universal law as such.” This consists of, in essence, a test of consistency by which it is determined whether a suggested maxim could be followed through consistently (i.e., “rationally”) by a rational individual. In the various forms of the Categorical Imperative offered by Kant, this amounts to testing the “universalizability” of the maxim (which, after all, is supposed to be a “universal law”). The obligatory force of acceptable maxims comes from one’s having posited them for oneself as a well-functioning rational being (as well as from the equally necessary rational implication of one’s duty to all other rational beings as members of a kingdom of ends). The motivating force is reason, but reason in the form of a universal law posited by oneself. Hence, ultimately, the motivation for obedience to moral standards consists of one’s act of volition—but this, as opposed to Hume,[35] is viewed as an act rational in its very nature (it is a “rational being” that acts).

To conclude, Kant thus develops a highly cognitive view of ethics and ethical reasoning, based primarily on his image of man. While he acknowledges the role of other, non-cognitive factors on the human person (man viewed from the “point of view of the sensible world”),[36] he also maintains man’s capacity to transcend these influences, making spontaneous rational decisions which are self-binding. Moral reasoning consists of demonstrating, via the Categorical Imperative, that a proposed maxim is genuinely of this spontaneous and rational character, and not an expression of baser, selfish instincts.

A Modified Cognitive Approach

Like Kant, Hobbes constructs his view of moral reasoning, and his political philosophy in general, around a view of human nature and human functioning. Two factors especially stand out. The first is Hobbes’ mechanistic psychological egoism. Life “is but a motion of Limbs, the beginning whereof is in some principal part within;”[37] this principal part, or “spring,” is apparently wound up for the purpose of pursuing self-interest. The will, for Hobbes, is but that which “in deliberation (is) the last appetite, or aversion, immediately adhering to the action, or to the omission thereof.”[38]

Thus, “the voluntary actions, and inclinations of all men, tend, not only to the procuring, but also to the assuring of a contented life … of the voluntary acts of every man, the object is some good to himself.”[39] The result of this is “a general inclination of all mankind, a perpetual and restless desire of power after power, that ceaseth only in death.”[40] From this ground-level analysis of human motivation, Hobbes concludes the necessity of some absolute sovereign power, capable of enforcing peace between men otherwise equally powerful and equally self-interested, and hence in a constant state of war.

A second important facet of Hobbes’ analysis of human nature is in regard to the ethical notions and values of mankind. In that Hobbes is a psychological egoist, he interprets man’s moral notions and beliefs in terms of what men value as a function of their personal desires and goals. “But whatsoever is the object of any man’s appetite or desire, that is it which he for his part calleth good; and the object of his hate and aversion, evil.”[41] “For these words of good and evil, and contemptible, are ever used with relation to the person that useth them: there being nothing simply and absolutely so; nor any common rule of good and evil, to be taken from the nature of the objects themselves.”[42] A priori concepts of moral good or rights are denied, as well, by Hobbes’ radically empiricist and nominalist epistemology.[43] “Virtue,” then, can only be “that which is valued,” and “consisteth in comparison;”[44] ethics will be constructed upon what is of value to each individual, construed as being concerned only with self-interest. Aside from the value one has to himself as a human, in general “the WORTH of a man is, as of all other things, his price.” It is “not absolute … but a thing dependent on the need and judgment of another.” Human “dignity” is but a function of the “public worth of a man, which is the value set on him by the commonwealth.”[45]

This approach is often referred to as an “economic man” view; it was this view which was often assumed by early economists for theoretical and predictive purposes. David Gauthier sums up the economic conception (uniquely suited for describing marketplace behavior) in terms of three “dogmas.”[46] First, value is conceived of as utility, “a measure of subjective, individual preference.” Secondly, rationality is construed as maximization of utility: to be rational is to decide to act in those ways which offer the highest expected value to oneself. Thirdly, individual interests are regarded as “non-tuistic:” individuals tend to take primary interest in their own needs and wants, deriving utility therefrom.[47] On this model, rationality plays an important, but primarily instrumental or “means-end” role. Behavioral decision theory, from a psychological standpoint, has studied in considerable detail the manner in which individuals calculate costs and benefits in making utility-maximizing decisions.[48] Moral reasoning, on this model, is a form of calculation (a “Reckoning of the consequences”—Hobbes) of the best means to essentially selfish ends. It remains for moral theorists of this stripe to demonstrate that traditionally moral conduct is a good means to selfish ends—a task which has proven immensely difficult over time.
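To make this expected-value conception concrete, the following is a minimal sketch, in Python, of the kind of calculation the “economic man” model ascribes to the agent. It is an illustration of the general idea only, not a formalization drawn from Hobbes or Gauthier; the actions, probabilities and utility figures are invented purely for the example.

    # A minimal sketch of the "economic man" model of practical reasoning:
    # rationality construed as maximizing one's own expected utility.
    # The actions, probabilities, and utility numbers are invented for illustration.

    actions = {
        # each action maps to a list of (probability, utility-to-self) outcomes
        "keep the agreement":  [(0.9, 5), (0.1, -2)],
        "break the agreement": [(0.5, 8), (0.5, -10)],
    }

    def expected_utility(outcomes):
        """Probability-weighted sum of the utilities the agent assigns to outcomes."""
        return sum(p * u for p, u in outcomes)

    # On this model the "rational" choice is simply the highest-scoring action.
    for name, outcomes in actions.items():
        print(f"{name}: expected utility = {expected_utility(outcomes):+.2f}")

    best = max(actions, key=lambda a: expected_utility(actions[a]))
    print("economic-man choice:", best)

On these invented numbers the reckoning happens to favor keeping the agreement, which illustrates the burden noted above: the moral theorist of this stripe must show that conventionally moral conduct reliably comes out on top of such calculations.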

* * *

In this section we have considered six models of ethical reasoning, ranging on the continuum from noncognitive through highly cognitive. It might be noted that we did not consider a psychological model fitting into the “modified” category. A model conveniently classifiable according to this designation will be treated in the next section as we discuss a multidimensional attitudinal model of ethical reasoning. Before doing so, however, we note some of the strengths and weaknesses of the models just surveyed.

Noncognitive models, first of all, have their greatest strength in acknowledging the complexity of human behavior. It is simply impossible to ascribe all human conduct to disciplined rational decision-making. Not only do individuals often act inconsistently with their stated beliefs, but the very beliefs themselves are often held seemingly with no reason at all. Rather, maintenance of the beliefs may satisfy some basic internal personal need or drive, or constitute a form of defense against a perceived threat in the environment. Noncognitive models fail, however, in not paying proper respect to the extent to which humans are rational. As some psychologists note, the fact that humans find it necessary to “rationalize” is evidence that they want to at least appear rational; human reason-giving behavior, which is very common, is an important datum which cannot be overlooked.[49] Also, considerable evidence exists that individuals often reject persuasive communications which are regarded as failing to offer valid arguments, suggesting that not all values and attitudes are held as a result of solely noncognitive causes. This does not entail the “rational man” who is moved by nothing but reason, but it does imply that sizable chunks of human experience are open to rational assessment.

These weaknesses are more than made up for by cognitive theories. Those espousing a cognitive approach give reason a very strong role in moral decision-making and conduct. They do not hesitate to seek out a correct mode of moral reasoning in the hope that the kind of stalemates noncognitivists worry about can be avoided. Cognitive developmentalists account for the frequent failures of supposedly rational people to produce moral outcomes by noting, as one factor, the ontogenic stages of moral thinking. That is, moral reasoning may be a universal phenomenon, but some do it better than others. The weakness of highly cognitive approaches, however, is that they often attempt to explain too much in terms of stages of cognitive development. It cannot be denied that noncognitive causes often interfere with the processes of rational assessment, leading to outcomes quite out of step with one’s cognitive developmental stage.[50] For example, one’s ability to argue in a very mature way for the moral obligation to save a drowning man may not be able to overcome one’s hydrophobia, operating as a strong deterrent against action. Cognitive theories have tended to suffer a certain theoretical impoverishment in regard to their ability to explain how these noncognitive factors in personality and environment interact with normally functioning cognitive processes. One thinks, for example, of Kohlberg’s stages: can the ordered sequence he posits be explained totally by the development of cognitive skills? Experimental results militate strongly against this hypothesis, suggesting that other, essentially noncognitive factors also contribute to determining the level of advancement of one’s ability to morally reason.[51]

Turning to the two modified approaches treated in the survey, we note similar advantages and shortcomings. Hume’s essentially noncognitive approach is modified, as noted, in the sense that it attempts to offer “meta-reasons” for the occurrence of those noncognitive sentiments socially baptized as “moral.” These sentiments are shown to have survival value, and it becomes understandable why they come to have moral connotations. However, it is difficult to move from highly speculative and generalized stories about the genesis of sentiments to specific moral judgments, as required in controversial or problematic situations. Hume’s psychological characterization, though often appealing, is simply too vague; like the cognitive theories discussed above, it requires theoretical enrichment. We will attempt something of this in the next section.

A similar weakness flaws the “economic man” concept of moral reasoning. Without doubt, means-end reasoning is a common and vital aspect of human decision making; the many current applications of behavioral decision theory in business management and government attest to how many areas of human life depend upon this kind of thinking. However, it is an oversimplification to assume that all reasoning is of a means-end sort. In fact, humans act in response to a number of motivations, of which personal utility maximization is only one. Studies have shown, for example, that in order to reduce dissonance between beliefs, individuals will often act in ways which do not reflect maximal utility outcomes—even when they know that they could have gained those outcomes. Just as individuals do not always act to maximize their perceived utilities, so also it can be strongly disputed that individuals are non-tuistic. As Gauthier notes, it is unfortunate that “acting rationally” has come, in accord with the economic man model, to be equated with “doing what is in your self-interest” (whatever the effects may be on others). It is the “individualistic” bias so characteristic of Western liberal humanism which has evoked sharp criticism from Marxist theorists, who argue that there is no such thing as a “rational” disregard for one’s fellow man.[52]

II. Attitudes and Moral Reasoning

In order to shed additional light on the nature and scope of moral reasoning, we turn to one of the most important areas of research in the field of social psychology—that of attitude theory. The concept of “attitude” has been central to social psychology since it was emphasized in Thomas and Znaniecki’s seminal study, The Polish Peasant in Europe and America (1913).[53] The concept initially stood for a “physical positioning” of an object with respect to a background. German theorists at the turn of the century experimented with attitudes and psychophysical “sets” or states of readiness involving muscular preparations for action. Later, the term came to have a more subjective connotation (e.g., in Thomas and Znaniecki’s study), having to do with a subject’s mental positioning of an object in regard to himself, his values and world, especially in the sense that it prepares him for action in regard to the object, event or person. Gordon Allport, after noting the difficulty of defining the concept (and considering over 100 possibilities!), arrives at the following definition:

An attitude is a mental and neural state of readiness, organized through experience, exerting a directive or dynamic influence upon the individual’s response to all objects and situations with which it is related.[54]

This definition has come to be widely accepted.

One reason for these many interpretations of attitude is that it is a purposely multidimensional construct. Katz and Stotland, in an important functional analysis of attitudes, note this peculiar but positive attribute:

Efforts to deal with the real world show our need for a concept more flexible and more covert than habit, more specifically oriented to social objects than personality traits, less global than value systems, more directive than beliefs, and more ideational than motive pattern.[55]

The weakness of the concept of attitude turns out to be its greatest strength: it is a construct which serves the purpose of unifying several different kinds of phenomena occurring in the personality structure and social life under the heading of one theoretical variable.

The three “components” most commonly ascribed to attitudes are cognition (knowing), affect (feelings) and behavior (intentions, actions, or what is traditionally called “conation”).[56] This is a division of the personality which goes back at least to Plato, and has a rich and variegated history. Attitudes are postulated, on this model, as unified theoretical constructs which systematically integrate these three functions. Considerable attention to the processes which make for this integration has come from the “cognitive consistency” branch of attitudinal theory.[57] Cognitive consistency theorists have noted the tendency of individuals to seek consistency (i.e., to reduce dissonances or incongruities) between their beliefs, feelings and actions in regard to an object or set of related objects, events or persons. While logicians have for centuries been concerned with the preservation of consistency between beliefs, cognitive consistency theorists, as social psychologists, extend this interest to the study of the relationship between feelings, behavior and beliefs, especially as they are manifested in social contexts. It is this school which has done the most to popularize the multidimensional concept of attitude. Thus “attitude” becomes a convenient theoretical arena within which to seek to specify the influences and processes involved in the interrelating of the three components. In what follows we will consider a specific attitudinal model of these processes advocated by cognitive consistency theorist Milton J. Rosenberg. We will suggest that this has considerable value for understanding the phenomenon of moral reasoning.

An Attitudinal Model

Rosenberg describes his model as an “affective-cognitive consistency theory.”[58] This is because it concentrates primarily on the relationship between those higher order cognitive processes which constitute belief systems, and the influence of the individual’s affective coloring of his world. Rosenberg also refers to his theory as a system of “symbolic psycho-logic.” Psycho-logic involves the rules of inference commonly employed by those processing affectively-loaded subject matter.[59] These rules, as Rosenberg notes, might be “mortifying to the logician,” but as interpreted according to the cognitive consistency model, turn out to have a logicality peculiarly their own. Rosenberg’s approach has received some rigorous testing (e.g., in his well-known collaboration with Robert Abelson in the “Fenwick studies”[60] on interpersonal balance), and has received continuing modification and refinement since its initial formulation in the late 1950s. The theory is best described by means of a metaphor Rosenberg employs in a summarizing article.[61]

First, one begins by picturing a finite but vast space called the “attitudinal cognitorium” or “attitude universe.” Within this space are located hundreds or probably thousands of object-concepts, each being a verbal or symbolic representation of a person, institution, policy, place, event, value standard (or ideal), or any other distinct “thing” which, when psychologically encountered, elicits some fairly stable magnitude of either positive or negative evaluative affect. Rosenberg suggests that each of these object-concepts might be represented by a small metal disk. Between these many disks, tying them together, run strings which are thin or thick, red or green. Thick strings suggest strong ties between two object-concepts—strong, that is, as perceived by the self, not necessarily as they may be in reality. Thin strings connote more accidental or superficial ties, most likely having little to do with the internal constitution of the two interconnected objects. Some disks are not tied together at all. Red strings stand for negative or disjunctive relations, of the sort that might be conveyed by the terms ‘opposes,’ ‘prevents,’ ‘dislikes,’ ‘stays away from,’ etc. Rosenberg seems to have in mind here a semi-conceptual and semi-affective relationship. Previous theorists (e.g., Fritz Heider) distinguished between “sentiment relations,” linked by common feelings, and “unit relations” which involve factual or conceptual connections perceived to exist between two objects. The latter are presumed to be affectively neutral, while the former are affectively loaded. Rosenberg treats these as one, so that red strings in general appear to indicate one’s inability (or unwillingness) to think of two things as being together, for either cognitive or affective reasons. They are psychologically in tension. Green strings, on the other hand, indicate a positive or “conjunctive” relationship, as conveyed by such terms as ‘supports,’ ‘facilitates,’ ‘likes,’ ‘helps’ and ‘is part of.’

In an individual’s attitudinal universe, then, any given disk is tied by red strings to some objects and green strings to others. Rosenberg gives as an example such objects as ‘air pollution,’ ‘Chicago Blackhawks,’ ‘bituminous coal,’ ‘the romantic tradition,’ ‘Gustav Mahler,’ ‘Senator Fulbright’ and ‘my son.’ No string appears to tie ‘Gustav Mahler’ to ‘Senator Fulbright’ or ‘the Chicago Blackhawks.’ However, the disk ‘Fulbright’ is connected by a strong red thread to ‘Vietnam War’ and a thinner red string to ‘air pollution.’ Between ‘air pollution’ (an affectively negative object) and ‘bituminous coal’ exists a strong green thread, especially where the individual’s experience has been in a coal-burning city which is highly polluted. Similar connections can be imagined for all of the disks mentioned. Imagining the whole array of thousands of disks complexly interconnected, one would expect to see like-signed objects most often connected by green strings, while unlike-signed objects are most often connected by red strings. This would be the case to the extent that the individual is consistent in his attitudes. Disks would be connected directly to only a few other disks (disks often being arrayed in the form of overlapping clusters), while indirectly to hundreds of others.
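Rosenberg’s disks and strings can be read, for illustration, as a signed graph: each disk is a node carrying a positive or negative evaluative sign, and each string is an edge carrying a sign (green or red) and a strength (thick or thin). The Python sketch below is only one possible rendering of the metaphor, not Rosenberg’s own formalism; the object-concepts and weights merely echo his example, and the check encodes the expectation just stated, namely that a consistent individual ties like-signed objects with green strings and unlike-signed objects with red ones.

    # A sketch reading Rosenberg's "attitude universe" as a signed graph.
    # Disk signs are the evaluative affect (+1 positive, -1 negative) attached
    # to each object-concept; string signs are +1 (green, conjunctive) or
    # -1 (red, disjunctive), with a weight for thick versus thin strings.
    # The objects and numbers are illustrative, loosely echoing Rosenberg's example.

    disk_sign = {
        "Senator Fulbright": +1,
        "Vietnam War": -1,
        "air pollution": -1,
        "bituminous coal": -1,
    }

    # (object A, object B, string sign, strength)
    strings = [
        ("Senator Fulbright", "Vietnam War", -1, 0.9),    # thick red string
        ("Senator Fulbright", "air pollution", -1, 0.3),  # thin red string
        ("air pollution", "bituminous coal", +1, 0.9),    # thick green string
    ]

    def consistent(a, b, string_sign):
        """Affective-cognitive consistency: like-signed disks should be joined
        by green strings, unlike-signed disks by red strings."""
        return disk_sign[a] * disk_sign[b] == string_sign

    for a, b, sign, strength in strings:
        status = "consistent" if consistent(a, b, sign) else "in tension"
        print(f"{a} -- {b}: {status} (strength {strength})")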

This picture enables Rosenberg to offer a metaphorical characterization of an “attitude.” Picturing the entire interconnected system of disks arrayed upon a vast floor, we can imagine the effect of lifting one disk a few feet from the surface. The result would be the lifting up of other disks—those directly connected, as well as a periphery of more intermediately connected items. Those disks which are lifted from the surface constitute an attitude, where the center disk (the one used to lift up the cluster) is the attitude object. Thus attitudes are regarded, metaphorically, as “radial structures” uniting an object to other object-concepts with a high degree of affective-cognitive consistency or at least interrelatedness (as connoted by the presence of the red and green strings). Lifting up one object disk will bring to one’s attention other disks towards which one will feel either positive or negative affect, depending upon their red-stringed or green-stringed relations to the attitude object. We note that these relations are those conceived of as existing by the subject. They may or may not correspond to actual relations in the world, or conform to the standards of logical consistency. It is also important to observe that the disks occur in often highly organized clusters. Larger clusters, having broad organizational implications for the entire attitude universe, may be classified in two ways:

1. World-views or belief-systems:

(a) threads primarily designate perceived conceptual or factual relations 

(b) the affective loading of the disks is not the preeminent factor in the threading process, though it may have some import 

(c) the attitude cluster is lifted up for analysis purposes, rather than for affectively evaluating an object or action

2. Value systems:

(a) threads often (though not always) designate affective connections 

(b) affective loading of disks is of great importance 

(c) lifting out of the attitude cluster is often for the purpose of deciding about the affect sign of an object or action[62]

These two classifications do not constitute a strict dichotomy. Affect and cognition, beliefs and value systems often interact (hence the ambiguity as to the affective-cognitive nature of the red and green threads). Studies on ethnocentrism, for example, have demonstrated that in many cases one’s beliefs about another ethnic group (conceptual red and green threads tying the concept of the ethnic group to other factors, such as stereotypic qualities) play an important role in determining one’s attitude towards that group.

This is in marked contrast to both the cognitivist and noncognitivist stances towards human reasoning and its functioning.

Rosenberg develops the model we have described in the interest of understanding the psychodynamics involved in an individual’s quest for consistency on various kinds of salient issues.[63] We are concerned, however, with attitudinal thinking directed at moral questions, and seek a characterization of such thinking as it involves both belief and value systems. In the next two sections, we will attempt to bring the model into clearer focus, examining (1) particular types of attitudes as characterized by their functional role in the life of the individual, and (2) cognitive styles, expressing particular configurations or “threads and disks” in an individual’s attitude universe. Though attitudes have been examined from many conceptual and experimental angles, we suggest that these two aspects will be especially helpful in applying our model to moral reasoning.

III. Determination, Functions and Types of Attitudes

Most contemporary personality theorists agree that there is no one unitary drive or homogeneous activation state which accounts for all facets of human thinking and behavior.[64] Individuals are driven and motivated by a number of needs, appetites, wishes, intentions and goals varying in intensity, continuity, control by the individual, and openness to conscious awareness and cognitive processing. William J. McGuire[65] lists the following factors which have come to be widely accepted as significant determinants of attitudes: genetic factors (e.g., innate personality characteristics, IQ and, if the sociobiologists are right, genetically inbred instincts such as altruism), physiological factors (sex, age, physical illnesses, drug-induced effects), direct experience with stimulus objects (single traumatic incidents or repeated observations), total institutions (socializing environments—in general, group influences—tending to impart internalizable programs to the individual) and social communications (especially those ostensibly offering cognitive support for a position). In acknowledging these many determinants, attitude researchers have traditionally sought to sidestep the “nature-nurture” debate; they suggest that attitudes are often a function of both acquired dispositions and “built-in” functional tendencies.

Attitudes, then, are not “windowless monads,” nor even one-window affairs. This openness to multiple influences strongly suggests that attitudes may serve a number of functions in the individual. Katz and Stotland, outlining a general theory of attitudes, discuss three basic types of motive patterns which are instrumental to the satisfaction of many of the individual’s needs.[66] As motive patterns are important in functionally shaping the structure and direction of the attitudes which they generate, in our discussion we will equate motive patterns with the attitude types which result from them. The three motive patterns are (1) proximal attitudes, (2) object-instrumental attitudes, and (3) ego-instrumental or ego-defensive attitudes.

Proximal attitudes are attitudes towards objects regarded as having intrinsic value (i.e., which satisfy needs and wants directly). In this case, attitude objects are “consummatory with regard to psychological gratification.” Examples are foods found agreeable to the taste, or the sports car which gives a sense of power and control to the driver. The ability of such objects to satisfy needs determines their “functional value.” Katz and Stotland suggest that the intensity of affective evaluative qualities (our tendency to call it “good” or “bad”) in the object may vary with such factors as how readily or easily it is satisfied, and the tendency of one’s group to evaluate the object.

A second kind of motive pattern-satisfying attitude is the object-instrumental type. Such attitudes reflect the “lengthy and sometimes circuitous pathways” involved in satisfying a motive in a complex society characterized by scarcities.[67] In this case, the individual favorably evaluates attitude objects due to their perceived instrumental value in attaining his goals. Object-instrumental attitudes usually have a heavily cognitive character due to the need to justify the delay and frustration involved in indirectly consummating valued ends; also a certain cognitive “bolstering” is required to justify one means to the end over others.[68]

A third type of attitude is the ego-instrumental or ego-defensive type. This plays the role of helping an individual to maintain his conception of himself as a certain kind of person. Verbally expressing these attitudes indicates to others the kind of person one is. Whereas in the case of proximal attitudes, the object was gratifying, and for object-instrumental attitudes, the goal served this purpose, in this case ego-satisfactions provide the attitude with affective thrust. As Katz and Stotland note, two purposes are served by this kind of attitude.

Ego-defensive attitudes protect the ego but their expression also gives the individual direct satisfaction. The person who projects his own hostilities onto other people and then attacks these hostile people satisfies two purposes. Projecting his aggression protects his self-image from a recognition of undesirable qualities. Expressing the aggression gives cathartic release.[69]

McGuire, offering a similar functionally-defined list of attitude-types, distinguishes between the two functions of ego-defense and self-realization/expression.[70] He also adds another major function—that of forming attitudes as organizing devices for knowledge purposes.[71] This kind of attitude may involve no affective loading or motive satisfaction except that gained from the sheer enjoyment of investigative curiosity or the “love of wisdom.” We will refer to these as “cognitive-explorative attitudes.”

Attitude Determinants and Moral Reasoning

Interesting correlations between these attitude types and the theoretical approaches to moral reasoning discussed in Section I spring immediately to mind. Proximal attitudes, for example, sound strikingly similar to Hume’s “sentiments found immediately agreeable.” Object-instrumental attitudes correlate with economic conceptions of reasoning, wherein good reasoning is equated with the evaluation of choices yielding maximal expected utility. Self-expressive attitudes might be compared to ethical intuitionist approaches (not surveyed in Section I), which emphasize self-defining moral properties which are phenomenologically identified. Also, Kohlberg’s approach, with its emphasis upon “post-conventional” autonomy, seems to imply a high degree of self-realization, hopefully leading to the increased ability to approach others empathically. Finally, cognitive-explorative attitudes resemble highly cognitive approaches, in which ethical norms and duties tend to be transcendentalized, abstracted or eternalized. Are these just playful comparisons, or do they indicate an important relationship?

We suggest that these correlations are of high, but not grandiose, significance. On the one hand, they tend to affirm the psychological foundations which support a number of the theoretical approaches surveyed in Section I. Each theory appears to reflect something of the motivational patterns present in most individuals. However, inasmuch as attitudes may serve a multitude of functions, it is unclear why those representing any one motivational need pattern should be preferred to all of the others. Object-instrumental attitudes, as expressed in utility-oriented means-end thinking, accomplish important adaptive goals in human functioning. Self-expressive attitudes, though, are also vital, facilitating the cathartic release of tensions and encouraging growth in self-understanding and expression. Why should one type of reasoning be preferred over the other? If anything, it is the situation which often forces us to make such normative distinctions. It would be poor timing to seek to satisfy the love of wisdom when faced with an adaptive crisis requiring an accurate and quick cost-benefit analysis of outcomes.

This has interesting implications for our questions about the degree of cognitivity of ethical reasoning. If ethical reasoning is defined according to any one type of attitudinal thinking, then we can expect that humans will not turn out to be exhaustively rational. This is because, as we have seen, no type of attitudinal thinking captures the entirety of the human motivational picture. The question as to the extent to which individuals can increase in their rational capacities would be answered by two considerations. First, as Kohlberg points out, this is a question of developmental growth in cognitive skills. But secondly, it would involve shifting an individual’s patterns of need satisfaction, so that he might prefer one type of attitudinal thinking to another. A person who is highly proximal in the way he maintains attitudes is one primarily motivated by immediate gratifications.[72] Moral reasoning development for such an individual might consist of introducing him to the more varied and lasting kinds of gratification which result from object-instrumental, self-expressive or explorative attitudinal thinking. Such has been the goal of educators since the dawn of time. To some degree this can be accomplished through appeal to cognitive considerations (“if you keep doing this, here is where it will lead you!”). On the other hand, heavy investment in proximal need satisfaction may indicate a psychological impairment of other functions due to low esteem and a pattern of frustrated object-instrumental attempts.[73] Addressing these problems would appear to be essential to the further development of applied reasoning capacities in such individuals. Various therapeutic methodologies—some highly cognitive—are possible avenues at this point.

How cognitive should moral decision-making be? This question has generally been asked in reaction to those seeking to maximize either object-instrumental or cognitive-explorative types of thinking. Object-instrumentalists are hence regarded as being “beady-eyed” or “cold and unsympathizing,” while cognitive-explorationists are said to reside in distant ivory towers. The problem here seems to be one of determining when it is proper to act out the more cognitive attitude functions. Means-end thinking seems poorly suited if one’s task is to discern within one’s heart whether a motive is a selfish one. A self-expressive attitude would better serve these purposes, involving as it does an intuitionistic type of reasoning. An important philosophical project (which we will not attempt in this paper) would be to determine the conditions specifying the applicability of each moral reasoning approach. When is it right to be a means-end thinker? When is it right to abstract moral questions from specific situations? Recognizing that moral reasoning approaches reflect a variety of attitudinal functions (all of which are sometimes beneficial ones) casts questions about the cognitivity of moral decision-making in an entirely different light.

One final application of our study of attitude types has to do with moral reasoning and ego-defensive attitudes. Among the five kinds of attitude types, ego-defense is most often associated with insufficient and self-defeating kinds of behavior. Though some degree of defensiveness is necessary for any person on this side of heaven’s gates, heavy indulgence in ego-defense is not seen as a healthy psychological orientation. It is “irrational” in the sense that it does not serve the long-range purposes of the individual. To use Fromm’s terminology, it is a “nonproductive orientation.”[74] Here a normative concept of “good functioning” crosses paths with the psychological description of a type of attitude. Were we able to identify forms of moral reasoning with this type of attitude, we might have the privilege of making normative distinctions based on the study of attitudes.

But this leads us to questions of attitudinal structure. Ego-defensive reasoning can of course be identified by its self-justificatory and other-condemning character. But moral reasoning expressive of ego-defensive attitudes appears to cross theoretical lines: one may be a defensive utilitarian or a defensive deontologist. Only a more fine-grained analysis will reveal the difference between overly defensive versus relatively undefensive ethical reasoning. To this we turn.

IV. Cognitive Styles and Moral Reasoning

Another aspect of attitude research may bear more fruit in our attempt to make normative distinctions. One of the motivations behind the development of twentieth century social psychology has been to understand the thinking styles of those who are regarded (via an a priori normative judgment) as causing problems for society. Two particular targets of this investigation are criminals and bigots. Many interesting psychological portraits of “the criminal mind” exist; unfortunately, many of these conflict, and the success rate of treatment in connection with these diagnoses has been dismally low. On the other hand, psychological characterizations of the “cognitive styles” of racial and ethnic bigots have achieved relatively wide acceptance and agreement.[75] In addition, some success has been reported in reeducation and attitude change of these personality types. Thus, we will concentrate on this latter type, as illustrative of cognitive styles which are regarded as being both normatively unacceptable and (though not in a strictly pathological sense) psychologically unhealthy and undesirable.

Cognitive styles have been described by means of a number of hypothesized dimensions. One of the most famous is Milton Rokeach’s “open and closed mind.” The following statement is representative of Rokeach’s findings:

Some major findings that come out of such studies are that persons who are high in ethnic prejudice and/or authoritarianism, as compared with persons who are low, are more rigid in their problem-solving behavior, more concrete in their thinking, and more narrow in their grasp of a particular subject; they also have a greater tendency to premature closure in their perceptual processes and to distortions in memory, and a greater tendency to be intolerant of ambiguity.[76]

Other such structural characteristics expressing themselves in cognitive styles are the “authoritarian personality,” the other-directed conformist, the undifferentiated or field-dependent thinker, and those with “low latitudes of acceptance,” high measures of concreteness, minimal category width, and a relative inability to distinguish source from concept. Whether or not these may someday be reduced to some one essential variable, they all represent structural properties affecting reasoning styles which have been found to correlate with undesirable or normatively unacceptable personality types. It will be noted that this is similar to Kohlberg’s approach, except that it descends one theoretical level deeper into attitudinal processes in order to explain why an individual reasons as he does. Moral reasoning approaches which are unsatisfactory are, as we have noted, characterized by peculiar cognitive organizations.

Employing our Section II model, we might describe them as consisting of a number of affectively-loaded clusters. The clusters tend to be relatively limited in size (low category width), have extremely well-defined boundaries (concreteness), and are either quite detached from each other or separated by red threads.[77] Each cluster is characterized by a high degree of internal consistency (low differentiation). This consistency, however, is most often achieved by coupling like-signed disks. Where threads run between disks, they are usually green ones, and they stand for affective, as opposed to factual, relations. Most of the disks are attached to the self-concept by such threads. This suggests that attitude objects are evaluated based upon their perceived consistency with the self-image. Degree of ego-involvement would determine the intensity of affective reaction to the object. Where the self-image needs support, attitude objects perceived as agreeing with and likable to the individual are strongly knit to the self by positive relations. “Enemies” to the self or to perceived friends and allies are distanced by red threads in accord with the strategies of ego defense. Consistency is maintained primarily by like-signedness, rather than by means of logical-conceptual relations. As a result, many logical inconsistencies may exist, hidden by the seeming consistency of affective ties. Finally, such systems are characterized by swift and radical change. Should an important element in a cluster be perceived as changing sign, an entire cluster might immediately suffer disgrace, or be raised to a position of honor.
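For readers who find a schematic helpful, this description can be read as a small signed graph. The sketch below is ours, not the author’s notation: the names Disk and Thread, the mapping of green and red threads onto positive and negative signs, and the separate “affective” versus “factual” grounding of a tie are interpretive assumptions, and the example cluster is invented.

    # Illustrative reading (ours, not the author's) of the disk-and-thread model
    # as a signed graph; names and the green/red-to-sign mapping are assumptions.

    from dataclasses import dataclass

    @dataclass
    class Disk:
        name: str
        sign: int                  # +1 or -1: affective loading of the attitude object

    @dataclass
    class Thread:
        a: str
        b: str
        sign: int                  # +1 (a "green," associative tie) or -1 (a "red," dissociative tie)
        basis: str = "affective"   # "affective" versus "factual" grounding of the relation

    def cluster_is_like_signed(disks, threads):
        """Consistency as described above: positive threads join like-signed disks,
        negative threads join unlike-signed disks."""
        return all(disks[t.a].sign * disks[t.b].sign == t.sign for t in threads)

    def affective_share(threads):
        """Proportion of ties that are affective rather than factual; a high share
        marks the 'closed' style discussed in the text."""
        return sum(t.basis == "affective" for t in threads) / len(threads) if threads else 0.0

    # A small, ego-involved cluster: the self positively tied to "my group,"
    # negatively tied to an "outsider."
    disks = {
        "self": Disk("self", +1),
        "my group": Disk("my group", +1),
        "outsider": Disk("outsider", -1),
    }
    threads = [
        Thread("self", "my group", +1, "affective"),
        Thread("self", "outsider", -1, "affective"),
    ]

    print(cluster_is_like_signed(disks, threads))   # True: affectively consistent
    print(affective_share(threads))                 # 1.0: no factual ties at all

On this toy reading, the cluster is perfectly “consistent,” yet its consistency rests entirely on like-signed affect rather than on any factual relation among the objects.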

This model corresponds nicely to some of the results of cognitive developmental theory. Developmentalists are concerned to increase such factors as the individual’s willingness to seek his own standards (autonomy), the ability to appreciate subtle differences in situations, consistency, the ability to think consequentially, and the inclination and capacity to empathize. All of these are affected by one’s attitudinal structure. Tightly knit clusters having extreme affective significance (due to heavy ego-involvement and defensiveness) discourage one from taking risks in one’s thinking, as this would be to “go out on one’s own” without the protection of one’s clustered attitude objects. Subtleties in general cannot be perceived in an attitude system which insists on placing every attitude object in strongly homogeneous positive or negative clusters. Consistency and consequential thinking are both affected by the preference for affective over conceptual and factual ties between objects. Those perceived as favorable will often be attached by green threads, even when the facts or logic indicate that red threads should be placed. Finally, empathic skills are limited by (a) the tendency to classify every object as simply positive or negative, and (b) the inability to make imaginative leaps (via “principles”) to unclustered attitude objects. Often, unfamiliar or alien objects are automatically classified as unfavorable unless direct experience of them as nonthreatening or favorable takes place.

How can such thinking be changed? In terms of our model, this might mean rearranging attitude clusters into less of a black-and-white pattern through (a) eliminating false relations between objects based on incorrect beliefs, (b) reducing the number of relations between objects established purely on the basis of like-signed affective loading, and (c) establishing new, differentiated clusters of attitudes through introducing new relationships between previously unconnected disks. Both (a) and (c) involve primarily cognitive readjustments, which might be done through classroom exercises and educational experiences. However, (b) is more difficult, inasmuch as the tendency to build affectively similar clusters is often an expression of ego-defense, unresolved conflicts, insecurity, poor self-image, and the like. Unless these functional determinants are treated, perhaps through counseling, the motivation to continue building such structures will wipe out any temporary readjustments.
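Continuing the hypothetical sketch above, the three readjustments might be rendered crudely as operations on the same kind of structure, with ties represented here as simple (object, object, sign, basis) tuples; the function names and the toy data are ours, and step (b) is deliberately blunt to show why it is the hardest move to sustain without addressing the underlying needs.

    # Hypothetical rendering of readjustments (a)-(c); ties are
    # (object_a, object_b, sign, basis) tuples, invented for exposition.

    def drop_false_relations(ties, discredited):
        """(a) Remove ties resting on incorrect beliefs; 'discredited' is a set of pairs."""
        return [t for t in ties
                if (t[0], t[1]) not in discredited and (t[1], t[0]) not in discredited]

    def prune_purely_affective_ties(ties):
        """(b) Drop ties grounded only in like-signed affect, keeping factually grounded ones."""
        return [t for t in ties if t[3] == "factual"]

    def add_differentiated_tie(ties, a, b, sign):
        """(c) Connect previously unrelated objects with a new, factually grounded tie."""
        return ties + [(a, b, sign, "factual")]

    ties = [("self", "my group", +1, "affective"),
            ("self", "outsider", -1, "affective"),
            ("my group", "outsider", -1, "affective")]

    ties = drop_false_relations(ties, {("my group", "outsider")})
    ties = prune_purely_affective_ties(ties)
    ties = add_differentiated_tie(ties, "self", "outsider", +1)
    print(ties)  # only the new, factually grounded tie survives in this toy case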

Summary

Our treatment of cognitive styles suggests another approach to distinguishing between methods of ethical reasoning. On the one hand, there is the theoretical approach (cf. Section I). We have suggested (Section III) that one’s preference for a theoretical approach may vary according to the function its correlative type of attitudinal thinking plays for him in terms of his motivations, needs, and situational requirements. This is regrettably simplistic, but at least illustrates the point that unqualified appeal to any one motive in human nature will not suffice to establish the correct ethical theory.

Our second approach is rooted in a concept of “rationality” as psychological good-functioning. We described cognitive organizations which are not regarded as products of a well-functioning personality, and attempted to suggest styles of reasoning behavior which would tend to characterize these structures. This approach helps little in choosing between ethical theories (one can be an open- or closed-minded utilitarian just as much as an intuitionist); a preponderance of ego-defensive attitudes, as expressed in unhealthy cognitive styles, interferes equally with all other basic functions of the personality, and any of the ethical theories may be put to a devious, if inconsistent, use. Analysis of moral reasoning in terms of cognitive styles is helpful in theoretically identifying the causes of aberrant moral reasoning. It also identifies in what respect this kind of reasoning is “irrational” and how, in general, it might be remedied.

The philosophical ramifications of this approach have only been hinted at in this paper; no one has yet constructed a “calculus of cognitive styles.” Yet it appears that, from a psychological perspective, moral reasoning can be evaluated on the attitudinal level in terms of good and bad cognitive structuring. The philosophical counterpart to this kind of analysis is, hopefully, waiting in the wings.

Notes

  1. For a recent discussion of the task of the moral philosopher: William D. Boyce, Larry Cyril Jensen, Moral Reasoning (Lincoln, Nebraska: U. of Nebraska Press, 1978), part I.
  2. Sigmund Freud, A General Introduction to Psychoanalysis, trans. by Joan Riviere (N.Y.: Pocket Books, 1953 edition), pp. 26-7.
  3. cf. Donn Byrne, An Introduction to Personality (Englewood Cliffs, N.J.: Prentice-Hall, 1974), chapter two.
  4. For a psychoanalytic treatment more concerned with cognitive issues, cf. Irving Sarnoff, “Psychoanalytic Theory and Cognitive Dissonance” in Theories of Cognitive Consistency: A Sourcebook, edited by Robert P. Abelson, Elliot Aronson, William J. McGuire, Theodore M. Newcomb, Milton J. Rosenberg, Percy H. Tannenbaum (Chicago: Rand McNally & Co., 1968), pp. 192-f. (Note: we shall hereafter refer to this volume as ‘Sourcebook’).
  5. Charles Leslie Stevenson, “The Nature of Ethical Disagreement,” in Richard Brandt, editor, Value and Obligation (N.Y.: Harcourt, Brace & World, 1961), p. 371.
  6. Charles L. Stevenson, Ethics and Language (New Haven: Yale U. Press, 1944), pp. 20-36. This approach is commonly referred to as “emotivism.”
  7. Stuart Chase, “The Criteria of Semantics,” Saturday Review, Volume 28, No. 23 (June 9, 1945), p. 17.
  8. Stevenson, in Brandt, p. 373.
  9. Ibid.
  10. Ibid.
  11. Ibid., p. 374.
  12. Ibid., p. 374. David Hume (cf. infra) makes similar statements.
  13. David Hume, A Treatise of Human Nature, edited by L.A. Selby-Bigge, revised by P.H. Nidditch (Oxford: Oxford U. Press, 1978 edition), p. 456.
  14. Ibid., p. 457.
  15. Ibid., p. 472; cf. pp. 471, 574.
  16. Ibid., p. 496.
  17. Ibid., p. 575.
  18. David Hume, Enquiries concerning Human Understanding and concerning the Principles of Morals, edited by L.A. Selby-Bigge, revised by P.H. Nidditch (Oxford: Oxford U. Press, 1975 edition), pp. 250-267.
  19. Hume, Treatise, p. 462.
  20. Ibid., p. 458; cf. p. 463.
  21. Ibid., p. 459; cf. p. 416.
  22. Ibid., p. 462, italics mine.
  23. Ibid., p. 458.
  24. For an excellent introduction with an extensive bibliography: Boyce and Jensen, op. cit., part II. Also, Thomas Lickona, editor, Moral Development and Behavior (N.Y.: Holt, Rinehart and Winston, 1976), and W. Kurtines and E.B. Greif, “The Development of Moral Thought: Review and Evaluation of Kohlberg’s Approach,” Psychological Bulletin, Vol. 81 (1974): pp. 453-470. Amongst the many works by Kohlberg, one might begin with “Stage and Sequence: The Cognitive-Developmental Approach to Socialization,” in D. Goslin, editor, Handbook of Socialization Theory and Research (Chicago: Rand-McNally, 1969).
  25. cf. the Psychological Bulletin article cited in the previous note, and another: Augusto Blasi, “Bridging Moral Cognition and Moral Action: A Critical Review of the Literature,” Psychological Bulletin, Vol. 88 (1980), pp. 1-f.
  26. cf. Jean Piaget, The Moral Judgement of the Child (N.Y.: Harcourt, Brace & World, 1932).
  27. cf. Blasi, op. cit., p. 8.
  28. Kohlberg’s scheme of stages, in simple form, is as follows: Level I—Preconventional, Stage one—heteronomous morality, Stage two—individualism, instrumental purpose and exchange; Level II—Conventional, Stage three—mutual interpersonal expectations, relationships, conformity, Stage four—social system and conscience; Level III—Postconventional or Principled, Stage five—social contract or utility and individual rights, Stage six—universal ethical principles (cf. Lawrence Kohlberg, “Moral Stages and Moralization: The Cognitive-Developmental Approach,” in Lickona, ed., op. cit., pp. 31-53).
  29. Blasi, op. cit., cites many favorable studies; for critical responses, cf. Kurtines and Greif, op. cit.; also, Jack R. Fraenkel, “The Kohlberg Bandwagon: Some Reservations,” Social Education (April, 1976): pp. 216-222.
  30. Immanuel Kant, Groundwork of the Metaphysic of Morals, trans. by H.J. Paton (N.Y.: Harper Torchbooks, 1964 edition), pp. 101-2.
  31. Ibid., p. 105.
  32. Ibid., pp. 102-3.
  33. Ibid., pp. 105-6.
  34. This is in many ways an answer to the challenge of Hume, as indicated in this statement from the Treatise (p. 463, op. cit.): “There has been an opinion very industriously propagated by certain philosophers, that morality is susceptible of demonstration; and tho’ no one has ever been able to advance a single step in those demonstrations; yet ‘tis taken for granted, that this science may be brought to an equal certainty with geometry or algebra. Upon this supposition, vice and virtue must consist in some relations; since ‘tis allowed on all hands, that no matter of fact is capable of being demonstrated. Let us, therefore, begin with examining this hypothesis, and endeavor, if possible, to fix those moral qualities which have been so long the objects of our fruitless researches. Point out distinctly the relations, which constitute morality or obligation, that we may know wherein they consist, and after what manner we must judge them.”
  35. cf. Hume, Treatise, p. 458.
  36. Kant, pp. 119-120.
  37. Thomas Hobbes, Leviathan, edited by Michael Oakeshott (N.Y.: Collier Books, 1962), p. 19.
  38. Ibid., p. 54.
  39. Ibid., pp. 80, 105.
  40. Ibid., p. 80.
  41. Ibid., p. 48. Hume makes similar statements.
  42. Ibid., pp. 48-9.
  43. Ibid., cf. pp. 21, 35.
  44. Ibid., p. 59.
  45. Ibid., p. 73.
  46. David Gauthier, “Thomas Hobbes: Moral Theorist,” Journal of Philosophy, Vol. 76 (October, 1979), pp. 547-8.
  47. This third “dogma” is not universally held. Utilitarians (e.g., Mill) would deny it, substituting for one’s own utility the “greatest happiness of the greatest number.”
  48. cf. Hillel J. Einhorn, Robin M. Hogarth, “Behavioral Decision Theory: Processes of Judgement and Choice,” Annual Review of Psychology, Vol. 32 (1981): pp. 53-88.
  49. cf. David Krech, Richard S. Crutchfield, Theory and Problems of Social Psychology (N.Y.: McGraw-Hill, 1948), pp. 168-9.
  50. An example is the observation by Murphy, Murphy, and Newcomb in Experimental Social Psychology (N.Y.: Harper and Brothers, 1938 revised edition), pp. 1031-3: “Evidence abounds … to suggest that the most freakish assortments of opinions and beliefs are commonly held by single individuals. The prevalence of irrational beliefs, even among those at college levels, has more than once been amply demonstrated … ‘Rational’ and ‘irrational’ ideas may evidently be the best of bedfellows.” This issue is discussed at some length by Jonathan L. Freedman, “How Important is Cognitive Consistency?” in Sourcebook, op. cit., pp. 497-503.
  51. This was strongly pointed out by J.W. Brehm, A.R. Cohen, Explorations in Cognitive Dissonance (N.Y.: Wiley, 1962); cf. also David C. Glass, “Individual Differences and the Resolution of Cognitive Inconsistencies,” in Sourcebook, op. cit. pp. 615-623.
  52. Gauthier, op. cit., pp. 547-f.; cf. also Arthur E. Murphy, “The Context of Moral Judgment,” in The Uses of Reason (N.Y.: MacMillan Co., 1943), p. 128.
  53. Two excellent histories and conceptual overviews of “attitude” are: Thomas M. Ostrom, “The Emergence of Attitude Theory: 1930–1950,” in Psychological Foundations of Attitudes, edited by Anthony G. Greenwald, Timothy C. Brock, Thomas M. Ostrom (N.Y.: Academic Press, 1968), pp. 1-32; and Melvin L. DeFleur, Frank R. Westie, “Attitude as a Scientific Concept,” in Allen P. Liska, The Consistency Controversy (N.Y.: John Wiley and Sons, 1975), pp. 23-43. A collection of essays most important to attitude theory is Martin Fishbein, ed., Readings in Attitude Theory and Measurement (N.Y.: John Wiley and Sons, 1967), while a recent survey of current research is: Robert B. Cialdini, Richard E. Petty, John T. Cacioppo, “Attitude and Attitude Change,” Annual Review of Psychology, Vol. 32 (1981): pp. 357-404.
  54. Gordon W. Allport, “Attitudes,” in Handbook of Social Psychology, edited by C. Murchison (Worcester, Mass.: Clark U. Press, 1935), pp. 808-9.
  55. Daniel Katz, Ezra Stotland, “Preliminary Statement to a Theory of Attitudes,” in S. Koch, editor, Psychology: Study of a Science, Vol. III (N.Y.: McGraw-Hill, 1959), p. 466.
  56. A number of different kinds of studies supporting this claim are discussed in M.J. Rosenberg, “An Analysis of Affective-Cognitive Consistency,” in M.J. Rosenberg, C.I. Hovland, W.J. McGuire, R.P. Abelson, J.W. Brehm, Attitude Organization and Change (New Haven: Yale U. Press, 1960), pp. 15-64.
  57. A number of excellent overviews and collections of critical essays on this approach to attitude theory are available. A few are: Sourcebook, op. cit.; R. Brown, “Models of Attitude Change,” in R. Brown, E. Galanter, E.H. Hess, G. Mandler, editors, New Directions in Psychology (N.Y.: Holt, Rinehart and Winston, 1962), pp. 1-85; Shel Feldman, Cognitive Consistency: Motivational Antecedents and Behavioral Consequents (N.Y.: Academic Press, 1966); R. Zajonc, “Cognitive Theories of Social Behavior,” in G. Lindzey, E. Aronson, editors, Handbook of Social Psychology (Reading, Mass.: Addison-Wesley, 1968 edition), Vol. I, pp. 320-410.
  58. Rosenberg, “An Analysis of Affective-Cognitive Consistency,” op. cit.
  59. R.P. Abelson, M.J. Rosenberg, “Symbolic Psycho-logic: A Model of Attitudinal Cognition,” Behavioral Science, Vol. 3 (1958): pp. 1-13.
  60. M.J. Rosenberg, R.P. Abelson, “An Analysis of Cognitive Balancing,” in Attitude Organization and Change, op. cit., pp. 112-163.
  61. The following description is generously extracted from M.J. Rosenberg, “Hedonism, Inauthenticity, and Other Goads Toward Expansion of a Consistency Theory,” in Sourcebook, op. cit., especially pp. 79-80.
  62. This is somewhat similar to the approach to value systems taken by Katz and Stotland, op. cit., pp. 432-4.
  63. cf. Rosenberg, “Hedonism, Inauthenticity, and Other Goads Toward Expansion of a Consistency Theory,” in Sourcebook, p. 81.
  64. This point is made strongly in Leonard Berkowitz, “The Motivational Status of Cognitive Consistency Theory,” in Sourcebook, pp. 303-310.
  65. W.J. McGuire, “The Nature of Attitudes and Attitude Change,” in Handbook of Social Psychology, revised edition, Vol. III, pp. 159-169.
  66. Katz and Stotland, op. cit., pp. 436-f.
  67. Hume makes this same point when accounting for the need for a concept of justice in the Enquiries, op. cit., pp. 183-192 (sect. III, pt. 2).
  68. cf. R.P. Abelson, “Modes of Resolution of Belief Dilemmas,” Journal of Conflict Resolution, Vol. 3 (1959), pp. 343-352.
  69. Katz and Stotland, op. cit. p. 440.
  70. McGuire, op. cit., pp. 157-8.
  71. Ibid., pp. 156-7. McGuire refers to this as an “economy” function, but the term seems ill-suited, as it is easily mixed up with what he calls “utilitarian” functions, which more closely approximate the traditional sense of “economic.”
  72. Studies on altruism, moral development and the delay of gratification are discussed in Donn Byrne, An Introduction to Personality (Englewood Cliffs, N.J.: Prentice-Hall, 1974 edition), pp. 478-f.
  73. A number of theorists have examined the variables of self-esteem and ego-involvement in the judgement process; e.g., M. Sherif, H. Cantril, The Psychology of Ego-Involvements (N.Y.: Wiley, 1947).
  74. Erich Fromm, Man for Himself: An Inquiry into the Psychology of Ethics (Greenwich, Conn.: Fawcett Books, 1947), pp. 70-88.
  75. On cognitive styles: David Rapaport, “Cognitive Structures,” in Contemporary Approaches to Cognition (Harvard U. Press, 1957), pp. 159-f.; cf. also Zajonc, op. cit., pp. 332-5, which discusses various classifications of cognitive structure but questions (p. 335) the connection of structure-types with styles of functioning.
  76. Milton Rokeach, The Open and Closed Mind: Investigations into the Nature of Belief Systems and Personality Systems (N.Y.: Basic Books, 1960), p. 16.
  77. An important exception to this is a belief system characterized by the presence of an ideology. Ideologies are generally tightly knit, but broad-ranging, belief systems in the service of an institution or cause. Unlike the cluster effect we have been describing, an ideological cognitive structure would be characterized by widespread symmetries. However, the tendency to establish these symmetries on the basis of like-signedness of objects, rather than well-supported factual relationships, places ideological thinking in the same class with the kind of “closed-minded” thinking we have been discussing.
