Alexandra Logue: Not All Excess Credits Are The Students’ Fault

This post was originally published on Alexandra Logue’s blog

A recent article in Educational Evaluation and Policy Analysis reported on an investigation of policies punishing students for graduating with excess credits.  Excess credit hours are the credits that a student obtains in excess of what is required for a degree, and many students graduate having taken several courses more than what was needed.

To the extent that tuition does not cover the cost of instruction, and/or that financial aid is paying for these excess credits, someone other than the student—the college or the government—is paying for them.  Graduating with excess credits also means that a student is occupying possibly scarce classroom seats longer than s/he needs to and is not entering the work force with a degree and paying more taxes as soon as s/he could.  Thus there are many reasons why colleges and/or governments might seek to decrease excess credits.  The article considers cases in which states have imposed sanctions on students who graduate with excess credits, charging more for credits taken significantly above the number required for a degree.  The article shows that such policies, instead of resulting in students graduating sooner, have resulted in greater student debt.  But the article does not identify the reasons why this may be the case.  Perhaps one reason is that students do not have control over those excess credits.

For example, as described in my forthcoming book, Pathways to Reform: Credits and Conflict at The City University of New York, students may accumulate excess credits because of difficulties they have transferring their credits.  When students transfer, there can be significant delays in having the credits that they obtained at their old institution evaluated by their new institution.  At least at CUNY colleges, the evaluation process can take many months.  During that period, a student either has to stop out of college or take a risk and enroll in courses that may or may not be needed for the student’s degree.  Even when appropriate courses are taken, all too often credits that a student took at the old college as satisfying general education (core) requirements or major requirements become elective credits, or do not transfer at all. A student then has to repeat courses or take extra courses in order to satisfy all of the requirements at the new college.  Given that a huge proportion of students now transfer, or try to transfer, their credits (49% of bachelor’s degree recipients have some credits from a community college, and over one-third of students in the US transfer within six years of starting college), a great number of credits are being lost.

Nevertheless, a 2010 study at CUNY found that a small proportion of the excess credits of its bachelor’s degree recipients was due to transfer—students who never transferred graduated with only one or two fewer excess credits, on average, than did students who did transfer.  Some transfer students may have taken fewer electives at their new colleges in order to have room in their programs to make up nontransferring credits from their old colleges, without adding many excess credits.

But does this mean that we should blame students for those excess credits and make them pay more for them?  Certainly some of the excess credits are due to students changing their majors late and/or to not paying attention to requirements and so taking courses that don’t allow them to finish their degrees, and there may even be some students who would rather keep taking courses than graduate.

But there are still other reasons that students may accumulate extra credits, reasons for which the locus of control is not the student.  Especially in financially strapped institutions, students may have been given bad or no guidance by an advisor.  In addition, students may have been required to take traditional remedial courses, which can result in a student acquiring many of what CUNY calls equated credits, on top of the required college-level credits (despite the fact that there are more effective ways to deliver remediation without the extra credits). Or a student may have taken extra courses that s/he didn’t need for graduation in order to continue to enroll full-time, so that the student could continue to be eligible for some types of financial aid and/or (in the past) health insurance. Students may also have made course-choice errors early in their college careers, when they were unaware of excess-credit tuition policies that would only have an effect years later.

The fact that the imposition of excess-credit tuition policies did not affect the number of excess credits accumulated, but instead increased student debt, by itself suggests that, to at least some degree, the excess credits are not something that students can easily avoid, and/or that there are counter-incentives operating that are even stronger than the excess tuition.

Before punishing students, or trying to control their behavior, we need to have a good deal of information about all of the different contingencies to which students are subject.  Students should complete their college’s requirements as efficiently as possible.  However, just because some students demonstrate delayed graduation behavior does not mean that they are the ones who are controlling that behavior.  Decreasing excess credits needs to be a more nuanced process, with contingencies and consequences tailored appropriately to those students who are abusing the system, and those who are not.

Alexandra W. Logue is a research professor at the Center for Advanced Study in Education at the Graduate Center, CUNY. From 2008 to 2014, she served as executive vice chancellor and university provost of the CUNY system. She is the author of Pathways to Reform: Credits and Conflict at The City University of New York.

Omnia El Shakry: Psychoanalysis and Islam

Omnia El Shakry’s new book, The Arabic Freud, is the first in-depth look at how postwar thinkers in Egypt mapped the intersections between Islamic discourses and psychoanalytic thought.

What are the very first things that pop into your mind when you hear the words “psychoanalysis” and “Islam” paired together?  For some of us the connections might seem improbable or even impossible. And if we were to be brutally honest the two terms might even evoke the specter of a so-called “clash of civilizations” between an enlightened, self-reflective West and a fanatical and irrational East.

It might surprise many of us to know, then, that Sigmund Freud, the founding figure of psychoanalysis, was ever-present in postwar Egypt, engaging the interest of academics, novelists, lawyers, teachers, and students alike. In 1946 Muhammad Fathi, a Professor of Criminal Psychology in Cairo, ardently defended the relevance of Freud’s theories of the unconscious for the courtroom, particularly for understanding the motives behind homicide. Readers of Nobel laureate Naguib Mahfouz’s 1948 The Mirage were introduced to the Oedipus complex, graphically portrayed in the novel, by immersing themselves in the world of its protagonist—pathologically and erotically attached to and fixated on his possessive mother. And by 1951 Freudian theories were so well known in Egypt that a secondary school philosophy teacher proposed prenuptial psychological exams in order to prevent unhappy marriages due to unresolved Oedipus complexes!

Scholars who have tackled the question of psychoanalysis and Islam have tended to focus on it as a problem, by assuming that psychoanalysis and Islam have been “mutually ignorant” of each other, and they have placed Islam on the couch, as it were, alleging that it is resistant to the “secular” science of psychoanalysis. In my book, The Arabic Freud, I undo the terms of this debate and ask, instead, what it might mean to think of psychoanalysis and Islam together, not as a “problem,” but as a creative encounter of ethical engagement.

What I found was that postwar thinkers in Egypt saw no irreconcilable differences between psychoanalysis and Islam. And in fact, they frequently blended psychoanalytic theories with classical Islamic concepts. For example, when they translated Freud’s concept of the unconscious, the Arabic term used, “al-la-shuʿur,” was taken from the medieval mystical philosopher Ibn ʿArabi, renowned for his emphasis on the creative imagination within Islamic spirituality.

Islamic thinkers further emphasized similarities between Freud’s interpretation of dreams and Islamic dream interpretation, and they noted that the analyst-analysand (therapist-patient) relationship and the spiritual master-disciple relationship of Sufism (the phenomenon of mysticism in Islam) were nearly identical. In both instances, there was an intimate relationship in which the “patient” was meant to forage their unconscious with the help of their shaykh (spiritual guide) or analyst, as the case might be. Both Sufism and psychoanalysis, then, were characterized by a relationship between the self and the other that was mediated by the unconscious. Both traditions exhibited a concern for the relationship between what was hidden and what was shown in psychic and religious life, both demonstrated a preoccupation with eros and love, and both mobilized a highly specialized vocabulary of the self.

What, precisely, are we to make of this close connection between Islamic mysticism and psychoanalysis? On the one hand, it helps us identify something of a paradox within psychoanalysis, namely that for some psychoanalysis represents a non-religious and even atheistic world view. And there is ample evidence for this view within Freud’s own writings, which at times pathologized religion in texts such as The Future of an Illusion and Civilization and Its Discontents. At the same time, in Freud and Man’s Soul, Bruno Bettelheim argued that in the original German Freud’s language was full of references to the soul, going so far as to refer to psychoanalysts as “a profession of secular ministers of souls.” Similarly, psychoanalysis was translated into Arabic as “tahlil al-nafs”—the analysis of the nafs, which means soul, psyche, or self and has deeply religious connotations. In fact, throughout the twentieth century there have been psychoanalysts who have maintained a receptive attitude towards religion and mysticism, such as Marion Milner or Sudhir Kakar. What I take all of this to mean is that psychoanalysis as a tradition is open to multiple, oftentimes conflicting, interpretations and we can take Freud’s own ambivalence towards religion, and towards mysticism in particular, as an invitation to rethink the relationship between psychoanalysis and religion.

What, then, if religious forms of knowledge, and the encounter between psychoanalysis and Islam more specifically, might lead us to new insights into the psyche, the self, and the soul? What would this mean for how we think about the role of religion and ethics in the making of the modern self? And what might it mean for how we think about the relationship between the West and the Islamic world?

Omnia El Shakry is Professor of History at the University of California, Davis. She is the author of The Great Social Laboratory: Subjects of Knowledge in Colonial and Postcolonial Egypt and the editor of Gender and Sexuality in Islam. Her new book, The Arabic Freud, is out this September.

Chris Chambers: The Seven Deadly Sins of Psychology

Psychological science has made extraordinary discoveries about the human mind, but can we trust everything its practitioners are telling us? In recent years, it has become increasingly apparent that a lot of research in psychology is based on weak evidence, questionable practices, and sometimes even fraud. The Seven Deadly Sins of Psychology by Chris Chambers diagnoses the ills besetting the discipline today and proposes sensible, practical solutions to ensure that it remains a legitimate and reliable science in the years ahead.

Why did you decide to write this book?

CC: Over the last fifteen years I’ve become increasingly fed up with the “academic game” in psychology, and I strongly believe we need to raise standards to make our research more transparent and reliable. As a psychologist myself, one of the key lessons I’ve learned is that there is a huge difference between how the public thinks science works and how it actually works. The public have this impression of scientists as objective truth seekers on a selfless mission to understand nature. That’s a noble picture but bears little resemblance to reality. Over time, the mission of psychological science has eroded from something that originally was probably quite close to that vision into a contest for short-term prestige and career status, corrupted by biased research practices, bad incentives, and occasionally even fraud.

Many psychologists struggle valiantly against the current system but they are swimming against a tide. I trained within that system. I understand how it works, how to use it, and how it can distort your thinking. After 10 years of “playing the game” I realized I didn’t like the kind of scientist I was turning into, so I decided to try and change the system and my own practices—not only to improve science but to help younger scientists avoid my predicament. At its heart this book lays out my view of how we can reinvigorate psychology by adopting an emerging philosophy called “open science.” Some people will agree with this solution. Many will not. But, above all, the debate is important to have.

It sounds like you’re quite skeptical about science generally.

CC: Even though I’m quite critical about psychology, the book shouldn’t be seen as anti-science—far from it. Science is without doubt the best way to discover the truth about the world and make rational decisions. But that doesn’t mean it can’t or shouldn’t be improved. We need to face the problems in psychology head-on and develop practical solutions. The stakes are high. If we succeed then psychology can lead the way in helping other sciences solve similar problems. If we fail then I believe psychology will fade into obscurity and become obsolete.

Would it matter if psychology disappeared? Is it really that important?

CC: Psychology is a huge part of our lives. We need it in every domain where it is important to understand human thought or behavior, from treating mental illness, to designing traffic signs, to addressing global problems like climate change, to understanding basic (but extraordinarily complex) mental functions such as how we see or hear. Understanding how our minds work is the ultimate journey of self-discovery and one of the fundamental sciences. And it’s precisely because the world needs robust psychological science that researchers have an ethical obligation to meet the high standards expected of us by the public.

Who do you think will find your book most useful?

CC: I have tried to tailor the content for a variety of different audiences, including anyone who is interested in psychology or how science works. Among non-scientists, I think the book may be especially valuable for journalists who report on psychological research, helping them overcome common pitfalls and identify the signs of bad or weak studies. At another level, I’ve written this as a call-to-arms for my fellow psychologists and scientists in closely aligned disciplines, because we need to act collectively in order to fix these problems. And the most important readers of all are the younger researchers and students who are coming up in the current academic system and will one day inherit psychological science. We need to get our house in order to prepare this generation for what lies ahead and help solve the difficulties we inherited.

So what exactly are the problems facing psychology research?

CC: I’ve identified seven major ills, which (a little flippantly, I admit) can be cast as seven deadly sins. In order they are Bias, Hidden Flexibility, Unreliability, Data Hoarding, Corruptibility, Internment, and Bean Counting. I won’t ruin the suspense by describing them in detail, but they all stem from the same root cause: we have allowed the incentives that drive individual scientists to fall out of step with what’s best for scientific advancement. When you combine this with the intense competition of academia, it creates a research culture that is biased, closed, fearful and poorly accountable—and just as a damp bathroom encourages mold, a closed research culture becomes the perfect environment for cultivating malpractice and fraud.

It all sounds pretty bad. Is psychology doomed?

CC: No. And I say this emphatically: there is still time to turn this around. Beneath all of these problems, psychology has a strong foundation; we’ve just forgotten about it in the rat race of modern academia. There is a growing movement to reform research practices in psychology, particularly among the younger generation. We can solve many problems by adopting open scientific practices—practices such as pre-registering study designs to reduce bias, making data and study materials as publicly available as possible, and changing the way we assess scientists for career advancement. Many of these problems are common to other fields in the life sciences and social sciences, which means that if we solve them in psychology we can solve them in those areas too. In short, it is time for psychology to grow up, step up, and take the lead.

How will we know when we’ve fixed the deadly sins?

CC: The main test is that our published results should become a lot more reliable and repeatable. As things currently stand, there is a high chance that any new result published in a psychology journal is a false discovery. So we’ll know we’ve cracked these problems when we can start to believe the published literature and truly rely on it. When this happens, and open practices become the norm, the closed practices and weak science that define our current culture will seem as primitive as alchemy.

Chris Chambers is professor of cognitive neuroscience in the School of Psychology at Cardiff University and a contributor to the Guardian science blog network. He is the author of The Seven Deadly Sins of Psychology: A Manifesto for Reforming the Culture of Scientific Practice.

Joel Brockner: Can Job Autonomy Be a Double-Edged Sword?

This post was originally published on the Psychology Today blog.

“You can arrive to work whenever convenient.”

“Work from home whenever you wish.”

“You can play music at work at any time.”

These are examples of actual workplace policies from prominent companies such as Aetna, American Express, Dell, Facebook, Google, IBM, and Zappos. They have joined the ranks of many organizations in giving employees greater job autonomy, that is, more freedom to decide when, where, and how to do their work. And why not? Research by organizational psychologists such as Richard Hackman and Greg Oldham and by social psychologists such as Edward Deci and Richard Ryan has shown that job autonomy can have many positive effects. The accumulated evidence is that employees who experience more autonomy are more motivated, creative, and satisfied with their jobs.

Against this backdrop of the generally favorable effects of job autonomy, recent research has shown that it also may have a dark side: unethical behavior. Jackson Lu, Yoav Vardi, Ely Weitz and I discovered such results in a series of field and laboratory studies soon to be published in the Journal of Experimental Social Psychology. In field studies conducted in Israel, employees from a wide range of industries rated how much autonomy they had and how often they engaged in unethical behavior, such as misrepresenting their work hours or wasting work time on private phone calls. Those who had greater autonomy said that they engaged in more unethical behavior on the job.

In laboratory experiments conducted in the United States we found that it may not even be necessary for people to have actual autonomy for them to behave unethically; merely priming them with the idea of autonomy may do the trick. In these studies participants were randomly assigned to conditions differing in how much the concept of autonomy was called to mind. This was done with a widely used sentence-unscrambling task in which people had to rearrange multiple series of words into grammatically correct sentences. For example, those in the high-autonomy condition were given words such as, “have many as you as days wish you vacation may” which could be rearranged to form the sentence, “You may have as many vacation days as you wish.” In contrast, those in the low-autonomy condition were given words such as, “office in work you must the,” which could be rearranged to, “You must work in the office.” After completing the sentence-unscrambling exercise participants did another task in which they were told that the amount of money they earned depended on how well they performed. The activity was structured in a way that enabled us to tell whether participants lied about their performance. Those who were previously primed to experience greater autonomy in the sentence-unscrambling task lied more.

Job autonomy gives employees a sense of freedom, which usually has positive effects on their productivity and morale, but it also can lead them to feel that they can do whatever they want, including not adhering to rules of morality.

All behavior is a function of what people want to do (motivation) and what they are capable of doing (ability). Consider the unethical behavior elicited by high levels of autonomy. Having high autonomy may not have made people want to behave unethically. However, it may have enabled the unethical behavior by making it possible for people to engage in it. Indeed, the distinction between people wanting to behave unethically versus having the capability of doing so may help answer two important questions:

(1) What might mitigate the tendency for job autonomy to elicit unethical behavior?

(2) If job autonomy can lead to unethical behavior, should companies re-evaluate whether to give job autonomy to their employees? That is, can job autonomy be introduced in a way that maximizes its positive consequences (e.g., greater creativity) without introducing the negative effect of unethical behavior?

With respect to the first question, my hunch is that people who have job autonomy and therefore are able to behave unethically will not do so if they do not want to behave unethically. For example, people who are high on the dimension of moral identity, for whom behaving morally is central to how they define themselves, would be less likely to behave unethically even when a high degree of job autonomy enabled or made it possible for them to do so.

With respect to the second question, I am not recommending that companies abandon their efforts to provide employees with job autonomy. Our research suggests, rather, that the consequences of giving employees autonomy may not be uniformly favorable. Taking a more balanced view of how employees respond to job autonomy may shed light on how organizations can maximize the positive effects of job autonomy while minimizing the negative consequence of unethical behavior.

Whereas people generally value having autonomy, some people want it more than others. People who want autonomy a lot may be less likely to behave unethically when they experience autonomy. For one thing, they may be concerned that the autonomy they covet may be taken away if they were to take advantage of it by behaving unethically. This reasoning led us to do another study to evaluate when the potential downside of felt autonomy can be minimized while its positive effects can be maintained. Once again, we primed people to experience varying degrees of job autonomy with the word-unscrambling exercise. Half of them then went on to do the task which measured their tendency to lie about their performance, whereas the other half completed an entirely different task, one measuring their creativity. Once again, those who worked on the task in which they could lie about their performance did so more when they were primed to experience greater autonomy. And, as has been found in previous research, those who did the creativity task performed better at it when they were primed to experience greater autonomy.

Regardless of whether they did the task that measured unethical behavior or creativity, participants also indicated how much they generally valued having autonomy. Among those who generally valued having autonomy to a greater extent, (1) the positive relationship between experiencing job autonomy and behaving unethically diminished, whereas (2) the positive relationship between experiencing job autonomy and creativity was maintained. In other words, as long as people valued having autonomy, the experience of autonomy had the positive effect of enhancing creativity without introducing the dangerous side effect of unethical behavior. So, when organizations introduce job autonomy policies like those mentioned at the outset, they may gain greater overall benefits when they ensure that their employees value having autonomy. This may be achieved by selecting employees who value having autonomy as well as by creating a corporate culture which emphasizes the importance of it. More generally, a key practical takeaway from our studies is that when unethical behavior is enabled, whether through job autonomy or other factors, it needs to be counterbalanced by conditions that make employees not want to go there.

Joel Brockner is the Phillip Hettleman Professor of Business at Columbia Business School. He is the author of The Process Matters: Engaging and Equipping People for Success.

Face Value: Man or Woman?

In Face Value: The Irresistible Influence of First Impressions, Princeton professor of psychology Alexander Todorov delves into the science of first impressions. In honor of the book’s release, we’re sharing examples from his research every week.


It is easy to identify the woman in the image on the right and the man in the image on the left. But the two images are almost identical with one subtle difference: the skin surface in the image on the left is a little bit darker. The eyes and lips of the faces are identical, but the rest of the image on the left was darkened, and the rest of the image on the right was lightened. This manipulation makes the face on the left look masculine and the face on the right look feminine. This is one way to induce the gender illusion. Here is another one.

Based on research reported in

R. Russell (2009). “A sex difference in facial contrast and its exaggeration by cosmetics.” Perception 38, 1211–1219.


Face Value: Eyebrows

In Face Value: The Irresistible Influence of First Impressions, Princeton professor of psychology Alexander Todorov delves into the science of first impressions. Throughout the month of May, we’ll be sharing examples of his research. 

 


It is easier to recognize Richard Nixon when his eyes are removed than when his eyebrows are removed from the image. Our intuitions about what facial features are important are insufficient at best and misleading at worst.

 

Based on research reported in

J. Sadr, I. Jarudi, and P. Sinha (2003). “The role of eyebrows in face recognition.” Perception 32, 285–293.


Face Value: Can you recognize the celebrities?

In Face Value: The Irresistible Influence of First Impressions, Princeton professor of psychology Alexander Todorov delves into the science of first impressions. Throughout the month of May, we’ll be sharing examples of his research. 

 


 

A: Justin Bieber and Beyoncé

 


The Great Mother—Jackets throughout the years

Goddess, monster, gate, pillar, tree, moon, sun, vessel, and every animal from snakes to birds: the maternal has been represented throughout history as both nurturing and fearsome, a primordial image of the human psyche. In celebration of Mother’s Day, we dipped into the archives for a tour of the various covers of a landmark book, Erich Neumann’s The Great Mother.

Face Value: Who is more likely to have committed a violent crime?

In Face Value: The Irresistible Influence of First Impressions, Princeton professor of psychology Alexander Todorov delves into the science of first impressions. Throughout the month of May, we’ll be sharing examples of his research. 

 


 

The face on the right was manipulated to be perceived as more criminal looking and the face on the left as less criminal looking.

Note that these immediate impressions need not be grounded in reality. They are our visual stereotypes of what constitutes criminal appearance. Note also the large number of differences between the two faces: shape, color, texture, individual features, placement of individual features, and so on. Yet we can easily identify global characteristics that differentiate these faces. More masculine appearance makes a face appear more criminal. In contrast, more feminine appearance makes a face appear less criminal. But keep in mind that it is impossible to describe all the variations between the two faces in verbal terms.

Based on research reported in

F. Funk, M. Walker, and A. Todorov (2016). “Modeling perceived criminality and remorse in faces using a data-driven computational approach.” Cognition & Emotion, http://dx.doi.org/10.1080/02699931.2016.1227305.

 


Alexander Todorov on the science of first impressions

We make up our minds about others after seeing their faces for a fraction of a second—and these snap judgments predict all kinds of important decisions. For example, politicians who simply look more competent are more likely to win elections. Yet the character judgments we make from faces are as inaccurate as they are irresistible; in most situations, we would guess more accurately if we ignored faces. So why do we put so much stock in these widely shared impressions? What is their purpose if they are completely unreliable? In Face Value, Alexander Todorov, one of the world’s leading researchers on the subject, answers these questions as he tells the story of the modern science of first impressions. Here he responds to a few questions about his new book.

What inspired you to write this book?

AT: I have been doing research on how people perceive faces for more than 10 years. Typically, we think of face perception as recognizing identity and emotional expressions, but we do much more than that. When we meet someone new, we immediately evaluate their face and these evaluations shape our decisions. This is what we informally call first impressions. First impressions pervade everyday life and often have detrimental consequences. Research on first impressions from facial appearance has been quite active during the last decade and we have made substantive progress in understanding these impressions. My book is about the nature of first impressions, why we cannot help but form impressions, and why these impressions will not disappear from our lives.

In your book, you argue that first impressions from facial appearance are irresistible. What is the evidence?

AT: As I mentioned, the study of first impressions has been a particularly active area of research and the findings have been quite surprising. First, we form impressions after seeing a face for less than one-tenth of a second. We decide not only whether the person is attractive but also whether he or she is trustworthy, competent, extroverted, or dominant. Second, we agree on these impressions and this agreement emerges early in development. Children, just like adults, are prone to using face stereotypes. Third, these impressions are consequential. Unlucky people who appear “untrustworthy” are more likely to get harsher legal punishments. Those who appear “trustworthy” are more likely to get loans on better financial terms. Politicians who appear more “competent” are more likely to get elected. Military personnel who appear more “dominant” are more likely to achieve higher ranks. My book documents both the effortless nature of first impressions and their biasing effects on decisions.

The first part of your book is about the appeal of physiognomy—the pseudoscience of reading character from faces. Has not physiognomy been thoroughly discredited?

AT: Yes and no. Most people today don’t believe in the great physiognomy myth that we can read the character of others from their faces, but the evidence suggests that we are all naïve physiognomists: forming instantaneous impressions and acting on these impressions. Moreover, fueled by recent research advances in visualizing the content of first impressions, physiognomy appears in many modern disguises: from research papers claiming that we can discern the political, religious, and sexual orientations of others from images of their faces to private ventures promising to profile people based on images of their faces and offering business services to companies and governments. This is nothing new. The early 20th-century physiognomists, who called themselves “character analysts,” were involved in many business ventures. The modern physiognomists are relying on empirical and computer science methods to legitimize their claims. But as I try to make clear in the book, the modern claims are as far-fetched as the claims of the old physiognomists. First, different images of the same person can lead to completely different impressions. Second, often our decisions are more accurate if we completely ignore face information and rely on common knowledge.

You mentioned research advances that visualize the content of first impressions. What do you mean?

AT: Faces are incredibly complex stimuli and we are exquisitely sensitive to minor variations in facial appearance. This makes the study of face perception both fascinating and difficult. In the last 10 years, we have developed methods that capture the variations in facial appearance that lead to specific impressions such as trustworthiness. The best way to illustrate the methods is by providing visual images, because it is impossible to describe all these variations in verbal terms. Accordingly, the book is richly illustrated. Here is a pair of faces that have been extremely exaggerated to show the variations in appearance that shape our impressions of trustworthiness.


Most people immediately see the face on the left as untrustworthy and the face on the right as trustworthy. But notice the large number of differences between the two faces: shape, color, texture, individual features, placement of individual features, and so on. Yet we can easily identify global characteristics that differentiate these faces. Positive expressions and feminine appearance make a face appear more trustworthy. In contrast, negative expressions and masculine appearance make a face appear less trustworthy. We can and have built models of many other impressions such as dominance, extroversion, competence, threat, and criminality. These models identify the contents of our facial stereotypes.

To the extent that we share face stereotypes that emerge early in development, isn’t it possible that these stereotypes are grounded in our evolutionary past and, hence, have a kernel of truth?

AT: On the evolutionary scale, physiognomy has a very short history. If you imagine the evolution of humankind compressed into 24 hours, we have lived in small groups during the entire 24 hours except for the last 5 minutes. In such groups, there is abundant information about others coming from first-hand experiences (like observations of behavior and interactions) and from second-hand experiences (like testimonies of family, friends, and acquaintances). That is, for most of human history, people did not have to rely on appearance information to infer the character of others. These inferences were based on much more reliable and easily accessible information. The emergence of large societies in the last few minutes of the day changed all that. The physiognomists’ promise was that we could handle the uncertainty of living with strangers by knowing them from their faces. It is no coincidence that the peaks of popularity of physiognomists’ ideas were during times of great migration. Unfortunately, the physiognomists’ promise is as appealing today as it was in the past.

Are there ways to minimize the effects of first impressions on our decisions?

AT: We need to structure decisions so that we have access to valid information and minimize the access to appearance information. A good real-life example is the increase in the number of women in prestigious philharmonic orchestras. Until recently, these orchestras were almost exclusively populated by men. What made the difference was the introduction of blind auditions. The judges could hear the candidates’ performance but their judgments could not be swayed by appearance, because they could not see the candidates.

So why are faces important?

AT: Faces play an extremely important role in our mental life, though not the role the physiognomists imagined. Newborns with virtually no visual experience prefer to look at faces rather than at other objects. After all, without caregivers we would not survive. In the first few months of life, faces are one of the most looked-upon objects. This intensive experience with faces develops into an intricate network of brain regions dedicated to the processing of faces. This network supports our extraordinary face skills: recognizing others and detecting changes in their emotional and mental states. There are likely evolutionary adaptations in the human face—our bare skin, elongated eyes with white sclera, and prominent eyebrows—but these adaptations are about facilitating the reading of other minds, about communicating and coordinating our actions, not about inferring character.

Alexander Todorov is professor of psychology at Princeton University, where he is also affiliated with the Princeton Neuroscience Institute and the Woodrow Wilson School of Public and International Affairs. He is the author of Face Value: The Irresistible Influence of First Impressions.

Joel Brockner: Why Bosses Can Be Dr. Jekyll and Mr. Hyde

It is unnerving when people in authority positions behave inconsistently, especially when it comes to matters of morality. We call such people “Jekyll and Hyde characters,” based on Robert Louis Stevenson’s novella in which the same person behaved very morally in some situations and very immorally in others. Whereas the actual title of Stevenson’s work was Strange Case of Dr. Jekyll and Mr. Hyde, recent research suggests that Jekyll and Hyde bosses may not be so unusual. In fact, behaving morally like Dr. Jekyll may cause bosses to subsequently behave immorally like Mr. Hyde.

Researchers at Michigan State University (Szu Han Lin, Jingjing Ma, and Russell Johnson) asked employees to describe the behavior of their bosses from one day to the next. Bosses who behaved more ethically on the first day were more likely to behave abusively towards their subordinates the next day. For instance, the more that bosses on the first day did things like: 1) define success not just by results but also by the way that they are obtained, 2) set an example of how to do things the right way in terms of ethics, or 3) listen to what their employees had to say, the more likely they were on the next day to ridicule employees, to give employees the silent treatment, or to talk badly about employees behind their back. Does being in a position of authority predispose people to be hypocrites?

Not necessarily. Lin, Ma, and Johnson found two reasons why ethical leader behavior can, as they put it, “break bad.” One is moral licensing, which is based on the idea that people want to think of themselves and their behavior as ethical or moral. Having behaved ethically, people are somewhat paradoxically free to behave less ethically, either because their prior behavior gave them moral credits in their psychological ledgers or because it proved them to be fine, upstanding citizens.

A second explanation is based on Roy Baumeister’s notion of ego depletion, which assumes that people have a limited amount of self-control resources. Ego depletion refers to how people exerting self-control in one situation are less able to do so in a subsequent situation. Ego depletion helps to explain, for instance, why employees tend to make more ethical decisions earlier rather than later in the day. Throughout the day we are called upon to behave in ways that require self-control, such as not yelling at the driver who cut us off on the way to work, not having that second helping of delicious dessert at lunch, and not expressing negative emotions we may be feeling about bosses or co-workers who don’t seem to be behaving appropriately, in our view. Because we have fewer self-control resources later in the day, we are more likely to succumb to the temptation to behave unethically. In like fashion, bosses who behave ethically on one day (like Dr. Jekyll) may feel ego depleted from having exerted self-control, making them more prone to behave abusively towards their subordinates the next day (like Mr. Hyde).

Distinguishing between moral licensing and ego depletion is important, both conceptually and practically. At the conceptual level, a key difference between the two is whether the self is playing the role of object or subject. When people take themselves as the object of attention they want to see themselves and their behavior positively, for example, as ethical. As object (which William James called the me-self), self-processes consist of reflecting and evaluating. When operating as subject, the self engages in regulatory activity, in which people align their behavior with meaningful standards coming from within or from external sources; James called this the I-self. Moral licensing is a self-as-object process, in which people want to see themselves in certain positive ways (e.g., ethical), so that when they behave ethically they are free, at least temporarily, to behave in not so ethical ways. Ego depletion is a self-as-subject process, in which having exerted self-control in the service of regulation makes people, at least temporarily, less capable of doing so.

The founding father of social psychology, Kurt Lewin, famously proclaimed that, “There is nothing so practical as a good theory.” Accordingly, the distinction between moral licensing and ego depletion lends insight into the applied question of how to mitigate the tendency for ethical leader behavior to break bad. The moral licensing explanation suggests that one way to go is to make it more difficult for bosses to make self-attributions for their ethical behavior. For instance, suppose that an organization had very strong norms for its authorities to behave ethically. When authorities in such an organization behave ethically, they may attribute their behavior to the situation (strong organizational norms) rather than to themselves. In this example authorities are behaving morally but are not licensing themselves to behave abusively.

The ego depletion explanation suggests other ways to weaken the tendency for bosses’ ethical behavior to morph into abusiveness. For instance, much like giving exercised muscles a chance to rest and recover, ensuring that bosses are not constantly in the mode of exerting self-control may allow their self-regulatory resources to be replenished. It also has been shown that people’s beliefs about how ego depleted they are influence their tendency to exert self-control, over and above how ego depleted they actually are. In a research study appropriately titled, “Ego depletion—is it all in your head?,” Veronika Job, Carol Dweck, and Gregory Walton found that people who believed they were less ego depleted after engaging in self-control were more likely to exert self-control in a subsequent activity. People differ in their beliefs about the consequences of exercising self-control. For some, having to exert self-control is thought to be de-energizing; for others, it is not. Bosses who believe that exerting self-control is not de-energizing may be less prone to behave abusively after exerting the self-control needed to behave ethically.

Whereas we have focused on how Dr. Jekyll can awaken Mr. Hyde, it also is entirely possible for Mr. Hyde to bring Dr. Jekyll to life. For instance, after behaving abusively bosses may want to make up for their bad feelings about themselves by behaving ethically. In any event, the case of Dr. Jekyll and Mr. Hyde may not be so strange after all. We should not be surprised by inconsistency in our bosses’ moral behavior, once we consider how taking the high road may cause them to take the low road, and vice versa.

Joel Brockner is the Phillip Hettleman Professor of Business at Columbia Business School. He is the author of The Process Matters: Engaging and Equipping People for Success.

This article was originally published on Psychology Today.

Joel Brockner: Are We More or Less Likely to Continue Behaving Morally?

by Joel Brockner

This post appears concurrently on Psychology Today.

Sometimes when we do something, it causes us to continue in the same vein, or to show a more extreme version of the behavior. The method of social influence known as “the foot-in-the-door” technique is based on this tendency. For instance, salespeople usually won’t ask you to make a big purchase, such as a yearlong subscription, right off the bat. Instead, they will first ask you to take a small step, such as to accept an introductory offer that will only last for a little while. Then, at a later date they will ask you to make the big purchase. Research shows that people are more likely to go along with a big request if they previously agreed to a small related request. A now-classic study suggested that people were willing to put a large, ugly sign in front of their homes saying, “Drive Carefully,” if, a few days before, they had simply signed their name to a petition supporting safe driving.

Other times, however, when people do something it makes them less likely to continue to behave that way. For example, if people have made a charitable contribution to the United Way at work, they may feel less compelled to give again when the United Way comes knocking on their door at home. In fact, if solicited at home, they would probably say something to the effect of, “I gave at the office.” Research by Benoit Monin and Dale Miller on moral licensing shows a similar tendency. Once people do a good deed, they are less likely to continue, at least for a while.

The notion of moral licensing assumes that most of us want to see ourselves as open-minded or generous. Engaging in behavior that is open-minded or generous allows us to see ourselves in these desirable ways, which ironically may free us up to behave close-mindedly or selfishly. Regarding open-mindedness, consider the evolution that has transpired in the management literature on the meaning of diversity. Originally, diversity referred to legally protected categories set forth in the Civil Rights Act of 1964, which was designed to prevent employment discrimination based on race, color, religion, sex, or national origin. Over time, the definition of diversity has broadened, such that employers increasingly use non-legal dimensions – e.g., personality traits, culture, and communication style – as indicators of diversity. An example of a broad definition of diversity may be found on the website of Dow AgroSciences: “Diversity … extends well beyond descriptors such as race, gender, age or ethnicity; we are intentional about including aspects of diversity that address our differences in culture, background, experiences, perspectives, personal and work style.” Modupe Akinola and her colleagues recently discovered that law firms that adopted broader definitions of diversity had fewer women and minorities in their employee base. Thus, behaving open-mindedly (adopting a broad definition of diversity) was associated with law firms acting close-mindedly towards women and minorities.

Regarding generosity, studies have shown that people’s willingness to donate to a charitable cause is lower if, beforehand, they wrote a short story about themselves using morally positive words (e.g., fair, kind) than if they wrote one using morally negative words (e.g., selfish, mean). The same thing happened if people simply thought about an instance in which they behaved morally rather than immorally. When people’s self-image of being moral is top of mind, they feel licensed to behave in less than moral ways.

So, on the one hand, there is evidence that behaving in a certain way or even thinking about those behaviors causes people to do more of the same. On the other hand, there is evidence that prior acts (or reflecting on prior acts) of morality may make people less likely to behave consistently with their past actions. What makes it go one way rather than the other? One watershed factor is how people think about or construe their behavior. All behavior can be construed in abstract ways or in concrete ways. Abstract construals reflect the “forest,” which refers to the central or defining feature of a behavior. Concrete construals reflect the “trees,” which refers to the specific details of a behavior. Abstract construals focus on the why or deeper meaning of behavior whereas concrete construals focus on the details of how the behavior was enacted. For instance, “developing a procedure” may be construed abstractly as increasing work efficiency or concretely as writing down step-by-step instructions. “Contributing to charity” may be construed abstractly as doing the right thing or concretely as writing a check.

When people construe their behavior abstractly, they see it as reflective of their values, their identity, in short, of themselves. When people engage in behavior perceived to reflect themselves, it induces them to show more of the same. However, when the same behavior is construed concretely, it is seen as less relevant to who they are. A moral act viewed concretely provides evidence to people that they are moving in the direction of being a moral person, thereby freeing them up subsequently to succumb to more selfish desires. Supporting this reasoning, Paul Conway and Johanna Peetz showed that when people viewed their acts of morality abstractly they continued to behave morally, whereas when they viewed those same behaviors concretely they subsequently behaved more selfishly.

Not only is it intriguing that moral behavior can foster more of the same or less, but also it is practically important to consider when behaving morally will have one effect rather than the other. People in authority positions, such as parents, teachers, and managers, typically want those over whom they have authority to behave morally over the longer haul. This may happen when children, students, and employees construe their acts of morality abstractly rather than concretely. Moreover, authorities have at their disposal a variety of ways to bring about abstract construals, such as: (1) encouraging people to think about why they are engaging in a given behavior rather than how they are doing so, (2) getting people to think categorically (e.g., by asking questions such as, “Downsizing is an example of what?”) rather than in terms of examples (“What is an example of organizational change?”), and (3) having people think about their behavior from the vantage point of greater psychological distance; for instance, when people think about how their extra efforts to benefit the organization will pay off over the long term, they may be more likely to engage in such activities consistently than if they merely thought about the more immediate benefits.

In The Process Matters, I emphasize that even small differences in how people are treated by authorities can have a big impact on what they think, feel, and do. Here, I am raising a related point: a subtle difference in how people think about their behavior dictates whether their expressions of morality will beget more or less.

Joel Brockner is the Phillip Hettleman Professor of Business at the Columbia Business School. He is the author of A Contemporary Look at Organizational Justice: Multiplying Insult Times Injury and Self-Esteem at Work, and the coauthor of Entrapment in Escalating Conflicts.
