Geoff Mulgan on Big Mind: How Collective Intelligence Can Change Our World

A new field of collective intelligence has emerged in the last few years, prompted by a wave of digital technologies that make it possible for organizations and societies to think at large scale. This “bigger mind”—human and machine capabilities working together—has the potential to solve the great challenges of our time. So why do smart technologies not automatically lead to smart results? Gathering insights from diverse fields, including philosophy, computer science, and biology, Big Mind reveals how collective intelligence can guide corporations, governments, universities, and societies to make the most of human brains and digital technologies. Highlighting differences between environments that stimulate intelligence and those that blunt it, Geoff Mulgan shows how human and machine intelligence could solve challenges in business, climate change, democracy, and public health. Read on to learn more about the ideas in Big Mind.

So what is collective intelligence?

My interest is in how thought happens at a large scale, involving many people and often many machines. Over the last few years many experiments have shown how thousands of people can collaborate online analyzing data or solving problems, and there’s been an explosion of new technologies to sense, analyze and predict. My focus is on how we use these new kinds of collective intelligence to solve problems like climate change or disease—and what risks we need to avoid. My claim is that every organization can work more successfully if it taps into a bigger mind—mobilizing more brains and computers to help it.

How is it different from artificial intelligence?

Artificial intelligence is going through another boom, embedded in everyday things like mobile phones and achieving remarkable breakthroughs in medicine and games. But for most things that really matter we need human intelligence as well as AI, and an overreliance on algorithms can have horrible effects, whether in financial markets or in politics.

What’s the problem?

The problem is that although there’s huge investment in artificial intelligence there’s been little progress in how intelligently our most important systems work—democracy and politics, business and the economy. You can see this in the most everyday aspect of collective intelligence—how we organize meetings, which ignores almost everything that’s known about how to make meetings effective.

What solutions do you recommend?

I show how you can make sense of the collective intelligence of the organizations you’re in—whether universities or businesses—and how they can become better. Much of this is about how we organize our information commons. I also show the importance of countering the many enemies of collective intelligence—distortions, lies, gaming and trolls.

Is this new?

Many of the examples I look at are quite old—like the emergence of an international community of scientists in the 17th and 18th centuries, the Oxford English Dictionary, which mobilized tens of thousands of volunteers in the 19th century, or NASA’s Apollo program, which at its height employed over half a million people in more than 20,000 organizations. But the tools at our disposal are radically different—and more powerful than ever before.

Who do you hope will read the book?

I’m biased but think this is the most fascinating topic in the world today—how to think our way out of the many crises and pressures that surround us. But I hope it’s of particular interest to anyone involved in running organizations or trying to work on big problems.

Are you optimistic?

It’s easy to be depressed by the many examples of collective stupidity around us. But my instinct is to be optimistic that we’ll figure out how to make the smart machines we’ve created serve us well, and that we could be on the cusp of a dramatic enhancement of our shared intelligence. That’s a pretty exciting prospect, and much too important to be left in the hands of the geeks alone.

Geoff Mulgan is chief executive of Nesta, the UK’s National Endowment for Science, Technology and the Arts, and a senior visiting scholar at Harvard University’s Ash Center. He was the founder of the think tank Demos, director of the Prime Minister’s Strategy Unit, and head of policy under Tony Blair. His books include The Locust and the Bee.

Joel Brockner: The Passion Plea

This post originally appeared on the Psychology Today blog.

It’s tough to argue with the idea that passion is an admirable aspect of the human condition. Passionate people are engaged in life; they really care about their values and causes and about being true to them. However, passion becomes a minefield when people use it to excuse or explain away unseemly behavior. We saw this during the summer of 2017 in how the White House press secretary, Sarah Huckabee Sanders, responded to the infamous expletive-laced attack of Anthony Scaramucci on his then fellow members of the Trump team, Steve Bannon and Reince Priebus. According to The New York Times (July 27, 2017), “Ms. Sanders said mildly that Mr. Scaramucci was simply expressing strong feelings, and that his statement made clear that ‘he’s a passionate guy and sometimes he lets that passion get the better of him.’ ” Whereas Ms. Sanders acknowledged that Mr. Scaramucci behaved badly (his passion got the better of him), her meta-message was that it was no big deal, as implied by the words “mildly” and “simply” in the quote above.

The passion plea is by no means limited to the world of politics. Executives who are seen as emotionally rough around the edges by their co-workers often defend their behavior with statements like, “I’m just being passionate,” or “I am not afraid to tell it like it is,” or, “My problem is that I care too much.”

The passion plea distorts reality by glossing over the distinction between what is said and how it is said. Executives who deliver negative feedback in a harsh tone are not just being passionate. Even when the content of the negative feedback is factual, harsh tones convey additional messages – notably a lack of dignity and respect. Almost always, there are ways to send the same strong messages or deliver the same powerful feedback in ways that do not convey a lack of dignity and respect. For instance, Mr. Scaramucci could have said something like, “Let me be as clear as possible: I have strong disagreements with Steve Bannon and Reince Priebus.” It may have been less newsworthy, but it could have gotten the same message across. Arguably, Mr. Scaramucci’s 11-day tenure as White House director of communications would have been longer had he not been so “passionate” and instead used more diplomatic language.

Similarly, executives that I coach rarely disagree when it is made evident that they could have sent the same strong negative feedback in ways that would have been easier for their co-workers to digest. Indeed, this is the essence of constructive criticism, which typically seeks to change the behavior of the person on the receiving end. Rarely are managers accused of coming on “too strong” if they deliver negative feedback in the right ways. For example, instead of saying something about people’s traits or characters (e.g., “You aren’t reliable”), it would be far better to provide feedback with reference to specific behavior (e.g., “You do not turn in your work on time”). People usually are more willing and able to respond to negative feedback about what they do rather than who they are. Adding a problem-solving approach is helpful as well, such as, “Some weeks you can be counted on to do a good job whereas other weeks not nearly as much. Why do you think that is happening, and what can we do together to ensure greater consistency in your performance?” Moreover, the feedback has to be imparted in a reasonable tone of voice, and in a context in which people on the receiving end are willing and able to take it in. For instance, one of my rules in discussing with students why they didn’t do well on an assignment is that we not talk immediately after they have received the unwanted news. It is far better to have a cooling-off period in which defensiveness goes down and open-mindedness goes up.

If our goal is to alienate people or draw negative attention to ourselves then we should be strong and hard-driving, even passionate, in what we say as well as crude and inappropriate in how we say it. However, if we want to be a force for meaningful change or a positive role model, it is well within our grasp to be just as strong and hard-driving in what we say while being respectful and dignified in how we say it.

Joel Brockner is the Phillip Hettleman Professor of Business at Columbia Business School.

Alexandra Logue: Not All Excess Credits Are The Students’ Fault

This post was originally published on Alexandra Logue’s blog

A recent article in Educational Evaluation and Policy Analysis reported on an investigation of policies punishing students for graduating with excess credits.  Excess credit hours are the credits that a student obtains beyond what is required for a degree, and many students graduate having taken several more courses than they needed.

To the extent that tuition does not cover the cost of instruction, and/or that financial aid is paying for these excess credits, someone other than the student—the college or the government—is paying for them.  Graduating with excess credits also means that a student is occupying possibly scarce classroom seats longer than s/he needs to and is not entering the work force with a degree, and paying more taxes, as soon as s/he could.  Thus, there are many reasons why colleges and/or governments might seek to decrease excess credits.  The article considers cases in which states have imposed sanctions on students who graduate with excess credits, charging more for credits taken significantly above the number required for a degree.  The article shows that such policies, rather than prompting students to graduate sooner, have instead resulted in greater student debt.  But the article does not identify the reasons why this may be the case.  Perhaps one reason is that students do not have control over those excess credits.

For example, as described in my forthcoming book, Pathways to Reform: Credits and Conflict at The City University of New York, students may accumulate excess credits because of difficulties they have transferring their credits.  When students transfer, there can be significant delays in having the credits that they obtained at their old institution evaluated by their new institution.  At least at CUNY colleges, the evaluation process can take many months.  During that period, a student either has to stop out of college or take a risk and enroll in courses that may or may not be needed for the student’s degree.  Even when appropriate courses are taken, all too often credits that a student took at the old college as satisfying general education (core) requirements or major requirements become elective credits, or do not transfer at all. A student then has to repeat courses or take extra courses in order to satisfy all of the requirements at the new college.  Given that a huge proportion of students now transfer, or try to transfer, their credits (49% of bachelor’s degree recipients have some credits from a community college, and over one-third of students in the US transfer within six years of starting college), a great number of credits are being lost.

Nevertheless, a 2010 study at CUNY found that a small proportion of the excess credits of its bachelor’s degree recipients was due to transfer—students who never transferred graduated with only one or two fewer excess credits, on average, than did students who did transfer.  Some transfer students may have taken fewer electives at their new colleges in order to have room in their programs to make up nontransferring credits from their old colleges, without adding many excess credits.

But does this mean that we should blame students for those excess credits and make them pay more for them?  Certainly some of the excess credits are due to students changing their majors late and/or to not paying attention to requirements and so taking courses that don’t allow them to finish their degrees, and there may even be some students who would rather keep taking courses than graduate.

But there are still other reasons that students may accumulate extra credits, reasons for which the locus of control is not the student.  Especially in financially strapped institutions, students may have been given bad or no guidance by an advisor.  In addition, students may have been required to take traditional remedial courses, which can result in a student acquiring many of what CUNY calls equated credits, on top of the required college-level credits (despite the fact that there are more effective ways to deliver remediation without the extra credits). Or a student may have taken extra courses that s/he didn’t need to graduate in order to continue to enroll full-time, so that the student could continue to be eligible for some types of financial aid and/or (in the past) health insurance. Students may also have made course-choice errors early in their college careers, when they were unaware of excess-credit tuition policies that would only have an effect years later.

The fact that the imposition of excess-credit tuition policies did not affect the number of excess credits accumulated but instead increased student debt by itself suggests that, to at least some degree, the excess credits are not something that students can easily avoid, and/or that there are counter-incentives operating that are even stronger than the excess tuition.

Before punishing students, or trying to control their behavior, we need to have a good deal of information about all of the different contingencies to which students are subject.  Students should complete their college’s requirements as efficiently as possible.  However, just because some students demonstrate delayed graduation behavior does not mean that they are the ones who are controlling that behavior.  Decreasing excess credits needs to be a more nuanced process, with contingencies and consequences tailored appropriately to those students who are abusing the system, and those who are not.

Alexandra W. Logue is a research professor at the Center for Advanced Study in Education at the Graduate Center, CUNY. From 2008 to 2014, she served as executive vice chancellor and university provost of the CUNY system. She is the author of Pathways to Reform: Credits and Conflict at The City University of New York.

Omnia El Shakry: Psychoanalysis and Islam

Omnia El Shakry’s new book, The Arabic Freud, is the first in-depth look at how postwar thinkers in Egypt mapped the intersections between Islamic discourses and psychoanalytic thought.

What are the very first things that pop into your mind when you hear the words “psychoanalysis” and “Islam” paired together?  For some of us the connections might seem improbable or even impossible. And if we were to be brutally honest, the two terms might even evoke the specter of a so-called “clash of civilizations” between an enlightened, self-reflective West and a fanatical and irrational East.

It might surprise many of us to know, then, that Sigmund Freud, the founding figure of psychoanalysis, was ever-present in postwar Egypt, engaging the interest of academics, novelists, lawyers, teachers, and students alike. In 1946 Muhammad Fathi, a Professor of Criminal Psychology in Cairo, ardently defended the relevance of Freud’s theories of the unconscious for the courtroom, particularly for understanding the motives behind homicide. Readers of Nobel laureate Naguib Mahfouz’s 1948 The Mirage were introduced to the Oedipus complex, graphically portrayed in the novel, by immersing themselves in the world of its protagonist—pathologically and erotically fixated on his possessive mother. And by 1951 Freudian theories were so well known in Egypt that a secondary school philosophy teacher proposed prenuptial psychological exams in order to prevent unhappy marriages due to unresolved Oedipus complexes!

Scholars who have tackled the question of psychoanalysis and Islam have tended to focus on it as a problem, assuming that psychoanalysis and Islam have been “mutually ignorant” of each other, and they have placed Islam on the couch, as it were, alleging that it is resistant to the “secular” science of psychoanalysis. In my book, The Arabic Freud, I undo the terms of this debate and ask, instead, what it might mean to think of psychoanalysis and Islam together, not as a “problem,” but as a creative encounter of ethical engagement.

What I found was that postwar thinkers in Egypt saw no irreconcilable differences between psychoanalysis and Islam. And in fact, they frequently blended psychoanalytic theories with classical Islamic concepts. For example, when they translated Freud’s concept of the unconscious, the Arabic term used, “al-la-shuʿur,” was taken from the medieval mystical philosopher Ibn ʿArabi, renowned for his emphasis on the creative imagination within Islamic spirituality.

Islamic thinkers further emphasized similarities between Freud’s interpretation of dreams and Islamic dream interpretation, and they noted that the analyst-analysand (therapist-patient) relationship and the spiritual master-disciple relationship of Sufism (the phenomenon of mysticism in Islam) were nearly identical. In both instances, there was an intimate relationship in which the “patient” was meant to forage their unconscious with the help of their shaykh (spiritual guide) or analyst, as the case might be. Both Sufism and psychoanalysis, then, were characterized by a relationship between the self and the other that was mediated by the unconscious. Both traditions exhibited a concern for the relationship between what was hidden and what was shown in psychic and religious life, both demonstrated a preoccupation with eros and love, and both mobilized a highly specialized vocabulary of the self.

What, precisely, are we to make of this close connection between Islamic mysticism and psychoanalysis? On the one hand, it helps us identify something of a paradox within psychoanalysis, namely that for some psychoanalysis represents a non-religious and even atheistic world view. And there is ample evidence for this view within Freud’s own writings, which at times pathologized religion in texts such as The Future of an Illusion and Civilization and Its Discontents. At the same time, in Freud and Man’s Soul, Bruno Bettelheim argued that in the original German Freud’s language was full of references to the soul, going so far as to refer to psychoanalysts as “a profession of secular ministers of souls.” Similarly, psychoanalysis was translated into Arabic as “tahlil al-nafs”—the analysis of the nafs, which means soul, psyche, or self and has deeply religious connotations. In fact, throughout the twentieth century there have been psychoanalysts who have maintained a receptive attitude towards religion and mysticism, such as Marion Milner or Sudhir Kakar. What I take all of this to mean is that psychoanalysis as a tradition is open to multiple, oftentimes conflicting, interpretations and we can take Freud’s own ambivalence towards religion, and towards mysticism in particular, as an invitation to rethink the relationship between psychoanalysis and religion.

What, then, if religious forms of knowledge, and the encounter between psychoanalysis and Islam more specifically, might lead us to new insights into the psyche, the self, and the soul? What would this mean for how we think about the role of religion and ethics in the making of the modern self? And what might it mean for how we think about the relationship between the West and the Islamic world?

Omnia El Shakry is Professor of History at the University of California, Davis. She is the author of The Great Social Laboratory: Subjects of Knowledge in Colonial and Postcolonial Egypt and the editor of Gender and Sexuality in Islam. Her new book, The Arabic Freud, is out this September.

Chris Chambers: The Seven Deadly Sins of Psychology

Psychological science has made extraordinary discoveries about the human mind, but can we trust everything its practitioners are telling us? In recent years, it has become increasingly apparent that a lot of research in psychology is based on weak evidence, questionable practices, and sometimes even fraud. The Seven Deadly Sins of Psychology by Chris Chambers diagnoses the ills besetting the discipline today and proposes sensible, practical solutions to ensure that it remains a legitimate and reliable science in the years ahead.

Why did you decide to write this book?

CC: Over the last fifteen years I’ve become increasingly fed up with the “academic game” in psychology, and I strongly believe we need to raise standards to make our research more transparent and reliable. As a psychologist myself, one of the key lessons I’ve learned is that there is a huge difference between how the public thinks science works and how it actually works. The public have this impression of scientists as objective truth seekers on a selfless mission to understand nature. That’s a noble picture, but it bears little resemblance to reality. Over time, the mission of psychological science has eroded from something probably quite close to that vision into a contest for short-term prestige and career status, corrupted by biased research practices, bad incentives, and occasionally even fraud.

Many psychologists struggle valiantly against the current system but they are swimming against a tide. I trained within that system. I understand how it works, how to use it, and how it can distort your thinking. After 10 years of “playing the game” I realized I didn’t like the kind of scientist I was turning into, so I decided to try and change the system and my own practices—not only to improve science but to help younger scientists avoid my predicament. At its heart this book lays out my view of how we can reinvigorate psychology by adopting an emerging philosophy called “open science.” Some people will agree with this solution. Many will not. But, above all, the debate is important to have.

It sounds like you’re quite skeptical about science generally.

CC: Even though I’m quite critical about psychology, the book shouldn’t be seen as anti-science—far from it. Science is without doubt the best way to discover the truth about the world and make rational decisions. But that doesn’t mean it can’t or shouldn’t be improved. We need to face the problems in psychology head-on and develop practical solutions. The stakes are high. If we succeed then psychology can lead the way in helping other sciences solve similar problems. If we fail then I believe psychology will fade into obscurity and become obsolete.

Would it matter if psychology disappeared? Is it really that important?

CC: Psychology is a huge part of our lives. We need it in every domain where it is important to understand human thought or behavior, from treating mental illness, to designing traffic signs, to addressing global problems like climate change, to understanding basic (but extraordinarily complex) mental functions such as how we see or hear. Understanding how our minds work is the ultimate journey of self-discovery and one of the fundamental sciences. And it’s precisely because the world needs robust psychological science that researchers have an ethical obligation to meet the high standards expected of us by the public.

Who do you think will find your book most useful?

CC: I have tried to tailor the content for a variety of different audiences, including anyone who is interested in psychology or how science works. Among non-scientists, I think the book may be especially valuable for journalists who report on psychological research, helping them overcome common pitfalls and identify the signs of bad or weak studies. At another level, I’ve written this as a call-to-arms for my fellow psychologists and scientists in closely aligned disciplines, because we need to act collectively in order to fix these problems. And the most important readers of all are the younger researchers and students who are coming up in the current academic system and will one day inherit psychological science. We need to get our house in order to prepare this generation for what lies ahead and help solve the difficulties we inherited.

So what exactly are the problems facing psychology research?

CC: I’ve identified seven major ills, which (a little flippantly, I admit) can be cast as seven deadly sins. In order they are Bias, Hidden Flexibility, Unreliability, Data Hoarding, Corruptibility, Internment, and Bean Counting. I won’t ruin the suspense by describing them in detail, but they all stem from the same root cause: we have allowed the incentives that drive individual scientists to fall out of step with what’s best for scientific advancement. When you combine this with the intense competition of academia, it creates a research culture that is biased, closed, fearful and poorly accountable—and just as a damp bathroom encourages mold, a closed research culture becomes the perfect environment for cultivating malpractice and fraud.

It all sounds pretty bad. Is psychology doomed?

CC: No. And I say this emphatically: there is still time to turn this around. Beneath all of these problems, psychology has a strong foundation; we’ve just forgotten about it in the rat race of modern academia. There is a growing movement to reform research practices in psychology, particularly among the younger generation. We can solve many problems by adopting open scientific practices—practices such as pre-registering study designs to reduce bias, making data and study materials as publicly available as possible, and changing the way we assess scientists for career advancement. Many of these problems are common to other fields in the life sciences and social sciences, which means that if we solve them in psychology we can solve them in those areas too. In short, it is time for psychology to grow up, step up, and take the lead.

How will we know when we’ve fixed the deadly sins?

CC: The main test is that our published results should become a lot more reliable and repeatable. As things currently stand, there is a high chance that any new result published in a psychology journal is a false discovery. So we’ll know we’ve cracked these problems when we can start to believe the published literature and truly rely on it. When this happens, and open practices become the norm, the closed practices and weak science that define our current culture will seem as primitive as alchemy.

Chris Chambers is professor of cognitive neuroscience in the School of Psychology at Cardiff University and a contributor to the Guardian science blog network. He is the author of The 7 Deadly Sins of Psychology: A Manifesto for Reforming the Culture of Scientific Practice.

Joel Brockner: Can Job Autonomy Be a Double-Edged Sword?

This post was originally published on the Psychology Today blog.

“You can arrive at work whenever convenient.”

“Work from home whenever you wish.”

“You can play music at work at any time.”

These are examples of actual workplace policies from prominent companies such as Aetna, American Express, Dell, Facebook, Google, IBM, and Zappos. They have joined the ranks of many organizations in giving employees greater job autonomy, that is, more freedom to decide when, where, and how to do their work. And why not? Research by organizational psychologists such as Richard Hackman and Greg Oldham and by social psychologists such as Edward Deci and Richard Ryan has shown that job autonomy can have many positive effects. The accumulated evidence is that employees who experience more autonomy are more motivated, creative, and satisfied with their jobs.

Against this backdrop of the generally favorable effects of job autonomy, recent research has shown that it also may have a dark side: unethical behavior. Jackson Lu, Yoav Vardi, Ely Weitz and I discovered such results in a series of field and laboratory studies soon to be published in the Journal of Experimental Social Psychology. In field studies conducted in Israel, employees from a wide range of industries rated how much autonomy they had and how often they engaged in unethical behavior, such as misrepresenting their work hours or wasting work time on private phone calls. Those who had greater autonomy said that they engaged in more unethical behavior on the job.

In laboratory experiments conducted in the United States, we found that it may not even be necessary for people to have actual autonomy for them to behave unethically; merely priming them with the idea of autonomy may do the trick. In these studies, participants were randomly assigned to conditions differing in how much the concept of autonomy was called to mind. This was done with a widely used sentence-unscrambling task in which people had to rearrange multiple series of words into grammatically correct sentences. For example, those in the high-autonomy condition were given words such as, “have many as you as days wish you vacation may,” which could be rearranged to form the sentence, “You may have as many vacation days as you wish.” In contrast, those in the low-autonomy condition were given words such as, “office in work you must the,” which could be rearranged to, “You must work in the office.” After completing the sentence-unscrambling exercise, participants did another task in which they were told that the amount of money they earned depended on how well they performed. The activity was structured in a way that enabled us to tell whether participants lied about their performance. Those who were previously primed to experience greater autonomy in the sentence-unscrambling task lied more.

Job autonomy gives employees a sense of freedom, which usually has positive effects on their productivity and morale, but it can also lead them to feel that they can do whatever they want, including not adhering to rules of morality.

All behavior is a function of what people want to do (motivation) and what they are capable of doing (ability). Consider the unethical behavior elicited by high levels of autonomy. Having high autonomy may not have made people want to behave unethically. However, it may have enabled the unethical behavior by making it possible for people to engage in it. Indeed, the distinction between people wanting to behave unethically versus having the capability of doing so may help answer two important questions:

(1) What might mitigate the tendency for job autonomy to elicit unethical behavior?

(2) If job autonomy can lead to unethical behavior, should companies re-evaluate whether to give it to their employees? That is, can job autonomy be introduced in a way that maximizes its positive consequences (e.g., greater creativity) without introducing the negative effect of unethical behavior?

With respect to the first question, my hunch is that people who have job autonomy and therefore are able to behave unethically will not do so if they do not want to behave unethically. For example, people who are high on the dimension of moral identity, for whom behaving morally is central to how they define themselves, would be less likely to behave unethically even when a high degree of job autonomy enabled or made it possible for them to do so.

With respect to the second question, I am not recommending that companies abandon their efforts to provide employees with job autonomy. Our research suggests, rather, that the consequences of giving employees autonomy may not be uniformly favorable. Taking a more balanced view of how employees respond to job autonomy may shed light on how organizations can maximize its positive effects while minimizing the negative consequence of unethical behavior.

Although people generally value having autonomy, some want it more than others. Those who strongly desire autonomy may be less likely to behave unethically when they experience it. For one thing, they may be concerned that the autonomy they covet would be taken away if they took advantage of it by behaving unethically. This reasoning led us to do another study to evaluate when the potential downside of felt autonomy can be minimized while its positive effects are maintained. Once again, we primed people to experience varying degrees of job autonomy with the word-unscrambling exercise. Half of them then went on to do the task that measured their tendency to lie about their performance, whereas the other half completed an entirely different task, one measuring their creativity. Once again, those who worked on the task in which they could lie about their performance did so more when they were primed to experience greater autonomy. And, as has been found in previous research, those who did the creativity task performed better at it when they were primed to experience greater autonomy.

Regardless of whether they did the task that measured unethical behavior or creativity, participants also indicated how much they generally valued having autonomy. Among those who generally valued having autonomy to a greater extent, (1) the positive relationship between experiencing job autonomy and behaving unethically diminished, whereas (2) the positive relationship between experiencing job autonomy and creativity was maintained. In other words, as long as people valued having autonomy, the experience of autonomy had the positive effect of enhancing creativity without introducing the dangerous side effect of unethical behavior. So, when organizations introduce job autonomy policies like those mentioned at the outset, they may gain greater overall benefits when they ensure that their employees value having autonomy. This may be achieved by selecting employees who value having autonomy as well as by creating a corporate culture which emphasizes the importance of it. More generally, a key practical takeaway from our studies is that when unethical behavior is enabled, whether through job autonomy or other factors, it needs to be counterbalanced by conditions that make employees not want to go there.

Joel Brockner is the Phillip Hettleman Professor of Business at Columbia Business School. He is the author of The Process Matters: Engaging and Equipping People for Success.

Face Value: Man or Woman?

In Face Value: The Irresistible Influence of First Impressions, Princeton professor of psychology Alexander Todorov delves into the science of first impressions. In honor of the book’s release, we’re sharing examples from his research every week.


It is easy to identify the woman in the image on the right and the man in the image on the left. But the two images are almost identical with one subtle difference: the skin surface in the image on the left is a little bit darker. The eyes and lips of the faces are identical, but the rest of the image on the left was darkened, and the rest of the image on the right was lightened. This manipulation makes the face on the left look masculine and the face on the right look feminine. This is one way to induce the gender illusion. Here is another one.

Based on research reported in

  1. R. Russell (2009). “A sex difference in facial contrast and its exaggeration by cosmetics.” Perception 38, 1211–1219.


Face Value: Eyebrows

In Face Value: The Irresistible Influence of First Impressions, Princeton professor of psychology Alexander Todorov delves into the science of first impressions. Throughout the month of May, we’ll be sharing examples of his research. 

 


It is easier to recognize Richard Nixon when his eyes are removed than when his eyebrows are removed from the image. Our intuitions about what facial features are important are insufficient at best and misleading at worst.

 

Based on research reported in

  1. J. Sadr, I. Jarudi, and P. Sinha (2003). “The role of eyebrows in face recognition.” Perception 32, 285–293.


Face Value: Can you recognize the celebrities?

In Face Value: The Irresistible Influence of First Impressions, Princeton professor of psychology Alexander Todorov delves into the science of first impressions. Throughout the month of May, we’ll be sharing examples of his research. 

 


 

A: Justin Bieber and Beyoncé

 


The Great Mother—Jackets throughout the years

Goddess, monster, gate, pillar, tree, moon, sun, vessel, and every animal from snakes to birds: the maternal has been represented throughout history as both nurturing and fearsome, a primordial image of the human psyche. In celebration of Mother’s Day, we dipped into the archives for a tour of the various covers of a landmark book, Erich Neumann’s The Great Mother.

Face Value: Who is more likely to have committed a violent crime?

In Face Value: The Irresistible Influence of First Impressions, Princeton professor of psychology Alexander Todorov delves into the science of first impressions. Throughout the month of May, we’ll be sharing examples of his research. 

 


 

The face on the right was manipulated to be perceived as more criminal looking and the face on the left as less criminal looking.

Note that these immediate impressions need not be grounded in reality. They are our visual stereotypes of what constitutes criminal appearance. Note also the large number of differences between the two faces: shape, color, texture, individual features, placement of individual features, and so on. Yet we can easily identify global characteristics that differentiate these faces. More masculine appearance makes a face appear more criminal. In contrast, more feminine appearance makes a face appear less criminal. But keep in mind that it is impossible to describe all the variations between the two faces in verbal terms.

Based on research reported in

  1. F. Funk, M. Walker, and A. Todorov (2016). “Modeling perceived criminality and remorse in faces using a data-driven computational approach.” Cognition & Emotion, http://dx.doi.org/10.1080/02699931.2016.1227305.

 


Alexander Todorov on the science of first impressions

We make up our minds about others after seeing their faces for a fraction of a second—and these snap judgments predict all kinds of important decisions. For example, politicians who simply look more competent are more likely to win elections. Yet the character judgments we make from faces are as inaccurate as they are irresistible; in most situations, we would guess more accurately if we ignored faces. So why do we put so much stock in these widely shared impressions? What is their purpose if they are completely unreliable? In Face Value, Alexander Todorov, one of the world’s leading researchers on the subject, answers these questions as he tells the story of the modern science of first impressions. Here he responds to a few questions about his new book.

What inspired you to write this book?

AT: I have been doing research on how people perceive faces for more than 10 years. Typically, we think of face perception as recognizing identity and emotional expressions, but we do much more than that. When we meet someone new, we immediately evaluate their face, and these evaluations shape our decisions. This is what we informally call first impressions. First impressions pervade everyday life and often have detrimental consequences. Research on first impressions from facial appearance has been quite active during the last decade, and we have made substantive progress in understanding these impressions. My book is about the nature of first impressions, why we cannot help but form them, and why they will not disappear from our lives.

In your book, you argue that first impressions from facial appearance are irresistible. What is the evidence?

AT: As I mentioned, the study of first impressions has been a particularly active area of research and the findings have been quite surprising. First, we form impressions after seeing a face for less than one-tenth of a second. We decide not only whether the person is attractive but also whether he or she is trustworthy, competent, extroverted, or dominant. Second, we agree on these impressions and this agreement emerges early in development. Children, just like adults, are prone to using face stereotypes. Third, these impressions are consequential. Unlucky people who appear “untrustworthy” are more likely to get harsher legal punishments. Those who appear “trustworthy” are more likely to get loans on better financial terms. Politicians who appear more “competent” are more likely to get elected. Military personnel who appear more “dominant” are more likely to achieve higher ranks. My book documents both the effortless nature of first impressions and their biasing effects on decisions.

The first part of your book is about the appeal of physiognomy—the pseudoscience of reading character from faces. Hasn’t physiognomy been thoroughly discredited?

AT: Yes and no. Most people today don’t believe in the great physiognomy myth that we can read the character of others from their faces, but the evidence suggests that we are all naïve physiognomists: forming instantaneous impressions and acting on these impressions. Moreover, fueled by recent research advances in visualizing the content of first impressions, physiognomy appears in many modern disguises: from research papers claiming that we can discern the political, religious, and sexual orientations of others from images of their faces to private ventures promising to profile people based on images of their faces and offering business services to companies and governments. This is nothing new. The early 20th-century physiognomists, who called themselves “character analysts,” were involved in many business ventures. The modern physiognomists are relying on empirical and computer science methods to legitimize their claims. But as I try to make clear in the book, the modern claims are as far-fetched as the claims of the old physiognomists. First, different images of the same person can lead to completely different impressions. Second, our decisions are often more accurate if we completely ignore face information and rely on common knowledge.

You mentioned research advances that visualize the content of first impressions. What do you mean?

AT: Faces are incredibly complex stimuli, and we are exquisitely sensitive to minor variations in facial appearance. This makes the study of face perception both fascinating and difficult. In the last 10 years, we have developed methods that capture the variations in facial appearance that lead to specific impressions such as trustworthiness. The best way to illustrate the methods is by providing visual images, because it is impossible to describe all these variations in verbal terms. Accordingly, the book is richly illustrated. Here is a pair of faces that have been extremely exaggerated to show the variations in appearance that shape our impressions of trustworthiness.


Most people immediately see the face on the left as untrustworthy and the face on the right as trustworthy. But notice the large number of differences between the two faces: shape, color, texture, individual features, placement of individual features, and so on. Yet we can easily identify global characteristics that differentiate these faces. Positive expressions and feminine appearance make a face appear more trustworthy. In contrast, negative expressions and masculine appearance make a face appear less trustworthy. We can and have built models of many other impressions such as dominance, extroversion, competence, threat, and criminality. These models identify the contents of our facial stereotypes.

To the extent that we share face stereotypes that emerge early in development, isn’t it possible that these stereotypes are grounded in our evolutionary past and, hence, have a kernel of truth?

AT: On the evolutionary scale, physiognomy has a very short history. If you imagine the evolution of humankind compressed within 24 hours, we have lived in small groups during the entire 24 hours except for the last 5 minutes. In such groups, there is abundant information about others coming from first-hand experiences (like observations of behavior and interactions) and from second-hand experiences (like testimonies of family, friends, and acquaintances). That is, for most of human history, people did not have to rely on appearance information to infer the character of others. These inferences were based on much more reliable and easily accessible information. The emergence of large societies in the last few minutes of the day changed all that. The physiognomists’ promise was that we could handle the uncertainty of living with strangers by knowing them from their faces. It is no coincidence that the peaks in the popularity of physiognomic ideas came during times of great migration. Unfortunately, the physiognomists’ promise is as appealing today as it was in the past.

Are there ways to minimize the effects of first impressions on our decisions?

AT: We need to structure decisions so that we have access to valid information and minimize access to appearance information. A good real-life example is the increase in the number of women in prestigious philharmonic orchestras. Until recently, these orchestras were almost exclusively populated by men. What made the difference was the introduction of blind auditions. The judges could hear the candidates’ performance, but their judgments could not be swayed by appearance, because they could not see the candidates.

So why are faces important?

AT: Faces play an extremely important role in our mental life, though not the role the physiognomists imagined. Newborns with virtually no visual experience prefer to look at faces rather than at other objects. After all, without caregivers we would not survive. In the first few months of life, faces are among the objects infants look at most. This intensive experience with faces develops into an intricate network of brain regions dedicated to the processing of faces. This network supports our extraordinary face skills: recognizing others and detecting changes in their emotional and mental states. There are likely evolutionary adaptations in the human face—our bare skin, elongated eyes with white sclera, and prominent eyebrows—but these adaptations are about facilitating the reading of other minds, about communicating and coordinating our actions, not about inferring character.

Alexander Todorov is professor of psychology at Princeton University, where he is also affiliated with the Princeton Neuroscience Institute and the Woodrow Wilson School of Public and International Affairs. He is the author of Face Value: The Irresistible Influence of First Impressions.