Keith Oatley on Our Minds, Our Selves: A Brief History of Psychology

Advances in psychology have revolutionized our understanding of the human mind. Imaging technology allows researchers to monitor brain activity, letting us see what happens when we perceive, think, and feel. But technology is only part of how ideas about the mind and brain have developed over the past century and a half. In Our Minds, Our Selves, distinguished psychologist and writer Keith Oatley provides an engaging, original, and authoritative history of modern psychology told through the stories of its most important breakthroughs and the men and women who made them.

What prompted you to write this book?

There didn’t seem to be a book about the mind that people could read and say, “Oh, that’s why I see a certain person in this way, but feel myself to be rather different,” or “So that’s what goes on in a conversation.” I wanted to write a book on psychology that throws light on everyday human life, that gives the reader a sense of important turning points in research, and that focuses on the deeper principles of how the mind works, principles that help us think about our selves and others.

We like to think that we’re in direct touch with reality, but you say that’s not quite how it is.

In a way we are in touch with reality, but the mind isn’t a place into which reality can enter through eyes and ears. It’s the other way round: we project what we know from our inner understandings onto what comes in from the senses. Think of reading. If you did not know how to read, what you would see on a page would be black bits and white spaces. But since you can read, you project meanings onto these bits and spaces. With people it’s the same. You can read other people’s minds, project your understandings onto them.

You start with the problem of consciousness. Isn’t that a bit difficult?

Consciousness may seem difficult and people have argued about it for centuries, but the basic idea is straightforward. The brain contains some 86 billion nerve cells, each of which has connections to hundreds or thousands of others. We couldn’t possibly be aware of everything that goes on as these neurons interact with each other. The brain gives us a set of conclusions from these processes. This was the theory that Helmholtz proposed. Not many people know it, but really he was the main founder of our understanding of mind. The conclusions the mind offers are what we become conscious of: “That’s what it’s like out there in the world, laid out in space, with people to meet, objects to use, places to go.” From physics we might get a different depiction, perhaps of protons and electrons and waves, but that wouldn’t be of much help to us, in our ordinary lives. The conclusions the brain offers come into our conscious awareness, from sampling patterns of light and sound, to tell us: “That’s what this person means.” Or looking back to something remembered: “That’s what happened.” Or looking forward, with a plan: “Here’s what I might do.”

You say the book is about the principles of mind. What do you mean?

The deepest principle is that the mind offers us conclusions by being able to make models of the world, and even of our own self. A clock is a model of the rotation of the earth. We use it to get up in the morning, or to go and meet a friend. With some kinds of models, we can do more: we can see what would happen if we make alterations to the model, because models are things we can change. We translate an idea, or an aspiration, into our model of the world. Then we can manipulate the model, change it, to create new states. We call this thinking. Then we translate back again, from the resulting model-states into terms of the world again, to see something in a particular way, or to say this, or to do that.

You said there are other principles, too. What’s another one?

The characteristic of our human species that separates us from other animals is our ability to cooperate. From an early age, human children, but not chimpanzees, can recognize when someone is trying to do something, but isn’t quite able to, and can know how to help. A two-year-old child, for instance, can see an adult with her hands full of books who seems to be wanting to put the books into a cupboard, but because her hands are full can’t open the door. The two-year-old will open the cupboard door for this person. And children of this age start to make joint plans, for instance when they play. They don’t just play on their own, they play together. Even a simple game like hide-and-seek requires cooperation. Plans that involve goals and activities shared with others become more important than anything else for us: how to join with another in living together, how to raise a family, how to cooperate with others at our place of work for ends that are useful. This principle widens so that we humans form communities and cultures, in which what goes for the whole group becomes important. So we try to be helpful, we are upset by injustice, we don’t want to tolerate people who are destructive. This is called morality. We strive to make the world a better place, not just for our selves, individually, but for everyone.

Is human intelligence going to be overtaken by artificial intelligence?

The most recent kinds of artificial intelligence are starting to think in ways similar to how we humans think, by forming intuitions from many examples, and projecting meanings from these intuitions onto new inputs. Often, when we humans have encountered a new group of people, or a new situation, we have become antagonistic; we have reacted as if the situation is one of conflict. With newer forms of artificial intelligence, we will need to think hard, to take on what is known from psychology, history, and social science, to fashion not conflict but cooperation with these new forms.

Keith Oatley is a distinguished academic researcher and teacher, as well as a prize-winning novelist. He has written for scientific journals, the New York Times, New Scientist, Psychology Today, and Scientific American Mind. He is the author of many books, including Such Stuff as Dreams and The Passionate Muse, and a coauthor of the leading textbook on emotion. He is professor emeritus of cognitive psychology at the University of Toronto and lives in Toronto.

Omnia El Shakry: Genealogies of Female Writing


Throughout Women’s History Month, join Princeton University Press as we celebrate scholarship by and about women.

by Omnia El Shakry

In the wake of the tumultuous year for women that was 2017, many female scholars have been reflecting upon their experiences in the academy, ranging from sexual harassment to the everyday experiences of listening to colleagues mansplain or even intellectually demean women’s work. Indeed, I can vividly recall, as a young assistant professor, hearing a senior male colleague brush off what has now become a canonical text in the field of Middle East studies as “merely” an example of gender history, with no wider relevance to the region. Gender history rolled off his tongue with disdain and there was an assumption that it was distinct from real history.

Few now, however, would deign to publicly discount the role that female authors have played in the vitality of the field of Middle East studies. In recognition of this, the Middle East Studies Association of North America has inaugurated new book awards honoring the pioneering efforts of two women in the field, Nikki Keddie and Fatima Mernissi. I can still remember the first time I read Mernissi’s work while an undergraduate at the American University in Cairo. Ever since my freshman year, I had enrolled in Cultural Anthropology courses with Soraya Altorki—a pioneering anthropologist who had written about Arab Women in the Field and the challenges of studying one’s own society. In her courses, and elsewhere, I was introduced to Lila Abu-Lughod’s Veiled Sentiments, an ethnography of poetry and everyday discourse in a Bedouin community in Egypt’s Western Desert. Abu-Lughod’s narrative was sensitive to questions of positionality, a lesson she both drew from and imbued with feminism. A second piece of writing, this time an article by Stefania Pandolfo on “Detours of Life” that interpreted the internal logic of imagining space and bodies in a Moroccan village, gave me a breathtaking view of ethnography, the heterogeneity of lifeworlds, and the work of symbolic interpretation.

In hindsight I can see that these early undergraduate experiences of reading, and studying with, female anthropologists profoundly impacted my own writing. Although I would eventually become a historian, I remained interested in the ethnographic question of encounters, and specifically of how knowledge is produced through encounters—whether the encounter between the colonizer and the colonized or between psychoanalysis and Islam. In my most recent book, The Arabic Freud: Psychoanalysis and Islam in Modern Egypt, I ask what it might mean to think of psychoanalysis and Islam together, not as a “problem” but as a creative encounter of ethical engagement. Rather than conceptualizing modern intellectual thought as something developed in Europe, merely to be diffused at its point of application elsewhere, I imagine psychoanalytic knowledge as something elaborated across the space of human difference.

There is yet another female figure who stands at the door of my entry into writing about the Middle East. My grandmother was a strong presence in my early college years. Every Friday afternoon I would head over to her apartment, just a quick walk away from my dorm in downtown Cairo. We would eat lunch, laugh and talk, and watch the subtitled American soap operas that were so popular back then. Since she could not read or write, we would engage in a collective work of translation while watching and I often found her retelling of the series to be far more imaginative than anything network television writers could ever have produced.

Writing for me is about the creative worlds of possibility and of human difference that exist both within and outside of the written word. As historians, when we write, we are translating between the living and the dead, as much as between different lifeworlds, and we are often propelled by intergenerational and transgenerational bonds that include the written word, but also exceed it.

Omnia El Shakry is professor of history at the University of California, Davis. She is the author of The Arabic Freud: Psychoanalysis and Islam in Modern Egypt.

Andrew Scull: On the response to mass shootings

America’s right-wing politicians have developed a choreographed response to the horrors of mass shootings. In the aftermath of Wednesday’s massacre of the innocents, President Trump stuck resolutely to the script. Incredibly, he managed to avoid even mentioning the taboo word “guns.” In his official statement on this week’s awfulness, he offered prayers for the families of the victims—as though prayers will salve their wounds, or prevent the next outrage of this sort; they now fall thick and fast upon us. And he spouted banalities: “No child, no teacher, should ever be in danger in an American school.” That, of course, was teleprompter Trump. The real Trump, as always, had surfaced hours earlier on Twitter. How had such a tragedy come to pass? On cue, we get the canned answer: the issue was mental health: “So many signs that the Florida shooter was mentally disturbed.” Ladies and gentlemen, we have a mental health problem, don’t you see, not a gun problem.

Let us set aside the crass hypocrisy of those who, having spent so much time attempting to destroy access to health care (including mental health care) for tens of millions of people, now bleat about the need to provide treatment for mental illness. Let us ignore the fact that President Trump, with a stroke of a pen, set aside regulations that made it a little more difficult for “deranged” people to obtain firearms. They have Second Amendment rights too, or so it would seem. Let us overlook the fact that in at least two of the recent mass shootings, the now-dead were worshipping the very deity their survivors and the rest of us are invited to pray to when they were massacred. Let us leave all of that out of account. Do we really just have a mental health problem here, and would addressing that problem make a dent in the rash of mass killings?

Merely to pose the question is to suggest how fatuous this whole approach is. Pretend for a moment that all violence of this sort is the product of mental illness, not, as is often the case, the actions of evil, angry, or viciously prejudiced souls. Is there the least prospect that any conceivable investment in mental health care could anticipate and forestall gun massacres? Of course not. Nowhere in recorded history, on no continent, in no country, in no century, has any society succeeded in eliminating or even effectively addressing serious forms of mental illness. Improving the lot of those with serious mental illness is a highly desirable goal. Leaving the mentally disturbed to roam or rot on our sidewalks and in our “welfare” hotels, or using a revolving door to move them in and out of jail—the central elements of current mental health “policy”—constitutes a national disgrace. But alleviating that set of problems (as unlikely as that seems in the contemporary political climate) will have zero effect on gun violence and mass shootings.

Mental illness is a scourge that afflicts all civilized societies. The Bible tells us, “The poor ye shall always have with you.”  The same, sadly, is true of mental illness. Mental distress and disturbance constitute one of the most profound sources of human suffering, and simultaneously constitute one of the most serious challenges of both a symbolic and practical sort to the integrity of the social fabric. Whether one looks to classical Greece and Rome, to ancient Palestine or the Islamic civilization that ruled much of the Mediterranean for centuries, to the successive Chinese empires or to feudal and early modern Europe, everywhere people have wrestled with the problem of insanity, and with the need to take steps to protect themselves against the depredations of the minority of the seriously mentally ill people who pose serious threats of violence. None of these societies, or many more I could mention, ever saw the levels of carnage we Americans now accept as routine and inevitable.

Mental illness is an immutable feature of human existence. Its association with mass slaughter most assuredly has not been. Our ancestors were not so naïve as to deny that madness was associated with violence. The mentally ill, in the midst of their delusions, hallucinations, and fury, were sometimes capable of horrific acts: consider the portrait in Greek myth of Heracles dashing out the brains of his children, in his madness thinking them the offspring of his mortal enemy Eurystheus; Lucia di Lammermoor stabbing her husband on their wedding night; or Zola’s anti-hero of La Bête humaine, Jacques Lantier, driven by passions that escape the control of his reason, raping and killing the object of his desire: these and other fictional representations linking mental illness to animality and violence are plausible to those encountering them precisely because they match the assumptions and experience of the audiences toward whom they are directed. And real-life maddened murderers were to be found in all cultures across historical time. Such murders were one of the known possible consequences of a descent into insanity. But repeated episodes of mass killing by deranged individuals, occurring as a matter of routine? Nowhere in the historical record can precursors of the contemporary American experience be found. It is long past time to stop blaming an immutable feature of human culture—severe mental illness—for routine acts of deadly violence that are instead the product of a resolute refusal to face the consequences of unbridled access to a deadly form of modern technology.

Claims that the mowing down of unarmed innocents is a mental health problem cannot explain why, in that event, such massacres are exceedingly rare elsewhere in the contemporary world, while they are now routine in the United States. Mental illness, as I have stressed, is a universal feature of human existence. Mass shootings are not. Australia and Britain (to take but two examples) found themselves in the not-too-distant past having to cope with horrendous mass killings that involved guns. Both responded with sensible gun control policies, and have been largely spared a repetition of the horrors routinely visited upon innocent Americans. Our society’s “rational” response, by contrast, is to rush out and buy more guns, inflating the profits of those who profit from these deaths, and ensuring more episodes of mass murder.

The problem in the United States is not crazy people. It is crazy gun laws.

Andrew Scull is Distinguished Professor of Sociology and Science Studies at the University of California, San Diego. He is the author of Masters of Bedlam: The Transformation of the Mad-Doctoring Trade and Madness in Civilization: A Cultural History of Insanity, from the Bible to Freud, from the Madhouse to Modern Medicine.

Geoff Mulgan on Big Mind: How Collective Intelligence Can Change Our World

A new field of collective intelligence has emerged in the last few years, prompted by a wave of digital technologies that make it possible for organizations and societies to think at large scale. This “bigger mind”—human and machine capabilities working together—has the potential to solve the great challenges of our time. So why do smart technologies not automatically lead to smart results? Gathering insights from diverse fields, including philosophy, computer science, and biology, Big Mind reveals how collective intelligence can guide corporations, governments, universities, and societies to make the most of human brains and digital technologies. Highlighting differences between environments that stimulate intelligence and those that blunt it, Geoff Mulgan shows how human and machine intelligence could solve challenges in business, climate change, democracy, and public health. Read on to learn more about the ideas in Big Mind.

So what is collective intelligence?

My interest is in how thought happens at a large scale, involving many people and often many machines. Over the last few years many experiments have shown how thousands of people can collaborate online analyzing data or solving problems, and there’s been an explosion of new technologies to sense, analyze and predict. My focus is on how we use these new kinds of collective intelligence to solve problems like climate change or disease—and what risks we need to avoid. My claim is that every organization can work more successfully if it taps into a bigger mind—mobilizing more brains and computers to help it.

How is it different from artificial intelligence?

Artificial intelligence is going through another boom, embedded in everyday things like mobile phones and achieving remarkable breakthroughs in medicine or games. But for most things that really matter we need human intelligence as well as AI, and an overreliance on algorithms can have horrible effects, whether in financial markets or in politics.

What’s the problem?

The problem is that although there’s huge investment in artificial intelligence there’s been little progress in how intelligently our most important systems work—democracy and politics, business and the economy. You can see this in the most everyday aspect of collective intelligence—how we organize meetings, which ignores almost everything that’s known about how to make meetings effective.

What solutions do you recommend?

I show how you can make sense of the collective intelligence of the organizations you’re in—whether universities or businesses—and how to make them better. Much of this is about how we organize our information commons. I also show the importance of countering the many enemies of collective intelligence—distortions, lies, gaming and trolls.

Is this new?

Many of the examples I look at are quite old—like the emergence of an international community of scientists in the 17th and 18th centuries, the Oxford English Dictionary which mobilized tens of thousands of volunteers in the 19th century, or NASA’s Apollo program which at its height employed over half a million people in more than 20,000 organizations. But the tools at our disposal are radically different—and more powerful than ever before.

Who do you hope will read the book?

I’m biased, but I think this is the most fascinating topic in the world today—how to think our way out of the many crises and pressures that surround us. But I hope it’s of particular interest to anyone involved in running organizations or trying to work on big problems.

Are you optimistic?

It’s easy to be depressed by the many examples of collective stupidity around us. But my instinct is to be optimistic that we’ll figure out how to make the smart machines we’ve created serve us well and that we could be on the cusp of a dramatic enhancement of our shared intelligence. That’s a pretty exciting prospect, and much too important to be left in the hands of the geeks alone.

Geoff Mulgan is chief executive of Nesta, the UK’s National Endowment for Science, Technology and the Arts, and a senior visiting scholar at Harvard University’s Ash Center. He was the founder of the think tank Demos and director of the Prime Minister’s Strategy Unit and head of policy under Tony Blair. His books include The Locust and the Bee.

Joel Brockner: The Passion Plea

This post was originally published on the Psychology Today blog.

It’s tough to argue with the idea that passion is an admirable aspect of the human condition. Passionate people are engaged in life; they really care about their values and causes and being true to them. However, passion becomes a minefield when people use it to excuse or explain away unseemly behavior. We saw this during the summer of 2017 in how the White House press secretary, Sarah Huckabee Sanders, responded to the infamous expletive-laced attack of Anthony Scaramucci on his then fellow members of the Trump team, Steve Bannon and Reince Priebus. According to The New York Times (July 27, 2017), “Ms. Sanders said mildly that Mr. Scaramucci was simply expressing strong feelings, and that his statement made clear that ‘he’s a passionate guy and sometimes he lets that passion get the better of him.’ ” Whereas Ms. Sanders acknowledged that Mr. Scaramucci behaved badly (his passion got the better of him), her meta-message is that it was no big deal, as implied by the words “mildly” and “simply” in the quote above.

The passion plea is by no means limited to the world of politics. Executives who are seen as emotionally rough around the edges by their co-workers often defend their behavior with statements like, “I’m just being passionate,” or “I am not afraid to tell it like it is,” or, “My problem is that I care too much.”

The passion plea distorts reality by glossing over the distinction between what is said and how it is said. Executives who deliver negative feedback in a harsh tone are not just being passionate. Even when the content of the negative feedback is factual, harsh tones convey additional messages – notably a lack of dignity and respect. Almost always, there are ways to send the same strong messages or deliver the same powerful feedback in ways that do not convey a lack of dignity and respect. For instance, Mr. Scaramucci could have said something like, “Let me be as clear as possible: I have strong disagreements with Steve Bannon and Reince Priebus.” It may have been less newsworthy, but it could have gotten the same message across. Arguably, Mr. Scaramucci’s 11-day tenure as White House director of communications would have been longer had he not been so “passionate” and instead used more diplomatic language.

Similarly, executives that I coach rarely disagree when it is made evident that they could have sent the same strong negative feedback in ways that would have been easier for their co-workers to digest. Indeed, this is the essence of constructive criticism, which typically seeks to change the behavior of the person on the receiving end. Rarely are managers accused of coming on “too strong” if they deliver negative feedback in the right ways. For example, instead of saying something about people’s traits or characters (e.g., “You aren’t reliable”) it would be far better to provide feedback with reference to specific behavior (e.g., “You do not turn in your work on time”). People usually are more willing and able to respond to negative feedback about what they do rather than who they are. Adding a problem-solving approach is helpful as well, such as, “Some weeks you can be counted on to do a good job whereas other weeks not nearly as much. Why do you think that is happening, and what can we do together to ensure greater consistency in your performance?” Moreover, the feedback has to be imparted in a reasonable tone of voice, and in a context in which people on the receiving end are willing and able to take it in. For instance, one of my rules in discussing with students why they didn’t do well on an assignment is that we not talk immediately after they received the unwanted news. It is far better to have a cooling-off period in which defensiveness goes down and open-mindedness goes up.

If our goal is to alienate people or draw negative attention to ourselves then we should be strong and hard-driving, even passionate, in what we say as well as crude and inappropriate in how we say it. However, if we want to be a force for meaningful change or a positive role model, it is well within our grasp to be just as strong and hard-driving in what we say while being respectful and dignified in how we say it.

Joel Brockner is the Phillip Hettleman Professor of Business at Columbia Business School.

Alexandra Logue: Not All Excess Credits Are The Students’ Fault

This post was originally published on Alexandra Logue’s blog

A recent article in Educational Evaluation and Policy Analysis reported on an investigation of policies punishing students for graduating with excess credits.  Excess credit hours are the credits that a student obtains in excess of what is required for a degree, and many students graduate having taken several courses more than what was needed.

To the extent that tuition does not cover the cost of instruction, and/or that financial aid is paying for them, someone other than the student—the college or the government—is paying for these excess credits. Graduating with excess credits also means that a student is occupying possibly scarce classroom seats longer than s/he needs to and is not entering the work force with a degree and paying more taxes as soon as s/he could. Thus there are many reasons why colleges and/or governments might seek to decrease excess credits. The article considers cases in which states have imposed sanctions on students who graduate with excess credits, charging more for credits taken significantly above the number required for a degree. The article shows that such policies, rather than resulting in students graduating sooner, have instead resulted in greater student debt. But the article does not identify the reasons why this may be the case. Perhaps one reason is that students do not have control over those excess credits.

For example, as described in my forthcoming book, Pathways to Reform: Credits and Conflict at The City University of New York, students may accumulate excess credits because of difficulties they have transferring their credits. When students transfer, there can be significant delays in having the credits that they obtained at their old institution evaluated by their new institution. At least at CUNY colleges, the evaluation process can take many months. During that period, a student either has to stop out of college or take a risk and enroll in courses that may or may not be needed for the student’s degree. Even when appropriate courses are taken, all too often credits that a student took at the old college as satisfying general education (core) requirements or major requirements become elective credits, or do not transfer at all. A student then has to repeat courses or take extra courses in order to satisfy all of the requirements at the new college. Given that a huge proportion of students now transfer, or try to transfer, their credits (49% of bachelor’s degree recipients have some credits from a community college, and over one-third of students in the US transfer within six years of starting college), a great number of credits are being lost.

Nevertheless, a 2010 study at CUNY found that a small proportion of the excess credits of its bachelor’s degree recipients was due to transfer—students who never transferred graduated with only one or two fewer excess credits, on average, than did students who did transfer.  Some transfer students may have taken fewer electives at their new colleges in order to have room in their programs to make up nontransferring credits from their old colleges, without adding many excess credits.

But does this mean that we should blame students for those excess credits and make them pay more for them?  Certainly some of the excess credits are due to students changing their majors late and/or to not paying attention to requirements and so taking courses that don’t allow them to finish their degrees, and there may even be some students who would rather keep taking courses than graduate.

But there are still other reasons that students may accumulate extra credits, reasons for which the locus of control is not the student.  Especially in financially strapped institutions, students may have been given bad or no guidance by an advisor.  In addition, students may have been required to take traditional remedial courses, which can result in a student acquiring many of what CUNY calls equated credits, on top of the required college-level credits (despite the fact that there are more effective ways to deliver remediation without the extra credits). Or a student may have taken extra courses that s/he didn’t need to graduate in order to continue to enroll full-time, so that the student could continue to be eligible for some types of financial aid and/or (in the past) health insurance. Students may also have made course-choice errors early in their college careers, when they were unaware of excess-credit tuition policies that would only have an effect years later.

The fact that the imposition of excess-credit tuition policies did not affect the number of excess credits accumulated but instead increased student debt by itself suggests that, to at least some degree, the excess credits are not something that students can easily avoid, and/or that there are counter-incentives operating that are even stronger than the excess tuition.

Before punishing students, or trying to control their behavior, we need to have a good deal of information about all of the different contingencies to which students are subject.  Students should complete their college’s requirements as efficiently as possible.  However, just because some students demonstrate delayed graduation behavior does not mean that they are the ones who are controlling that behavior.  Decreasing excess credits needs to be a more nuanced process, with contingencies and consequences tailored appropriately to those students who are abusing the system, and those who are not.

Alexandra W. Logue is a research professor at the Center for Advanced Study in Education at the Graduate Center, CUNY. From 2008 to 2014, she served as executive vice chancellor and university provost of the CUNY system. She is the author of Pathways to Reform: Credits and Conflict at The City University of New York.

Omnia El Shakry: Psychoanalysis and Islam

Omnia El Shakry’s new book, The Arabic Freud, is the first in-depth look at how postwar thinkers in Egypt mapped the intersections between Islamic discourses and psychoanalytic thought.

What are the very first things that pop into your mind when you hear the words “psychoanalysis” and “Islam” paired together?  For some of us the connections might seem improbable or even impossible. And if we were to be brutally honest the two terms might even evoke the specter of a so-called “clash of civilizations” between an enlightened, self-reflective West and a fanatical and irrational East.

It might surprise many of us to know, then, that Sigmund Freud, the founding figure of psychoanalysis, was ever-present in postwar Egypt, engaging the interest of academics, novelists, lawyers, teachers, and students alike. In 1946 Muhammad Fathi, a Professor of Criminal Psychology in Cairo, ardently defended the relevance of Freud’s theories of the unconscious for the courtroom, particularly for understanding the motives behind homicide. Readers of Nobel laureate Naguib Mahfouz’s 1948 The Mirage were introduced to the Oedipus complex, graphically portrayed in the novel, by immersing themselves in the world of its protagonist—a man pathologically attached to and fixated on his possessive mother. And by 1951 Freudian theories were so well known in Egypt that a secondary school philosophy teacher proposed prenuptial psychological exams in order to prevent unhappy marriages due to unresolved Oedipus complexes!

Scholars who have tackled the question of psychoanalysis and Islam have tended to focus on it as a problem, by assuming that psychoanalysis and Islam have been “mutually ignorant” of each other, and they have placed Islam on the couch, as it were, alleging that it is resistant to the “secular” science of psychoanalysis. In my book, The Arabic Freud, I undo the terms of this debate and ask, instead, what it might mean to think of psychoanalysis and Islam together, not as a “problem,” but as a creative encounter of ethical engagement.

What I found was that postwar thinkers in Egypt saw no irreconcilable differences between psychoanalysis and Islam. And in fact, they frequently blended psychoanalytic theories with classical Islamic concepts. For example, when they translated Freud’s concept of the unconscious, the Arabic term used, “al-la-shuʿur,” was taken from the medieval mystical philosopher Ibn ʿArabi, renowned for his emphasis on the creative imagination within Islamic spirituality.

Islamic thinkers further emphasized similarities between Freud’s interpretation of dreams and Islamic dream interpretation, and they noted that the analyst-analysand (therapist-patient) relationship and the spiritual master-disciple relationship of Sufism (the phenomenon of mysticism in Islam) were nearly identical. In both instances, there was an intimate relationship in which the “patient” was meant to forage their unconscious with the help of their shaykh (spiritual guide) or analyst, as the case might be. Both Sufism and psychoanalysis, then, were characterized by a relationship between the self and the other that was mediated by the unconscious. Both traditions exhibited a concern for the relationship between what was hidden and what was shown in psychic and religious life, both demonstrated a preoccupation with eros and love, and both mobilized a highly specialized vocabulary of the self.

What, precisely, are we to make of this close connection between Islamic mysticism and psychoanalysis? On the one hand, it helps us identify something of a paradox within psychoanalysis, namely that for some psychoanalysis represents a non-religious and even atheistic world view. And there is ample evidence for this view within Freud’s own writings, which at times pathologized religion in texts such as The Future of an Illusion and Civilization and Its Discontents. At the same time, in Freud and Man’s Soul, Bruno Bettelheim argued that in the original German Freud’s language was full of references to the soul, going so far as to refer to psychoanalysts as “a profession of secular ministers of souls.” Similarly, psychoanalysis was translated into Arabic as “tahlil al-nafs”—the analysis of the nafs, which means soul, psyche, or self and has deeply religious connotations. In fact, throughout the twentieth century there have been psychoanalysts who have maintained a receptive attitude towards religion and mysticism, such as Marion Milner or Sudhir Kakar. What I take all of this to mean is that psychoanalysis as a tradition is open to multiple, oftentimes conflicting, interpretations and we can take Freud’s own ambivalence towards religion, and towards mysticism in particular, as an invitation to rethink the relationship between psychoanalysis and religion.

What, then, if religious forms of knowledge, and the encounter between psychoanalysis and Islam more specifically, might lead us to new insights into the psyche, the self, and the soul? What would this mean for how we think about the role of religion and ethics in the making of the modern self? And what might it mean for how we think about the relationship between the West and the Islamic world?

Omnia El Shakry is Professor of History at the University of California, Davis. She is the author of The Great Social Laboratory: Subjects of Knowledge in Colonial and Postcolonial Egypt and the editor of Gender and Sexuality in Islam. Her new book, The Arabic Freud, is out this September.

Chris Chambers: The Seven Deadly Sins of Psychology

Psychological science has made extraordinary discoveries about the human mind, but can we trust everything its practitioners are telling us? In recent years, it has become increasingly apparent that a lot of research in psychology is based on weak evidence, questionable practices, and sometimes even fraud. The Seven Deadly Sins of Psychology by Chris Chambers diagnoses the ills besetting the discipline today and proposes sensible, practical solutions to ensure that it remains a legitimate and reliable science in the years ahead.

Why did you decide to write this book?

CC: Over the last fifteen years I’ve become increasingly fed up with the “academic game” in psychology, and I strongly believe we need to raise standards to make our research more transparent and reliable. As a psychologist myself, one of the key lessons I’ve learned is that there is a huge difference between how the public thinks science works and how it actually works. The public have this impression of scientists as objective truth seekers on a selfless mission to understand nature. That’s a noble picture but bears little resemblance to reality. Over time, the mission of psychological science has eroded from something that originally was probably quite close to that vision into a contest for short-term prestige and career status, corrupted by biased research practices, bad incentives, and occasionally even fraud.

Many psychologists struggle valiantly against the current system but they are swimming against a tide. I trained within that system. I understand how it works, how to use it, and how it can distort your thinking. After 10 years of “playing the game” I realized I didn’t like the kind of scientist I was turning into, so I decided to try and change the system and my own practices—not only to improve science but to help younger scientists avoid my predicament. At its heart this book lays out my view of how we can reinvigorate psychology by adopting an emerging philosophy called “open science.” Some people will agree with this solution. Many will not. But, above all, the debate is important to have.

It sounds like you’re quite skeptical about science generally.

CC: Even though I’m quite critical about psychology, the book shouldn’t be seen as anti-science—far from it. Science is without doubt the best way to discover the truth about the world and make rational decisions. But that doesn’t mean it can’t or shouldn’t be improved. We need to face the problems in psychology head-on and develop practical solutions. The stakes are high. If we succeed then psychology can lead the way in helping other sciences solve similar problems. If we fail then I believe psychology will fade into obscurity and become obsolete.

Would it matter if psychology disappeared? Is it really that important?

CC: Psychology is a huge part of our lives. We need it in every domain where it is important to understand human thought or behavior, from treating mental illness, to designing traffic signs, to addressing global problems like climate change, to understanding basic (but extraordinarily complex) mental functions such as how we see or hear. Understanding how our minds work is the ultimate journey of self-discovery and one of the fundamental sciences. And it’s precisely because the world needs robust psychological science that researchers have an ethical obligation to meet the high standards expected of us by the public.

Who do you think will find your book most useful?

CC: I have tried to tailor the content for a variety of different audiences, including anyone who is interested in psychology or how science works. Among non-scientists, I think the book may be especially valuable for journalists who report on psychological research, helping them overcome common pitfalls and identify the signs of bad or weak studies. At another level, I’ve written this as a call-to-arms for my fellow psychologists and scientists in closely aligned disciplines, because we need to act collectively in order to fix these problems. And the most important readers of all are the younger researchers and students who are coming up in the current academic system and will one day inherit psychological science. We need to get our house in order to prepare this generation for what lies ahead and help solve the difficulties we inherited.

So what exactly are the problems facing psychology research?

CC: I’ve identified seven major ills, which (a little flippantly, I admit) can be cast as seven deadly sins. In order they are Bias, Hidden Flexibility, Unreliability, Data Hoarding, Corruptibility, Internment, and Bean Counting. I won’t ruin the suspense by describing them in detail, but they all stem from the same root cause: we have allowed the incentives that drive individual scientists to fall out of step with what’s best for scientific advancement. When you combine this with the intense competition of academia, it creates a research culture that is biased, closed, fearful and poorly accountable—and just as a damp bathroom encourages mold, a closed research culture becomes the perfect environment for cultivating malpractice and fraud.

It all sounds pretty bad. Is psychology doomed?

CC: No. And I say this emphatically: there is still time to turn this around. Beneath all of these problems, psychology has a strong foundation; we’ve just forgotten about it in the rat race of modern academia. There is a growing movement to reform research practices in psychology, particularly among the younger generation. We can solve many problems by adopting open scientific practices—practices such as pre-registering study designs to reduce bias, making data and study materials as publicly available as possible, and changing the way we assess scientists for career advancement. Many of these problems are common to other fields in the life sciences and social sciences, which means that if we solve them in psychology we can solve them in those areas too. In short, it is time for psychology to grow up, step up, and take the lead.

How will we know when we’ve fixed the deadly sins?

CC: The main test is that our published results should become a lot more reliable and repeatable. As things currently stand, there is a high chance that any new result published in a psychology journal is a false discovery. So we’ll know we’ve cracked these problems when we can start to believe the published literature and truly rely on it. When this happens, and open practices become the norm, the closed practices and weak science that define our current culture will seem as primitive as alchemy.

Chris Chambers is professor of cognitive neuroscience in the School of Psychology at Cardiff University and a contributor to the Guardian science blog network. He is the author of The Seven Deadly Sins of Psychology: A Manifesto for Reforming the Culture of Scientific Practice.

Joel Brockner: Can Job Autonomy Be a Double-Edged Sword?

This post was originally published on the Psychology Today blog.

“You can arrive to work whenever convenient.”

“Work from home whenever you wish.”

“You can play music at work at any time.”

These are examples of actual workplace policies from prominent companies such as Aetna, American Express, Dell, Facebook, Google, IBM, and Zappos. They have joined the ranks of many organizations in giving employees greater job autonomy, that is, more freedom to decide when, where, and how to do their work. And why not? Research by organizational psychologists such as Richard Hackman and Greg Oldham and by social psychologists such as Edward Deci and Richard Ryan has shown that job autonomy can have many positive effects. The accumulated evidence is that employees who experience more autonomy are more motivated, creative, and satisfied with their jobs.

Against this backdrop of the generally favorable effects of job autonomy, recent research has shown that it also may have a dark side: unethical behavior. Jackson Lu, Yoav Vardi, Ely Weitz, and I discovered such results in a series of field and laboratory studies soon to be published in the Journal of Experimental Social Psychology. In field studies conducted in Israel, employees from a wide range of industries rated how much autonomy they had and how often they engaged in unethical behavior, such as misrepresenting their work hours or wasting work time on private phone calls. Those who had greater autonomy said that they engaged in more unethical behavior on the job.

In laboratory experiments conducted in the United States, we found that it may not even be necessary for people to have actual autonomy for them to behave unethically; merely priming them with the idea of autonomy may do the trick. In these studies participants were randomly assigned to conditions differing in how much the concept of autonomy was called to mind. This was done with a widely used sentence-unscrambling task in which people had to rearrange multiple series of words into grammatically correct sentences. For example, those in the high-autonomy condition were given words such as “have many as you as days wish you vacation may,” which could be rearranged to form the sentence, “You may have as many vacation days as you wish.” In contrast, those in the low-autonomy condition were given words such as “office in work you must the,” which could be rearranged to “You must work in the office.” After completing the sentence-unscrambling exercise, participants did another task in which they were told that the amount of money they earned depended on how well they performed. The activity was structured in a way that enabled us to tell whether participants lied about their performance. Those who were previously primed to experience greater autonomy in the sentence-unscrambling task lied more.

Job autonomy gives employees a sense of freedom, which usually has positive effects on their productivity and morale, but it also can lead them to feel that they can do whatever they want, including not adhering to rules of morality.
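As an illustration of the priming logic, here is a minimal sketch of how a sentence-unscrambling manipulation like the one described above might be put together. Only the two example sentences come from the description; the function names, per-participant seeding, and simple random assignment are assumptions made for the sketch, not the authors’ actual materials.

```python
# A minimal sketch of a sentence-unscrambling priming setup, for illustration only.
# The two target sentences come from the examples above; everything else (names,
# seeding, random assignment) is assumed, not taken from the published studies.
import random

PRIMES = {
    "high_autonomy": ["You may have as many vacation days as you wish."],
    "low_autonomy":  ["You must work in the office."],
}

def scramble(sentence, rng):
    """Return the sentence's words in shuffled order; participants must unscramble them."""
    words = sentence.rstrip(".").lower().split()
    rng.shuffle(words)
    return " ".join(words)

def build_trial(participant_id):
    """Randomly assign a participant to a condition and build their scrambled items."""
    rng = random.Random(participant_id)        # seeded so each participant's set is reproducible
    condition = rng.choice(sorted(PRIMES))     # random assignment: high- or low-autonomy prime
    items = [scramble(s, rng) for s in PRIMES[condition]]
    return {"participant": participant_id, "condition": condition, "items": items}

print(build_trial(42))
```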

All behavior is a function of what people want to do (motivation) and what they are capable of doing (ability). Consider the unethical behavior elicited by high levels of autonomy. Having high autonomy may not have made people want to behave unethically. However, it may have enabled the unethical behavior by making it possible for people to engage in it. Indeed, the distinction between people wanting to behave unethically versus having the capability of doing so may help answer two important questions:

(1) What might mitigate the tendency for job autonomy to elicit unethical behavior?

(2) If job autonomy can lead to unethical behavior, should companies re-evaluate whether to give job autonomy to their employees? That is, can job autonomy be introduced in a way that maximizes its positive consequences (e.g., greater creativity) without introducing the negative effect of unethical behavior?

With respect to the first question, my hunch is that people who have job autonomy and therefore are able to behave unethically will not do so if they do not want to behave unethically. For example, people who are high on the dimension of moral identity, for whom behaving morally is central to how they define themselves, would be less likely to behave unethically even when a high degree of job autonomy enabled or made it possible for them to do so.

With respect to the second question, I am not recommending that companies abandon their efforts to provide employees with job autonomy. Our research suggests, rather, that the consequences of giving employees autonomy may not be summarily favorable. Taking a more balanced view of how employees respond to job autonomy may shed light on how organizations can maximize the positive effects of job autonomy while minimizing the negative consequence of unethical behavior.

Whereas people generally value having autonomy, some people want it more than others. People who want autonomy a lot may be less likely to behave unethically when they experience autonomy. For one thing, they may be concerned that the autonomy they covet may be taken away if they were to take advantage of it by behaving unethically. This reasoning led us to do another study to evaluate when the potential downside of felt autonomy can be minimized while its positive effects can be maintained. Once again, we primed people to experience varying degrees of job autonomy with the sentence-unscrambling exercise. Half of them then went on to do the task which measured their tendency to lie about their performance, whereas the other half completed an entirely different task, one measuring their creativity. Once again, those who worked on the task in which they could lie about their performance did so more when they were primed to experience greater autonomy. And, as has been found in previous research, those who did the creativity task performed better at it when they were primed to experience greater autonomy.

Regardless of whether they did the task that measured unethical behavior or creativity, participants also indicated how much they generally valued having autonomy. Among those who generally valued having autonomy to a greater extent, (1) the positive relationship between experiencing job autonomy and behaving unethically diminished, whereas (2) the positive relationship between experiencing job autonomy and creativity was maintained. In other words, as long as people valued having autonomy, the experience of autonomy had the positive effect of enhancing creativity without introducing the dangerous side effect of unethical behavior. So, when organizations introduce job autonomy policies like those mentioned at the outset, they may gain greater overall benefits when they ensure that their employees value having autonomy. This may be achieved by selecting employees who value having autonomy as well as by creating a corporate culture which emphasizes the importance of it. More generally, a key practical takeaway from our studies is that when unethical behavior is enabled, whether through job autonomy or other factors, it needs to be counterbalanced by conditions that make employees not want to go there.

Joel Brockner is the Phillip Hettleman Professor of Business at Columbia Business School. He is the author of The Process Matters: Engaging and Equipping People for Success.

Face Value: Man or Woman?

In Face Value: The Irresistible Influence of First Impressions, Princeton professor of psychology Alexander Todorov delves into the science of first impressions. In honor of the book’s release, we’re sharing examples from his research every week.

[Image: two nearly identical face images, the one on the left perceived as a man and the one on the right as a woman]

It is easy to identify the woman in the image on the right and the man in the image on the left. But the two images are almost identical with one subtle difference: the skin surface in the image on the left is a little bit darker. The eyes and lips of the faces are identical, but the rest of the image on the left was darkened, and the rest of the image on the right was lightened. This manipulation makes the face on the left look masculine and the face on the right look feminine. This is one way to induce the gender illusion. Here is another one.

Based on research reported in

  1. R. Russell (2009). “A sex difference in facial contrast and its exaggeration by cosmetics.” Perception 38, 1211–1219.

[Image: a second example of the gender illusion]
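For readers curious how this kind of manipulation can be produced, here is a minimal sketch that darkens or lightens the skin surface of a face while leaving the eyes and lips untouched, assuming a grayscale photograph and a hand-drawn mask marking the eye and lip regions. The filenames, the mask, and the brightness factors are illustrative assumptions; this is not the procedure used to create the images above.

```python
# Illustrative sketch only: rescale the skin brightness of a face image while
# keeping the eyes and lips unchanged. Filenames and the eyes/lips mask are
# assumptions, not the materials used for the published demonstration.
import numpy as np
from PIL import Image

def adjust_skin(face_path, mask_path, factor, out_path):
    """factor < 1 darkens the skin (the face tends to read as male);
    factor > 1 lightens it (the face tends to read as female)."""
    face = np.asarray(Image.open(face_path).convert("L"), dtype=np.float32)
    # White pixels in the mask mark the eyes and lips, which are kept unchanged.
    keep = np.asarray(Image.open(mask_path).convert("L")) > 128

    skin = np.clip(face * factor, 0, 255)               # rescale skin brightness
    out = np.where(keep, face, skin).astype(np.uint8)   # eyes/lips stay as in the original
    Image.fromarray(out, mode="L").save(out_path)

# Hypothetical usage: the same face yields a darker-skinned and a lighter-skinned version.
# adjust_skin("face.png", "eyes_lips_mask.png", 0.85, "face_darker.png")
# adjust_skin("face.png", "eyes_lips_mask.png", 1.15, "face_lighter.png")
```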

Face Value: Eyebrows

In Face Value: The Irresistible Influence of First Impressions, Princeton professor of psychology Alexander Todorov delves into the science of first impressions. Throughout the month of May, we’ll be sharing examples of his research. 

 

[Image: images of Richard Nixon with the eyes removed and with the eyebrows removed]

It is easier to recognize Richard Nixon when his eyes are removed than when his eyebrows are removed from the image. Our intuitions about what facial features are important are insufficient at best and misleading at worst.

 

Based on research reported in

  1. J. Sadr, I. Jarudi, and P. Sinha (2003). “The role of eyebrows in face recognition.” Perception 32, 285–293.

[Image]

Face Value: Can you recognize the celebrities?

In Face Value: The Irresistible Influence of First Impressions, Princeton professor of psychology Alexander Todorov delves into the science of first impressions. Throughout the month of May, we’ll be sharing examples of his research. 

 

[Image]

 

A: Justin Bieber and Beyoncé

 

[Image]