Matthew Salganik: The Open Review of Bit by Bit, Part 3—Increased access to knowledge


This is the third post in a three part series about the Open Review of Bit by Bit: Social Research in the Digital Age. This post describes how Open Review led to increased access to knowledge. In particular, I’ll provide information about the general readership patterns, and I’ll specifically focus on readership of the versions of the manuscript that were machine translated into more than 100 languages. The other posts in this series describe how Open Review led to a better book and higher sales.

Readership

During the Open Review period, people from all over the world were able to read Bit by Bit before it was even published. The map at the top of the page shows the locations of readers around the world.

In total, we had 23,514 sessions and 79,426 page views from 15,650 users. Also, unlike annotations, which decreased over time, there was a relatively constant level of traffic, averaging about 500 sessions per week.


How did these sessions begin? The most common channels were direct navigation followed by organic search. Only about 20% of the traffic came from referrals (following links) and social.


What devices were people using? About 30% of sessions were on mobile phones. Therefore, responsive design is important to ensure access.  


In fact, mobile was more common for users from developing countries. For example, in the US, there were about 6 desktop sessions for every 1 mobile session. In India, however, there were about 3.5 mobile sessions for every desktop session. Also, there were more mobile sessions from India than from the US. Here are the top 10 country-platform combinations.


Machine Translations

In addition to posting the book in English, we also machine translated the book into more than 100 languages using Google Translate. Of course, Google Translate is not perfect, but reading a bad translation might be better than no translation at all. And because Google Translate is getting better quickly, a few years from now machine translation might be a viable approach for many languages.
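To give a concrete sense of how such a pipeline might work, here is a minimal sketch of batch-translating HTML fragments with the Google Cloud Translation API (v2 REST endpoint). This is an illustration rather than the Open Review Toolkit’s actual code; the environment variable name and the sample paragraph are assumptions.

```python
# A minimal sketch of batch-translating HTML fragments with the Google Cloud
# Translation API v2 REST endpoint. Assumes an API key is stored in the
# (hypothetical) GOOGLE_TRANSLATE_API_KEY environment variable.
import os
import requests

API_URL = "https://translation.googleapis.com/language/translate/v2"
API_KEY = os.environ["GOOGLE_TRANSLATE_API_KEY"]

def translate_html(fragments, target_language):
    """Translate a list of HTML fragments into target_language (e.g. 'bn' for Bengali)."""
    response = requests.post(
        API_URL,
        data={
            "q": fragments,           # the API accepts multiple q values per request
            "target": target_language,
            "format": "html",         # preserve markup such as <em> and <a> tags
            "key": API_KEY,
        },
        timeout=30,
    )
    response.raise_for_status()
    return [t["translatedText"] for t in response.json()["data"]["translations"]]

# Example: translate one paragraph of a manuscript into a few of the 100+ languages.
paragraph = "<p>The digital age creates new opportunities for social research.</p>"
for lang in ["es", "bn", "zh-CN"]:
    print(lang, translate_html([paragraph], lang)[0])
```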

So, did these machine translations get used? No and yes. In terms of page views, no single non-English language accounted for more than 2%, which seems to argue against the value of machine translation. On the other hand, if you add up the page views in all languages other than English, it becomes a sizable number. The non-English pages led to roughly a 20% increase in page views (from 65,428 English page views to 79,426 total).


If you are considering Open Review of your manuscript, you might be wondering whether machine translation was worth it. There were two main costs: adjusting the website to handle multiple languages and the money we had to pay Google for the translations. Now that we’ve open sourced our code, you won’t need to worry about the fixed cost related to website design. But we did pay approximately $3,000 USD to Google for translations in August 2016 (I expect that the cost of machine translation will come down). The benefits are harder to pin down. I don’t know whether people actually learned anything from these machine translations, and I don’t think they did much to support the other goals of the Open Review: better books and higher sales. But it certainly captured people’s attention when I said that the book was available in 100 languages, and it showed a commitment to access. Future authors and publishers will have to decide what makes sense in their case, but as machine translation continues to improve, I’m optimistic that multiple languages will be part of the Open Review process in some way in the future.

This post is the third post in a three part series about the Open Review of Bit by Bit: Social Research in the Digital Age. The other posts in this series describe how Open Review led to a better book and higher sales.

You can put your own manuscript through Open Review using the Open Review Toolkit, either by downloading the open-source code or hiring one of the preferred partners. The Open Review Toolkit is supported by a grant from the Alfred P. Sloan Foundation.

Matthew J. Salganik is professor of sociology at Princeton University, where he is also affiliated with the Center for Information Technology Policy and the Center for Statistics and Machine Learning. His research has been funded by Microsoft, Facebook, and Google, and has been featured on NPR and in such publications as the New Yorker, the New York Times, and the Wall Street Journal.

Matthew Salganik: The Open Review of Bit by Bit, Part 2—Higher sales

This post is the second in a three part series about the Open Review of Bit by Bit: Social Research in the Digital Age. This post describes how Open Review led to higher sales. The other posts in this series describe how Open Review led to a better book and increased access to knowledge.

Before talking about sales in more detail, I think I should start by acknowledging that it is a bit unusual for authors to talk about this stuff. But sales are an important part of the Open Review process because of one simple and inescapable fact: publishers need revenue. My editor is amazing, and she’s spent a lot of time making Bit by Bit better, as have her colleagues who do production and design. These people need to be paid salaries, and those salaries have to come from somewhere. If you want to work with a publisher—even a non-profit publisher—then you have to be sensitive to the fact that they need revenue to be sustainable. Fortunately, in addition to better books and increased access to knowledge, Open Review also helps sell books. So for the rest of this post, I’m going to provide a purely economic assessment of the Open Review process.

One of the first questions that some people ask about Open Review is: “Aren’t you giving your book away for free?”  And the answer is definitely no. Open Review is free like Google is free.

Notice that Google makes a lot of money without ever charging you anything. That’s because you are giving Google something valuable, your data and your attention. Then, Google monetizes what you provide them. Open Review is the same.

In addition to improving the manuscript, which should lead to more sales, there are three main ways that Open Review increases sales: collecting email addresses, providing market intelligence, and promoting course adoptions.

Email addresses

After discussions with my editor, we decided that the main business metric during the Open Review of Bit by Bit was collecting email addresses of people who wanted to be notified when the book was complete. These addresses are valuable to the publisher because they can form the basis of a successful launch for the book. 

How did we collect email addresses? Simple: we just asked people, like this:


During the Open Review process we collected 340 unique valid email addresses. Aside from a spike at the beginning, these arrived at a pace of about one per day, with no sign of slowing down.


Who are these people? One quick way to summarize them is to look at the email ending (.com, .edu, .jp, etc.). Based on this data, it seems that Open Review helped us collect email addresses from people all over the world.


Another way to summarize the types of people who provided their email address is to look at the email suffixes (everything that comes after @). This shows, for example, which schools and companies are most represented.
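For anyone who wants to reproduce these two summaries, here is a minimal sketch of both tallies (email endings and full suffixes). The input file name is hypothetical; the logic is just standard-library counting.

```python
# A minimal sketch of the two summaries described above: counting email endings
# (.com, .edu, .jp, ...) and full suffixes (everything after the @).
from collections import Counter

with open("signup_emails.txt") as f:            # hypothetical file: one address per line
    emails = [line.strip().lower() for line in f if "@" in line]

suffixes = [email.split("@", 1)[1] for email in emails]            # e.g. "princeton.edu"
endings = ["." + suffix.rsplit(".", 1)[-1] for suffix in suffixes]  # e.g. ".edu"

print("Top email endings:", Counter(endings).most_common(10))
print("Top suffixes (schools, companies):", Counter(suffixes).most_common(10))
```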

Just collecting 340 email addresses was enough to significantly increase sales of Bit by Bit. And, in future Open Review projects, authors and publishers can get better at collecting email addresses. Just as Amazon is constantly running experiments to get you to buy more stuff, and the New York Times is running experiments to get you to click on more headlines, we were running experiments to collect more addresses. And unlike the experiments by Amazon and the New York Times, our experiments were overseen by Princeton’s Institutional Review Board for human subjects research.

We tried six different ways to collect email addresses, and then we let Google Analytics use a multi-armed bandit approach to find the best one. Here’s how they compared:


These differences are not huge, but they illustrate that Open Review websites can use the same kind of conversion optimization techniques that are common on modern commercial websites. And I’m confident that future Open Review projects could have an even higher rate of email sign-ups with additional design improvements and experimentation.
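For readers curious about the mechanics, here is a minimal sketch of the multi-armed bandit idea, using Thompson sampling over six hypothetical prompts. It illustrates the general technique, not the algorithm Google Analytics actually runs, and the prompt names and conversion rates are made up.

```python
# A minimal sketch of a multi-armed bandit (Thompson sampling) over six hypothetical
# sign-up prompts. Each "pull" shows one prompt to a visitor; the reward is whether
# the visitor submitted an email address.
import random

prompts = [f"prompt_{i}" for i in range(6)]   # six candidate calls to action
successes = {p: 0 for p in prompts}           # sign-ups observed per prompt
failures = {p: 0 for p in prompts}            # visits without a sign-up

def choose_prompt():
    """Sample a conversion rate from each prompt's Beta posterior; show the best draw."""
    draws = {p: random.betavariate(successes[p] + 1, failures[p] + 1) for p in prompts}
    return max(draws, key=draws.get)

def record_outcome(prompt, signed_up):
    if signed_up:
        successes[prompt] += 1
    else:
        failures[prompt] += 1

# Simulated traffic: each prompt has an unknown "true" conversion rate (made up here).
true_rates = dict(zip(prompts, [0.010, 0.012, 0.015, 0.020, 0.025, 0.030]))
for _ in range(20000):
    p = choose_prompt()
    record_outcome(p, random.random() < true_rates[p])

print({p: (successes[p], failures[p]) for p in prompts})
```

In a real deployment, `record_outcome` would be driven by live visits rather than simulated ones, but the logic is the same: traffic gradually concentrates on the prompt with the best observed sign-up rate while still exploring the others.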

Market intelligence

In addition to collecting email addresses, the Open Review process also provided market intelligence that helped tailor the marketing of the book. For example, using Google Webmaster Tools (now Google Search Console), you can see which parts of your book are being linked to:


From this information, we learned that in addition to the book itself, people were most interested in the Open Review process and the chapter on Ethics. Then, when we were developing marketing copy for the book, we tried to emphasize this chapter.

Using Google Webmaster Tools, you can also see which search terms are leading people to your book. In my case, you will see that 9 of the top 10 terms are not in English (in fact, 48 of the top 50 terms are not in English). This is because of the machine translation process, which I talk about more in the post on increased access to knowledge. I was hoping that we would receive more organic search traffic in English, but as I learned during this project, it is very hard to show up in the top 10 in organic search for most keywords.


In case you are curious, গবেষণা নকশা means “research design” in Bengali (Bangla).  
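If you want to run a similar analysis on your own Open Review site, here is a minimal sketch that summarizes a hypothetical query export. The file name and column names (“query”, “clicks”) are illustrative rather than the exact export schema, and the ASCII test is only a crude stand-in for real language detection.

```python
# A minimal sketch: list the top search queries from a hypothetical export and
# estimate how many of them are not in English (crudely, via a non-ASCII check).
import pandas as pd

queries = pd.read_csv("search_queries.csv").sort_values("clicks", ascending=False)

top50 = queries.head(50)
non_english = top50["query"].astype(str).apply(lambda q: not q.isascii())

print(top50.head(10)[["query", "clicks"]])
print(f"{non_english.sum()} of the top 50 queries appear to be non-English")
```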

A final way that this market intelligence was helpful was in selling foreign rights to the book. For example, I provided this map of global traffic to representatives from Princeton University Press before they went to the London Book Fair to sell the foreign rights to Bit by Bit. This traffic showed in a very concrete way that there was interest in the book outside the United States.


Course adoptions

Finally, in addition to email addresses to help launch the book and market intelligence, Open Review accelerates course adoptions. My understanding is that there is typically a slow ramp-up in course adoptions over a period of several years. But that slow ramp-up would be problematic for my book, which is freshest right when published and will gradually go stale over time. Given that the lifespan of this edition is limited, early course adoptions are key, and Open Review helped with that. I know of about 10 courses (list here) that adopted the book in whole or in part during the Open Review process. This helped prime the pump for course adoptions when the book went on sale.

In this post, I’ve tried to describe the business case for Open Review, and I’ve shown how Open Review can help with collecting email addresses, gathering market intelligence, and speeding course adoptions. I think that, in purely economic terms, Open Review makes sense for publishers and authors for some books. If more people explore and develop Open Review as a model, I expect that these economic benefits will increase. Further, this simple economic analysis does not count the benefits that come from better books and increased access to knowledge, two things that both authors and publishers value.

This post is the second in a three part series about the Open Review of Bit by Bit. You can also read more about how the Open Review of Bit by Bit led to a better book and increased access to knowledge. And, you can put your own manuscript through Open Review using the Open Review Toolkit, either by downloading the open-source code or hiring one of the preferred partners. The Open Review Toolkit is supported by a grant from the Alfred P. Sloan Foundation.

Matthew J. Salganik is professor of sociology at Princeton University, where he is also affiliated with the Center for Information Technology Policy and the Center for Statistics and Machine Learning. His research has been funded by Microsoft, Facebook, and Google, and has been featured on NPR and in such publications as the New Yorker, the New York Times, and the Wall Street Journal.

Matthew Salganik: The Open Review of Bit by Bit, Part 1—Better books

My new book Bit by Bit: Social Research in the Digital Age is for social scientists who want to do more data science, data scientists who want to do more social science, and anyone interested in the combination of these two fields. The central premise of Bit by Bit is that the digital age creates new opportunities for social research. As I was writing Bit by Bit, I also began thinking about how the digital age creates new opportunities for academic authors and publishers. The more I thought about it, the more it seemed that we could publish academic books in a more modern way by adopting some of the same techniques that I was writing about. I knew that I wanted Bit by Bit to be published in this new way, so I created a process called Open Review that has three goals: better books, higher sales, and increased access to knowledge. Then, much as doctors used to test new vaccines on themselves, I tested Open Review on my own book.

This post is the first in a three part series about the Open Review of Bit by Bit. I will describe how Open Review led to a better book. After I explain the mechanics of Open Review, I’ll focus on three ways that Open Review led to a better book: annotations, implicit feedback, and psychological effects. The other posts in this series describe how Open Review led to higher sales and increased access to knowledge.

How Open Review works

When I submitted my manuscript for peer review, I also created a website that hosted the manuscript for a parallel Open Review. During Open Review, anyone in the world could come and read the book and annotate it using hypothes.is, an open source annotation system. Here’s a picture of what it looked like to participants.


In addition to collecting annotations, the Open Review website also collected all kinds of other information. Once the peer review process was complete, I used the information from the peer review and the Open Review to improve the manuscript.


In the rest of this post, I’ll describe how the Open Review of Bit by Bit helped improve the book, and I’ll focus on three things: annotations, implicit feedback, and psychological effects.

Annotations

The most direct way that Open Review produced better books is through annotations. Readers used hypothes.is, an open source annotation system, to leave annotations like those shown in the image at the top of this post.
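Because hypothes.is exposes a public API, this kind of feedback is also easy to collect programmatically. Here is a minimal sketch, not the Open Review Toolkit’s own tooling, of pulling the public annotations for a single page and tallying them by annotator; the page URL is just an example and should be replaced with a real page from your site.

```python
# A minimal sketch of fetching public annotations for one page from the hypothes.is
# API and counting them by user.
from collections import Counter
import requests

API = "https://api.hypothes.is/api/search"

def fetch_annotations(page_url, limit=200):
    """Return the public annotations attached to a single page URL."""
    response = requests.get(API, params={"uri": page_url, "limit": limit}, timeout=30)
    response.raise_for_status()
    return response.json()["rows"]

annotations = fetch_annotations("http://www.bitbybitbook.com/en/preface/")  # example URL
by_user = Counter(a["user"] for a in annotations)
print(f"{len(annotations)} annotations from {len(by_user)} annotators")
print(by_user.most_common(5))
```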

During the Open Review period, 31 people contributed 495 annotations. These annotations were extremely helpful, and they led to many improvements in Bit by Bit. People often ask how these annotations compare to peer review, and I think it is best to think of them as complementary. The peer review was done by experts, and the feedback that I received often pushed me to write a slightly different book. The Open Review, on the other hand, was done by a mix of experts and novices, and the feedback was more focused on helping me write the book that I was trying to write. A further difference is the granularity of the feedback. During peer review, the feedback often involved removing or adding entire chapters, whereas during Open Review the annotations were often focused on improving specific sentences.

The most common annotations were related to clunky writing. For example, an annotation by differentgranite urged me to avoid unnecessarily switching between “golf club” and “driver.” Likewise, an annotation by fasiha pointed out that I was using “call data” and “call logs” in a way that was confusing. Many, many small changes like these helped improve the manuscript.

In addition to helping with writing, some annotations showed me that I had skipped a step in my argument. For example an annotation by kerrymcc pointed out that when I was writing about asking people questions, I skipped qualitative interviews and jumped right to surveys. In the revised manuscript, I’ve added a paragraph that explains this distinction and why I focus on surveys.

The changes in the annotations described above might have come from a copy editor (although my copy editor was much more focused on grammar than writing). But, some of the annotations during Open Review could not have come from any copy editor. For example, an annotation by jugander pointed me to a paper I had not seen that was a wonderful illustration of a concept that I was trying to explain. Similarly, an annotation by pkrafft pointed out a very subtle problem in the way that I was describing the Computer Fraud and Abuse Act. These annotations were both from people with deep expertise in computational social science and they helped improve the intellectual content of the book.

A skeptic might read these examples and not be very impressed. It is certainly true that the Open Review process did not lead to massive changes to the book. But these examples, and dozens of others, are small improvements that I did make. Overall, I think these many small improvements added up to a major improvement.

Here are a few graphs summarizing the annotations.

Annotations by person: Most annotations were submitted by a small number of people.


Annotations by date: Most annotations were submitted relatively early in the process. The spike in late November occurred when a single person read the entire manuscript and made many helpful annotations.


Annotations by chapter: Chapters later in the book received fewer annotations, but the ethics chapter was somewhat of an exception.


Annotations by URL: Here are the 20 sections of the book that received the most annotations. In this case, I don’t see a clear pattern, but this might be helpful information for other projects.


One last thing to keep in mind about these annotations is that they underestimate the amount of feedback that I received, because they only count annotations that I received through the Open Review website. In fact, when people heard about Open Review, they sometimes invited me to give a talk or asked for a pdf of the manuscript on which they could comment. Basically, the Open Review website is a big sign that says “I want feedback,” and that feedback comes in a variety of forms in addition to the annotations.

One challenge with the annotations is that they come in continuously, but I tended to make my revisions in chunks. Therefore, there was often a long lag between when the annotation was made and when I responded. I think that participants in the Open Review process might have been more engaged if I had responded more quickly. I hope that future Open Review authors can figure out a better workflow for responding to and incorporating annotations into the manuscript.

Implicit feedback

In addition to the annotations, the second way that Open Review can lead to better books is through implicit feedback. That is, readers were voting with their clicks about which parts of the book were interesting or boring. And these “reader analytics” are apparently a hot thing in the commercial book publishing world. To be honest, this feedback proved less helpful than I had hoped, but that might be because I didn’t have a good dashboard in place. Here are five elements that I’d recommend for an Open Review dashboard, all of them possible with Google Analytics (a rough sketch of computing them from exported data follows the list):

  • Which parts of the book are being read the most?
  • What are the main entry pages?
  • What are the main exit pages?
  • What pages have the highest completion rate (based on scroll depth)?
  • What pages have the lowest completion rate (based on scroll depth)?
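Here is a rough sketch of how those five metrics might be computed from exported data. It assumes a hypothetical CSV with one row per page view and illustrative column names (“page”, “is_entrance”, “is_exit”, “max_scroll_pct”), not the actual Google Analytics export schema.

```python
# A minimal sketch of the five dashboard elements, computed from a hypothetical
# page-view export with one row per view.
import pandas as pd

views = pd.read_csv("open_review_pageviews.csv")

# 1. Which parts of the book are being read the most?
most_read = views["page"].value_counts().head(20)

# 2-3. Main entry and exit pages.
top_entries = views.loc[views["is_entrance"] == 1, "page"].value_counts().head(10)
top_exits = views.loc[views["is_exit"] == 1, "page"].value_counts().head(10)

# 4-5. Completion rate by page, treating a scroll depth of 90%+ as "completed".
completion = (
    views.assign(completed=views["max_scroll_pct"] >= 90)
         .groupby("page")["completed"].mean()
         .sort_values(ascending=False)
)

print(most_read, top_entries, top_exits, sep="\n\n")
print("Highest completion:\n", completion.head(10))
print("Lowest completion:\n", completion.tail(10))
```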

Psychological effects

There is one last way that Open Review led to a better book: it made me more energized to make revisions. To be honest, for me, writing Bit by Bit was frustrating and exhausting. It was a huge struggle to get to the point where the manuscript was ready for peer review and Open Review. Then, after receiving the feedback from peer review, I needed to revise the manuscript. Without the Open Review process—which I found exciting and rejuvenating—I’m not sure I would have had the mental energy that was needed to make revisions.

In conclusion, Open Review definitely helped make Bit by Bit better, and there are many ways that Open Review could be improved.

I want to say again that I’m grateful to everyone that contributed to the Open Review process:

benzevenbergen, bp3, cfelton, chase171, banivos, DBLarremore, differentgranite, dmerson, dmf, efosse, fasiha, huntr, jboy, jeschonnek.1, jtorous, jugander, kerrymcc, leohavemann, LMZ, Nick_Adams, nicolemarwell, nir, person, pkrafft, rchew, sculliwag, sjk, Stephen_L_Morgan, toz, vnemana

You can also read more about how the Open Review of Bit by Bit led to higher sales and increased access to knowledge. And, you can put your own manuscript through Open Review using the Open Review Toolkit, either by downloading the open-source code or hiring one of the preferred partners. The Open Review Toolkit is supported by a grant from the Alfred P. Sloan Foundation.

Matthew J. Salganik is professor of sociology at Princeton University, where he is also affiliated with the Center for Information Technology Policy and the Center for Statistics and Machine Learning. His research has been funded by Microsoft, Facebook, and Google, and has been featured on NPR and in such publications as the New Yorker, the New York Times, and the Wall Street Journal.

Matthew Salganik: Invisibilia, the Fragile Families Challenge, and Bit by Bit


This week’s episode of Invisibilia featured my research on the Fragile Families Challenge. The Challenge is a scientific mass collaboration that combines predictive modeling, causal inference, and in-depth interviews to yield insights that can improve the lives of disadvantaged children in the United States. Like many research projects, the Fragile Families Challenge emerged from a complex mix of inspirations. But, for me personally, a big part of the Fragile Families Challenge grew out of writing my new book Bit by Bit: Social Research in the Digital Age. In this post, I’ll describe how Bit by Bit helped give birth to the Fragile Families Challenge.

Bit by Bit is about social research in the age of big data. It is for social scientists who want to do more data science, data scientists who want to do more social science, and anyone interested in the combination of these two fields. Rather than being organized around specific data sources or machine learning methods, Bit by Bit progresses through four broad research designs: observing behavior, asking questions, running experiments, and creating mass collaboration. Each of these approaches requires a different relationship between researchers and participants, and each enables us to learn different things.

As I was working on Bit by Bit, many people seemed genuinely excited about most of the book—except the chapter on mass collaboration. When I talked about this chapter with colleagues and friends, I was often greeted with skepticism (or worse). Many of them felt that mass collaboration simply had no place in social research. In fact, at my book manuscript workshop—which was made up of people that I deeply respected—the general consensus seemed to be that I should drop this chapter from Bit by Bit.  But I felt strongly that it should be included, in part because it enabled researchers to do new and different kinds of things. The more time I spent defending the idea of mass collaboration for social research, the more I became convinced that it was really interesting, important, and exciting. So, once I finished up the manuscript for Bit by Bit, I set my sights on designing the mass collaboration that became the Fragile Families Challenge.

The Fragile Families Challenge, described in more detail at the project website and blog, should be seen as part of the larger landscape of mass collaboration research. Perhaps the most well known example of a mass collaboration solving a big intellectual problem is Wikipedia, where a mass collaboration of volunteers created a fantastic encyclopedia that is available to everyone.

Collaboration in research is nothing new, of course. What is new, however, is that the digital age enables collaboration with a much larger and more diverse set of people: the billions of people around the world with Internet access. I expect that these new mass collaborations will yield amazing results not just because of the number of people involved but also because of their diverse skills and perspectives. How can we incorporate everyone with an Internet connection into our research process? What could you do with 100 research assistants? What about 100,000 skilled collaborators?

As I write in Bit by Bit, I think it is helpful to roughly distinguish between three types of mass collaboration projects: human computation, open call, and distributed data collection.

Human computation projects are ideally suited for easy-task-big-scale problems, such as labeling a million images. These are projects that in the past might have been performed by undergraduate research assistants. Contributions to human computation projects don’t require specialized skills, and the final output is typically an average of all of the contributions. A classic example of a human computation project is Galaxy Zoo, where a hundred thousand volunteers helped astronomers classify a million galaxies.

Open call projects, on the other hand, are better suited for problems where you are looking for novel answers to clearly formulated questions. In the past, these are projects that might have involved asking colleagues. Contributions to open call projects come from people who may have specialized skills, and the final output is usually the best contribution. A classic example of an open call is the Netflix Prize, where thousands of scientists and hackers worked to develop new algorithms to predict customers’ ratings of movies.

Finally, distributed data collection projects are ideally suited for large-scale data collection. These are projects that in the past might have been performed by undergraduate research assistants or survey research companies. Contributions to distributed data collection projects typically come from people who have access to locations that researchers do not, and the final product is a simple collection of the contributions. A classic example of a distributed data collection is eBird, in which hundreds of thousands of volunteers contribute reports about birds they see.

Given this way of organizing things, you can think of the Fragile Families Challenge as an open call project, and when designing the Challenge, I drew inspiration from the other open call projects that I wrote about, such as the Netflix Prize, Foldit, and Peer-to-Patent.

If you’d like to learn more about how mass collaboration can be used in social research, I’d recommend reading Chapter 5 of Bit by Bit or watching this talk I gave at Stanford in the Human-Computer Interaction Seminar. If you’d like to learn more about the Fragile Families Challenge, which is ongoing, I’d recommend our project website and blog.  Finally, if you are interested in social science in the age of big data, I’d recommend reading all of Bit by Bit: Social Research in the Digital Age.

Matthew J. Salganik is professor of sociology at Princeton University, where he is also affiliated with the Center for Information Technology Policy and the Center for Statistics and Machine Learning. His research has been funded by Microsoft, Facebook, and Google, and has been featured on NPR and in such publications as the New Yorker, the New York Times, and the Wall Street Journal.

Matthew J. Salganik on Bit by Bit: Social Research in the Digital Age

In just the past several years, we have witnessed the birth and rapid spread of social media, mobile phones, and numerous other digital marvels. In addition to changing how we live, these tools enable us to collect and process data about human behavior on a scale never before imaginable, offering entirely new approaches to core questions about social behavior. Bit by Bit is the key to unlocking these powerful methods—a landmark book that will fundamentally change how the next generation of social scientists and data scientists explores the world around us. Matthew Salganik has provided an invaluable resource for social scientists who want to harness the research potential of big data and a must-read for data scientists interested in applying the lessons of social science to tomorrow’s technologies. Read on to learn more about the ideas in Bit by Bit.

Your book begins with a story about something that happened to you in graduate school. Can you talk a bit about that? How did that lead to the book?

That’s right. My dissertation research was about fads, something that social scientists have been studying for about as long as there have been social scientists. But because I happened to be in the right place at the right time, I had access to an incredibly powerful tool that my predecessors didn’t: the Internet. For my dissertation, rather than doing an experiment in a laboratory on campus—as many of my predecessors might have—we built a website where people could listen to and download new music. This website allowed us to run an experiment that just wasn’t possible in the past. In my book, I talk more about the scientific findings from that experiment, but while it was happening there was a specific moment that changed me and that directly led to this book. One morning, when I came into my basement office, I discovered that overnight about 100 people from Brazil had participated in my experiment. To me, this was completely shocking. At that time, I had friends running traditional lab experiments, and I knew how hard they had to work to have even 10 people participate. However, with my online experiment, 100 people participated while I was sleeping. Doing your research while you are sleeping might sound too good to be true, but it isn’t. Changes in technology—specifically the transition from the analog age to the digital age—mean that we can now collect and analyze social data in new ways. Bit by Bit is about doing social research in these new ways.

Who is this book for?

This book is for social scientists who want to do more data science, data scientists who want to do more social science, and anyone interested in the hybrid of these two fields. I spend time with both social scientists and data scientists, and this book is my attempt to bring the ideas from the communities together in a way that avoids the jargon of either community.  

In your talks, I’ve heard that you compare data science to a urinal.  What’s that about?

Well, I compare data science to a very specific, very special urinal: Fountain by the great French artist Marcel Duchamp. To create Fountain, Duchamp had a flash of creativity where he took something that was created for one purpose—going to the bathroom—and turned it into a piece of art. But most artists don’t work that way. For example, Michelangelo didn’t repurpose. When he wanted to create a statue of David, he didn’t look for a piece of marble that kind of looked like David: he spent three years laboring to create his masterpiece. David is not a readymade; it is a custommade.

These two styles—readymades and custommades—roughly map onto styles that can be employed for social research in the digital age. My book has examples of data scientists cleverly repurposing big data sources that were originally created by companies and governments. In other examples, however, social scientists start with a specific question and then use the tools of the digital age to create the data needed to answer that question. When done well, both of these styles can be incredibly powerful. Therefore, I expect that social research in the digital age will involve both readymades and custommades; it will involve both Duchamps and Michelangelos.

Bit by Bit devotes a lot of attention to ethics. Why?

The book provides many examples of how researchers can use the capabilities of the digital age to conduct exciting and important research. But, in my experience, researchers who wish to take advantage of these new opportunities will confront difficult ethical decisions. In the digital age, researchers—often in collaboration with companies and governments—have increasing power over the lives of participants. By power, I mean the ability to do things to people without their consent or even awareness. For example, researchers can now observe the behavior of millions of people, and researchers can also enroll millions of people in massive experiments. As the power of researchers is increasing, there has not been an equivalent increase in clarity about how that power should be used. In fact, researchers must decide how to exercise their power based on inconsistent and overlapping rules, laws, and norms. This combination of powerful capabilities and vague guidelines can force even well-meaning researchers to grapple with difficult decisions. In the book, I try to provide principles that can help researchers—whether they are in universities, governments, or companies—balance these issues and move forward in a responsible way.

Your book went through an unusual Open Review process in addition to peer review. Tell me about that.

That’s right. This book is about social research in the digital age, so I also wanted to publish it in a digital age way. As soon as I submitted the book manuscript for peer review, I also posted it online for an Open Review during which anyone in the world could read it and annotate it. During this Open Review process dozens of people left hundreds of annotations, and I combined these annotations with the feedback from peer review to produce a final manuscript. I was really happy with the annotations that I received, and they really helped me improve the book.

The Open Review process also allowed us to collect valuable data. Just as the New York Times is tracking which stories get read and for how long, we could see which parts of the book were being read, how people arrived at the book, and which parts of the book were causing people to stop reading.

Finally, the Open Review process helped us get the ideas in the book in front of the largest possible audience. During Open Review, we had readers from all over the world, and we even had a few course adoptions. Also, in addition to posting the manuscript in English, we machine translated it into more than 100 languages, and we saw that these other languages increased our traffic by about 20%.

Was putting your book through Open Review scary?

No, it was exhilarating. Our back-end analytics allowed me to see that people from around the world were reading it, and I loved the feedback that I received. Of course, I didn’t agree with all the annotations, but they were offered in a helpful spirit, and, as I said, many of them really improved the book.

Actually, the thing that is really scary to me is putting out a physical book that can’t be changed anymore. I wanted to get as much feedback as possible before the really scary thing happened.

And now you’ve made it easy for other authors to put their manuscripts through Open Review?

Absolutely. With a grant from the Sloan Foundation, we’ve released the Open Review Toolkit. It is open source software that enables authors and publishers to convert book manuscripts into a website that can be used for Open Review. And, as I said, during Open Review you can receive valuable feedback to help improve your manuscript, feedback that is very complementary to the feedback from peer review. During Open Review, you can also collect valuable data to help launch your book. Furthermore, all of these good things happen at the same time that you are increasing access to scientific research, which is a core value of many authors and academic publishers.

Matthew J. Salganik is professor of sociology at Princeton University, where he is also affiliated with the Center for Information Technology Policy and the Center for Statistics and Machine Learning. His research has been funded by Microsoft, Facebook, and Google, and has been featured on NPR and in such publications as the New Yorker, the New York Times, and the Wall Street Journal.

Elizabeth Currid-Halkett: Conspicuous consumption is over. It’s all about intangibles now

In 1899, the economist Thorstein Veblen observed that silver spoons and corsets were markers of elite social position. In Veblen’s now famous treatise The Theory of the Leisure Class, he coined the phrase ‘conspicuous consumption’ to denote the way that material objects were paraded as indicators of social position and status. More than 100 years later, conspicuous consumption is still part of the contemporary capitalist landscape, and yet today, luxury goods are significantly more accessible than in Veblen’s time. This deluge of accessible luxury is a function of the mass-production economy of the 20th century, the outsourcing of production to China, and the cultivation of emerging markets where labour and materials are cheap. At the same time, we’ve seen the arrival of a middle-class consumer market that demands more material goods at cheaper price points.

However, the democratisation of consumer goods has made them far less useful as a means of displaying status. In the face of rising social inequality, both the rich and the middle classes own fancy TVs and nice handbags. They both lease SUVs, take airplanes, and go on cruises. On the surface, the ostensible consumer objects favoured by these two groups no longer reside in two completely different universes.

Given that everyone can now buy designer handbags and new cars, the rich have taken to using much more tacit signifiers of their social position. Yes, oligarchs and the superrich still show off their wealth with yachts and Bentleys and gated mansions. But the dramatic changes in elite spending are driven by a well-to-do, educated elite, or what I call the ‘aspirational class’. This new elite cements its status through prizing knowledge and building cultural capital, not to mention the spending habits that go with it – preferring to spend on services, education and human-capital investments over purely material goods. These new status behaviours are what I call ‘inconspicuous consumption’. None of the consumer choices that the term covers are inherently obvious or ostensibly material but they are, without question, exclusionary.

The rise of the aspirational class and its consumer habits is perhaps most salient in the United States. The US Consumer Expenditure Survey data reveals that, since 2007, the country’s top 1 per cent (people earning upwards of $300,000 per year) are spending significantly less on material goods, while middle-income groups (earning approximately $70,000 per year) are spending the same, and their trend is upward. Eschewing an overt materialism, the rich are investing significantly more in education, retirement and health – all of which are immaterial, yet cost many times more than any handbag a middle-income consumer might buy. The top 1 per cent now devote the greatest share of their expenditures to inconspicuous consumption, with education forming a significant portion of this spend (accounting for almost 6 per cent of top 1 per cent household expenditures, compared with just over 1 per cent of middle-income spending). In fact, top 1 per cent spending on education has increased 3.5 times since 1996, while middle-income spending on education has remained flat over the same time period.

The vast chasm between middle-income and top 1 per cent spending on education in the US is particularly concerning because, unlike material goods, education has become more and more expensive in recent decades. Thus, there is a greater need to devote financial resources to education to be able to afford it at all. According to Consumer Expenditure Survey data from 2003-2013, the price of college tuition increased 80 per cent, while the cost of women’s apparel increased by just 6 per cent over the same period. The middle class’s lack of investment in education doesn’t suggest a lack of prioritising as much as it reveals that, for those in the 40th-60th income percentiles, education is so cost-prohibitive it’s almost not worth trying to save for.

While much inconspicuous consumption is extremely expensive, it shows itself through less expensive but equally pronounced signalling – from reading The Economist to buying pasture-raised eggs. Inconspicuous consumption in other words, has become a shorthand through which the new elite signal their cultural capital to one another. In lockstep with the invoice for private preschool comes the knowledge that one should pack the lunchbox with quinoa crackers and organic fruit. One might think these culinary practices are a commonplace example of modern-day motherhood, but one only needs to step outside the upper-middle-class bubbles of the coastal cities of the US to observe very different lunch-bag norms, consisting of processed snacks and practically no fruit. Similarly, while time in Los Angeles, San Francisco and New York City might make one think that every American mother breastfeeds her child for a year, national statistics report that only 27 per cent of mothers fulfill this American Academy of Pediatrics goal (in Alabama, that figure hovers at 11 per cent).

Knowing these seemingly inexpensive social norms is itself a rite of passage into today’s aspirational class. And that rite is far from costless: The Economist subscription might set one back only $100, but the awareness to subscribe and be seen with it tucked in one’s bag is likely the iterative result of spending time in elite social milieus and expensive educational institutions that prize this publication and discuss its contents.

Perhaps most importantly, the new investment in inconspicuous consumption reproduces privilege in a way that previous conspicuous consumption could not. Knowing which New Yorker articles to reference or what small talk to engage in at the local farmers’ market enables and displays the acquisition of cultural capital, thereby providing entry into social networks that, in turn, help to pave the way to elite jobs, key social and professional contacts, and private schools. In short, inconspicuous consumption confers social mobility.

More profoundly, investment in education, healthcare and retirement has a notable impact on consumers’ quality of life, and also on the future life chances of the next generation. Today’s inconspicuous consumption is a far more pernicious form of status spending than the conspicuous consumption of Veblen’s time. Inconspicuous consumption – whether breastfeeding or education – is a means to a better quality of life and improved social mobility for one’s own children, whereas conspicuous consumption is merely an end in itself – simply ostentation. For today’s aspirational class, inconspicuous consumption choices secure and preserve social status, even if they do not necessarily display it.

Elizabeth Currid-Halkett is the James Irvine Chair in Urban and Regional Planning and professor of public policy at the Price School, University of Southern California. Her latest book is The Sum of Small Things: A Theory of the Aspirational Class (2017). She lives in Los Angeles.

This article was originally published at Aeon and has been republished under Creative Commons.

Dalton Conley & Jason Fletcher on how genomics is transforming the social sciences

Social sciences have long been leery of genetics, but in the past decade, a small but intrepid group of economists, political scientists, and sociologists have harnessed the genomics revolution to paint a more complete picture of human social life. The Genome Factor shows how genomics is transforming the social sciences—and how social scientists are integrating both nature and nurture into a unified, comprehensive understanding of human behavior at both the individual and society-wide levels. The book raises pertinent questions: Can and should we target policies based on genotype? What evidence demonstrates how genes and environments work together to produce socioeconomic outcomes? Recently, The Genome Factor‘s authors, Dalton Conley and Jason Fletcher, answered some questions about their work.

What inspired you to write The Genome Factor?

JF: Our book discusses how findings and theories in genetics and the biological sciences have shaped social science inquiry—the theories, methodologies, and interpretations of findings used in economics, sociology, political science, and related disciplines—both historically and in the newer era of molecular genetics. We have witnessed, and participated in, a period of rapid change and cross-pollination between the social and biological sciences. Our book draws out some of the major implications of this cross-pollination—we particularly focus on how new findings in genetics have overturned ideas and theories in the social sciences. We also use a critical eye to evaluate what social scientists and the broader public should believe about the overwhelming number of new findings produced in genetics.

What insights did you learn in writing the book?

JF: Genetics, and the human genome project in particular, has been quite successful and influential in the past two decades, but it has also experienced major setbacks and is still reeling from years of disappointments and a paradigm shift. There has been a major re-evaluation and resetting of expectations about the clarity and power of genetic effects. Only 15 years ago, a main model was the so-called OGOD model—one gene, one disease. While there are a few important examples where this model works, it has mostly failed. This failure has had wide implications for how genetic analysis is conducted, as well as for the rethinking of previous results, many of which are now thought to be false findings. Now, much analysis is conducted using data from tens or hundreds of thousands of people, because the thinking is that most disease is caused by tens, hundreds, or even thousands of genes that each have a tiny effect. This shift has major implications for social science as well. It means genetic effects are diffuse and subtle, which makes it challenging to combine genetic and social science research. Genetics has also shifted from a science of mechanistic understanding to a large-scale data-mining enterprise. As social scientists, we find this approach in opposition to our norms of producing evidence. This is something we will need to struggle through in the future.

How did you select the topics for the book chapters?

JF: We wanted to tackle big topics across multiple disciplines. We discuss some of the recent history of combining genetics and social science, before the molecular revolution, when “genetics” were inferred from family relationships rather than measured directly. We then pivot to provide examples of cutting-edge research in economics and sociology that has incorporated genetics to push social science inquiry forward. One example is the use of population genetic changes as a determinant of levels of economic development across the world. We also turn our attention to the near future and discuss how policy decisions may be affected by the inclusion of genetic data in social science and policy analysis. Can and should we target policies based on genotype? What evidence do we have that demonstrates how genes and environments work together to produce socioeconomic outcomes?

What impact do you hope The Genome Factor will have?

JF: We hope that readers see the promise as well as the perils of combining genetic and social science analysis. We provide a lot of examples of ongoing work, but also want to show the reader how we think about the larger issues that will remain as genetics progresses. We seek to show the reader how to look through a social science lens when thinking about genetic discoveries. This is a rapidly advancing field, so the particular examples we discuss will be out of date soon, but we want our broader ideas and lens to have longer staying power. As an example, advances in gene editing (CRISPR) have the potential to fundamentally transform genetic analysis. We discuss these gene editing discoveries in the context of some of their likely social impacts.

Dalton Conley is the Henry Putnam University Professor of Sociology at Princeton University. His many books include Parentology: Everything You Wanted to Know about the Science of Raising Children but Were Too Exhausted to Ask. He lives in New York City. Jason Fletcher is Professor of Public Affairs, Sociology, Agricultural and Applied Economics, and Population Health Sciences at the University of Wisconsin–Madison. He lives in Madison. They are the authors of The Genome Factor: What the Social Genomics Revolution Reveals about Ourselves, Our History, and the Future.

Ethicist Jason Brennan: Brexit, Democracy, and Epistocracy

By Jason Brennan

The Washington Post reports that there is a sharp uptick today in the number of Britons Googling basic questions about what the European Union is and what the implications of leaving are. This is a bit like deciding to study after you’ve already taken the final exam.

Technically, the Brexit referendum is not binding. Parliament could decide to hold their own vote on whether to leave the European Union. Perhaps they should. Perhaps the UK’s leaders owe it to the people to thwart their expressed will.

Leaving the EU is no small affair. It probably will have enormous effects on the UK, Europe, and much of the rest of the world. But just what these effects will be is unclear. To have even a rudimentary sense of the pros and cons of Brexit, a person would need to possess tremendous social scientific knowledge. One would need to know about the economics and sociology of trade and immigration, the politics of centralized regulation, and the history of nationalist movements. But there is no reason to think even a tenth of the UK’s population has a basic grasp of the social science needed to evaluate Brexit.

Political scientists have been studying voter knowledge for the past 60 years. The results are uniformly depressing. Most voters in most countries are systematically ignorant of even the most basic political facts, let alone the more advanced social scientific theories needed to make sense of these facts.

This brings us to the central injustice of democracy, and why holding a referendum was a bad idea. Imagine, as an analogy, that you are sick. You go to a doctor. But suppose your “doctor” doesn’t study the facts, doesn’t know any medicine, and makes her decisions about how to treat you on a whim, on the basis of prejudice or wishful thinking. Imagine the doctor not only prescribes you a course of treatment, but literally forces you, at gunpoint, to accept the treatment.

We’d find this behavior intolerable. Your doctor owes you a duty of care. She owes it to you to deliver an expert opinion on the basis of good information, a strong background knowledge of medicine, and only after considering the facts in a rational and scientific way. To force you to follow the decisions of an incompetent and bad-faith doctor is unjust.

But this is roughly what happens in democracy. Most voters are ignorant of both basic political facts and the background social scientific theories needed to evaluate those facts. They process what little information they have in highly biased and irrational ways. They decide largely on whim. And, worse, we’re each stuck having to put up with the group’s decision. Unless you’re one of the lucky few who has the right and means to emigrate, you’re forced to accept your democracy’s poorly chosen decisions.

There’s a big dilemma in the design of political institutions. Should we be ruled by the few or the many? What this amounts to is the choice between being ruled by the smart but selfish or dumb but nice. When only a small number of people hold power, they tend to use this power for their own ends at the expense of everyone else. If a king holds all the power, his decisions matter. He will likely use that power in a smart way, but smart for himself, rather than smart for everybody. Suppose instead we give everyone power. In doing so, we largely remove the incentive and ability for people to use power in self-serving ways at the expense of everyone else. But, at the same time, we remove the incentive for people to use power wisely. Since individual votes count for so little, individual voters have no incentive to become well-informed or to process information with any degree of care. Democracy incentivizes voters to be dumb.

Going back to the doctor analogy, here’s the dilemma: Suppose you could choose between two doctors. The first doctor prescribes you medicine based on what’s good for her, not you. The second is a complete fool who prescribes you medicine on whim and fancy, without reference to the facts. Roughly, with some exaggeration, that’s what the choice between monarchy or democracy amounts to. Neither is appealing.

What if there were a third way, though? In my forthcoming book, Against Democracy, I explore a way of splitting the difference. The trick is to find a political system that both 1) spreads power out enough to prevent people from using power selfishly and 2) weeds out or at least reduces the power of incompetent decision-makers.

In some sense, republican democracy, with checks and balances, was meant to do just that. And to a significant degree it succeeds. But perhaps a new system, epistocracy, could do even better.

In an epistocracy, political power is to some degree apportioned according to knowledge. An epistocracy might retain the major institutions we see in republican democracy, such as parties, mass elections, constitutional review, and the like. But in an epistocracy, not everyone has equal basic political power. An epistocracy might grant some people additional voting power, or might restrict the right to vote only to those that could pass a very basic test of political knowledge. Any such system will be subject to abuse, and will suffer from significant government failures. But that’s true of democracy too. The interesting question is whether epistocracy, warts and all, would perform better than democracy, warts and all.

All across the West, we’re seeing the rise of angry, resentful, nationalist, xenophobic and racist movements, movements made up mostly of low-information voters. Perhaps it’s time to put aside the childish and magical theory that democracy is intrinsically just, and start asking the serious question of whether there are better alternatives. The stakes are high.

Jason Brennan is the Robert J. and Elizabeth Flanagan Family Associate Professor of Strategy, Economics, Ethics, and Public Policy at the McDonough School of Business at Georgetown University. He is the author of The Ethics of Voting (Princeton), Why Not Capitalism?, and Libertarianism. He is the coauthor of Markets without Limits, Compulsory Voting, and A Brief History of Liberty. His new book, Against Democracy, is out this August. He writes regularly for Bleeding Heart Libertarians, a blog.

Mark Zuckerberg chooses Michael Chwe’s RATIONAL RITUAL for Facebook Books!

Rational Ritual: Culture, Coordination, and Common Knowledge by Michael Chwe has been selected by none other than Mark Zuckerberg as the latest pick in his “Year of Books.” Analyzing rituals across histories and cultures, Rational Ritual shows how a single and simple concept, common knowledge, holds the key to the coordination of any number of actions, from those used in advertising to those used to fuel revolutions.

From Mark Zuckerberg’s Facebook post:

The book is about the concept of “common knowledge” and how people process the world not only based on what we personally know, but what we know other people know and our shared knowledge as well.

This is an important idea for designing social media, as we often face tradeoffs between creating personalized experiences for each individual and crafting universal experiences for everyone. I’m looking forward to exploring this further.

Zuckerberg isn’t the first to take note of Michael Chwe’s talent for making unusual and intriguing connections. As Virginia Postrel wrote in the New York Times, “[His] work, like his own academic career, bridges several social sciences.” Not long ago, his book Jane Austen, Game Theorist created a stir on social media, triggering debates and garnering a hugely popular feature by Jennifer Schuessler.

A Q&A with Chwe will be coming out on Facebook Books in the coming weeks. In the meantime, head over to Facebook to comment on Rational Ritual, or follow the discussion.  Congratulations, Michael Chwe!

Running Randomized Evaluations

“The popularity of randomized evaluations among researchers and policymakers is growing and holds great promise for a world where decision making will be based increasingly on rigorous evidence and creative thinking. However, conducting a randomized evaluation can be daunting. There are many steps, and decisions made early on can have unforeseen implications for the life of the project. This book, based on more than a decade of personal experience by a foremost practitioner and a wealth of knowledge gathered over the years by researchers at J-PAL, provides both comfort and guidance to anyone seeking to engage in this process.”—Esther Duflo, codirector of J-PAL and coauthor of Poor Economics

Running Randomized Evaluations: A Practical Guide
Rachel Glennerster & Kudzai Takavarasha

This book provides a comprehensive yet accessible guide to running randomized impact evaluations of social programs. Drawing on the experience of researchers at the Abdul Latif Jameel Poverty Action Lab, which has run hundreds of such evaluations in dozens of countries throughout the world, it offers practical insights on how to use this powerful technique, especially in resource-poor environments.

This step-by-step guide explains why and when randomized evaluations are useful, in what situations they should be used, and how to prioritize different evaluation opportunities. It shows how to design and analyze studies that answer important questions while respecting the constraints of those working on and benefiting from the program being evaluated. The book gives concrete tips on issues such as improving the quality of a study despite tight budget constraints, and demonstrates how the results of randomized impact evaluations can inform policy.

Suggested courses:

  • Program evaluation courses taught in Master in Public Administration/International Development, Master of Business Administration, and Master of Public Administration programs.
  • Master of Public Policy courses focusing on economics and impact evaluation.

Endorsements

Table of Contents

Chapter 1 (PDF)
Request an examination copy.