Matthew Salganik: The Open Review of Bit by Bit, Part 2—Higher sales

This post is the second in a three-part series about the Open Review of Bit by Bit: Social Research in the Digital Age. This post describes how Open Review led to higher sales. The other posts in this series describe how Open Review led to a better book and increased access to knowledge.

Before talking about sales in more detail, I think I should start by acknowledging that it is a bit unusual for authors to talk about this stuff. But sales are an important part of the Open Review process because of one simple and inescapable fact: publishers need revenue. My editor is amazing, and she’s spent a lot of time making Bit by Bit better, as have her colleagues who do production and design. These people need to be paid salaries, and those salaries have to come from somewhere. If you want to work with a publisher—even a non-profit publisher—then you have to be sensitive to the fact that they need revenue to be sustainable. Fortunately, in addition to better books and increased access to knowledge, Open Review also helps sell books. So for the rest of this post, I’m going to provide a purely economic assessment of the Open Review process.

One of the first questions that some people ask about Open Review is: “Aren’t you giving your book away for free?” And the answer is definitely no. Open Review is free like Google is free.

Notice that Google makes a lot of money without ever charging you anything. That’s because you are giving Google something valuable: your data and your attention. Google then monetizes what you provide. Open Review works the same way.

In addition to improving the manuscript, which should lead to more sales, there are three main ways that Open Review increases sales: collecting email addresses, providing market intelligence, and promoting course adoptions.

Email addresses

After discussions with my editor, we decided that the main business metric during the Open Review of Bit by Bit would be the number of email addresses collected from people who wanted to be notified when the book was complete. These addresses are valuable to the publisher because they can form the basis of a successful launch for the book.

How did we collect email addresses? Simple: we just asked people, like this:

[Screenshot: the email sign-up prompt from the Open Review website]

During the Open Review process we collected 340 unique, valid email addresses. Aside from a spike at the beginning, they arrived at a pace of about one per day, with no sign of slowing down.

[Figure: email sign-ups over time during the Open Review]
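For readers who want to track the pace of sign-ups on their own project, here is a minimal sketch in Python with pandas, assuming the sign-up timestamps live in a CSV file; the file name and column name are hypothetical:

```python
import pandas as pd

# Hypothetical sketch: compute daily sign-up counts from a CSV of timestamps.
signups = pd.read_csv("signups.csv", parse_dates=["timestamp"])
per_day = signups.set_index("timestamp").resample("D").size()

print(per_day.mean())                    # average sign-ups per day (about 1 for us)
print(per_day.rolling(7).mean().tail())  # recent 7-day trend, to check for slowing
```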

Who are these people? One quick way to summarize them is to look at the email endings (.com, .edu, .jp, etc.). Based on this data, it seems that Open Review helped us collect email addresses from people all over the world.

[Figure: email addresses by ending (.com, .edu, .jp, etc.)]

Another way to summarize the types of people who provided their email address is to look at the email suffixes (everything that comes after @). This shows, for example, which schools and companies are most represented.
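For anyone who wants to produce these kinds of summaries for their own Open Review site, a minimal sketch in Python might look like the following; the sample addresses are made up:

```python
from collections import Counter

# Illustrative sketch: summarize sign-up emails by ending (.com, .edu, .jp, ...)
# and by full suffix (everything after the @). The addresses are invented.
emails = ["reader@princeton.edu", "fan@example.co.jp", "student@mit.edu"]

suffixes = [e.rsplit("@", 1)[1].lower() for e in emails]
endings = ["." + s.rsplit(".", 1)[1] for s in suffixes]

print(Counter(endings).most_common())   # e.g., [('.edu', 2), ('.jp', 1)]
print(Counter(suffixes).most_common())  # which schools and companies appear most
```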

Just collecting 340 email addresses was enough to significantly increase sales of Bit by Bit. And, in future Open Review projects, authors and publishers can get better at collecting email addresses. Just as Amazon is constantly running experiments to get you to buy more stuff, and the New York Times is running experiments to get you to click on more headlines, we were running experiments to collect more addresses. And unlike the experiments by Amazon and the New York Times, our experiments were overseen by Princeton’s Institutional Review Board for Human Subjects.

We tried six different ways to collect email addresses, and then we let Google Analytics use a multi-armed bandit approach to find the best one. Here’s how they compared:

[Figure: email sign-up rates for the six variants tested]

These differences are not huge, but they illustrate that Open Review websites can use the same kinds of conversion optimization techniques that are common on modern commercial websites. And I’m confident that future Open Review projects could achieve an even higher rate of email sign-ups with additional design improvements and experimentation.
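Google Analytics handled the mechanics for us, but for readers curious about what a multi-armed bandit actually does, here is a minimal sketch in Python using Thompson sampling. The variant names and the specific algorithm are illustrative assumptions on my part, not the actual internals of Google Analytics or of our sign-up forms:

```python
import random

# Hypothetical sketch of multi-armed bandit testing via Thompson sampling.
# Each variant's sign-up rate gets a Beta(signups + 1, failures + 1) posterior;
# we show whichever variant wins after sampling one plausible rate from each.

VARIANTS = ["form_a", "form_b", "form_c", "form_d", "form_e", "form_f"]

stats = {v: {"signups": 0, "views": 0} for v in VARIANTS}

def choose_variant():
    """Pick the variant to show the next visitor."""
    def sampled_rate(v):
        s = stats[v]
        failures = s["views"] - s["signups"]
        return random.betavariate(s["signups"] + 1, failures + 1)
    return max(VARIANTS, key=sampled_rate)

def record_outcome(variant, signed_up):
    """Update counts after a visitor sees a form."""
    stats[variant]["views"] += 1
    if signed_up:
        stats[variant]["signups"] += 1
```

The key design property is that variants with better observed sign-up rates get shown more and more often, while weaker variants still receive occasional traffic in case the early data was misleading.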

Market intelligence

In addition to collecting email addresses, the Open Review process also provided market intelligence that helped us tailor the marketing of the book. For example, using a tool called Google Webmaster Tools, you can see which parts of your book are being linked to:

[Figure: most-linked-to pages of the book, from Google Webmaster Tools]

From this information, we learned that in addition to the book itself, people were most interested in the Open Review process and the chapter on Ethics. Then, when we were developing marketing copy for the book, we tried to emphasize this chapter.

Using Google Webmaster Tools, you can also see which search terms are leading people to your book. In my case, 9 of the top 10 terms were not in English (in fact, 48 of the top 50 terms were not in English). This is because of the machine translation process, which I talk about more in the post on increased access to knowledge. I was hoping that we would receive more organic search traffic in English, but as I learned during this project, it is very hard to show up in the top 10 organic search results for most keywords.

[Figure: top search terms leading people to the book]

In case you are curious, গবেষণা নকশা means “research design” in Bengali (Bangla).  

A final way that this market intelligence was helpful was in selling foreign rights to the book. For example, I provided this map of global traffic to representatives from Princeton University Press before they went to the London Book Fair to sell the foreign rights to Bit by Bit. The traffic showed, in a very concrete way, that there was interest in the book outside of the United States.

[Figure: map of global traffic to the Open Review site]

Course adoptions

Finally, in addition to email addresses to help launch the book and market intelligence, Open Review accelerates course adoptions. My understanding is that there is typically a slow ramp-up in course adoptions over a period of several years. But that slow ramp-up would be problematic for my book, which is freshest right when published and will gradually go stale over time. Given that the lifespan of this edition is limited, early course adoptions are key, and Open Review helped with that. I know of about 10 courses (list here) that adopted the book, in whole or in part, during the Open Review process. This helped prime the pump for course adoptions when the book went on sale.

In this post, I’ve tried to describe the business case for Open Review, and I’ve shown how Open Review can help with collecting email addresses, gathering market intelligence, and speeding course adoptions. I think that, in purely economic terms, Open Review makes sense for publishers and authors for some books. If more people explore and develop Open Review as a model, I expect that these economic benefits will increase. Further, this simple economic analysis does not count the benefits that come from better books and increased access to knowledge, two things that both authors and publishers value.

This post is the second in a three-part series about the Open Review of Bit by Bit. You can also read more about how the Open Review of Bit by Bit led to a better book and increased access to knowledge. And, you can put your own manuscript through Open Review using the Open Review Toolkit, either by downloading the open-source code or by hiring one of the preferred partners. The Open Review Toolkit is supported by a grant from the Alfred P. Sloan Foundation.

Matthew Salganik: Invisibilia, the Fragile Families Challenge, and Bit by Bit

This week’s episode of Invisibilia featured my research on the Fragile Families Challenge. The Challenge is a scientific mass collaboration that combines predictive modeling, causal inference, and in-depth interviews to yield insights that can improve the lives of disadvantaged children in the United States. Like many research projects, the Fragile Families Challenge emerged from a complex mix of inspirations. But, for me personally, a big part of the Fragile Families Challenge grew out of writing my new book Bit by Bit: Social Research in the Digital Age. In this post, I’ll describe how Bit by Bit helped give birth to the Fragile Families Challenge.

Bit by Bit is about social research in the age of big data. It is for social scientists who want to do more data science, data scientists who want to do more social science, and anyone interested in the combination of these two fields. Rather than being organized around specific data sources or machine learning methods, Bit by Bit progresses through four broad research designs: observing behavior, asking questions, running experiments, and creating mass collaboration. Each of these approaches requires a different relationship between researchers and participants, and each enables us to learn different things.

As I was working on Bit by Bit, many people seemed genuinely excited about most of the book, except the chapter on mass collaboration. When I talked about this chapter with colleagues and friends, I was often greeted with skepticism (or worse). Many of them felt that mass collaboration simply had no place in social research. In fact, at my book manuscript workshop, which was made up of people that I deeply respected, the general consensus seemed to be that I should drop this chapter from Bit by Bit. But I felt strongly that it should be included, in part because it enables researchers to do new and different kinds of things. The more time I spent defending the idea of mass collaboration for social research, the more convinced I became that it was really interesting, important, and exciting. So, once I finished the manuscript for Bit by Bit, I set my sights on designing the mass collaboration that became the Fragile Families Challenge.

The Fragile Families Challenge, described in more detail at the project website and blog, should be seen as part of the larger landscape of mass collaboration research. Perhaps the most well known example of a mass collaboration solving a big intellectual problem is Wikipedia, where a mass collaboration of volunteers created a fantastic encyclopedia that is available to everyone.

Collaboration in research is nothing new, of course. What is new, however, is that the digital age enables collaboration with a much larger and more diverse set of people: the billions of people around the world with Internet access. I expect that these new mass collaborations will yield amazing results not just because of the number of people involved but also because of their diverse skills and perspectives. How can we incorporate everyone with an Internet connection into our research process? What could you do with 100 research assistants? What about 100,000 skilled collaborators?

As I write in Bit by Bit, I think it is helpful to roughly distinguish between three types of mass collaboration projects: human computation, open call, and distributed data collection.

Human computation projects are ideally suited for easy-task-big-scale problems, such as labeling a million images. These are projects that in the past might have been performed by undergraduate research assistants. Contributions to human computation projects don’t require specialized skills, and the final output is typically an average of all of the contributions. A classic example of a human computation project is Galaxy Zoo, where a hundred thousand volunteers helped astronomers classify a million galaxies.

Open call projects, on the other hand, are more suited for problems where you are looking for novel answers to clearly formulated questions. In the past, these are projects that might have involved asking colleagues. Contributions to open call projects come from people who may have specialized skills, and the final output is usually the best contribution. A classic example of an open call is the Netflix Prize, where thousands of scientists and hackers worked to develop new algorithms to predict customers’ ratings of movies.

Finally, distributed data collection projects are ideally suited for large-scale data collection. These are projects that in the past might have been performed by undergraduate research assistants or survey research companies. Contributions to distributed data collection projects typically come from people who have access to locations that researchers do not, and the final product is a simple collection of the contributions. A classic example of a distributed data collection is eBird, in which hundreds of thousands of volunteers contribute reports about birds they see.

Given this way of organizing things, you can think of the Fragile Families Challenge as an open call project. When designing the Challenge, I drew inspiration from the other open call projects that I wrote about, such as the Netflix Prize, Foldit, and Peer-to-Patent.

If you’d like to learn more about how mass collaboration can be used in social research, I’d recommend reading Chapter 5 of Bit by Bit or watching this talk I gave at Stanford in the Human-Computer Interaction Seminar. If you’d like to learn more about the Fragile Families Challenge, which is ongoing, I’d recommend our project website and blog.  Finally, if you are interested in social science in the age of big data, I’d recommend reading all of Bit by Bit: Social Research in the Digital Age.

Matthew J. Salganik is professor of sociology at Princeton University, where he is also affiliated with the Center for Information Technology Policy and the Center for Statistics and Machine Learning. His research has been funded by Microsoft, Facebook, and Google, and has been featured on NPR and in such publications as the New Yorker, the New York Times, and the Wall Street Journal.