I don’t get them often, but by some statistical fluke, last summer I got invitations to and attended the faculty summits at Google, Microsoft, and Facebook within a period of two weeks. I’ll speak about the summits themselves in a different post. In this one, I’ll talk about one existential question that arose while attending them: given the tremendous resources of these companies, does the research we CS faculty do actually make a difference?
Coincidentally, the day of the Google summit, one of my past collaborators announced that he was abandoning his tenured faculty position to take up a job at Google. His primary reason was “to make a difference”. While some of us faculty believe that the quest for knowledge is its own justification, a lot of faculty do share that motivation to make a difference with our research. Is that possible in CS at the present time?
The usual narrative talks about all the important companies that emerged from academia, like Google from Stanford and RSA from MIT. But it’s hard to assess the counterfactual: if universities hadn’t launched these companies, would the same innovations have appeared at existing companies?
There are some plausible arguments that our research is superfluous. Google, Facebook, and Microsoft are full of brilliant researchers and engineers who are just as capable as we are of coming up with fundamental innovations. And they have far more resources than we do to actually pursue those innovations. They far outnumber our students: the Taulbee survey says about 1,500 CS PhDs graduate per year, which suggests that we have around 7,500 enrolled, while the companies employ tens of thousands of computer scientists, including, I suspect, more PhDs than we have enrolled. Their equipment dwarfs what any academic can access—scanning the entire web is straightforward at these companies thanks to their spidering engines, but really challenging for any academic. Their user base is huge, permitting massive A/B studies at scales that we academics cannot command, and that can therefore measure incredibly small effects.
Historically, academics have also been tasked with carrying out “high risk/reward” research—stuff that was unlikely to pay off but would have a big impact if it did. Companies avoided such research. That was and still is meaningful when creating a new chip architecture or solving artificial intelligence, which requires massive investment before you see if your risk has paid off. But nowadays, in many domains of computer science, it’s possible to throw together a prototype system in just a few weeks. I think this changes the risk calculus, since a company can afford to cheaply take many high risks knowing that some of them will pay off. Google’s 20% time is a great example of how this can work.
On the flip side, I believe there is important research that companies could do but choose not to, where we academics still have an opportunity to make a difference. Perhaps this was best represented by a slogan at Facebook: “done is better than perfect”. I agree with the sentiment, but it does reflect the fact that these fast-moving companies don’t have time to really dwell on a problem. They also often don’t take the time to tell anyone else about what they did. Academics have the luxury to take their time, and I think there is value in doing so. If we can work out a more “perfect” solution, and document it by publication, then the next company to face that problem will have access to a solution that is better than the first try, and immediately available without requiring further investment of research or development effort. It may be depressing for faculty to consider that, instead of being at the cutting edge of research, we’re “backfilling”—bringing into public view ideas that companies already knew. But there’s clear social benefit to doing so—a way for us to make a difference. As a great example, Martin Hellman “didn’t care what the NSA knew” when he developed public key cryptography, because he sensed there would be huge value in its being known publicly.
As a selfish example, my group has developed list.it, a lightweight note taking client. It’s not all that different from commercial tools like EverNote or . It’s got about 20,000 users, which is great for an academic project but nothing compared to the commercial tools. And those tools’ builders have presumably done some careful analysis of their users’ notes and usage patterns, and used that information to improve their tools. But they haven’t told anyone else what they found. We can make a contribution by publishing our analysis, and hopefully advance the public state of the art so that future note-taking tools can start from a better baseline.
Another example comes from work we did on load-balancing web servers. One result was the founding of Akamai, a typical example of the emergence of a company from academia. But first came our publication of a load-balancing technique called consistent hashing. Since then, the technique has found a place in peer-to-peer systems and more recently in . If we’d just been a company we might not have bothered to publish the work—we might even have kept it secret for competitive advantage—forcing those later applications to reinvent the idea or come up with a different one.
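For readers unfamiliar with the technique, here is a minimal Python sketch of the core idea behind consistent hashing—servers and keys hash onto the same circular space, and each key is owned by the first server clockwise from it. The class and parameter names are illustrative, not from any particular implementation:

```python
import bisect
import hashlib

def _hash(key: str) -> int:
    # Map a string to a point on the hash ring.
    # (MD5 chosen only for illustration; any uniform hash works.)
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class ConsistentHashRing:
    """Sketch of consistent hashing: when a server joins or leaves,
    only the keys adjacent to it on the ring change owners."""

    def __init__(self, servers=(), replicas=100):
        self.replicas = replicas  # virtual nodes per server, to smooth the load
        self._ring = []           # sorted list of (point, server) pairs
        for s in servers:
            self.add(s)

    def add(self, server):
        for i in range(self.replicas):
            point = _hash(f"{server}#{i}")
            bisect.insort(self._ring, (point, server))

    def remove(self, server):
        self._ring = [(p, s) for (p, s) in self._ring if s != server]

    def lookup(self, key):
        point = _hash(key)
        # First virtual node clockwise from the key's point (wrap around).
        i = bisect.bisect(self._ring, (point, ""))
        return self._ring[i % len(self._ring)][1]
```

The payoff is exactly the property that made the technique reusable in peer-to-peer systems: removing one of n servers remaps only roughly 1/n of the keys, whereas a naive `hash(key) % n` scheme remaps nearly all of them.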
We do also have the opportunity to have cutting-edge impact, when we tackle research that companies don’t care to do, because the business payoff seems too far off or non-existent. In this category I’m particularly interested in tools to improve civic discourse and tools that will let individuals keep their personal data to themselves. I don’t see a way to convince any company that improved civic discourse will help them sell their products. And keeping data private will likely have a huge negative impact on the advertising revenue on which these companies rely. So academia needs to push these topics.
But if we do want to have this kind of forward impact we need to play to our strengths rather than the companies’. We need to remember that they can build things faster and better than we can. Perhaps this means that if there’s an engineering project that we know how to do, then we shouldn’t do it. Instead we need to concentrate on areas where we have questions and don’t have answers. But even more important, we need to recognize that often building the system simply isn’t an interesting contribution—it’s something any company could have done. Rather, our contribution to society comes from studying that system in use, investing energy in drawing meaningful lessons from it, and publishing them.