The Trouble with {The Trouble with Social Computing Systems Research}

A few weeks ago, I finished writing a thought piece with Mark Ackerman, Ed Chi, and Rob Miller about the state of systems research in social computing. It grew out of conversations with a lot of researchers in the area, and examines questions of novelty, evaluation, and the industry/academia divide in the field.

I submitted the paper to alt.chi, where it generated quite a bit of discussion in the alt.chi open reviewing process: twenty-two reviews (twenty-one “very high interest”s, and one “high interest”). To be honest, I was really blown away by the positive response. I chose alt.chi as a venue because I wanted to get a lot of feedback, and that worked out in spades.

In the spirit of alt.chi’s open process, I’m now going to open up some of those reviews to the community so that I can make the paper even better. (While I wouldn’t do this kind of thing for a typical paper, I think that alt.chi reviews are written with a higher expectation of openness, so it’s OK in this case.) These are some of the most cogent points I took away from the feedback, and they are what I’m going to try to address before Friday’s final deadline. I’d love to see continued discussion here in the comments if you have thoughts. There are a lot, but I’ve tried to highlight the main points.

Here’s the submitted PDF, and the original abstract:

Social computing has led to an explosion of research in understanding users, and has the potential to similarly revolutionize systems research. However, the number of papers designing and building new sociotechnical systems has not kept pace. In this paper we analyze the reasons for this disparity, ranging from misaligned methodological incentives to evaluation expectations and research relevance relative to industry. We suggest improvements for the community to consider and evolve so that we can chart the future of our field.

Here we go — these are my favorite comments, both from the reviewing process and from out-of-band emails I got. Please share any thoughts or reactions! (I’ve stripped reviewer names and affiliations for privacy reasons.)

  • Is it a rant?: “The paper felt a bit too much like a list of particular criticisms ACs have raised against your papers in the past. It was unclear how principled and complete of an exploration of the problems of social computing systems research it is. How pervasive are criticisms of exponential growth and snowball sampling, really? Aren’t they just easy stand-ins for ACs to sidestep underlying, thornier problems?”
  • Discussion of industry vs. academia: “[It] was too simplistic. I think a third way that should be explored much more is to what extent academia can partner with either large industry or small startups. See Joel Brandt’s collaboration w/ Adobe, Niki Kittur’s with Wikimedia, etc.”
  • Distinction between spread and steady state: “In a ‘living’ social computing system, there is no simple steady state. To maintain the appearance of continuity, the system itself has to be constantly updated, changed, tweaked to respond to the changing balance and makeup of the user community; to keep up the arms race against spammers, etc. Steady state is an illusion created by the never-ending work of the maintainers of social computing systems.”
  • The 4:1 submission ratio: “There is an implicit claim that the number of papers submitted, or accepted, is roughly equivalent to the impact of a particular type of research. The ratio of “understanding users” to “systems” was 4:1 – so what?  Is this a declining trend or steady state? Most papers end up being read (and cited) infrequently. This may be especially true about papers that study and describe populations in systems with half-lives of 2-3 years. How many study papers that are 10+ years old do you still consider worthwhile? How many systems papers? Is there a real imbalance at that scale?”
  • Snowball sampling disagreement: “I think that, in most cases, this is undesirable, except in cases where the target user demographic is the same as our social networks (e.g., highly educated tech early adopters)”
  • Field study difficulty: There is an unnecessary slam on lab research as being too easy. We need to be more balanced here.
  • Arguments aren’t particularly “controversial”: we’re not taking a stand that’s horribly divisive. (That’s fine with me. I’m OK with just drawing out the issues.)
  • Generalizability: Some reviewers felt that these results could generalize beyond social computing to other areas. Others felt that we should broaden even to traditional CSCW topics like small group collaboration and communication. Many people felt that these arguments resonated even outside of our direct community. I’m honestly less sure of my footing here; I don’t want to overclaim.
  • Stronger argument why academia matters: “The argument could be made stronger for why social computing systems should have a place at CHI or in academia if they can be done in industry with more access to data and better resources. The authors mention market incentives that can be avoided in academia. However, the majority of researchers have to find funding from the NSF or from industry so there are markets in both cases.”
  • Why do social computing systems matter?: “This submission could be stronger, especially for young PhD researchers, if it clearly outlined what contributions social computing systems research brings to the table. Why is it important that it be done?” “More discussion on the goals and the assessment of quality of social computing research would be extremely helpful.”
  • Qualitative studiers: Don’t forget about Studiers in anthropology, cultural and media studies. “These qualitative studiers often ask for research to a) engage in actual conversations with users and b) discuss the larger cultural and societal implications of one’s system.”
  • Big Data vs. Industry: “It does not, however, speak to the so-called Big Data movement we have seen in Social Computing (and that has been addressed in various forms by myself, Scott Golder, d boyd, and others). While this is a bit orthogonal, it does address the sampling questions also detailed in the article.”
  • Builder/Studiers too simplistic?: “I think that there’s continually the problem in CHI that it’s a conference of minorities, and it’s a case of 20% builders, 20% studiers, 20% designers, 20% usability people studying Fitts’ Law until their socks fall off and so on. I’m not sure I agree with their characterization that ‘the prevalence of Studiers in social computing means that Studiers are often the most available reviewers for a systems paper on a social computing topic’. My experience is that whoever I want to review my paper – studiers for a study, builders for a technical system – I’ll end up with someone from the wrong place who can’t understand.”
  • Replication: “If replication isn’t highly valued in our community, then one possible outcome is that the expectations for a social computing systems paper become quite high. The paper would have to not only introduce the system, but also provide a solid evaluation of it, because the bias against replication implies that future evaluations aren’t likely to be forthcoming.”