Public opinion is in the eye of the beholder. Or at least that’s what it seems like after recent reports of a supposed study of public opinion on nanotechnology in Germany.
I am not quite sure who is to blame, but I am still in awe over the fact that a single small-group meeting gets misconstrued into a “study of public opinion” and triggers international coverage. The one group that is not to blame are the organizers, who never claimed to provide an accurate assessment of public opinion. In fact, the project, launched by the Bundesinstitut für Risikobewertung (BfR) [German Federal Institute for Risk Assessment], was designed as a consensus conference and explicitly labeled as such:

“The staging of a consumer conference puts BfR’s statutory remit on risk communication into practice by directly involving groups of consumers in the discussions about the risks and benefits prior to the introduction of a broadly based consumer application of this technology. This is the first time that a public agency in Germany has used this tool.”

None of the blogs or media outlets who covered the story, of course, bothered to read the original description of the meeting on BfR’s web site. Otherwise they would have noticed that what most media reports call a “survey” was a single meeting of 16 consumers. More importantly, these were not 16 randomly selected consumers, which would hardly qualify as a survey either. Rather, BfR contacted 6,000 randomly selected citizens of the larger Berlin/Brandenburg area in northeast Germany.

Based on my calculations, this translates into a cooperation rate of 0.6%, which would be a concern even if we could assume that self-selection happens randomly. But it doesn’t, of course. And what BfR ended up with is a small, self-selected group of citizens from a specific region in Germany.
And all research in communication and political science shows that even if researchers invest significant amounts of money and go through multiple rounds of recruitment, it is difficult to get a representative group of participants in these meetings. I recommend Dan Merkle’s piece on Deliberative Polling, published in Public Opinion Quarterly a while back. He goes step by step through the problems and biases that have plagued similar efforts.
But again, none of this would be an issue if media outlets had just stuck with what the BfR was trying to do in the first place:
“The consumer conference draws on the model of the consensus conference. This tool was developed and is used in Denmark.”
What this whole story highlights, of course, is the need for sound social science in the area of emerging technologies. Public discourse about new technologies tends to focus more and more on the policy implications or the moral aspects of the issue rather than the scientific facts. And that is good, given the immense societal implications that many of these new technologies carry with them. The recent debates about stem cell research and GMOs are good examples. As a result, federal funding guidelines and other regulations of research are all directly tied to the public opinion dynamics surrounding these emerging technologies.
This also means, of course, that we are in desperate need of social science research that systematically tracks opinions, helps us understand the interplay of opinion formation and media coverage, and explores the dynamics between policy groups, media, and citizens. Consensus conferences and technology forums are important components of this research. But they are not surveys or tools for assessing public opinion. Journalists who make inferences from sentiments expressed at consensus conferences to public opinion among the larger public are therefore simply misleading their audiences.