Friday, January 14, 2011

Risks and benefits of emerging technologies? Depends on how you ask

Studies tapping public perceptions of the risks and benefits surrounding new technologies have long relied on a single-item measure asking respondents a variant of the following question: "Do the risks associated with technology x outweigh the benefits, do the benefits outweigh the risks, or are the risks and benefits approximately the same?" More recently, we raised concerns about this single-item measure and suggested that -- especially for nanotechnology -- a more application-specific look at risk perceptions might be useful.
First, responses may be biased by what have been called “response order effects” (e.g., Schuman and Presser, 1981). A question that first offers respondents the option that “the benefits outweigh the risks,” followed by options for “the risks outweighing the benefits” or “risks and benefits being about equal,” is a much different question from one that offers “the risks outweighing the benefits” as the first response option. Specifically, research in survey methodology suggests that “the earlier in the list [of response options] an acceptable answer appears, the more popular it will be” (Tourangeau et al., 2000: 250) among respondents.
Second, this form of measurement forces respondents to make subjective summative judgments about the relative importance of several risks and benefits. Such judgments, unfortunately, are often skewed, given people’s tendency to remember unfavorable information about a topic better than favorable information (e.g., Gilovich, 1991). 
Third, nanotechnology has often been described as an enabling technology, similar to the Internet. This means that it has applications in many different subfields, with significantly different risk–benefit trade-offs attached to each of them (e.g., the medical field vs. military applications). Abstract measures of risk perceptions, unfortunately, cannot tap these distinctions.
A new study by researchers from NC State, Wisconsin, and Arizona State in Public Understanding of Science now quantifies the potential response biases introduced by this single-item measure. Specifically, comparisons of results from two surveys and across different measures of risk/benefit perceptions suggest that single-item measures may slant answers toward higher risk perceptions. In fact, people perceived more benefits than risks when given the opportunity to evaluate these attributes separately, as opposed to being asked to make a quick summary judgment in a single item. Interestingly, this pattern holds for both issues tested in this study (biofuels and nanotechnology). As lead researcher Andrew R. Binder explains in the NC State press release:
 “There was a significant discrepancy among people who responded to the overarching question that the risks of emerging science outweighed the benefits when compared to their responses to the questions about the specific risks and benefits,” says Binder. “Namely, those same people really perceived more benefits than risks when given the opportunity to evaluate these attributes separately.”
“For example, in the nanotechnology survey, 50 percent of respondents who said risks outweighed benefits actually evaluated nanotechnology positively in the other portion of the survey,” Binder says. “In fact, only 35.4 percent of respondents who thought risks outweighed benefits actually calculated more risks than benefits in the specific section of the survey.” The researchers found similar, though less pronounced, results in the biofuels survey.
The study also showed that people who said that benefits outweighed risks in response to the overarching question consistently perceived more benefits than risks in the specific question section of the surveys.
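To make the comparison concrete, here is a toy sketch of the kind of cross-check the researchers describe: classifying each respondent by a single summary answer, then by a net score computed from separate risk and benefit items. The data, rating scale, and scoring rule below are entirely hypothetical, invented for illustration and not taken from the study.

```python
# Toy illustration (hypothetical data): comparing a single-item summary
# judgment with a net score built from separate risk/benefit items.

respondents = [
    # (summary answer, benefit ratings on 1-5 scale, risk ratings on 1-5 scale)
    ("risks outweigh",    [4, 4, 3], [2, 3, 2]),
    ("risks outweigh",    [2, 2, 1], [4, 4, 5]),
    ("benefits outweigh", [5, 4, 4], [2, 1, 2]),
    ("about equal",       [3, 3, 3], [3, 3, 3]),
]

def net_score(benefits, risks):
    """Mean benefit rating minus mean risk rating; positive = net benefit."""
    return sum(benefits) / len(benefits) - sum(risks) / len(risks)

# Count respondents whose itemized ratings contradict their summary answer,
# i.e. they said "risks outweigh" overall yet rated benefits higher than risks.
discrepant = sum(
    1 for answer, b, r in respondents
    if answer == "risks outweigh" and net_score(b, r) > 0
)
print(f"{discrepant} of {len(respondents)} respondents gave a summary "
      "answer contradicted by their itemized ratings")
```

The first respondent illustrates exactly the pattern Binder reports: a "risks outweigh" summary answer alongside itemized ratings that come out net positive for benefits.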
“The bottom line is that social scientists and journalists need to be very careful when relying on data from a single, overarching survey question,” Binder says. “These oversimplified questions can result in misleading poll data and create problems for policymakers who base their decisions on those findings. They can also be problematic because they may contribute to different polls showing widely different results, which weakens the public’s faith in surveys generally.”


Gaythia said...

I think that it would be very interesting to compare this research with the work of political consultants who design “push-polls”. In that case, questions are designed with the deliberate intent of giving recipients the impression that they are being asked for their opinion, when actually the political consultant is being paid to craft questions that guide public sentiment in a particular direction.
I believe that this might provide further insights into the mechanisms by which questions are evaluated, and into how the questions themselves affect not only the immediate answer but also shape long-term opinion. Also, I believe that experience with these sorts of manipulative polls does affect public opinion of, and response to, non-biased polling attempts.