Wednesday, October 21, 2009
Germany's Umweltbundesamt (UBA) [Federal Environmental Agency] will release a new study today advising consumers to avoid products using nanoparticles as long as their effects on the environment and human health remain largely unknown. The federal agency is also calling for labeling and reporting regulations for products containing nanomaterials. This would affect the more than 800 German companies that use the new technology in their products.
See here for the full story from news magazine Der Stern, based on an initial report in the Sueddeutsche Zeitung.
The wave of coverage surrounding the UBA report also drew renewed attention to the August 2009 story linking deaths among Chinese factory workers to exposure to high doses of nanoparticles in a factory.
Sunday, October 18, 2009
How much of a say should the public have in the direction of science, and how much should be left to the experts?
"... Many of the survey data we collected at the University of Wisconsin and at Arizona State (Scheufele & Corley, 2008) show that the public trusts scientists to do a good job on the science behind emerging technologies. But some applications in the area of nanotechnology, for instance, have also raised ethical concerns about human enhancement or the creation of synthetic life that have more to do with how we use emerging technologies than the science behind them.
So who should shape societal debates about the science and its applications? On the one hand we have a chronically underinformed public that shows limited interest in scientific issues (or political issues, for that matter). As a result, they often make decisions or form policy stances about emerging technologies with little information about the science behind them (Scheufele, 2006b). And this is a description, not a criticism. In fact, we all use information shortcuts or heuristics every day when faced with the need to make choices with incomplete information. Should we be worried about the suspicious-looking guy lingering outside our apartment? And what toothpaste should we buy, given virtually unlimited choices in the supermarket? Eventually, we find answers to all of these questions without collecting all available information. We trust certain brands, we rely on previous experience, and we make gut decisions.
Why is that? The answer is simple. We are all cognitive misers or satisficers to varying degrees (Fiske & Taylor, 1991). We use as little information as we think we can get away with or only as much as we think we need to make a decent decision. That is just human nature. And we’re all miserly for different reasons and for different issues. Why don’t most scientists follow Miley Cyrus’s personal life? Probably because they don’t care, and because they see no payoff from learning more about B-list celebrities for either their personal or professional lives. Many citizens, of course, feel the same way about science. Why would they spend time learning about emerging technologies, as long as they feel that they can trust regulatory agencies and universities to produce and manage scientific discoveries responsibly?
But this is exactly the problem that science communicators have grappled with for a long time. The concern is not that audiences know little about specific technologies, but that they know little about science itself. Only one in four members of the general public (25%) understands the concept of a scientific study, and only about two in five can correctly describe a scientific experiment (42%) or the scientific process more broadly (41%) (National Science Board, 2008). And most empirical studies suggest that this won’t change anytime soon. As a result, my colleague Dominique Brossard here at Wisconsin has argued for a long time that a key variable in well-functioning scientific societies is what she calls “deference toward scientific authority” (Brossard & Nisbet, 2007; Brossard, Scheufele, Kim, & Lewenstein, 2009; Lee & Scheufele, 2006), i.e., the ability to negotiate personal value systems and beliefs with a willingness to defer to scientific expertise for factual information about emerging technologies. And this has nothing to do with blindly trusting scientists. In fact, our work at Wisconsin has shown that values are a critical component of how people make decisions about science, and justifiably so (Brossard, et al., 2009; Ho, Brossard, & Scheufele, 2008). Concerns about destroying unborn life as part of embryonic stem cell research, for instance, can’t be addressed with more science. They can only be resolved in a comprehensive societal debate that deals with values and scientific facts at the same time.
This brings us to the second group – scientists – and their role in guiding scientific progress. In short, the input that scientists can provide into societal debates surrounding emerging technologies is critical. In fact, I have argued many times before that scientists have not played as much of a role in participating in societal debates as they should have (Nisbet & Scheufele, 2007, forthcoming; Scheufele, 2006a, 2007; Scheufele et al., 2009), and that science and society are worse off as a result.
And what we need is not just feedback from the most vocal or most opinionated scientists in a given field, but rather a systematic understanding of what the leading experts in a given field think are prudent approaches to scientific development. The problem with that approach is the U.S. media system. U.S. journalists tend to cover scientific issues by showing “both sides.” This misguided understanding of objectivity often creates science journalism that pits a vast majority of scientists against a small number of vocal dissenters. The recent (and ongoing) debate about global warming is a good example of that pattern.
So is there a better approach to determining scientific consensus on an issue? And the answer is “yes.” Elizabeth Corley in the School of Public Policy at Arizona State and I recently published a series of papers from a systematic survey of leading U.S. scientists in the field of nanotechnology (Corley, Scheufele, & Hu, 2009; Scheufele, et al., 2009; Scheufele et al., 2007). We asked these scientists about their views on public-scientist interactions, about their recommendations for regulations, and about their perceptions of the potential risks and benefits surrounding nanotechnology. And the scientists’ insights are invaluable for societal decision making about these new technologies, including their recommendations for regulatory frameworks at the international level and for risk assessments in specific areas (Corley et al., 2009).
But our survey also showed that scientists sometimes rely on information shortcuts and heuristics, just like everyone else. We found that scientists, when they’re being asked for policy recommendations about emerging technologies, do rely on their professional judgments about the risks and benefits connected to nanotechnology. But our data also showed that – after controlling for their professional judgments – scientists’ personal ideologies have a significant impact on their support for regulations.
These findings, of course, say less about scientists and their expertise than they do about the lack of conclusive data about risks related to nanotechnology. Policy makers need to realize that when they ask scientists to give them advice about inconclusive findings, they will get both their professional judgment and their personal views.
Brossard, D., & Nisbet, M. C. (2007). Deference to scientific authority among a low information public: Understanding U.S. opinion on agricultural biotechnology. International Journal of Public Opinion Research, 19(1), 24-52.
Brossard, D., Scheufele, D. A., Kim, E., & Lewenstein, B. V. (2009). Religiosity as a perceptual filter: Examining processes of opinion formation about nanotechnology. Public Understanding of Science, 18(5), 546–558.
Corley, E. A., Scheufele, D. A., & Hu, Q. (2009). Of risks and regulations: How leading U.S. nanoscientists form policy stances about nanotechnology. Journal of Nanoparticle Research, 11(7), 1573-1585.
Ho, S. S., Brossard, D., & Scheufele, D. A. (2008). Effects of value predispositions, mass media use, and knowledge on public attitudes toward embryonic stem cell research. International Journal of Public Opinion Research, 20(2), 171-192.
Lee, C. J., & Scheufele, D. A. (2006). The influence of knowledge and deference toward scientific authority: A media effects model for public attitudes toward nanotechnology. Journalism & Mass Communication Quarterly, 83(4), 819-834.
National Science Board. (2008). Science and Engineering Indicators 2008 (Chapter 7). National Science Foundation. Retrieved January 21, 2008, from http://www.nsf.gov/statistics/seind08/.
Nisbet, M. C., & Scheufele, D. A. (2007). The future of public engagement. The Scientist, 21(10), 38-44.
Nisbet, M. C., & Scheufele, D. A. (forthcoming). What's next for science communication? Promising directions and lingering distractions. American Journal of Botany.
Scheufele, D. A. (2006b). Messages and heuristics: How audiences form attitudes about emerging technologies. In J. Turney (Ed.), Engaging science: Thoughts, deeds, analysis and action (pp. 20-25). London: The Wellcome Trust.
Scheufele, D. A., Brossard, D., Dunwoody, S., Corley, E. A., Guston, D. H., & Peters, H. P. (2009). Are scientists really out of touch? The Scientist. Retrieved from http://www.the-scientist.com/news/display/55875/.
Scheufele, D. A., & Corley, E. A. (2008). The science and ethics of good communication. Next Generation Pharmaceutical, 4(1), 66.
Sunday, October 11, 2009
Population surveys are one of the most important tools for tapping how much citizens know about science and technology, how they perceive potential risks and benefits, and what their attitudes are about emerging technologies or research on particular applications. The encyclopedia, including the full chapter on surveys, is scheduled to appear with Sage in July 2010.
Sample surveys are defined as systematic studies of a geographically dispersed population, conducted by interviewing a sample of its members in order to generalize to the population as a whole. Two terms in this definition are particularly important: “systematic” and “generalizable.”
The idea of systematically studying a population is a first main goal of sample surveys. Surveys therefore typically rely on a standardized questionnaire in order to gather reliable and valid information from a wide variety of respondents. Reliability, in this context, refers to the idea that the same instrument – applied to comparable samples – will produce consistent results. But reliability is not enough. It is very possible, for example, that a questionnaire consistently measures the wrong construct. Validity therefore adds a second quality criterion, and refers to the idea that questionnaires need to provide not just consistent but also unbiased and accurate measurements of people’s behaviors, attitudes, etc.
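Reliability in this sense can be quantified. As a toy illustration (not drawn from the chapter itself), the sketch below computes Cronbach's alpha, one common internal-consistency statistic, for a made-up battery of survey items; the data and function name are hypothetical.

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a battery of items, each a list of responses
    from the same respondents in the same order (hypothetical data)."""
    k = len(items)
    totals = [sum(resp) for resp in zip(*items)]  # each respondent's scale score
    item_variance = sum(pvariance(item) for item in items)
    return k / (k - 1) * (1 - item_variance / pvariance(totals))

# Three made-up items that move in lockstep across four respondents:
items = [[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]
print(cronbach_alpha(items))  # 1.0 -- a perfectly consistent scale
```

Values near 1 indicate that the items measure a single underlying construct consistently; note that a scale can score high on alpha and still measure the wrong construct, which is exactly the validity problem described above.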
Reliability and validity are tied to a number of factors in the survey process. But two aspects are particularly important when constructing a questionnaire: the overall structure of the questionnaire and wording of specific questions.
When structuring a survey questionnaire, the first concern is length. If a survey takes too much time to complete, it will likely result in significant incompletion rates. Unfortunately, the respondents who tend to drop out of lengthy surveys are not a random subset of the population. Rather, they tend to be – among other characteristics – younger, more mobile, and employed full-time. As a result, excessively long survey instruments often produce samples that are plagued by systematic non-response among particular groups in the population, and are therefore limited in terms of their generalizability (see below).
A second concern with respect to questionnaire construction is the way questions are ordered on the questionnaire. Well-constructed questionnaires typically ask easy-to-answer questions first and sensitive or embarrassing questions later in the questionnaire. One of the most common pitfalls in survey instruments is priming effects, i.e., the notion that some questions can make certain considerations (for instance, risks or benefits of a specific technology) more salient in a respondent’s mind and therefore influence how he or she answers subsequent questions (for an overview, see Zaller & Feldman, 1992).
In addition to questionnaire structure, the wording of specific questions is a critical variable in building a valid instrument. In particular, well-constructed questionnaires use language and terminology that is designed to avoid biases. Such biases may stem from language that is likely to be more accessible to some respondents than others (e.g., terms that are more likely to be understood by certain ethnic groups or education-based cohorts) or that favors respondents who are more interested in or know more about science and technology in the first place. Any wording that feeds into these potential biases introduces systematic measurement error, since it does not produce an equally valid measure across all groups of the population.
These concerns about systematic measurement error are particularly relevant for a researcher’s ability to generalize from a sample to the general population. This is both a statistical and a substantive problem.
From a statistical perspective, surveys are designed to allow researchers to make inferences from observed sampling statistics (e.g., 52 percent of the sample favor more research on a particular technology) to unobservable population parameters (the proportion of people favoring this research in the population). For surveys based on probability sampling (i.e., surveys that give each person in the population a known, nonzero chance of being selected into the sample), the margin of error provides an indicator of how close the statistic observed in a sample is likely to be to the population parameter, and how certain researchers can be about this inference (usually calculated at a confidence level of 95%). For the example above, a margin of error of +/-3% would therefore indicate that we can be 95% certain that the true level of support for more research in the population falls somewhere between 49% and 55%.
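The arithmetic behind that margin of error is straightforward. As a quick sketch under simple-random-sampling assumptions (the sample size of 1,067 is a hypothetical figure chosen because it yields roughly a +/-3-point margin; it is not from the text):

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a sample proportion under simple random sampling."""
    return z * math.sqrt(p * (1 - p) / n)

# 52% observed support; n = 1,067 is a hypothetical sample size.
p, n = 0.52, 1067
moe = margin_of_error(p, n)
print(round(moe, 3))                            # 0.03, i.e., about +/-3 points
print(round(p - moe, 2), "to", round(p + moe, 2))  # 0.49 to 0.55
```

The z value of 1.96 corresponds to the conventional 95% confidence level; a stricter 99% level (z = 2.58) would widen the interval.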
But generalizability of survey results goes beyond just statistical considerations – especially for scientific issues, such as nanotechnology or stem cell research. Given the interplay between societal dynamics, scientific complexities, and a lack of widespread awareness, some have raised concerns about the appropriateness of using large scale surveys to tap public reactions to science and technology. These concerns typically fall into one of two categories that are both extremely important for any type of polling: first, what are we doing with people who are not fully aware or knowledgeable about the issue that we are interested in, and, second, can we capture an issue in all its complexities in a short survey?
The concern about unaware respondents is not unique to polling about science and technology. Political surveys routinely show that large proportions of the U.S. public are unable to accurately place presidential candidates relative to one another, even on simple issues, such as gun control (e.g., Patterson, 2002). And in fact, attitude formation about political and scientific issues – for many citizens – has little to do with awareness of or knowledge about the specifics of a particular issue (Scheufele, 2006).
In order to make sure that all respondents have the same minimal baseline understanding of the technology that is being studied, surveys typically provide a short introduction to the issue as part of the question. Ideally, this introduction is comprehensive, but does not influence answers to subsequent questions by priming respondents about particular risks or benefits of the technology.
The second concern often raised about the substantive generalizability of survey results on science and technology is how much detail a telephone survey can go into. Some have argued, in fact, that the systematic nature of standardized surveys is directly at odds with the need for an in-depth and contextualized understanding of how citizens interact with emerging technologies.
And of course these critics are right to a certain degree. Phone surveys, for instance, have clear constraints with respect to length and to the number of questions that can be asked about a single topic. Respondents participate on a voluntary basis and they spend a substantial amount of time on the phone with the interviewer. If researchers ask too many questions about a given topic or if the interview is too long, people tend to get bored or even annoyed and hang up. And this is not just a problem of having fewer respondents overall. Rather, as outlined earlier, if an interview is too long or goes into too much detail it usually creates problems with representativeness.
What we end up with, in this case, is a sample of people that is no longer representative of the overall population. And that, of course, hurts the validity of a poll because it no longer does what it is intended to do, i.e., capture the opinions of everybody in a given population, not just people who are more interested in a given issue or who happen to have more time to respond to a pollster's questions.
As a result, it is important to understand surveys for what they are, i.e., one method of data collection that allows researchers to tap behaviors, levels of knowledge, and public attitudes toward science and technology in a very systematic and generalizable fashion. This comes with trade-offs related to the complexity of data that surveys provide. In particular, large-scale population surveys are concerned with social patterns across large groups of respondents, and pay less attention to the potential complexity of a particular respondent’s belief system, for instance, and how it has developed over the course of his or her life.
Surveys can also be limited in how much they allow for causal inferences. This is particularly problematic for cross-sectional surveys, i.e., data collections at one point in time. Cross-sectional surveys may show a statistical correlation between exposure to science news in newspapers and scientific literacy, for instance, but they typically cannot provide conclusive evidence on the direction of this link. In other words, are knowledgeable respondents more likely to read the science section in newspapers, or does exposure to science news promote learning about science? Answers to these questions are typically provided by other research designs, some survey-based and some not.
Among the survey-based approaches that allow researchers to make some inferences about causality are longitudinal survey designs. These fall into three categories. Trend studies use multiple data collections with different samples to track responses to the same question over time. While trend studies can help researchers identify aggregate-level changes, they do not provide insights into how individual respondents change over time. Panel studies address this problem by providing multiple data collections over time for the exact same set of respondents. Cohort studies, finally, are concerned with the effects that socialization or other influences have during certain periods of people’s lives. Is there a difference, for example, between respondents who went to college during the first moon landing and those who went to college in the 1990s with respect to levels of interest in science and technology and science media use over the course of their lives? In order to answer these questions, cohort analyses examine different subgroups (or cohorts), often defined by age, and compare their development as they grow older.
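The difference between trend and panel designs can be made concrete with a toy example (the respondents and answers below are invented for illustration): aggregate percentages can stay flat across waves even while individual respondents change their answers, and only a panel design reveals that churn.

```python
# Two waves of a yes/no question (1 = supports more research), made-up data.
wave1 = {"anna": 1, "ben": 0, "carl": 1, "dana": 0}
wave2 = {"anna": 0, "ben": 1, "carl": 1, "dana": 0}

def support(wave):
    """Share of respondents answering 'yes' in one wave."""
    return sum(wave.values()) / len(wave)

# Trend view: compare aggregates across independent snapshots.
print(support(wave1), support(wave2))  # 0.5 0.5 -- no apparent change

# Panel view: follow the same respondents, exposing individual-level change.
changers = sorted(r for r in wave1 if wave1[r] != wave2[r])
print(changers)  # ['anna', 'ben']
```

A trend study could only report the stable 50% aggregate; the panel view shows that half the (hypothetical) respondents actually switched positions between waves.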
Dillman, D. A. (2007). Mail and Internet surveys: The tailored design method (2nd ed.). New York, NY: Wiley.
Patterson, T. E. (2002). The vanishing voter: Public involvement in an age of uncertainty. New York: Alfred A. Knopf Publishers.
Scheufele, D. A. (2006). Messages and heuristics: How audiences form attitudes about emerging technologies. In J. Turney (Ed.), Engaging science: Thoughts, deeds, analysis and action (pp. 20-25). London: The Wellcome Trust.
Zaller, J., & Feldman, S. (1992). A simple theory of survey response: Answering questions versus revealing preferences. American Journal of Political Science, 36(3), 579-616.
Tuesday, October 06, 2009
The tensions between markets and regulations are to some degree explained by clear ideological rifts within the Wisconsin population. While a majority of registered Democrats support the use of government subsidies for biofuels research (60.6%), less than 40% of registered Republicans do (38.9%). Similarly, three out of four (75.6%) Republicans believe that the free market should regulate biofuels -- a view that is shared by only 43.7% of Democrats. Majorities of both Democrats (60.0%) and Republicans (51.3%), however, agree that without government regulations, the oil industry will never invest in the development of biofuels.
Click here for the official UW-Madison press release with more details on the study.
Friday, October 02, 2009
Click here for a PDF copy of the full report.