Computers in Human Behavior

Volume 26, Issue 6, November 2010, Pages 1327-1335

The effects of survey administration on disclosure rates to sensitive items among men: A comparison of an Internet panel sample with an RDD telephone sample

https://doi.org/10.1016/j.chb.2010.04.006

Abstract

Internet surveys are an emerging research tool, yet the legitimacy of Internet studies, particularly those targeting sensitive topics, remains under-investigated. The current study builds on the existing literature by exploring demographic differences between Internet panel and RDD telephone survey samples, as well as differences in responses with regard to experiences of intimate partner violence perpetration and victimization, alcohol and substance use/abuse, PTSD symptomatology, and social support. Analyses indicated that, after controlling for demographic differences, there were few differences between the samples in their disclosure of sensitive information, and that the online sample was more socially isolated than the phone sample. Results are discussed in terms of their implications for using Internet samples in research on sensitive topics.

Introduction

Researchers in the social and health-related sciences struggle with the challenge of achieving accurate disclosures in survey research, especially when that research addresses sensitive issues such as the perpetration and victimization of violence, illegal behaviors, and symptoms of mental illness. Many factors contribute to the under-reporting of such experiences, including respondents’ fear of being judged by researchers, embarrassment about their experiences, and the possible re-victimization that may result from reporting victimization experiences (Rubin & Babbie, 1993; Schuman & Converse, 1971). Sensitive topics can also lead respondents to answer in socially desirable ways (Tourangeau & Yan, 2007). These dynamics make such issues particularly challenging to study and require a better understanding of how different survey designs affect study findings. Demographic differences may also result from using different data collection methodologies. This study centers on a comparison of an Internet panel sample with a random digit dial (RDD) telephone sample, particularly with regard to how these two methods differ when measuring sensitive issues such as intimate partner violence (IPV), alcohol and drug use, and mental health indicators.

For decades, survey researchers have examined how response rates differ when the type of survey or its mode of administration changes (Dillman, 1978; Dillman, 2000; Fowler, 1995). Research in the 1960s and 1970s focused on concerns about moving from face-to-face interviews to the more cost-effective modes of mail and telephone surveys (Blankenship, 1977; Cummings, 1979; Lucas et al., 1977; Perry Jr., 1968; Siemiatycki, 1979; Tremblay & Dillman, 1977). A number of studies specifically examined how these different modes of administration might affect the results of studies focusing on sensitive topics, such as substance use and interpersonal violence; these studies found little to no difference between modes (Lawrence et al., 1995; Reddy et al., 2006; Rosenbaum et al., 2006).

As more of the nation moves online, the Internet is becoming an increasingly attractive data-collection medium for researchers. There are several ways the Internet can be used to collect data. First, individuals can be recruited via email, with a link to the survey embedded in the body of the message; this method is used when an email distribution list exists. “Panel studies” are similar in nature: a survey research center has already identified a group of individuals willing to participate in Internet surveys, and these individuals are sent an email recruiting them to participate. Finally, one can recruit participants via websites. In such instances, a researcher or research team can post announcements about a study, along with the study link, on special interest websites that match the content of a given survey, hoping that the posting will catch the attention of potential participants who will click on the link. Similarly, one could use pop-up advertising that provides information about an Internet study and a link to the survey (Dennis, 2009; Dillman, 2000).

The primary reason Internet surveys appeal to researchers is their low administration cost. Estimated cost savings are substantial, especially when compared to telephone surveys: researchers note that Internet surveys are between 15% and 20% (Einhart, 2003) and sometimes as much as 50% less expensive than RDD telephone surveys (Roster, Rogers, Albaum, & Klein, 2004). There are other important advantages, including consistency in the administration of survey questions, the anonymity that can be guaranteed to participants, and the absence of an interviewer, which might make participants more likely to disclose highly sensitive information (Tourangeau & Yan, 2007). Nonetheless, concerns exist about the representativeness of Internet-based samples (Chang & Krosnick, 2010; Ross et al., 2005).

Like other methods of survey administration, including face-to-face, mail, and telephone surveys, Internet surveys have their own set of challenges. One of the most important is the representativeness of a sample recruited via the Internet. Survey researchers have noted a number of problems in this regard. Internet users constitute only 72.5% of the U.S. population (Nielsen Online, 2009), whereas 94% of the population has a telephone line (Fricker, Galesic, Tourangeau, & Ting, 2005). Telephone surveys have been shown to be more representative of the U.S. population, at least in terms of education, income, race/ethnicity, and age, when compared to Internet surveys (Chang & Krosnick, 2010). Younger individuals, who are generally more comfortable using the Internet, are more likely to participate in Internet research than older adults, especially those of retirement age (Ross et al., 2005; Roster et al., 2004). Other research confirms that Internet survey participants from the general population are typically younger, more likely to be white and less racially diverse, better educated (Schillewaert & Meulemeester, 2005), and more informed about world events than individuals participating in RDD telephone surveys (Duffy, Smith, Terhanian, & Bremer, 2005). This gap between Internet and non-Internet users is often referred to as “the digital divide” (Fricker et al., 2005).

There are some inconsistencies with regard to the demographic characteristics of Internet survey participants. Some studies have found that women are more likely to respond to Internet surveys (Roster et al., 2004); others have found that men are more likely to respond (Schillewaert & Meulemeester, 2005). A review aggregating Internet surveys with over 100,000 total participants found that higher socioeconomic classes were only slightly overrepresented. These authors concluded that, compared to traditional paper-and-pencil research, which often relies on college student samples, Internet surveys that are intended to be representative are in fact representative with respect to many important demographic characteristics, including gender, class, geographic location, and age, but not with respect to race (Gosling, Vazire, Srivastava, & John, 2004).

On the other hand, with the increased use of call screening and cellular telephones, data from telephone surveys are increasingly unrepresentative of the general population (Pew Research Center for the People & the Press, 2006). In a recent study, researchers found that online respondents were more similar to U.S. Census Current Population Survey benchmarks than traditional landline telephone respondents, among whom a number of important population groups were underrepresented, including men, younger and less educated individuals, and ethnic minorities (Dennis & Li, 2007). The demographic makeup of an Internet sample, however, depends on the recruitment method. Researchers who recruited Internet survey participants through the same means as RDD telephone surveys obtained samples that deviated significantly less from the U.S. population in terms of racial and gender composition than a sample recruited solely through online advertisements (Chang & Krosnick, 2010). The complexity and inconsistency of these findings speak to the importance of using comprehensive demographic controls when conducting survey research via the Internet. For example, when basic socio-demographic controls are utilized in analyses, differences in responses to non-sociodemographic questions that are sometimes attributed to the type of survey administration usually disappear (Schillewaert & Meulemeester, 2005).
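
As a concrete illustration of such covariate adjustment, here is a minimal sketch in Python using the pandas and statsmodels libraries; the data file, variable names, and category levels are hypothetical and not drawn from this study:

    # Minimal sketch: compare a disclosure outcome across survey modes with
    # and without socio-demographic controls. All names are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("survey_responses.csv")  # hypothetical combined dataset

    # Unadjusted: does survey mode alone predict disclosure?
    unadjusted = smf.logit("disclosed ~ C(mode)", data=df).fit()

    # Adjusted: add basic socio-demographic controls.
    adjusted = smf.logit(
        "disclosed ~ C(mode) + age + C(race) + C(education) + income", data=df
    ).fit()

    # If the mode coefficient shrinks toward zero once controls enter, the
    # apparent mode effect reflected sample composition, not administration.
    print(unadjusted.params["C(mode)[T.phone]"])
    print(adjusted.params["C(mode)[T.phone]"])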

The level of anonymity an Internet survey permits may affect disclosure rates. Compared to telephone surveys, online surveys provide more anonymity, primarily because they are self-administered and because a computer allows one to submit information without any source of identification, unlike a mail survey, which at the very least is postmarked with a date and zip code. Telephone surveys have the disadvantage of not offering complete anonymity, since respondents must verbalize their responses to an interviewer, which may bias those responses (Tourangeau & Yan, 2007).

There is evidence that participants may feel freer to be honest about their opinions and experiences without the presence of an interviewer; with an interviewer present, participants tend to “tone down” unpleasant disclosures in an effort to appear socially appropriate, a phenomenon termed the “politeness to strangers” effect (Schuman & Converse, 1971). Research participants who provide data via the Internet have been known to rate named companies more negatively than participants providing answers to an interviewer (Roster et al., 2004). In a similar vein, Internet survey participants were more critical than telephone survey participants about topics that could be deemed politically and socially sensitive, including government spending on welfare, attitudes toward African Americans, space exploration, and aid to foreign nations (Dennis & Li, 2007).

Research has found that some Internet survey participants report personally sensitive information at higher rates than participants providing the same information by other means. For example, a panel of college students answering via the Internet reported higher levels of alcohol consumption than those responding as part of an RDD telephone study (Heeren et al., 2008). College students have also been found to give more socially desirable responses over the telephone. For example, Parks, Pardi, and Bradizza (2006) found that college women reporting to a telephone interviewer about their alcohol consumption were more likely than Internet participants to report that they had tried to control their drinking or that they drank in response to an argument with a friend; Internet participants, in turn, were more likely to report that their drinking had changed their personality and that at times they felt they were “going crazy.” Notably, there were no differences in the reporting of sexual behaviors, such as having unprotected sex, having sex that they later regretted, or having been the victim of a sexual assault. Despite this study’s limited generalizability, these findings are consistent with others already reviewed in this paper and with a literature review by Tourangeau and Yan (2007) concerning disclosure rates of illicit substance use.

Another study of college students found no differences among three modes of data collection: paper, Internet, and telephone. Knapp and Kirk (2003) compared the responses of 352 undergraduate students to questions about increasingly personal behavior, including mischievous behavior, general honesty, interpersonal relationships, illegal behavior, substance use, and sexual behavior. The researchers found no differences in disclosure rates among the three groups. There was, however, a survey administration glitch in this study that left the Internet sample one-half to one-third the size of the comparison groups, which could have reduced the power of the statistical analyses.
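
To see how such an imbalance erodes power, consider the following minimal sketch in Python using statsmodels; the disclosure rates and group sizes are illustrative assumptions, not figures from Knapp and Kirk (2003):

    # Power of a two-proportion comparison under equal and unequal arm sizes.
    from statsmodels.stats.power import NormalIndPower
    from statsmodels.stats.proportion import proportion_effectsize

    # Assumed disclosure rates of 30% vs. 40% (Cohen's h effect size).
    effect = proportion_effectsize(0.30, 0.40)
    solver = NormalIndPower()

    # Equal arms of n = 120 versus an Internet arm one-third that size.
    equal = solver.power(effect, nobs1=120, alpha=0.05, ratio=1.0)
    unequal = solver.power(effect, nobs1=120, alpha=0.05, ratio=1 / 3)

    print(f"power with equal arms:      {equal:.2f}")
    print(f"power with a 3:1 imbalance: {unequal:.2f}")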

Research using Internet surveys is an emerging field, yet the legitimacy of Internet studies, particularly those targeting sensitive topics, remains under-investigated. The current study builds on the existing literature by further exploring the differences between Internet panel and RDD telephone survey samples. Both samples were recruited as part of a larger study exploring intimate partner violence (IPV) among men who had been involved in relationships with women in the past year. Thus, this study also investigates differences in responses when sensitive topics are the focus of the research. Specifically, we assess both demographic differences between the samples and differences in responses with regard to experiences of IPV perpetration and victimization, alcohol and substance use/abuse, PTSD symptomatology, and social support.

The goals of the current study are to:

  • (1) Compare the Internet sample with the RDD phone sample on basic demographics. We will also roughly compare both samples to population-based data to investigate their relative representativeness (a minimal sketch of such a comparison follows this list).

  • (2) Investigate whether disclosure rates for sensitive issues, including IPV perpetration and victimization, alcohol and substance use/abuse, and PTSD symptoms, differ between the two samples, and whether any differences in disclosure rates remain after controlling for demographic differences.
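
The demographic comparison in goal (1) might be set up as in the following sketch, written in Python with pandas and scipy (the data file and column names are hypothetical), using Welch’s t-test for continuous demographics and a chi-square test for categorical ones:

    # Sketch of goal (1): compare sample demographics across survey modes.
    import pandas as pd
    from scipy import stats

    df = pd.read_csv("survey_responses.csv")  # hypothetical combined file
    online = df[df["mode"] == "online"]
    phone = df[df["mode"] == "phone"]

    # Continuous demographics (e.g., age): Welch's t-test.
    t, p_age = stats.ttest_ind(online["age"], phone["age"], equal_var=False)

    # Categorical demographics (e.g., race): chi-square on mode-by-race counts.
    table = pd.crosstab(df["mode"], df["race"])
    chi2, p_race, dof, expected = stats.chi2_contingency(table)

    print(f"age:  t = {t:.2f}, p = {p_age:.3f}")
    print(f"race: chi2 = {chi2:.2f}, p = {p_race:.3f}")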

Section snippets

Sample and procedures

The data for this study come from a larger study investigating issues related to men sustaining IPV from their female partners. As part of that study, a community sample of 520 men (mean age: 44 years, SD = 10.88; 84.8% White, 8.3% Black, 5.0% Hispanic/Latino, 3.1% Asian, 1.0% Native American), ages 18–59, who had been involved in a heterosexual relationship lasting at least 1 month in the previous year, was recruited. The average length of the intimate relationship they referred to was 164.90 

Sample characteristics

For our first series of analyses, we investigated whether there were any individual or relationship demographic differences between the online and phone samples. Table 1 presents the results. As shown, respondents from the online sample were significantly older, had significantly lower incomes, were less likely to be Black, and were more likely to be disabled and/or to have a disabled partner; there were no differences in education. Moreover, respondents in the online sample were significantly less

Discussion

In this study, we investigated demographic differences between Internet and RDD phone samples and how type of survey administration could affect the disclosure of sensitive information. Both of our samples consisted of men between the ages of 18 and 59 who reported being involved in a heterosexual relationship lasting at least 1 month in the previous year, and responded to questions concerning IPV, PTSD, alcohol/substance use and abuse, and social support. Our results indicated that the samples

Acknowledgments

The project described was supported by Grant No. 5R21MH074590 from the National Institute of Mental Health. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the NIMH. Special thanks go to Murray Straus and the members of the University of New Hampshire’s Family Research Laboratory seminar program for their feedback on an earlier version of this article.

References (51)

  • Dillman, D. A. (1978). Mail and telephone surveys: The total design method. New York: John Wiley &...
  • Dillman, D. A. (2000). Mail and internet surveys: The tailored design method. New York: John Wiley &...
  • Duffy, B., et al. (2005). Comparing data from online and face-to-face surveys. International Journal of Market Research.
  • Einhart, N. (2003). The opinion catcher. Business 2.0.
  • First, M. B., Gibbon, M., Spitzer, R. L., & Williams, J. B. (1996). Structured clinical interview for DSM-IV Axis I...
  • Fontana, A., et al. (1994). PTSD among Vietnam theater veterans: A causal model of etiology in a community sample. Journal of Nervous and Mental Disease.
  • Fowler, F. J. (1995). Improving survey questions: Design and evaluation.
  • Fricker, S., et al. (2005). An experimental comparison of web and telephone surveys. Public Opinion Quarterly.
  • Gosling, S. D., et al. (2004). Should we trust web-based studies? A comparative analysis of six preconceptions about internet questionnaires. American Psychologist.
  • Heeren, T., et al. (2008). A comparison of results from an alcohol survey of a prerecruited Internet panel and the national epidemiologic survey on alcohol and related conditions. Alcoholism: Clinical and Experimental Research.
  • Hines, D. A., & Douglas, E. M. (in press). Intimate terrorism by women towards men: Does it exist? Journal of...
  • Kessler, R. C., et al. (2005). Prevalence, severity, and comorbidity of 12-month DSM-IV disorders in the national comorbidity survey replication. Archives of General Psychiatry.
  • Kilpatrick, D. G., et al. (1997). A 2-year longitudinal analysis of the relationship between violent assault and substance use in women. Journal of Consulting and Clinical Psychology.
  • Knapp, H., & Kirk, S. A. (2003). Using pencil and paper, Internet and touch-tone phones for self-administered surveys:...
  • Koivumaa-Honkanen, H., et al. (2000). Self-reported life satisfaction and 20-year mortality in healthy Finnish adults. American Journal of Epidemiology.