
METHODS

The gambling and problem gambling survey in Oregon was completed in three stages. In the first stage of the project, Gemini Research consulted on the final design of the questionnaire and the stratification of the sample with the Board of Directors of the Oregon Gambling Addiction Treatment Foundation and with Gilmore Research Group, the organization responsible for data collection. In the second stage, staff from Gilmore Research completed telephone interviews with a sample of 1,502 Oregon residents aged 18 years and older. All interviews were completed between May 1 and June 8, 1997, and the average interview lasted 13 minutes. Gilmore Research then provided Gemini Research with the data for the third stage of the project, which included analysis of the data and preparation of this report.

Questionnaire

The questionnaire for the survey in Oregon was composed of four major sections (see Appendix B for a copy of the questionnaire). The first section included questions about 14 different types of gambling available to residents of the state. For each type of gambling, respondents were asked whether they had ever tried this type of gambling, whether they had tried it in the past year, and, if so, how often they had done so in the past month. Respondents were also asked to estimate their typical monthly expenditures on the types of gambling that they had tried in the past year.

The second section of the questionnaire was composed of the lifetime and current South Oaks Gambling Screen (SOGS) items. The third section consisted of an alternative screen for pathological gambling based on the DSM-IV, the most recent diagnostic criteria for pathological gambling. These two sections were rotated so that half of the respondents answered the SOGS questions first and half answered the DSM-IV questions first. The final section included questions about the demographic characteristics of each respondent.

Sample Design

Information about how survey samples are developed is important in assessing the validity and reliability of the results of the survey. While a fully random design is the most desirable approach in developing a representative sample of the population, this approach often results in under-sampling demographic groups with low rates of telephone ownership. These groups most often include young adults, minorities and individuals with low education and income. Increasingly, researchers use stratified random designs to guard against under-sampling. To determine whether a representative sample was obtained, it is helpful to calculate the response rate for the sample as a whole as well as to examine how closely the sample matches the known demographic characteristics of the population. If substantial differences are detected, post-stratification weights can be applied during analysis to ensure that the results of the survey can be generalized to the larger population.

To obtain a representative sample for the Oregon survey, random selection of households and random selection of respondents within households were used during the first part of the data collection process. During data collection, completed interviews were monitored to determine whether the sample was meeting quotas for males and young adults.

After completing approximately 1,000 interviews, we elected to begin screening for male respondents and for respondents under the age of 35 in eligible households in order to obtain adequate representation of men and young adults in the sample. Rather than exclude an eligible household once it was contacted, we changed the introductory screen to recruit eligible respondents within the household in the following order (a sketch of this screening logic follows the list):

· male under 35
· female under 35
· male aged 18 or older
· female aged 18 or older
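
The logic of this revised screen can be illustrated with a short sketch. The roster-based selection below is a hypothetical simplification (an actual CATI screen asks for respondents in priority order rather than enumerating the household), and the function name is ours:

# Hypothetical sketch of the revised within-household selection logic.
# A real CATI screen requests respondents in priority order rather than
# collecting a full roster; this simplification shows the same ordering.

def select_respondent(household):
    """Pick the eligible adult with the highest quota priority.

    `household` is a list of (sex, age) tuples.
    Priority: male under 35, female under 35, any adult male,
    then any adult female.
    """
    def priority(member):
        sex, age = member
        if sex == "male" and age < 35:
            return 0
        if sex == "female" and age < 35:
            return 1
        if sex == "male":
            return 2
        return 3  # female aged 35 or older

    adults = [m for m in household if m[1] >= 18]
    if not adults:
        return None  # no eligible respondent in this household
    return min(adults, key=priority)

# Example: a household with a 52-year-old woman and a 28-year-old man
print(select_respondent([("female", 52), ("male", 28)]))  # ('male', 28)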

Response Rate

Survey professionals in general have found that response rates for telephone surveys have declined in recent years. These declines are related to the proliferation of fax machines, answering machines, blocking devices and other telecommunications technology that make it more difficult to identify and recruit eligible individuals. These declines are also related to the amount of political polling and market research that is now done by telephone and to the higher likelihood that eligible households will refuse to participate in any surveys.

The consequence has been that response rates for telephone surveys are now calculated in several different ways, although all of these approaches involve dividing the number of respondents by the number of contacts believed to be eligible.2 Differences in response rates result from different ways of calculating the denominator, i.e., the number of individuals eligible to respond. The most liberal approach, called the Upper Bound method, takes into account only those individuals who refuse to participate or who terminate an interview; it is used by the federal government because of controversies over the eligibility of numbers that cannot be reached. The Upper Bound method yields a response rate of 61% for the Oregon survey.
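
As an illustration, a minimal sketch of the Upper Bound computation follows. The refusal and termination count is hypothetical (the report gives only the resulting rate), chosen so the arithmetic reproduces the 61% figure:

# Upper Bound response rate: completed interviews divided by completes
# plus refusals and terminations only; unresolved numbers are ignored.
# The refusal/termination count below is hypothetical, chosen so the
# result matches the 61% reported for the Oregon survey.
completes = 1502
refusals_and_terminations = 960

upper_bound_rate = completes / (completes + refusals_and_terminations)
print(f"Upper Bound response rate: {upper_bound_rate:.0%}")  # 61%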

The most conservative approach is the method adopted by the Council of American Survey Research Organizations (CASRO). The CASRO method uses the known status of the portions of the sample that were contacted to impute the status of the portions of the sample that were never reached. The CASRO method yields a response rate for the Oregon survey of 51% if over-quota eligible respondents are assumed to be disqualified and 48% if over-quota eligibles are assumed to qualify as "good numbers."
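
The CASRO computation can be sketched in the same way. All counts below are hypothetical (the report gives only the resulting rates); the imputation shown is the standard CASRO approach of applying the eligibility rate observed among known-status numbers to numbers whose status was never resolved:

# CASRO response rate: the eligibility rate observed among numbers of
# known status is imputed to numbers whose status was never resolved,
# and those imputed eligibles are added to the denominator. All counts
# below are hypothetical.
completes = 1502
eligible_nonrespondents = 850   # refusals, terminations, etc.
known_ineligible = 2600         # businesses, faxes, non-working numbers
unknown_status = 1200           # never answered, always busy, etc.

# Share of known-status numbers that proved eligible
e = (completes + eligible_nonrespondents) / (
    completes + eligible_nonrespondents + known_ineligible)

casro_rate = completes / (
    completes + eligible_nonrespondents + e * unknown_status)
print(f"CASRO response rate: {casro_rate:.0%}")  # 51% with these counts

Whether over-quota eligible respondents count as eligible or as disqualified changes this denominator, which is the source of the 51% versus 48% difference noted above.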

While the CASRO approach yields response rates that are lower than desired for the Oregon survey, the crucial question is the impact that these response rates have on our confidence in the results of the survey and, in particular, on the prevalence estimates of problem and pathological gambling in Oregon. Lesieur (1994) has noted that the potential biases introduced by the telephone interview process all suggest that problem gambling prevalence rates established through telephone surveys are highly conservative. Further supporting our belief that these prevalence estimates are conservative but reliable, work in British Columbia investigating potential sources of non-response in problem gambling surveys found no significant differences between respondents and refusers in gambling behavior, SOGS items or demographics (Angus Reid & Gemini Research 1994).

Weighting the Sample

To determine whether the sample was representative of the population, the demographics of the sample were compared with demographic information from the United States Bureau of the Census. Since comparisons are with the 1990 census, some of the differences between the sample and the census, such as age and income, may be due to changes in the characteristics of the population over the past seven years.

After comparing the demographic characteristics of the sample with the known demographics of the population in Oregon, we elected to weight the sample for age. While the difference between the actual sample and the known characteristics of the population was not great (six percentage points), we were concerned about the impact that such age differences could have, given what is known about the demographic characteristics of problem gamblers in the general population. Table 1 shows key demographic characteristics of the actual and weighted samples and compares these characteristics to information from the 1990 census (the most recent detailed information available on the characteristics of the population); a sketch of the weighting computation follows the table. The table shows that the weighted Oregon sample is representative of the population in terms of gender, age, ethnicity and marital status.

Table 1: Comparing the Demographics of the Actual and Weighted Samples and the General Population

                            Actual       Weighted     1990
                            Sample       Sample       Census
                            %            %            %
                            (N=1,502)    (N=1,502)

Gender
  Male                      44.8         45.2         48.0
  Female                    55.2         54.8         52.0

Age
  18 - 20                    4.2          5.2          5.6
  21 - 29                   14.0         17.0         17.0
  30 - 54                   50.3         48.9         47.7
  55 and over               31.5         29.0         29.6

Ethnicity
  White                     92.5         92.3         92.8
  Non-White                  7.5          7.7          7.2

Marital Status
  Married                   57.7         57.2         57.3
  Widowed                   14.3          9.0          6.9
  Divorced/Separated         9.7         13.4         12.7
  Never Married             18.4         20.4         23.0
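
The report does not spell out the weighting computation, but the standard post-stratification weight is the ratio of the census proportion to the actual-sample proportion within each age group. The sketch below uses the age figures from Table 1; because the published weighted column does not match the census exactly, the weights actually applied were presumably derived from finer-grained age data:

# Post-stratification weights for age: census share divided by
# actual-sample share, using the proportions from Table 1. This is a
# sketch of the standard approach; the report does not describe the
# exact procedure used for the Oregon survey.
actual = {"18-20": 4.2, "21-29": 14.0, "30-54": 50.3, "55+": 31.5}
census = {"18-20": 5.6, "21-29": 17.0, "30-54": 47.7, "55+": 29.6}

weights = {group: census[group] / actual[group] for group in actual}
for group, w in weights.items():
    print(f"{group}: weight = {w:.2f}")
# Young respondents are weighted up (18-20 -> 1.33) and older
# respondents are weighted down (55+ -> 0.94).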

Data Analysis and Reporting

To facilitate comparison of the survey data with results of similar surveys in other states, detailed demographic data on age, ethnicity, education, income and marital status were collapsed into fewer categories. Age was collapsed into four groups ("18 to 20," "21 to 29," "30 to 54" and "55 and Over"). Ethnicity was collapsed from six groups into two ("White" and "Non-White," which includes Native Americans, Asians and Hispanics as well as Blacks). Marital status was collapsed from five groups into four ("Married," "Widowed," "Separated/Divorced" and "Never Married"). Education was collapsed from five groups into two ("Less than High School" and "High School Graduate"). Employment was collapsed from seven groups into three ("Working," "Unemployed" and "Other," which includes respondents who are going to school, keeping house, disabled or retired). Household income was collapsed from six groups into three ("Less than $25,000," "$25,000 to $50,000" and "$50,000 or More").
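
As an illustration of this recoding (the detailed response categories below are hypothetical reconstructions; Appendix B defines the actual questionnaire categories):

# Sketch of collapsing detailed response categories into the analysis
# groups described above. The six-group ethnicity and six-bracket
# income labels are hypothetical; see Appendix B for the actual
# questionnaire categories.
ETHNICITY_MAP = {
    "White": "White",
    "Black": "Non-White",
    "Hispanic": "Non-White",
    "Asian": "Non-White",
    "Native American": "Non-White",
    "Other": "Non-White",
}

INCOME_MAP = {
    "Under $15,000": "Less than $25,000",
    "$15,000 to $24,999": "Less than $25,000",
    "$25,000 to $34,999": "$25,000 to $50,000",
    "$35,000 to $49,999": "$25,000 to $50,000",
    "$50,000 to $74,999": "$50,000 or More",
    "$75,000 or more": "$50,000 or More",
}

print(ETHNICITY_MAP["Asian"])            # Non-White
print(INCOME_MAP["$35,000 to $49,999"])  # $25,000 to $50,000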

Chi-square analysis and analyses of variance were used to test for statistical significance. In order to adjust for the large number of statistical tests conducted, p-values smaller than .01 are considered highly significant while p-values at the more conventional .05 level are considered significant. In reading the tables in this report that contain demographic data, asterisks in the right-hand column indicate that one of the figures in that category is significantly different from other figures in the same category.
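
A minimal sketch of the kind of test reported here, using hypothetical counts (scipy's chi2_contingency implements the chi-square test of independence):

# Chi-square test of independence between gender and problem-gambling
# status, flagged at the .05 and .01 thresholds used in this report.
# The contingency counts are hypothetical.
from scipy.stats import chi2_contingency

# Rows: male, female; columns: non-problem, problem gambler
table = [[680, 45],
         [760, 18]]

chi2, p, dof, expected = chi2_contingency(table)
if p < 0.01:
    flag = "highly significant (p < .01)"
elif p < 0.05:
    flag = "significant (p < .05)"
else:
    flag = "not significant"
print(f"chi2 = {chi2:.2f}, p = {p:.4f}: {flag}")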

2 We would like to express our appreciation to Patricia Fullmer of Gilmore Research Group for her assistance in clarifying the different approaches to calculating response rates.
