Internal consistency and measurement equivalence of the cannabis screening questions on the paper-and-pencil face-to-face ASSIST versus the online instrument
Substance Abuse Treatment, Prevention, and Policy volume 10, Article number: 8 (2015)
Validated Internet-based screening tools for cannabis use and abuse are needed. The present study aimed to establish equivalence between the previously validated Alcohol, Smoking and Substance Involvement Screening Test (ASSIST) as a paper-and-pencil (PaP)-administered questionnaire and its online use.
Two groups of cannabis users took part in this study and the results were analyzed using structural equation modeling. One group consisted of 150 participants and was assessed with the ASSIST PaP questionnaire in a face-to-face interview (the PaP group). They were recruited from three settings: a primary health care outpatient clinic, a general psychiatric facility, and an ambulatory specialized addiction treatment facility. The other group (the Web group) comprised 1382 persons who answered the online version of the same questionnaire. This sample was drawn from people who naturalistically visited a website dedicated to helping people with cannabis addiction.
The internal consistency was good for the online questionnaire (0.74) and high for the already validated PaP questionnaire (0.91). The Web group, however, had higher scores on cannabis use than did the PaP group. The results show support for configural invariance, meaning that the one-factor structure was preserved across groups, although measurement equivalence between these two survey modes was not achieved. However, when the Web group was split into two random subsamples, measurement invariance was demonstrated between them by cross-validation.
Measurement equivalence was not achieved between the two survey modes. Nonetheless, subanalyses of the Web group demonstrated that the cannabis screening questions of the ASSIST can be used for online screening. Differences in ASSIST scores between samples may be due to the sensitive nature of the information surveyed, with possible underreporting in face-to-face interviews, or to the different characteristics of the Web group because of the specialized nature of the website.
The Internet has come to be widely used as a tool for drug treatment and research. For instance, the Web is now commonly used in searches for medical and psychological information [1-3]. Furthermore, different forms of online treatments have been studied for various mental health and addiction-related conditions [4-6]. Consequently, an increasing number of medical and psychological studies have been performed using Web-based questionnaires, including studies on topics related to substance use.
Internet-based surveys have a number of advantages over paper-and-pencil (PaP) questionnaires [7,9-11], such as easier data collection and a reduction in data-entry errors. The wide dissemination of online surveys may also increase recruitment and lower desirability bias compared with traditional methods of data collection. The latter is of particular interest for surveys on stigmatizing topics such as drug use. For example, Web-based self-administered questionnaires yield higher reporting rates of substance use in adolescents than do PaP self-administered questionnaires. Furthermore, data from Internet-based surveys on topics such as smoking and alcohol use show equal or even better reliability than data collected with PaP or face-to-face approaches. The Internet, probably because of the perception that it offers anonymity, has also been found to be a useful tool for reaching stigmatized populations, such as illicit drug users and users of party drugs such as ecstasy or cannabis. The feasibility of online research on substance use, including illegal substance use, has been demonstrated in several studies.
Despite the wide dissemination of Internet-based surveys, participants in such studies may differ from the general population. In particular, they may be more educated or more likely to be employed [16,17]. Nevertheless, if the possible limitations are acknowledged, Internet-based surveys are suitable for research on topics related to substance use and addictive behaviors, in particular to generate hypotheses that can later be tested in more representative samples.
Despite the many opportunities afforded by online research, however, some concerns remain regarding the reliability (whereby a given procedure produces the same results in tests repeated with the same empirical or equivalent tools) and validity (the degree to which a given procedure operationalizes the intended concept) of data obtained via online questionnaires. Studies investigating the invariance of Web-based and PaP surveys have reported inconsistent findings. Some found no major differences between the two delivery modes, whereas others found variations related to particular scales, and still others reported differences between online and offline versions of the same tests in score distributions and psychometric properties [20,21]. The latter findings apply particularly to studies involving self-disclosure of sensitive information. Social desirability may affect answers in face-to-face interviews. For instance, alcohol use and sexually risky behaviors were more likely to be fully disclosed in Web surveys than in traditional formats. Also noteworthy is the possible impact of different study designs on these outcomes: some researchers used within-subjects designs, whereas others used between-subjects designs in which the online questionnaire or the PaP form was randomly administered, or not, to the participants.
While an increasing number of instruments classically used for mental health assessment have been developed and validated for Internet use [26-29], such developments are still needed for the study of addictive disorders, especially for frequently used substances such as cannabis [30-33]. In recent years, international efforts have been made to develop and validate screening tools related to cannabis use. In particular, after the success of the Alcohol Use Disorders Identification Test (AUDIT), efforts were made to offer similar screening tools for other substance use, including cannabis. The Alcohol, Smoking and Substance Involvement Screening Test (ASSIST) [36,37] was developed by the World Health Organization to screen for problematic or risky substance use (http://www.who.int/substance_abuse/activities/assist/en/index.html). The ASSIST was validated as a face-to-face questionnaire in different populations and in a number of linguistic versions, in which it was found to have sound psychometric properties. Its internal consistency, assessed with Cronbach’s α, ranged from 0.74 to 0.93. Concurrent validity was demonstrated by significant correlations between ASSIST scores and scores from the Addiction Severity Index, the AUDIT, the Revised Fagerstrom Tolerance Questionnaire, and the Mini International Neuropsychiatric Interview (MINI-Plus) for the diagnosis of substance abuse or dependence. Moreover, the ASSIST yielded good sensitivity and specificity for discriminating between substance use and abuse, and between substance abuse and dependence [37,41-44]. In its current version (V3.0), the ASSIST comprises eight questions covering the following substances: tobacco, alcohol, cannabis, cocaine, amphetamine-type stimulants, inhalants, sedatives, hallucinogens, opiates, and other drugs. Question 1 deals with the lifetime use of each substance.
If a substance is used, people are invited to answer additional questions related to this specific substance.
PaP and Web-based surveys may not produce similar results; therefore, the measurement equivalence of Web-based and PaP surveys cannot be taken for granted and must be demonstrated empirically [45,46]. To our knowledge, the ASSIST, despite its potential for use in naturalistic online settings, has not yet been used in Internet studies or validated for online use.
In the current study, we aimed to do the following:
Describe the distribution of self-reported cannabis use in two samples of cannabis users: visitors to a cannabis prevention website who answered an online questionnaire, and patients of three clinics who answered a PaP questionnaire in a face-to-face interview. Both groups answered the ASSIST version 3.0 questionnaire.
Analyze whether the items in the ASSIST instrument operate equivalently across Internet and PaP groups for gathering sensitive information (i.e., is the measurement model group invariant?).
Participants and procedures
We obtained two groups of cannabis users. The first group was assessed with the ASSIST PaP questionnaire on a face-to-face basis, and the second group answered an online version of the same questionnaire.
The first group
The first group consisted of the adults who took part in a previous study related to the validation of the French version of the ASSIST questionnaire . It included 150 participants, of whom 50 were patients from a primary health care outpatient clinic, 50 were outpatients from a general psychiatric facility, and 50 were patients from an ambulatory specialized addiction treatment facility. The use of tobacco, alcohol, and illicit substances (cannabis, cocaine, opiates, etc.) was assessed. The three settings were concerned with these substances, except for the primary care setting, where cocaine and opiates were not identified as problematic. Opiate dependence was found in 7.3% of the sample, alcohol abuse or dependence in 43.3%, and cannabis use in 85.3%. Of the latter, 6.3% were considered to have cannabis abuse or dependence. The mean age of the participants was 41 ± 11.5 years. They were mainly men (64%), single (76.7%), and not working (68.5%). All participants treated in these settings were eligible if they were older than 18 years, able to speak in French, and able to give their written informed consent.
The ASSIST (PaP) assessment (in French) was completed by a trained psychologist or psychiatrist during a face-to-face interview. The ethics committee of the Geneva University Hospitals approved the study.
The second group
The second group was drawn from people who naturalistically visited (without specific advertisement or invitation) a French-language website dedicated to online cannabis help (stop-cannabis.ch). The website offers information and online help for cannabis users. In particular, visitors may access a number of automated interventions such as an online motivational interview, a brief intervention (computer-tailored feedback based on the ASSIST scores), and support for cannabis cessation in a text-messaging format. Although the website is primarily dedicated to people asking for information or help related to cannabis use, it is accessible to anyone, and no specific registration is required to use it. It also includes information for the relatives of cannabis users. The website has been visited by 676,000 Internet users since 2008. No special advertising strategies promote the website: It is mostly accessed via keyword search on general search engines such as Google, as well as from links on other websites such as a sister website dedicated to tobacco smokers. Participants could answer the ASSIST questions related to cannabis use online without any registration procedure and without answering additional socio-demographic questions. After answering the questionnaire (in French), participants received, on a new Web page, a personalized, computer-tailored feedback report based on their answers (one graph and 250 words) that commented on their cannabis use and recommended treatment if necessary. We did not collect any demographic information about participants. Answers emerging from the same IP address were excluded. Participants had the option to accept or decline storage of their answers for survey purposes.
After a positive screening result for lifetime cannabis use (ASSIST, Question 1), both groups of participants answered the following questions:
Questions 2 to 7 of the ASSIST:
In the past three months how often have you used cannabis?
During the past three months how often have you had a strong desire or urge to use cannabis?
During the past three months how often has your use of cannabis led to health, social, legal or financial problems?
During the past three months how often have you failed to do what was normally expected of you because of your use of cannabis?
Has a friend or relative or anyone else ever expressed concern about your use of cannabis?
Have you ever tried to control, cut down or stop using cannabis?
Questions 2 to 5 are rated on a 5-point Likert scale ranging from “never” (in the past 3 months) to “daily or almost daily”, whereas Questions 6 and 7 use a three-category rating (“no, never”, “yes, in the past 3 months”, “yes, but not in the past 3 months”). The total score, calculated as the sum of the scores for Questions 2 through 7 inclusive, is intended to measure one latent factor: the specific substance involvement score as defined by the ASSIST, here the cannabis involvement score.
This score enabled us to identify three groups of cannabis users:
Low risk (score: 0–3): The participant is at low risk of health and other problems from his/her current pattern of use.
Moderate risk (score: 4–26): The participant is at risk of health and other problems from his/her current pattern of substance use.
High risk (score ≥ 27): The participant is at high risk of experiencing severe problems (health, social, financial, legal, relationship) as a result of his/her current pattern of use and is likely to be dependent.
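The scoring rule above (sum the scores for Questions 2 through 7, then band the total by the stated cut-offs) can be sketched as follows; the function name and example inputs are illustrative and not part of the official ASSIST materials:

```python
def cannabis_risk(item_scores):
    """Classify cannabis involvement from ASSIST item scores (Questions 2-7).

    `item_scores` holds the already-weighted score for each of the six
    questions; the total is simply their sum, as described in the text.
    Cut-offs follow the risk bands given above: 0-3 low, 4-26 moderate,
    27 or more high.
    """
    total = sum(item_scores)
    if total <= 3:
        band = "low risk"
    elif total <= 26:
        band = "moderate risk"
    else:
        band = "high risk"
    return total, band

# Illustrative respondents (scores are made up)
print(cannabis_risk([0, 0, 0, 0, 0, 3]))  # (3, 'low risk')
print(cannabis_risk([6, 5, 4, 5, 6, 3]))  # (29, 'high risk')
```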
SPSS 18.0 (Statistical Package for the Social Sciences, SPSS Inc., Chicago, IL) and AMOS 19.0 (Analysis of Moment Structures; SPSS Inc., Chicago, IL) were used to perform the statistical analyses. The distribution of self-reported cannabis use was compared by type of survey using χ² tests.
We next assessed the internal consistency of the subscale in the Web survey with Cronbach's alpha coefficient. Ideally, α should be above 0.70, but not much higher than 0.90.
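For reference, Cronbach's alpha can be computed directly from raw item scores with the standard formula, α = k/(k−1) · (1 − Σ item variances / total-score variance). The sketch below uses made-up data, not the study's own:

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for a set of item-score columns.

    `items` is a list of k sequences, one per questionnaire item, each
    holding the scores of the same n respondents (sample variances,
    i.e. denominator n-1, are used throughout).
    """
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    item_var = sum(variance(col) for col in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Illustrative (made-up) scores: 3 items answered by 5 respondents
items = [
    [2, 4, 3, 5, 1],
    [3, 4, 2, 5, 2],
    [2, 5, 3, 4, 1],
]
alpha = cronbach_alpha(items)
print(round(alpha, 2))  # 0.93
```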
For the multigroup invariance hypothesis, we used the structural equation modeling (SEM) procedure described by Jöreskog. Depending on the research question, testing for group equivalence may involve a series of tests performed in the following, increasingly restrictive order: configural equivalence, measurement equivalence, and structural equivalence. Configural invariance testing focuses on the extent to which the number of factors and the pattern of their structure are similar between groups. In testing for measurement and structural invariance, the focus is more specifically on the extent to which parameters in the measurement and structural components of the model are equivalent across groups [49-51]. It is worth noting, however, that an appropriate baseline model must be determined for each group separately, from which the configural model is derived. Given that one of our research questions concerns measurement equivalence across groups, the statistical analyses focus on configural invariance and measurement invariance.
Evaluation of model fit
Model fit was evaluated using the following indices:
The χ² to degrees of freedom ratio (χ²/df)
The comparative fit index (CFI)
The root mean square error of approximation (RMSEA)
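These three indices can be derived from the χ² statistics of the fitted model and of the independence (null) baseline model using the standard formulas. The numbers in the sketch below are illustrative only, chosen to be of the same order as those reported later, and are not the study's actual values:

```python
from math import sqrt

def fit_indices(chi2, df, chi2_base, df_base, n):
    """Common SEM fit indices from model and baseline chi-square values.

    chi2, df           : tested model
    chi2_base, df_base : independence (null) baseline model
    n                  : sample size
    CFI   = 1 - max(chi2-df, 0) / max(chi2_base-df_base, chi2-df, 0)
    RMSEA = sqrt(max(chi2-df, 0) / (df * (n - 1)))
    """
    chi2_df = chi2 / df
    d_model = max(chi2 - df, 0.0)
    d_base = max(chi2_base - df_base, 0.0)
    cfi = 1.0 - d_model / max(d_model, d_base)
    rmsea = sqrt(d_model / (df * (n - 1)))
    return chi2_df, cfi, rmsea

# Illustrative values (NOT the study's): model chi2 = 17.7 on 9 df,
# baseline chi2 = 900 on 15 df, n = 1366 respondents
vals = fit_indices(17.7, 9, 900, 15, 1366)
print([round(v, 2) for v in vals])  # [1.97, 0.99, 0.03]
```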
Changes in goodness-of-fit statistics were also examined to detect differences between the models. A significant difference in χ² values between nested models means that the equality constraints do not all hold across the groups. Since the variables are measured on an ordinal scale with relatively few categories and the values between categories are not equidistant, asymptotically distribution-free estimation was used here instead of maximum likelihood estimation, as one strategy to accommodate non-normally distributed data.
Sample size considerations
Sample size plays an important role in providing unbiased parameter estimates and accurate model fit information. Bentler and Chou recommended a ratio of at least 5:1 of subjects to variables for normal and elliptical distributions, and there seems to be a general consensus among researchers to adopt this ratio. However, for categorical or non-normally distributed variables, as is the case here, larger samples are required than for continuous or normally distributed variables. A ratio of at least 10 subjects per variable for this type of distribution is recommended [55,56]. The sample in the current study fulfills this requirement.
A sample of 1382 persons participated in the Web study, whereas 150 persons filled in the PaP questionnaire. In the Web survey, 1% of the participants were excluded because of missing data, leaving a sample size of 1366 for the analysis. There were no missing data in the PaP survey.
There were statistically significant differences between the two groups for all ASSIST questions (Table 1). Participants from the Web survey scored higher on all items.
According to the validation study of the French version of the ASSIST, the internal consistency of the cannabis involvement scale yielded a Cronbach’s alpha coefficient of 0.91. In the current study, these coefficients were 0.91 and 0.74 for the PaP and Web surveys, respectively.
Testing for configural equivalence
As noted earlier, a prerequisite to testing for instrument equivalence is to establish a well-fitting model for each group separately, followed by a combined baseline model in which the same parameters are estimated again within the framework of a multigroup model. The results showed no misspecification for the PaP model, whereas possible misspecifications concerning two error terms (Items 2 and 3, Items 3 and 7) were highlighted for the Web model, which was subsequently respecified and reestimated with these two error covariances included. These baseline models were considered optimal in representing the data for the PaP sample (χ²/df = 1.93; CFI = 0.90; RMSEA = 0.08) and for the Web sample (χ²/df = 1.97; CFI = 0.99; RMSEA = 0.03).
Next, these differentially specified baseline models were incorporated into one file for the purposes of testing cross-group equivalence. Assessment of this model revealed a good fit to the data, as indicated by the CFI (0.99) and RMSEA (0.02). The χ² value (8.06, df = 6) provides the baseline value against which all subsequent tests for invariance are compared (Table 2).
Testing for measurement equivalence
A model with loadings constrained to be equal across groups had a significantly poorer fit than the unconstrained model (CFI = 0.97 and RMSEA = 0.03). As can be seen from Table 2, the χ² difference between the constrained model and the configural model is statistically significant. This finding indicates that the constrained model fits significantly worse than the unconstrained model and argues for nonequivalence of factor loadings across the groups. In other words, the hypothesis that the PaP survey and the Web survey have the same factor loadings is rejected. The factor loadings by type of format are displayed in Table 3.
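The nested-model comparison rests on treating the difference in χ² between the constrained and configural models as itself χ²-distributed, with degrees of freedom equal to the difference in df. A minimal sketch follows; only the configural χ² = 8.06 on 6 df comes from the text, while the constrained model's values and the critical value are hypothetical (11.07 is the standard upper 5% point of χ² for 5 df):

```python
def chi2_difference_test(chi2_constrained, df_constrained,
                         chi2_unconstrained, df_unconstrained,
                         critical_value):
    """Nested-model chi-square difference test.

    Returns the chi-square difference, the df difference, and whether
    the difference exceeds the supplied critical value (in which case
    the equality constraints do not all hold across groups).
    """
    d_chi2 = chi2_constrained - chi2_unconstrained
    d_df = df_constrained - df_unconstrained
    significant = d_chi2 > critical_value
    return d_chi2, d_df, significant

# Configural chi2 = 8.06 (df = 6) as reported above; the constrained
# model's chi2 = 26.5 (df = 11) is a made-up value for illustration.
result = chi2_difference_test(26.5, 11, 8.06, 6, critical_value=11.07)
print(result)  # a difference of about 18.44 on 5 df: significant
```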
Testing for invariance across two random subsamples of the Web sample
From the results reported above, the equivalence of the online and offline versions of the ASSIST was not demonstrated. To check whether the ASSIST could nevertheless be used in online studies, we randomly split the Web sample into two subgroups comparable in terms of ASSIST risk level (Table 4). We then applied the SEM methodology described earlier to the first subsample and cross-validated the results using the other subsample in a multigroup analysis.
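A random split that remains comparable in terms of ASSIST risk level can be obtained by stratifying on the risk band before splitting. The sketch below is one plausible implementation under that assumption; the exact procedure used in the study is not detailed in the text:

```python
import random

def split_by_risk(participants, seed=0):
    """Randomly split respondents into two subsamples with comparable
    ASSIST risk-level composition (a simple stratified half-split).

    `participants` is a list of (respondent_id, risk_band) pairs.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    strata = {}
    for pid, band in participants:
        strata.setdefault(band, []).append(pid)
    sub1, sub2 = [], []
    for band, ids in strata.items():
        rng.shuffle(ids)          # randomize within each risk band
        half = len(ids) // 2
        sub1.extend(ids[:half])
        sub2.extend(ids[half:])
    return sub1, sub2

# Toy example: 8 respondents across the three risk bands
people = list(enumerate(
    ["low", "moderate", "moderate", "moderate",
     "high", "high", "low", "moderate"]))
s1, s2 = split_by_risk(people)
print(len(s1), len(s2))  # 4 4
```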
The baseline results showed good model fit for Subsample 1 (χ²/df = 1.77; CFI = 0.99; RMSEA = 0.03), as well as for Subsample 2 (χ²/df = 2.06; CFI = 0.99; RMSEA = 0.04). The next step consisted of establishing configural invariance for both groups together, without constraints; it also yielded satisfactory results. As can be seen in Table 5, configural invariance (Model 1) is supported, indicating that the one-factor structure is preserved in each group. From Model 2 to Model 4, all successive invariance tests (metric invariance, residual variance invariance, strict factorial invariance) held, given the nonsignificant differences in χ² values (Table 5).
In this study, the internal consistency and validity of the ASSIST (for the assessment of cannabis use) were compared in two samples: visitors to a cannabis prevention website who answered an online survey in French, and outpatients of three clinics in Switzerland who answered an ASSIST PaP questionnaire. The internal consistency was high for the PaP survey and good for the Web survey (0.91 and 0.74, respectively). A possible explanation for this difference is that Cronbach’s alpha is sensitive to deviations from normality; if the data do not meet the assumptions of normality and linearity, as in the Web sample, reliability may be underestimated by the formulas currently used to calculate this value. Regarding validity, the results support configural invariance, meaning that the one-factor structure is preserved across groups.
Measurement invariance was not supported by the study results, however. Cannabis use was more frequent and problematic for the Web group than it was for the clinic-recruited sample. Only 10% of the PaP sample reported daily use of cannabis in the past 3 months, in comparison to 74% of the Web sample (Question 2). As shown in Table 1, very few PaP participants (4%) admitted to having failed in tasks normally expected from them because of their cannabis use (Question 4), whereas 60% of the Web sample endorsed this item. Consequently, the loading of this item in the factor analysis (Table 3) was very different across groups. The same problem applies to Question 7 (tried to stop or cut down).
These results may be explained, in part, by the absence of a randomization procedure (across the PaP sample and the total Web sample), leading to selection bias, one of the limitations of the study [11,58]. The characteristics of the Web sample (naturalistic visitors of a website dedicated to helping people facing cannabis addiction) probably reflect the involvement (via self-selection) of participants who were more concerned about their cannabis use than would be the case in the general population, as suggested by another study on the self-selection bias associated with online studies. The participants from the stop-cannabis.ch website selected themselves to visit the website and chose to assess their cannabis use with the ASSIST, probably in relation to concerns about their cannabis consumption. The participants from the clinic centers, in contrast, may not have been preoccupied by their cannabis use.
The contribution of a desirability bias to the differences observed between the PaP and Web groups should also not be ignored. This bias has previously been reported in face-to-face interviews. The phenomenon leads to underreporting of stigmatized behaviors (e.g., cannabis use) in face-to-face settings compared with computerized self-assessments [60,61]. Furthermore, differences in the understanding of some questions cannot be fully ruled out [7,58].
Study limitations and strengths
This study has several limitations, the most important of which is that we did not include randomly equivalent groups by type of survey. This limitation may have undermined external validity. It also accentuates selection and desirability biases [45,62], hence restricting the statistical comparison of the PaP and Web groups to a descriptive level. Another limitation is the lack of other clinical assessments. Comparison with a diagnostic instrument, for instance, would have allowed us to calculate the sensitivity and specificity of the Internet-administered ASSIST, two inextricably linked measures that help clinicians decide whether to rule a diagnosis in or out. A third limitation is that detailed socio-demographic characteristics of the Internet sample were not available to examine the profile of the respondents.
These limitations having been acknowledged, the study nonetheless has two major strengths. The first is that, owing to the large Web-sample size, it has been possible to analyze two randomly selected groups whose measurement equivalence results support the view that the ASSIST is a valid instrument in Web surveys.
The second strength is that, although some studies have addressed the online assessment of substance use or Internet-based prevention or treatment [65,66] of cannabis use, our study is, to the best of our knowledge, the first to describe the cannabis use of naturalistically self-selected users of a specialized website. This is an important addition, demonstrating the acceptability of such websites in naturalistic settings to cannabis users who are concerned about their behavior.
Despite these limitations, it appears that the cannabis screening questions of the ASSIST are useful in a Web-based format. The instrument could be used for screening and then possibly for online prevention and intervention. Further studies using the ASSIST for the assessment of other addictions, in other populations, or under different conditions, using the same statistical designs (e.g., randomized within-subjects or between-subjects designs), are warranted to generalize these findings.
Khazaal Y, Chatton A, Cochand S, Hoch A, Khankarli MB, Khan R, et al. Internet use by patients with psychiatric disorders in search for general and medical informations. Psychiatr Q. 2008;79:301–9.
Khazaal Y, Chatton A, Cochand S, Coquard O, Fernandez S, Khan R, et al. Brief DISCERN, six questions for the evaluation of evidence-based content of health-related websites. Patient Educ Couns. 2009;77:33–7.
Morel V, Chatton A, Cochand S, Zullino D, Khazaal Y. Quality of web-based information on bipolar disorder. J Affect Disord. 2008;110(3):265–9.
Andersson G, Bergstrom J, Hollandare F, Carlbring P, Kaldo V, Ekselius L. Internet-based self-help for depression: randomised controlled trial. Br J Psychiatry. 2005;187:456–61.
Etter JF. Internet-based smoking cessation programs. Int J Med Inform. 2006;75(1):110–6.
Schaub M, Sullivan R, Haug S, Stark L. Web-based cognitive behavioral self-help intervention to reduce cocaine consumption in problematic cocaine users: randomized controlled trial. J Med Internet Res. 2012;14(6):e166.
van Gelder MM, Bretveld RW, Roeleveld N. Web-based questionnaires: the future in epidemiology? Am J Epidemiol. 2010;172(11):1292–8.
Becker J, Hungerbuehler I, Berg O, Szamrovicz M, Haubensack A, Kormann A, et al. Development of an integrative cessation program for co-smokers of cigarettes and cannabis: demand analysis, program description, and acceptability. Subst Abuse Treat Prev Policy. 2013;8:33.
Whitehead L. Methodological issues in Internet-mediated research: a randomized comparison of internet versus mailed questionnaires. J Med Internet Res. 2011;13(4):e109.
Ramo DE, Prochaska JJ. Broad reach and targeted recruitment using Facebook for an online survey of young adult substance use. J Med Internet Res. 2012;14(1):e28.
Fenner Y, Garland SM, Moore EE, Jayasinghe Y, Fletcher A, Tabrizi SN, et al. Web-based recruiting for health research using a social networking site: an exploratory study. J Med Internet Res. 2012;14(1):e20.
Wang YC, Lee CM, Lew-Ting CY, Hsiao CK, Chen DR, Chen WJ. Survey of substance use among high school students in Taipei: web-based questionnaire versus paper-and-pencil questionnaire. J Adolesc Health. 2005;37(4):289–95.
Brigham J, Lessov-Schlaggar CN, Javitz HS, Krasnow RE, McElroy M, Swan GE. Test-retest reliability of web-based retrospective self-report of tobacco exposure and risk. J Med Internet Res. 2009;11(3):e35.
Miller ET, Neal DJ, Roberts LJ, Baer JS, Cressler SO, Metrik J, et al. Test-retest reliability of alcohol measures: is there a difference between internet-based assessment and traditional methods? Psychol Addict Behav. 2002;16(1):56–63.
Miller PG, Sonderlund AL. Using the internet to research hidden populations of illicit drug users: a review. Addiction. 2010;105(9):1557–67.
Dickerson S, Reinhart AM, Feeley TH, Bidani R, Rich E, Garg VK, et al. Patient Internet use for health information at three urban primary care clinics. J Am Med Inform Assoc. 2004;11(6):499–504.
Smith AB, King M, Butow P, Olver I. A comparison of data quality and practicality of online versus postal questionnaires in a sample of testicular cancer survivors. Psychooncology. 2013;22(1):233–7.
Piergiorgio C. Social Research: Theory, Methods and Technique. London: SAGE Publications; 2011.
Eaton DK, Brener ND, Kann L, Denniston MM, McManus T, Kyle TM, et al. Comparison of paper-and-pencil versus Web administration of the Youth Risk Behavior Survey (YRBS): risk behavior prevalence estimates. Eval Rev. 2010;34(2):137–53.
Vallejo MA, Jordan CM, Diaz MI, Comeche MI, Ortega J. Psychological assessment via the internet: a reliability and validity study of online (vs paper-and-pencil) versions of the General Health Questionnaire-28 (GHQ-28) and the Symptoms Check-List-90-Revised (SCL-90-R). J Med Internet Res. 2007;9(1):e2.
Buchanan T. Internet-based questionnaire assessment: appropriate use in clinical contexts. Cogn Behav Ther. 2003;32(3):100–9.
Tourangeau R, Yan T. Sensitive questions in surveys. Psychol Bull. 2007;133(5):859–83.
Kelly D, Harper DJ, Landau B. Questionnaire mode effects in interactive information retrieval experiments. Inf Process Manag. 2008;44(1):122–41.
Acknowledgements
The authors would like to thank Dr. Barbara Broers for her contribution to the collection of data. They also thank the anonymous reviewers who provided useful feedback upon the first submission of this study, and Barbara Every, ELS, of BioMedical Editor, for language editing of the manuscript.
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
Y. Khazaal designed the study. A. Chatton undertook the statistical analysis. Y. Khazaal, A. Chatton, D. Zullino, J-F Etter, and R. Khan contributed to the writing of the manuscript. D. Zullino and J-F Etter contributed to the editing and review of the final manuscript. G. Monney and A. Nallet contributed to data collection. All authors contributed to and have approved the final manuscript.
Cite this article
Khazaal, Y., Chatton, A., Monney, G. et al. Internal consistency and measurement equivalence of the cannabis screening questions on the paper-and-pencil face-to-face ASSIST versus the online instrument. Subst Abuse Treat Prev Policy 10, 8 (2015). https://doi.org/10.1186/s13011-015-0002-9
Keywords
- The Alcohol, Smoking and Substance Involvement Screening Test
- Screening test