Reliability and validity of indirect assessment outcomes: Experts versus caregivers

Publication date: Available online 20 March 2017
Source: Learning and Motivation
Author(s): Joseph D. Dracobly, Claudia L. Dozier, Adam M. Briggs, Jessica F. Juanico

Abstract
Clinicians often conduct indirect assessments (IAs; e.g., Durand & Crimmins, 1988; Iwata, DeLeon, & Roscoe, 2013; Matson & Vollmer, 1995), such as questionnaires and interviews with caregivers, to gain information about the variables influencing problem behavior. However, researchers have found poor reliability and validity of IAs with respect to determining functional variables. Numerous variables might influence the efficacy of IAs as an assessment tool, one of which is the skill set of the person completing the IA. For example, it may be possible to increase the validity and reliability of IAs by having individuals with certain skill sets, such as a background in behavior analysis and functional behavior assessment (FBA) (“experts”), complete them. Thus, the purpose of this study was to compare the reliability (i.e., agreement with respect to function and specific IA questions) and validity (i.e., agreement between the outcome of IAs and a functional analysis) of IAs completed by caregivers and “experts” for each of eight children who emitted problem behavior. We found that experts were more likely than caregivers to agree on IA outcomes with respect to (a) overall interrater agreement, (b) item-by-item agreement, and (c) the highest-rated function(s) of problem behavior. Experts were also more likely to correctly identify…