1. Connolly K, Koslouski JB, Chafouleas SM, Schwartz MB, Edmondson B, Briesch AM. Evaluating the Usability of the Wellness School Assessment Tool Whole School, Whole Community, Whole Child (WellSAT WSCC): A School Wellness Policy Assessment Tool. J Sch Health 2024;94:406-414. PMID: 37933437. DOI: 10.1111/josh.13410.
Abstract
BACKGROUND: Adoption of the Whole School, Whole Community, Whole Child (WSCC) model has been slowed by a lack of available tools to support implementation. The Wellness School Assessment Tool (WellSAT) WSCC is an online assessment tool that allows schools to evaluate the alignment of their policies with the WSCC model. This study assesses the usability of the WellSAT WSCC.
METHODS: Using a convergent mixed-methods design, we collected qualitative and quantitative data from 5 school-based participants with roles in policy development and evaluation. Participants explored the platform while engaging in a think-aloud procedure and scored a sample policy using the platform. They also completed the System Usability Scale and responded to open-ended questions about the platform's usability.
RESULTS: Participants rated the WellSAT WSCC as an above-average user experience, but the data suggested several areas for improvement, including clearer instructions, enhanced visual design of the platform, and guidance for subsequent policy changes.
CONCLUSION: The WellSAT WSCC provides an above-average user experience but can be further improved. Such improvements would increase its potential for wider use and facilitate integration of the WSCC model into school policy.
2. Volpe RJ, Matta M, Briesch AM, Owens JS. Formative behavioral assessment across eight constructs: Dependability of direct behavior ratings and formative behavior rating measures. J Sch Psychol 2023;101:101251. PMID: 37951664. DOI: 10.1016/j.jsp.2023.101251.
Abstract
Because of their promise as feasible tools for evaluating the effects of school-based interventions, Direct Behavior Ratings (DBR) have received considerable research attention over the past two decades. Although DBR methodology has demonstrated much promise, favorable psychometric characteristics have only been demonstrated for tools measuring a small number of constructs. Likewise, although a variety of DBR methods have been proposed, most extant studies have focused on single-item methods. The present study examined the dependability of four methods of formative behavioral assessment (i.e., single-item and multi-item ratings administered either daily [DBR] or weekly [formative behavior rating measures, or FBRM]) across eight psychological constructs (i.e., interpersonal skills, academic engagement, organizational skills, disruptive behavior, oppositional behavior, interpersonal conflict, anxious/depressed, and social withdrawal). School-based professionals (N = 91; i.e., teachers, paraprofessionals, and intervention specialists) were each assigned to one of the four assessment conditions and rated one student across all eight constructs. Dependability estimates varied substantially across methods and constructs (range = 0.75-0.96), although the findings support the use of the broad set of formative assessment tools evaluated.
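The dependability coefficients reported in this line of work come from generalizability (G) theory. As an illustration only (not the authors' analysis code, and with hypothetical data), the sketch below computes an absolute dependability (phi) coefficient for a simple one-facet design in which students are rated on several occasions: variance components are estimated from the usual ANOVA mean squares, and `n_prime` is the number of occasions a decision would average over.

```python
def dependability(scores, n_prime=1):
    """Absolute dependability (phi) for a students x occasions design.

    scores: list of per-student score lists (students x occasions).
    n_prime: number of occasions the decision score averages over.
    """
    n_p, n_o = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n_p * n_o)
    p_means = [sum(row) / n_o for row in scores]
    o_means = [sum(scores[p][o] for p in range(n_p)) / n_p for o in range(n_o)]

    # ANOVA mean squares for persons, occasions, and residual
    ms_p = n_o * sum((m - grand) ** 2 for m in p_means) / (n_p - 1)
    ms_o = n_p * sum((m - grand) ** 2 for m in o_means) / (n_o - 1)
    ss_res = sum((scores[p][o] - p_means[p] - o_means[o] + grand) ** 2
                 for p in range(n_p) for o in range(n_o))
    ms_res = ss_res / ((n_p - 1) * (n_o - 1))

    var_res = ms_res                       # occasion-by-student interaction + error
    var_p = max((ms_p - ms_res) / n_o, 0)  # true between-student variance
    var_o = max((ms_o - ms_res) / n_p, 0)  # occasion (day) variance

    # Phi: person variance over person variance plus averaged error variance
    return var_p / (var_p + (var_o + var_res) / n_prime)
```

Averaging over more occasions (larger `n_prime`) shrinks the occasion and error variance in the denominator, which is one reason single daily ratings tend to be less dependable than multi-day or multi-item composites.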
Affiliation(s)
- Robert J Volpe
- Department of Applied Psychology, Northeastern University, 360 Huntington Ave., Boston, MA 02115, USA.
- Michael Matta
- Department of Psychological, Health, and Learning Sciences, University of Houston, Farish Hall 429, 3657 Cullen Blvd., Houston, TX 77204, USA
- Amy M Briesch
- Department of Applied Psychology, Northeastern University, 360 Huntington Ave., Boston, MA 02115, USA
- Julie S Owens
- Department of Psychology, Ohio University, Porter Hall 200, Athens, OH 45701, USA
3. Briesch AM, Waldron FM, Beneville MA. State Variation Regarding Other Health Impairment Eligibility Criteria for Attention Deficit Hyperactivity Disorder. School Mental Health 2023. DOI: 10.1007/s12310-023-09581-2.
Abstract
The special education eligibility category that has come to be most commonly associated with Attention-Deficit Hyperactivity Disorder (ADHD) in recent years is Other Health Impairment (OHI). However, the eligibility criteria for the OHI disability category have been criticized as especially vague, given that the category encompasses a wide range of health impairments without providing any additional specificity. Because states have the latitude to use more specific eligibility criteria than those provided at the federal level, the purpose of the current study was to review state-level special education eligibility criteria for OHI, with particular interest in identifying the degree to which eligibility guidance exists specific to students with ADHD and the extent to which this guidance varies across states. Results suggested wide variation across states: 22% of states use the federal definition as-is, and only 14% provide elaboration on all three components of the federal definition. Whereas it was most common for states to provide additional guidance on what is needed to establish that a student has a health impairment, fewer than half of states provided specific guidance on the other two components of the federal definition. Implications for policy and practice are discussed.
4. Long S, Volpe RJ, Briesch AM. Evaluation of a computer-assisted letter sound tutoring program: An application to preschool English language learners. Psychology in the Schools 2022. DOI: 10.1002/pits.22784.
Affiliation(s)
- Stephanie Long
- Department of Applied Psychology, Northeastern University, Boston, Massachusetts, USA
- Robert J. Volpe
- Department of Applied Psychology, Northeastern University, Boston, Massachusetts, USA
- Amy M. Briesch
- Department of Applied Psychology, Northeastern University, Boston, Massachusetts, USA
5. Briesch AM, Lane KL, Common EA, Oakes WP, Buckman MM, Chafouleas SM, Iovino EA, Sherod RL, Abdulkerim N, Royer DJ. Exploring Views and Professional Learning Needs of Comprehensive, Integrated, Three-Tiered (Ci3T) Leadership Teams Related to Universal Behavior Screening Implementation. Educ Treat Children 2022;45:245-262. PMID: 35919259. PMCID: PMC9334022. DOI: 10.1007/s43494-022-00080-8.
Abstract
Research to date has highlighted barriers to the initial adoption of universal behavior screening in schools. However, little is known about the experiences of those implementing these procedures, and no studies have examined the experiences of educators at different stages of implementing tiered systems of supports. Universal screening is foundational to a successful Comprehensive, Integrated, Three-Tiered (Ci3T) model of prevention: an integrated tiered system addressing academics, behavior, and social and emotional well-being. Therefore, the perspectives of Ci3T Leadership Team members at different stages of Ci3T implementation were solicited through an online survey designed to understand (1) current school-based screening practices and (2) individual beliefs regarding those practices. A total of 165 Ci3T Leadership Team members, representing five school districts from three geographic regions across the United States and all participating in an Institute of Education Sciences Network grant examining integrated tiered systems, reported that the screening procedures were generally well understood and feasible to implement. At the same time, results highlighted that continuing professional learning may be beneficial in (1) integrating multiple sources of data (e.g., screening data with other data collected as regular school practice) and (2) using those multiple data sources to determine next steps for intervention. We discuss educational implications, limitations, and directions for future inquiry.
6. Hill E, Volpe RJ, Briesch AM. Psychometric Properties of the Classroom Observation of Engagement, Disrespectful and Disruptive Behaviors. School Psychology Review 2021. DOI: 10.1080/2372966x.2021.2001692.
7. Briesch AM, Codding RS, Hoffman JA, Rizzo CJ, Volpe RJ. Caregiver Perspectives on Schooling From Home During the Spring 2020 COVID-19 Closures. School Psychology Review 2021. DOI: 10.1080/2372966x.2021.1908091.
8. Matta M, Volpe RJ, Briesch AM, Owens JS. Five direct behavior rating multi-item scales: Sensitivity to the effects of classroom interventions. J Sch Psychol 2020;81:28-46. PMID: 32711722. DOI: 10.1016/j.jsp.2020.05.002.
Abstract
Direct Behavior Rating (DBR) is a tool designed to assess behavioral change over time. Unlike methods for summative evaluation, the development of progress-monitoring tools requires evaluating sensitivity to change. The present study evaluated this psychometric feature of five newly developed DBR Multi-Item Scales (DBR-MIS). Teachers identified students whose behaviors interfered with their own learning or the learning of others and implemented a Daily Report Card (DRC) intervention in their classrooms for two months. Analyses were performed on 31 AB single-case studies. Change metrics were calculated at the individual level using Tau-U (A vs. B + trend B) and Hedges' g, and at the scale level using mixed-effects meta-analysis, hierarchical linear models (HLMs), and the between-case standardized mean difference (BC-SMD). HLMs were estimated considering both fixed and random effects of the intervention and of the linear trend within the intervention phase. The results supported sensitivity to change for three DBR-MIS (i.e., Academic Engagement, Organizational Skills, and Disruptive Behavior), with relative magnitudes consistent across the metrics. Sensitivity to change for DBR-MIS Interpersonal Skills received moderate support. Conversely, no empirical evidence supported sensitivity to change for DBR-MIS Oppositional Behavior. Particular emphasis was placed on the trend within the intervention phase, in that responses to behavioral interventions may emerge gradually or require consistency over time to be observed by raters. Implications for the use of the new DBR-MIS for progress monitoring of social-emotional behaviors are discussed.
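The two individual-level change metrics named above can be illustrated with toy data. The sketch below (hypothetical numbers, not the study's analysis code) computes Hedges' g with its small-sample correction and a simple nonoverlap Tau for an AB single-case design; note the study used the Tau-U variant that additionally corrects for baseline trend, which is omitted here for brevity.

```python
def hedges_g(phase_a, phase_b):
    """Small-sample-corrected standardized mean difference (B - A)."""
    na, nb = len(phase_a), len(phase_b)
    mean_a, mean_b = sum(phase_a) / na, sum(phase_b) / nb
    var_a = sum((x - mean_a) ** 2 for x in phase_a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in phase_b) / (nb - 1)
    pooled_sd = (((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)) ** 0.5
    correction = 1 - 3 / (4 * (na + nb - 2) - 1)  # Hedges' J
    return correction * (mean_b - mean_a) / pooled_sd

def tau(phase_a, phase_b):
    """Nonoverlap Tau: net proportion of improving (A, B) pairs."""
    improving = sum(1 for a in phase_a for b in phase_b if b > a)
    worsening = sum(1 for a in phase_a for b in phase_b if b < a)
    return (improving - worsening) / (len(phase_a) * len(phase_b))
```

For engagement ratings of, say, `[2, 3, 2, 3]` at baseline and `[5, 6, 7, 6]` under the intervention, every intervention point exceeds every baseline point, so Tau = 1.0.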
Affiliation(s)
- Michael Matta
- Department of Psychological, Health, and Learning Sciences, University of Houston, United States of America.
- Robert J Volpe
- Department of Applied Psychology, Northeastern University, United States of America
- Amy M Briesch
- Department of Applied Psychology, Northeastern University, United States of America
9. Auerbach ER, Chafouleas SM, Briesch AM, Long SJ. Exploring the Alignment of Behavior Screening Policies and Practices in US Public School Districts. J Sch Health 2020;90:264-270. PMID: 31984528. DOI: 10.1111/josh.12872.
Abstract
BACKGROUND: Although recent studies provide information regarding state-level policies and district-level practices regarding social, emotional, and behavioral screening, the degree to which these policies influence screening practices is unknown. As such, the purpose of this exploratory study was to compare state- and district-level policies and reported practices around school-based social, emotional, and behavioral screening.
METHODS: We obtained data from three sources: (1) a recent systematic review of state department of education websites; (2) a national survey of 1330 US school districts; and (3) a Web search and review of policy manuals published by the 1330 school districts. Comparative analyses were used to identify similarities and differences across state and district policies and practices.
RESULTS: Of the 1330 districts searched, 911 had policy manuals available for review; 87 of these policy manuals, representing 10 states, met inclusion criteria and were included in analyses. Discrepancies were found across state and district policies and across state screening guidance and district practices, but district policies were consistent within the same state.
CONCLUSION: District-level guidance around social, emotional, and behavioral screening appears to be limited. Our findings suggest a disconnect between state- and district-level screening guidance and districts' reported practices, signifying the need to identify the main influences on district- and school-level screening practices.
Affiliation(s)
- Emily R Auerbach
- Department of Educational Psychology, University of Connecticut, 249 Glenbrook Rd., Unit 3064, Storrs, CT, 06268
- Sandra M Chafouleas
- Department of Educational Psychology, University of Connecticut, 249 Glenbrook Rd., Unit 3064, Storrs, CT, 06268
- Amy M Briesch
- Department of Counseling and Applied Educational Psychology, Northeastern University, 404 International Village, Boston, MA, 02155
- Stephanie J Long
- Department of Counseling and Applied Educational Psychology, Northeastern University, 404 International Village, Boston, MA, 02155
10. Volpe RJ, Briesch AM. Generalizability and Dependability of Single-Item and Multiple-Item Direct Behavior Rating Scales for Engagement and Disruptive Behavior. School Psychology Review 2019. DOI: 10.1080/02796015.2012.12087506.
11. Chafouleas SM, Christ TJ, Riley-Tillman TC, Briesch AM, Chanese JAM. Generalizability and Dependability of Direct Behavior Ratings to Assess Social Behavior of Preschoolers. School Psychology Review 2019. DOI: 10.1080/02796015.2007.12087952.
13. Volpe RJ, Briesch AM. Dependability of Two Scaling Approaches to Direct Behavior Rating Multi-Item Scales Assessing Disruptive Classroom Behavior. School Psychology Review 2019. DOI: 10.17105/spr45-1.39-52.
15. Briesch AM, Ferguson TD, Daniels B, Volpe RJ, Feinberg AB. Examining the Influence of Interval Length on the Dependability of Observational Estimates. School Psychology Review 2019. DOI: 10.17105/spr-2016-0006.v46-4.
16. Briesch AM, Chafouleas SM, Riley-Tillman TC. Generalizability and Dependability of Behavior Assessment Methods to Estimate Academic Engagement: A Comparison of Systematic Direct Observation and Direct Behavior Rating. School Psychology Review 2019. DOI: 10.1080/02796015.2010.12087761.
17. Briesch AM, Cintron DW, Dineen JN, Chafouleas SM, McCoach DB, Auerbach E. Comparing Stakeholders’ Knowledge and Beliefs About Supporting Students’ Social, Emotional, and Behavioral Health in Schools. School Mental Health 2019. DOI: 10.1007/s12310-019-09355-9.
18. Volpe RJ, Chaffee RK, Yeung TS, Briesch AM. Initial Development of Multi-item Direct Behavior Rating Measures of Academic Enablers. School Mental Health 2019. DOI: 10.1007/s12310-019-09338-w.
19. Casale G, Volpe RJ, Hennemann T, Briesch AM, Daniels B, Grosche M. Konstruktvalidität eines universellen Screenings zur unterrichtsnahen und ökonomischen Diagnostik herausfordernden Verhaltens von Schüler_innen – eine Multitrait-Multimethod-Analyse [Construct validity of a universal screener for classroom-based, economical assessment of students' challenging behavior: a multitrait-multimethod analysis]. Zeitschrift für Pädagogische Psychologie 2019. DOI: 10.1024/1010-0652/a000232.
Abstract
The present study examines the construct validity of a 16-item short version of the Integrated Teacher Report Form (ITRF), a universal, classroom-relevant behavior screener for assessing students' externalizing behavior in the classroom. A total of 107 teachers completed the ITRF for 1,048 first- through sixth-grade students, along with one of three rating instruments established in German-speaking countries. Convergent and discriminant validity were analyzed using a multitrait-multimethod (MT-MM) correlation matrix and a confirmatory Correlated Trait-Correlated Method minus 1 [CT-C(M-1)] model, which separates the influence of the constructs (learning-related/attention problems, oppositional/disruptive behavior problems) and of the methods (ITRF, additional rating instrument) on the obtained scores. The results show that the strength of the theoretically postulated correlations is reflected in the empirical data as expected, indicating convergent and discriminant validity. A larger share of the variance in ITRF scores is explained by the construct being measured than by method-specific influences. Our findings thus provide evidence of adequate construct validity, supporting use of the ITRF short version in school practice.
Affiliation(s)
- Gino Casale
- Universität zu Köln
- Bergische Universität Wuppertal
20. Briesch AM, Chafouleas SM, Cintron DW, McCoach DB. Factorial invariance of the Usage Rating Profile for Supporting Students' Behavioral Needs (URP-NEEDS). School Psychology 2019;35:51-60. PMID: 30883160. DOI: 10.1037/spq0000309.
Abstract
[Correction Notice: An Erratum for this article was reported online in School Psychology on Dec 30 2019 (see record 2019-80953-001). In the fourth paragraph of the "Understanding the Factors That Influence Usage" section and in the "Usage Rating Profile for Supporting Students' Behavioral Needs (URP-NEEDS)" section, the URP-NEEDS was incorrectly reported to have 23 items. This measure consists of 24 items. This item was also missing in the Appendix under the "Understanding" factor: "School personnel understand how goals for social, emotional, and behavioral screening fit with a system of student supports." All versions of this article have been corrected.] Previous research has suggested that multiple factors beyond acceptability alone (e.g., feasibility, external supports) may interact to determine whether consumers will use an intervention or assessment in practice. The Usage Rating Profile for Supporting Students' Behavioral Needs (URP-NEEDS) was developed in order to provide a simultaneous assessment of those factors influencing use of a particular approach to identifying and supporting the social, emotional, and behavioral needs of students. As the measure was intended for use with a range of school-based stakeholders, a first necessary step involved establishing the measurement invariance of the instrument. Participants in the current study included 1,112 district administrators, 431 building administrators, and 1,355 teachers who were asked to identify the approach used within their school district to identify and support the social, emotional, and behavioral needs of students, and then to complete the URP-NEEDS in reference to this identified approach. Results supported the measurement invariance of the URP-NEEDS across stakeholder groups. In addition, measurement invariance was found across self-identified approaches to social, emotional, and behavioral risk identification within the district administrator and teacher groups. 
Affiliation(s)
- Amy M Briesch
- Department of Applied Psychology, Northeastern University
- D Betsy McCoach
- Department of Educational Psychology, University of Connecticut
21. Schlosser RW, Belfiore PJ, Sigafoos J, Briesch AM, Wendt O. Appraisal of comparative single-case experimental designs for instructional interventions with non-reversible target behaviors: Introducing the CSCEDARS ("Cedars"). Res Dev Disabil 2018;79:33-52. PMID: 29853335. DOI: 10.1016/j.ridd.2018.04.028.
Abstract
Evidence-based practice as a process requires the appraisal of research as a critical step. In the field of developmental disabilities, single-case experimental designs (SCEDs) figure prominently as a means for evaluating the effectiveness of non-reversible instructional interventions. Comparative SCEDs contrast two or more instructional interventions to document their relative effectiveness and efficiency. As such, these designs have great potential to inform evidence-based decision-making. To harness this potential, however, interventionists and authors of systematic reviews need tools to appraise the evidence generated by these designs. Our literature review revealed that existing tools do not adequately address the specific methodological considerations of comparative SCEDs that aim to compare instructional interventions of non-reversible target behaviors. The purpose of this paper is to introduce the Comparative Single-Case Experimental Design Rating System (CSCEDARS, "cedars") as a tool for appraising the internal validity of comparative SCEDs of two or more non-reversible instructional interventions. Pertinent literature will be reviewed to establish the need for this tool and to underpin the rationales for individual rating items. Initial reliability information will be provided as well. Finally, directions for instrument validation will be proposed.
Affiliation(s)
- Ralf W Schlosser
- Departments of Communication Sciences and Disorders, and Applied Psychology, Northeastern University, United States; Department of Otolaryngology and Communication Enhancement, Boston Children's Hospital, United States; Centre for Augmentative and Alternative Communication, University of Pretoria, South Africa.
- Phillip J Belfiore
- Program of Special Education and Applied Disability Studies, Mercyhurst University, United States
- Jeff Sigafoos
- Faculty of Education, Victoria University at Wellington, New Zealand
- Amy M Briesch
- Departments of Communication Sciences and Disorders, and Applied Psychology, Northeastern University, United States
- Oliver Wendt
- Department of Communication Sciences and Disorders, University of Central Florida, United States
22. Casale G, Volpe RJ, Daniels B, Hennemann T, Briesch AM, Grosche M. Measurement Invariance of a Universal Behavioral Screener Across Samples From the USA and Germany. European Journal of Psychological Assessment 2018. DOI: 10.1027/1015-5759/a000447.
Abstract
The current study examines the item and scalar equivalence of an abbreviated school-based universal screener that was cross-culturally translated and adapted from English into German. The instrument was designed to assess student behavior problems that impact classroom learning. Participants were 1,346 K-6 students from the US (n = 390, mean age = 9.23, 38.5% female) and Germany (n = 956, mean age = 8.04, 40.1% female). Measurement invariance was tested by multigroup confirmatory factor analysis (CFA) across students from the US and Germany. Results support full scalar invariance between the two samples (df = 266, χ² = 790.141, Δχ² = 6.9, p < .001, CFI = 0.976, ΔCFI = 0.000, RMSEA = 0.052, ΔRMSEA = −0.003), indicating that the factor structure, the factor loadings, and the item thresholds are comparable across samples. This finding implies that a full cross-cultural comparison, including latent factor means and structural coefficients, between the US and German versions of the abbreviated screener is possible. The tool can therefore be used in German schools as well as for cross-cultural research between the US and Germany.
Affiliation(s)
- Gino Casale
- Department of Educational Sciences, University of Paderborn, Germany
- Robert J. Volpe
- Bouvé College of Health Sciences, Northeastern University, Boston, MA, USA
- Brian Daniels
- Department of Counseling and School Psychology, University of Massachusetts, Boston, MA, USA
- Amy M. Briesch
- Bouvé College of Health Sciences, Northeastern University, Boston, MA, USA
- Michael Grosche
- Institute of Educational Research, University of Wuppertal, Germany
23. Volpe RJ, Casale G, Mohiyeddini C, Grosche M, Hennemann T, Briesch AM, Daniels B. A universal behavioral screener linked to personalized classroom interventions: Psychometric characteristics in a large sample of German schoolchildren. J Sch Psychol 2017;66:25-40. PMID: 29429493. DOI: 10.1016/j.jsp.2017.11.003.
Abstract
The current study represents the first psychometric evaluation of an American English into German translation of a school-based universal screening measure designed to assess academic and disruptive behavior problems. This initial study examines the factor structure and diagnostic accuracy of the newly translated measure in a large sample of 1,009 German schoolchildren attending grades 1-6 in western Germany. Confirmatory factor analysis supported a two-factor model for both male and female students. Configural invariance was supported between male and female samples; however, scalar invariance was not supported, with higher thresholds for ratings of female students. Results of receiver operating characteristic (ROC) curve analyses indicated good to excellent diagnostic accuracy, with areas under the curve ranging from 0.89 to 0.93. Optimal cut-off scores were 10, 5, and 13 for the Academic Productivity/Disorganization, Oppositional/Disruptive, and Total Problems Composite scores of the Integrated System Teacher Rating Form, respectively. This initial study supports further investigation of the measure's utility for universal screening in German-speaking schools.
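The diagnostic-accuracy statistics reported here (AUC and optimal cut-off scores) can be illustrated with a short sketch. This is hypothetical demonstration code, not the study's analysis: AUC is computed as the probability that a randomly chosen at-risk student outscores a not-at-risk student (ties counted as half), and the cut-off is chosen to maximize Youden's J (sensitivity + specificity - 1), one common optimality criterion.

```python
def auc_and_cutoff(scores, at_risk):
    """scores: screener totals; at_risk: 1 if criterion-positive, else 0."""
    pos = [s for s, y in zip(scores, at_risk) if y == 1]
    neg = [s for s, y in zip(scores, at_risk) if y == 0]

    # Rank-based AUC: P(random positive scores above random negative)
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    auc = wins / (len(pos) * len(neg))

    def youden_j(cut):
        sensitivity = sum(p >= cut for p in pos) / len(pos)
        specificity = sum(n < cut for n in neg) / len(neg)
        return sensitivity + specificity - 1

    # Optimal cut-off: the observed score maximizing Youden's J
    best_cut = max(sorted(set(scores)), key=youden_j)
    return auc, best_cut
```

On hypothetical scores where at-risk and not-at-risk students separate perfectly, the function returns an AUC of 1.0; real screening data yield intermediate values such as the 0.89-0.93 AUCs reported above.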
24. Briesch DuBois JM, Briesch AM, Hoffman JA, Struzziero J, Toback R. Implementing self-management within a group counseling context: Effects on academic enabling behaviors. Psychology in the Schools 2017. DOI: 10.1002/pits.22029.
25. Johnson AH, Chafouleas SM, Briesch AM. Dependability of data derived from time sampling methods with multiple observation targets. School Psychology Quarterly 2017;32:22-34. DOI: 10.1037/spq0000159.
|
26
|
Daniels B, Volpe RJ, Fabiano GA, Briesch AM. Classification accuracy and acceptability of the Integrated Screening and Intervention System Teacher Rating Form. Sch Psychol Q 2016; 32:212-225. [PMID: 26928387 DOI: 10.1037/spq0000147] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.5]
Abstract
This study examines the classification accuracy and teacher acceptability of a problem-focused screener for academic and disruptive behavior problems, which is directly linked to evidence-based intervention. Participants included 39 classroom teachers from 2 public school districts in the Northeastern United States. Teacher ratings were obtained for 390 students in Grades K-6. Data from the screening instrument demonstrate favorable classification accuracy, and teacher ratings of feasibility and acceptability support the use of the measure for universal screening in elementary school settings. Results indicate the novel measure should facilitate classroom intervention for problem behaviors by identifying at-risk students and informing targets for daily behavior report card interventions.
Affiliation(s)
- Gregory A Fabiano
- Department of Counseling, School, and Educational Psychology, University at Buffalo, State University of New York
|
27
|
Abstract
Direct Behavior Rating-Multi-Item Scales (DBR-MIS) have been developed as formative measures of behavioral assessment for use in school-based problem-solving models. Initial research has examined the dependability of composite scores generated by summing all items comprising the scales. However, it has been argued that DBR-MIS may offer assessment at 2 levels of behavioral specificity (i.e., item-level, global composite-level). Further, it has been argued that scales can be individualized for each student to improve efficiency without sacrificing technical characteristics. The current study examines the dependability of 5 items comprising a DBR-MIS designed to measure classroom disruptive behavior, as well as a 3-item composite that was individualized for each student. Seven graduate students rated the behavior of 9 middle-school students on each item (calls out, noisy, clowns around, talks to classmates, and out of seat) over 3 occasions. Ratings were based on 10-min video clips of students during mathematics instruction. Separate generalizability and decision studies were conducted for each item and for a 3-item composite that was individualized for each student based on the highest rated items on the first rating occasion. Findings indicate favorable dependability estimates for 3 of the 5 items and exceptional dependability estimates for the individualized composite.
Affiliation(s)
- Robert J Volpe
- Department of Applied Psychology, Northeastern University
- Amy M Briesch
- Department of Applied Psychology, Northeastern University
|
28
|
Briesch AM, Hemphill EM, Volpe RJ, Daniels B. An evaluation of observational methods for measuring response to classwide intervention. Sch Psychol Q 2015; 30:37-49. [DOI: 10.1037/spq0000065] [Citation(s) in RCA: 43] [Impact Index Per Article: 4.8]
|
29
|
Briesch AM, Volpe RJ, Ferguson TD. The influence of student characteristics on the dependability of behavioral observation data. Sch Psychol Q 2013; 29:171-181. [PMID: 24274156 DOI: 10.1037/spq0000042] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.9]
Abstract
Although generalizability theory has been used increasingly in recent years to investigate the dependability of behavioral estimates, many of these studies have relied on general education populations rather than those students who are most likely to be referred for assessment due to problematic classroom behavior (e.g., inattention, disruption). The current study investigated the degree to which differences exist in the magnitude of both variance component estimates and dependability coefficients between students nominated by their teachers for Tier 2 interventions due to classroom behavior problems and a general classroom sample (i.e., including both nominated and non-nominated students). The academic engagement levels of 16 (8 nominated, 8 non-nominated) middle school students were measured by 4 trained observers using momentary time-sampling procedures. A series of generalizability (G) and decision (D) studies were then conducted to determine whether the 2 groups were similar in terms of the (a) distribution of rating variance and (b) number of observations needed to achieve an adequate level of dependability. Results suggested that the behavior of students in the teacher-nominated group fluctuated more across time and that roughly twice as many observations would therefore be required to yield similar levels of dependability compared with the combined group. These findings highlight the importance of constructing samples of students that are comparable to those students with whom the measurement method is likely to be applied when conducting psychometric investigations of behavioral assessment tools.
Affiliation(s)
- Amy M Briesch
- School Psychology Program, Department of Counseling and Applied Educational Psychology, Northeastern University
- Robert J Volpe
- School Psychology Program, Department of Counseling and Applied Educational Psychology, Northeastern University
- Tyler David Ferguson
- School Psychology Program, Department of Counseling and Applied Educational Psychology, Northeastern University
|
30
|
Briesch AM, Daniels B. Using self-management interventions to address general education behavioral needs: Assessment of effectiveness and feasibility. Psychol Schs 2013. [DOI: 10.1002/pits.21679] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.4]
|
31
|
Ferguson TD, Briesch AM, Volpe RJ, Daniels B. The influence of observation length on the dependability of data. Sch Psychol Q 2013; 27:187-197. [PMID: 23294233 DOI: 10.1037/spq0000005] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.8]
Abstract
Although direct observation is one of the most frequently used assessment methods by school psychologists, studies have shown that the number of observations needed to obtain a dependable estimate of student behavior may be impractical. Because direct observation may be used to inform important decisions about students, it is crucial that data be reliable. Preliminary research has suggested that dependability may be improved by extending the length of individual observations. The purpose of the current study was, therefore, to examine how changes in observational duration affect the dependability of student engagement data. Twenty seventh-grade students were each observed for 30 min across 2 days during math instruction. Generalizability theory was then used to calculate reliability-like coefficients for the purposes of intraindividual decision making. Across days, acceptable levels of dependability for progress monitoring (i.e., .70) were achieved through two 30-min observations, three 15-min observations, or four to five 10-min observations. Acceptable levels of dependability for higher stakes decisions (i.e., .80) required over an hour of cumulative observation time. Within a given day, a 15-min observation was found to be adequate for making low-stakes decisions, whereas an hour-long observation was necessary for high-stakes decision making. Limitations of the current study and implications for research and practice are discussed.
Affiliation(s)
- Tyler David Ferguson
- Department of Counseling and Applied Educational Psychology, Northeastern University
- Amy M Briesch
- Department of Counseling and Applied Educational Psychology, Northeastern University
- Robert J Volpe
- Department of Counseling and Applied Educational Psychology, Northeastern University
- Brian Daniels
- Department of Counseling and Applied Educational Psychology, Northeastern University
|
32
|
Briesch AM, Chafouleas SM, Neugebauer SR, Riley-Tillman TC. Assessing influences on intervention implementation: Revision of the Usage Rating Profile-Intervention. J Sch Psychol 2012; 51:81-96. [PMID: 23375174 DOI: 10.1016/j.jsp.2012.08.006] [Citation(s) in RCA: 62] [Impact Index Per Article: 5.2]
Abstract
Although treatment acceptability was originally proposed as a critical factor in determining the likelihood that a treatment will be used with integrity, more contemporary findings suggest that whether something is likely to be adopted into routine practice is dependent on the complex interplay among a number of different factors. The Usage Rating Profile-Intervention (URP-I; Chafouleas, Briesch, Riley-Tillman, & McCoach, 2009) was recently developed to assess these additional factors, conceptualized as potentially contributing to the quality of intervention use and maintenance over time. The purpose of the current study was to improve upon the URP-I by expanding and strengthening each of the original four subscales. Participants included 1005 elementary teachers who completed the instrument in response to a vignette depicting a common behavior intervention. Results of exploratory and confirmatory factor analyses, as well as reliability analyses, supported a measure containing 29 items and yielding 6 subscales: Acceptability, Understanding, Feasibility, Family-School Collaboration, System Climate, and System Support. Collectively, these items provide information about potential facilitators and barriers to usage that exist at the level of the individual, intervention, and environment. Information gleaned from the instrument is therefore likely to aid consultants in both the planning and evaluation of intervention efforts.
|
33
|
Briesch AM, Hagermoser Sanetti LM, Briesch JM. Reducing the Prevalence of Anxiety in Children and Adolescents: An Evaluation of the Evidence Base for the FRIENDS for Life Program. School Mental Health 2010. [DOI: 10.1007/s12310-010-9042-5] [Citation(s) in RCA: 28] [Impact Index Per Article: 2.0]
|
34
|
Chafouleas SM, Briesch AM, Riley-Tillman TC, Christ TJ, Black AC, Kilgus SP. An investigation of the generalizability and dependability of Direct Behavior Rating Single Item Scales (DBR-SIS) to measure academic engagement and disruptive behavior of middle school students. J Sch Psychol 2010; 48:219-46. [DOI: 10.1016/j.jsp.2010.02.001] [Citation(s) in RCA: 34] [Impact Index Per Article: 2.4]
|
35
|
Briesch AM, Chafouleas SM. Exploring Student Buy-In: Initial Development of an Instrument to Measure Likelihood of Children's Intervention Usage. Journal of Educational and Psychological Consultation 2009. [DOI: 10.1080/10474410903408885] [Citation(s) in RCA: 13] [Impact Index Per Article: 0.9]
|
36
|
Briesch AM, Chafouleas SM. Review and analysis of literature on self-management interventions to promote appropriate classroom behaviors (1988–2008). Sch Psychol Q 2009. [DOI: 10.1037/a0016159] [Citation(s) in RCA: 140] [Impact Index Per Article: 9.3]
|
37
|
Schlientz MD, Riley-Tillman TC, Briesch AM, Walcott CM, Chafouleas SM. The impact of training on the accuracy of Direct Behavior Ratings (DBR). Sch Psychol Q 2009. [DOI: 10.1037/a0016255] [Citation(s) in RCA: 14] [Impact Index Per Article: 0.9]
|
38
|
Riley-Tillman TC, Chafouleas SM, Christ T, Briesch AM, LeBel TJ. The impact of item wording and behavioral specificity on the accuracy of direct behavior ratings (DBRs). Sch Psychol Q 2009. [DOI: 10.1037/a0015248] [Citation(s) in RCA: 24] [Impact Index Per Article: 1.6]
|
39
|
Abstract
In this study, a self-report measure of intervention usage, the Usage Rating Profile for Interventions (URP-I), is developed and empirically examined with regard to factor structure and internal consistency. Results supported that intervention usage is associated with at least 4 different constructs and that a measure consisting of 25 items may provide a reliable index of the 4 factors. The 4 factors identified included Acceptability, Knowledge, Feasibility, and Integrity. Findings extend and integrate existing acceptability research aimed at predicting usage. Implications, limitations, and future directions are discussed.
|
40
|
Riley-Tillman TC, Chafouleas SM, Briesch AM. A school practitioner's guide to using daily behavior report cards to monitor student behavior. Psychol Schs 2006. [DOI: 10.1002/pits.20207] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.1]
|