Document Type

Presentation

Publication Date

8-23-2022

Keywords

fidelity, fidelity assessment, clinical outcome assessment, spinal cord injury

Comments

Presentation: 20:33

Presentation completed in partial fulfillment of a Post Professional Occupational Therapy Doctorate degree at Thomas Jefferson University.

Abstract

Introduction: Assessment fidelity refers to adherence to intended procedures and guidelines when administering an assessment (Mowbray et al., 2003; Walton et al., 2020). Clinical outcome assessments (COAs) play a crucial role in evaluating treatment effects in clinical trials, yet fidelity assessments for COAs are scarce, which can compromise the accurate evaluation of treatment effects (Richardson et al., 2016). The Spinal Cord Injury Movement Index (SCI-MI) is a performance-based COA being developed for use in spinal cord injury (SCI) clinical trials. Because the stakes in SCI clinical trials are high, the SCI-MI fidelity assessment was developed to support the training and evaluation of therapists administering the SCI-MI. The purpose of this project was to describe the development and refinement of the SCI-MI fidelity assessment and to evaluate its reliability and usability.

Objectives: The objectives of this project were to: 1) refine the SCI-MI fidelity assessment; 2) establish rater agreement and inter-rater reliability; and 3) establish usability.

Methods: A mixed methods approach was used for this study. To refine the SCI-MI fidelity assessment, the set-up and administration fidelity criteria were subjected to a modified Delphi process with a convenience sample (n=3) using a Qualtrics survey that was open for 1 week with 1 email reminder. Results were analyzed, revisions to the fidelity assessment were made, and a subsequent survey round followed. This process continued iteratively until 100% agreement was achieved that every criterion on the fidelity assessment was relevant, clear, specific, and aligned with the response scale; that the fidelity assessment as a whole was comprehensive; and that the rating system and instructions were clear. Descriptive statistics were used to calculate percent agreement. To establish rater agreement, inter-rater reliability, and usability, a random sample of 21 video-recorded sessions was reviewed by 3 trained fidelity raters. Raters were blinded to each other’s assessments, and data were entered by an independent research assistant. The coefficient of agreement (target ≥ 80%) and the intraclass correlation coefficient (ICC; target > 0.90) were calculated. Two fidelity raters answered a single usability question on a 6-point Likert scale. Descriptive statistics were used to calculate frequency distributions and percentages for each response option (target: ≥ 80% agreement [strongly agree, agree]).
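
As a minimal sketch of how the agreement and reliability targets above might be computed (not the study's actual analysis code), the example below assumes fidelity ratings stored in a long-format table; the column names, sample values, and the use of the pingouin library for the ICC are illustrative assumptions.

```python
# Minimal sketch: percent absolute agreement and ICC for fidelity ratings.
# Column names ("session", "rater", "score") and values are hypothetical.
import pandas as pd
import pingouin as pg

# Long-format data: one row per session x rater fidelity score.
ratings = pd.DataFrame({
    "session": [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5],
    "rater":   ["A", "B", "C"] * 5,
    "score":   [4, 4, 3, 5, 5, 5, 2, 3, 2, 4, 4, 4, 3, 3, 2],
})

# Percent absolute agreement: share of sessions where all raters gave the
# same score (target in this project: >= 80%).
per_session = ratings.groupby("session")["score"].nunique()
pct_agreement = (per_session == 1).mean() * 100
print(f"Absolute agreement: {pct_agreement:.1f}%")

# Intraclass correlation coefficient (target in this project: > 0.90).
icc = pg.intraclass_corr(data=ratings, targets="session",
                         raters="rater", ratings="score")
print(icc[["Type", "ICC", "CI95%"]])
```

pingouin reports several ICC forms (e.g., single vs. average measures, consistency vs. absolute agreement); the form matching the study design would be the one interpreted against the > 0.90 target.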

Results: Four rounds of the Delphi process were required to achieve 100% consensus. The total absolute agreement for the fidelity criteria was 78.93%, and 6 of the 13 individual criteria had agreement of >80%. The intraclass correlation coefficient was 0.617 (moderate) for the set-up subscale and 0.312 (poor) for the administration subscale. The fidelity raters agreed/strongly agreed the fidelity assessment was usable for 57.1% of sessions.

Conclusion: The quantitative and qualitative data from this study will be used to modify the SCI-MI set-up and administration guidelines and fidelity assessment.

References:

Mowbray, C. T., Holter, M. C., Teague, G. B., & Bybee, D. (2003). Fidelity criteria: Development, measurement, and validation. American Journal of Evaluation, 24(3), 315-340.

Richardson, J. D., Hudspeth Dalton, S. G., Shafer, J., & Patterson, J. (2016). Assessment fidelity in aphasia research. American Journal of Speech-Language Pathology, 25, S788-S797. https://doi.org/10.1044/2016_AJSLP-15-0146

Walton, H., Spector, A., Williamson, M., Tombor, I., & Michie, S. (2020). Developing quality fidelity and engagement measures for complex health interventions. British Journal of Health Psychology, 25(1), 39–60. https://doi.org/10.1111/bjhp.12394

Synopsis: It is important that assessment tools are implemented as intended so that they accurately measure the effects of treatments, including therapy interventions, and so that the results provide a true picture of the client. This is known as assessment fidelity. The Spinal Cord Injury Movement Index (SCI-MI) is an assessment under development for which assessment fidelity needed to be studied. A multi-step process was used to accomplish this. This process identified revisions needed to the SCI-MI fidelity assessment and to the SCI-MI manual and administration guidelines.

Acknowledgments: Thank you to Dr. Rachel Kim, Maclain Capron, Dr. Namrata Grampurohit, and Dr. Daniel Graves for your contributions throughout this project. Thank you to the Craig H. Neilsen Foundation (597640) for the funding to complete this project; to the raters and study participants who completed the reliability testing sessions; and to the capstone students and graduate research assistants who assisted with the testing sessions and video recordings.

Language

English
