AphasiaBank | APROCSA
The following details a training program for learning Auditory-Perceptual Rating of Connected Speech in Aphasia (APROCSA; Casilio et al., 2019). APROCSA is a novel, comprehensive, efficient, and validated system for evaluating language produced in naturalistic contexts (referred to as connected speech or discourse). In brief, APROCSA entails listening to a short clip of a language sample twice and rating 27 features, such as phonemic paraphasias or abandoned utterances, on a five-point scale (please see the manual below for complete details on all APROCSA procedures).
APROCSA Manual

APROCSA has the most robust psychometric properties of any assessment system for connected speech in aphasia that requires no transcription (Stark & Dalton, 2024), possessing good-to-excellent interrater reliability (Casilio et al., 2019; Casilio et al., 2025) and strong criterion validity (Casilio et al., 2019). Importantly, its 27 features can be distilled into four explanatory dimensions.
These dimensions have distinct yet overlapping neural correlates in acute post-stroke aphasia (Casilio et al., 2025). Work using APROCSA to map the behavioral trajectories and neural correlates of connected speech recovery in post-stroke aphasia recovery, as well as associations with word production errors and self-reported functional communication, is currently underway.
APROCSA Neural Correlates

Given the evidence thus far, it is our view that APROCSA offers a systematic and informative way of capturing salient behaviors in aphasia that are meaningful to patients, care partners, and care teams.
Our training program for APROCSA was co-designed with a team of eight speech-language pathologists and is intended for clinicians working with individuals with aphasia in clinical and research settings. We found that this program results in a statistically significant improvement in clinicians' use of APROCSA: specifically, clinicians showed better agreement with expert consensus ratings for APROCSA's 27 features. The development and evaluation of the APROCSA training program are described in depth in Casilio et al. (in revision).
Although not yet evaluated directly, we also have used the training program with graduate speech-language pathology students, who anecdotally have also shown improved performance post-training and have described APROCSA as beneficial to their learning. As such, we believe this training program would be useful for clinical trainees.
The APROCSA training program takes approximately 3-4 hours and has two main parts, which are described in more detail below:
We also have included information on how to objectively quantify pre-to-post-training improvement on APROCSA for clinicians, researchers, or educators.
The first part of the training program involves reviewing its two core materials: (1) the APROCSA manual, and (2) the recorded webinar. The goal of this part of the training is to give learners a strong foundation in the principles and mechanics of APROCSA.
APROCSA manual
The APROCSA manual is a 19-page document designed to be a resource for clinicians using APROCSA, and discourse assessment in aphasia more generally, in their work settings. It is broken into five sections, the first two of which cover foundational topics relevant to APROCSA and the final three of which detail APROCSA's specifics. Following the protocol we outlined in Casilio et al. (in revision), we recommend learners spend approximately one hour reviewing the manual, with an emphasis on acclimating to APROCSA's features and scoring procedures. There is no need for the manual to be read word-for-word; rather, it is intended to be skimmed and used as a reference when needed.
APROCSA Manual

Of note, the APROCSA manual was amended slightly in November 2024 to update references and related findings.
Recorded webinar
The recorded webinar is a series of slides narrated by Marianne Casilio, who developed APROCSA, and is designed to be a tutorial detailing APROCSA's 27 features. Here, each APROCSA feature is defined, and then a short video example of the feature is shown and described. Video examples all come from participants in the QAB corpus, which was the dataset used to validate the Quick Aphasia Battery (QAB; Wilson et al., 2018), and are of connected speech elicited from semi-structured interviews (akin to the Free Speech section of the AphasiaBank protocol). Tips for how to rate the features, when relevant, are also discussed. Features are organized and presented within APROCSA's four-dimensional structure, and a brief overview of that structure is provided at the beginning of the webinar. Again following the protocol we outlined in Casilio et al. (in revision), we recommend learners watch the entirety of the webinar. For those who would like more time to replay the webinar or spend more time with the video examples, we recommend learners either pause and replay relevant portions of the webinar, or review the full connected speech samples available as part of the corpus.
APROCSA webinar

The second part of the training is completing a guided scoring practice, which entails (1) watching and rating a connected speech sample and (2) reviewing learners' ratings in relation to expert consensus ratings. This part of the training is designed to be completed after part 1 (review of the manual and recorded webinar), as the goal is to practice applying APROCSA to a real clinical case.
When validating the program, this part of the training was led by experts in APROCSA: Marianne Casilio, Manaswita Dutta, Katherine Bryan, and Michael de Riesthal. We view this expertise as critical to helping learners identify salient behaviors of connected speech in aphasia. As such, we are happy to support learners of APROCSA and encourage those who would like to complete this portion of the training with us to reach out (see Contact Us section below). However, if having the support of an APROCSA expert is not feasible, we have provided the materials and described the procedure for this portion of the training program below.
Watching and rating a connected speech sample
The following individual was recruited as part of a larger project at Vanderbilt University Medical Center on the neural correlates of aphasia recovery after stroke. He completed both the AphasiaBank connected speech protocol and the QAB. Ratings on all 27 APROCSA features were derived by consensus from a group of five experts (the authors of this training manual) using a protocol we had previously developed, as described in Ezzes et al. (2022). Ratings were based on the first five minutes of participant speech during the Free Speech portion of the AphasiaBank protocol, following Casilio et al. (2019). Clinical/demographic data, consensus ratings, and sample timestamps are available as part of the APROCSA corpus on AphasiaBank.
The following video is the clipped connected speech sample used to train clinicians in Casilio et al. (in revision).
Training video

For practice scoring, we recommend that learners follow the published APROCSA protocol (Casilio et al., 2019), which we used to train speech-language pathologists. This involves watching and listening to the sample twice without pausing the video. However, pausing between the first and second watch/listen is encouraged, as is note-taking during or between viewings. Scoring can be completed on the PDF protocol included above or using our online APROCSA protocol.
Reviewing ratings relative to expert consensus ratings
After scoring the sample above, learners' ratings should be compared to the expert consensus ratings. These ratings, as well as a rationale for each rating, are provided below:
Participant 2364 -- 2364 Consensus Scores
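As one way to structure this self-check, learners comfortable with a little scripting could tabulate their ratings against the consensus and flag features that differ by more than one scale point. This is a hypothetical sketch, not part of the official APROCSA protocol, and the feature names and scores below are illustrative rather than the actual consensus values for Participant 2364:

```python
# Hypothetical self-check: compare a learner's APROCSA ratings (0-4 scale)
# against expert consensus ratings, feature by feature. Feature names and
# scores are illustrative only, not the published consensus for any participant.

def compare_ratings(learner, consensus, tolerance=1):
    """Return the features whose learner rating differs from the expert
    consensus by more than `tolerance` scale points (signed difference)."""
    flagged = {}
    for feature, expert_score in consensus.items():
        diff = learner[feature] - expert_score
        if abs(diff) > tolerance:
            flagged[feature] = diff
    return flagged

learner = {"Anomia": 3, "Phonemic paraphasias": 1, "Abandoned utterances": 0}
consensus = {"Anomia": 2, "Phonemic paraphasias": 3, "Abandoned utterances": 1}

print(compare_ratings(learner, consensus))
# -> {'Phonemic paraphasias': -2}
```

Features flagged this way are good candidates for a targeted re-listen alongside the written rationales in the consensus document.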
Quantifying pre-to-post-training improvement
To validate the APROCSA training program, we asked clinicians to rate connected speech samples from six participants with diverse presentations of aphasia both before and after completing the training program (Casilio et al., in revision). In our graduate aphasia courses, we have also followed a similar protocol. It is our view that this protocol may be useful for those interested in quantifying the effectiveness of the training program for research, clinical, or educational purposes.
Connected speech samples and ratings
We made use of an open-source dataset (n = 6) we previously published (Ezzes et al., 2022). The connected speech samples are available as part of the APROCSA corpus on AphasiaBank. As with the participant described above, all participants in this dataset completed both the AphasiaBank connected speech protocol and the QAB. Importantly, this dataset contains consensus ratings on all 27 APROCSA features for all six participants. These features were rated from the first five minutes of participant speech during the Free Speech portion of the AphasiaBank protocol, following the procedure published in Casilio et al. (2019). Clinical/demographic data, consensus ratings, and sample timestamps are also available at langneurosci.org/aprocsa-dataset.
The following videos are the clipped connected speech samples used to evaluate clinicians in Casilio et al. (in revision). Clinicians rated these samples in the order listed here, both pre-training and post-training.
Participant 1738
1738 video clip

Participant 1944

1944 video clip

Participant 1713

1713 video clip

Participant 1554

1554 video clip

Participant 1833

1833 video clip

Participant 1731

1731 video clip

As with the practice scoring, we recommend that learners follow the published APROCSA protocol (Casilio et al., 2019) of watching/listening to the sample twice without pausing the video. Scoring can be completed on the PDF protocol included above or using our online APROCSA protocol.
For pre-training ratings, we provided clinicians with a definitions list of APROCSA features and brief instructions on scoring, all of which are available in our Casilio et al. (2019) publication.
Consensus ratings and inter-rater reliability
Again in line with the practice scoring described above, learners' ratings at pre-training and post-training should be compared to the expert consensus ratings. We have reproduced these ratings here:
In Casilio et al. (in revision), we quantified inter-rater reliability by comparing a single clinician's ratings with the expert consensus ratings, both of which were averaged across all 27 APROCSA features and all six participants' connected speech samples. This yielded a single metric (intraclass correlation coefficient) for each clinician at pre-training and post-training. The analysis code and formatted data files for computing these metrics are available on a GitHub repository associated with the Casilio et al. (in revision) manuscript at https://github.com/mcasilio/aprocsatraining.
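For readers who want to see what such a computation looks like before visiting the repository, here is a minimal sketch of one common ICC variant, ICC(2,1) (two-way random effects, absolute agreement, single measures), applied to clinician-versus-consensus rating pairs. This is an assumed, simplified illustration: the exact ICC variant, averaging scheme, and data formatting used in Casilio et al. (in revision) are defined in the GitHub repository above.

```python
# Illustrative ICC(2,1) between one rater's scores and a reference set of
# scores. Each row is one rated item (e.g., one APROCSA feature on one
# sample); columns are the two "raters" (clinician, expert consensus).
import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single measures.
    ratings: (n_items, n_raters) matrix of scores."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    col_means = ratings.mean(axis=0)
    ss_rows = k * np.sum((row_means - grand) ** 2)   # variance between items
    ss_cols = n * np.sum((col_means - grand) ** 2)   # variance between raters
    ss_total = np.sum((ratings - grand) ** 2)
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))  # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Toy example: a clinician who rates every item exactly one point above
# consensus. The systematic offset lowers absolute agreement below 1.
pairs = np.array([[1, 2], [2, 3], [3, 4], [4, 5]], dtype=float)
print(round(icc_2_1(pairs), 3))  # -> 0.769
```

Identical rating columns yield an ICC of 1.0; in practice, libraries such as pingouin (`intraclass_corr`) compute all standard ICC variants and confidence intervals, and are preferable to a hand-rolled version for research use.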
To provide more opportunities for learning and practicing with APROCSA, we expanded our open-source dataset described in Ezzes et al. (2022). Video-recorded connected speech samples following the AphasiaBank protocol, along with expert APROCSA consensus scores, from 15 participants with aphasia, five of whom were tested longitudinally, are now available as part of the APROCSA corpus on AphasiaBank.
For those interested in learning more about APROCSA's scientific basis, our publications and other resources beyond the training program can be found at langneurosci.org/aprocsa.
We are eager to hear from and support clinicians, researchers, and educators in using APROCSA. Any inquiries about this training program or APROCSA more generally can be sent to Marianne Casilio at marianne.e.casilio@vanderbilt.edu.