The Feifer Assessment of Writing (FAW) is a comprehensive test of written language designed to measure three subtypes of written language disorders. Academic achievement tests endeavor to evaluate core neuropsychological and theoretical perspectives that identify students at risk. Written assessments have historically focused on the mechanics of writing: putting ideas together in a sentence and doing so efficiently. Missing from these evaluations is the impact of working memory and other executive functioning abilities, such as the ability to strategically develop a plan, evaluate, monitor, draft, and revisit the text. This review explores the FAW and its contribution to the neuropsychological evaluation of writing.
Keywords: Feifer Assessment of Writing, Pediatric neuropsychology, Writing, Dysgraphia

Learning disabilities were identified as a category for special education in 1977. Since that time, different models of evaluation have been used to identify children with learning disabilities. Initially, identification relied on teacher observation. Evaluations progressed to ability–achievement discrepancy models, response to intervention, and, more recently, patterns of strengths and weaknesses. These models are driven by federal laws whose policies filter down through state educational agencies and, ultimately, districts and schools. Even under the same statutes, interpretation can vary widely. As such, no single measure has been designed to adapt to every educational statute. Rather, academic achievement tests endeavor to evaluate core neuropsychological and theoretical perspectives that identify students at risk.
Crepeau-Hobson and Bianco (2010) recognize the importance of blending standardized assessments with other evaluative means to determine writing difficulties. In their review, they discuss response to intervention practices as a way to compare classroom performance with standardized test results to better understand the patterns that might be contributing to difficulties. Beyond blending function with assessment, there has been extensive review of which constructs might contribute to a writing disorder. It might be assumed that fine motor abilities would be a significant contributing factor. However, Hooper, Costa et al. (2016) determined that in first and second graders, attention/executive functions and language were more strongly associated with written expression and spelling than fine motor latent traits. This is not to suggest that fluency in writing mechanics is uninfluential in determining a student’s ability to be a successful writer. Struthers et al. (2013) worked toward developing a checklist for written disorders. Their model of what must cohesively come together for successful writing noted the following: letter formation, mechanical skills such as capitalization and punctuation, fluency, spelling, language skills such as word choice, and construction of grammatically correct sentences. Each of these is needed to communicate ideas in writing effectively.
Furthermore, cohesiveness in a student’s writing has often been an overlooked aspect of writing assessments (Feifer & De Fina, 2002). Written assessments had historically focused more on the mechanics of writing: putting ideas together in a sentence and doing so efficiently. This is consistent with Datchuk and Kubina (2012), who recognized that intervention focused on handwriting and sentence construction did assist with transferring acquired skills to more complex tasks, such as extended composition. Even so, their work was consistent with Berninger and colleagues’ prior research suggesting that written expression is the interaction of neurodevelopment, linguistics, and cognition: there must be the physical development of visual-motor skills for handwriting, the linguistic skills to produce letters, words, and syntactically appropriate sentences, and the cognitive capacity to compose text. Fortunately, their study of intervention at the sentence level improved composition quality.
Research has continued to identify other skills needed for effective writing. Swanson, Harris, and Graham (2013) recognize that students who struggle with working memory and other executive functioning challenges, such as difficulty monitoring their performance, tend to have more trouble with writing and less motivation to write. Given the role of executive skill development, it is not surprising that Grams, Collins, and Rigby-Wills (2017) note that writing also requires the ability to strategically develop a plan, evaluate, monitor, draft, and revisit the text. The better someone can employ these strategies, the more likely they are to have “motivational aspirations to put the skills, strategies, and knowledge into play” (Grams et al., 2017). Their review identified that writing mastery requires quality, organization, and voice to come together with text production (sentence fluency, handwriting, spelling, and grammar), knowledge, and motivation.
These concepts are consistent with Feifer’s (2012) discussion of written language as an exclusively human form of communication. The process integrates linguistics, motor skills, visual perception, proprioception, kinesthetics, emotion, memory, and cognition, and all of these skills must interact for writing to be successful. This perspective contributed to his work in creating the Feifer Assessment of Writing. It is evident in Feifer’s work that there is general agreement with the idea put forth by Vaughn et al. (2003): an assessment measure needs to consider both the student’s test scores and the identification of the skill sets needed to guide intervention. Waiting for a discrepancy to develop could come too late to help the student learn more effectively. This is quite consistent with Fletcher et al. (2005), who argue that an initial assessment should be less about diagnosing a learning disability and more about identifying achievement difficulties that can be addressed through intervention to meet the student’s needs.
The Feifer Assessment of Writing (FAW) was developed to evaluate concepts not previously addressed in other written assessments. Although it includes a Graphomotor Skills Index, which other measures also offer, the FAW adds dyslexic and executive functioning constructs that have been well discussed in the literature but have not always been included in writing assessments. There is also an optional compositional writing task with normative data from prekindergarten through college.
The FAW test kit includes a manual, two Stimulus Books, 10 Examiner Record Forms, 10 Examinee Response Forms, scoring templates, and sentence scaffolding cards. Kit materials are well constructed. The Stimulus Books are wire-bound, making it easy to turn pages quickly during administration, and they lay flat on the table. The Record Forms include the administration instructions.
The FAW is administered with paper and pencil. Examinees in prekindergarten take five subtests, examinees in kindergarten to Grade 1 take seven subtests, and examinees in Grade 2 to college take ten subtests. The Screening Form takes approximately 10 min to administer, while the entire test takes approximately 55 min for students in Grade 2 and beyond.
The FAW generates three separate index scores: the Graphomotor Index, the Dyslexic Index, and the Executive Index. Each index can be compared to the other, as well as to the total index score, to determine relative strengths and weaknesses in writing. The evaluator has access to an optional Compositional Writing Index, which is available when both Copy Editing and Story Mapping subtests are administered.
The FAW was standardized on a sample of 1,048 participants in prekindergarten to college, drawn from 30 states and based on the 2017 U.S. Census statistics. The sample includes intellectual developmental disorder, ADHD, fine motor deficits, and written language learning disability clinical samples.
FAW indexes have median reliability coefficients ranging from 0.89 to 0.95, suggesting a high degree of internal consistency. Test–retest coefficients are 0.70 or higher, with index score test–retest coefficients in the 0.80s and 0.90s. Inter-scorer reliability was high, between 0.93 and 1.00 on most subtests, with two subtests at 0.85 and 0.86 (Feifer, 2020).
As described above, the FAW contains four indexes (the Graphomotor Index, the Dyslexic Index, the Executive Index, and the Compositional Index) that were formed on the basis of posited domains of writing skill. Within each index are subtests that examine specific areas of their respective category. The items within each index were inspected by a variety of specialists in psychological science, developmental neuropsychology, education, and linguistics, supporting the content validity of the measure. The high internal consistency noted above further supports the coherence of each index.
Scores from the FAW were compared with other measures of academic achievement (the Academic Achievement Battery [AAB] and the Wechsler Individual Achievement Test [WIAT-III]), reading (the Feifer Assessment of Reading [FAR]), mathematics (the Feifer Assessment of Mathematics [FAM]), and intelligence (the Reynolds Intellectual Assessment Scales [RIAS-2]). The objective was to determine correlations between FAW subtests and subtests measuring similar constructs. There were moderate relationships between FAW subtest scores and AAB scores in Spelling (r = 0.45–0.47), Writing Composition (r = 0.45), and Comprehension (r = 0.39–0.50). There were moderate to strong correlations between FAW subtest scores and WIAT-III scores in Oral Expression (r = 0.36–0.42), Sentence Composition (r = 0.38–0.50), Essay Composition (r = 0.42), and Spelling (r = 0.40–0.82). There were moderate to strong relationships with FAR scores in Phonology (r = 0.54–0.69), Fluency (r = 0.38–0.74), Mixed Composite (r = 0.36–0.79), Comprehension (r = 0.57–0.80), and Total Composite (r = 0.36–0.83). There were moderate to strong correlations with FAM subtest scores in Procedure (r = 0.44–0.48), Verbal (r = 0.43–0.58), Semantic (r = 0.42–0.47), and Total score (r = 0.39–0.57). Overall, the FAW compares favorably with other instruments that measure various components of learning skills.
Scores from children with intellectual disabilities, attention-deficit/hyperactivity disorder (ADHD), fine motor deficits, and written language learning disorders were compared with those of a control group to establish the FAW’s validity in detecting learning disorders in writing. The intellectual disability group scored significantly lower than the control group (Mdiff = 20.48–41.91), with Cohen’s d ranging from 1.17 to 4.31. The ADHD group scored significantly lower than the control group (Mdiff = 7.89–16.79), with Cohen’s d ranging from 0.96 to 1.30. The fine motor deficit group scored significantly lower than the control group (Mdiff = 16.13–33.77), with Cohen’s d ranging from 1.18 to 2.06. Finally, the written language learning disorder group scored significantly lower than the control group (Mdiff = 11.27–29.82), with Cohen’s d ranging from 0.81 to 2.31. Overall, the FAW is able to distinguish an array of learning disorders in writing from typical performance.
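For readers less familiar with the effect sizes reported above, Cohen’s d expresses the difference between two group means in pooled standard deviation units, so a d of 1.0 indicates the groups differ by one full standard deviation. A minimal sketch of the computation follows; the group values in the example are hypothetical and are not taken from the FAW manual:

```python
def cohens_d(mean_a, mean_b, sd_a, sd_b, n_a, n_b):
    """Cohen's d for two independent groups, using the pooled standard deviation."""
    pooled_sd = (((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2) / (n_a + n_b - 2)) ** 0.5
    return (mean_a - mean_b) / pooled_sd

# Hypothetical example: control group index mean of 100 versus a clinical
# group mean of 85, both with SD 15 (the metric of most standard scores)
# and n = 30 per group.
d = cohens_d(100, 85, 15, 15, 30, 30)
print(d)  # 1.0
```

By this convention, the differences reported for the FAW clinical groups (d = 0.81 to 4.31) range from large to very large.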
The goal of the test was to formalize many of the subtests Feifer had been incorporating into his own neuropsychological battery. Writing requires many neuropsychological processes, such as attention, word retrieval, working memory, executive functioning, motor planning, and motor speed. At conferences, colleagues have complained of the complexity of scoring the essay subtests on the WIAT and the Kaufman Test of Educational Achievement (KTEA). Both the WIAT and KTEA have spelling subtests. While not specific to writing, similar measures assess components of executive dysfunction (planning, organization, word retrieval, syntactic thought production).
Students spend a considerable amount of time writing in school, yet we as a field have struggled to evaluate writing effectively. A review of any writing instrument should acknowledge the difficulty of evaluating writing. In fact, many psychologists complain about the challenges of scoring essays on achievement batteries. The authors of the FAW clearly had these challenges in mind as they set out to develop the instrument. The subtests are divided by individual skill.
The FAW’s description of the working memory components of written expression is excellent. The measure provides methods to investigate the phonological loop, the visuospatial sketchpad, the episodic buffer, orthographic working memory, and aspects of the central executive system. The manual details how executive dysfunction (initiation, attention, inhibition, organization, planning, and self-monitoring) can impact writing.
The FAW manual is one of the more educational test manuals in that it appears to have three aims. First, it provides a rationale for the test’s construction and the terminology the authors chose. Second, it describes the neuropsychological processes of writing. Third, it assists in the interpretation of writing errors. A benefit of the FAW is its ability to pinpoint a student’s challenges in order to inform intervention. Another benefit is that the authors continue to support it; in the wake of COVID-19, they put together guidelines for digital administration.
One negative of PARiConnect, PAR’s online scoring software, is that if you make a data-entry error and score the test before catching it, you have to create a new test record and pay for another use to score the corrected protocol. This also means that results cannot safely be scored in real time. Given the usefulness of the Screening Form, it would be an excellent addition if you could administer the Screening subtests, receive a score, and then decide whether to proceed with the entire measure without purchasing both the Screening scoring and the full-test scoring.
The executive working memory task is clever and helpful, although some items rely too heavily on primacy and recency effects. The sentence scaffolding cards are difficult to access: the box is too big, and searching for the cards is awkward. While multiple-choice scaffolding holds great potential for language acquisition, it requires a high level of concentration on the child’s part to integrate the granularity of words and phrases into memory. Utilizing scaffolding in a time-sensitive environment could therefore reduce its reliability as a diagnostic tool. Furthermore, the complexity of the scaffolding process may require mediation from the examiner. Given this higher-order process, the scaffolding task does not appear to be a practical tool for a timed assessment; it seems better suited to intervention.
Knowing when to query can be difficult without the scoring manual in front of you, yet keeping it open during administration would be impractical given its size. When items can be scored 0, 1, or 2, the Examiner Record Form includes the 2-point answers but does not display the 1-point answers. The time limits on the executive functioning subtests do not always allow children to show what they know. During expository writing, the prompt for second graders uses language some children may not understand.
At times, the scoring rules and cutoff criteria on some of the graphomotor subtests end the task too quickly. We applaud the test creators for making a writing assessment that is quick and easy to administer, yet some subtests feel over too soon. For example, on Motor Planning, the student is asked to copy sentences inside a series of boxes. As soon as the student finishes one box, they are instructed to move on to the next. The task is timed, and the evaluator scores an item if the student writes at least two words in a box. Some students will run out of space in a box yet still try to fit the rest of the sentence in; others will run out of space, forget about the rest of the sentence, and begin the next box. The scoring system appears to reward students who decide to move on rather than complete the item.
Despite these concerns, the FAW is a welcome addition to the field and is likely to please many neuropsychologists. We anticipate neuropsychologists will appreciate the manual, the speed of administration, and the integration of many familiar neuropsychological concepts.
It is unlikely that neuropsychologists will need the FAW in every battery. Rather, the FAW should be used when a student demonstrates writing challenges on standard achievement tests, and it should be used to guide recommendations and interventions. Neuropsychologists are most likely to find that the FAW improves their recommendations. We believe this will greatly improve the efficiency with which neuropsychologists communicate their findings, and we are hopeful that these improvements will lead to better interventions for students.