Assessment of Nonverbal Contextual Reasoning Background
I began working in Minnesota’s public schools in 2005 after graduating from the University of Minnesota–Duluth with a degree in teaching social studies, grades 5–12. Although I never secured a permanent teaching role, a few years of substitute teaching and work in the private financial sector eventually led me back to graduate school, where I completed the school psychology program at the University of Wisconsin–Stout. In 2012, I began practicing as a licensed school psychologist in the Twin Cities, later earning credentials as, and serving as, a licensed special education director. Over the years, I worked with students from many cultural and linguistic backgrounds and relied heavily on standardized tests to understand their skills, abilities, and levels of functioning. As I used these tools, particularly the cognitive assessments, I began noticing consistent performance patterns among certain groups, which sparked my curiosity about why these disparities existed and what might be done to address them. Despite the many changes in school psychology over the years, one thing has remained constant: the widespread use of standardized tests—and the persistent performance gaps they reveal.
The performance gap seen on standardized cognitive tests is not new; it has been recognized and documented for decades. In response, professionals in the field introduced purportedly culture-fair, nonverbal assessments based on the assumption that language was the root of these disparities. In my experience, however, these tools were not the appropriate remedy. In many cases, their implementation actually worsened testing situations because of the fundamental bias each instrument possessed, a problem supported by past and current research. That research pointed instead to a deeper issue underlying all traditional intelligence tests and measures of cognitive processing: they require examinees to apply formal, abstract reasoning in order to succeed, and that ability is shaped heavily by culture rather than by innate capacity, contrary to conventional belief. Despite these concerns, virtually all assessments continue to rely on the same formal reasoning models, even as we continue searching for more equitable ways to measure cognitive abilities.
Accepting the need to separate cognitive assessments from traditional formal reasoning models has been difficult, despite strong evidence calling for the change. Little progress has been made in developing new, culture-fair measures, because the field continues to prioritize abstract, formalized problem-solving while ignoring real-world decision-making. This has led to increasing calls for assessments that emphasize contextual reasoning.
An Alternative Form of Problem Solving
Understanding and accepting the need for an assessment of contextual reasoning requires recognizing the importance of evaluating how individuals interpret and respond to the surrounding circumstances, variables, and nuances of a given situation. This framing underscores the importance not only of factual knowledge but also of the ability to assess and adapt to situations dynamically, considering all relevant factors. It can apply to both reasoning and decision-making, depending on the context. The need for an alternative means of testing problem solving existed long before I entered the field of school psychology and, if not addressed, will continue long after my time in the field ends. Although I was aware of the need for reform in testing, that need became clearer when I began working with Hmong American students in the Twin Cities metro area of Minnesota. During this time, this population’s performance on cognitive assessments revealed a striking and consistent trend. This work occurred alongside nearly 15 years of lived experience within the Hmong community, including a close familial relationship through marriage to my Hmong wife, which provided sustained cultural and contextual insight. This personal experience, the performance trends described above, and prior peer-reviewed research and other readings made the case for an assessment of contextual reasoning unmistakable.
Between 2012 and 2017, work in several charter schools serving large populations of Hmong American and other Southeast Asian students revealed a consistent assessment pattern: students tended to struggle with tasks requiring formal, abstract reasoning but performed comparably to national expectations on tasks involving concrete or contextual problem-solving. A formal analysis confirmed this trend. Across KABC-II and WISC-V assessments, Hmong American students scored roughly one to one and a half standard deviations below the mean on global indices, with lower performance linked to items requiring abstract reasoning or abstract-mathematical logic. In contrast, their performance on concrete and contextual items aligned with the standardization samples and was stronger relative to their abstract scores.
These findings suggested that the difficulties observed were not primarily language-based but related to the formal, abstract nature of the assessment designs themselves. This raised key questions about whether performance would have improved with less abstract, more context-based testing formats. Ultimately, the pattern supported the hypothesis that an alternative form of reasoning—contextual, practical, and less dependent on abstract logic—may be present in this student population, and that measuring this reasoning style could produce more equitable and accurate results across groups.
Beginning in 2018 under a Schoolhouse Educational Services contract, researchers developed an alternative cognitive measure aimed at assessing practical, concrete problem-solving rather than abstract reasoning. The initial version, the Romstad Assessment of Informal Nonverbal Reasoning (RAINR), showed balanced performance across diverse groups, suggesting it tapped a form of reasoning not captured by traditional intelligence tests. A 2019 analysis refined the construct—renamed contextual reasoning—and led to the updated Assessment of Nonverbal Contextual Reasoning (ANCR), with subtests revised or removed and the format converted to a fully digital platform to reduce memory demands and improve consistency. When COVID-19 halted in-person testing in 2020, the digital format enabled remote administration, and additional data collection across states confirmed earlier findings, supporting contextual reasoning as a culturally equitable construct and the ANCR as a valid way to measure it.
In Conclusion
Since its inception, the Assessment of Nonverbal Contextual Reasoning has consistently fulfilled its intended purpose. Although its development spanned seven years, the need for such a tool has existed for decades—predating many current assessment practices. As Dr. Helaine Marshall of Long Island University notes, practitioners often become attached to familiar methods and theoretical frameworks, unintentionally adopting “paradigm blinders” that limit recognition of alternative constructs or practices that may better serve diverse learners. These blinders obscured contextual reasoning, despite its relevance and visibility within everyday problem-solving. Removing these constraints enabled the identification of contextual reasoning and the subsequent development of the ANCR—a measure that produces equitable, balanced results without the need for qualification, apology, or interpretive justification.