Invisible Disability, Visible Damage: The Systemic Failure to Identify Language Disorders

As a speech-language pathologist (SLP) specializing in language and literacy assessment, I’ve reviewed hundreds of educational, psychoeducational, and neuropsychological evaluations for students with complex learning needs. I repeatedly see students who are clearly struggling in the classroom, despite strong support at home, undergoing assessments that fail to identify their needs accurately. As a result, they are not recommended for the services or interventions they clearly require.

These are students whose grades are often propped up by participation points and whose assignments are completed with heavy adult support. Their daily struggles, in areas like discourse comprehension, written expression, and academic reasoning, are evident to those who work with them closely. Yet psychological and neuropsychological evaluations often fail to capture this reality, leaning instead on grade-level performance or composite scores that mask the student’s underlying breakdowns. The result is a profile that looks functional on paper but doesn’t reflect how the student actually functions in real life.

What’s missing is a good, hard look at the supports masking students’ struggles and the language demands causing them. This isn’t a minor oversight; it’s a systemic failure that delays intervention, misguides planning, and leaves families frustrated and ignored.

This post is written to highlight the gaps in current evaluation practices, not to place blame, but to push for better approaches that prioritize language and reflect the real challenges students face in school.

The IQ Trap: “If they’re smart, they can’t have a language problem.”

Many psychoeducational and neuropsychological evaluations rely too heavily on IQ scores to judge a student’s potential and eligibility for services. If a child scores in the average or low average range on a test like the WISC-V, their learning difficulties are often dismissed as motivational, emotional, or environmental, while language is barely addressed (Carr, 2024, para. 4).

This reflects a fundamental misunderstanding of what IQ tests measure. IQ tests do not assess the kinds of language needed for academic success (Richardson, 2002). Even the Verbal Comprehension Index on the WISC-V measures vocabulary and factual knowledge, not the ability to interpret texts or organize discourse. A child can score well on these tasks and still have significant language impairment.

Students with Developmental Language Disorder (DLD) often have average or above-average IQs. Their challenges are frequently missed because they don’t align with outdated assumptions about what a learning disability looks like. When evaluators see average scores, they assume the student’s difficulties stem from something else. As a result, many students are misclassified, denied services, or left unsupported (Ziegenfusz et al., 2022).

This becomes even more harmful when paired with the discrepancy model, which waits for a large gap between IQ and achievement before recognizing a disability. If both are in the average range, evaluators may conclude there’s no issue, even if the student is clearly struggling in real-world academic settings (Fletcher & Vaughn, 2009).

Using IQ as a gatekeeper ignores the central role of language in learning. Language supports reading, writing, math, reasoning, and social interaction (Snow, 2010). If language isn’t assessed directly, we miss the root cause of many learning difficulties.

An average IQ does not rule out a language disorder (Blake, Dąbrowska & Riches, 2025). Treating it that way leads to chronic under-identification of students with real, disabling language needs that don’t show up on the surface. These aren’t soft problems. They are language problems, and they need to be treated as such.

Why Generic Recommendations Miss the Mark

Recommendations in psychoeducational and neuropsychological reports are often disconnected from a student’s actual language and reading profile. Programs like Orton-Gillingham, Wilson, or Lindamood-Bell are routinely recommended by default, regardless of whether the student’s difficulties involve decoding, fluency, comprehension, or oral language (Stevens et al., 2021; Snowling & Hulme, 2025).

This mismatch undermines effective intervention. If a student struggles with comprehension, making inferences, understanding figurative language, or processing complex syntax, a phonics-heavy program is the wrong fit. These students need support in language processing, vocabulary, syntax, and reasoning, not more sound-symbol drills (Snowling & Hulme, 2025).

When the issue is fluency (slow, effortful reading despite accurate decoding), decoding programs miss the mark again. Fluency challenges often stem from poor automaticity, language formulation difficulties, or working memory overload. These require connected text practice and expressive language support, not more phonics (Benjamin & Gaab, 2012; Kim et al., 2017; Lee & Yoon, 2017; O’Connor, Swanson, & Geraghty, 2010).

If a child has oral language weaknesses or trouble organizing ideas, assigning a decoding-focused program makes no clinical sense. And recommending ADHD coaching to a student who struggles with following directions because of syntactic comprehension deficits is not just misguided, it’s a misdiagnosis. These are language-based issues, not attention problems (Elleseff, 2025).

Orton-Gillingham and similar programs are appropriate when decoding is the primary issue. But when used reflexively, without diagnostic precision, they waste time and set students up to fail interventions that were never meant for them (Stevens et al., 2021).

Effective intervention depends on accurate diagnosis. Reading is not a unitary skill (Catts & Kamhi, 2017). It requires decoding, fluency, language comprehension, and reasoning. Unless evaluations differentiate these areas and align recommendations accordingly, blanket programs and generic referrals risk doing more harm than good (Farrall, 2012; Catts et al., 2015; Catts & Petscher, 2022).

If It’s Not Overt, It Doesn’t Exist

Many psychoeducational and neuropsychological reports rely on superficial observations like “speaks in full sentences” or “language appears age-appropriate” as evidence that no language concerns are present. These conclusions are often based on casual conversation, not on the functional or academic language skills needed to succeed in school (Perichon et al., 2025).

Students with DLD often seem socially competent in low-demand, one-on-one settings. They can answer simple questions, hold pleasant conversations, and appear engaged. But the language needed for school (summarizing texts, explaining processes, constructing arguments, or analyzing viewpoints) places far greater demands on vocabulary, syntax, inferencing, and discourse organization. These are the areas where breakdowns typically occur (Andreou et al., 2022; Ziegenfusz et al., 2022).

Many students with DLD develop coping strategies that mask their deficits. They nod along, mirror others’ words, rely on vague affirmations, or shift topics when unsure. These behaviors can give the illusion of strong communication skills, especially to adults who aren’t attuned to subtle signs of breakdown. Some students use humor or enthusiasm to redirect attention, while others echo phrases from peers or adults to seem engaged. These strategies are adaptive, but not sustainable. They fall apart in situations that demand clear, organized, and precise language: peer discussions, group work, or tasks involving sarcasm, perspective-taking, or layered reasoning (McGregor, 2020; Leitão et al., 2025; Hobson & Lee, 2022; McGregor et al., 2023).

These deficits are rarely detected through informal conversation or basic checklists. Tools like the Clinical Assessment of Pragmatics (CAPs) (Lavi, 2019) or discourse-based tasks (Nippold, Mansfield & Billow, 2007) are far better suited to reveal difficulties in interpreting intent, navigating breakdowns, managing topic shifts, and adjusting language to fit social or academic contexts.

What looks like strong social language during a brief interview often masks deeper challenges with comprehension and expression (Andreou et al., 2022). If an evaluation doesn’t include high-level language tasks, it will miss the skills that matter most in real-world academic settings. Passing the “eyeball test” is not the same as being able to write an essay, follow layered instructions, or contribute meaningfully to a classroom discussion.

If we don’t assess discourse-level language, we miss the core deficits driving academic struggles in students with DLD.

WJ-V and WIAT-4 Are Not Enough

The Woodcock-Johnson V (WJ-V) and the Wechsler Individual Achievement Test, Fourth Edition (WIAT-4) are commonly used in academic evaluations to assess skills like word reading, spelling, math, and sentence composition. While these tools provide standardized scores in core academic areas, they are not designed to capture how language functions in real academic settings, and that’s a major limitation (Elleseff, 2020; Beaujean & Parkin, 2022).

The tasks on the WJ-V and WIAT-4 are often decontextualized. Word reading involves isolated words, spelling requires writing dictated items, and sentence composition checks for grammatical correctness in single sentences. These tasks assess basic skills, but they don’t measure how students use language to complete authentic schoolwork: summarizing texts, explaining cause and effect, constructing arguments, or responding to open-ended questions in science and history (Beaujean & Parkin, 2022).

Students with DLD or other language-based learning disabilities can score in the average range on these tests and still struggle to manage classroom demands. They may have trouble organizing ideas, interpreting complex directions, using precise vocabulary, or expressing thoughts clearly in writing (McGregor et al., 2023). These deficits often go unnoticed because the WJ-V and WIAT-4 do not assess discourse-level comprehension, narrative structure, pragmatic language, higher-level syntax, or auditory memory.

This creates a dangerous disconnect. Students who pass standardized achievement tests may still be unprepared for real-world academic tasks, yet their struggles are often misattributed to attention issues or low effort instead of underlying language weaknesses.

We cannot equate WJ-V or WIAT-4 performance with academic readiness. These tools have a role in identifying certain foundational skills, but they are insufficient on their own. A more complete evaluation must include curriculum-aligned language assessments, discourse-based tasks, and tools like the Test of Integrated Language and Literacy (TILLS) (Nelson, Plante, Helm-Estabrooks, & Hotz, 2016; Mailend et al., 2016) that probe how language is used in school-based contexts. Without these, we risk overlooking students with serious, yet hidden, language needs and making decisions that fail to address the root of their academic challenges.

Everything Is ADHD, Except When It’s Not

When a child struggles with focus, becomes overwhelmed, or performs poorly on working memory tasks, many evaluations jump to ADHD as the explanation. Executive dysfunction is treated as the root cause, with treatment plans focused on attention, persistence, and behavior. Language, especially comprehension, is often ignored or barely addressed (Parks et al., 2023).

But this framing is often incomplete, and in many cases, incorrect (Ford-Jones, 2015; Sadek, 2023; Kazda et al., 2021; Abdelnour et al., 2022).

Take the student who seems to zone out during class. This is usually seen as inattention. But in reality, the student may be struggling to process long, complex sentences. Instructions like “Before you start the second paragraph, make sure you’ve addressed the three points on the board and use evidence from yesterday’s reading” are dense. If the student can’t hold on to the structure or meaning, they may check out, not from distraction, but from overload (Kornilov et al., 2015; Jones & Westermann, 2022).

Or consider the student who “can’t follow directions.” This is frequently attributed to pure working memory issues or impulsivity. But what if they never understood the directions in the first place? Language processing breakdowns, especially with conditionals, sequencing, or embedded clauses, can make instructions incomprehensible, leading to apparent noncompliance (Kornev & Balčiūnienė, 2021).

The same applies to students who read haltingly, skip words, or lose the thread of what they’re reading. These behaviors are often chalked up to poor attention. In many cases, the problem is weak syntactic understanding. Without the ability to parse clauses, chunk phrases, or follow grammatical cues, reading becomes exhausting and fragmented (Catts, Adlof, & Weismer, 2006; Hulme & Snowling, 2011).

These are language-based difficulties that look like ADHD (Parks et al., 2023). The behaviors overlap, but the causes are different. Yet most evaluations don’t assess oral language or discourse comprehension in depth. Instead, they end with a broad ADHD diagnosis and a recommendation for medication, while the true language issues go undetected.

This isn’t just a misdiagnosis (Ford-Jones, 2015; Sadek, 2023; Kazda et al., 2021; Abdelnour et al., 2022). It’s a systemic oversight that delays the right intervention and mislabels children for symptoms rooted in unrecognized communication breakdowns.

If language is not thoroughly ruled out, diagnosing ADHD is premature. It’s labeling the symptom without identifying the cause (Elleseff, 2025).

No Link Between Behavior and Pragmatic Language

In many evaluations, behavioral and social-emotional difficulties are treated as separate from language, as if a child’s outbursts, withdrawal, or defiance have nothing to do with how they understand and use communication (Helland et al., 2014; Levickis et al., 2018). This outdated, siloed view misses the mark and often causes harm.

Many students labeled “oppositional,” “disengaged,” or “immature” are actually struggling with unrecognized pragmatic language impairments (Hollo, Wehby, & Oliver, 2014; Chow & Hollo, 2018; Hollo, Chow, & Wehby, 2019; Chow, Walters, & Hollo, 2020; Chow & Hollo, 2022). These breakdowns aren’t always obvious in casual conversation or on tests that only assess vocabulary and grammar. They show up in how students interpret social cues, manage conversations, express intentions, and navigate relationships (Forrest et al., 2025).

Common signs include (Andreou et al., 2022; Botting & Adams, 2005; Brinton & Fujiki, 2005; Chow & Hollo, 2022):

  • Misreading tone, sarcasm, or indirect requests
  • Taking idioms literally
  • Struggling to clarify misunderstandings
  • Missing unspoken rules like turn-taking or topic shifts
  • Using abrupt or disorganized language that sounds rude
  • Having trouble understanding others’ perspectives, especially in conflict

When students with these issues are misunderstood, the behavior often escalates. A child who “talks back” may not grasp how their tone sounds. A student who interrupts might lack the skills to stay on topic. The one who shuts down in group work may not know how to negotiate or ask for help. These are all signs of communication breakdown (Chow, Walters, & Hollo, 2020).

Yet many evaluations pathologize the behavior without probing its source. Reports describe “poor coping” or “emotional dysregulation” and recommend counseling or behavior support, while the language component is ignored. This is especially common in students with DLD or trauma-related language issues, profiles where pragmatic deficits are central, but often overlooked (Hyter, 2021).

What’s needed is an integrated approach. Behavior and language are deeply connected. Tools like the Clinical Assessment of Pragmatics (CAPs) (Lavi, 2019), discourse samples (Nippold, Mansfield & Billow, 2007), and narrative tasks (Hughes, 2001; Justice et al., 2006) offer insight into how students interpret social language, organize thoughts, and manage interaction. These tools go beyond surface-level checklists. They reveal the communicative demands behind the behavior.

When pragmatic impairments are identified and treated as language-based, intervention becomes far more effective (Jensen de López, Kraljević, & Struntze, 2022). These students don’t just need psychotherapy. They need support building flexible communication, social reasoning, and functional self-advocacy.

The behavior is real, but often, it’s not just behavioral. It’s communicative. If evaluations continue to overlook pragmatic language, we’ll keep treating symptoms while missing the cause.

The Illusion of Grade-Level Performance

Many evaluations dismiss concerns when a student is “performing at grade level” or earning A’s and B’s. The assumption is that if grades look good, there’s no problem, no academic impact, no need for support. But this surface-level interpretation ignores the complexity of academic expectations and the hidden scaffolds holding many students up (Kunnath, 2017).

Grades often reflect participation, extra credit, parent help, or heavily supported assignments, not independent performance. These same students can’t write a coherent paragraph, respond precisely to open-ended questions, or follow multi-step academic instructions without help. Their report cards look solid, but the moment high-stakes tasks arrive (state testing, unsupported writing, higher-level reasoning), they fall apart.

This disconnect often begins early. Parents of struggling readers report obvious concerns: slow letter-sound learning, guessing at words, or trouble blending. But unless schools use objective, sensitive tools (like the TILLS, GORT-5, or TOWRE-2), these issues remain undocumented. Instead, subjective tools like Running Records (Blaiklock, 2004; D’Agostino et al., 2021; Rodgers et al., 2021) or Fountas & Pinnell levels (Burns et al., 2015a, 2015b; What Works Clearinghouse, 2017; Burns, 2024) mask true ability, labeling students “on level” even when they lack foundational reading skills.

It doesn’t stop at reading. Spelling tests are often based on memorization, not pattern knowledge (Apel, Masterson & Hart, 2004; Berninger & Fayol, 2008; Putman, 2017). Students ace the test, forget the words days later, and continue to misspell them in writing. In science and social studies, students study fill-in-the-blank guides or complete projects edited by adults. Grades inflate, but true comprehension is shallow. I’ve worked with families who spend hours helping with homework or managing meltdowns, only to see an A- in the gradebook, giving schools the false impression that the child is thriving.

Writing grades are especially misleading. I’ve seen essays full of fragments, disorganization, and basic errors marked as B+. Rubrics may be provided, but scoring often reflects effort or leniency, not mastery. Students are passed along without the skills to express complex ideas or revise their work, and by middle or high school, the gap between appearance and reality becomes too wide to ignore (Stoeckel et al., 2013; Koutsoftas, 2016; Koutsoftas & Gray, 2012; Williams et al., 2013).

We need to stop using grades and vague notions of “grade level” as a stand-in for real evaluation. Instead, we must ask: What was measured? Who supported the work? Does it reflect the student’s independent ability, or the system’s ability to mask the problem? Without honest answers, students with real language and learning needs will continue to be overlooked until it’s too late.

Right Scores, Wrong Story: When Data Are Ignored or Misread

The problem isn’t just which tests are used; it’s how the results are interpreted. Many evaluation reports offer only a surface-level summary, relying on broad composite scores that obscure meaningful differences between subtests. Key patterns, like large gaps between tasks, are often ignored, misinterpreted, or deliberately smoothed over to fit a predetermined conclusion (Ryder & Norwich, 2018).

I’ve reviewed evaluations where a student scored at the 84th percentile on one task and at the 9th percentile on another task of the same test. Rather than recognize this as a red flag, the evaluator averaged the scores into a composite, called it “average,” and moved on. The discrepancy and the need for further analysis were never acknowledged.
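
To make the arithmetic concrete, here is a minimal sketch in Python. The numbers are illustrative, and real composites are derived from normed sums of subtest scores rather than simple averages, but the masking effect is the same:

```python
# Illustrative sketch only: converts percentile ranks to standard scores
# (mean = 100, SD = 15) and averages them the way a composite summary does.
# Real test batteries norm composites on sums of scaled scores, but the
# masking effect shown here is identical.
from statistics import NormalDist

def percentile_to_standard_score(percentile: float, mean: float = 100.0, sd: float = 15.0) -> float:
    """Convert a percentile rank to a standard score on a mean-100, SD-15 scale."""
    z = NormalDist().inv_cdf(percentile / 100.0)
    return mean + z * sd

strong_subtest = percentile_to_standard_score(84)  # ~115, a full SD above the mean
weak_subtest = percentile_to_standard_score(9)     # ~80, more than an SD below it
composite = (strong_subtest + weak_subtest) / 2    # ~97, squarely "average"

print(f"Strong subtest: {strong_subtest:.0f}")     # 115
print(f"Weak subtest:   {weak_subtest:.0f}")       # 80
print(f"Composite:      {composite:.0f}")          # 97
# The subtests sit more than two SDs apart, yet the composite lands in the
# average range, and the red flag vanishes from the summary page.
```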

This kind of composite masking is a major source of diagnostic error (Elleseff, 2016; Gilmore et al., 2024). Students who perform well on structured, rote tasks like labeling or word definitions may still struggle to explain ideas, organize narratives, or understand implied meaning. If evaluators don’t understand the specific demands of each subtest, they may assume that “average” means intact. When that flawed interpretation is used to deny services or justify inaction, the consequences are real (McKenna & Heaney, 2020).

Another common error is misattributing poor performance to attention or behavior. Listening comprehension breakdowns are often blamed on inattention, when the real problem is sentence complexity, unfamiliar vocabulary, or linguistic overload. A child who misses key instructions might not have a working memory problem; they may have never understood the language in the first place (Elleseff, 2025).

Similarly, memory subtests are often misread. Poor performance on story recall or word lists is assumed to reflect memory deficits. But these tasks place heavy demands on syntax, semantics, and retrieval. If the language demands aren’t recognized, the child may be mistakenly identified as having isolated executive function deficits, when in fact the difficulty stems from underlying comorbid cognitive and linguistic processing challenges (Flores Camas & Leon-Rojas, 2023).

Even when strong tools are used, interpretation often lacks clinical depth. Evaluators may work in isolation, without input from SLPs or literacy specialists who understand the developmental trajectory of academic language.

These errors have high stakes. When reports overlook subtest patterns or explain away language breakdowns as attention or emotional issues, students are denied services they clearly need. They’re sent to counseling, placed on medication, or told everything is fine when it’s not! And when families question the results, they’re often dismissed with “the testing didn’t show anything significant,” even when red flags are sitting right there in the data.

The tests themselves are not always the problem. It’s how they’re used, or misused. Failing to read the manual and bypassing clinical reasoning can turn technically correct scores into meaningless reports that do more to obscure than inform. If evaluators aren’t trained to recognize language disorders, or don’t collaborate with those who are, the numbers may look good, but the conclusions are misleading.

The value of any evaluation lies in how well it explains a child’s challenges and guides meaningful support. A report that misses or misrepresents a language-based learning issue doesn’t just fall short; it becomes a barrier to the help that child needs.

Testing Tools Are Often Too Simple or Poorly Matched

Another major flaw in many psychoeducational and neuropsychological evaluations is not just what gets tested, but how mismatched or narrow the testing tools are, especially for older students with subtle or higher-order language deficits.

Evaluators often rely on language measures that are helpful for detecting broad developmental delays but fail to capture complex academic language demands. Tasks like sentence repetition, vocabulary labeling, or word definitions (e.g., on the CELF-5) may look fine on paper, but they don’t reveal whether a student can organize ideas, interpret implied meaning, reason through language, or participate in academic discussions, all skills essential for real classroom success.

Many of these tools also lack ecological validity (Olson, Jacobson, & Van Oot, 2013). School requires students to write essays, explain scientific processes, evaluate arguments, ask questions, and adapt their language to different contexts. Naming objects or repeating sentences in isolation doesn’t reflect what students actually need to do. If a test doesn’t require students to engage in meaningful, connected language tasks, it’s not measuring the skills that matter most (Chaytor & Schmitter-Edgecombe, 2003).

This problem is made worse by the use of tools with overly narrow constructs. A test might claim to assess language, but in practice, it only looks at basic grammar or word meanings (e.g., the CELF-5; Wiig, Semel, & Secord, 2013). Without examining how students integrate language across reading, writing, listening, and speaking, evaluations miss the bigger picture. A student might define a word but not use it effectively in writing. They might repeat a sentence but fail to construct a paragraph or follow the logic in a narrative. When evaluations only capture isolated skills, key deficits remain hidden.

Compounding this is the lack of alignment between many standardized tests and real classroom expectations. A test might assess comprehension with short, literal passages and multiple-choice questions (Recht & Leslie, 1988), while teachers are asking students to analyze complex texts, support arguments with evidence, and communicate layered ideas. The gap between the test and the academic task is wide, and students fall through it.

Language underpins every part of learning. If the assessment tools are outdated, shallow, or disconnected from academic demands, they won’t capture the struggles students are actually facing. What looks like “no language concerns” is often just the result of using the wrong tools for the student’s age, the task at hand, and the type of language that matters.

Discrepancy Model Thinking Is Still Alive and Well

Despite decades of research discrediting it, the IQ-achievement discrepancy model remains widely used in schools and private evaluations (Beaujean et al., 2018). This model assumes that a student must show a large gap between IQ and academic performance to be diagnosed with a learning disability or to qualify for services. If no gap exists, the conclusion is often “no disability, no services” (Fletcher & Miciak, 2024).

This approach is not just outdated; it’s scientifically flawed and deeply inequitable (Beaujean et al., 2018).

The discrepancy model assumes that IQ scores represent a student’s learning potential and that academic achievement should match that potential. So, if a student has an average IQ but performs poorly in reading or writing, the gap, or “discrepancy”, is seen as evidence of a learning disability. But this logic breaks down for students with DLD and learning-based challenges who often score in the low average range on both IQ and academic tests. Because there’s no large gap between scores, the model incorrectly concludes that there’s no disability. In reality, these students may have significant language and literacy deficits, but the tools being used aren’t sensitive enough to detect them. The absence of a discrepancy doesn’t mean the student is fine; it means the model is failing to capture the full picture of their learning needs (Fletcher & Miciak, 2024).
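
To see how this gatekeeping logic plays out, here is a minimal sketch in Python. The 1.5-SD (22.5-point) cutoff is a hypothetical threshold for illustration only, since actual eligibility criteria vary by state and district:

```python
# Illustrative discrepancy-model check: a disability is flagged only when
# achievement falls far enough below IQ. The 1.5-SD cutoff is hypothetical;
# actual eligibility criteria vary by state and district.
SD = 15

def shows_discrepancy(iq: int, achievement: int, gap_in_sds: float = 1.5) -> bool:
    """Return True when the IQ-achievement gap meets the (illustrative) cutoff."""
    return (iq - achievement) >= gap_in_sds * SD

# A student with DLD whose language deficits depress BOTH scores:
print(shows_discrepancy(iq=88, achievement=84))   # False -> "no disability, no services"

# A high-IQ student with the very same achievement score:
print(shows_discrepancy(iq=120, achievement=84))  # True -> qualifies for services

# Identical academic performance, opposite outcomes: the model keys on the
# gap between scores, not on whether the student can meet classroom demands.
```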

This model also creates harmful delays. Students often must fall far enough behind to finally show a large enough gap, by which point their academic confidence has eroded and failure has taken hold. The system waits for visible collapse before acting.

Even worse, the verbal nature of many IQ subtests means that a language disorder can depress the IQ score itself, shrinking the very gap the model looks for (Liao et al., 2015; Saar, Komulainen, & Levänen, 2022). In effect, the student is penalized twice: once for having a language-based disability, and again by a model that misreads it.

Research supports more accurate frameworks (if done right!) like response-to-intervention (RTI) (Fletcher & Vaughn, 2009) and patterns of strengths and weaknesses (PSW) (Miciak et al., 2014), which consider skill deficits, instructional history, and how the student functions in context. But these models are unevenly applied, and many teams still fall back on discrepancy logic as a shortcut or a gatekeeping tool.

At its core, this thinking misunderstands learning. Learning is not just a reflection of IQ; it’s shaped by language, instruction, opportunity, and how well students can apply their skills in real settings. Until we let go of this flawed model and replace it with frameworks that focus on real-world functioning and instructional fit, we will continue to miss and underserve countless students, not because they lack ability, but because we’re asking the wrong questions.

The Missing Piece: Collaboration Between SLPs, Psychologists, and Neuropsychologists

One of the most persistent and avoidable problems in the evaluation process is the lack of true collaboration between speech-language pathologists, psychologists, and neuropsychologists. Professionals often work in parallel rather than together, leading to fragmented conclusions, missed connections, and incomplete diagnostic profiles.

Each discipline brings essential expertise. Psychologists and neuropsychologists assess cognition, memory, attention, and executive functioning, but often lack in-depth training in language development and its academic impact. SLPs specialize in language processing, pragmatics, discourse, and the language foundations of literacy, but are frequently left out of high-stakes evaluations or brought in only to screen for basic articulation or expressive output.

This siloed approach is especially problematic for students with complex profiles, those who struggle across multiple domains like verbal reasoning, processing speed, pragmatics, or inferencing. A psychologist may attribute low working memory scores to attention deficits when the real issue is syntactic complexity. A student’s weak problem-solving may be labeled as a cognitive weakness when it actually reflects gaps in vocabulary or sentence comprehension. These misinterpretations persist when SLPs are not part of the discussion.

Real collaboration means more than trading reports. It involves jointly reviewing data, asking whether performance issues stem from language, cognition, attention, or a combination, and selecting tools that reflect developmental, academic, and linguistic demands. It also means staying open to revising hypotheses based on shared findings.

Effective collaboration depends on shared knowledge and mutual respect for disciplinary expertise. Speech-language pathologists need a working understanding of psychometrics, executive functioning, and the cognitive underpinnings of standardized assessments in order to interpret broader neuropsychological or psychoeducational profiles. Without this, they risk over- or under-emphasizing findings outside their traditional scope.

Conversely, psychologists and neuropsychologists need a deeper and more developmentally grounded understanding of language, particularly how delays or disorders in syntax, semantics, discourse, and pragmatics manifest over time and influence academic performance, emotional regulation, and social behavior. Misinterpreting language-based difficulties as purely behavioral or executive in nature can lead to misdiagnosis and ineffective intervention.

Productive interdisciplinary work hinges on the ability to see the whole child, not just through a disciplinary lens, but through an integrated framework that accounts for the interdependence of language, cognition, emotion, and learning. Without this mutual understanding, teams default to narrow, discipline-specific conclusions, and students fall through the cracks.

To fix this, SLPs need to be involved early in the evaluation process, not as an afterthought, but as core contributors. Psychologists and neuropsychologists need to ask the right language-informed questions when interpreting scores. And schools need to stop treating these roles as separate silos. No single evaluator can capture the full complexity of a student’s learning profile alone.

Interdisciplinary work isn’t optional; it’s essential for accurate diagnosis and effective support.

Final Thoughts: It’s Not About Blame, It’s About Doing Better

I want to be very clear! This post was not written to blame psychologists, neuropsychologists, or educators. It was written to highlight the systemic blind spots in how we evaluate students and to take collective responsibility for addressing them. The issue is that too many professionals work in isolation, relying on outdated frameworks, narrow tools, or surface-level interpretations that miss the complexity of language and its foundational role in learning.

Language is a framework that supports reading, writing, reasoning, memory, self-regulation, social interaction, and academic engagement. When evaluations overlook language, or reduce it to basic vocabulary or sentence repetition, students are mischaracterized, misdiagnosed, and misserved. Key diagnoses are missed. Ineffective interventions are put in place. And capable students continue to struggle without the support they need.

We need assessments that are developmentally informed, interdisciplinary in nature, and grounded in an understanding of how language drives learning. We need professionals who communicate across disciplines, who question assumptions, and who view language as central, not peripheral, to academic success.

Because the stakes are high. The students we overlook don’t get another chance. They get mislabeled. They’re given medication or accommodations instead of therapy. Behavior plans instead of communication support. Judgment instead of understanding.

We owe them better, and we owe it to them now!

Further Reading:
  1. When the Wrong Professional Evaluates the Child: The High Cost of Misdirected Assessments
  2. Neuropsychological or Language/Literacy: Which Assessment is Right for My Child?
  3. A Reading Program is NOT Enough: A Deep Dive into the Dyslexia Diagnosis
  4. Orton-Gillingham (OG) Alone Won’t Cut It: What the Research Actually Shows About Helping Struggling Readers
  5. Comprehensive Assessment of Elementary-Aged Children with Subtle Language and Literacy Deficits
  6. Why Accuracy Matters: The Hidden Risks of Flawed Language and Literacy Tests
  7. School SLPs vs. Private SLPs: Do They Really Target Different Oral Language Goals?
  8. Dear Parents: Ask the Right Questions to Get the Right Services
  9. Addressing Misconceptions About ADHD and Poor Language Performance
  10. The APD Diagnosis Trap: How a Controversial Label Harms Kids in Schools
References:
  1.  Abdelnour, E., Jansen, M. O., & Gold, J. A. (2022). ADHD diagnostic trends: Increased recognition or overdiagnosis? Missouri Medicine, 119(5), 467–473.
  2. Andreou, G., Lymperopoulou, V., & Aslanoglou, V. (2022). Developmental Language Disorder (DLD) and Autism Spectrum Disorder (ASD): Similarities in pragmatic language abilities. A systematic review. International Journal of Developmental Disabilities, 70(5), 777–791.
  3. Apel, K., Masterson, J. J., & Hart, P. (2004). Integration of language components in spelling: Instruction that maximizes students’ learning. In E. R. Silliman and L. C. Wilkinson (Eds.), Language and literacy learning in schools (pp. 292-315). New York: Guilford Press.
  4. Berninger, V., & Fayol, M. (2008). Why spelling is important and how to teach it effectively. Encyclopedia of Language and Literacy Development (pp. 1-13). London, ON: Canadian Language and Literacy Research Network.
  5. Beaujean, A. A., Benson, N. F., McGill, R. J., & Dombrowski, S. C. (2018). A misuse of IQ scores: Using the dual discrepancy/consistency model for identifying specific learning disabilities. Journal of Intelligence, 6(3), 36.
  6. Beaujean, A. A., & Parkin, J. R. (2022). Evaluation of the Wechsler Individual Achievement Test–Fourth Edition as a measurement instrument. Journal of Intelligence, 10(2), Article 30.
  7. Benjamin, C. F., & Gaab, N. (2012). What’s the story? The tale of reading fluency told at speed. Human Brain Mapping, 33(11), 2572–2585.
  8. Blaiklock, K. (2004). A critique of running records of children’s oral reading. New Zealand Journal of Educational Studies, 39(2), 127–145.
  9. Blake, A., Dąbrowska, E., & Riches, N. (2025). Exploring cognitive predictors of language in children with developmental language disorder: The role of nonverbal IQ, working memory, implicit statistical learning, and speed of automatization. Journal of Communication Disorders, 116, 106541.
  10. Botting, N., & Adams, C. (2005). Semantic and inferencing abilities in children with communication disorders. International Journal of Language & Communication Disorders, 40(1), 49–66.
  11. Brinton, B., & Fujiki, M. (2005). Social behaviors of children with language impairment in collaborative work groups. American Journal of Speech-Language Pathology, 14(3), 238–249.
  12. Burns, M. K., Scholin, S. E., Kosciolek, S., & Livingston, J. A. (2015a). Comparison of predictive validity and diagnostic accuracy of screening measures of reading skills. Journal of School Psychology, 53(5), 365–377.
  13. Burns, M. K., Pulles, S. M., Maki, K. E., Kanive, R., Hodgson, J., Helman, L. A., McComas, J. J., & Preast, J. L. (2015b). Accuracy of student performance while reading leveled books rated at their instructional level by a reading inventory. Journal of School Psychology, 53(6), 437–445.
  14. Burns, M. K. (2024). Assessing an instructional level during reading fluency interventions: A meta-analysis of the effects on reading. Assessment for Effective Intervention, 49(4), 214–224.
  15. Carr, S. (2024, March 28). How flawed IQ tests prevent kids from getting help in school. The Hechinger Report. https://hechingerreport.org/how-flawed-iq-tests-prevent-kids-from-getting-help-in-school/
  16. Catts, H. W., Adlof, S. M., & Weismer, S. E. (2006). Language deficits in poor comprehenders: A case for the simple view of reading. Journal of Speech, Language, and Hearing Research, 49(2), 278–293.
  17. Catts, H. W., & Kamhi, A. G. (2017). Prologue: Reading comprehension is not a single ability. Language, Speech, and Hearing Services in Schools, 48(2), 73–76.
  18. Catts, H. W., Nielsen, D. C., Bridges, M. S., Liu, Y. S., & Bontempo, D. E. (2015). Early identification of reading disabilities within an RTI framework. Journal of Learning Disabilities, 48(3), 281–297.
  19. Catts, H. W., & Petscher, Y. (2022). A cumulative risk and resilience model of dyslexia. Journal of Learning Disabilities, 55(3), 171–184.
  20. Chaytor, N., & Schmitter-Edgecombe, M. (2003). The ecological validity of neuropsychological tests: A review of the literature on everyday cognitive skills. Neuropsychology Review, 13(4), 181–197.
  21. Chow, J. C., & Hollo, A. (2018). Language ability of students with emotional disturbance: Discrepancies between teacher ratings and direct assessment. Assessment for Effective Intervention, 43(2), 90–95.
  22. Chow, J. C., Walters, S., & Hollo, A. (2020). Supporting students with co-occurring language and behavioral deficits in the classroom. TEACHING Exceptional Children, 52(4), 222–230.
  23. Chow, J. C., & Hollo, A. E. (2022). Language skills of students with emotional and behavioral disorders. Intervention in School and Clinic, 58(1), 46–50.
  24. D’Agostino, J. V., Rodgers, E., Winkler, C., Johnson, T., & Berenbon, R. (2021). The generalizability of running record accuracy and self-correction scores. Reading Psychology, 42(2), 111–130.
  25. Elleseff, T. (2025, March 26). Beyond ADHD: The hidden language challenges behind academic struggles. DLD and Me. https://www.dldandme.org/all-articles/beyond-adhd-the-hidden-language-challenges-behind-academic-struggles
  26. Elleseff, T. (2020, August 27). Measurement and interpretation of standardized reading assessments for professionals and parents [Webinar]. EBP REEL TALK, Smart Speech Therapy LLC, Somerset, NJ.
  27. Elleseff, T. (2016, May 27). The limitations of using total/core scores when determining speech-language eligibility. Smart Speech Therapy LLC. https://www.smartspeechtherapy.com/the-limitations-of-using-totalcore-scores-when-determining-speech-language-eligibility/
  28. Farrall, M. L. (2012). Reading assessment: Linking language, literacy, and cognition. John Wiley & Sons.
  29. Fletcher, J. M., & Vaughn, S. (2009). Response to intervention: Preventing and remediating academic difficulties. Child Development Perspectives, 3(1), 30–37.
  30. Fletcher, J. M., & Miciak, J. (2024). Assessment of specific learning disabilities and intellectual disabilities. Assessment, 31(1), 53–74.
  31. Flores Camas, R. A., & Leon-Rojas, J. E. (2023). Specific language impairment and executive functions in school-age children: A systematic review. Cureus, 15(8), e43163.
  32. Ford-Jones, P. C. (2015). Misdiagnosis of attention deficit hyperactivity disorder: “Normal behaviour” and relative maturity. Paediatrics & Child Health, 20(4), 200–202.
  33. Forrest, C. L., Gibson, J. L., Button, K. S., Halligan, S. L., & St Clair, M. C. (2025). Do you like me? Differences in learning social cues in adolescents with developmental language disorder (DLD). Journal of Autism and Developmental Disorders. Advance online publication.
  34. Gabriel, R. E. (2019). Converting to privatization: A discourse analysis of dyslexia policy narratives. American Educational Research Journal, 57(1), 305–338.
  35. Gilmore, L., Campbell, M., Howard, G., & Ting, C. (2024). Best practice in psychoeducational assessment: Beyond the manual. Educational and Developmental Psychologist, 41(2), 212–219.
  36. Helland, W. A., Lundervold, A. J., Heimann, M., & Posserud, M.-B. (2014). Stable associations between behavioral problems and language impairments across childhood – The importance of pragmatic language problems. Research in Developmental Disabilities, 35(5), 943–951.
  37. Hobson, H. M., & Lee, A. (2022). Camouflaging in developmental language disorder: The views of speech and language pathologists and parents. Communication Disorders Quarterly, 44(4), 247–256.
  38. Hollo, A., Wehby, J. H., & Oliver, R. M. (2014). Unidentified language deficits in children with emotional and behavioral disorders: A meta-analysis. Exceptional Children, 80(2), 169–186.
  39. Hollo, A., Chow, J. C., & Wehby, J. H. (2019). Profiles of language and behavior in students with emotional disturbance. Behavioral Disorders, 44(4), 195–204.
  40. Hughes, D. (2001, October). Assessment of narrative abilities in preschool and school-age children. Perspectives on Language Learning and Education, 7–11.
  41. Hulme, C., & Snowling, M. (2011). Children’s reading comprehension difficulties: Nature, causes, and treatments. Current Directions in Psychological Science, 20(3), 139–142.
  42. Hyter, Y. D. (2021). Developmental traumatology: Complex trauma, social pragmatic communication, and the speech-language-hearing scientist. International Society for Traumatic Stress Studies.
  43. Jensen de López, K. M., Kraljević, J. K., & Struntze, E. L. B. (2022). Efficacy, model of delivery, intensity and targets of pragmatic interventions for children with developmental language disorder: A systematic review. International Journal of Language & Communication Disorders, 57(4), 764–781.
  44. Jones, S. D., & Westermann, G. (2022). Under-resourced or overloaded? Rethinking working memory deficits in developmental language disorder. Psychological Review, 129(6), 1358–1372.
  45. Justice, L. M., Bowles, R. P., Kaderavek, J. N., Ukrainetz, T. A., Eisenberg, S. L., & Gillam, R. B. (2006). The index of narrative microstructure: A clinical tool for analyzing school-age children’s narrative performance. American Journal of Speech-Language Pathology, 15, 177–191.
  46. Kazda, L., Bell, K., Thomas, R., McGeechan, K., Sims, R., & Barratt, A. (2021). Overdiagnosis of attention-deficit/hyperactivity disorder in children and adolescents: A systematic scoping review. JAMA Network Open, 4(4), e215335.
  47. Kim, M. K., Bryant, D. P., Bryant, B. R., & Park, Y. (2017). A synthesis of interventions for improving oral reading fluency of elementary students with learning disabilities. Preventing School Failure: Alternative Education for Children and Youth, 61, 116–125.
  48. Koutsoftas, A. D., & Gray, S. (2012). Comparison of narrative and expository writing in students with and without language-learning disabilities. Language, Speech, and Hearing Services in Schools, 43(4), 395–409.
  49. Koutsoftas, A. D. (2016). Writing process products in intermediate-grade children with and without language-based learning disabilities. Journal of Speech, Language, and Hearing Research, 59(6), 1471–1483.
  50. Kornev, A. N., & Balčiūnienė, I. (2021). Lexical and grammatical errors in developmentally language disordered and typically developed children: The impact of age and discourse genre. Children, 8(12), 1114.
  51. Kornilov, S. A., Magnuson, J. S., Rakhlin, N., Landi, N., & Grigorenko, E. L. (2015). Lexical processing deficits in children with developmental language disorder: An event-related potentials study. Development and Psychopathology, 27(2), 459–476.
  52. Kunnath, J. P. (2017). Teacher grading decisions: Influences, rationale, and practices. American Secondary Education, 45(3), 68–88.
  53. Lavi, A. (2019). Clinical Assessment of Pragmatics (CAPs). WPS Publishing.
  54. Lee, J., & Yoon, S. Y. (2017). The effects of repeated reading on reading fluency for students with reading disabilities. Journal of Learning Disabilities, 50, 213–224.
  55. Leitão, S., St Clair, M. C., Botting, N., Gibson, J., & Jackson, E. (2025). “They don’t realise how hard he has to try every day”: The rewards and challenges of parenting a child with developmental language disorder. International Journal of Language & Communication Disorders, 60(2), e70003.
  56. Levickis, P., Sciberras, E., McKean, C., Conway, L., Pezic, A., Mensah, F. K., Bavin, E. L., Bretherton, L., Eadie, P., Prior, M., & Reilly, S. (2018). Language and social-emotional and behavioural wellbeing from 4 to 7 years: A community-based study. European Child & Adolescent Psychiatry, 27(7), 849–859.
  57. Liao, S. F., Liu, J. C., Hsu, C. L., Chang, M. Y., Chang, T. M., & Cheng, H. (2015). Cognitive development in children with language impairment, and correlation between language and intelligence development in kindergarten children with developmental delay. Journal of Child Neurology, 30(1), 42–47.
  58. Mailend, M.-L., Plante, E., Anderson, M. A., Applegate, E. B., & Nelson, N. W. (2016). Reliability of the Test of Integrated Language and Literacy Skills (TILLS). International Journal of Language & Communication Disorders, 51(4), 447–459.
  59. Miciak, J., Fletcher, J. M., Stuebing, K. K., Vaughn, S., & Tolar, T. D. (2014). Patterns of cognitive strengths and weaknesses: Identification rates, agreement, and validity for learning disabilities identification. School Psychology Quarterly, 29(1), 21–37.
  60. McGregor, K. K. (2020). How we fail children with developmental language disorder. Language, Speech, and Hearing Services in Schools, 51(4), 981–992.
  61. McGregor, K. K., Ohlmann, N., Eden, N., Arbisi-Kelm, T., & Young, A. (2023). Abilities and disabilities among children with developmental language disorder. Language, Speech, and Hearing Services in Schools, 54(3), 927–951.
  62. McKenna, S. P., & Heaney, A. (2020). Composite outcome measurement in clinical research: The triumph of illusion over reality? Journal of Medical Economics, 23(10), 1196–1204.
  63. Miciak, J., & Fletcher, J. M. (2020). The critical role of instructional response for identifying dyslexia and other learning disabilities. Journal of Learning Disabilities, 53(5), 343–353.
  64. Nelson, N., Plante, E., Helm-Estabrooks, N., & Hotz, G. (2016). Test of Integrated Language and Literacy Skills (TILLS). Brookes.
  65. Nippold, M. A., Mansfield, T. C., & Billow, J. L. (2007). Peer conflict explanations in children, adolescents, and adults: Examining the development of complex syntax. American Journal of Speech-Language Pathology, 16(2), 179–188.
  66. O’Connor, R. E., Swanson, H. L., & Geraghty, C. (2010). Improvement in reading rate under independent and difficult text levels: Influences on word and comprehension skills. Journal of Educational Psychology, 102, 1–19.
  67. Olson, K., Jacobson, K. K., & Van Oot, P. (2013). Ecological validity of pediatric neuropsychological measures: Current state and future directions. Applied Neuropsychology: Child, 2(1), 17–23.
  68. Parks, K. M. A., Hannah, K. E., Moreau, C. N., Brainin, L., & Joanisse, M. F. (2023). Language abilities in children and adolescents with DLD and ADHD: A scoping review. Journal of Communication Disorders, 106, 106381.
  69. Perichon, J., Paul, M. J., Chabanal, D., & Maïonchi-Pino, N. (2025). A scoping review of the observed and perceived functional impacts associated with language and learning disorders in school-aged children. International Journal of Language & Communication Disorders, 60(4), e70086.
  70. Putman, R. (2017). Using research to make informed decisions about the spelling curriculum. Texas Journal of Literacy Education, 5(1), 24–32.
  71. Recht, D., & Leslie, L. (1988). Effect of prior knowledge on good and poor readers’ memory of text. Journal of Educational Psychology, 80, 16–20.
  72. Richardson, K. (2002). What IQ tests test. Theory & Psychology, 12(3), 283–314.
  73. Rodgers, E., D’Agostino, J. V., Berenbon, R., Johnson, T., & Winkler, C. (2021). Scoring running records: Complexities and affordances. Journal of Early Childhood Literacy, 23(4), 665–694.
  74. Ryder, D., & Norwich, B. (2018). What’s in a name? Perspectives of dyslexia assessors working with students in the UK higher education sector. Dyslexia, 24(2), 109–127.
  75. Saar, V., Komulainen, E., & Levänen, S. (2022). The significance of nonverbal performance in children with developmental language disorder. Child Neuropsychology, 29(2), 213–234.
  76. Sadek, J. (2023). Attention deficit hyperactivity disorder misdiagnosis: Why medical evaluation should be a part of ADHD assessment. Brain Sciences, 13(11), 1522.
  77. Snow, C. E. (2010). Academic language and the challenge of reading for learning about science. Science, 328(5977), 450–452.
  78. Snowling, M. J., & Hulme, C. (2025). Risk factors for dyslexia: Addressing oral language deficits. Mind, Brain, and Education.
  79. Stevens, E. A., Austin, C. R., Moore, E. W., Scammacca, N. K., Boucher, A. N., & Vaughn, S. (2021). Current state of the evidence: Examining the effects of Orton-Gillingham reading interventions for students with or at risk for word-level reading disabilities. Exceptional Children, 87(4), 397–417.
  80. Stoeckel, R. E., Colligan, R. C., Barbaresi, W. J., Weaver, A. L., Killian, J. M., & Katusic, S. K. (2013). Early speech-language impairment and risk for written language disorder: A population-based study. Journal of Developmental and Behavioral Pediatrics, 34(1), 38–44.
  81. What Works Clearinghouse. (2017, September). Intervention report: Leveled Literacy Intervention (LLI). U.S. Department of Education, Institute of Education Sciences. https://ies.ed.gov/ncee/wwc/InterventionReport/679
  82. Wiig, E. H., Semel, E., & Secord, W. A. (2013). Clinical Evaluation of Language Fundamentals, Fifth Edition (CELF-5). Pearson.
  83. Williams, G. J., Larkin, R. F., & Blaggan, S. (2013). Written language skills in children with specific language impairment. International Journal of Language & Communication Disorders, 48(2), 160–171.
  84. Ziegenfusz, S., Paynter, J., Flückiger, B., & Westerveld, M. F. (2022). A systematic review of the academic achievement of primary and secondary school-aged students with developmental language disorder. Autism & Developmental Language Impairments, 7, 23969415221099397.
