Ponderings on Evidence, Expertise, and the Future of Clinical Judgment in Speech-Language Pathology

Recently, I attended a webinar on how speech-language pathologists can best use artificial intelligence. It was marketed as a way to improve workload management and caseload efficiency. What I largely witnessed instead was a familiar problem: old habits becoming newly automated. Non-evidence-based practices that have long persisted in our field were simply being repackaged as AI prompts.
Presenters asked us to generate “receptive language” activities, despite years of research showing that receptive language as a treatment target lacks both evidence and conceptual clarity (Rinaldi et al., 2021). They also asked us to create “multi-step directions” goals, targets without actual evidence behind them (Kamhi, 2014; Wallach, 2014).
The technology wasn’t the problem; the problem was what people were asking it to do.
The Tool Is Only as Good as the User
AI can be revolutionary in the right hands. It can help generate materials, organize data, model sentence complexity, or simulate social inferencing scenarios. But without a clear evidence base and a clinician who knows how to interpret and apply it, AI becomes a mirror of our misconceptions. It replicates the same pseudoscientific interventions, albeit more rapidly.
AI does not (and cannot) make us experts. If anything, it widens the gap between those who understand the underlying science of language and learning and those who rely on intuition rather than evidence. An uninformed clinician with AI is not more efficient; they’re simply misinformed at scale.
The Illusion of “Expertship”
One striking detail from the webinar was the repeated use of the phrase, “As an expert speech-language pathologist…” Each prompt began with those words, as if declaring expertise could conjure it. But when the output that followed included non-evidence-based interventions, the label “expert” lost all meaning.
Even more troubling was the implicit claim that SLPs themselves were already the experts, and that their role was to “teach” AI to think like them. That framing assumes that what we are already doing is inherently valid, rather than in need of scrutiny and alignment with evidence. If our own practices are riddled with pseudoscientific or outdated approaches, then we are not teaching AI to be better clinicians; instead, we are training it to replicate our worst habits, only faster.
You don’t declare yourself an expert and then teach AI how to be a bad one!
Real experts never call themselves experts. Expertise is not self-ascribed; it’s demonstrated through continuous inquiry, humility, and accountability. The more genuine knowledge someone acquires, the more aware they become of how much they don’t know.
To be an expert is to be perpetually learning: be it reading, re-evaluating, testing, or refining. It means being willing to discard cherished ideas when the evidence contradicts them. It means sitting with discomfort, uncertainty, and the limits of one’s knowledge without papering over those limits with buzzwords or AI-generated templates.
Expertise Is Not Automation
When we prompt AI with “As an expert SLP,” we are confusing authority with evidence. We present ourselves as experts without delivering the substance. Expertise cannot be automated or imitated; it must be earned, and continually re-earned, through disciplined, skeptical thought.
AI can perform the task, but it does not question the task. It tends to reproduce only patterns that already exist. If those patterns are rooted in outdated or non-evidence-based practice, the results will mirror that. In contrast, when informed professionals use AI to augment their reasoning (to cross-check, organize, or generate starting points) it can become a powerful extension of clinical thought rather than a shortcut around it.
A recent MIT Media Lab study divided college-aged participants into three groups writing SAT-style essays: one using AI, one using search engines, and one writing independently. The AI group showed reduced brain engagement, as measured across 32 EEG channels, and consistently underperformed at linguistic, behavioral, and neural levels. Over time, their problem-solving abilities declined; they became passive consumers of text rather than active thinkers (Kosmyna et al., 2025).
The same principle applies to us. If we allow AI to fully write our reports or compose our clinical narratives without engaging deeply in the reasoning process ourselves, we risk losing the very cognitive precision that defines clinical expertise. AI can write a paragraph, but it cannot think; if we stop thinking, our expertise atrophies.
What Can We Do?
As AI becomes an integral part of what we do, we must use it wisely. We need better judgment. We need to stop treating “expert” as a marketing label and start reclaiming it as a responsibility.
Before asking AI to write our lesson plans, compose our reports, or draft our parent letters, we should be asking ourselves:
- Is the target grounded in evidence?
- Do I understand the theoretical framework that supports this intervention?
- Am I using AI to think with me, or to think for me?
If we can’t answer those questions clearly, the problem isn’t the technology.
The Bottom Line
AI is very powerful. In the hands of informed individuals, it can streamline work and extend the reach of solid science. In the hands of uninformed individuals, it magnifies pseudoscience and cloaks it in the illusion of sophistication.
The real challenge is not to make AI “SLP-friendly,” but to make SLPs more scientifically literate. Real expertise means you never stop learning, and you’re honest enough to admit when you don’t know.
We don’t need AI to make us fake experts. We need scientifically backed knowledge to make AI truly useful.
References:
- Kamhi, A. (2014). Improving clinical practices for children with language and learning disorders. Language, Speech, and Hearing Services in Schools, 45(2), 92-103.
- Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X.-H., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025). Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task [Preprint]. arXiv.
- Rinaldi, S., Caselli, M. C., Cofelice, V., D’Amico, S., De Cagno, A. G., Della Corte, G., Di Martino, M. V., Di Costanzo, B., Levorato, M. C., Penge, R., Rossetto, T., Sansavini, A., Vecchi, S., & Zoccolotti, P. (2021). Efficacy of the treatment of developmental language disorder: A systematic review. Brain Sciences, 11(3), 407.
- Wallach, G. P. (2014). Improving clinical practice: A school-age and school-based perspective. Language, Speech, and Hearing Services in Schools, 45(2), 127-136.