The fear I hear most from clinicians who are actually using AI well — not the skeptics, the early adopters — is surprisingly consistent.

"If I keep letting the tool do the thinking, will I still be able to think?"

It's a fair question. And the research is starting to suggest it's the right one to ask.

The Problem Is Real

The MIT Media Lab's 2025 electroencephalography (EEG) study of participants writing with ChatGPT showed that those who drafted essays with LLM assistance exhibited reduced neural engagement, weaker memory formation, and, most strikingly, a diminished sense of ownership over their own work. When asked to reproduce what they'd written minutes earlier, they couldn't.

Translate that into clinical practice, and the implications are sharp. The AMA's augmented intelligence principles frame the same concern in policy terms: AI is meant to augment physician judgment, not replace it. But the line between augmentation and replacement is drawn inside your own head, every time you open the tool.

This isn't a distant risk. It's the daily posture question facing every clinician using an ambient scribe, a decision-support tool, or a literature synthesis model. And most of us haven't been trained to navigate it.

The Solution Is Metacognitive Friction

Here's the core idea. Metacognition (thinking about your own thinking) is the clinical muscle most at risk of atrophy when AI enters your workflow. It's also the one that makes you irreplaceable.

The solution isn't to avoid AI. The solution is to build deliberate friction into the moments where your thinking would otherwise be quietly replaced.

Metacognitive friction is the practice of pausing, briefly and intentionally, to engage your own reasoning before you look at what the model produced. It's the clinical equivalent of showing your work before checking the answer key. Done consistently, it preserves the reflex that a decade of training built — the reflex that makes you notice when something is off, even when you can't yet articulate why.

This isn't theoretical. It's a repeatable practice you can embed in any workflow, with any tool.

There's one more piece worth naming, because it's genuinely new. Preserving metacognition isn't only about protecting the reasoning you already do well — it's also about building a skill most of us were never trained for: critically reviewing clinical work we didn't author. Medical training teaches you to present, to generate, to decide. It doesn't teach you how to audit a plausible-sounding differential that arrived fully formed on your screen. That's a new cognitive skill, and it deserves deliberate practice.

Four Practices That Preserve the Muscle

1. Pre-commit before you prompt. Before you open the tool for a complex case, write down — even one sentence — what you already suspect. "I think this is perimenopausal insulin resistance with a thyroid contributor." Now run the AI. When the output comes back, you'll notice immediately where it confirms your read, where it diverges, and where it surfaces something you missed. Pre-committing keeps your pattern library active, preventing it from being overwritten by the model's output. A 2025 NEJM AI study on clinical reasoning with AI found that current LLMs, despite excelling on medical exams, still lack the nuanced, probabilistic reasoning that expert clinicians use when updating decisions under uncertainty.

2. Interrogate the output, don't consume it. Every AI output should trigger the same three questions you'd ask a resident presenting a case: What's the evidence? What did you miss? What assumption is this built on? The Stanford HAI 2025 AI Index reports that clinical AI accuracy is improving fast, but hallucination rates remain nontrivial — which means your interrogation isn't optional, it's the safety layer. Consuming AI output uncritically is the behavior that causes cognitive atrophy. Interrogating it is the behavior that builds skill.

3. Protect one task per week from AI entirely. Pick something you used to do unaided — a differential for a complex case, a literature synthesis, a patient protocol — and do it without the tool. Not because AI would do it worse, but because you need the reps. Musicians don't stop practicing scales just because they have synthesizers. The reps aren't for the output. They're for the muscle. A Lancet Digital Health commentary on AI and clinical skill retention made the same argument for radiologists and endoscopists: deliberate unaided practice is what keeps their skills from decaying, even among practitioners who use AI daily.

4. Build the audit skill deliberately.

Reviewing AI-generated clinical work is not the same cognitive task as generating it yourself, nor is it the same as supervising a resident. A resident's reasoning arrives with tells — hesitation, phrasing, the way they hedge — that show you where to probe. AI output arrives smooth, confident, and internally consistent, whether it's right or wrong. That surface polish is the hardest part to evaluate against, because your brain is wired to trust fluency as a proxy for accuracy.

The skill you're building is the ability to read AI output against the grain — to notice what's missing, what's been smoothed over, what assumptions got baked in before the first word was generated.

A few practical ways to build the audit skill: read the output once for what it says, then a second time for what it doesn't say. Ask what a thoughtful colleague would push back on. Notice when the output feels too clean — real clinical reasoning has texture, uncertainty, and trade-offs, and output that lacks those is often hiding something. Over time, this becomes a distinct clinical sense, separate from the diagnostic intuition you already have.

This is the skill the next generation of clinicians will need most, and it's the one no medical school curriculum currently teaches. Build it deliberately, the same way you'd build any other clinical competency — with reps, with feedback, and with the humility to assume the smooth answer is the one that needs the hardest look.

The Posture, Not the Tool

None of this is tool-specific. Whether you're using Abridge, OpenEvidence, Claude, or ChatGPT, the practices are the same. Pre-commit. Interrogate. Protect the reps.

The clinicians who stay sharp in an AI-augmented practice won't be the ones with the best tools. They'll be the ones with the best posture toward the tools they have. Metacognition is preserved by friction — the small, deliberate pauses where you engage your own reasoning instead of deferring to the model's.

Your training didn't make you good at generating differentials. It made you good at noticing. Protect the noticing, and the tools become what they were meant to be: scaffolding for the work only you can do.

This Week in Clinical AI

Nature reports that AI chatbots confidently diagnosed a fictional disease that researchers planted in the literature. A team led by Almira Osmanovic Thunström at the University of Gothenburg invented "bixonimania" — a fake eye condition supposedly caused by blue-light exposure — and uploaded two obviously bogus preprints, complete with a fictional author at a non-existent university and acknowledgements thanking researchers aboard the USS Enterprise. Within weeks, ChatGPT, Gemini, Copilot, and Perplexity were repeating the invented condition as legitimate medical advice, some citing prevalence rates of one in 90,000. The fake papers were subsequently cited in a peer-reviewed Cureus paper, which was retracted only after Nature contacted the journal. A separate Lancet Digital Health study found LLMs hallucinate more when input text looks professionally medical — formatted like a hospital discharge note or clinical paper — than when it comes from social media posts. For clinicians whose patients are increasingly running symptoms through chatbots before the visit, the lesson is counterintuitive and sharp: surface polish is not a signal of accuracy, and the more "medical" a source looks, the more likely the model is to repeat it with confidence. (Nature)

Healthcare CIOs now view AI integration as a competitive necessity — but most health systems are stuck in pilot purgatory. A new Qventus report surveyed more than 60 CIOs, Chief AI Officers, and CMIOs at medium and large national health systems and found the execution gap is stark: while 42% of health systems are actively deploying AI across multiple use cases, only 4% have achieved scaled implementation with measurable outcomes. 94% of leaders said delaying AI creates a competitive disadvantage, and 68% said it will worsen clinician burnout. The barriers aren't capability — they're EHR dependencies, third-party integration sprawl, and the fact that over 50% of systems are burning a quarter of their IT bandwidth just managing vendors. For independent functional and integrative practices, the signal cuts two ways: the window for deliberate, clinician-led AI adoption is narrower than it was six months ago, but the integration chaos inside large systems is exactly the vulnerability that lets smaller, nimbler practices outpace them on actual patient experience. (Healthcare IT News)

The Lancet Digital Health calls 2026 the year agentic AI moves from theory to bedside and flags the risks that come with it. A new editorial tracks how multi-agent AI frameworks have moved into real-world studies over the past year, from simulated transplant selection committees to autonomous ophthalmology workflows to biomedical research copilots. The editors acknowledge the vision is compelling — virtual colleagues that can reason, debate, and coordinate with humans — but argue the integration is hampered by data heterogeneity, unresolved accountability in multidisciplinary teams, persistent algorithmic bias, and decision-making processes that remain opaque even to other agents in the system. Their recommendations are pointed: open interoperable data standards, continuous bias assessment, and multidisciplinary co-design with clinicians, ethicists, and patients at the table. For functional and integrative clinicians, the signal is that "AI as tool" is being quietly replaced by "AI as colleague" in the infrastructure conversations happening one layer above your practice — and the accountability questions aren't yet resolved. Your audit skill is the bridge. (Lancet Digital Health)

Upcoming Conferences & Events

The calendar is filling up. A few worth putting on your radar, and a chance to connect with the Vibrant community in person!

Apr 10–12 — A4M Spring Congress · West Palm Beach, FL The A4M's flagship spring gathering for anti-aging and longevity medicine practitioners. Strong mix of clinical science and practice building. From Vibrant, Sunita will be there, along with Lexi (Dr. G)!

Apr 13 — Vibrant Community Webinar "All Things Peptides with Leonard Pastrana, nuAgeBio" · Virtual. Join Sunita and Leonard Pastrana to discuss the latest changes in the peptide regulatory landscape and what it means for your practice.

May TBD — Vibrant Community Webinar "AI Skills for the Modern Clinician" · Virtual. Join Dr. G and Dr. Sunjya Schweig to walk through the new competencies for the AI-savvy clinician, along with tangible tips for using a variety of tools in your practice.

May 27–30 — IFM Annual International Conference · San Diego, CA The largest gathering of functional medicine clinicians in the world. This year's program reflects the field's rapid expansion into AI-assisted care and multi-omics diagnostics. Vibrant will be there!

Jun 9–11 — Longevity Docs Cannes 2026 · Cannes, France Awards and summit for longevity medicine's emerging leaders. A smaller, high-signal event for clinicians building practices at the frontier.

Nov 5–7 — Private Physicians Alliance Annual Meeting · St. Petersburg, FL The gathering for independent, cash-pay, and concierge physicians navigating practice independence. Practical and peer-driven. Vibrant will be there!

Nov 8–11 — American College of Lifestyle Medicine Conference · Orlando, FL Lifestyle medicine's main annual event — evidence-based approaches to behavior change, chronic disease, and healthspan. Growing overlap with the longevity medicine community. Vibrant will be there!

TBA — Longevity Clinics Roundtables · Buck Institute Clinical practice meets research infrastructure. One for the clinicians who want to be closer to where the science is actually being made.

Know of an event we should add to the list? Reply and let us know.

Until Next Week

AI is genuinely changing what clinical practice can look like. The documentation is faster. The literature synthesis is better. The scaffolding for protocols, patient education, and decision support is more accessible than it's ever been. Those are real gains, and the clinicians using these tools well are absolutely getting leverage from them.

But leverage without metacognition is just quiet atrophy with a faster workflow. The research is starting to show it, the case reports are starting to surface it, and anyone honest about their own practice can feel the pull — the small, daily temptation to accept the output instead of interrogating it.

The four practices in this issue aren't about slowing down. They're about preserving the modern clinician. Pre-commit before you prompt. Interrogate the output instead of consuming it. Protect one task a week from AI entirely. And build the audit skill — the new one, the one nobody trained you for — with the same deliberateness you'd bring to any other clinical competency.

If you've already noticed the pull in your own workflow, or you've built your own practice for preserving clinical judgment in an AI-augmented day, reply and tell us what you're seeing. The best insights in this newsletter come from clinicians living it.

The goal was never to think less. It was to think at a higher altitude — at the intersection of what the data shows, what the evidence recommends, and what this specific patient is ready to do about it. That's the work only you can do. We're here to help you protect it.

—Sunita and Dr. G

👋 Welcome New Readers

The Modern Clinician is written for functional, integrative, and longevity-focused physicians who want to scale their impact and deliver cutting-edge care.

If you liked this one, share it with a colleague! We appreciate you spreading the word.

To learn more about the why behind this newsletter, start with our first post introducing The Modern Clinician.

Keep Reading