The Modern Clinician

The Skills That Close the Gap

Something shifted this year.

The clinicians we meet at conferences aren't asking whether to use AI anymore. They're asking how to get better at it. How to stop fumbling through prompts. How to trust the output without losing their clinical judgment. How to build practices that actually leverage these tools instead of just dabbling.

And there's another shift happening in the exam room: patients are showing up having already consulted AI. They're using ChatGPT Health to interpret their labs, research their symptoms, and sometimes even adjust their own protocols. AI has become a therapeutic tool in their lives, whether we're guiding that process or not. Which means we need fluency not just to improve our own workflows, but to meet patients where they are and counsel them well.

Meanwhile, consider the friction points we've all accepted as inevitable: documentation that eats our evenings, patient histories so dense they take thirty minutes to synthesize, the cognitive load of keeping up with the literature. All of them now have solutions that restore flow to our workflows. AI isn't a future promise for these problems. It's working now, for clinicians who've developed the skills to use it.

That's the gap forming. Not between believers and skeptics, but between clinicians developing real fluency and those still treating AI like a novelty.

Tom Blue, Sunjya Schweig, and I have spent the past several months building the OvationLab framework for becoming an AI-enhanced practitioner. These aren't abstract ideas. They're observable skills: things you can practice, develop, and bring into your work this week.

Three of them are prerequisites to everything else. Master these the way you once mastered anatomy and biochemistry, and the more advanced skills start to build themselves.

Precision Prompting

Most clinicians type into AI the way they Google. Vague question, hope for the best, complain when the output is generic.

Precision prompting is different. It's talking to AI the way you'd present a case to a colleague you respect, with context, specificity, and a clear ask.

Think about the difference between "thoughts on fatigue?" and "I've got a 47-year-old woman experiencing perimenopausal symptoms, ferritin 18, TSH 3.1, fasting insulin 10, cortisol pattern suggests HPA dysfunction, sleep fragmented. What evidence-based lifestyle interventions should I recommend first?"

The second version takes thirty extra seconds. It returns something you can actually use.

The key is treating AI like a consultant, not a search engine. Give it the clinical picture. Tell it what you've already considered. Specify what format would be most useful: a differential, a prioritized list, or a patient-friendly explanation. The more context you front-load, the less time you spend refining.
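If it helps to see the shape of it, here's one way to template that consultant-style prompt. The patient details below are hypothetical; swap in your own case:

Context: 52-year-old man, six months of afternoon fatigue. Key labs: [paste relevant values].
Already considered: sleep apnea (negative sleep study), depression screen unremarkable.
Ask: the top five differentials I might be missing, ranked by likelihood.
Format: one line of rationale per item, most likely first.

Four labeled lines, dictated in under a minute, and the output comes back organized the way you actually think.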

This is a learnable skill. And once you've internalized it, you'll never go back to vague prompts again.

Don't have time to craft the perfect prompt? Voice-type your question into Prompt Cowboy and let it do the structuring for you. Seconds, not minutes.

Human-in-the-Loop Oversight

AI sounds confident when it's wrong. That's the trap.

I've watched AI scribes document "denies chest pain" when the patient said "chest pressure is better today." I've seen AI chatbots list 11-β-hydroxypregnenolone instead of 17-OH-pregnenolone when reviewing the steroidogenesis pathway. In both cases, the information was confidently wrong.

The skill here isn't skepticism; it's active verification. Before you accept any AI output, ask: Does this match physiology? Would I stake my license on it? What breaks if this is subtly off?

This matters even more when you're scaling. If you're building systems, clinical protocols, patient education, and team workflows, errors can compound. The clinicians who will lead in this space are the ones who build verification into their process from day one, not the ones who bolt it on after something breaks.

And it matters when you're counseling patients on their own AI use. They need to understand that the confident-sounding answer they got at 2 am might be wrong in ways that aren't obvious. We're the ones who can teach them how to evaluate what they're reading and protect them from harm.

Iterative Refinement

Here's where most people quit too early.

AI rarely nails it on the first pass. Neither does a differential diagnosis. The value emerges through refinement.

Generic output: broad fatigue differential. After one refinement ("focus on metabolic drivers"): better. After two ("adjust for luteal phase timing"): useful. After three ("prioritize for insulin resistance with elevated cortisol"): now we're thinking together.
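Spelled out, a hypothetical version of that exchange might run:

First pass: "Differential for fatigue in a 44-year-old woman."
Refinement one: "Focus on metabolic drivers. There's a family history of type 2 diabetes."
Refinement two: "Her symptoms cluster in the luteal phase; adjust for that."
Refinement three: "Fasting insulin 14, morning cortisol elevated. Prioritize for insulin resistance."

Each follow-up is a single sentence. That's all refinement costs once the first prompt is on the table.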

Each cycle sharpens the output and sharpens your instincts. You start anticipating what context AI needs. Your prompts get tighter. The whole process compresses.

This is the flywheel. The more you practice, the less effort it takes. And when you're building a practice, that efficiency translates directly into hours back in your week, hours you can reinvest in patient care, in your team, in the work that actually matters.

The Pitfalls Worth Knowing

Like any clinical skill, these come with their own learning curve. Here's what to watch for.

AI Error Vigilance

The more AI gets things right, the easier it becomes to stop checking. Even with strong verification habits, AI's confidence can slip past your filter—especially when the output is well-structured and aligns with what you'd expect to see. Three habits keep this in check:

Cross-check across models. Don't take one AI's word for it. Drop a ChatGPT response into Perplexity or Claude and ask it to verify. Different models have different blind spots. When two disagree, that's the signal telling you where to apply your clinical judgment.

Know what you're looking for before you prompt. If you go in with no expectation of the output, you'll accept whatever sounds good. But if you have a hypothesis—even a loose one—you evaluate the response against your own reasoning instead of being led by it. This is how you preserve your clinical thinking.

Anchor to primary sources. When AI cites a study, a guideline, or a mechanism—verify it exists and says what AI claims. This takes seconds with PubMed or UpToDate, and it catches the most dangerous failure mode: confident output built on a hallucinated reference.
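If you want a starting point for the cross-check itself, here's one hypothetical phrasing to paste into a second model alongside the first model's answer:

"Below is a clinical explanation generated by another AI. Verify each factual claim, flag anything you'd dispute, and cite a primary source, such as a PubMed-indexed study or published guideline, for every correction."

Thirty seconds of friction, and the disagreements it surfaces are exactly where your clinical judgment belongs.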

Competency Compression

AI raises the floor for everyone. A newer clinician with strong AI skills can now manage complexity that used to take years to develop.

This is mostly a good thing. But AI generalizes. It doesn't know your patient population, the nuances of your clinical approach, or the patterns you've learned to recognize through years of practice. It won't weigh the things you weigh. It won't ask the follow-up question you'd ask.

If you're building a practice that stands out—that delivers the kind of care that keeps patients coming back and referring others—you can't let efficiency erode expertise. Use AI for the cognitive heavy lifting. Keep your clinical instincts sharp. That's your edge.

The Perception Gap

Same AI use, completely different patient reactions. One patient thinks AI assistance is cutting-edge care. The next thinks you're outsourcing their health to a computer.

Managing this requires calibration. I've learned to read the room before mentioning AI at all. For the skeptical: "I'm using a tool to double-check my thinking." For the enthusiasts: "This helps me review more research than I could alone." Same truth, different frame.

How you communicate AI use is part of your patient relationship. Get this wrong and you erode the trust that makes everything else possible.

Where This Goes Next

These skills transfer across every tool—ChatGPT, Claude, ambient scribes, whatever emerges next. The technology keeps changing. The competencies are what stay with you.

This is how the modern clinician emerges. Not by chasing every new tool, but by building the foundation that makes any tool useful. The clinicians who define the next era of medicine won't be the ones with the longest tech stack. They'll be the ones who learned to think alongside AI without losing what made them great clinicians in the first place.

You already learned medicine. You can learn to prompt.

How I AI with Dr. Lexi Gonzales

The next evolution of clinical AI isn’t about using more tools—it’s about developing the skills to integrate AI intentionally, safely, and with clinical judgment.

In this week’s How I AI conversation, Dr. Lexi Gonzales, functional medicine doctor and Senior Clinical Implementation and AI Specialist at OvationLab, joined Sunita Mohanty to explore what it actually means to become an AI-Enhanced Practitioner. The discussion focused not on technology for its own sake, but on the realities clinicians face when integrating AI into everyday workflows, decision-making, and patient care.

Rather than emphasizing platforms or features, the conversation centered on how clinicians think with AI: how to evaluate outputs, supervise systems, reduce cognitive load, and integrate AI into documentation, research, patient communication, and operational workflows without compromising standards of care.

A recurring theme was agency. When clinicians lack a shared language or framework for AI, adoption becomes reactive—driven by vendors, systems, or external pressure. The AI-Enhanced Practitioner Framework is designed to reverse that dynamic by defining the core competencies clinicians need to lead AI integration, not simply participate in it.

The conversation reinforced a simple but important idea:
AI fluency in medicine is less about technical mastery and more about clinical oversight, judgment, and intentional use. The clinicians who thrive will be those who can supervise AI the same way they supervise trainees, tests, and protocols—clearly, critically, and responsibly.

Want to go deeper with this framework?

Explore the core skills clinicians need to evaluate, implement, and supervise AI in clinical care.

👇 Click the button below to access the full framework.

Join the Conversation Live at Integrative Healthcare Symposium

This discussion continues live at IHS on Friday, February 20 at 4:45 PM EST.

CPEU Panel Discussion
The AI-Enhanced Practitioner: Clinical Skills for the Next Generation of Longevity Medicine
📍 Murray Hill

For this panel, Lexi Gonzales, ND, Tom Blue, and Sunjya Schweig, MD, will examine how AI is redefining core competencies already central to functional and longevity medicine: data synthesis, pattern recognition, clinical reasoning, and patient communication. Through discussion and demonstration, the session will illustrate how AI-augmented clinical skills reduce cognitive burden, surface previously obscured insights, and support more precise, individualized care strategies.

Participants will gain pragmatic, immediately deployable methods to refine workflow efficiency, enhance interpretive accuracy, and expand clinical capacity within a rapidly evolving care landscape.

Get ahead in 2026 with Vibrant, the AI-powered, all-in-one EHR built specifically for personalized medicine. Schedule a demo with our team to learn more about how we can help you extend your clinical brain and deliver great personalized care.

This Week in Clinical AI

Mainstream investors bet on AI-driven longevity biotech as first partial “de-aging” trial clears FDA
Fortune reports that Life Biosciences has secured FDA clearance for what it calls the first partial “de-aging” human trial, underscoring how AI-guided target discovery and biomarker modeling are moving geroscience from theory to regulated clinical experimentation. With high-profile backers and platforms like Insilico Medicine running dozens of AI-driven longevity drug programs in parallel, longevity therapeutics are crossing into the same mainstream capital and regulatory lanes as oncology and cardiometabolic drugs.

Death Clock wants to turn dynamic mortality prediction into a consumer-facing longevity operating system
Death Clock positions itself as a “clinical-grade” AI longevity platform that ingests labs, wearables, and biometrics to generate a continuously updated life-expectancy estimate and targeted protocol against the “Four Horsemen” of chronic disease. By turning lifespan modeling and biomarker tracking into a closed-loop product that adapts as users change behavior and biology, it reframes mortality risk from a static actuarial concept into a personalized, real-time metric meant to drive adherence and engagement in preventive care.

From the founders of Fitbit, Luffu is building an AI guardian for multi-generational family health
Luffu aggregates medications, symptoms, vitals, visits, and messages for kids, parents, elders, and even pets into a single AI-powered “guardian” that flags concerning trends and surfaces proactive “guardian moments” for the family member coordinating care. By combining multimodal data capture (text, voice, photos, device integrations) with continuous monitoring and privacy-granular sharing, it aims to offload the cognitive burden of caregiving and turn the messy reality of family health into a managed, event-driven stream clinicians and caregivers can actually act on.

👋 Welcome New Readers

The Modern Clinician is written for functional, integrative, and longevity-focused physicians who want to scale their impact and deliver cutting-edge care.

If you liked this one, share it with a colleague! We appreciate you spreading the word.

To learn more about the why behind this newsletter, start with our first post introducing The Modern Clinician.

Keep Reading