AI is improving, but still has limitations

This week felt like a turning point in how “real” healthcare AI is becoming. Not just pilots or proofs of concept, but infrastructure getting built, consumer tools getting closer to the medical record, and clinicians pushing back when implementation misses the mark. I’m using AI to study, treat my patients better, and keep up to date with a constantly changing landscape.

What’s happening this week in AI

  • Physicians nearly universally adopt AI

  • Killer robots may be further away than we think

  • DC rolls out the latest regulatory framework

Plus: The best tool for your clinical questions

Let’s dive in.

LATEST NEWS

🩺 Physicians nearing universal AI adoption: A new survey suggests nearly all U.S. physicians now use some form of AI in their workflow, with adoption approaching 100 percent. Most use cases center on documentation, clinical decision support, and administrative efficiency, though many clinicians still report concerns about accuracy, liability, and workflow integration. The conversation is no longer about whether to use AI, but about how to use it safely and effectively.
Why this Matters: We have crossed the tipping point: AI is no longer optional in clinical practice, but thoughtful implementation remains the real challenge.

🧠 Fitbit AI coach taps into medical records: Fitbit is rolling out an AI health coach that can read your labs, meds, and visit history to deliver personalized guidance. It is also adding glucose insights and care navigation, which starts to blur the line between wellness and clinical decision support.
Why this Matters: Patients may soon walk into clinic with AI-generated interpretations of their own chart.

⚠️ Kaiser clinicians push back on AI triage: Therapists are raising concerns that AI-supported intake workflows are delaying care and missing high-risk patients, with some alleging unsafe triage decisions. The tension here is not about AI replacing clinicians, but about where it sits in the workflow.
Why this Matters: Poorly placed AI can increase risk, even if the model itself performs well.

RESEARCH

🤖 Robotics and AI in medicine roadmap (Preprint): A multidisciplinary workshop report outlines what is holding back AI-enabled robotics in clinical care, including a lack of shared datasets, evaluation standards, and regulatory clarity. It highlights opportunities in surgery, rehab, and austere environments, but emphasizes the need for coordinated national infrastructure.
Key Takeaway: Clinical AI will stall without shared benchmarks and real-world validation ecosystems.

⚖️ Limits of agentic AI in healthcare (Preprint, not peer-reviewed): This qualitative study highlights a major gap between the hype of autonomous AI agents and the reality of tightly constrained clinical use. Safety, liability, and unclear definitions of “agency” continue to limit real-world autonomy.
Key Takeaway: Fully autonomous clinical AI remains more narrative than reality, for now.

ETHICS/REGULATION

🏛️ White House releases national AI policy framework: A new federal blueprint outlines a lighter-touch approach to AI regulation, focusing on innovation, workforce, and federal coordination over strict oversight. It is not binding but signals direction of travel.
Why this Matters: Regulatory posture may accelerate clinical AI adoption, with fewer early barriers.

⚖️ Healthcare AI adoption push at HHS: Industry groups are actively lobbying for reduced regulatory friction, including easing requirements for clinical decision support tools and EHR-integrated AI.
Why this Matters: The definition of “clinical-grade AI” may become more flexible, for better or worse.

TOOL I’M EXPLORING
🧠 OpenEvidence


What it does: AI-powered medical search engine for rapid clinical answers.

How I use it: I use this when I want a quick, evidence-backed answer without digging through multiple tabs. It is especially helpful mid-clinic when a guideline detail slips my mind. I treat it like a faster PubMed with synthesis.

Pro-tip: Ask for diagrams, tables, or flow charts whenever possible. For me, visual aids make it much easier to understand and process the information it gives me.

Prompt to try:

Create a comprehensive first-line treatment plan for chronic low back pain. Discuss the pathophysiology and typical clinical course. Include lifestyle interventions as well as pharmacologic options for treatment. Create tables comparing the different treatment strategies. Use only verified citations.

FINAL THOUGHTS

AI in medicine is only gaining momentum. As more clinicians adopt these tools, understanding them and keeping pace becomes that much more important. You’re not behind yet, but it’s only going to get more complicated from here.

Keep an eye out over the next couple of weeks: I’ll be unveiling something big to help you make AI work for you.

Best Regards,
Chris Massey, MD

“Science is not only a disciple of reason but also one of romance and passion.”

— Stephen Hawking

Are you enjoying Intelligent Medicine?

Disclaimer: This newsletter is for educational and informational purposes only and does not constitute medical advice. Readers should review primary sources and follow applicable clinical guidelines and institutional policies before implementing any changes. Always de-identify patient data and review all outputs for accuracy.
