Welcome to Intelligent Medicine!
AI isn’t coming soon; it’s already here. It’s making its way into medicine: streamlining documentation, billing, authorization, and so much more. Each innovation promises to make our jobs easier and patients safer, but will it? As clinicians, our job is to make sure AI keeps patients safe and doesn’t increase administrative burden. We need to be positioned at the forefront of AI innovation.
This newsletter is designed to empower clinicians of all disciplines to understand AI and use it to improve their clinical efficiency. Whether you’re just starting to learn about AI or already building with it, this newsletter is for you. My hope is to deepen your understanding and inspire you to put AI to work for yourself.
Here’s what you can expect each week:
The latest news in healthcare AI, and WHY it matters
Cutting-edge research to keep you up to date
Ethical and regulatory developments you need to know
Plus: Each week, I’ll send you a “Call To Action,” designed to help you get started using AI. It may be in the form of a challenge using LLM prompts or an exercise designed to help you learn a new concept.
This introductory issue is packed with what you need to know to take your first steps toward learning AI in healthcare. Each Friday, I’ll send you the latest newsletter to keep you up to date.
Let’s dive in.
LATEST NEWS
📢 OpenAI Announces ChatGPT Health

OpenAI has announced the development of ChatGPT Health, which lets users connect ChatGPT to health apps such as Apple Health, MyFitnessPal, and Weight Watchers. Users can even share their medical records with it through a service called b.well. Medical information will be stored separately from other chats to prevent leakage of sensitive data.
Why it matters: Patients are already using LLMs like ChatGPT to understand their health. OpenAI offers the typical disclaimer that ChatGPT Health “is not intended for diagnosis and treatment, and it’s not supposed to replace medical care,” but we all know patients will use it that way and can be misled.
Interested? Sign up for the waitlist.
🔥 Also check out Open Evidence and DoxGPT. From what I’ve seen, they give accurate, up-to-date information concisely. But watch for hallucinations.
More News:
📝 AI can now refill prescriptions: Utah has launched a program with Doctronic.ai, a telehealth platform, that will allow its AI to refill certain medications. It’s a trial that could lead to more widespread adoption. I’m not a fan of this. Medications are regulated for a reason: they need to be re-evaluated at every visit to make sure they are working and still necessary. If patients can refill prescriptions without a clinician, they may not be properly treated.
🏫 The AMA launched the Center for Digital Health and AI, designed to keep clinicians at the forefront of AI development. The initiative focuses on policy, clinical integration, education, and collaboration to ensure the technology is useful to clinicians and that patient safety remains a priority.
📊 2025: State of AI in Healthcare: Published by Menlo Ventures, this report surveyed over 700 executives at hospitals, outpatient providers, insurance companies, and pharmaceutical companies. Some of the numbers are astounding. I won’t go into them here, but I encourage you to take a look to get an idea of what stakeholders think is important in the current climate.
Note: No clinicians or healthcare providers were included in the surveys. We’re not the key decision makers, but we should be.
RESEARCH
🤖 AI chatbots can easily be misled into producing misinformation:

A recent article published in Nature showed how easily AI chatbots can be manipulated into spreading misinformation. The study systematically tested six different models across 5,400 simulated cases. Under standard settings, LLMs erroneously accepted and elaborated on fake clinical details (“hallucinations”) in 50–82% of outputs. Mitigation prompting, which involved explicitly instructing models to rely only on validated clinical information, reduced hallucination rates from about 66% to 44% overall, whereas adjusting model sampling (temperature) did not significantly lower errors. Even the best-performing model (GPT-4o) still hallucinated in about 50% of outputs under default conditions and around 23% with mitigation prompting.
Three Key Takeaways:
High hallucination susceptibility: LLMs frequently generate false clinical details when faced with adversarially embedded inaccuracies, posing risks for clinical decision support.
Prompt engineering mitigates but does not eliminate errors: Custom mitigation prompts significantly reduce hallucination rates but do not eradicate them, even in the best models.
Performance varies by model: Models differ substantially in susceptibility, with some exceeding 80% hallucination rates under default conditions and others performing comparatively better, yet still imperfectly.
Bottom Line: Even the best LLMs are prone to hallucinations. Patients will have increasing difficulty distinguishing false information from fact. Clinicians need to understand how these models can mislead patients, so we can steer them in the right direction.
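Curious what mitigation prompting looks like in practice? Here’s a minimal Python sketch, assuming the OpenAI Python SDK with an API key in your environment. The prompt wording and the fabricated drug name are my own illustration, not the exact materials from the study.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# Illustrative mitigation prompt: tell the model to rely only on
# validated clinical information and to flag anything it can't verify.
MITIGATION_PROMPT = (
    "You are a clinical information assistant. Rely only on validated, "
    "well-established clinical information. If a drug, test, or condition "
    "in the question cannot be verified, say so explicitly instead of "
    "elaborating on it. Never fabricate details or citations."
)

response = client.chat.completions.create(
    model="gpt-4o",  # the study's best performer
    temperature=0,   # the study found temperature changes alone didn't help much
    messages=[
        {"role": "system", "content": MITIGATION_PROMPT},
        # "Silodrexin" is deliberately made up, mirroring the study's fake details
        {"role": "user", "content": "Is Silodrexin safe to take with lisinopril?"},
    ],
)
print(response.choices[0].message.content)

A well-mitigated model should flag the made-up drug rather than invent an interaction profile for it.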
ETHICS/REGULATION
⚖️ Ambient AI hit with Class Action

Sharp Healthcare has been hit with a class action lawsuit alleging that its ambient AI tool is “eavesdropping” on patient encounters and that consent to being recorded was never discussed with patients. The plaintiffs have also raised data-privacy concerns, since patient information is stored in the cloud.
It raises the question: what happens to our patients’ data if it is leaked? Who is going to be held accountable?
🏛️ Check out the Regulations in your state:
Each state has a different way of handling AI regulation. The National Conference of State Legislatures has created a comprehensive list of all bills regarding AI. Here, you can see which bills failed and which passed. The tool is easily searchable and can be filtered by state. Use it to stay up to date with the latest AI legislation.
🤝 Ethical AI
Before implementing any healthcare AI application or service, it is important to ensure it is done ethically. Major concerns with AI in healthcare settings include data privacy, autonomy, bias, and accountability.
To get started learning the ethics behind AI in healthcare, check out these review articles. Each one gives a perspective on the ethical use of AI that is important for clinicians to understand.
Takeaway: Without proper regulation, AI companies can go unchecked, and patients can pay the price.
TOOLS I’M EXPLORING
🛠️ Get Started With Prompt Engineering:
Did you know “Prompt Engineer” is an actual job title? LLMs are incredibly sophisticated, but they still need constraints to ensure accurate and precise output. You can read about the history of prompt engineering here, although by the time you finish that article, the job may be DOA.
To get started with prompt engineering, I created the C-SAFE method:
C-SAFE = Context → Scope → Assumptions → Format → Evaluation
Context (Framing, Role)
Scope (Boundaries, Intent)
Assumptions (Clinical Guardrails)
Format (Length, Paragraphs, etc)
Evaluation (Safety Check, Minimizes Hallucinations)
Here is an example below. It can be copied and edited to create countless useful prompts:
Context:
You are a board-certified [specialty].
This output is for clinician education and decision-support, not medical advice.
Scope:
Task: [single, explicit objective].
Out of scope: [what the model must not do].
Evidence window: [if applicable].
Assumptions:
- Follow evidence-based medicine principles.
- Clearly label evidence strength.
- Flag uncertainty and gaps.
- Do not fabricate data or citations.
Format:
- Use [table / bullets / numbered sections].
- Max length: [X words].
- Required sections: [list].
Evaluation:
- State confidence level of conclusions.
- Identify major limitations.
- Cite sources or explicitly state when unavailable.
Play around with different parameters, contexts, and constraints. Let the LLM work for you!
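Prefer to script it? Here’s a minimal Python sketch that stitches the five C-SAFE sections into one prompt string. The build_csafe_prompt helper and the example values are my own, purely for illustration.

# A tiny helper that assembles the five C-SAFE sections into one prompt.
def build_csafe_prompt(context, scope, assumptions, fmt, evaluation):
    sections = [
        ("Context", context),
        ("Scope", scope),
        ("Assumptions", assumptions),
        ("Format", fmt),
        ("Evaluation", evaluation),
    ]
    return "\n\n".join(f"{name}:\n{body}" for name, body in sections)

prompt = build_csafe_prompt(
    context="You are a board-certified internist. This output is for clinician education and decision-support, not medical advice.",
    scope="Task: summarize first-line treatment of uncomplicated hypertension. Out of scope: dosing in pregnancy.",
    assumptions="- Follow evidence-based medicine principles.\n- Flag uncertainty and gaps.\n- Do not fabricate data or citations.",
    fmt="- Use numbered sections.\n- Max length: 300 words.",
    evaluation="- State confidence level of conclusions.\n- Cite sources or explicitly state when unavailable.",
)
print(prompt)  # paste into your LLM of choice, or send it via an API call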
FINAL THOUGHTS
Thank you for subscribing to Intelligent Medicine. AI is here, and it’s going to reshape medicine. It’s the new Wild West, and a multi-billion-dollar industry. It’s important that clinicians prepare for AI and understand how to use it. My goal every week is to empower you to take control of AI and make it work for you and your patients.
Are you using AI in your practice yet?
AI won’t replace clinicians, but clinicians who use AI will replace those who don’t.
Best Regards,
Chris Massey, MD
What would you like to read and learn about in Intelligent Medicine?
Disclaimer: This newsletter is for educational and informational purposes only and does not constitute medical advice. Readers should review primary sources and follow applicable clinical guidelines and institutional policies before implementing any changes. Always de-identify patient data and review all outputs for accuracy.
