AI Hype Flourishes
AI hype continues to grow. Yesterday, a major shoe retailer, Allbirds, announced they were pivoting in a completely new direction: AI compute infrastructure. The result? Their stock price soared from $3 to nearly $20, lifting their market cap from $21 million to about $150 million. My takeaway? AI's greatest use case continues to be the ability to create money from nothing.
What’s happening this week in AI
Big Pharma and AI companies join forces
AI still can’t replace doctors, but can it replace administrators?
The hype is real, but trust may not be
Plus: Simplify your learning with Claude
Let’s dive in.
LATEST NEWS
🧠 Big Pharma tightens ties with AI companies: Companies like Novo Nordisk and Novartis are expanding partnerships with AI firms like OpenAI and Anthropic, aiming to accelerate drug development and data analysis. At the same time, executives from pharma are moving directly into AI company leadership roles.
Why this matters: The line between tech and pharma is blurring quickly, and that will shape the tools clinicians ultimately use.
💊 Amazon pushes deeper into AI drug discovery: Amazon just launched a new AI platform designed to accelerate early-stage drug discovery, helping researchers generate and test candidate molecules faster than traditional workflows. Early partners include major pharma players, and the scale here is significant.
Why this matters: Big Tech is not experimenting anymore; it is building infrastructure that could reshape how therapies are discovered.
RESEARCH
📉 Many clinical AI models rely on questionable data: A recent analysis found that some disease prediction models were trained on flawed or low-quality datasets, raising concerns about real-world reliability. The researchers noticed some of the data did not match what you would see in real patients, leading them to suspect some of it was fabricated. Some of these models may already be in clinical use.
Key takeaway: Model performance is only as good as the data underneath it. Bad data = bad medicine.
🤖 AI chatbots struggle with medical accuracy: A study found that AI chatbots provided poor or misleading answers in a significant portion of medical queries, especially in misinformation prone areas. This raises real concerns as patient use increases.
Key takeaway: Many people are turning to chatbots and LLMs to answer their basic medical questions, but inconsistent answers create problems that are difficult to solve.
🧠 AI still struggles with healthcare’s biggest cost problem: A Stanford analysis highlights that despite rapid progress, AI has not yet solved the core inefficiencies driving healthcare costs. Administrative complexity remains a major unresolved challenge.
Key takeaway: Reducing administrative burden should be a bigger focus than it is. The biggest problems in healthcare are not purely technical.
ETHICS/REGULATION
📊 Patients are already using AI, but don’t fully trust it: A recent Gallup poll shows about one in four Americans report using AI for health information, yet only a small fraction strongly trust its accuracy. Many are using it to supplement, and sometimes replace, clinical visits.
Why this matters: Adoption is happening from the patient side faster than clinicians may realize. As I discussed above, medical errors from AI are still significant.
🏥 Healthcare leaders acknowledge moving past hype: At recent industry meetings, health system leaders are increasingly acknowledging the gap between AI hype and real operational value, especially in revenue cycle and workflow automation.
Why this matters: The industry is starting to self-correct, but slowly.
TOOLS I’M EXPLORING
📖 Create Comprehensive Reviews and Guides with AI
Medicine is complicated. We spend years absorbing vast amounts of information, but inevitably some of it slips away. When we need to recall it, we reach for textbooks or Google or (lately) OpenEvidence. But what if you want to take a deeper dive on a newer topic and don't know where to start?
For anything you want to learn more about, you can use Claude to create a comprehensive guide. This can be anywhere from a picture diagram to a full textbook.
I have been using this to better organize and understand the current data on the procedures we do in Pain Management. With insurance constantly circling like vultures trying to take away our procedures, it's important to understand them fully, including proper patient selection and efficacy data.
Try this out. Put this prompt (or a similar one) into the LLM of your choice. It doesn’t need to be overly engineered, but make sure you write everything you want included in the final product.
"Create a prompt for Claude to create a PDF guide discussing common peripheral nerve blocks used in pain management. Discuss indications, patient selection, risks/benefits, CPT codes, and the data behind each nerve block. Bring in pictures from the internet whenever possible. Always cite information used with verified sources."
Put the resulting prompt into Claude, edit as you see fit, et voilà! Check out the result below. I didn't edit anything and accepted all changes, so I can't vouch for every detail. With that said, make sure you double-check everything for accuracy!
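If you'd rather script this two-step workflow (ask the model for a refined prompt, then run that prompt), here's a minimal sketch using the Anthropic Python SDK. The model name, function names, and the exact prompt wording are my assumptions for illustration, not part of the newsletter's workflow; the API call requires `pip install anthropic` and an `ANTHROPIC_API_KEY` environment variable.

```python
# Sketch of the "meta-prompt" workflow: build the request for a refined
# prompt, send it to Claude, then feed Claude's answer back to generate
# the guide itself. Model name and structure are assumptions.
import os


def build_meta_prompt(topic: str, sections: list[str]) -> str:
    """Assemble the 'create a prompt for Claude' request from its parts."""
    section_list = ", ".join(sections)
    return (
        f"Create a prompt for Claude to create a PDF guide discussing {topic}. "
        f"Discuss {section_list}. "
        "Always cite information used with verified sources."
    )


def generate_guide(meta_prompt: str, model: str = "claude-sonnet-4-20250514") -> str:
    """Step 1: get a refined prompt. Step 2: run it. (Makes network calls.)"""
    import anthropic  # deferred import so build_meta_prompt works without the SDK

    client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])
    refined = client.messages.create(
        model=model,
        max_tokens=1024,
        messages=[{"role": "user", "content": meta_prompt}],
    ).content[0].text
    guide = client.messages.create(
        model=model,
        max_tokens=8192,
        messages=[{"role": "user", "content": refined}],
    ).content[0].text
    return guide


if __name__ == "__main__":
    prompt = build_meta_prompt(
        "common peripheral nerve blocks used in pain management",
        ["indications", "patient selection", "risks/benefits",
         "CPT codes", "the efficacy data behind each block"],
    )
    print(prompt)  # paste this, or pass it to generate_guide(prompt)
```

The point of splitting out `build_meta_prompt` is that you can tweak the topic and section list per specialty without rewriting the prompt by hand each time.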
FINAL THOUGHTS
The AI bubble conversation is no longer hypothetical. When capital floods in this quickly, it changes behavior, expectations, and decision making across the system.
The practical move is to stay grounded. Use what works, question what doesn’t, and remember that funding cycles and clinical value are not the same thing. The key, as usual, is to make sure it continues working for YOU. You are the only one who can distinguish AI hype from tools that actually work.
If this resonated, share it with someone navigating AI decisions in your organization.
Best Regards,
Chris Massey, MD
“Be patient toward all that is unsolved in your heart.”
🎬 IN CASE YOU MISSED IT
Are you enjoying Intelligent Medicine?
Disclaimer: This newsletter is for educational and informational purposes only and does not constitute medical advice. Readers should review primary sources and follow applicable clinical guidelines and institutional policies before implementing any changes. Always de-identify patient data and review all outputs for accuracy.
