By: Christina Rosario on April 22nd, 2026


The State of AI in Ambulatory EHR: What's Real vs. What's Hype


Ambulatory care practices are navigating one of the most confusing technology moments in the history of healthcare IT. Every electronic health records vendor is claiming AI capabilities. Every conference session has AI in the title. Every demo now includes a slide about machine learning.


The problem is that "AI" in healthcare software currently describes everything from genuinely useful ambient documentation tools to basic autocomplete functions that have been rebranded for 2026. For practice administrators and clinical leaders trying to make sound technology decisions, sorting real capability from inflated claims is harder than it should be.


This post breaks down what AI is actually doing inside ambulatory EHR platforms today, where the technology still has meaningful limits, and what questions to ask before any vendor gets your signature.



Understanding What "AI" Actually Means in an EHR Context

The term AI gets applied to a wide range of technologies, and not all of them are equivalent. In ambulatory healthcare software, you are most likely encountering one of three types:


Rules-based automation applies predetermined logic to structured data. A claim scrubbing engine that flags missing modifiers before submission is rules-based. It does not learn. It applies logic that humans programmed. This is mature, reliable, and valuable, but it is not machine learning. Many vendors call it AI anyway.
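To make the distinction concrete, a rules-based scrubber can be pictured as a fixed checklist applied to every claim before submission. The sketch below is purely illustrative: the claim fields, the CPT code, and the laterality-modifier rule are invented for the example and do not represent any vendor's actual engine. The point is that every check is logic a human wrote, and the system never adjusts it on its own:

```python
# Hypothetical rules-based claim scrubber. Field names and the example
# rule (CPT 20610 requiring an LT/RT laterality modifier) are invented
# for illustration only.

def scrub_claim(claim: dict) -> list[str]:
    """Return human-readable flags; an empty list means the claim passes."""
    flags = []
    if not claim.get("diagnosis_codes"):
        flags.append("Missing diagnosis code(s)")
    for line in claim.get("lines", []):
        # Fixed, human-authored rule: this joint-injection code needs
        # a laterality modifier before the claim goes out the door.
        if line.get("cpt") == "20610" and not {"LT", "RT"} & set(line.get("modifiers", [])):
            flags.append("CPT 20610 missing laterality modifier (LT/RT)")
    return flags

claim = {
    "diagnosis_codes": ["M17.11"],
    "lines": [{"cpt": "20610", "modifiers": []}],
}
print(scrub_claim(claim))  # flags the missing laterality modifier
```

Adding or changing a rule means a person edits the logic. Nothing here is learned from data, which is exactly why this category is reliable, and exactly why calling it "AI" stretches the term.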


Machine learning models analyze patterns in large datasets and adjust their outputs based on what they find. A coding suggestion tool that improves its recommendations over time by analyzing how providers code similar encounters is machine learning. The quality of the output depends entirely on the quality and relevance of the training data.


Large language models (LLMs) process and generate natural language. Ambient documentation tools that listen to a clinical encounter and draft a SOAP note use LLMs. The sophistication of the output depends on how the model was trained, what specialty-specific data it was exposed to, and how tightly it is integrated with the EHR's coding and billing workflows downstream.


Knowing which category you are looking at changes how you evaluate the vendor's claims and what risks you need to manage.



Where AI Is Delivering Measurable Results Today

Clinical documentation. This is the most mature and validated AI application in ambulatory care. Ambient documentation tools transcribe the provider-patient conversation in real time, structure the content into a clinical note, and surface it for provider review before the patient leaves the room.


Documentation burden is well-documented as a major driver of physician burnout. According to 2024 AMA survey data, physicians spend an average of 13 hours per week on indirect patient care tasks, including documentation, order entry, and test result review. More than 20% report spending over 8 hours per week on EHR-related work outside normal business hours.


Peer-reviewed research on ambient AI documentation tools shows measurable time savings, though the range varies significantly by implementation. A randomized trial published in NEJM AI found a 9.5% reduction in documentation time among physicians using one ambient AI scribe platform. A study from UW Health found that ambient AI reduced documentation time by approximately 30 minutes per provider per day. A narrative review indexed in PubMed Central found reductions ranging from 15% to nearly 29% across multiple studies and implementations.


The range matters. Vendors who cite a single favorable study as representative of what you should expect are overpromising. The honest answer is that results depend heavily on specialty, adoption rate, training, and how well the tool integrates with your existing EHR workflow.


The important caveat: the provider still reviews and signs every note. AI drafts. Clinicians decide. That workflow is not optional; it is a clinical and legal requirement, and any vendor who minimizes it in a pitch is worth scrutinizing.


Claim scrubbing and coding assistance. AI-assisted claim validation catches coding errors, modifier mismatches, and medical necessity gaps before a claim is submitted. The practical outcome is a measurable improvement in first-pass acceptance rates, which directly reduces denial management workload and accelerates collections. ADS clients have achieved a first-pass clean claim rate approaching 99%, built on nearly 50 years of accumulated claims data and a self-contained rules engine maintained entirely in-house.


Specialty-specific documentation support. The most meaningful AI gains in ambulatory care come when the AI understands specialty-specific workflows, not just general clinical language. In behavioral health, ASAM Level of Care assessments are time-intensive by design. ADS's ASAM AI reduces the time to complete a treatment plan from 38 minutes to under 15, returning meaningful clinical time to counselors each day.


The same principle applies in radiology, orthopedics, pain management, and neurology. Generic AI applied to specialty documentation often produces notes that need significant editing before they are clinically accurate. Specialty-trained AI produces drafts that require far less correction.



Where AI Still Has Real Limits

  • AI does not fix upstream problems. If charge capture is inconsistent, if documentation habits are poor, or if your billing team is working from incomplete data, adding an AI layer on top does not correct the root cause. It processes the existing dysfunction faster. The practices that report the best outcomes with AI tools almost always have clean operational foundations first.


  • Training data determines output quality. An AI documentation tool trained primarily on primary care encounters will produce primary care-shaped notes, even when used in a specialty practice. This matters because documentation requirements vary significantly by specialty: an orthopedic procedure note, a pain management evaluation, and a behavioral health treatment plan all have distinct structure, terminology, and payer requirements. Ask every vendor specifically what specialty data their model was trained on and how it was validated.

  • AI introduces its own accuracy risks. A review published in npj Digital Medicine found that even well-performing AI scribe systems carry hallucination rates of approximately 1% to 3%. In healthcare, even a low error rate carries real consequences. The UCLA NEJM AI trial noted that physicians reported AI-generated notes occasionally contained clinically significant inaccuracies, most commonly omissions or pronoun errors. Active physician review of every AI-generated note is not a formality. It is a patient safety requirement.

  • AI does not transfer liability. Every AI-generated note, every AI-suggested code, every AI-flagged claim still requires human review before it becomes a clinical or billing record. The regulatory and liability framework has not changed. Providers are responsible for documentation accuracy. Billing teams are responsible for claim integrity. AI is a tool that supports those responsibilities, not a substitute for them.

  • Compliance infrastructure requires certification, not just algorithms. CMS compliance, Cures Act requirements, HIPAA obligations, and ONC certification standards are not satisfied by having smart software. They require certified systems with documented audit trails, tested workflows, and ongoing regulatory monitoring. Several vendors are marketing AI-driven compliance features that are genuinely useful for workflow but that should not be confused with the certified compliance infrastructure regulatory requirements actually demand.



Questions Worth Asking Before Any AI Demo

The following questions will tell you more about a vendor's actual AI capability than any feature slide:

What specialty-specific training data does your AI use? A vague answer here is a meaningful signal. Vendors with genuine specialty depth can describe their training data, the clinical settings it came from, and how it was validated.

Where does patient data go when it enters your AI system? This is not a paranoid question. It is a HIPAA question. Understand exactly what data the AI processes, whether it is used to further train the model, and what de-identification protocols are in place.


How does your AI handle errors, and what is your correction process? Every AI system produces errors. The question is what happens when it does. Can providers flag incorrect outputs? Does the system learn from corrections? What is the vendor's process for systematic improvement?


Is your AI built in-house or licensed from a third party? Many EHR vendors are wrapping general-purpose AI models in their interface without deep integration into billing and coding workflows. When problems arise, the support chain becomes complicated. Understanding who built the AI and who is accountable for its performance matters when something goes wrong at 4pm on a Friday.


How does the AI connect documentation to billing? Documentation AI that does not feed into your revenue cycle management workflow is a productivity tool. Valuable, but limited. The most meaningful AI implementations in ambulatory care connect the clinical note to the code to the claim in a single integrated workflow.



What Vendor Stability Has to Do With AI

This point does not come up often enough in AI conversations, but it should.


AI is not a product you buy once. It requires ongoing training, regular updates as payer rules and coding standards change, and consistent investment in quality improvement. When you choose an EHR vendor for their AI capabilities, you are making a bet on their long-term commitment to maintaining and improving those capabilities.


Vendor ownership changes reset roadmaps. Private equity-backed software companies operate on exit timelines that do not always align with the multi-year AI development cycles that deliver meaningful clinical results. Understanding who owns your EHR vendor, how long they have operated without ownership changes, and what their long-term development commitments look like is a legitimate part of any AI technology evaluation.



A Framework for Evaluating AI Claims

When a vendor makes an AI claim, run it through these four filters before accepting it at face value:


Is it measurable and sourced? "Reduces documentation time" is a marketing claim. "Reduced documentation time by 9.5% in a randomized trial of 238 physicians across 14 specialties, published in NEJM AI" is a data point. Ask for the study, the sample size, the methodology, and whether the results come from a setting comparable to yours.


Is it specialty-relevant? Results from primary care practices do not automatically transfer to specialty practices. Ask whether the evidence comes from practices in your specialty with similar patient volumes and documentation complexity.


Is it integrated? Documentation improvements that do not connect to coding and billing outcomes have limited revenue impact. Ask specifically how the AI affects claim submission downstream.


Is there a reference you can call? Vendors with genuine results are usually willing to connect you with a client who can speak to their experience directly. Reluctance here is a signal worth noting.



The Bottom Line

AI in ambulatory EHR is not hype. Ambient documentation, AI-assisted coding, and specialty-specific clinical decision support are delivering real, measurable results in well-implemented settings. The research is early but growing, and the practices seeing the strongest outcomes share a few things in common: they started with clean operational workflows, they chose AI built on specialty-relevant training data, and they treated provider adoption as a change management project, not just a software rollout.


The hype is in the word itself. "AI" has become a label attached to everything from basic autocomplete to genuinely sophisticated machine learning. Your job in any evaluation is to get specific. Ask for the study, the specialty, the methodology, and the reference. Vendors who have built something real will be able to answer.


Want to see how AI functions inside a specialty-specific EHR that has been built and maintained by the same team since 1977? Request a Live Demonstration and see the Medics Suite working in your specialty's actual workflow. A real person answers in under 2 minutes at 1-800-899-4237 ext 2264.

Sources: AMA Physician Organizational Biopsy 2024 (ama-assn.org); Lukac et al., "Ambient AI Scribes in Clinical Practice: A Randomized Trial," NEJM AI 2025 (pubmed.ncbi.nlm.nih.gov); University of Wisconsin Health / NEJM AI 2025 (med.wisc.edu); Arora et al., "Transforming Clinical Documentation with Ambient AI Scribes: A Narrative Review," PMC 2025 (pmc.ncbi.nlm.nih.gov); Haltaufderheide et al., "Beyond Human Ears: Navigating the Uncharted Risks of AI Scribes," npj Digital Medicine 2025 (nature.com); CMS ONC Health IT Certification Program (cms.gov)

About Christina Rosario

Christina Rosario is the Director of Sales and Marketing at Advanced Data Systems Corporation, a leading provider of healthcare IT solutions for medical practices and billing companies. When she's not helping ADS clients boost productivity and profitability, she can be found browsing travel websites, shopping in NYC, and spending time with her family.