Epic began in a Wisconsin basement in 1979 with one idea: patient information should follow the patient. Forty-five years later, that idea has grown into the infrastructure of American medicine — and is now the launchpad for AI that doesn’t just store information, but acts on it.
Primary sources: HIMSS 2025 reporting on Epic’s agentic AI strategy (Fierce Healthcare); Microsoft DAX Copilot for Epic documentation; AMA physician burnout surveys 2023–2024; Epic UGM 2025 announcements; peer-reviewed research on EHR adoption and AI clinical outcomes.
In the mid-1970s, Gordon Faulkner, a pediatrician in Madison, Wisconsin, learned that one of his young patients had died. The child’s family had moved to Milwaukee, 75 miles away. When the child fell ill at a new hospital, the physicians there had no access to her medical history. They did not know her conditions. They did not know how to treat her. Gordon believed, quietly and with certainty, that if those doctors had had her records, she would have survived.
His wife, Judy Faulkner, was a graduate student in computer science at the University of Wisconsin. She had taken one of the first courses ever offered on computers in medicine, under a visionary professor named Warner Slack. The morning after hearing about the child, she went back to work with a new sense of urgency. She had already been building a database for tracking patient information over time. It was no longer just a research project. It was a mission.
In 1979, working on a single refrigerator-sized minicomputer in the basement of a house at 2020 University Avenue in Madison, Judy Faulkner co-founded a company called Human Services Computing with $70,000 in startup capital. The company would later be renamed Epic Systems. She has run it ever since, now in her eighties, overseeing a company of 14,000 employees on a 1,670-acre campus outside Madison.
Epic has never gone public. It has never taken outside investment after those early days. It has never made a significant acquisition. Faulkner runs it according to principles she calls “commandments” painted on walls throughout the campus. The first three: do not go public, do not acquire or be acquired, and software must work. This independence has allowed Epic to make long-term decisions that publicly traded companies cannot — including investing decades in data infrastructure that now makes its entire AI strategy possible.
To understand Epic’s story, you have to understand what healthcare looked like before it. For most of the twentieth century, and well into the twenty-first, the medical record was a physical object: a manila folder stuffed with handwritten notes, carbon-copy prescription pads, typed discharge summaries, and stapled lab printouts. It lived in a filing cabinet at a single hospital or clinic. When a patient went to a different facility, their records did not go with them.
The practical consequences were severe. Physicians made decisions without a patient’s full history. Medications were prescribed without knowing what a patient was already taking elsewhere. Allergies discovered at one hospital were unknown at another. Redundant tests were ordered because no one knew the same tests had already been run. Specialists wrote letters to referring physicians that arrived weeks after the visit. Emergency physicians made life-or-death decisions with incomplete information as a matter of routine.
A 67-year-old woman arrives at an emergency department in severe abdominal pain. She takes medications for heart disease but cannot remember their names. Her primary care doctor is in a different health system. His office won’t open for three hours. The hospital has no record of her previous visits because she was last hospitalized at a different facility five years ago.
The ER physician orders a full medication reconciliation from scratch. Blood is drawn for labs that were already done at her cardiologist’s office two weeks ago. She waits. The physician makes decisions based on what the patient can remember, supplemented by educated guesses. This is not a worst-case scenario. This is Tuesday.
Epic’s earliest products addressed this problem on a small scale through the 1980s and 1990s. But adoption was slow and expensive. Most hospitals remained on paper, or on fragmented electronic systems that could not communicate with each other. A patient’s information was still trapped — just now in a digital silo instead of a filing cabinet.
The moment that changed everything was not a technology breakthrough. It was a law. In February 2009, President Obama signed the American Recovery and Reinvestment Act. Buried inside it was the Health Information Technology for Economic and Clinical Health Act — HITECH.
HITECH created a simple but powerful incentive structure: hospitals and physicians who adopted certified electronic health records and demonstrated “meaningful use” of them would receive substantial Medicare and Medicaid payments. Those who had not adopted EHRs by 2015 would face financial penalties. The federal government committed $27 billion to the program. The message to healthcare was unmistakable: digitize now, or lose money.
The results were dramatic. In 2008, fewer than 10% of hospitals had adopted basic EHR systems. By 2015, more than 80% had. EHR adoption among eligible hospitals jumped from 3.2% per year before HITECH to 14.2% per year after. The U.S. healthcare system digitized faster than almost any comparable transformation in any other sector of the American economy.
Epic was the primary beneficiary. Its comprehensive integrated platform — covering clinical documentation, billing, lab ordering, pharmacy, and the patient portal (MyChart) — was exactly what hospital systems needed to meet Meaningful Use requirements quickly. Between 2009 and 2015, Epic went from serving hundreds of hospitals to thousands. Implementation wait lists stretched years into the future. The Verona campus expanded rapidly.
The digitization of American healthcare solved the access problem. Patient information could now follow the patient — at least within systems using the same software. But it created a problem no one fully anticipated: documentation burden.
Paper records were incomplete and fragmented — but they were fast to write. A physician could scrawl a few lines of notes, sign the chart, and move on. Electronic health records were comprehensive and structured, but they required orders of magnitude more data entry. Every medication had to be selected from a dropdown. Every diagnosis had to be coded against a classification system. Every clinical decision had to be documented in a structured template. Every insurance interaction generated forms. The EHR, designed to make information more accessible, also made creating that information far more labor-intensive.
This created a paradox at the center of Epic’s dominance. Epic’s software was simultaneously the essential infrastructure of modern healthcare and a primary driver of clinician burnout. The company built on the mission of serving patients had built a system that was, in many cases, pulling clinicians away from patients. The instrument of care had become a barrier to caring.
This is the context into which Epic began deploying AI. The problem was not primarily a data problem. It was a human problem: how do you give clinicians back the time and attention that digitization had, unintentionally, taken from them?
Before getting to the AI, we need to understand what made it possible: the data. What Epic built over forty-five years — before anyone was thinking about large language models or agentic systems — was the largest longitudinal clinical dataset ever assembled.
Cosmos is Epic’s de-identified clinical data platform. As health systems join Cosmos, their anonymized patient data flows into a shared research database. Today Cosmos contains records for more than 280 million patients, with over 16 billion clinical data points: diagnoses, medications, lab results, vital signs, imaging findings, clinical notes, procedures, and outcomes — accumulated over time, across multiple care settings, in standardized and structured format.
One concrete example: Epic’s Sepsis Prediction Model monitors every inpatient at participating hospitals every 15 minutes, recalculating each patient’s sepsis risk score using dozens of clinical variables updated in real time. A validation study found that after implementing the model, sepsis-related mortality at one hospital system declined by 44%. A separate AI sepsis surveillance tool was associated with a 17% reduction in mortality. These are not incremental improvements. They are lives. And they are made possible by the data foundation that was built over decades before anyone was thinking about AI.
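To make the mechanism concrete, here is a minimal sketch of how a periodic risk-scoring loop of this kind works. This is an illustrative toy, not Epic’s actual model: the variables, weights, baselines, and alert threshold are invented, and the real system uses dozens of variables with proprietary, clinically validated coefficients.

```python
import math

# Illustrative only: a toy logistic risk score over a handful of vitals.
# All weights, baselines, and thresholds below are invented for exposition.
WEIGHTS = {
    "heart_rate": 0.03,   # beats/min above baseline push risk up
    "temp_c": 0.9,        # fever contributes strongly
    "resp_rate": 0.08,    # breaths/min
    "wbc_count": 0.05,    # white blood cell count (10^9/L)
}
BASELINES = {"heart_rate": 80, "temp_c": 37.0, "resp_rate": 16, "wbc_count": 8.0}
INTERCEPT = -4.0  # a stable patient scores well below the alert threshold

def sepsis_risk(vitals: dict) -> float:
    """Return a 0-1 risk score from current vitals (logistic-model sketch)."""
    z = INTERCEPT + sum(WEIGHTS[k] * (vitals[k] - BASELINES[k]) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

def recheck_patients(patients: dict, threshold: float = 0.5) -> list:
    """Recompute every patient's score (in production, every 15 minutes)
    and return the IDs whose risk crosses the alert threshold."""
    return [pid for pid, vitals in patients.items()
            if sepsis_risk(vitals) >= threshold]
```

The design point is the cadence, not the math: a simple model rechecked on every patient every 15 minutes catches deterioration that no clinician watching dozens of patients can track continuously.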
Epic’s AI journey did not start with large language models. It began years earlier with predictive models — statistical systems trained to find patterns in clinical data and alert clinicians to risk before it became crisis.
Each phase built on the previous one. Predictive models required Cosmos. Generative AI required the EHR integrations that Meaningful Use compliance had forced hospitals to build. Agentic AI requires the institutional trust that years of predictive and generative AI deployment have established. This is not a sudden transformation. It is the culmination of a forty-five-year compounding investment.
The most immediately impactful AI deployment Epic has made is also the most human: a system designed to give doctors back the ability to look their patients in the eye.
DAX Copilot (Dragon Ambient eXperience Copilot) is built through a partnership between Epic, Microsoft, and Nuance. A physician opens Epic on a tablet, taps a button to begin a visit, and DAX Copilot activates. It listens to the conversation between the physician and patient — not recording a transcript, but understanding the clinical meaning of what is being said in real time, using large language models fine-tuned on clinical language. When the visit ends, DAX Copilot produces a complete, structured draft clinical note within seconds. The physician reads it, edits anything that needs correction, and approves it for the medical record. Documentation is done before the patient leaves the room.
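The essential property of this workflow is that the draft never enters the chart without explicit physician approval. A minimal sketch of that shape, with the LLM call stubbed out (`summarize_visit` and the data fields are hypothetical, not DAX Copilot’s actual API):

```python
from dataclasses import dataclass, field

@dataclass
class DraftNote:
    text: str
    approved: bool = False
    edits: list = field(default_factory=list)  # corrections become training signal

def summarize_visit(transcript: str) -> str:
    """Placeholder for the fine-tuned clinical LLM that turns a visit
    conversation into a structured draft note."""
    return f"Draft note based on: {transcript[:40]}..."

def ambient_documentation(transcript: str) -> DraftNote:
    """Produce a draft; it is NOT part of the record yet."""
    return DraftNote(text=summarize_visit(transcript))

def sign_note(note: DraftNote, physician_edits: str = "") -> DraftNote:
    """Physician reviews, optionally edits, then approves. Only an
    approved note is written to the chart."""
    if physician_edits:
        note.edits.append(physician_edits)
        note.text = physician_edits
    note.approved = True
    return note
```

The `approved` flag is the human-in-the-loop gate in code form: generation is automated, commitment to the record is not.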
The measured results are striking. Clinicians using DAX report a 50% reduction in documentation time, 70% reduction in feelings of burnout and fatigue, an average of seven minutes saved per patient encounter, and the ability to see an additional five patients per day. At Northwestern Medicine, physicians using DAX in at least half their encounters were seeing an additional 11.3 patients per month. At WellSpan Health, 94% of physicians reported that DAX improved the quality of their patient interactions.
But the most important finding is not in the productivity data. When physicians are not typing, they look at their patients. That shift — from screen to face — changes the encounter in ways that are measurable in patient satisfaction scores but felt most directly in the examination room.
In late 2025, Microsoft extended ambient AI to nurses with Dragon Copilot for nurses — designed specifically around nursing documentation workflows, which historically had been poorly served by existing AI tools. Emergency department nurses who spend roughly a quarter of their shift on EHR tasks are primary beneficiaries. The principle is the same: capture documentation passively while the nurse cares for the patient, rather than demanding documentation at the expense of care.
Ambient AI listens and generates. It produces a draft note that a human reviews. It is powerful, but it is still a tool that responds to input and requires human action to produce an outcome in the medical record. Agentic AI is categorically different.
At HIMSS 2025, Epic’s VP of R&D Seth Howard described the shift directly: “We’ve woven AI into the foundational capabilities of Epic, and we’ve been working towards an agentic platform for the past year or so. We’re really building on the foundation that we created to have generative AI as part of the software to start building reusable components that can take action, under human oversight.”
This distinction matters enormously in healthcare, because the stakes of autonomous action are unlike those in almost any other domain. Every previous chapter in this book introduced AI that makes a single prediction or recommendation: Uber’s algorithm predicts demand and sets prices automatically; Spotify’s system personalizes playlists. In those systems, the prediction and the action are tightly linked, one model output triggering one narrow, well-defined response. Agentic AI breaks that link: it can pursue a complex, multi-step goal that requires reasoning, tool use, and adaptation, not just a single algorithmic output.
And that is precisely why the phrase “under human oversight” in Howard’s statement is not a legal disclaimer. It is the central engineering and governance question of Epic’s entire AI program. The capability of the AI is not the binding constraint. The question is: at what level of verified accuracy, and for which tasks, is it safe to let an AI act without a human checking every output?
Epic is not building one agentic AI. It is building a fleet of specialized agents, each designed for a specific clinical or administrative workflow. Think of them less as a single AI assistant and more as a team of invisible staff members — each with a defined role, operating within defined constraints, always surfacing their work to a human before it takes effect.
Art is Epic’s clinician-facing AI agent. Before a physician walks into an exam room, Art prepares them. It reviews the patient’s entire clinical history in Epic, synthesizes the key information, and generates a concise pre-visit brief: the most relevant diagnoses, medication changes since the last visit, lab values trending in the wrong direction, and care gaps that should be addressed during the encounter. During complex cases, Art can search across the 16+ billion data points in Cosmos to surface comparable patients and what happened to them, answering questions like “What do patients with this combination of findings typically respond to?”
A primary care physician hasn’t seen a patient in 18 months. Without Art, she would spend the first several minutes of the appointment scrolling through the EHR, reconstructing what had happened since the last visit: what medications changed, what specialist notes arrived, what test results need follow-up. It is not unusual for this to consume a third of a 20-minute appointment.
With Art, she opens the chart and reads a focused summary: two chronic conditions, one medication change made by cardiology, a rising creatinine trend nobody has acted on, and a mammogram that is two years overdue. She walks into the room already knowing what matters. The patient gets more care in the same amount of time.
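The logic of a pre-visit brief is a reduction: scan the full chart, keep only what changed or is overdue. A simplified sketch, where the field names and chart schema are hypothetical, not Epic’s actual data model:

```python
from datetime import date

def build_previsit_brief(chart: dict, today: date) -> list:
    """Reduce a full chart to the short list of items a clinician
    should see before walking into the room. Schema is illustrative."""
    brief = []
    for med in chart.get("medication_changes", []):
        brief.append(f"Medication change: {med['name']} (by {med['source']})")
    for lab in chart.get("lab_trends", []):
        if lab["direction"] == "worsening":
            brief.append(f"Worsening trend: {lab['name']}")
    for screening in chart.get("screenings", []):
        if screening["next_due"] < today:
            brief.append(f"Overdue: {screening['name']}")
    return brief
```

The value is in what is omitted: a stable A1c never reaches the physician’s eyes, so the rising creatinine does.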
Penny addresses the part of healthcare patients rarely see: billing and insurance. When an insurance company denies a claim — which happens approximately 17% of the time for commercial insurers — a human staff member traditionally researches the denial, pulls together clinical justification, and writes an appeal letter from scratch. This process takes 45 minutes or more, and many denials go unchallenged simply because the labor cost of fighting them exceeds their financial value.
Penny does this autonomously: it reads the denial, retrieves the relevant clinical documentation from the EHR, identifies the applicable policy language, drafts the appeal letter, and presents it to an administrator for review and submission. The human approves or edits. The whole process takes minutes instead of hours — and the threshold for which denials are worth fighting changes entirely.
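The economics of that threshold shift can be made explicit with a back-of-the-envelope calculation. All numbers below (staff cost, win rate, labor minutes) are illustrative assumptions, not Epic’s figures:

```python
STAFF_COST_PER_HOUR = 40.0  # hypothetical fully loaded staff cost

def worth_appealing(claim_value: float, minutes_of_labor: float,
                    win_rate: float = 0.5) -> bool:
    """Appeal only when expected recovery exceeds the labor cost of appealing."""
    expected_recovery = claim_value * win_rate
    labor_cost = STAFF_COST_PER_HOUR * minutes_of_labor / 60
    return expected_recovery > labor_cost

def appealable_denials(denials: list, minutes_of_labor: float) -> list:
    """Which denied claims are economically worth fighting, given how long
    each appeal takes? Manual process: ~45 minutes. Agentic: ~5 minutes
    of human review."""
    return [v for v in denials if worth_appealing(v, minutes_of_labor)]
```

With 45 minutes of labor per appeal, only large denials clear the bar; at 5 minutes of review time, nearly every denial does. The agent does not just speed up the old process. It changes which work is worth doing at all.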
Epic is also developing conversational AI agents that interact with patients before they arrive. These agents contact patients ahead of appointments, ask about the goals of the visit, gather information about any symptoms that have changed, confirm which medications they are still taking, and identify whether any prerequisite tests should be scheduled. They summarize this for both the patient (in MyChart) and the physician (in the pre-visit brief). The patient arrives more prepared. The physician arrives more informed. The visit is more productive before it begins.
The AI Factory model we introduced in Chapter 1 — data feeding models, models generating predictions, predictions informing decisions, decisions creating value that generates new data — maps directly onto Epic’s AI architecture. With one important difference from every previous chapter: at Epic, the loop includes patients, and the value being created is not just commercial. It is clinical.
Notice the Decision step in the table below. Unlike Uber, which closed the prediction-decision gap entirely, Epic has — by design, and for now — kept the human inside that gap for all clinical decisions. Every AI-generated note, every flagged risk alert, every care gap suggestion passes through a clinician before it becomes part of a patient’s official record or triggers any action. This is the same approach Spotify takes with its “algotorial” model: AI does the fast, scalable work; humans approve the consequential outputs.
| AI Factory Step | What Epic does | Business lesson | Key concept |
|---|---|---|---|
| Data | Every clinical encounter generates structured and unstructured data flowing into Cosmos — the largest longitudinal clinical dataset ever assembled | Healthcare data is uniquely irreplaceable — built over decades through trusted relationships no competitor can fast-follow | Data moat; longitudinal clinical data |
| Model | Azure OpenAI (GPT-4 and successors) fine-tuned on Cosmos clinical data, plus domain-specific models for sepsis prediction, deterioration risk, and readmission probability | General foundation models need domain fine-tuning to perform in specialized fields — clinical expertise lives in what the model is trained on, not just its architecture | Fine-tuning; domain adaptation; foundation models |
| Prediction | Ambient notes (DAX), risk alerts (sepsis, deterioration), care gap identification, insurance appeal drafts, chart summaries, pre-visit briefings, patient outreach messages | In healthcare, AI predictions are outputs to be reviewed, not decisions to be automated — the prediction-decision gap is preserved by design, not by oversight failure | Ambient AI; predictive models; agentic AI |
| Decision | Physicians review and approve AI notes; nurses respond to or dismiss risk alerts; administrators approve appeal letters; clinicians act on care gap notifications | Keeping humans in the decision loop is both ethically correct and strategically necessary for adoption — clinician trust determines deployment success more than technical accuracy | Human-in-the-loop; augmentation vs. automation |
| Value | Reduced burnout, more time with patients, earlier sepsis detection, recovered insurance revenue, fewer preventable readmissions | Healthcare AI value is measured in clinician time returned and patient outcomes improved — not engagement metrics or click-through rates | Augmentation ROI; clinical efficiency |
| Loop back | Every physician edit of an AI note, every accepted or rejected risk alert, every corrected suggestion becomes training signal that improves the next model version | Expert corrections in high-stakes domains are extraordinarily valuable training data — the humans in the loop are also teaching the model how to improve | RLHF; continuous improvement; compounding advantage |
Two concepts introduced elsewhere in this book take on particular importance in healthcare, where the stakes of AI outputs are highest.
Prompt engineering is the practice of deliberately designing the inputs given to an AI system to produce reliable, accurate, and safe outputs. In consumer applications, prompt engineering is often casual — you experiment with different phrasings until you get a useful result. In clinical AI, prompt engineering is an engineering discipline with patient safety implications, and it is treated accordingly.
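The difference between casual and engineered prompting is easiest to see side by side. The template below is an invented illustration of the kinds of constraints a clinical prompt encodes (grounding rules, uncertainty flags, fixed output structure), not an actual prompt used by Epic or Nuance:

```python
CASUAL_PROMPT = "Summarize this visit: {transcript}"

# An engineered clinical prompt: explicit safety rules and a fixed
# output structure, so failures are predictable and reviewable.
CLINICAL_PROMPT = """You are drafting a clinical note for physician review.
Rules:
- Use only facts stated in the transcript; never infer a diagnosis.
- If a medication dosage is unclear, write [DOSAGE UNCONFIRMED].
- Output sections in order: Subjective, Objective, Assessment, Plan.
- List any mention of allergies in a separate ALLERGIES section.

Transcript:
{transcript}
"""

def build_prompt(transcript: str, clinical: bool = True) -> str:
    """Fill the chosen template with the visit transcript."""
    template = CLINICAL_PROMPT if clinical else CASUAL_PROMPT
    return template.format(transcript=transcript)
```

In production such templates are versioned, regression-tested against transcript corpora, and changed through review processes, which is what makes this an engineering discipline rather than trial and error.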
One of the most significant shifts in how Epic and its health system partners build new features is the rise of AI prototyping — using AI tools to produce working drafts of features and workflows in hours or days rather than months.
This is making healthcare software development more responsive to clinicians — which, historically, has been one of its most persistent and costly failures. EHRs were designed by engineers, for workflows that engineers imagined clinicians had. AI prototyping, combined with ambient AI tools that make clinical processes more visible to software developers, is beginning to close that gap.
Every AI system in this chapter keeps a human in the loop at the decision stage. This is intentional — and it will not stay this way forever. Understanding why this design choice was made, and how it will evolve, is one of the most important things this chapter can teach.
AI in healthcare is not a monolithic experience. It looks different depending on where you sit in the system. Here is what the current transformation means for each major stakeholder — including what each group gains and what each group risks.
For physicians, the AI era represents the first genuine attempt to reverse the burnout crisis that EHR adoption created. Ambient documentation is returning time that was stolen by the keyboard. Predictive models are adding a layer of continuous pattern recognition that no individual physician can maintain across dozens of simultaneous patients. Pre-visit briefing agents are making it possible to be fully prepared for complex patients without spending a third of the appointment catching up on the record.
The risk is different: alert fatigue. When AI systems generate too many notifications — too many risk scores, care gap flags, suggested actions — physicians stop reading them. Studies of the sepsis prediction model at multiple hospitals found clinicians dismissing its alerts reflexively because they arrived too frequently; the benefit of the model was neutralized by its own volume. Epic’s challenge is not just building AI that is accurate. It is building AI whose alerts physicians will actually act on — which requires calibration, context, and an understanding of clinical workflow that purely technical optimization cannot provide.
Nurses have historically been the most underserved group in health IT. EHR systems were designed primarily around physician workflows. Nursing documentation was often retrofitted into tools that did not fit how nurses actually work: in motion, across multiple patients simultaneously, with constant interruption. The extension of ambient AI to nursing workflows, the development of AI-assisted flowsheet population, and the use of AI to reduce end-of-shift documentation backlogs are meaningful steps toward healthcare AI that serves the people who deliver most of the direct patient care.
Nurses are also among the most cautious about AI accuracy in specific contexts. A physician reviewing an AI-generated note can catch errors through clinical expertise. In some nursing workflows, the safety redundancies are fewer. Trust calibration matters enormously, and it must be earned task by task, not assumed.
For patients, the most visible change is often the most subtle: physicians who look at them again. The return of presence — eye contact, genuine listening, follow-up questions — is the human dividend of AI absorbing the documentation burden. Patients do not see DAX Copilot. They see what it gives back.
Less visible is the AI acting on their behalf when they are not in the room: the sepsis model that flags their deterioration at 3am, the care gap alert that prompts their physician to order an overdue mammogram, the pre-visit assistant that ensures the appointment addresses what actually matters to them. These interventions are invisible but may determine whether they receive the right care at the right time — or don’t.
For hospital administrators, AI represents a dual opportunity: reduce costs through automation of administrative workflows, and increase revenue through systematic recovery of denied claims and improved physician throughput. For the broader healthcare system, AI that reduces unnecessary tests and hospital readmissions is a societal benefit — but AI that helps hospitals fight more insurance denials is a direct cost to payers. The economic dynamics are not zero-sum across the whole system, but they create real tensions between stakeholders in specific transactions.
The agentic AI Epic is deploying in 2024 and 2025 is closer to the beginning of a long arc than to its endpoint. Here is where the evidence and the trajectory of investment suggest healthcare AI is heading.
The current generation of Epic’s agents operates in human-in-the-loop mode for clinical decisions. As confidence builds through validation studies and outcome data, lower-stakes tasks will shift to human-on-the-loop operation. Insurance appeal letters may be submitted automatically, with humans reviewing exceptions. Routine preventive outreach may become fully automated. The shift will happen task by task, evidence by evidence, not as a single policy change.
Epic’s leadership has explicitly described a roadmap toward native multimodal AI capabilities — processing not just text but video, images, and genomic data. This opens the door to AI-assisted diagnostics that go beyond pattern-matching in text records. An AI agent that can analyze a dermatology photograph, a radiology scan, and the patient’s full clinical history simultaneously — and surface a differential diagnosis for a physician to review — is within technical reach.
The U.S. faces a projected shortage of 86,000 physicians by 2036. AI cannot fill that gap by being a better search engine. But AI agents that handle routine chronic disease monitoring, medication refill management, follow-up on stable conditions, and patient question triage — while escalating anything requiring physician judgment — could meaningfully extend the productive capacity of each physician. This is not a future in which AI replaces doctors. It is a future in which AI makes it possible to extend physician-level oversight to populations who currently have very limited access to it.
Eight questions anchored in the themes of this chapter. These work equally well as written assignments or in-class discussion. Questions 2, 5, and 7 tend to generate the most debate.
MIS 432 · AI in Business · Case Study · For classroom discussion purposes.