2026 Risk Management Programs
AI in Clinical Practice: EMR Integration, Legal Implications and the Road Ahead
Presented by John Sly, Esq.
This two‑hour course introduces how AI is reshaping clinical practice and medical malpractice risk. It covers core AI concepts and common tools, examines risks from both standalone apps and EMR‑integrated AI, and highlights workflow issues such as alert fatigue. Participants review key legal frameworks—including the 21st Century Cures Act, the Information Blocking Rule, and emerging questions about vendor versus provider liability—and learn practical strategies for documentation, oversight, and maintaining clinical judgment. The session concludes with a look at future trends, such as AI agents and evolving FDA regulation of Software as a Medical Device (SaMD), to support proactive risk management.
Objectives
- Define key AI concepts (e.g., AI vs. ML vs. NLP) and identify types of clinical tools (e.g., chatbots, embedded algorithms) to evaluate their appropriate use.
- Assess risks of standalone AI tools in clinical settings, such as diagnostic errors from symptom checkers, and implement safeguards against misinformation.
- Explain AI-EMR integrations (e.g., ambient scribes, risk prediction) and their impacts on workflows, including strategies to combat alert fatigue.
- Outline the 21st Century Cures Act, Cures Rule, and Information Blocking provisions, including obligations for AI transparency and interoperability.
- Analyze emerging malpractice issues, such as liability for AI reliance, informed consent for algorithmic decisions, and standards of care under medical board scrutiny.
- Apply best practices for AI risk mitigation, including documentation of tool use, override policies, and maintenance of clinical oversight, while anticipating trends such as regulatory changes affecting SaMD.