Wednesday, 17 September 2025

Extending our Patient Education Chatbot into a Patient-Aware Conversational Agent with MCP & FHIR

In this post, we extend our Patient Education Chatbot into a Patient-Aware Conversational Agent that pulls FHIR data as personalized context via MCP (Model Context Protocol).

Please read previous articles for more context:

  1. Building a Voice-Based Health Education Assistant with React, AWS, and ChatGPT
  2. Extending a Patient Education Chatbot with RAG for Trusted Health Information


Healthcare chatbots are quickly moving beyond generic Q&A into patient-specific conversational agents.

Imagine a patient asking:

“What should I know about managing my diabetes based on my recent HbA1c test?”

A basic chatbot might explain what HbA1c means, but a patient-aware chatbot could personalize the answer:

  • Retrieve the patient’s lab results, diagnoses, allergies, and medications directly from their EHR (FHIR API).

  • Combine this patient context with trusted educational sources (Cleveland Clinic, Mayo Clinic, HealthDirect).

  • Generate a safe, tailored response via a Large Language Model (LLM).


This post explains how to extend a React + LangGraph + LangChain chatbot with an MCP server that fetches normalized FHIR data, while staying mindful of HIPAA and data security.


Why MCP + FHIR?

  • FHIR (Fast Healthcare Interoperability Resources): the industry standard for structured health data (diagnoses, labs, meds, allergies).

  • MCP (Model Context Protocol) server: a middleware layer between your chatbot backend and the FHIR server. It:

    • Normalizes raw FHIR JSON into a clean format.

    • Masks or de-identifies PHI where needed.

    • Provides a consistent API for LangChain / LangGraph workflows.

This ensures your chatbot can reason with structured, normalized, patient-specific context instead of raw EHR data dumps.


High-Level Architecture

Here’s the flow:

  1. React App → Patient asks a question by voice (transcribed via AWS Transcribe) or by text.

  2. LangGraph → Orchestrates workflow.

  3. MCP Server → Fetches & normalizes patient data from FHIR Server.

  4. LangChain RAG → Retrieves trusted medical content from Pinecone/OpenSearch.

  5. LLM (OpenAI) → Combines query + patient context + knowledge chunks.

  6. LangSmith → Observability/tracing.

  7. Polly (optional) → Converts answer back to speech.
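The seven steps above can be sketched end-to-end as plain Python. The stub bodies below (`fetch_patient_context`, `retrieve_passages`, `generate_answer` are illustrative names) stand in for the MCP server, the vector store, and the OpenAI call; in the real app each step would be a LangGraph node traced by LangSmith.

```python
def fetch_patient_context(patient_id: str) -> dict:
    # Stub for step 3: the MCP server call, returning normalized,
    # de-identified FHIR data for this patient.
    return {"age": 52, "diagnoses": ["Type 2 Diabetes"]}

def retrieve_passages(question: str) -> list:
    # Stub for step 4: the Pinecone/OpenSearch RAG lookup.
    return ["HbA1c reflects average blood glucose over roughly 3 months."]

def generate_answer(question: str, context: dict, passages: list) -> str:
    # Stub for step 5: the LLM call with the combined prompt.
    return f"Based on your records ({', '.join(context['diagnoses'])}): {passages[0]}"

def answer(patient_id: str, question: str) -> str:
    ctx = fetch_patient_context(patient_id)            # step 3: MCP
    passages = retrieve_passages(question)             # step 4: RAG
    return generate_answer(question, ctx, passages)    # step 5: LLM
```

The point of the shape is that patient context and retrieved knowledge are fetched independently and only meet inside the final prompt, which keeps each node testable on its own.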





Implementation Considerations

1. FHIR Normalization via MCP

  • Parse raw FHIR JSON into simple JSON:

    {
      "age": "52",
      "sex": "male",
      "diagnoses": ["Type 2 Diabetes", "Hypertension"],
      "labs": ["HbA1c: 9.2%", "LDL: 140 mg/dL"],
      "medications": ["Metformin", "Lisinopril"],
      "allergies": ["Penicillin"]
    }
  • Strip unnecessary identifiers (name, MRN, DOB) before sending downstream.
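This normalization step might look like the following sketch. It assumes a FHIR R4 Bundle containing Patient, Condition, Observation, MedicationRequest, and AllergyIntolerance resources with populated `text` fields; `normalize_fhir_bundle` is an illustrative name, and a real implementation would need defensive handling of coded-only (no `text`) fields.

```python
from datetime import date

def normalize_fhir_bundle(bundle: dict) -> dict:
    """Flatten a FHIR Bundle into the simple JSON shape used downstream,
    keeping age/sex but dropping direct identifiers (name, MRN, birthDate)."""
    ctx = {"diagnoses": [], "labs": [], "medications": [], "allergies": []}
    for entry in bundle.get("entry", []):
        res = entry.get("resource", {})
        rtype = res.get("resourceType")
        if rtype == "Patient":
            ctx["sex"] = res.get("gender")
            if "birthDate" in res:
                born = date.fromisoformat(res["birthDate"])
                today = date.today()
                ctx["age"] = (today.year - born.year
                              - ((today.month, today.day) < (born.month, born.day)))
                # birthDate itself is deliberately NOT carried forward
        elif rtype == "Condition":
            ctx["diagnoses"].append(res["code"]["text"])
        elif rtype == "Observation":
            v = res.get("valueQuantity", {})
            ctx["labs"].append(
                f'{res["code"]["text"]}: {v.get("value")} {v.get("unit", "")}'.strip())
        elif rtype == "MedicationRequest":
            ctx["medications"].append(res["medicationCodeableConcept"]["text"])
        elif rtype == "AllergyIntolerance":
            ctx["allergies"].append(res["code"]["text"])
    return ctx
```

Note that age is derived from `birthDate` and then the date itself is discarded, so only the coarse value leaves the MCP boundary.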

2. Retrieval-Augmented Generation (RAG)

  • Crawl and embed trusted health education websites (Cleveland Clinic, Mayo Clinic, HealthDirect).

  • Store the embeddings in Pinecone (or OpenSearch).

  • Retrieve top-N passages relevant to the patient’s query.
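The top-N step is, at its core, a nearest-neighbor ranking over embedding vectors. The sketch below shows just that ranking logic; in the real pipeline the vectors come from an embedding model and the search runs inside Pinecone/OpenSearch, so the tiny hand-made vectors here are purely illustrative.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_n(query_vec, corpus, n=3):
    """corpus: list of (passage_text, embedding) pairs.
    Returns the n passages most similar to the query vector."""
    ranked = sorted(corpus, key=lambda p: cosine(query_vec, p[1]), reverse=True)
    return [text for text, _ in ranked[:n]]
```

A vector database does exactly this, but with approximate-nearest-neighbor indexes so it scales past brute-force sorting.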

3. Prompt Construction

Example combined prompt:

You are a patient education assistant. Use ONLY the context below.

Patient (de-identified):
- Age: 52, Male
- Diagnoses: Type 2 Diabetes, Hypertension
- Labs: HbA1c 9.2% (poor control)
- Medications: Metformin, Lisinopril
- Allergies: Penicillin

Trusted sources:
[Excerpt 1: WHO guidelines on diabetes management...]
[Excerpt 2: Mayo Clinic article on HbA1c...]

Question: What should I know about my condition?
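A prompt in this shape can be assembled programmatically. `build_prompt` is a hypothetical helper that assumes the normalized patient JSON produced by the MCP step (keys `age`, `sex`, `diagnoses`, `labs`, `medications`, `allergies`).

```python
def build_prompt(patient: dict, excerpts: list, question: str) -> str:
    """Combine de-identified patient context, retrieved excerpts,
    and the user's question into a single grounded prompt."""
    lines = [
        "You are a patient education assistant. Use ONLY the context below.",
        "Patient (de-identified):",
        f"- Age: {patient['age']}, {patient['sex'].capitalize()}",
        f"- Diagnoses: {', '.join(patient['diagnoses'])}",
        f"- Labs: {', '.join(patient['labs'])}",
        f"- Medications: {', '.join(patient['medications'])}",
        f"- Allergies: {', '.join(patient['allergies'])}",
        "Trusted sources:",
    ]
    lines += [f"[Excerpt {i}: {e}]" for i, e in enumerate(excerpts, 1)]
    lines.append(f"Question: {question}")
    return "\n".join(lines)
```

Keeping the prompt assembly in one function also gives you a single choke point to audit exactly which patient fields ever reach the LLM.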

4. Compliance & Security

  • Keep PHI inside your VPC when possible.

  • Use TLS 1.2+, KMS for encryption, IAM least privilege.

  • Log patient consent before pulling FHIR data.

  • Add a human-in-the-loop fallback for uncertain answers.


Benefits of This Approach

  • Personalized: Answers tailored to the patient’s actual health data.

  • Grounded: RAG ensures responses come from trusted sources.

  • Safer: De-identification + HIPAA-aligned architecture.

  • Traceable: LangSmith logs all steps for debugging & audit.


Closing Thoughts

By integrating MCP + FHIR context with your chatbot, you transform it from a generic health Q&A bot into a patient-aware digital assistant.

This hybrid model — combining patient context, trusted knowledge, and LLM reasoning — can empower patients with safe, reliable education while keeping sensitive data protected.

The future of digital health assistants isn’t just about answering questions. It’s about personalized, compliant, and context-aware guidance that patients (and clinicians) can trust.



