We are now extending our Patient Education Chatbot into a Patient-Aware Conversational Agent that uses FHIR data as personalized context via MCP (Model Context Protocol).
Please read the previous articles for more context:
- Building a Voice-Based Health Education Assistant with React, AWS, and ChatGPT
- Extending a Patient Education Chatbot with RAG for Trusted Health Information
Healthcare chatbots are quickly moving beyond generic Q&A into patient-specific conversational agents.
Imagine a patient asking:
“What should I know about managing my diabetes based on my recent HbA1c test?”
A basic chatbot might explain what HbA1c means, but a patient-aware chatbot could personalize the answer:
- Retrieve the patient’s lab results, diagnoses, allergies, and medications directly from their EHR (FHIR API).
- Combine this patient context with trusted educational sources (Cleveland Clinic, Mayo Clinic, HealthDirect).
- Generate a safe, tailored response via a Large Language Model (LLM).
This post explains how to extend a React + LangGraph + LangChain chatbot with an MCP server that fetches normalized FHIR data, while staying mindful of HIPAA and data security.
Why MCP + FHIR?
- FHIR (Fast Healthcare Interoperability Resources): the industry standard for structured health data (diagnoses, labs, meds, allergies).
- MCP (Model Context Protocol) server: a middleware layer between your chatbot backend and the FHIR server. It:
  - Normalizes raw FHIR JSON into a clean format.
  - Masks or de-identifies PHI where needed.
  - Provides a consistent API for LangChain / LangGraph workflows.

This ensures your chatbot can reason with structured, normalized, patient-specific context instead of raw EHR data dumps.
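To make this concrete, here is a minimal sketch of what such an MCP server could look like, assuming the official MCP Python SDK (FastMCP). The tool name get_patient_context and the fetch_fhir_patient_summary helper are illustrative placeholders for your own FHIR client and normalization logic, not part of any standard:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("patient-context")

def fetch_fhir_patient_summary(patient_id: str) -> dict:
    # Placeholder: call your FHIR server here (e.g. Patient/$everything)
    # and flatten the bundle into a simple dict (see the normalization sketch later).
    return {"conditions": [], "labs": [], "medications": [], "allergies": [], "name": "..."}

@mcp.tool()
def get_patient_context(patient_id: str) -> dict:
    """Return a normalized, de-identified summary of a patient's FHIR record."""
    summary = fetch_fhir_patient_summary(patient_id)
    # Strip direct identifiers before anything leaves the MCP boundary.
    for field in ("name", "mrn", "birthDate"):
        summary.pop(field, None)
    return summary

if __name__ == "__main__":
    mcp.run()  # defaults to the stdio transport
```

Your LangGraph workflow then calls this tool instead of talking to the FHIR server directly.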
High-Level Architecture
Here’s the flow:
- React App → Patient asks a question (voice via AWS Transcribe, or text).
- LangGraph → Orchestrates the workflow.
- MCP Server → Fetches & normalizes patient data from the FHIR server.
- LangChain RAG → Retrieves trusted medical content from Pinecone/OpenSearch.
- LLM (OpenAI) → Combines the query + patient context + knowledge chunks.
- LangSmith → Observability/tracing.
- Polly (optional) → Converts the answer back to speech.
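The flow above maps naturally onto a LangGraph graph. Here is a minimal sketch; the node functions (fetch_patient_context, retrieve_passages, generate_answer) are placeholders you would wire to the MCP client, the retriever, and the LLM respectively:

```python
from typing import List, TypedDict
from langgraph.graph import StateGraph, END

class ChatState(TypedDict):
    question: str
    patient_context: dict
    passages: List[str]
    answer: str

def fetch_patient_context(state: ChatState) -> dict:
    # Placeholder: call the MCP server's get_patient_context tool here.
    return {"patient_context": {"conditions": ["Type 2 diabetes"], "labs": ["HbA1c 7.8%"]}}

def retrieve_passages(state: ChatState) -> dict:
    # Placeholder: query Pinecone/OpenSearch for trusted education passages.
    return {"passages": ["...top-N passages relevant to the question..."]}

def generate_answer(state: ChatState) -> dict:
    # Placeholder: call the LLM with question + patient context + passages.
    return {"answer": "..."}

graph = StateGraph(ChatState)
graph.add_node("patient_context", fetch_patient_context)
graph.add_node("retrieve", retrieve_passages)
graph.add_node("generate", generate_answer)
graph.set_entry_point("patient_context")
graph.add_edge("patient_context", "retrieve")
graph.add_edge("retrieve", "generate")
graph.add_edge("generate", END)
app = graph.compile()

result = app.invoke({"question": "What should I know about managing my diabetes?"})
```

Keeping each stage as its own node is what makes the pipeline easy to trace in LangSmith and to extend later (e.g. adding a human-in-the-loop node).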
Implementation Considerations
1. FHIR Normalization via MCP
- Parse raw FHIR JSON into a simple, flat JSON structure (see the sketch after this list).
- Strip unnecessary identifiers (name, MRN, DOB) before sending downstream.
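A minimal normalization sketch for an HbA1c result, assuming standard FHIR Observation resources; the output field names are illustrative, not a standard:

```python
def normalize_observation(obs: dict) -> dict:
    """Flatten a FHIR Observation resource into a simple, de-identified dict."""
    coding = (obs.get("code", {}).get("coding") or [{}])[0]
    value = obs.get("valueQuantity", {})
    return {
        "test": coding.get("display", "Unknown test"),  # e.g. "Hemoglobin A1c"
        "code": coding.get("code"),                      # LOINC code, e.g. "4548-4"
        "value": value.get("value"),                     # e.g. 7.8
        "unit": value.get("unit"),                       # e.g. "%"
        "date": obs.get("effectiveDateTime"),
        # Intentionally no patient name, MRN, or DOB here.
    }

def normalize_bundle(bundle: dict) -> list[dict]:
    """Normalize every Observation in a FHIR search bundle."""
    return [
        normalize_observation(entry["resource"])
        for entry in bundle.get("entry", [])
        if entry.get("resource", {}).get("resourceType") == "Observation"
    ]
```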
2. Retrieval-Augmented Generation (RAG)
- Crawl and embed trusted health education websites (Cleveland Clinic, Mayo Clinic, HealthDirect).
- Store the embeddings in Pinecone.
- Retrieve the top-N passages relevant to the patient’s query.
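A minimal retrieval sketch, assuming the Pinecone index (the name "health-education" is hypothetical) has already been populated, and that langchain-openai and langchain-pinecone are installed with the usual API keys in the environment:

```python
from langchain_openai import OpenAIEmbeddings
from langchain_pinecone import PineconeVectorStore

# Assumes OPENAI_API_KEY and PINECONE_API_KEY are set in the environment.
embeddings = OpenAIEmbeddings(model="text-embedding-3-small")
vector_store = PineconeVectorStore.from_existing_index(
    index_name="health-education",  # hypothetical index of embedded trusted pages
    embedding=embeddings,
)

retriever = vector_store.as_retriever(search_kwargs={"k": 5})  # top-5 passages
docs = retriever.invoke("How is HbA1c used to manage type 2 diabetes?")
passages = [d.page_content for d in docs]
```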
3. Prompt Construction
Example combined prompt:
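A sketch of what the combined prompt could look like, built with LangChain’s ChatPromptTemplate; the section labels, guardrail wording, and example values are illustrative, not a fixed format:

```python
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([
    ("system",
     "You are a patient education assistant. Answer using ONLY the trusted "
     "content provided. Personalize the explanation with the patient context, "
     "do not give a diagnosis, and advise the patient to consult their clinician."),
    ("human",
     "Patient context (de-identified):\n{patient_context}\n\n"
     "Trusted content:\n{passages}\n\n"
     "Question: {question}"),
])

messages = prompt.format_messages(
    patient_context="Type 2 diabetes; latest HbA1c 7.8%; on metformin.",
    passages="- HbA1c reflects average blood sugar over roughly 3 months ...",
    question="What should I know about managing my diabetes based on my recent HbA1c test?",
)
```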
4. Compliance & Security
- Keep PHI inside your VPC when possible.
- Use TLS 1.2+, KMS for encryption, and IAM least privilege.
- Log patient consent before pulling FHIR data (see the sketch below).
- Add a human-in-the-loop fallback for uncertain answers.
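One way to enforce the consent point is to gate the MCP/FHIR fetch behind a consent check and write an audit entry; check_patient_consent and the "phi_audit" logger are hypothetical stand-ins for your own consent store and audit trail:

```python
import logging

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("phi_audit")

class ConsentError(Exception):
    pass

def check_patient_consent(patient_id: str, scope: str) -> bool:
    # Hypothetical consent-store lookup; replace with your own implementation.
    return False

def fetch_context_with_consent(patient_id: str, get_patient_context) -> dict:
    """Gate the FHIR fetch behind a consent check and audit both outcomes."""
    if not check_patient_consent(patient_id, scope="education_chatbot"):
        audit_log.warning("FHIR fetch blocked: no consent on file for patient %s", patient_id)
        raise ConsentError("Patient has not consented to data use for this feature.")
    audit_log.info("Consent verified; fetching FHIR context for patient %s", patient_id)
    return get_patient_context(patient_id)  # e.g. the MCP tool from earlier
```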
Benefits of This Approach
- ✅ Personalized: Answers tailored to the patient’s actual health data.
- ✅ Grounded: RAG ensures responses come from trusted sources.
- ✅ Safer: De-identification + HIPAA-aligned architecture.
- ✅ Traceable: LangSmith logs all steps for debugging & audit.
Closing Thoughts
By integrating MCP + FHIR context with your chatbot, you transform it from a generic health Q&A bot into a patient-aware digital assistant.
This hybrid model — combining patient context, trusted knowledge, and LLM reasoning — can empower patients with safe, reliable education while keeping sensitive data protected.
The future of digital health assistants isn’t just about answering questions. It’s about personalized, compliant, and context-aware guidance that patients (and clinicians) can trust.