Monday, 28 October 2013

Mirth Client - Android


With this post I wish to share an exciting piece of research and development: a Mirth dashboard on an interface administrator's mobile device (an Android device, in this case).

I have just relocated to Pune for a new job and was not able to post anything during the move. At my new job I am working on a mobile healthcare platform and evaluating Mirth Connect as the interface engine for a few of our clients. The last time I worked on Mirth was in 2008, but I feel the core of all integration engines is the same.
Coming back to this article: the idea of a Mirth client on Android is not my own; it came from my manager in a casual discussion, and I picked it up.
There is little that an interface administrator could do, or would prefer to do, with channels on a mobile device (create/edit/deploy), but there is a lot that he can monitor and control (view statistics, start/stop/pause/resume).
Let's first explore the application; then I will share some technical details.
The application displays the Mirth Connect dashboard on an Android device. It requires login credentials to connect to the Mirth Server, plus the HTTP endpoint of a special Mirth channel that serves as the base for this application.
On successful connection, the application displays a dashboard with the list of deployed channels.
[Screenshots: Login, Channel List with Status, Expanded Channel Statistics]

Channel status indicators:
  • Started
  • Stopped
  • Paused

[Screenshot: Channel Action menu]
Long press any channel to display the action menu (different actions are offered for started, paused, and stopped channels).

The dashboard refreshes automatically every 10 seconds (the default interval) to sync recent data, i.e. channel statistics & status. Every time the dashboard refreshes, it sends a request to our special Mirth channel. A short refresh interval could impact the performance of the Mirth Server (I have not tested this).
To use this application, your device should be connected to the same network as the Mirth Server over wireless LAN.
Now, on to my research experience (note: I am not a Java or Android developer, and am just learning these things).

Approach - 1 (not convincing)

My initial idea was to just display the deployed channels with their statistics. I started exploring the Mirth database to get these details, which was not difficult. The real problem was getting these details onto the Android device, and I couldn't find a feasible solution: one option was a service listening for requests, but that would mean a complex deployment for a simple application.

Approach - 2 (failed)

After putting Approach 1 on hold, I started digging into the Mirth API and found that it provides all the functions to get the data I wanted (the deployed channel list with statistics) and much more, such as controlling the channels (start, stop, pause, resume, deploy, undeploy).
So, with the Mirth Client API, I developed a small Java project in Eclipse and successfully got the channel list and sent commands to start/stop/pause/resume channels.
After the successful Java project I tried the same thing in an Android project and oops... it didn't work. Same code, same JARs as the Java project, but the Android project could not resolve some classes from the JARs (ClassNotFoundException). I tried many things to fix this with no luck; I suppose the problem was with JDK versions. The project also became too heavy to load on a low-memory device after referencing all the mirth-client libraries.

Approach - 3 (worked like a champ)

While exploring ways to use the Mirth client APIs I found something that led me to the current solution.
The solution is a Mirth channel with an HTTP Reader source and a JavaScript Writer destination.
The HTTP Reader listens on a specific port and expects an HTTP POST request whose body is an XML document with the request details: the login credentials and the requested action (fetch the channel list, or start/stop/pause/resume a channel).
The HTTP Reader passes the XML to the JavaScript Writer, which handles the request and returns a response XML that the HTTP Reader sends back to the client.

The JavaScript Writer parses the request XML to identify the request type. If the request type is not "fetch channel list", the request carries an additional parameter, channelId, identifying the channel on which the operation is to be performed.
For a "fetch channel list" request it returns the list of channels; for other requests it returns a status code: 0 for failure, 1 for success.
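To make the request/response contract concrete, here is a small Python sketch of a client for this channel. The element names (request, username, action, channelId, status) are my illustrative assumptions, not necessarily the exact schema used by the channel published on GitHub:

```python
import xml.etree.ElementTree as ET

def build_request(username, password, action, channel_id=None):
    """Build a request body for the channel's HTTP Reader.
    Element names here are assumptions for illustration."""
    root = ET.Element("request")
    ET.SubElement(root, "username").text = username
    ET.SubElement(root, "password").text = password
    ET.SubElement(root, "action").text = action  # e.g. "list", "start", "stop"
    if channel_id is not None:
        ET.SubElement(root, "channelId").text = channel_id
    return ET.tostring(root, encoding="unicode")

def parse_status(response_xml):
    """Extract the 0 (failure) / 1 (success) status code from a response."""
    return int(ET.fromstring(response_xml).findtext("status"))

req = build_request("admin", "admin", "start", "some-channel-id")
assert "<channelId>some-channel-id</channelId>" in req
```

The Android app POSTs such a body to the channel's HTTP endpoint on every dashboard refresh or channel action.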

The source code of this application is available on my GitHub: https://github.com/j4jayant/MirthClient-Android/tree/master/mirthClient
The Mirth channel designed for this application is available in XML format, and can be imported using the Mirth Administrator or CLI: https://github.com/j4jayant/MirthClient-Android/tree/master/mirthChannel
The Android APK file can be downloaded from here: mirthClient.apk
This is not a carefully architected and designed solution; it can certainly be improved.

Thursday, 8 August 2013

HL7 Viewer - Android App


As I go deeper into open source technologies, I am gaining more and more interest.
Continuing my research on implementing HL7 with different technologies, I have implemented a simple HL7 Viewer on Android.

I started learning Android development recently, after I replaced my Windows Phone with an Android phone. I was reading an Android e-book (Beginning Android 4 Application Development by Wei-Meng Lee), and after the first two chapters I was able to write this app. I had message parsing scripts ready from my earlier C# implementations, and converting them to Java was not an issue.
I am not sure if people would use an HL7 viewer on smartphones/tablets, but it was a good learning experience for me, and I would appreciate your suggestions for improving it as an Android app.
Here are a few screenshots of the application.

[Screenshots: Enter HL7 Message, Click Parse, Segment List, Expanded MSH Segment]

It has two Activities (two Forms, for the .NET folks). The first Activity has a TextBox that accepts input; you can type or paste an HL7 message here. There is also a Parse button on this Activity.
Clicking this button parses the HL7 message and
  • takes you to the second Activity if the message is parsed without errors
  • displays an error message at the bottom in case of parsing errors

The second Activity displays the parsed HL7 message in an ExpandableListView, where the segments are the groups and the fields are the list items. I append a repetition count to segment names to handle repeated segments.
I have used the ExpandableListView code from the tutorial below and modified it a bit for my use.
http://theopentutorials.com/tutorials/android/listview/android-expandable-list-view-example/
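The grouping logic itself is simple. Here is a minimal Python sketch of it (the app itself is written in Java, so this only mirrors the idea): split the message into segments, append a repetition counter to each segment name for the group label, and list the fields under it.

```python
import re
from collections import defaultdict

def parse_hl7(message):
    """Split an HL7 message into (group_label, fields) pairs, appending a
    repetition count to segment names so repeated segments get distinct
    group labels, as the app's ExpandableListView does."""
    groups = []
    seen = defaultdict(int)
    for segment in re.split(r"[\r\n]+", message.strip()):
        name = segment[:3]
        seen[name] += 1
        label = "%s (%d)" % (name, seen[name])
        fields = segment.split("|")[1:]  # token 0 is the segment name
        groups.append((label, fields))
    return groups

msg = "MSH|^~\\&|ADT|ADI\rIN1|1|I123\rIN1|2|I456"
labels = [g[0] for g in parse_hl7(msg)]
assert labels == ["MSH (1)", "IN1 (1)", "IN1 (2)"]
```

(The real app also special-cases MSH-1/MSH-2, since the field separator itself is a field of MSH.)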

Development Environment


  1. Windows 7 32 bit
  2. Android Developer Tools (ADT) - v22.0.5-757759
  3. JRE 7

Limitations


  1. Only the following delimiters are supported
    1. Field Separator - |
    2. Segment Separator - \r or \n
  2. Limited validation of the message
    1. Message should start with MSH
    2. MSH should contain its 12 required fields
    3. Segment names should be three characters
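The three validation checks above can be sketched as follows (Python for brevity; the app itself is Java):

```python
def validate(message):
    """Apply the viewer's three checks: the message starts with MSH,
    MSH carries its 12 required fields, and every segment name is
    exactly three characters."""
    segments = [s for s in message.replace("\r", "\n").split("\n") if s]
    if not segments or not segments[0].startswith("MSH"):
        return False, "Message should start with MSH"
    if len(segments[0].split("|")) < 12:
        return False, "MSH should contain its 12 required fields"
    for seg in segments:
        if len(seg.split("|")[0]) != 3:
            return False, "Segment names should be three characters"
    return True, ""

ok, _ = validate("MSH|^~\\&|ADT|ADI|ADT-1|ADI-1|20050215||ADT^A01|MSGADT003|T|2.4")
assert ok
```

A message failing any check keeps the user on the first Activity with the error text shown at the bottom.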

I have tested this on a Samsung Galaxy S Duos.
I would love to share the APK with anyone interested in reviewing it.

Tuesday, 9 July 2013

HL7 Analysis with Hadoop!


My first encounter with Hadoop, Big Data, Ubuntu, Java, and Python! Learning new things is real fun.

I have extended the very popular WordCount example and implemented a Hadoop MapReduce job to count the different trigger events in a file of HL7 messages. I provide this MapReduce job in both Java and Python in this article.
Here is a brief background of my research, to encourage people to start exploring Hadoop and Big Data.

 I had some free time recently while recovering from a viral fever, and thought of reading some new stuff. I had heard a lot about Hadoop & Big Data but never had a chance to look into them.
 I searched a bit on Google for ways to get hands-on. I am a .NET developer, so I first checked whether I could use Hadoop from .NET, and found that Microsoft is on the way with HDInsight on Windows Azure. After some analysis I decided to look at alternatives, and found that Hortonworks provides ready-to-use appliances for VirtualBox & VMware.
 I downloaded the VirtualBox appliance. After installing & configuring it, I discovered that my old Sony Vaio laptop with a T6400 processor doesn't support Intel Virtualization.
 I then looked at other alternatives and found some posts about installing Hadoop on Ubuntu. I remembered Ubuntu from my college days, so I downloaded & installed it on my laptop. I didn't know what LTS (long-term support) meant at the time, so I grabbed the most recent release, 13.04, rather than the LTS release (12.04) :) but it's working fine.
 Wow! The Ubuntu installation was as simple as Windows 7. Even the Hadoop 1.1.2 (current stable release) configuration was quite simple, and luckily I didn't face many problems, thanks to this tutorial:
 http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
 Finally, after some serious exercise, I was ready with my single-node Hadoop cluster on Ubuntu.
 So let's see what I did with Big Data on Hadoop (although for this example, on a single-node cluster, I have used small data :) i.e. an HL7 file with 60386 messages).
 I first ran the WordCount example that ships with the Hadoop setup, and then re-implemented it by studying the source code. I was able to install the Eclipse IDE on Ubuntu and run small Java programs with only a few minor problems; the things I had to google included how to set the CLASSPATH in Eclipse, how to create JAR files in Eclipse, etc.
 Then I decided to extend this word count example to analyze a file of HL7 messages and count the different trigger events from the MSH segment (MSH-9).
 I ran this successfully, and then implemented the same approach to count segments in a file of HL7 messages.


So here is the Java code; I haven't included any validation of the HL7 messages.
EventCount.java
https://github.com/j4jayant/Hadoop-Examples/blob/master/HL7_EventCount/java/EventCount.java

 package jayant.hadoop.mapreduce.HL7Test;  
 import org.apache.hadoop.fs.Path;  
 import org.apache.hadoop.io.*;  
 import org.apache.hadoop.mapred.*;  
 import java.io.IOException;  
 import java.util.*;  
 public class EventCount {  
   public static class Map extends MapReduceBase implements Mapper<LongWritable, Text, Text, IntWritable> {
     private final static IntWritable one = new IntWritable(1);
     private Text word = new Text();
     final String MSH_SEG_START = "MSH|^~\\&";
     public void map(LongWritable key, Text value, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
       String line = value.toString();
       if (line.startsWith(MSH_SEG_START)) {
         // "|" is a regex metacharacter, so it must be escaped for String.split()
         final String[] splitStr = line.split("\\|", -1);
         if (splitStr.length >= 12) { // check the 12 required fields in MSH
           word.set(splitStr[8]); // MSH-9, the message type / trigger event
           output.collect(word, one);
         }
       }
     }
   }
   public static class Reduce extends MapReduceBase implements Reducer<Text, IntWritable, Text, IntWritable> {
     public void reduce(Text key, Iterator<IntWritable> values, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
       int sum = 0;  
       while (values.hasNext()) {  
         sum += values.next().get();  
       }  
       output.collect(key, new IntWritable(sum));  
     }  
   }  
   public static void main(String[] args) throws Exception {  
     JobConf conf = new JobConf(EventCount.class);  
     conf.setJobName("EventCount");  
     conf.setOutputKeyClass(Text.class);  
     conf.setOutputValueClass(IntWritable.class);  
     conf.setMapperClass(Map.class);  
     conf.setCombinerClass(Reduce.class);  
     conf.setReducerClass(Reduce.class);  
     conf.setInputFormat(TextInputFormat.class);  
     conf.setOutputFormat(TextOutputFormat.class);  
     FileInputFormat.setInputPaths(conf, new Path(args[0]));  
     FileOutputFormat.setOutputPath(conf, new Path(args[1]));  
     JobClient.runJob(conf);  
   }  
 }  

Let's run this example in Hadoop.
Compile the program and create a JAR file.
To compile it you will have to add "hadoop-core-1.1.2.jar" to your classpath.

Create a new input directory in HDFS for the input HL7 files
 hadoop fs -mkdir /home/hduser/hadoop/hl7eventcount/input  

Copy the HL7 file from your local machine to HDFS
 hadoop fs -copyFromLocal /home/hduser/hl7_01.txt /home/hduser/hadoop/hl7eventcount/input/hl7_01.txt  

Run this command to start the MapReduce job
https://github.com/j4jayant/Hadoop-Examples/blob/master/HL7_EventCount/java/readme.txt
hadoop jar /home/hduser/eventcount.jar jayant.hadoop.mapreduce.HL7Test.EventCount /home/hduser/hadoop/hl7eventcount/input /home/hduser/hadoop/hl7eventcount/output
The output of this command will look something like this (note that I have parsed a file with 60386 HL7 messages):
 13/07/08 15:57:32 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.  
 13/07/08 15:57:33 INFO util.NativeCodeLoader: Loaded the native-hadoop library  
 13/07/08 15:57:33 WARN snappy.LoadSnappy: Snappy native library not loaded  
 13/07/08 15:57:33 INFO mapred.FileInputFormat: Total input paths to process : 1  
 13/07/08 15:57:34 INFO mapred.JobClient: Running job: job_201307081500_0006  
 13/07/08 15:57:35 INFO mapred.JobClient: map 0% reduce 0%  
 13/07/08 15:57:48 INFO mapred.JobClient: map 100% reduce 0%  
 13/07/08 15:57:57 INFO mapred.JobClient: map 100% reduce 33%  
 13/07/08 15:57:58 INFO mapred.JobClient: map 100% reduce 100%  
 13/07/08 15:58:00 INFO mapred.JobClient: Job complete: job_201307081500_0006  
 13/07/08 15:58:00 INFO mapred.JobClient: Counters: 30  
 13/07/08 15:58:00 INFO mapred.JobClient:  Job Counters   
 13/07/08 15:58:00 INFO mapred.JobClient:   Launched reduce tasks=1  
 13/07/08 15:58:00 INFO mapred.JobClient:   SLOTS_MILLIS_MAPS=22864  
 13/07/08 15:58:00 INFO mapred.JobClient:   Total time spent by all reduces waiting after reserving slots (ms)=0  
 13/07/08 15:58:00 INFO mapred.JobClient:   Total time spent by all maps waiting after reserving slots (ms)=0  
 13/07/08 15:58:00 INFO mapred.JobClient:   Launched map tasks=2  
 13/07/08 15:58:00 INFO mapred.JobClient:   Data-local map tasks=2  
 13/07/08 15:58:00 INFO mapred.JobClient:   SLOTS_MILLIS_REDUCES=10388  
 13/07/08 15:58:00 INFO mapred.JobClient:  File Input Format Counters   
 13/07/08 15:58:00 INFO mapred.JobClient:   Bytes Read=87014854  
 13/07/08 15:58:00 INFO mapred.JobClient:  File Output Format Counters   
 13/07/08 15:58:00 INFO mapred.JobClient:   Bytes Written=130  
 13/07/08 15:58:00 INFO mapred.JobClient:  FileSystemCounters  
 13/07/08 15:58:00 INFO mapred.JobClient:   FILE_BYTES_READ=286  
 13/07/08 15:58:00 INFO mapred.JobClient:   HDFS_BYTES_READ=87015104  
 13/07/08 15:58:00 INFO mapred.JobClient:   FILE_BYTES_WRITTEN=154658  
 13/07/08 15:58:00 INFO mapred.JobClient:   HDFS_BYTES_WRITTEN=130  
 13/07/08 15:58:00 INFO mapred.JobClient:  Map-Reduce Framework  
 13/07/08 15:58:00 INFO mapred.JobClient:   Map output materialized bytes=292  
 13/07/08 15:58:00 INFO mapred.JobClient:   Map input records=1075055  
 13/07/08 15:58:00 INFO mapred.JobClient:   Reduce shuffle bytes=292  
 13/07/08 15:58:00 INFO mapred.JobClient:   Spilled Records=40  
 13/07/08 15:58:00 INFO mapred.JobClient:   Map output bytes=724632  
 13/07/08 15:58:00 INFO mapred.JobClient:   Total committed heap usage (bytes)=459800576  
 13/07/08 15:58:00 INFO mapred.JobClient:   CPU time spent (ms)=10040  
 13/07/08 15:58:00 INFO mapred.JobClient:   Map input bytes=87014283  
 13/07/08 15:58:00 INFO mapred.JobClient:   SPLIT_RAW_BYTES=250  
 13/07/08 15:58:00 INFO mapred.JobClient:   Combine input records=60386  
 13/07/08 15:58:00 INFO mapred.JobClient:   Reduce input records=20  
 13/07/08 15:58:00 INFO mapred.JobClient:   Reduce input groups=11  
 13/07/08 15:58:00 INFO mapred.JobClient:   Combine output records=20  
 13/07/08 15:58:00 INFO mapred.JobClient:   Physical memory (bytes) snapshot=557441024  
 13/07/08 15:58:00 INFO mapred.JobClient:   Reduce output records=11  
 13/07/08 15:58:00 INFO mapred.JobClient:   Virtual memory (bytes) snapshot=2116161536  
 13/07/08 15:58:00 INFO mapred.JobClient:   Map output records=60386  

Run this command to see the output file
 hadoop fs -cat /home/hduser/hadoop/hl7eventcount/output/part-00000  
The output file will look like this
 ADT^A01 6251  
 ADT^A02 2181  
 ADT^A03 2346  
 ADT^A04 5271  
 ADT^A05 248  
 ADT^A06 95  
 ADT^A07 1  
 ADT^A08 43607  
 ADT^A13 4  
 ADT^A17 366  
 ADT^A18 16  

Now let's look at the Python scripts.
mapper.py
https://github.com/j4jayant/Hadoop-Examples/blob/master/HL7_EventCount/python/mapper.py
 #!/usr/bin/python
 import sys
 FIELD_SEPARATOR = "|"
 # input comes from STDIN (standard input);
 # each line is one segment of an HL7 message
 for line in sys.stdin:
   # remove leading and trailing whitespace
   line = line.strip()
   # process only MSH segments
   if line.startswith('MSH|^~\\&'):
     # split the segment into fields
     words = line.split(FIELD_SEPARATOR)
     # check that the 12 required MSH fields are present
     if len(words) >= 12:
       event = words[8]  # MSH-9, the trigger event
       print('%s\t%s' % (event, 1))

reducer.py
https://github.com/j4jayant/Hadoop-Examples/blob/master/HL7_EventCount/python/reducer.py
 #!/usr/bin/python
 import sys
 # maps events to their counts
 word2count = {}
 # input comes from STDIN
 for line in sys.stdin:
   # remove leading and trailing whitespace
   line = line.strip()
   # parse the tab-separated output of mapper.py
   word, count = line.split('\t', 1)
   # convert count (currently a string) to int
   try:
     count = int(count)
   except ValueError:
     continue
   word2count[word] = word2count.get(word, 0) + count
 # write the (event, count) tuples to stdout
 # Note: they are unsorted
 for word in word2count:
   print('%s\t%s' % (word, word2count[word]))

Run this command to start the MapReduce job (change the file/directory locations to match your setup). This one uses Hadoop Streaming.
https://github.com/j4jayant/Hadoop-Examples/blob/master/HL7_EventCount/python/readme.txt
 hadoop jar /home/hduser/hadoop/contrib/streaming/hadoop-streaming-1.1.2.jar -file /home/hduser/python/eventcount/mapper.py -mapper ./mapper.py -file /home/hduser/python/eventcount/reducer.py -reducer ./reducer.py -input /home/hduser/hadoop/hl7eventcount/input/* -output /home/hduser/hadoop/hl7eventcount/output  
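Hadoop Streaming simply pipes input lines through the mapper and reducer via stdin/stdout, so the contract can be smoke-tested without a cluster. Here is a small Python simulation of `cat input | mapper | sort | reducer` using the same event-count logic as the scripts above:

```python
from itertools import groupby

def simulate_streaming(lines, mapper, reducer):
    """Emulate `cat input | mapper | sort | reducer`: map each line to
    (key, count) pairs, sort by key, then reduce each key group."""
    pairs = sorted(kv for line in lines for kv in mapper(line))
    return {k: reducer(k, [c for _, c in group])
            for k, group in groupby(pairs, key=lambda kv: kv[0])}

def event_mapper(line):
    # same logic as mapper.py: emit (MSH-9, 1) for each MSH segment
    if line.startswith("MSH|^~\\&"):
        fields = line.split("|")
        if len(fields) >= 12:
            yield (fields[8], 1)

def event_reducer(key, counts):
    # same logic as reducer.py: sum the counts per event
    return sum(counts)

sample = ["MSH|^~\\&|ADT|ADI|A|B|20050215||ADT^A01|M1|T|2.4",
          "PID|1|111222333",
          "MSH|^~\\&|ADT|ADI|A|B|20050215||ADT^A01|M2|T|2.4"]
assert simulate_streaming(sample, event_mapper, event_reducer) == {"ADT^A01": 2}
```

This mirrors what the streaming JAR does across the cluster: the sort between map and reduce is what groups identical keys together.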

The output of this command will look something like this (again, the input file contains 60386 HL7 messages):
 packageJobJar: [/home/hduser/python/eventcount/mapper.py, /home/hduser/python/eventcount/reducer.py, /home/hduser/tmp/hadoop-unjar1397608400024094301/] [] /tmp/streamjob3276271105870771026.jar tmpDir=null  
 13/07/09 17:54:08 INFO util.NativeCodeLoader: Loaded the native-hadoop library  
 13/07/09 17:54:08 WARN snappy.LoadSnappy: Snappy native library not loaded  
 13/07/09 17:54:08 INFO mapred.FileInputFormat: Total input paths to process : 1  
 13/07/09 17:54:08 INFO streaming.StreamJob: getLocalDirs(): [/home/hduser/tmp/mapred/local]  
 13/07/09 17:54:08 INFO streaming.StreamJob: Running job: job_201307091539_0013  
 13/07/09 17:54:08 INFO streaming.StreamJob: To kill this job, run:  
 13/07/09 17:54:08 INFO streaming.StreamJob: /home/hduser/hadoop/libexec/../bin/hadoop job -Dmapred.job.tracker=localhost:54311 -kill job_201307091539_0013  
 13/07/09 17:54:08 INFO streaming.StreamJob: Tracking URL: http://localhost:50030/jobdetails.jsp?jobid=job_201307091539_0013  
 13/07/09 17:54:09 INFO streaming.StreamJob: map 0% reduce 0%  
 13/07/09 17:54:21 INFO streaming.StreamJob: map 100% reduce 0%  
 13/07/09 17:54:29 INFO streaming.StreamJob: map 100% reduce 33%  
 13/07/09 17:54:32 INFO streaming.StreamJob: map 100% reduce 100%  
 13/07/09 17:54:35 INFO streaming.StreamJob: Job complete: job_201307091539_0013  
 13/07/09 17:54:35 INFO streaming.StreamJob: Output: /home/hduser/hadoop/hl7eventcount/output  

Another similar example is counting the different segments in an HL7 file; here is the Java Mapper for it (without HL7 validations):
 public static class Map extends MapReduceBase implements Mapper<LongWritable, Text, Text, IntWritable> {
     private final static IntWritable one = new IntWritable(1);
     private Text word = new Text();
     public void map(LongWritable key, Text value, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
       String line = value.toString();  
       if(line.length() >= 4)  
       {  
        String segment = line.substring(0, 3);  
        word.set(segment);  
        output.collect(word, one);  
       }  
     }  
   }  

Monday, 8 July 2013

HL7 Analysis with NoSQL MongoDB - 2


I would like to thank everyone who read my article HL7 Analysis with NoSQL MongoDB - 1 and provided valuable input. It has helped me improve the schema design & execute more queries for HL7 message analysis.

Some of the important considerations in this schema design were:
  • We will be inserting data once while loading messages for analysis
  • There will not be any updates
  • Schema should be designed to facilitate faster reads

Here is a snapshot of the schema I have created. I have provided some sample queries using Map Reduce & Aggregation at the end of this article.
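As a taste of those queries, the simplest analysis, counting messages per trigger event, is a single $group stage in the mongo shell, e.g. db.messages.aggregate([{$group: {_id: "$Event", count: {$sum: 1}}}]) (the collection name "messages" is my assumption). The same grouping expressed in plain Python, over documents shaped like the schema below:

```python
from collections import Counter

def count_events(documents):
    """Equivalent of a $group stage keyed on the top-level Event field:
    count how many message documents carry each trigger event."""
    return Counter(doc["Event"] for doc in documents)

docs = [{"_id": "MSGADT003", "Event": "A01"},
        {"_id": "MSGADT004", "Event": "A08"},
        {"_id": "MSGADT005", "Event": "A01"}]
assert count_events(docs) == {"A01": 2, "A08": 1}
```

Storing Event at the top level of each document is what makes this query cheap: the aggregation never has to descend into the Segments array.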


Let's take the same example HL7 message from the previous post:
 MSH|^~\&|ADT|ADI|ADT-1|ADI-1|20050215||ADT^A01|MSGADT003|T|2.4  
 EVN|A01|20031016000000  
 PID|1|111222333|H123123^^^^MR^ADT~111-222-333^^^^SS^ADT||John^Smith|GARSEN^^Melissa|19380818|M||2028-9|241 AVE^^Lake City^WA^98125^^^^100|100|(425)111-2222|(425)111-2222||S|CHR|1234567|111-222-333  
 NK1|2|GARSEN^Melissa  
 PV1|1|E|||||D123^Jeff^Carron|||MED||||7|||D123^Jeff^Taylor|E|3454|R^20050215|||||||||||||||||||EM|||||20050215  
 IN1|1|I123|ICOMP1|INS COMP 1|PO BOX 1^^Lake City^WA^98125||||||||||1|John^Smith|01|19380818  
 IN2|1||RETIRED  
 IN1|2|I456|ICOMP2|INS COMP 1|PO BOX 2^^Lake City^WA^98125||||||||||8|John^Smith|01|19380818  
 IN2|2||RETIRED  
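Before looking at the full JSON document, here is a simplified Python sketch of how one segment is flattened into the schema's field entries. Components, repetitions, and the MSH-1/MSH-2 special cases, which the real schema also handles, are omitted here.

```python
def segment_to_doc(segment, seq, rep=1):
    """Build a schema-style document for one segment: Val keeps the raw
    text, FC counts all fields, VF counts valued (non-empty) fields, and
    each non-empty field becomes its own {_id, Val} entry."""
    parts = segment.split("|")
    name, fields = parts[0], parts[1:]
    field_docs = [{"_id": "%s_%d" % (name, i), "Val": v}
                  for i, v in enumerate(fields, start=1) if v]
    return {"_id": name, "Rep": rep, "Seq": seq, "Val": segment,
            "FC": len(fields), "VF": len(field_docs), "Fields": field_docs}

doc = segment_to_doc("EVN|A01|20031016000000", seq=1)
assert doc["FC"] == 2 and doc["VF"] == 2
assert doc["Fields"][0] == {"_id": "EVN_1", "Val": "A01"}
```

Skipping empty fields (and recording FC vs VF) keeps the documents compact while still allowing queries on exact field positions like PID_5 or PV1_19.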

Here is a JSON representation of this sample HL7 message:
 {  
      "_id": "MSGADT003",  
      "Event": "A01",  
      "MsgDt": "20050215",  
      "Segments": [  
           {  
                "_id": "MSH",  
                "Rep": 1,  
                "Seq": 0,  
                "Val": "MSH|^~\\&|ADT|ADI|ADT-1|ADI-1|20050215||ADT^A01|MSGADT003|T|2.4",  
                "FC": 12,  
                "VF": 11,  
                "Fields": [  
                     {  
                          "_id": "MSH_1",  
                          "Val": "|"  
                     },  
                     {  
                          "_id": "MSH_2",  
                          "Val": "^~\\&"  
                     },  
                     {  
                          "_id": "MSH_3",  
                          "Val": "ADT"  
                     },  
                     {  
                          "_id": "MSH_4",  
                          "Val": "ADI"  
                     },  
                     {  
                          "_id": "MSH_5",  
                          "Val": "ADT-1"  
                     },  
                     {  
                          "_id": "MSH_6",  
                          "Val": "ADI-1"  
                     },  
                     {  
                          "_id": "MSH_7",  
                          "Val": "20050215"  
                     },  
                     {  
                          "_id": "MSH_9",  
                          "Val": "ADT^A01"  
                     },  
                     {  
                          "_id": "MSH_9_1",  
                          "Val": "ADT"  
                     },  
                     {  
                          "_id": "MSH_9_2",  
                          "Val": "A01"  
                     },  
                     {  
                          "_id": "MSH_10",  
                          "Val": "MSGADT003"  
                     },  
                     {  
                          "_id": "MSH_11",  
                          "Val": "T"  
                     },  
                     {  
                          "_id": "MSH_12",  
                          "Val": "2.4"  
                     }  
                ]  
           },  
           {  
                "_id": "EVN",  
                "Rep": 1,  
                "Seq": 1,  
                "Val": "EVN|A01|20031016000000",  
                "FC": 2,  
                "VF": 2,  
                "Fields": [  
                     {  
                          "_id": "EVN_1",  
                          "Val": "A01"  
                     },  
                     {  
                          "_id": "EVN_2",  
                          "Val": "20031016000000"  
                     }  
                ]  
           },  
           {  
                "_id": "PID",  
                "Rep": 1,  
                "Seq": 2,  
                "Val": "PID|1|111222333|H123123^^^^MR^ADT~111-222-333^^^^SS^ADT||John^Smith|GARSEN^^Melissa|19380818|M|||241 AVE^^Lake City^WA^98125||(425)111-2222|||S|CHR|1234567",  
                "FC": 18,  
                "VF": 12,  
                "Fields": [  
                     {  
                          "_id": "PID_1",  
                          "Val": "1"  
                     },  
                     {  
                          "_id": "PID_2",  
                          "Val": "111222333"  
                     },  
                     {  
                          "_id": "PID_3",  
                          "Val": "H123123^^^^MR^ADT~111-222-333^^^^SS^ADT",  
                          "Repetitions": [  
                               {  
                                    "_id": "PID_3",  
                                    "Val": "H123123^^^^MR^ADT",  
                                    "Rep": 1  
                               },  
                               {  
                                    "_id": "PID_3_1",  
                                    "Val": "H123123",  
                                    "Rep": 1  
                               },  
                               {  
                                    "_id": "PID_3_5",  
                                    "Val": "MR",  
                                    "Rep": 1  
                               },  
                               {  
                                    "_id": "PID_3_6",  
                                    "Val": "ADT",  
                                    "Rep": 1  
                               },  
                               {  
                                    "_id": "PID_3",  
                                    "Val": "111-222-333^^^^SS^ADT",  
                                    "Rep": 2  
                               },  
                               {  
                                    "_id": "PID_3_1",  
                                    "Val": "111-222-333",  
                                    "Rep": 2  
                               },  
                               {  
                                    "_id": "PID_3_5",  
                                    "Val": "SS",  
                                    "Rep": 2  
                               },  
                               {  
                                    "_id": "PID_3_6",  
                                    "Val": "ADT",  
                                    "Rep": 2  
                               }  
                          ]  
                     },  
                     {  
                          "_id": "PID_5",  
                          "Val": "John^Smith"  
                     },  
                     {  
                          "_id": "PID_5_1",  
                          "Val": "John"  
                     },  
                     {  
                          "_id": "PID_5_2",  
                          "Val": "Smith"  
                     },  
                     {  
                          "_id": "PID_6",  
                          "Val": "GARSEN^^Melissa"  
                     },  
                     {  
                          "_id": "PID_6_1",  
                          "Val": "GARSEN"  
                     },  
                     {  
                          "_id": "PID_6_3",  
                          "Val": "Melissa"  
                     },  
                     {  
                          "_id": "PID_7",  
                          "Val": "19380818"  
                     },  
                     {  
                          "_id": "PID_8",  
                          "Val": "M"  
                     },  
                     {  
                          "_id": "PID_11",  
                          "Val": "241 AVE^^Lake City^WA^98125"  
                     },  
                     {  
                          "_id": "PID_11_1",  
                          "Val": "241 AVE"  
                     },  
                     {  
                          "_id": "PID_11_3",  
                          "Val": "Lake City"  
                     },  
                     {  
                          "_id": "PID_11_4",  
                          "Val": "WA"  
                     },  
                     {  
                          "_id": "PID_11_5",  
                          "Val": "98125"  
                     },  
                     {  
                          "_id": "PID_13",  
                          "Val": "(425)111-2222"  
                     },  
                     {  
                          "_id": "PID_16",  
                          "Val": "S"  
                     },  
                     {  
                          "_id": "PID_17",  
                          "Val": "CHR"  
                     },  
                     {  
                          "_id": "PID_18",  
                          "Val": "1234567"  
                     }  
                ]  
           },  
           {  
                "_id": "PV1",  
                "Rep": 1,  
                "Seq": 4,  
                "Val": "PV1|1|E|||||D123^Jeff^Carron|||MED||||7|||D123^Jeff^Carron|E|3454|R^20050215|||||||||||||||||||EM|||||20050215",  
                "FC": 44,  
                "VF": 11,  
                "Fields": [  
                     {  
                          "_id": "PV1_1",  
                          "Val": "1"  
                     },  
                     {  
                          "_id": "PV1_2",  
                          "Val": "E"  
                     },  
                     {  
                          "_id": "PV1_7",  
                          "Val": "D123^Jeff^Carron"  
                     },  
                     {  
                          "_id": "PV1_7_1",  
                          "Val": "D123"  
                     },  
                     {  
                          "_id": "PV1_7_2",  
                          "Val": "Jeff"  
                     },  
                     {  
                          "_id": "PV1_7_3",  
                          "Val": "Carron"  
                     },  
                     {  
                          "_id": "PV1_10",  
                          "Val": "MED"  
                     },  
                     {  
                          "_id": "PV1_14",  
                          "Val": "7"  
                     },  
                     {  
                          "_id": "PV1_17",  
                          "Val": "D123^Jeff^Carron"  
                     },  
                     {  
                          "_id": "PV1_17_1",  
                          "Val": "D123"  
                     },  
                     {  
                          "_id": "PV1_17_2",  
                          "Val": "Jeff"  
                     },  
                     {  
                          "_id": "PV1_17_3",  
                          "Val": "Carron"  
                     },  
                     {  
                          "_id": "PV1_18",  
                          "Val": "E"  
                     },  
                     {  
                          "_id": "PV1_19",  
                          "Val": "3454"  
                     },  
                     {  
                          "_id": "PV1_20",  
                          "Val": "R^20050215"  
                     },  
                     {  
                          "_id": "PV1_20_1",  
                          "Val": "R"  
                     },  
                     {  
                          "_id": "PV1_20_2",  
                          "Val": "20050215"  
                     },  
                     {  
                          "_id": "PV1_39",  
                          "Val": "EM"  
                     },  
                     {  
                          "_id": "PV1_44",  
                          "Val": "20050215"  
                     }  
                ]  
           },  
           {  
                "_id": "IN1",  
                "Rep": 1,  
                "Seq": 5,  
                "Val": "IN1|1|I123|ICOMP1|INS COMP 1|PO BOX 1^^Lake City^WA^98125||||||||||1|John^Smith|01|19380818",  
                "FC": 18,  
                "VF": 9,  
                "Fields": [  
                     {  
                          "_id": "IN1_1",  
                          "Val": "1"  
                     },  
                     {  
                          "_id": "IN1_2",  
                          "Val": "I123"  
                     },  
                     {  
                          "_id": "IN1_3",  
                          "Val": "ICOMP1"  
                     },  
                     {  
                          "_id": "IN1_4",  
                          "Val": "INS COMP 1"  
                     },  
                     {  
                          "_id": "IN1_5",  
                          "Val": "PO BOX 1^^Lake City^WA^98125"  
                     },  
                     {  
                          "_id": "IN1_5_1",  
                          "Val": "PO BOX 1"  
                     },  
                     {  
                          "_id": "IN1_5_3",  
                          "Val": "Lake City"  
                     },  
                     {  
                          "_id": "IN1_5_4",  
                          "Val": "WA"  
                     },  
                     {  
                          "_id": "IN1_5_5",  
                          "Val": "98125"  
                     },  
                     {  
                          "_id": "IN1_15",  
                          "Val": "1"  
                     },  
                     {  
                          "_id": "IN1_16",  
                          "Val": "John^Smith"  
                     },  
                     {  
                          "_id": "IN1_16_1",  
                          "Val": "John"  
                     },  
                     {  
                          "_id": "IN1_16_2",  
                          "Val": "Smith"  
                     },  
                     {  
                          "_id": "IN1_17",  
                          "Val": "01"  
                     },  
                     {  
                          "_id": "IN1_18",  
                          "Val": "19380818"  
                     }  
                ]  
           },  
           {  
                "_id": "IN1",  
                "Rep": 2,  
                "Seq": 7,  
                "Val": "IN1|2|I456|ICOMP2|INS COMP 1|PO BOX 2^^Lake City^WA^98125||||||||||8|John^Smith|01|19380818",  
                "FC": 18,  
                "VF": 9,  
                "Fields": [  
                     {  
                          "_id": "IN1_1",  
                          "Val": "2"  
                     },  
                     {  
                          "_id": "IN1_2",  
                          "Val": "I456"  
                     },  
                     {  
                          "_id": "IN1_3",  
                          "Val": "ICOMP2"  
                     },  
                     {  
                          "_id": "IN1_4",  
                          "Val": "INS COMP 1"  
                     },  
                     {  
                          "_id": "IN1_5",  
                          "Val": "PO BOX 2^^Lake City^WA^98125"  
                     },  
                     {  
                          "_id": "IN1_5_1",  
                          "Val": "PO BOX 2"  
                     },  
                     {  
                          "_id": "IN1_5_3",  
                          "Val": "Lake City"  
                     },  
                     {  
                          "_id": "IN1_5_4",  
                          "Val": "WA"  
                     },  
                     {  
                          "_id": "IN1_5_5",  
                          "Val": "98125"  
                     },  
                     {  
                          "_id": "IN1_15",  
                          "Val": "8"  
                     },  
                     {  
                          "_id": "IN1_16",  
                          "Val": "John^Smith"  
                     },  
                     {  
                          "_id": "IN1_16_1",  
                          "Val": "John"  
                     },  
                     {  
                          "_id": "IN1_16_2",  
                          "Val": "Smith"  
                     },  
                     {  
                          "_id": "IN1_17",  
                          "Val": "01"  
                     },  
                     {  
                          "_id": "IN1_18",  
                          "Val": "19380818"  
                     }  
                ]  
           },  
           {  
                "_id": "IN2",  
                "Rep": 1,  
                "Seq": 6,  
                "Val": "IN2|1||RETIRED",  
                "FC": 3,  
                "VF": 2,  
                "Fields": [  
                     {  
                          "_id": "IN2_1",  
                          "Val": "1"  
                     },  
                     {  
                          "_id": "IN2_3",  
                          "Val": "RETIRED"  
                     }  
                ]  
           },  
           {  
                "_id": "IN2",  
                "Rep": 2,  
                "Seq": 8,  
                "Val": "IN2|2||RETIRED",  
                "FC": 3,  
                "VF": 2,  
                "Fields": [  
                     {  
                          "_id": "IN2_1",  
                          "Val": "2"  
                     },  
                     {  
                          "_id": "IN2_3",  
                          "Val": "RETIRED"  
                     }  
                ]  
           }  
      ]  
 }  

Here is a snapshot of a sample document from MongoVUE
HL7_MongoDB_Schema


Example: different Trigger Events with count using Map Reduce
 function Map() {  
   emit(this.Event, 1);  
 }  
 function Reduce(key, values) {  
   return Array.sum(values);  
 }  

Example: different Trigger Events with count using Aggregation
 { $project : { "Event" : 1 }},  
 { $group: { _id: "$Event", count: {$sum: 1} } }  
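Both the map-reduce and the aggregation above compute the same thing: a count per trigger event. As a cross-check, the equivalent computation in plain Python over documents shaped like this schema (a sketch, assuming each document carries a top-level `Event` field holding the trigger event) looks like:

```python
from collections import Counter

def event_counts(documents):
    """Count messages per trigger event, like the $group stage above.

    Assumes each document has a top-level "Event" field (e.g. "A01").
    """
    return Counter(doc["Event"] for doc in documents)
```

For example, `event_counts([{"Event": "A01"}, {"Event": "A08"}, {"Event": "A01"}])` reports two A01s and one A08.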

Example: different Segments with count using Map Reduce
 function Map() {  
   this.Segments.forEach(function (s) {  
    emit(s._id, 1);  
   });  
 }  
 function Reduce(key, values) {  
   return Array.sum(values);  
 }  

Example: different Segments with count using Aggregation
 { $unwind: "$Segments" },  
 { $project : { "Segments._id": 1}},  
 { $group: { _id: "$Segments._id", count: {$sum: 1} } }  

Example: distinct values for MSH_3 using Map Reduce
 function Map() {  
   this.Segments.forEach(function (s) {  
    if(s._id == "MSH") {  
      s.Fields.forEach(function (f) {  
       if(f._id == "MSH_3")  
        emit(f.Val, 1);  
      });  
    }  
   });  
 }  
 function Reduce(key, values) {  
   return Array.sum(values);  
 }  

Example: distinct values for MSH_3 using Aggregation
 { $unwind: "$Segments" },  
 { $unwind: "$Segments.Fields" },  
 { $match : { "Segments.Fields._id": "MSH_3"} },  
 { $group: { _id: "$Segments.Fields.Val" } }  

Example: List of MSH_10 with PV1_2="E" using Aggregation
 { $unwind: "$Segments" },  
 { $unwind: "$Segments.Fields" },  
 { $match : { "Segments.Fields._id": "PV1_2", "Segments.Fields.Val": "E"}},  
 { $project : { "_id": 1}}  
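All of these pipelines follow one pattern: unwind `Segments`, unwind `Segments.Fields`, then match or group. A plain-Python walk over the same array schema makes the pattern explicit (an illustrative sketch, not driver code):

```python
def field_values(documents, field_id):
    """Yield every value of the given field across all documents,
    mimicking $unwind Segments -> $unwind Segments.Fields -> $match."""
    for doc in documents:
        for seg in doc.get("Segments", []):
            for field in seg.get("Fields", []):
                if field["_id"] == field_id:
                    yield field["Val"]

def distinct_values(documents, field_id):
    """Distinct values of a field, like the final $group stage."""
    return sorted(set(field_values(documents, field_id)))
```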

Here is the list of queries that I have executed successfully with this schema in MongoDB.
  1. Total number of messages in sample
  2. Distinct MSH_9_2 (Trigger Events) [with count]
    1. This will tell you which Messages you should handle in your interface
  3. Different Segments in all messages [with count]
    1. This will tell you which segments you should handle in your interface
  4. Different Fields (with values)
    1. This will tell you which fields you need to process
    2. This list contains only those fields for which we have received values or HL7 null in any of the messages.
    3. If the field is empty in all the messages in the sample, it will not appear in this list.
  5. List of fields with HL7 null (“”) value [with count]
  6. List of fields with components  [with count]
  7. Check if particular field has components
  8. Check if particular field has specified leading character
  9. List of fields with repetitions  [with count]
  10. Maximum number of repetitions received for each field
  11. List of different Z Segments  [with count]
  12. Find messages with particular segment
  13. Find messages without particular segment
  14. Find messages with particular field. For example messages where PID_19 is present
  15. Find messages with particular field & value. For example messages where PV1_2 = E
  16. Find messages with particular field excluding given value. For example messages where PV1_2 != E
  17. Find messages without particular field
  18. Distinct values of particular field. For example different values for MSH_3
  19. Maximum number of fields received in given segment
  20. Total fields with values (excluding empty fields) in given segment
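A few of these queries are simple to sketch in plain Python against the same array schema (illustrative only; in MongoDB they would be `$match`/`$regex` stages). Here, for example, are queries 12 and 8:

```python
def messages_with_segment(documents, seg_id):
    """Query 12: messages containing a particular segment."""
    return [d for d in documents
            if any(s["_id"] == seg_id for s in d.get("Segments", []))]

def fields_with_leading_char(documents, field_id, char="0"):
    """Query 8: values of a field that start with a given character."""
    vals = []
    for d in documents:
        for s in d.get("Segments", []):
            for f in s.get("Fields", []):
                if f["_id"] == field_id and f["Val"].startswith(char):
                    vals.append(f["Val"])
    return vals
```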

You can download the Python script to insert HL7 messages into a MongoDB database from my GitHub. I am working on other scripts.
https://github.com/j4jayant/HL7-2-MongoDB-Analysis/blob/master/python/hl7-2-mongodb.py
https://github.com/j4jayant/HL7-2-MongoDB-Analysis/blob/master/python/readme.txt

I welcome any comments or criticism to help improve this.

Monday, 3 June 2013

HL7 Analysis with NoSQL MongoDB - 1


Last week I had a discussion with Terry Montgomery. He was interested in HL7 message analysis and wanted my opinion. We discussed different tools that both of us had tried.
From the discussion I found that he was looking for a tool that can analyze raw HL7 messages. This analysis could be used in designing HL7 interfaces.

I gave him a few basic analysis points that we should cover before starting any interface development, like:

  1. Total number of messages in a sample
  2. Different Trigger Events & their count in Feed
  3. Different Segments that are present in feed
  4. Total number of PID_3, PID_18, PV1_19 values with leading zeros
  5. Total number of PID_3, PID_18, PV1_19 values that are componentized

He said he wanted all of these, along with many other points. During the discussion he presented a couple of his ideas for analyzing raw HL7 messages. One of them was a NoSQL approach using MongoDB.
I am not a MongoDB expert, but I had tried it a few days earlier in one of my FHIR experiments.
I was fairly sure MongoDB could be used, but I was hesitant about the approach; I had already seen a couple of discussions criticizing HL7-to-JSON conversion.
But as a developer I decided to do something around this & used some free time over the weekend.
I extended my HL7 parsing library with a couple of methods to generate a BsonDocument from an HL7 message.
Let's take an example HL7 message

 MSH|^~\&|ADT|ADI|ADT-1|ADI-1|20050215||ADT^A01|MSGADT003|T|2.4  
 EVN|A01|20031016000000  
 PID|1|111222333|H123123^^^^MR^ADT~111-222-333^^^^SS^ADT||John^Smith|GARSEN^^Melissa|19380818|M||2028-9|241 AVE^^Lake City^WA^98125^^^^100|100|(425)111-2222|(425)111-2222||S|CHR|1234567|111-222-333  
 NK1|2|GARSEN^Melissa  
 PV1|1|E|||||D123^Jeff^Carron|||MED||||7|||D123^Jeff^Taylor|E|3454|R^20050215|||||||||||||||||||EM|||||20050215  
 IN1|1|I123|ICOMP1|INS COMP 1|PO BOX 1^^Lake City^WA^98125||||||||||1|John^Smith|01|19380818  
 IN2|1||RETIRED  
 IN1|2|I456|ICOMP2|INS COMP 1|PO BOX 2^^Lake City^WA^98125||||||||||8|John^Smith|01|19380818  
 IN2|2||RETIRED  
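The conversion itself can be sketched in Python (a rough, hypothetical analogue of my C# methods; field repetitions via `~` and subcomponents are ignored here for brevity):

```python
def hl7_to_dict(raw):
    """Convert a raw HL7 v2 message into a flat nested dict:
    {segment: {SEG_i: field value, SEG_i_j: component value, ...}}.
    Note: repeating segments (e.g. two IN1s) overwrite each other,
    the same duplicate-key limitation visible in the JSON below."""
    doc = {}
    for line in raw.strip().splitlines():
        line = line.strip()
        if not line:
            continue
        parts = line.split("|")
        seg = parts[0]
        if seg == "MSH":
            # MSH_1 is the field separator itself; MSH_2 the encoding chars
            fields = ["|", parts[1]] + parts[2:]
        else:
            fields = parts[1:]
        seg_dict = doc.setdefault(seg, {})
        for i, val in enumerate(fields, start=1):
            fid = "%s_%d" % (seg, i)
            seg_dict[fid] = val
            # split components, but never the MSH_2 encoding characters
            if "^" in val and not (seg == "MSH" and i == 2):
                for j, comp in enumerate(val.split("^"), start=1):
                    seg_dict["%s_%d" % (fid, j)] = comp
    return doc
```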

 I have generated JSON from the BsonDocument, which looks like the following (note that repeating segments like IN1 appear as duplicate keys here, a limitation of this flat structure):

 {  
      "MSH":  
      {  
           "MSH_1": "|",  
           "MSH_2": "^~\\&",  
           "MSH_3": "ADT",  
           "MSH_4": "ADI",  
           "MSH_5": "ADT-1",  
           "MSH_6": "ADI-1",  
           "MSH_7": "20050215",  
           "MSH_8": "",  
           "MSH_9": "ADT^A01",  
           "MSH_9_1": "ADT",  
           "MSH_9_2": "A01",  
           "MSH_10": "MSGADT003",  
           "MSH_11": "T",  
           "MSH_12": "2.4"  
      },  
      "EVN":  
      {  
           "EVN_1": "A01",  
           "EVN_2": "20031016000000"  
      },  
      "PID":  
      {  
           "PID_1": "1",  
           "PID_2": "111222333",  
           "PID_3": "H123123^^^^MR^ADT~111-222-333^^^^SS^ADT",  
           "PID_4": "",  
           "PID_5": "John^Smith",  
           "PID_5_1": "John",  
           "PID_5_2": "Smith",  
           "PID_6": "GARSEN^^Melissa",  
           "PID_6_1": "GARSEN",  
           "PID_6_2": "",  
           "PID_6_3": "Melissa",  
           "PID_7": "19380818",  
           "PID_8": "M",  
           "PID_9": "",  
           "PID_10": "2028-9",  
           "PID_11": "241 AVE^^Lake City^WA^98125^^^^100",  
           "PID_11_1": "241 AVE",  
           "PID_11_2": "",  
           "PID_11_3": "Lake City",  
           "PID_11_4": "WA",  
           "PID_11_5": "98125",  
           "PID_11_6": "",  
           "PID_11_7": "",  
           "PID_11_8": "",  
           "PID_11_9": "100",  
           "PID_12": "100",  
           "PID_13": "(425)111-2222",  
           "PID_14": "(425)111-2222",  
           "PID_15": "",  
           "PID_16": "S",  
           "PID_17": "CHR",  
           "PID_18": "1234567",  
           "PID_19": "111-222-333"  
      },  
      "NK1":  
      {  
           "NK1_1": "2",  
           "NK1_2": "GARSEN^Melissa",  
           "NK1_2_1": "GARSEN",  
           "NK1_2_2": "Melissa"  
      },  
      "PV1":  
      {  
           "PV1_1": "1",  
           "PV1_2": "E",  
           "PV1_3": "",  
           "PV1_4": "",  
           "PV1_5": "",  
           "PV1_6": "",  
           "PV1_7": "D123^Jeff^Taylor",  
           "PV1_7_1": "D123",  
           "PV1_7_2": "Jeff",  
           "PV1_7_3": "Taylor",  
           "PV1_8": "",  
           "PV1_9": "",  
           "PV1_10": "MED",  
           "PV1_11": "",  
           "PV1_12": "",  
           "PV1_13": "",  
           "PV1_14": "7",  
           "PV1_15": "",  
           "PV1_16": "",  
           "PV1_17": "D123^Jeff^Taylor",  
           "PV1_17_1": "D123",  
           "PV1_17_2": "Jeff",  
           "PV1_17_3": "Taylor",  
           "PV1_18": "E",  
           "PV1_19": "3454",  
           "PV1_20": "R^20050215",  
           "PV1_20_1": "R",  
           "PV1_20_2": "20050215",  
           "PV1_21": "",  
           "PV1_22": "",  
           "PV1_23": "",  
           "PV1_24": "",  
           "PV1_25": "",  
           "PV1_26": "",  
           "PV1_27": "",  
           "PV1_28": "",  
           "PV1_29": "",  
           "PV1_30": "",  
           "PV1_31": "",  
           "PV1_32": "",  
           "PV1_33": "",  
           "PV1_34": "",  
           "PV1_35": "",  
           "PV1_36": "",  
           "PV1_37": "",  
           "PV1_38": "",  
           "PV1_39": "EM",  
           "PV1_40": "",  
           "PV1_41": "",  
           "PV1_42": "",  
           "PV1_43": "",  
           "PV1_44": "20050215"  
      },  
      "IN1":  
      {  
           "IN1_1": "1",  
           "IN1_2": "I123",  
           "IN1_3": "ICOMP1",  
           "IN1_4": "INS COMP 1",  
           "IN1_5": "PO BOX 1^^Lake City^WA^98125",  
           "IN1_5_1": "PO BOX 1",  
           "IN1_5_2": "",  
           "IN1_5_3": "Lake City",  
           "IN1_5_4": "WA",  
           "IN1_5_5": "98125",  
           "IN1_6": "",  
           "IN1_7": "",  
           "IN1_8": "",  
           "IN1_9": "",  
           "IN1_10": "",  
           "IN1_11": "",  
           "IN1_12": "",  
           "IN1_13": "",  
           "IN1_14": "",  
           "IN1_15": "1",  
           "IN1_16": "John^Smith",  
           "IN1_16_1": "John",  
           "IN1_16_2": "Smith",  
           "IN1_17": "01",  
           "IN1_18": "19380818"  
      },  
      "IN1":  
      {  
           "IN1_1": "2",  
           "IN1_2": "I456",  
           "IN1_3": "ICOMP2",  
           "IN1_4": "INS COMP 1",  
           "IN1_5": "PO BOX 2^^Lake City^WA^98125",  
           "IN1_5_1": "PO BOX 2",  
           "IN1_5_2": "",  
           "IN1_5_3": "Lake City",  
           "IN1_5_4": "WA",  
           "IN1_5_5": "98125",  
           "IN1_6": "",  
           "IN1_7": "",  
           "IN1_8": "",  
           "IN1_9": "",  
           "IN1_10": "",  
           "IN1_11": "",  
           "IN1_12": "",  
           "IN1_13": "",  
           "IN1_14": "",  
           "IN1_15": "8",  
           "IN1_16": "John^Smith",  
           "IN1_16_1": "John",  
           "IN1_16_2": "Smith",  
           "IN1_17": "01",  
           "IN1_18": "19380818"  
      },  
      "IN2":  
      {  
           "IN2_1": "1",  
           "IN2_2": "",  
           "IN2_3": "RETIRED"  
      },  
      "IN2":  
      {  
           "IN2_1": "2",  
           "IN2_2": "",  
           "IN2_3": "RETIRED"  
      }  
 }  

And this is how this BsonDocument looks in MongoVUE

HL7 Bson in MongoVUE


As I said, I am not an expert in MongoDB; taking this as an opportunity to learn NoSQL, I have successfully written a few basic queries. I am not sure how efficient these queries are, but I shall improve them as I learn more about MongoDB & the C# driver.
  1. Total number of messages in feed
  2. Different Event Types in feed & their count in Feed
  3. Different Segments received in Feed
  4. Find PID_3 with leading zeros
  5. Find PID_3 with components
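Sketched in plain Python against this flat schema (in MongoDB I used prefix/contains conditions), the last two checks reduce to simple string tests:

```python
def pid3_with_leading_zeros(documents):
    """Flat-schema version of query 4: PID_3 values starting with '0'."""
    return [d["PID"]["PID_3"] for d in documents
            if d.get("PID", {}).get("PID_3", "").startswith("0")]

def pid3_with_components(documents):
    """Flat-schema version of query 5: PID_3 values containing '^'."""
    return [d["PID"]["PID_3"] for d in documents
            if "^" in d.get("PID", {}).get("PID_3", "")]
```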

I would request you to comment on whether this approach is a feasible way to analyze raw HL7 messages.
I welcome any comment/criticism to help improve this.

Tuesday, 28 May 2013

HL7 Parsing in C#


I learned a little bit of nHAPI during my last post on HL7 over HTTP and wanted to use it for a utility that I am working on. But some more research on nHAPI influenced me to develop my own small library in C#.

nHAPI is a great library and I do not want to compete with it. One of the reasons I decided to write a small library is that I needed only very basic features: read any HL7 message, evaluate some fields, update some fields, get the updated HL7 message, and generate ACKs. For now I have decided to skip message-type or trigger-event specific features/validations. However, the library performs basic validations to check the message header, the format of segment names, delimiters, etc.
This library treats every message in the same manner while parsing. After successful parsing, it provides all the parts of the HL7 message (segments, fields with repetitions, components, subcomponents) in an easily accessible way.
I do not claim it to be a full HL7 parsing library, but it provides some basic features and simple ways to access them.

Now let’s see how to use this library:

Create a Message object and pass raw HL7 message in text format

 Message message = new Message(strMsg);  

Parse this message
 bool isParsed = false;  
 try  
 {  
   isParsed = message.ParseMessage();  
 }  
 catch(Exception ex)  
 {  
   //handle the exception  
 }  

Now let’s see some of the functions. Please note that list indexes are zero-based, so if you access FieldList[3], it is actually the fourth field.

Accessing Segments


Get list of all segments
 List<Segment> segList = message.Segments();  

Get a list of repeated segments by name. For example, if you have multiple IN1 segments:
 List<Segment> IN1List = message.Segments("IN1");  

To access a particular occurrence from multiple IN1s, provide the index. Please note index 1 will give you the 2nd element from the list.
 Segment IN1_2 = message.Segments("IN1")[1];  

Get count of IN1s
 int countIN1 = message.Segments("IN1").Count;  

Access first occurrence of any segment
 Segment IN1 = message.DefaultSegment("IN1");  
 //OR  
 Segment IN1 = message.Segments("IN1")[0];  

Accessing Fields


Access field values
 String SendingFacility = message.getValue("MSH.4");  
 //OR  
 String SendingFacility = message.DefaultSegment("MSH").Fields(4).Value;  
 //OR  
 String SendingFacility = message.Segments("MSH")[0].Fields(4).Value;  

Check if field is componentized
 bool isComponentized = message.Segments("PID")[0].Fields(5).IsComponentized;  
 //OR  
 bool isComponentized = message.IsComponentized("PID.5");  

Check if field has repetitions
 bool isRepeated = message.Segments("PID")[0].Fields(3).HasRepetitions;  
 //OR  
 bool isRepeated = message.HasRepeatitions("PID.3");  

Get list of repeated fields
 List<Field> repList = message.Segments("PID")[0].Fields(3).Repetitions();  

Get a particular repetition, i.e. the 2nd repetition of PID.3
 Field repField = message.Segments("PID")[0].Fields(3).Repetitions(2);  

Update value of any field i.e. to update PV1.2 – patient class
 message.setValue("PV1.2", "I");  
 //OR  
 message.Segments("PV1"[0];).Fields(2).Value = "I";  

You can access some of the required MSH fields with properties
 String version = message.Version;  
 String msgControlID = message.MessageControlID;  
 String messageStructure = message.MessageStructure;  

Accessing Components


Access particular component i.e. PID.5.1 – Patient Family Name
 String PatName1 = message.getValue("PID.5.1");  
 //OR  
 String PatName1 = message.Segments("PID")[0].Fields(5).Components(1).Value;  

Check if component is sub componentized
 bool isSubComponentized = message.Segments("PV1")[0].Fields(7).Components(1).IsSubComponentized;  
 //OR  
 bool isSubComponentized = message.IsSubComponentized("PV1.7.1");  

Update value of any component
 message.Segments("PID")[0].Fields(5).Components(1).Value = "Jayant";  
 //OR  
 message.setValue("PID.5.1", "Jayant");  

Adding new Segment


 //Create a Segment with name ZIB  
 Segment newSeg = new Segment("ZIB");  
 //Create Field ZIB_1  
 Field ZIB_1 = new Field("ZIB1");  
 //Create Field ZIB_5  
 Field ZIB_5 = new Field("ZIB5");  
 //Create Component ZIB.5.2  
 Component com1 = new Component("ZIB.5.2");  
 //Add Component ZIB.5.2 to Field ZIB_5  
 //2nd parameter here specifies the component position, if you want to insert the component at a particular position  
 //If we don’t provide the 2nd parameter, the component will be inserted at the next position (if the field has 2 components this will be the 3rd; if the field is empty this will be the 1st component)  
 ZIB_5.AddNewComponent(com1, 2);  
 //Add Field ZIB_1 to segment ZIB, this will add a new field at the next field location, in this case the first field  
 newSeg.AddNewField(ZIB_1);  
 //Add Field ZIB_5 to segment ZIB, this will add a new field as the 5th field of the segment  
 newSeg.AddNewField(ZIB_5, 5);  
 //add segment ZIB to message  
 message.AddNewSegment(newSeg);  

New Segment would look like this
 ZIB|ZIB1||||^ZIB.5.2  

After you have evaluated and modified the required values, you can get the message back in text format

 String strUpdatedMsg = message.SerializeMessage();  

Generate ACKs
To generate a positive (AA) ACK message:
 String ackMsg = message.getACK();  

To generate a negative ACK message with an error message:

 String ackMsg = message.getNACK("AR", "Invalid Processing ID");  


This is my small library to parse HL7 messages.

I welcome any comments/criticism to help improve this.

Friday, 26 April 2013

HL7 Message Player


I was looking for something related to HL7 on Google & saw a link named HL7 Message Player. The name intrigued me, so I followed the link.

It is a free utility by Caristix. I am a bit unsure how to start, because I do not want this article to look like a product review or advertisement. Here are a couple of reasons that prompted me to write this small article.
  • It is free
  • A simple utility with interesting/innovative packaging. Can you imagine HL7 Message Player?

Well, the name and look of this utility are like a media player, but what is the role of a player in HL7? This is where the curiosity starts, and it made me read about this utility.

When I started reading, I found that I had used several other utilities providing similar features, and I had developed some of these features myself for testing purposes. Almost every HL7 integration developer/tester uses this kind of solution as part of their routine job for:
  • Testing a connection between HL7 interfaces
  • Receiving & storing/forwarding HL7 messages from another interface/system
  • Sending HL7 messages to another interface/system

So what’s special about HL7 Message Player? HL7 Message Player does these things in a much more interesting way:
HL7 Message Player
  1. Record (like record button in audio/video tools)
    1. Starts receiving messages from external system on a port specified
    2. Stores the received messages
    3. I tested this feature by sending around 60K messages in one go; it recorded them all and saved them to a file in a little over 12 minutes.
  2. Play (like play button of media player)
    1. Starts sending the messages (recorded earlier) to the ip:port specified
  3. Stop (like stop button in audio/video tools)
    1. Stops the current recording
    2. Saves the recorded messages in a file
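Under the hood, record and play are just MLLP receive and send. The MLLP framing itself is simple: a vertical-tab byte (0x0B) before the message, and a file-separator byte plus carriage return (0x1C 0x0D) after it. A minimal Python sketch:

```python
MLLP_START = b"\x0b"    # vertical tab marks the start of an HL7 message
MLLP_END = b"\x1c\x0d"  # file separator + carriage return marks the end

def frame(message: bytes) -> bytes:
    """Wrap an HL7 message for sending over MLLP (the 'play' side)."""
    return MLLP_START + message + MLLP_END

def unframe(data: bytes) -> bytes:
    """Extract the HL7 message from an MLLP frame (the 'record' side)."""
    start = data.index(MLLP_START) + 1
    end = data.index(MLLP_END, start)
    return data[start:end]
```

In a real sender/listener these bytes are written to and read from a TCP socket; the framing is what lets the receiver find message boundaries in the stream.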

This link provides more details about the utility: http://caristix.com/products/message-player/get-started-in-5-minutes/
Many of us (interface developers/testers) would like a simple HL7 simulator & listener that sends/receives HL7 messages in a single click without much configuration.
I am not asking you to use this utility for your needs, but it provides an interesting way to do your routine tasks. That is what influenced me to write this small article.

Wednesday, 24 April 2013

HL7 Data Analysis Tool


HL7 Data Analysis Tool is a small utility used to analyze HL7 data, extract results based on given criteria, and generate a statistics report in spreadsheet format.

I developed this small utility for a specific purpose: we had a requirement to analyze HL7 messages in a specific way. I feel it should be shared with others who might have developed similar custom solutions or are in the process of developing one.

Problem

My team was using an HL7 viewer application to analyze and count the data in different HL7 components from messages based on a filter criterion (e.g. “MSH.5=3MLAB”). However, they were only able to select one category at a time and had to manually enter the totals in a spreadsheet. The same process had to be repeated for each category, which was time consuming.

Solution

A simple, lightweight tool that reads HL7 messages from a file and analyzes them to generate a statistical report in a spreadsheet. You can configure the criteria for analysis in an XML file. For example, you might want all the counts and specific values for MSH.3 (Sending Application) and MSH.4 (Sending Facility).

The Solution: Data Analysis Tool:

The Data Analysis Tool provides a UI wherein the user can upload an HL7 file containing multiple messages, select multiple components, and get a complete statistical analysis of the data by generating a spreadsheet in a single click. The primary features of the tool are the configurable checkboxes and the filter textbox.
Checkboxes are loaded from an XML file where you can specify the components which require analysis.
The filter textbox provides a way to specify criteria that must be satisfied for a message to be processed during analysis. For example, with MSH.3=MLAB, only messages whose MSH.3 equals MLAB will be processed.
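The criteria handling can be sketched in Python (a hypothetical illustration, not the tool's actual code; it assumes simple `SEG.n=value` criteria and ignores field repetitions):

```python
def get_field(raw_msg, path):
    """Return the value at a path like 'MSH.3' from a raw HL7 message."""
    seg_name, idx = path.split(".")
    idx = int(idx)
    for line in raw_msg.strip().splitlines():
        parts = line.strip().split("|")
        if parts[0] == seg_name:
            if seg_name == "MSH":
                # insert the field separator as MSH.1 so numbering lines up
                parts = [seg_name, "|"] + parts[1:]
            return parts[idx] if idx < len(parts) else ""
    return ""

def matches(raw_msg, criteria):
    """Check a criterion like 'MSH.3=MLAB' against one message."""
    path, expected = criteria.split("=")
    return get_field(raw_msg, path) == expected
```

Only messages for which `matches(...)` returns True would then be counted in the report.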

Feature:

  • Select multiple HL7 messages from a single file.
  • Use filter criteria to filter the records.
  • Filter criteria are generic and can be applied to any component of the HL7 message.
  • Select multiple components via checkboxes and extract a statistical analysis for each component in Excel.
  • Add multiple component checkboxes to the UI by adding a simple entry in the XML file.
  • Generated output is properly formatted and exported to an Excel sheet.

Advantage of Tool:

  • Saves a significant amount of time otherwise required for manual analysis & recording of HL7 data.
  • Filter criteria are generic and help filter messages from multiple feeds.
  • The tool is generic and can be used in multiple implementations for data analysis.
Future upgrades may help in many implementations for analysis, generating reports, and processing feed data as per requirements.

User Guide


Step 1: Launch Application & click on "Browse" button
HL7 Data Analysis Tool - Select File

Step 2: Specify whether you want to process all messages or only those which satisfy some match criteria
HL7 Data Analysis Tool - Select Option

Step 3: Select the components which should be analyzed for count & values
HL7 Data Analysis Tool - Select Components

Step 4: Click on "Export To Excel" button to generate analysis report in spreadsheet
The output file is generated under “My Documents\Data Analysis Tool\Result Files” with the file name [Name of file selected in Step 1]_timestamp, e.g. Test_20121121120405.xls
HL7 Data Analysis Tool - Report


Configure XML


HL7 Data Analysis Tool has a feature where we can add checkboxes (component-level filter criteria) by editing Data Analysis Tool.xml. This adds the checkboxes to the Data Analysis Tool UI.
Location of the Data Analysis Tool.xml file: My Documents\Data Analysis Tool\Application Data\Data Analysis Tool.xml
Each item in the XML represents a checkbox on the UI.
HL7 Data Analysis Tool - XML


Please write to me @ deepak30dec2006@gmail.com to get the source code or executable of this utility.

Download Requests

Name Date Comments
Michael Anrake (michael.anrake@gmail.com) 21st Jun 2013 I work with HL7 regularly and found your blog to be useful and informative. If it's still available, I'd like to download your HL7 analysis tool - it looks to be useful for several projects.
Frederic BUISSON (fredericbuisson@hotmail.com) 6th Jun 2013 I was very interested by this tool and try to undertsand how it works and practice a little to help me to supervise my HL7 connexion.
Huw Jones (Huw.Jones6@wales.nhs.uk) 4th Jun 2013 Am interested in your data analysis tool to process data segments in pathology messages to analyse their content within healthcare here in Wales.
Hans Monnig (hans_monnig@yahoo.com) 31st May 2013 I would like to review it.
Moses Rosario (moses.j.rosario@gmail.com) 4th May 2013 I recently viewed your tool for analyzing HL7 files and am very interested in using it. The current method I have is a homegrown spreadsheet app that accomplishes a few of the things you have done in your application.
Guillermo Zeller (gzeller@gmail.com) 25th Apr 2013 I saw your HL7 tool and I am very interested in it. I am working on a healthcare project, and we need a tool to extract statistics from the messages. Nowadays we are using Excel files... but it is not really productive.

Monday, 22 April 2013

HL7 over HTTP


There are only a few implementations of HL7 v3 around the world, though many of CCD/CDA. Another promising standard, FHIR (Fast Healthcare Interoperability Resources), is on its way.

With all this movement in the market, an integration analyst like me (who still loves HL7 v2) is glad to see new initiatives around HL7 v2. One such initiative is HL7 over HTTP; the specification is available on the HAPI project site.

When I first read about this, about two weeks ago, I started exploring ways to implement it. I had some technical challenges in using this API:
  • I don't know Java (a typical software developer excuse :)
  • I had never had an opportunity to use nHapi (a .NET port of HAPI) in any of my projects, but I found it very easy to learn and use for this exercise.
So I tried a simple implementation in C# for R&D purposes, not to develop an API. I wish to spread this initiative (by the HAPI project team) among people who do not know Java.
Here is the implementation:

SimpleServer – This class uses System.Net.HttpListener

It listens for HTTP requests on a specific URL such as http://localhost:8889/
Members:

 public Dictionary<string, EndPoint> EndPoints;  

EndPoint – This class serves as the base class. We derive from it to create endpoints.
For example, class ADTEndPoint : EndPoint
This gives us an ADT endpoint, accessible at http://localhost:8889/ADT
In this way we can derive all the required endpoints from the EndPoint class.
We will override the ProcessMessage() function of the EndPoint class in each child class to provide endpoint-specific processing of the received HL7 message. Depending on our processing, we return an HL7 ACK message or an error message.
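The full source is shared by email only; as a minimal sketch of what the base class and its response container might look like (assuming only the members used elsewhere in this post — Name, ProcessMessage(), ackCode, ackMsg, errMsg — everything else is my guess):

```csharp
// Hypothetical sketch; member names are taken from their usage in this post.
public class HoHResponse
{
    public string ackCode; // "AA" on success, "0" on failure
    public string ackMsg;  // encoded HL7 ACK, returned with HTTP 200
    public string errMsg;  // error text, returned with HTTP 500
}

public abstract class EndPoint
{
    public string Name; // endpoint name, e.g. "ADT"

    // Each derived endpoint overrides this to process the received
    // HL7 message and build either an ACK or an error response.
    public abstract HoHResponse ProcessMessage(string strMsg);
}
```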
Once we have implemented all the endpoints, we add them to the SimpleServer dictionary:

 server.EndPoints.Add("ADT", new ADTEndPoint());  
 server.EndPoints.Add("LAB", new LABEndPoint());  

server.start() will start the server, which then listens for HTTP POST requests.

If we receive a request for an endpoint that is not present in the SimpleServer dictionary, we send HTTP 400 – Bad Request with Content-Type: text/html.
Whenever the server receives a request on the ADT endpoint, it triggers the ProcessMessage() function overridden in the ADTEndPoint class to process that message and generate an ACK for it.
  • If the ACK is AA, we send it with HTTP 200 – OK, Content-Type: application/hl7-v2
  • If we encounter any error while processing the message, we send HTTP 500 with the error message, Content-Type: text/html
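These response rules could be implemented in SimpleServer roughly as follows. This is a hypothetical sketch: only EndPoints, the constructor, and start() are named in this post; the request loop and the Send() helper are my assumptions. It relies on the EndPoint and HoHResponse types used throughout this article.

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Net;
using System.Text;

public class SimpleServer
{
    public Dictionary<string, EndPoint> EndPoints = new Dictionary<string, EndPoint>();
    private readonly HttpListener listener = new HttpListener();

    public SimpleServer(string prefix) { listener.Prefixes.Add(prefix); }

    public void start()
    {
        listener.Start();
        while (true)
        {
            HttpListenerContext ctx = listener.GetContext(); // blocks until a request arrives
            string name = ctx.Request.Url.AbsolutePath.Trim('/'); // e.g. "ADT"
            EndPoint ep;
            if (ctx.Request.HttpMethod != "POST" || !EndPoints.TryGetValue(name, out ep))
            {
                Send(ctx.Response, 400, "text/html", "Bad Request"); // unknown endpoint
                continue;
            }
            string body = new StreamReader(ctx.Request.InputStream,
                                           ctx.Request.ContentEncoding).ReadToEnd();
            HoHResponse r = ep.ProcessMessage(body);
            if (r.ackCode == "AA")
                Send(ctx.Response, 200, "application/hl7-v2", r.ackMsg); // success: return ACK
            else
                Send(ctx.Response, 500, "text/html", r.errMsg); // failure: return error text
        }
    }

    private static void Send(HttpListenerResponse resp, int status, string type, string body)
    {
        byte[] buf = Encoding.UTF8.GetBytes(body ?? "");
        resp.StatusCode = status;
        resp.ContentType = type;
        resp.OutputStream.Write(buf, 0, buf.Length);
        resp.Close();
    }
}
```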
Here is an example of how to implement an endpoint by extending the EndPoint class:

 class ADTEndPoint : EndPoint  
   {  
     public ADTEndPoint()  
     {  
       Name = "ADT";  
     }  
     /// <summary>  
     /// Process Message  
     /// </summary>  
     /// <param name="strMsg">HL7 message in string format</param>  
     /// <returns>HoHResponse</returns>  
     public override HoHResponse ProcessMessage(string strMsg)  
     {  
       HoHResponse resp = new HoHResponse(); //HoH response - contains ackCode, hl7 ack msg & error msg  
       Message msg = null;  
       IMessage hl7Message = null;  
       PipeParser hl7Parser = new PipeParser(); //nHAPI Pipe Parser  
       try  
       {  
         hl7Message = hl7Parser.Parse(strMsg);  
         msg = new Message(hl7Message);  
       }  
       catch (Exception ex) // Exception occurred while parsing the message  
       {  
         resp.ackCode = "0"; // send response with custom ack code of 0 - means failure - HTTP 500 will be sent  
         resp.errMsg = "Unable to process incoming message, error: " + ex.Message; // err msg  
         return resp;  
       }  
       //implement any custom logic to process the message here  
       //after processing the message, let's generate ack  
       IMessage ackMessage = null;  
       try  
       {  
         ackMessage = msg.generateACK(); //this will generate ack(AA) for the message..  
         //ackMessage = msg.generateACK("AE", new MyException("failed processing the message")); //this will generate ack(AE) for the message..  
         resp.ackCode = "AA"; // set response ack code  
         resp.ackMsg = hl7Parser.Encode(ackMessage); // set response ack message  
       }  
       catch (Exception ex)  
       {  
         resp.ackCode = "0"; // send response with custom ack code of 0 - means failure - HTTP 500 will be sent  
         resp.errMsg = "Unable to generate ACK, error: " + ex.Message; // err msg  
       }  
       return resp;  
     }  
   }  


Now, let's initialize our HTTP server and add the endpoints:

 SimpleServer ss = new SimpleServer("http://localhost:8889/");  
 ss.EndPoints.Add("ADT", new ADTEndPoint()); // add ADT endpoint http://localhost:8889/ADT  
 ss.EndPoints.Add("LAB", new LABEndPoint()); // add LAB endpoint http://localhost:8889/LAB  
 ss.start(); // start http server  
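To try the server out, a client simply POSTs a raw HL7 v2 message to an endpoint URL. Here is a hypothetical C# client sketch (the sample message and all names are made up for illustration):

```csharp
using System;
using System.Net;

class TestClient
{
    static void Main()
    {
        using (var wc = new WebClient())
        {
            // HL7 over HTTP uses this content type for v2 messages
            wc.Headers[HttpRequestHeader.ContentType] = "application/hl7-v2";
            // Minimal ADT^A01 message; segments are separated by \r per HL7 v2
            string msg = "MSH|^~\\&|SENDER|FAC|RECEIVER|FAC|20130422120000||ADT^A01|MSG0001|P|2.3\r"
                       + "PID|1||12345||DOE^JOHN\r";
            // UploadString sends an HTTP POST by default
            string ack = wc.UploadString("http://localhost:8889/ADT", msg);
            Console.WriteLine(ack); // the HL7 ACK returned by the endpoint
        }
    }
}
```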

I do not have any mechanism to provide downloads here; those who are interested in the full source code can request it from me by email. Besides exploring HL7 over HTTP, this source code also demonstrates how to use HttpListener and how to create HL7 messages/ACKs using nHapi.

Download Requests

Name Date Comments
Ned Shah (nedshah@gmail.com) 3rd Jun 2013 I read your articles on your website re HL7 and found them very informative. I appreciate your help and the contribution of your research work to the community. Would you mind sharing your HL7 over HTTP code with me?
Maqbool Hussain (maqbool110@gmail.com) 25th Apr 2013 I have read your article "HL7 over HTTP", which is very interesting and I enjoyed it a lot. It's great work and is really helpful in one of my healthcare projects. I would like to have the complete source code of your sample, as explained in the article.
Nevin Brittain (nevin@healthnumeric.com) 23rd Apr 2013 I would like to see the .NET solution. Please send me the source code.
