Clinical Informatics – Transcript and Audio

The views expressed in written materials or publications and by speakers and moderators do not necessarily reflect the official policies of the Department of Health and Human Services, nor does the mention of trade names, commercial practices, or organizations imply endorsement by the U.S. government.

Transcript

Date of session: 10/30/20

Facilitator
Triona Henderson, MD, MPH
Centers for Disease Control and Prevention

Didactic speaker
Bryan Dangott
Mayo Clinic – Jacksonville

TRIONA HENDERSON: All right. Good afternoon, everyone, and thank you for joining us on this Friday afternoon for our 10th session in this ECHO Pilot Project. So my name is Triona Henderson. I’m a clinical pathologist and the facilitator of this ECHO Pilot Project. I extend a warm welcome to you from the Division of Laboratory Systems at the Centers for Disease Control and Prevention in Atlanta, Georgia. Thank you for joining us.

Today’s interactive web discussion is on laboratory analytics in the community hospital. Our subject matter expert is Dr. Bryan Dangott, who will present today’s topic. Thank you for joining us today, Dr. Dangott. Here’s some technical information before we get started with the presentation. If possible, please use your video. I understand a lot of you are at work and may not have video capabilities. But if you do, please turn it on. It’s really helpful to put a name to the face, and it will enhance the interactivity of these sessions.

All of your microphones are now muted. During the discussion section, please unmute yourself to speak. If you’re connecting by phone and you would like to speak, please announce yourself by name and organization when beginning to speak. If you’re experiencing any technical difficulties during the session, please send a private message to Johanzynn, who’s also labeled as DLS ECHO Tech. And she’ll do her best to respond to your issues.

Finally, we’ve designed relevant sessions based on your evaluation feedback. So we actually go back every month and review what you’ve submitted even for future ECHO sessions. So please continue to fill those out. And we encourage you to fill out the post-session evaluation. Some of you require P.A.C.E.® credits for your continuing education. So there are two options. You can fill out and get a P.A.C.E.® certificate or certificate for participation if you do not require P.A.C.E.® credits.

If you have any additional comments or feedback for us, please send a private message to Johanzynn. Or you can email all of us directly at dlsecho, that’s dlsecho@cdc.gov.

So how are ECHO sessions different from teleconferences or webinars? Our sessions are built around discussions of cases of a clinical laboratory challenge. Our subject matter experts share the work that they’ve done that may be translatable to you in your individual laboratories. Just a reminder that these particular ECHO sessions focus exclusively on United States and US territory clinical laboratories.

Once again, we value the discussion among you that ensues, and we ask you to share your experiences in this discussion session. So here’s an outline of how we will proceed with this ECHO session. First, we’ll have a case presentation and discussion by our subject matter expert. Then I will ask some clarifying questions of our subject matter expert. And then we’ll open up for discussion, ideas, and sharing of experiences.

We’ll have some closing comments, reminders, and then we’ll adjourn. Today’s session is being recorded. If you do not wish to be recorded, please disconnect now. Closed captioning is also being provided, and you can find that link in the Zoom chat box. The audio and transcript will be shared on the ECHO website after this session. So if you have friends or colleagues who would like to review this after the session is over, they can find it on the CDC website.

We’ll also be providing slides for the presentation via email to registered participants. If you are on this call and did not register directly through CDC TRAIN, you can send an email to dlsecho@cdc.gov for us to manually register you. And then we’ll get those slides out to you. So here’s a short bio sketch of Dr. Dangott.

So he is a pathologist with subspecialty training in informatics and hematopathology. He has worked in academia and industry and recently joined Mayo Clinic Florida as the director of clinical informatics. His current interests include enterprise data analytics, digital pathology, and machine learning.

We now invite Dr. Dangott to begin his presentation. Please be thinking of similar situations or questions that you may have for him or situations that you’ve encountered that you may apply to his experiences. We expect to have a robust discussion. And please offer your questions and feedback in the discussion period. Dr. Dangott, the floor is open.

BRYAN DANGOTT: Thank you, Dr. Henderson. And I thank you for the opportunity to present today. I will have some interactive slides with a polling format. So people can open up a web browser, or you can even use your phone to answer some of the questions that will be interactive, so I can gauge what people are interested in and kind of tailor the content to the audience interests.

So we’re going to talk about analytics in the community hospital. The material is very generic. It can apply to a variety of different settings, from one hospital to multiple hospitals. There are different approaches that can be used inside of analytics. And we’ll just go through the first few slides here and get into the presentation. There are no disclosures for the presentation today.

Learning objectives include laboratory analytics with scenarios and case-based examples. And we’ll compare some of the benefits of different approaches. The first poll is, what — let me go back on the slide here. So here’s how you get to the slides, sli.do. Just enter that in your web browser. It will open up a window that looks like what you see on the right-hand side of your screen with the event code, 75994.

And if you bring that up, the first question is, what is your role or interest in analytics? And you can answer with multiple different options here. And it should be live. So as people answer, we’ll see updates as they occur. There we go. A couple of people have responded. So far, there are only three answers in here. And I will answer a couple of these as well.

There is also a Q&A. If you go to this website, sli.do, and enter that number, there is a Q&A tab as well. So you can enter questions during the presentation that we can review at the end of the presentation. So I encourage you to go to this to answer some of these questions. It will help gauge the content.

I think there were 15 or 20 people, and we only have six participants. So there will be multiple polls in here. So I will give people a moment to get logged in. So far, we’re seeing quality improvement and quality assurance as the two most popular answers, operations and executive administration innovation also being part of the responses.

Nobody from research or the IT or LIS team. So that will help me gauge what level of detail to go into. And the business development team did not have anybody in this particular group. So let’s define analytics. And you can go to that website; the polling site will come up again later in the presentation, and there are two more polls.

So what is analytics? It’s really looking at data. And there’s a cyclical process. So you gather the data. You analyze it in some way, shape, or form. And then you’re going to deliver that in a meaningful form to make a decision, essentially. And that may be an iterative process. So you go back and do the same thing over and over again to see if any change management or any small process improvements that you’ve implemented are really having an impact.

And this is a generic definition. There are many, many different ways to look at analytics. But it’s really using the data that you have in your system in a meaningful way. This is a presentation of the types of different analytics which can be performed.

So from the bottom left, looking at descriptive analytics, moving up to diagnostic analytics is why did something happen — so looking at things retrospectively. And then, kind of shifting the time frame to predictive analytics, looking at maybe what might happen.

And then, really, the far upper-right-hand quadrant of this graph is prescriptive analytics. So how can we make it happen? And most organizations are really trying to transition from the diagnostic analytics to the predictive analytics. And that’s basically the transition of time frame, so looking at retrospective versus forecasting.

Analytics scope — so the laboratory is a big source of data in the EHR. The lab data includes clinical pathology, anatomic pathology, and molecular pathology. There are various different ways to break that down. They may reside in different LIS systems. Some laboratories may only do clinical pathology, and some may only do anatomic or molecular. But a lot of the community hospitals are going to have at least clinical pathology and probably anatomic pathology, and they may be sending molecular pathology out.

But that is only one piece of the data that’s really able to be analyzed. In the entire health care spectrum, you can add in a lot of EHR data. And you’ll see in that second column many of the other parameters which can be analyzed. And the primary one which is of interest is probably diagnoses. So correlating laboratory data with diagnoses and possibly vital signs, looking at the pharmacy data, potentially looking at medications.

So what medication is the patient on that might be impacting some of the laboratory data? And looking at radiology data — so with digital pathology, there is an integration of some of the morphology in the radiology images with what we’re seeing in digital pathology. And then there are also the organizational metrics. So this is really the administrative component of health care: looking at how our inpatient facility is performing versus ambulatory sites, and looking at other affiliated institutions.

One of the other big considerations is analytics scope. So who are the users? Are they analysts, research, clinical, executives? That’s kind of why I asked the initial question: what are people’s interests, and what are their roles? So we have most people talking about QA and process improvement. So what sources of data do you need to do that? And those are going to be dependent on your institution and what you’re really trying to find improvements on.

So in terms of defining the use case — there’s so much laboratory data. What do you really want to analyze? I just present, on the left-hand side, various terms which are thrown around in the context of analytics and looking at the metrics workflow. So that could be a process improvement effort trying to improve workflow and potentially turnaround times. Population health is another big one, and specimen tracking.

And just to give you a scenario where you may be looking at process improvement, consider the types of tests which are ordered. So on the right, there is a TSH test, and we’re looking at the patterns of the results. In column A of that chart, the green represents a normal value, the blue represents a high value, and the red represents a low value. So this gives a pattern of TSH results for that facility.

That may be an inpatient facility or an outpatient facility. But that pattern is important because the pattern that you see may actually dictate what other tests are being ordered at that time. So for instance, if a free T4 is done as a reflex, it’s not going to trigger on a test which has a normal TSH value, which is the case maybe 90% of the time. If it’s being ordered up front instead, it runs regardless.

Now, if they’re not reflex testing, and they’re just ordering it and getting a result every single time, that may be testing which is not necessary. And that’s going to depend on your endocrine team and your clinical providers and what they think is appropriate or inappropriate in that context. So there are a lot of complexities in doing analytics in the laboratory.
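To make that kind of utilization review concrete, here is a minimal sketch in Python (not from the session), assuming a hypothetical extract file and column names (order_id, test, flag); it illustrates the idea rather than the analysis on the slide.

    # Sketch of a TSH / free T4 utilization review (hypothetical extract and columns).
    import pandas as pd

    results = pd.read_csv("tsh_ft4_extract.csv")   # one row per result: order_id, test, flag

    tsh = results[results["test"] == "TSH"]
    ft4 = results[results["test"] == "Free T4"]

    # Pattern of TSH flags for the facility (N = normal, H = high, L = low).
    print(tsh["flag"].value_counts(normalize=True))

    # Free T4 results that accompany a normal TSH on the same order: if free T4
    # is ordered up front rather than as a reflex, most of these add little.
    merged = ft4.merge(tsh[["order_id", "flag"]], on="order_id",
                       suffixes=("_ft4", "_tsh"))
    share = (merged["flag_tsh"] == "N").mean()
    print(f"Share of free T4s drawn with a normal TSH: {share:.0%}")

Whether that share represents unnecessary testing is a judgment for the endocrine team and clinical providers, as noted above.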

The way that labs report result data may vary within the same institution from one laboratory to another. So if you have multiple hospitals in your setting, the CBC may have different normal ranges at one facility versus another. And that makes it difficult to compare results. So you need some context as to how you’re going to look at that data.

There also may be point-of-care instrumentation, for instance, glucose point-of-care analyzers. And you may have a glucose read coming off your chemistry analyzers. They have different sensitivities, and they may also have different ranges. They also may have different confounding factors. So a point-of-care instrument may have different interferences affecting it compared to the chemistry analyzer.

So when you look at these things in the EHR in a larger clinical data warehouse, you have to be aware of which data you’re pulling from. And you want to put it in the context of the type of instrumentation it was run on and the reference range for that instrument.
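One way to keep that context, sketched here with hypothetical file and column names, is to carry each result’s own instrument and reference range and flag against that range rather than against a single shared range.

    # Sketch: flag each glucose result against the range of the instrument that
    # produced it (hypothetical extract and column names).
    import pandas as pd

    df = pd.read_csv("glucose_extract.csv")   # columns: value, instrument, ref_low, ref_high

    df["flag"] = "N"
    df.loc[df["value"] < df["ref_low"], "flag"] = "L"
    df.loc[df["value"] > df["ref_high"], "flag"] = "H"

    # Abnormal rates by instrument, e.g. point-of-care versus chemistry analyzer.
    print(df.groupby("instrument")["flag"].value_counts(normalize=True))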

There are also many other complexities in resulting. So laboratory data frequently has numeric values. It may also have text values in the context of anatomic pathology or even in microbiology. It may be a combination of numeric and text: the CFUs in a microbiology report may be quantitative, but you may have bug names, which are text-based. And that can be difficult to analyze and run alerts on.

So an alert, which is a flag on a numeric value with a reference range, is relatively easy to do. But it becomes more difficult when you’re looking at narrative data. One of the things which is very important in looking at analytics is that all these pieces of data are sent throughout the health care system using HL7 messaging. And I’ll show an example of that a little bit later.

But that can have a major impact on how you track down challenges or problems in the system when a message is transferred from one system to another. And these special characters, which I have listed here, like the tab, tilde, caret, and pipe character — those all have special meanings in HL7 formatting. If they’re included in a narrative report, it can impact what that report actually shows.

I’ve had scenarios where a report was coming in from a reference lab, and it came in and said approximately. And it used a tilde to represent approximately. And everything after that in the report was lost. You have to know that that occurs, because you could look at that report and say, oh, this report’s done.

But if you read the report, and it says approximately, and then there’s nothing after it, we had to track down the messaging between our LIS system and the reference laboratory system to understand that that tilde character was being rejected and basically truncating the entire result.

I’ve also seen the situation where, for the HL7 message in the EHR, there was a field that, in one version of the EHR, had an unlimited character length. An update was applied, and that character field went down to a limited number, say 255 characters.

Well, that caused a truncation of the results, which impacts results which are longer than 255 characters. And sometimes a microbiology result, for instance, would have multiple descriptions of bug names and other things in that field that were truncated or shifted.

That is not what you want to see when you’re dealing with microbiology data in patients that have sensitivities to specific — or bugs that have sensitivities to specific antibiotics and resistances to specific antibiotics. A truncation in that can be very problematic. Understanding how these things occur and where the messages pass is one of the takeaways from this presentation: the infrastructure that you deal with is key to understanding how to analyze data and how to troubleshoot when a problem occurs.
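A simple screen for that failure mode, assuming a hypothetical export of stored result text, is to look for results sitting exactly at the suspected field limit; this is an illustrative sketch, not a tool mentioned in the session.

    # Sketch: screen stored result text for possible interface truncation.
    # The 255-character limit is the example from the talk; the extract and
    # column names here are hypothetical.
    import pandas as pd

    df = pd.read_csv("result_text_extract.csv")   # columns: result_id, result_text

    text = df["result_text"].fillna("")
    at_limit = df[text.str.len() == 255]           # exactly at the suspected limit
    print(f"{len(at_limit)} results sit exactly at 255 characters; review these")
    print(at_limit["result_id"].tolist())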

Here’s an example of an HL7 message. And they don’t normally come color-coded like this. This is a generic message. You can see the link on the bottom for where this came from; that link has a good description of the many HL7 fields. But you have these headers. On the left-hand side of this slide, you see MSH, PID, ORC, OBR, NTE, OBX. Those are the HL7 segment headers.

And right after that, you see that pipe character. The pipe character is what separates the HL7 fields. You can think about this like an Excel spreadsheet: that pipe would separate one cell from the next cell. And you also see a caret in there, which is a subfield delimiter, so each field can be separated further with a caret. And the tildes in here actually represent repetitions.
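As a concrete illustration, here is a minimal sketch of splitting a made-up OBX segment on those delimiters; production work would use an HL7 library rather than raw string splits, and the segment shown is an assumption, not the one on the slide.

    # Sketch: how HL7 v2 delimiters structure a segment (made-up example).
    # | separates fields, ^ separates components, ~ separates repetitions.
    segment = "OBX|1|NM|6690-2^WBC^LN||7.2|10*3/uL|4.0-11.0|N|||F"

    fields = segment.split("|")                    # field separator
    code, name, system = fields[3].split("^")      # component separator on OBX-3
    print(code, name, system)                      # 6690-2 WBC LN

    # An unescaped ~ inside narrative text is read as a repetition separator,
    # which is how a tilde meant as "approximately" can truncate a report.
    narrative = "Lesion measures ~3 cm in greatest dimension"
    print(narrative.split("~")[0])                 # everything after the tilde is dropped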

So you can see how these special characters can impact some of the resulting. And this is particularly important in text-based reporting like anatomic pathology. So anatomic pathology, when it goes across the HL7 interface, frequently goes across in this form. And that’s one of the reasons why the format of anatomic pathology reports does not look the same when we sign them out in an anatomic pathology system versus when they get to the EHR.

There are ways to make them look the same by using a PDF that gets sent across in a clinical document architecture, which is a different kind of interface in HL7. But it’s a little bit beyond the scope of what I want to present today. So here is a basic infrastructure slide talking about the electronic order flow.

So as a physician places an order, you follow the green arrow pathway. Generally, it goes from an EHR to an interface engine. You can have a variety of different interface engines; you may not even know one exists, but it generally is present, and there’s a team that manages it. If the order is directed at a reference lab, it may just bypass your laboratory system entirely, get sent out to that reference lab, and get communicated back into the EHR without even touching the LIS.

In some cases, it goes to the LIS, and the LIS would have its own direct pathway to the reference lab, which was what I talked about before with that tilde issue. And then, if the test is being done within your own laboratory, then that laboratory LIS system will pass a message off into a piece of middleware which communicates directly to the instruments. And the instruments pass the results back, and the pathway goes back the other way up.

So there are multiple hand-offs between systems — these are generally software-based systems until you get to the instrument level. So the software is passing messages across. And when a problem occurs — so if there’s a problem between the LIS and the interface engine, you want to be able to focus in on that as fast as you can, especially when you detect a problem which can have a critical impact on patient care.

And this is generally going to be handled by your LIS team. I didn’t see people from IT or LIS on here. But if you see a problem in your laboratory results, contact your LIS or IT team so that they can trace where it occurs.

So going back into the bigger picture of analytics — your systems and how well they are connected are going to impact what data you have access to. Some institutions are going to have multiple LIS systems. It could be an AP system and a CP system. Or one hospital has one AP system, and another hospital has a different AP system. It can get very complicated. And you end up having a lot of different systems to manage and different sources of data.

The same thing can happen with inpatient versus outpatient. And then you have other systems like the EHR, pharmacy, finance, and even infection control, which can have its own separate module. Even though the microbiology module may have responsibility for reporting, infection control can actually be a separate module managed by another team.

So looking at a single hospital with one lab and then a couple of send-out labs — this is a basic model of how communication occurs. It takes all those other pieces out and just looks at where orders are coming from. This is relatively simple until you add clinics. So now you may have clinics, which are affiliated with your health care institution, which are also sending orders. And these may have different pathways.

They may have different timelines. So there may be an order which is sent to your inpatient lab directly from that clinic. Or it may be sent on top of the existing pathway for the inpatient facility. Alternatively, it can be sent directly to a reference lab. And this becomes important when you’re analyzing data: you need to understand where the actual data is coming from. So let’s look at some actual lab metrics, drilling down from the entire health care scope into just the laboratory.

So for laboratory metrics, turnaround times are very common. A lot of people here are involved in QA and quality improvement, so turnaround times are one of the things that commonly come up. Testing volumes come up. What’s the cost of testing? How do we staff for particular days of the week or different volumes? And what does that workflow look like, and how can we improve those things?

So here is another polling question. And I just want to get a gauge of what challenges people face in performing analytics in their institution. Two people have responded so far, and I know one of them is me.

HENDERSON: And the other is me. For those who jumped on late, go to S-L-I-D-O or slido.com, and then you enter the code 75994. I know some of you have jumped on later after we did the first poll.

DANGOTT: This is for me to tailor the content to what your interests are. So if I know what your challenges are, I can maybe help you address those. It also helps the other people in the presentation understand what other people are dealing with. So maybe you don’t feel alone if you’re the only person that’s struggling with access to data. I think that’s a common theme, actually.

HENDERSON: We had six people answer the last time. We have more people on this time, and fewer people answering.

DANGOTT: That’s OK. It gives me a general trend. And this is somewhat what I expected. I thought there’d be more people. And not sure where to go or how to get started? Access to the data — really, if you can’t access the data, your ability to analyze data is going to be extremely limited. And sometimes the data is going to come already processed. And if it comes to you already processed, you have to really depend on the person that processed that data to present it in a format that’s helpful to you.

So that’s where the "data is not displayed in a usable format" answer comes in. So access to data is a challenge, especially in health care, primarily because the regulations and security around health care data are very tight. And IT teams generally don’t provide access to some of these tools unless you’ve been through some training or demonstrated that you have the ability to do this.

So if you’re going to analyze the data, you’re going to want to be very closely involved with your LIS or IT team, or have a QA team that you work with to do these things. And these top four answers seem to be the most common. So there is an overwhelming amount of data. If you don’t have access to the data or people that can analyze the data, it’s going to be a challenge. It’s going to present hurdles that you need to overcome.

And tools to analyze the data — I’ll talk about tools to analyze the data a little bit later in the presentation. And access to the data may also be one of the things which comes up in terms of potential solutions. But looking at turnaround time, I think clinicians, when they think about turnaround time, think about the time from when they placed the order to when the result is received. But turnaround time is really variable when you start from when the order is placed.

If you’re in an emergency room or the stat lab, when you place an order, that turnaround time from when it’s ordered to when it’s collected may be a little bit more reliable than in the setting where you’re in an outpatient clinic and you schedule somebody for a follow-up visit to go to the lab three weeks later to get a CBC before the next appointment. That order-to-collect time is very long. So there’s a lot of variability. And that’s why there are question marks in that first segment.

So collection and transport — collection and transport is going to vary depending on where the lab is. So if there’s a collection center, let’s say, from an outpatient draw facility that’s sending out to another laboratory, that transport time can be extremely long. In fact, that laboratory may be in another state. That’s another thing which is hard to really measure. It’s institution-dependent. To get a meaningful perspective on that, you have to understand the logistics of how specimens are transported.

From the time the specimen is received in the lab to the time it is resulted, the variability is probably the lowest between those two data points. Because it really measures how the lab is handling that specimen. So you account for when the lab picks it up and when it’s placed on the instruments to the time that result is given. That’s really the thing that the lab has the most control over.

And then, that final part, the result-to-review segment — we often do not have the tools to know when the clinician reviewed it, or it may be in another system. So you may have access to LIS data, but it’s the EHR that tells you when the clinician reviewed it. So that’s another question mark on that timeline, because it becomes more and more difficult to measure that.
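A minimal sketch of breaking turnaround time into those segments, assuming a hypothetical extract with a test column and one timestamp column per step (not the data from the slide that follows):

    # Sketch: turnaround-time segments from collect/receive/result timestamps
    # (hypothetical extract and column names, one row per specimen).
    import pandas as pd

    times = ["order_dt", "collect_dt", "receive_dt", "result_dt"]
    df = pd.read_csv("tat_extract.csv", parse_dates=times)   # also has a "test" column

    df["collect_to_receive_min"] = (df["receive_dt"] - df["collect_dt"]).dt.total_seconds() / 60
    df["receive_to_result_min"] = (df["result_dt"] - df["receive_dt"]).dt.total_seconds() / 60

    # Medians by test: receive-to-result is the segment the lab controls most directly.
    print(df.groupby("test")[["collect_to_receive_min", "receive_to_result_min"]].median())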

So let’s look at turnaround time data. This is the collect-to-receive time. This is representative of the top 20 tests in the ER from our community hospital example. So these will be your common tests, like CBCs, CMPs, troponins, things like that. You see the collect-to-receive time is actually relatively consistent in comparison to the receive-to-result time.

So this collect to receive measures the transport time. So it’s collected in the ER and transported to the lab. If your ER had its own laboratory within the walls of the ER, that time may be much shorter. But once you put that specimen on an instrument, you have a receive-to-results time. So that’s dependent on how much handling is done in the laboratory and how long a specimen actually takes to go through the instrument.

When I presented this data to administration, I was actually surprised that for some of these tests, more than 50% of the total turnaround time from collection to result was the collect-to-receive time. So you can see, generally, almost 30 minutes for some of these tests. These are representative of specimens coming down from floors of the hospital to the laboratory before they’re actually accessioned into the lab — so up to 30 minutes in some cases.

And the tests themselves — so a CBC has a relatively quick turnaround time, and troponin has a relatively long turnaround time. That’s just based on the instrumentation. So here is another example of data, and I think this one crosses some of the systems, going from the LIS system to the EHR system, which is where the diagnosis fields live. This is representative of RBC transfusions correlated with the hemoglobin level at which the transfusion was given, immediately prior to the transfusion, for all specialties in one facility.

This is everything. It does not stratify things, and I would like to stratify further: looking at patients who are chemotherapy patients versus GI bleeds versus trauma. An adult-versus-pediatric population is going to have a different histogram of when these transfusions are given. Does the patient have a marrow disorder? Do they have nutritional anemia? All these kinds of things are important in stratifying the data and really providing more meaning to the data.
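A sketch of that correlation and the stratification, assuming a hypothetical merged extract with one row per RBC unit, the hemoglobin drawn immediately before the transfusion, and a patient-category column:

    # Sketch: pre-transfusion hemoglobin distribution, with room to stratify
    # (hypothetical extract and column names).
    import pandas as pd

    df = pd.read_csv("rbc_transfusions.csv")   # columns: pre_hgb, category

    # Overall histogram in 0.5 g/dL bins from 4.0 to 12.0.
    bins = [x / 2 for x in range(8, 25)]
    print(pd.cut(df["pre_hgb"], bins=bins).value_counts().sort_index())

    # Stratified: chemotherapy vs. GI bleed vs. trauma, adult vs. pediatric, etc.
    print(df.groupby("category")["pre_hgb"].describe())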

But this exists in the EHR. So getting access to and correlating laboratory data with the EHR is important in adding value to that LIS data. A lot of health care is talking about value, and value in terms of outcomes is hard to benchmark. Quality versus cost is one of the big things measured in business terms. Then we have outcomes versus patient experience, and direct costs versus indirect costs.

So value in health care is hard to measure. Value in the context of a manufacturing setting is a lot easier to measure. And the difference is, in health care, we have biologic heterogeneity, and we have variable presentation timelines. So the metrics for patient outcomes — length of stay, quality of life, disease-free survival, readmission rates — all these kinds of things are some of the measures that we use.

But as you can see on the right, the timeline for each patient is going to be very individualized. So we can’t necessarily translate one patient’s presentation timeline to another patient’s presentation timeline. And we can try to do things like the lab value relative to the transfusion. Those kinds of things are a little bit more measurable. But when that lab is drawn is going to be dependent on when the patient was scheduled for that draw or potentially how they presented to the institution.

So this is one of the areas where I think the laboratory has a large opportunity, especially now and in the near future with machine learning and AI: demonstrating value with laboratory data. The lab has very broad reach. Basically, every department in the hospital setting will use laboratory data at some point in time. Laboratory data is high impact and actionable. Many decisions are made on laboratory data by itself.

Oftentimes, laboratory data is quantitative. So think about your CBCs and your CMPs. You can mix in image analysis for anatomic pathology and provide quantitative results that way. And that also ties into being quantitative and high impact. So an ER or HER2 result on a patient will determine some of the therapeutics for chemotherapy.

Laboratory data is high volume. There have been reports that up to 70% of the data in the EMR is based on the lab. And when somebody is making a clinical decision, many of those decisions are based on laboratory data.

And then, finally, the laboratory is very low cost with respect to health care. It’s less than 5% of the total health care spend. So the laboratory has this great value proposition: it’s high volume and low cost, and with all of these things being high impact and quantitative, it is great for machine learning and AI.

Here’s an example, not even machine learning, just basic metrics for looking at different clinics. You see the color pattern of test results at the top right versus the color pattern of test results at the bottom right; those represent two different practice settings.

The top right is a general family practice, and the bottom right is an endocrine practice. You see the pattern is completely different. You can actually compare one provider to another provider within the same practice setting and come up with metrics in that setting.
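A sketch of that kind of comparison, assuming a hypothetical extract with a result flag plus practice and provider columns (not the data behind the slide):

    # Sketch: compare the normal/high/low result mix by practice and by provider
    # (hypothetical extract; columns: practice, provider, flag).
    import pandas as pd

    df = pd.read_csv("clinic_results.csv")

    # Family practice versus endocrine practice will show very different mixes.
    print(pd.crosstab(df["practice"], df["flag"], normalize="index"))

    # Providers compared within the same practice setting.
    print(pd.crosstab([df["practice"], df["provider"]], df["flag"], normalize="index"))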

Here’s another poll. One of the questions that I wanted to gauge is, how is your lab currently handling analytics? This will tie into some of those prior questions looking at quality and performance improvement and access to data. We’ll see what responses people have on this.

OK, so it’s the same five or six people that are responding. Using the LIS or EHR — I think that is common. Many LIS systems have tools built into them to do some of the turnaround time metrics, the QA metrics, those kinds of things. Centralized enterprise analytics portals are becoming more popular. Clinical data warehouses are a great way to do analytics. And a lot of people are using Excel, Python, or R.

HENDERSON: And there’s one Excel. Somebody is having an issue connecting, so it’s also Excel.

DANGOTT: OK. So I’ll compare some of those solutions in the next part of the presentation here. So this is the clinical data warehouse. It basically takes data from a variety of sources. On the left, we have the molecular lab, the clinical lab, the anatomic lab, and the blood bank — common sources of data. And really, all of those things are generally hosted within the laboratory.

The next icon down is the pharmacy. And below that, we have radiology. All those things are interacting with the major hospital system, which has different data. So we have metrics on providers, some of the admits and discharges and claims and those kinds of things. Some institutions will have patient satisfaction data and outcome data. They may have medical legal data and safety data as well. And really, all of these kinds of things can help add value to analytics. And really, the clinical data warehouse can host all this and provide a lot of insight.

Some barriers to insightful analytics may be technical, financial, or organizational — it could be all three. So I already talked about storing data in different systems. An LIS hosts laboratory data. The EHR hosts clinical data, but it also hosts laboratory data. So in some cases, it’s easier to do analytics from within the context of the EHR. There may be some things which the EHR does not have access to.

So the EHR may not have access to some of the quality control samples which are run on the instrumentation, and it may not have access to which instrument performed a specific test. So that’s one trade-off between the EHR and the LIS. Technical challenges can be, does the data exist in a format that can be queried? That is a big challenge.

If your organization does not have a clinical data warehouse or an infrastructure, or a vendor which provides an infrastructure to do that, it can be very difficult. So you may get a huge spreadsheet of data, which is very time consuming to process if you don’t have staff that processes data in that way, so it can take a very long time to get to an answer. Then financial — some of these analytics tools cost money or have software costs associated with them.

And then organizational — is the organization really focused on analytics? Are there data gatekeepers who are preventing people from having access to data in such a manner that it becomes difficult to even analyze data? And really, how is the data going to be used? If there are institutional champions that say we want to be focused on quality improvement, and this is how we want to do it, those people can help remove some of the barriers to getting analytics tools.

I already mentioned most of these. I think these are just bullet points from things we’ve already talked about. So silos and fragmentation — this is a challenge everywhere in health care. Health care data is fragmented, not just between the LIS and the EHR. Within an institution, you may have multiple LISs and EHRs.

And when you think about that across the entire country and think about how many different EHRs are out there and how many different LIS systems are out there, it becomes very difficult to think about analytics on a national scale because of these silos that are out there.

There are many different software vendors, many different interfaces, different laboratory testing methodologies, different data definitions. So with an EHR implementation at one organization, you may have another organization with the exact same EHR but different data definitions, with data stored in different ways and in different parts of the database, so it becomes difficult to analyze.

And then the frame of reference — if your frame of reference is the lab, how are you going to get access to other data if you need it? Take the blood bank example — how do you get access to those diagnoses which impact the transfusion? How do you address gaps? Is it done through software? Is it done through enterprise solutions? Are there ways that you can process the data? Are there people that can help you? Those are all different ways that you can bridge some of these gaps.

Now we’re kind of going into the solutions. So are you going to build something or buy something? If you’re going to build, you probably have a very strong IT infrastructure, a very strong LIS team, and a very strong analytics team. If you don’t have those things, buying is probably an easier solution. You can buy software off the shelf which will do some of the things you might want to accomplish. It might not do exactly what you want, but it can get very close to it.

The advantages of buying are that vendors have domain experience. This is what they do. They talk to multiple different hospitals. So they’ve seen this question maybe multiple times and in many different contexts, and they have different ways to present it. They can implement things much faster than an internal team. Some internal teams may take six months before they even clear the queue of projects which are ahead of your request. So those things are important to consider.

One of the other things on here — a vendor is generally going to handle how they approach security in a very standardized way, which may be different than when you’re building something internally. Internally, building the security protocols may be a little bit more difficult.

So vendor perspectives — I asked vendors what they think the difference is between buying a solution versus building a solution. And they say that health care organizations’ specialty is caring for patients and hospital operations, and they may lack core competencies for software development. In most cases, that’s true.

On the other hand, the analytics software company has economies of scale. They can build something and serve hundreds of sites or hundreds of clients with the same solution. And they have the experience, and they can spread these costs: build it once, sell it many times. So they have the ability to scale.

These are some of the tools that are available: Qlik, Tableau, Alteryx, SAS. Again, I don’t recommend any one of these particular products, and I’m not putting them up here as an endorsement. These are just examples. And then there are lab-specific solutions. The different LIS vendors and different LIS analytics vendors are listed on the bottom.

Also, reference labs — so reference labs often have analytics portals for their clients. So they can provide benchmarks of what other clients may see in terms of the standard turnaround time. They can benchmark some of the provider metrics.

But they may not be able to analyze what’s beyond their walls. So if you send a specimen to a reference lab, the reference lab does not necessarily — and probably does not have access to the diagnostic information or other elements of the EHR. So that’s a limitation that you may see.

So this slide is a comparison. It’s a very busy slide. You probably want to spend a little time on here if you’re actually making a build-versus-buy decision. I’ll just give one example on here just to kind of highlight how this is organized. So on the left is the build; on the right is the buy. And as you move down — so if you’re going to do benchmarking internally among your own physicians, if you develop that internally, you can do that.

In contrast, in some cases, external benchmarking may be very difficult. How are you going to benchmark your results against another hospital system? How do you know what your hospital system is doing in comparison to everybody else in the country? One of the ways to get access to that is using a lab-specific vendor, a reference laboratory, or even consulting vendors, who can provide that data.

So then, I’ll open it up for questions. Thank you for your attention and participation in the polls. I think I got some helpful guidance and know what to talk about.

HENDERSON: Thank you so much, Bryan. And yeah, some people did have technical difficulties, so I asked them to just text me their responses to the surveys. I don’t know if I have feelings of PTSD going back to working in the lab with data. But we were lucky, even being in a community practice, to have a clinical informaticist who’s a pathologist and not just a tech person, like you.

And so listening to you, I wondered, how many of the community and rural laboratorians who are our target audience have access to that? Do you have any advice for them on how they would ensure that people with the appropriate lab experience are part of the lab analytics decision-making? Because I think I’ve been on both sides. I’ve been in an academic place where you have laboratorians who happen to have informatics training.

And then, what do they do? They recruit them to build, and then they leave, and they never come back to the lab. Or you train up your laboratory people to have some type of informatics and data analytics background, but then you kind of take them away, and then they start loving this new thing. I just find it hard to have a balance. Do you have any advice from your years of experience?

DANGOTT: I think it depends. It depends on the organization. So if the goal is to build a strong analytics team, having people who come from a laboratory background and become involved in IT is an easier pathway than having IT learn about the laboratory. The laboratory has so many nuances and workflows, and things like that are more complicated. I’ve seen both pathways work.

So people can come from IT into laboratory analytics, but generally, it’s easier for somebody to transition from a med tech role with some computer interest and computer background into an analytics role. And for anybody that is really involved in the lab, let’s say they’re involved in QA, I think the most important thing is to get engaged and go to QA meetings. Ask about the data. Ask if you can see different parts of the data. There may be people in that room that know how to do it.

HENDERSON: Awesome. And do you know of resources that our community partners can use online? You’ve worked in the spectrum of places from academic to community and everything in between. Is there laboratory informatics training, or is there an annual course or something?

DANGOTT: There are — so I’m also involved in the Association for Pathology Informatics. They do have educational materials. There are educational materials provided by ASCP and others. It really varies based on what people are interested in. Most of those are general informatics. They don’t really go into the analytics side of things.

I think, for analytics, if you want to learn analytics, the best way to learn it is by doing it. So let’s say you have a problem that you want to address or try to solve: get access to that data in some way, shape, or form. Understand how you need to present the data and how you need to analyze it. And you can do a lot in Excel. I do a lot of things in Excel, and most people have access to Excel.

There are things which can be done in Excel that can get you very far along the lines of presenting data. Learning how to do that in Excel is a process. So that’s where the challenges come in. It can take a very long time to learn how to do that.

HENDERSON: Yes, I had to do it by force and by fire. So we’re going to open up the floor to our community and rural partners. I’m really curious to hear any challenges, or what you’ve done to become involved in laboratory analytics. Or how do you even present your QI and QA data? We can go back to your other slides, because we’ll do this at the end.

Anyone, you can either type your question in, or you can unmute yourself, or raise your hand. We’ll unmute you. I don’t want to have to call on anyone. No questions? Hold on, let me get out my participant list. Maybe that’s a good thing, Bryan.

DANGOTT: I don’t know. Could be. It might be a bad thing. I checked the Q&A on the slido.

HENDERSON: Oh, yes, did anybody submit Q&As? I forgot about that. So there’s also a Q&A section in the slido. OK, perfect. Martha Casassa has a question. That’s a good question: how do you justify having an informatics FTE on your lab payroll to the C-suite?

DANGOTT: So it depends on what you really want to do. There are many different ways that analytics can help impact the bottom line. First, understanding the process of where you are is, I think, the most important thing: looking at retrospective data and understanding that there are things in that data that may have opportunities for impact on the future state.

One example of that is looking at laboratory testing that may be inappropriate or may be underutilized. With that TSH example which I showed, there was a case in a community hospital setting where free T4 was being ordered up front very frequently. And in the inpatient setting, there was no extra revenue for that testing, which is generally how it works for federal payers and state payers. So there are things like that which can be done.

Now, to do that very well, you need to have access to the financial data as well and the reimbursement data. And there is also the outpatient side. So looking at the outpatient data and being able to provide analytics to a new clinic — if you’re operating in a community that has smaller clinics that don’t have access to a lab themselves, they’re going to look for reference lab partners. And that community hospital may be a reference lab partner competing with a lot of other larger reference labs.

So being able to build that is really a longer-term vision. The big reference labs are going to have access to a lot of analytics tools. If you’re just starting to build, it’s going to be a challenge.

The other way that this can help is in workflow. Inefficiencies in workflow are extremely expensive. If you think about the number of specimens which run through the lab, they’re generally in the hundreds of thousands per year for a very small facility to several million per year for a larger facility. So every time a specimen runs through the system, inefficiencies add up. So addressing those inefficiencies and being able to provide quality improvement on them is another thing that can help justify those costs.

And then, also, it depends on the perspective. There is a clinical laboratory perspective, and then there is an anatomic pathology perspective. So digital pathology and how that’s used — how that data may be used to develop software and algorithms to help your anatomic pathologists. So there are many different ways to do it.

And I think one of the biggest things which is out there right now is that there is so much out there in the EHR. So EHRs were adopted very rapidly. But it doesn’t mean they were adopted very efficiently. So inefficiency in laboratory testing I think is still a challenge for health care in the United States. I think there’s a lot of opportunities for improvement there.

HENDERSON: Awesome. And even to Martha’s question, that question was on an ESM panel, I think at the end of last year. And that question came up: how do you justify the need, exactly? You’re being asked to do more with less, but then laboratory analytics is such a huge thing.

And there was a discussion on the type of language that you use. There’s a language that we understand in the laboratory realm, and in the C-suite, there’s a different language. There are buzzwords that you need to bring to the table; only then will they pay attention and kind of listen.

And exactly as you’re saying, Bryan, bottom line and efficiencies and value-based and all this other stuff. I kind of went rogue and listened in to others and learned the language of those in the C-suite. Because for us, the technical things matter so much. But to them, it’s just so different. They’re looking at something different, although it’s part of the same thing, if that makes sense.

DANGOTT: I’m not sure if the screen share is still presenting. But that slide on demonstrating the value of laboratory data — this is how you demonstrate not only the value of having an informatics person there, but the value of the lab in general. This is something that I think is very much overlooked by administration. The laboratory is used by everybody. It’s actionable data. It’s quantitative, high volume, and low cost.

You want to have very high-quality laboratory data. This is a way to decrease some of the costs to the institution and really provide the value statement, which is basically quality divided by cost. So those things put the laboratory in a very strong position. And your informaticist can help get access to this data to demonstrate it.

HENDERSON: Exactly. Perfect. Last call for any more questions. Otherwise, we’re going to shift to closing. I’m going to give them 10 seconds.

DANGOTT: While we’re waiting —

HENDERSON: And Martha, I hope that answered your question. Go ahead, Bryan.

DANGOTT: I’ll just say one more thing about this slide. If we think about data right now, in the context of machine learning and AI, the laboratory has the richest source of quantitative data for prediction. Looking at infection, some of the inputs to predicting an infection are going to be lab values in addition to vital signs.

So respiratory rate, temperature — those things go into the EHR — but also, what’s the patient’s lactate level? What are the other laboratory components? The same applies when looking at urinary tract infections. Looking at the early results, when that culture grows actually has importance.

It’s not really well analyzed. But the time that a culture grows and becomes positive matters: if it grows on day one, it’s far more significant than if it grows on day three. And you have to access that data and analyze it in order to be able to demonstrate some of the value in that data by pulling it out.
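A sketch of pulling time-to-positivity out of culture data, with hypothetical file and timestamp column names (not the project data described here):

    # Sketch: time to positivity for urine cultures (hypothetical extract and columns).
    import pandas as pd

    df = pd.read_csv("urine_cultures.csv", parse_dates=["collected_dt", "positive_dt"])

    pos = df.dropna(subset=["positive_dt"]).copy()
    pos["days_to_positive"] = (pos["positive_dt"] - pos["collected_dt"]).dt.total_seconds() / 86400

    # A culture turning positive on day 1 carries more weight than one on day 3.
    print(pos["days_to_positive"].round().value_counts().sort_index())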

When I worked on some of those projects trying to predict urinary catheter infections, there are some federal guidelines, and we had to put all these things together. There are so many pieces there. The actual challenge wasn’t in the lab. It was in the EHR, determining things like flank pain. Flank pain in the EHR is recorded in different areas.

We couldn’t have a query that pulled flank pain reliably, because it may be stored in a clinician note or in a nursing flow sheet. It may be called something other than pain. So there are many different ways to store that data. Whereas the laboratory data is generally in discrete fields. And even the text in a microbiology report can be processed.

HENDERSON: Perfect. Thank you. Can you take us to the last slide, please?

DANGOTT: Yes.

HENDERSON: Thank you.

DANGOTT: Oh, I went right through it. Pull that back up. There we go.

HENDERSON: Perfect. So thank you so much, Bryan, for your time. This was really awesome. And I’m sure even those who were not able to see will be able to review the audio and the transcript. Our next session, our penultimate session in this ECHO series will be on Wednesday, November 18 from 12:00 to 1:15. The discussion area will be transfusion medicine and blood banking, one of my favorites.

And it will be presented by Dr. Susan Weiss from the University of North Carolina Chapel Hill in Chapel Hill, North Carolina. So please visit the DLS ECHO website to register for this session and also the final session in December for 2020.

We thank you for taking part in our discussion today. We hope that you found it really valuable. And don’t forget, we’ll be sending out these slides so you can review and chew more on what Bryan discussed today. We look forward to your participation in future sessions, and we’ll keep you posted on any changes that occur.

So now we will adjourn. Thank you and have a great day.

DANGOTT: Thank you.

HENDERSON: Thanks, Bryan.

DANGOTT: Bye-bye.

HENDERSON: Bye.