Implementation Plan Brainstorm and Feedback (7/26/21)
The purpose of this section is to:
Provide input to the initial SHIELD Implementation Plan
Define WHAT needs to be done to achieve laboratory data interoperability
Set the stage for HOW to deliver the “whats”
Set the stage for milestones, goals, and timeline
- 1 Use Case Scenarios
- 2 Implementation Plan
- 3 Comments on what needs to be accomplished
- 3.1 Laboratory Interoperability Data Resources (LIDR) - LIVD expansion file management
- 3.2 Controlled Medical Terminology (Ontology) Unification
- 3.3 Clinical Information System Vendor Engagement and HL7 Engagement
- 3.4 IVD Vendor Engagement
- 3.5 SHIELD Data Hub for FDA Use
- 3.5.1 Features needed
- 3.5.2 Data elements needed
- 3.5.3 How to populate and maintain
- 3.6 Sustainability Plan
- 3.7 Implementation Approach
Use Case Scenarios
The use cases below are meant to provide tangible scenarios in which laboratory data are created, stored, exchanged, and/or analyzed, in order to contextualize laboratory data interoperability.
Clinical and Public Health Scenario Use Cases
Patient A, a 65 y/o male, originally seen at a critical access hospital, is transferred to the regional health center and admitted for a suspected case of infectious pneumonia and low oxygen levels. The patient also has cirrhosis of the liver. A respiratory pathogen panel is ordered by the admitting physician to confirm that the pneumonia is due to a bacterial infection. The lab test is ordered in the EHR. A nasopharyngeal (NP) swab specimen is collected and sent to the microbiology lab for testing.
The test performed in the lab is a PCR-based panel. Within 4 hours, the laboratory reports the positive identification of a Klebsiella pneumoniae infection. A reflex susceptibility test is ordered by the laboratory. In the meantime, the physician orders the preferred antibiotic in line with empirical evidence provided by the institutional antibiogram and in accordance with the antimicrobial stewardship protocols of the hospital.
Susceptibility results indicate that the Klebsiella pneumoniae is an extended-spectrum beta-lactamase (ESBL)-producing organism that is resistant to the initially prescribed therapy. It is only susceptible to antibiotics in the carbapenem family, but many of the readily available carbapenems impact the liver and will be difficult for the patient to tolerate. The physician must identify a suitable alternative. In previous years, this would have been a very challenging scenario involving consultations with the infectious disease attending physician and the pharmacist on duty. However, the hospital has adopted the integrated terminology standards that link lab testing, organisms, and medicinal products. As a result, the physician is able to query the EHR/CDSS for alternative carbapenem drugs on the hospital formulary and identifies a suitable carbapenem that will control the infection and should be well tolerated by the patient. The medications are ordered. The physician requests that past hepatic profile test results from the critical access hospital be imported into the EHR and orders a series of hepatic function tests to monitor the patient’s liver function against baseline during carbapenem therapy. The patient recovers after 5 days of therapy and is subsequently discharged.
During the course of laboratory testing, the organism identified, along with its susceptibility profile, is recorded. The data are immediately incorporated into the hospital’s antibiogram calculations and will be incorporated into the next publication by the antimicrobial stewardship (AMS) team. The organism and susceptibility data are further reported to the state department of epidemiology via HL7 message, including S/I/R and MIC data along with the testing methods used for organism identification and susceptibility testing, as ESBL-producing Klebsiella pneumoniae is a reportable condition due to the rise in antimicrobial resistance in this species. Furthermore, isolates of the organism are sequenced and profiled for known or suspected genes associated with antimicrobial resistance as part of a broader AMS discovery and intervention program.
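Purely as an illustration of the kind of coded content such a report could carry (not a SHIELD-endorsed format), a simplified sketch follows; the structure, field names, and angle-bracket placeholder codes are assumptions, and a production report would follow the HL7 v2 ELR messaging profile rather than this ad hoc structure.

```python
# Hypothetical, simplified sketch of the coded content an electronic laboratory report
# (ELR) for this scenario could carry. Placeholder strings in angle brackets are NOT
# real SNOMED CT, LOINC, RxNorm, or CLIA values.
elr_payload = {
    "condition": "ESBL-producing Klebsiella pneumoniae",
    "organism": {"system": "SNOMED CT", "code": "<organism-code>"},
    "identification_method": {
        "system": "LOINC",
        "code": "<panel-code>",
        "text": "PCR-based respiratory pathogen panel",
    },
    "susceptibilities": [
        {
            "antibiotic": {"system": "RxNorm", "code": "<drug-code>", "text": "<carbapenem name>"},
            "mic_ug_per_mL": 0.5,           # illustrative value only
            "interpretation": "S",          # S/I/R
            "method": {"system": "LOINC", "code": "<susceptibility-method-code>"},
        }
    ],
    "performing_laboratory": {"clia": "<CLIA-number>"},
}
```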
Clinical Scenario 2 Use Case
A third-year house officer in emergency medicine is preparing a quality-of-care project to meet certification requirements of the ACEM. In collaboration with a dozen of their colleagues at sister institutions, they will compare opioid testing results for patients presenting to the ED with recurring abdominal pain. They wish to be inclusive of any urinary or blood results for any opioid-receptor agonist agent. They plan to identify whether any testing was done within 24 hours of the ED visit and to classify the results as COMPLETE, NOT DONE, OPIOID PRESENT, or OPIOID NOT DETECTED. The project will evaluate the effects of the screening data on the expense and outcomes of the ED evaluation.
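As a hedged sketch only: the four categories above are not formally defined in this use case, so the code below assumes one possible reading (COMPLETE meaning testing was performed but not cleanly interpretable as present/absent) and uses placeholder result codes and hypothetical field names.

```python
# Hypothetical sketch: one way to apply the project's four categories to coded opioid
# testing results. The value set, field names, and the reading of "COMPLETE" are all
# assumptions, not definitions from the use case.
from datetime import datetime, timedelta

OPIOID_RESULT_CODES = {"<opioid-urine-code>", "<opioid-blood-code>"}  # placeholder value set

def classify_opioid_testing(ed_visit_time: datetime, results: list) -> str:
    """Return NOT DONE, OPIOID PRESENT, OPIOID NOT DETECTED, or COMPLETE."""
    in_window = [
        r for r in results
        if r["code"] in OPIOID_RESULT_CODES
        and abs(r["collected"] - ed_visit_time) <= timedelta(hours=24)
    ]
    if not in_window:
        return "NOT DONE"
    if any(r["value"] == "POSITIVE" for r in in_window):
        return "OPIOID PRESENT"
    if all(r["value"] in ("NEGATIVE", "NOT DETECTED") for r in in_window):
        return "OPIOID NOT DETECTED"
    return "COMPLETE"  # performed, but results not cleanly mappable to present/absent
```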
Clinical Scenario 3 Use Case
A patient with severe COVID-19 is transferred from an outside hospital. Laboratory results from the outside hospital, available through the EHR vendor’s HL7 integration engine, included a troponin I of 4.0. The troponin result was trended with previously reported troponin results from the in-house laboratory. After 24 hours, cardiac decompensation is observed including venous distension and increased peripheral edema. A follow-up specimen is collected and (high-sensitivity) troponin I is measured and reported as 7,000 ng/L.
Numerous clinical assays lack analytical harmonization. Consequently, test results, reference intervals (normal ranges), and measuring units are often not comparable between assays and, therefore, between laboratories. In the scenario described above, the troponin of 4.0 was measured using a 4th-generation assay that reports troponin in ng/mL. In contrast, high-sensitivity, 5th-generation troponin assays typically report in ng/L. After harmonizing the units, the patient’s initial troponin would be reported as 4,000 ng/L. Using high-sensitivity assays, 4 ng/L is inconsistent with myocardial injury, while results >1,000 ng/L are consistent with a considerable myocardial infarction. Because the external and internal test results were co-mapped in the EHR, the units could have been absent and assumed, displayed deceptively as those of the other test, or displayed appropriately but missed by the busy clinician. Interpretation of results in this case is further complicated by the lack of analytical harmonization between troponin assays. Even if the same units had been used, the physician did not know the method for the previous result; in many clinical scenarios this would prevent a clinician from discerning whether an acute myocardial injury had occurred (rapid change in troponin) or whether the differences were simply assay dependent. Overall, in this case it was easy for the treating physician to miss the apparent cardiac injury because the test method and units were not clearly defined.
In the clinical scenario described above, the treating clinician would have clearly benefited from knowing: 1) the method used for assessing troponin concentration (including the generation of the assay used); 2) the units in which the results were reported; and 3) the reference interval from the reporting laboratory.
Because of the risk of this specific error for this specific test, much has been done to decrease the chance of this outcome, including the creation of distinct LOINC codes for 4th-generation and 5th-generation assays and the publication of specific guidelines for troponin assay implementation. However, many other laboratory tests are poorly harmonized analytically and do not have such safeguards in place.
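A minimal sketch of the unit-harmonization step described above, assuming hypothetical field names and using LOINC codes purely as examples: it normalizes results to ng/L and declines to trend results across different assay generations.

```python
# Minimal sketch, with hypothetical field names; the LOINC codes are examples only.
NG_PER_ML_TO_NG_PER_L = 1000.0  # 1 ng/mL = 1,000 ng/L

def to_ng_per_L(value: float, units: str) -> float:
    """Convert a troponin result to ng/L; fail loudly on unknown units."""
    if units == "ng/L":
        return value
    if units == "ng/mL":
        return value * NG_PER_ML_TO_NG_PER_L
    raise ValueError(f"Unrecognized units: {units!r}")

def same_method(a: dict, b: dict) -> bool:
    """Trend only results whose result code (and hence assay generation) matches."""
    return a["loinc"] == b["loinc"]

outside = {"loinc": "10839-9", "value": 4.0, "units": "ng/mL"}    # conventional cTnI
inhouse = {"loinc": "89579-7", "value": 7000.0, "units": "ng/L"}  # high-sensitivity cTnI

for result in (outside, inhouse):
    result["ng_per_L"] = to_ng_per_L(result["value"], result["units"])

if not same_method(outside, inhouse):
    print("Different assays/generations: review before trending.")
print(outside["ng_per_L"], inhouse["ng_per_L"])  # 4000.0 7000.0
```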
Clinical Scenario 4 Use Case
Current State
A 60-year-old male liver transplant patient is receiving care at two healthcare institutions: a large academic medical center about an hour away and a small regional laboratory. On a recent visit to the academic medical center for his transplant care, his physician notices that his creatinine has jumped from 1.2 mg/dL to 1.7 mg/dL. The two clinical laboratories perform different tests. Both assays are traceable to an isotope dilution mass spectrometry reference method using the NIST Standard Reference Material 967 and are annotated with the same LOINC code (2160-0). Since the patient is on medications that are potentially toxic to his kidneys, this change prompted concern for acute kidney injury. The patient was urgently admitted to the hospital for monitoring. The patient’s creatinine was found to be stable over the next 24 hours and there was no evidence of acute kidney injury.
Details
Creatinine assays are one of a handful of tests that have been standardized, with results traceable to reference materials. Despite this, there are substances in patients’ specimens that can lead to erroneous results, and the directionality and magnitude of these effects differ assay by assay. The most likely explanation in the above case is that high concentrations of bilirubin and its metabolites led to a method-specific false decrease in the test result at the regional laboratory.
How to encode harmonization
A single binary indicator for harmonization cannot handle this situation well. Either no creatinine results are treated as equivalent, losing the opportunity for cross-laboratory monitoring of this important marker of acute and chronic kidney injury, or all creatinine results from methods traceable to an approved reference standard are treated as equivalent, leading to mismanagement as in this case or to potentially inappropriate prioritization of patients on liver transplant waiting lists.
This is one very specific example of a limitation of using a binary indicator for harmonization status. There are many such nuances, making it extremely challenging to use a binary indicator in a way that is both safe and useful. It also highlights the importance of domain expertise and conscientious governance when decisions will lead to broad adoption.
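One way to picture a richer-than-binary indicator, sketched here with an entirely hypothetical structure (not a proposed LIDR schema), is a harmonization profile that records traceability plus known method-specific interferences, so a consuming system can suppress cross-laboratory comparison for patients in whom an interference applies.

```python
# Hypothetical sketch of a harmonization descriptor richer than a yes/no flag.
# Names and fields are illustrative only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class HarmonizationProfile:
    loinc_code: str                      # result code shared by the methods
    reference_material: str              # e.g., "NIST SRM 967" for creatinine
    traceable: bool                      # traceability to the reference method
    known_interferences: List[str] = field(default_factory=list)  # method-specific caveats

def safe_to_compare(a: HarmonizationProfile, b: HarmonizationProfile,
                    patient_conditions: List[str]) -> bool:
    """Treat results as equivalent only if both are traceable and no listed
    interference matches the patient's context (e.g., hyperbilirubinemia)."""
    if not (a.traceable and b.traceable and a.loinc_code == b.loinc_code):
        return False
    flagged = set(a.known_interferences) | set(b.known_interferences)
    return not flagged.intersection(patient_conditions)

academic = HarmonizationProfile("2160-0", "NIST SRM 967", True, [])
regional = HarmonizationProfile("2160-0", "NIST SRM 967", True, ["hyperbilirubinemia"])

# For this liver transplant patient with elevated bilirubin, comparison is suppressed:
print(safe_to_compare(academic, regional, ["hyperbilirubinemia"]))  # False
```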
Research Scenario Use Case
A research faculty member at U of Anywhere is designing a study to compare the sensitivity, specificity, and positive predictive value of two novel screening tests for COVID-19 respiratory infection in patients with diabetes mellitus. The test strategies are a nasal swab for the SARS-CoV-2 N gene versus a salivary SARS-CoV-2 N gene test. Diabetes mellitus is defined by a history of random, non-stimulated blood glucose >= 200 mg/dL or a Hemoglobin A1C/Hemoglobin ratio >6.5%. The research team plans to conduct this as an observational trial across the N3C network, and they are struggling to define the value sets of coded test results that they will incorporate into their SAS query for cohort identification and outcomes assessment.
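A hedged sketch of the value-set problem the team faces follows. The study itself would run as a SAS query against N3C, so this Python fragment is illustrative only, with example LOINC codes for the diabetes definition and deliberately empty placeholders for the SARS-CoV-2 N-gene result codes that still need curation.

```python
# Illustrative sketch of value-set-driven cohort identification; not the team's SAS query.
GLUCOSE_RANDOM = {"2345-7"}        # example: Glucose [Mass/volume] in Serum or Plasma
HBA1C = {"4548-4"}                 # example: Hemoglobin A1c/Hemoglobin.total in Blood
SARS_COV_2_N_GENE_NASAL = set()    # to be curated: nasal-swab N-gene result codes
SARS_COV_2_N_GENE_SALIVA = set()   # to be curated: saliva N-gene result codes

def has_diabetes(lab_rows) -> bool:
    """Diabetes per the study definition: random glucose >= 200 mg/dL or HbA1c > 6.5%."""
    for row in lab_rows:
        if row["code"] in GLUCOSE_RANDOM and row["units"] == "mg/dL" and row["value"] >= 200:
            return True
        if row["code"] in HBA1C and row["units"] == "%" and row["value"] > 6.5:
            return True
    return False
```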
Quality 1 Use Case
Goal: Real-time surveillance for test performance deficiencies (proficiency testing)
Current state
Methotrexate is a chemotherapeutic agent commonly used to treat cancer and autoimmune disease. It has a narrow therapeutic range and is frequently monitored to adjust dosing and avoid toxic concentrations.
To help ensure accurate test results, clinical laboratories participate a few times a year in external proficiency testing (PT), whereby they compare their test results to those of their peers. On one laboratory’s most recent PT survey, three of five samples showed a clinically significant negative bias relative to peers. The root cause of the negative bias was ultimately determined to be an incorrect automated dilution. The identification and correction of this deficiency occurred more than 30 days after its onset. In the interim, many patients’ results were falsely decreased, leading to potentially toxic methotrexate dosing.
Situations like the one described above are not uncommon in laboratory testing and PT represents an important way for laboratories to detect systematic errors. Unfortunately, results from most PT surveys take weeks to months to process.
Potential state
Improved interoperability of laboratory data could enable much more rapid detection of such testing deficiencies, preventing considerable downstream patient harm. Some such errors could be detected by systematically surveying multi-site patient test results, and changes in these results, across laboratories performing the exact same test. Some could be detected by comparing results for QC samples, which are tested as part of the standard of care. Others could be detected more rapidly by modernizing the PT process to leverage aggregated clinical information streams. Such approaches would reduce the burden of paperwork and the potential for human error in recording results, improve efficiency, and ultimately improve the standardization of laboratory results.
These surveillance approaches would require collation of laboratory test results inclusive of UDIs for instruments and assays. Using patient test results would benefit from clinical and ordering context, such as that provided by standardized order and result codes. Using QC samples would require transmission of QC lot information. All of these approaches would benefit from test component (e.g., reagent) lot information.
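A minimal sketch of the first approach (multi-site peer comparison) follows, under the assumption that results arrive with result codes, assay UDIs, and performing-laboratory identifiers; the column names, time window, and threshold are arbitrary illustrative choices.

```python
# Assumption-laden sketch: group results for the same test (result code + assay UDI)
# by performing laboratory and flag laboratories whose recent median drifts from the
# all-peer median. The data layout and threshold are hypothetical.
import pandas as pd

def flag_peer_bias(results: pd.DataFrame, rel_threshold: float = 0.20) -> pd.DataFrame:
    """results columns: loinc, assay_udi, lab_id, value (numeric), collected (datetime)."""
    recent = results[results["collected"] >= results["collected"].max() - pd.Timedelta(days=7)]
    flags = []
    for (loinc, udi), grp in recent.groupby(["loinc", "assay_udi"]):
        peer_median = grp["value"].median()
        for lab_id, lab_grp in grp.groupby("lab_id"):
            lab_median = lab_grp["value"].median()
            if peer_median and abs(lab_median - peer_median) / peer_median > rel_threshold:
                flags.append({"loinc": loinc, "assay_udi": udi, "lab_id": lab_id,
                              "lab_median": lab_median, "peer_median": peer_median})
    return pd.DataFrame(flags)
```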
Quality 2 Use Case
Goal: Earlier and more sensitive detection of laboratory test component (reagent, calibrator, container) deficiencies. Some testing device defects are not readily detectable by existing, standard manufacturing and process controls. Surveillance of aggregated laboratory testing data could speed the detection of such defects, thus decreasing the potential harm. In addition, this higher-resolution labeling of test results would facilitate better cleaning of data for additional secondary uses.
Examples:
Class 2 Device Recall of specific collection transport media for SARS-CoV-2 Antigen testing (Recall Number: Z-1266-2021). Specific viral transport media led to false-positive results.
Class 2 Device Recall of specific Unconjugated estriol reagent lots (Recall Number: Z-3006-2020; 08/26/2020). Specific formulation of reagent lot led to falsely elevated results in a subset of patients, potentially leading to false-positive prenatal screening results for Down Syndrome.
Class 2 Device Recall of specific Parathyroid hormone reagent lots (Recall Number: Z-1342-2021; 11/23/2020). These specific reagent lots had potential to ‘produce falsely elevated…results’, potentially leading to false-positive diagnoses for hyperparathyroidism.
Class 2 Device Recall of specific Rheumatoid Factor calibrator lots (Recall Number: Z-0283-2020; 08/09/2019). These specific calibrator lots caused falsely low results, potentially leading to missed clinical diagnoses.
Recall of specific Creatinine reagent lots (11/2020). These specific lots led to falsely increased results for serum (but not plasma) specimens, which directly led to inappropriate referrals and biopsies.
Explanation:
Laboratory tests are performed using several components, including specimen collection devices and reagents, test reagents, and test calibrators. Components are labeled with device identifiers and lot numbers, which indicate the batch in which they were prepared. Together these identifiers comprise the Unique Device Identifier (UDI). Not all potential combinations of devices can be evaluated pre-market. Performing laboratories should locally verify specific collection devices and reagents, but these practices are variable and limited. Specific component lots are not evaluated pre-market. For some assays, batch-to-batch variability can be substantial and clinically impactful. Manufacturers verify each batch, but typically using few or no patient samples. Clinical laboratories also verify batches, but practices are variable and use few to no patient samples. These processes are not designed to detect errors that affect some but not all patients.
Barriers:
Not all relevant devices and components are assigned UDIs
The transmission of UDIs is not required
Data is not aggregated
Requirements:
Device, reagent, calibrator, collection container are all assigned a UDI
Testing device transmits the set of UDIs (OBX-18 field in HL7 v2.5.1) along the chain:
- Instrument -> (Middleware / LIS)
- Middleware -> LIS
- LIS -> EHR
- EHR -> EHR
Analysis of data grouped by UDIs (and other contextual factors) for changes in aggregate results (a sketch of this analysis follows below)
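Assuming UDIs actually reach an aggregated data set as the requirements above describe, one illustrative way to look for changes in aggregate results is to compare the distribution of patient results on a new reagent lot with the prior lot for the same result code; the column names, minimum sample size, and statistical test below are hypothetical choices.

```python
# Illustrative lot-to-lot shift check over aggregated patient results.
import pandas as pd
from scipy import stats

def lot_shift_detected(results: pd.DataFrame, loinc: str,
                       prior_lot: str, new_lot: str, alpha: float = 0.01) -> bool:
    """results columns: loinc, reagent_lot_udi, value (numeric)."""
    sub = results[results["loinc"] == loinc]
    prior = sub.loc[sub["reagent_lot_udi"] == prior_lot, "value"]
    new = sub.loc[sub["reagent_lot_udi"] == new_lot, "value"]
    if len(prior) < 100 or len(new) < 100:   # arbitrary minimum sample size
        return False
    # Mann-Whitney U test as one simple, distribution-free check for a shift
    _, p_value = stats.mannwhitneyu(prior, new, alternative="two-sided")
    return p_value < alpha
```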
Implementation Plan
Presentation slides and extended draft below
Implementation Plan PowerPoint
Implementation Plan Word Document
Comments on what needs to be accomplished
Laboratory Interoperability Data Resources (LIDR) - LIVD expansion file management
Data elements needed
Codification of laboratory data requirements
Test and test result harmonization index
Features of LIDR Tooling
Controlled Medical Terminology (Ontology) Unification
Harmonization of LOINC, SNOMED CT, and RxNorm on a single ontologic model
Publication of harmonized terminology
Clinical Information System Vendor Engagement and HL7 Engagement
Engage CIS vendors regarding management of LIVD encoded data
Engage CIS vendors and HL7 on Data Exchange of LIVD encoded data
IVD Vendor Engagement
Creation of LIVD data elements and population of LIDR database
LIVD Data exchange between instrument and LIS
SHIELD Data Hub for FDA Use
Features needed
Data elements needed
How to populate and maintain
Sustainability Plan
Implementation Approach
Phased Pilot Approach
Prioritization of Lab Tests to Address First