LIDR White Paper

Audience: Informatics. Proposed journal: JAMIA. Rationale: raise awareness of laboratory issues among informaticians.

Audience: Laboratorians. Proposed journals: ADLM Journal for Applied Clinical; Archives.

Why We Need a Laboratory Interoperability Data Repository (LIDR) and What We Need to Get One

Introduction

Systemic Harmonization and Interoperability Enhancement for Laboratory Data (SHIELD) is a public-private initiative to develop and launch collaborative policies and business models to overcome laboratory interoperability barriers as described in the SHIELD Community Charter [ ]. Due to the complexity of the United States (U.S.) laboratory market today, SHIELD has embraced an ecosystem perspective that recognizes no single government agency or industry actor has the authority or market influence to meaningfully impact the state of laboratory interoperability. SHIELD’s goal is to achieve laboratory data interoperability by describing the same test the same way, every time [CLSI AUTO17].

Laboratory testing has long occupied a prominent place in healthcare with approximately 70% of all medical decisions reportedly based on laboratory test results [Raymond]. Of all clinical data exchange transactions in the country, laboratory data make up the largest share and have the longest history of being digitized. American Society for Testing and Materials’ (ASTM’s) Standard Specification for Transferring Clinical Laboratory Data Messages Between Independent Computer Systems, printed in 1988, is the world’s first published balloted consensus standard for clinical data [NCCLS].

It is thus paradoxical that laboratory data, despite their importance and high usage, represent a failure of clinical interoperability. Health data messaging standards such as Health Level Seven (HL7®) and coding standards such as Logical Observation Identifiers Names and Codes (LOINC®) and Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT®) exist, but consistent and accurate adoption of these standards on both sides of data exchange transactions has yet to be established across the healthcare ecosystem, owing in part to the high fragmentation of the clinical laboratory market in the U.S. [LOINC, SNOMED]. The lack of interoperability in the healthcare ecosystem has been discussed in a number of publications [Stram, Cholan, Bernstam, McDonald].

The United States pays a high but largely hidden price for the lack of nationwide laboratory interoperability in terms of safety, quality, innovation, and efficiency [Menachemi, Hersh]. As patients experience transitions in care, lack of access to relevant clinical data across providers can result in patient harm, including unnecessary repeat testing or prescriptions for new medications that interact with existing regimens [Kern]. As more clinical laboratory data are exchanged, the risk of commingling data with underspecified semantic meaning could increasingly compromise patient safety [Stram]. Laboratory data interoperability is not simply the error-free transmission of test result values; it also requires the consistent transmission of the data elements that allow for meaningful interpretation and use of the laboratory result and value. Laboratory test results require additional data elements beyond the result (e.g., units of measure, reference range values, specimen type, methodology) to be correctly interpreted and to meet interoperability standards and requirements [CLSI AUTO17]. Yet even these are not always exchanged. A new approach to interoperability standards is required: a standardized digital representation of laboratory tests that allows for accurate interpretation, equivalence determination, and a shared understanding of laboratory data as they move across the healthcare ecosystem. The inability to fully utilize laboratory data for patient care, quality measurement, clinical decision support, and population health management undercuts the quality of care delivery.

Real-world evidence (RWE) for discovery, clinical trials, research, and post-market surveillance is severely hampered by poor usable data quality. Local test codes, test names, normal range values, formats of test results, and associated units all vary by individual laboratory [Stram]. The limitations created by this lack of interoperability extend to the surveillance used to respond to the COVID-19 pandemic and other public health reportable conditions [Alamo, Naude]. Additionally, the resources devoted to laboratory data mapping and curation at each point of data exchange are compounded across the entire value chain, imposing costs on the U.S. healthcare system [Uchegbu]. Accurate mapping of local laboratory codes to standardized terminologies would enable semantic interoperability in data sharing and aggregation across systems for a variety of purposes [CLSI AUTO17]. However, existing literature documents the poor outcomes associated with creating terminology mappings without guidance, prior education, or an easily accessible authoritative source of truth. Indeed, outside of public health there is no "LOINC police" to ensure accurate mapping. Although many public health jurisdictions validate LOINC and SNOMED CT mapping for electronic laboratory reporting (ELR), inaccurate mapping is the most common issue with public health onboarding. One study showed laboratories had an overall rate of 80.4% correct LOINC® code selection for coagulation and cardiac marker assays [Stram]. Another study showed a 41% mismatch rate between diagnostic test manufacturers' recommended LOINC® codes and the LOINC® codes used at five major medical center laboratories for the same tests. In that study, the manufacturer-recommended LOINC® codes were often more granular than the laboratory-selected codes; correctness of coding was not assessed [Cholan]. Best practice is mapping to the most granular LOINC® code to avoid loss of information and meaning. Reasons for the 41% mismatch rate included: encoding with a LOINC® code with a different set of units, mismatches between LOINC® codes for quantitative versus qualitative tests, and mismatches between methodless LOINC® codes and codes with the method specified [Cholan]. A study of 68 oncology sites showed a representation agreement rate of 22-68% for 6 medications and 6 laboratory tests for which well-accepted standards exist [Bernstam].

SHIELD's objective is realized when a specific laboratory test result from one in vitro diagnostic (IVD) platform and the result of the same laboratory test performed on another IVD platform can be considered equivalent and can be safely intermingled, achieving complete clinical interoperability [Rychert]. A more limited, but necessary, stage is structural interoperability, in which laboratory test data produced on a particular IVD platform can be associated electronically with laboratory test data produced on the same IVD platform at any healthcare institution [CLSI AUTO17].

The SHIELD collaborative emerged out of multi-agency workshops in 2015 and 2016 and an FDA solicitation of Patient-Centered Outcomes Research (PCOR) funds in 2017. SHIELD currently brings together stakeholders including IVD manufacturers, commercial and institution-based (e.g., hospital) laboratories, Association of Public Health Laboratories (APHL), standards development organizations (SDOs), Pew Charitable Trusts, National Evaluation System for Health Technology (NEST)/Medical Device Innovation Consortium (MDIC), College of American Pathologists (CAP), Medical Device Epidemiology Network (MDEpiNet), American Clinical Laboratory Association (ACLA), and numerous federal agencies.

The SHIELD community took a major step when the Coronavirus Aid, Relief, and Economic Security (CARES) Act required “every laboratory that performs or analyzes a test intended to detect SARS-CoV-2—or to diagnose a possible case of COVID-19”—to report the result values of every test to state and local public health agencies [CARES Act]. During this time, SHIELD members produced a spreadsheet providing proper codes for reporting to public health using the LOINC to IVD Test Result Mapping format (LIVD: https://ivdconnectivity.org/livd-specification/).

SHIELD’s value proposition follows the use cases of protecting patient safety, improving clinical care, reducing lab data user burden, and making RWE less expensive and more timely, as further described in the SHIELD Community Roadmap [cite: ]. The Laboratory Interoperability Data Repository (LIDR) is envisioned as a centralized repository of codes that serves as an easily accessible authoritative source for the standardized digital representation of laboratory tests. It is hoped that LIDR will not only provide a standardized means to store, search, and access the data about unique laboratory test results needed for semantic interoperability, but also allow automated export of the data standards for use in healthcare information technology such as laboratory information systems (LIS), laboratory information management systems (LIMS), and electronic health records (EHRs). The LIVD format can be used to provide input data into LIDR from manufacturers; LIDR will support additional data elements beyond what is currently captured in the LIVD file format. LIVD could also be used as a standardized format to share the codes expected to be used in data exchanges. This will reduce the burden on individuals mapping laboratory data elements.
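As a rough illustration of how a LIVD-style catalog can reduce manual mapping burden, the sketch below builds a lookup from a manufacturer-published file. The column names, manufacturer, model, and vendor code are invented for illustration and are not the normative LIVD field names; see the LIVD specification linked above for those. The LOINC code 718-7 (hemoglobin mass concentration in blood) is used only as a familiar example.

```python
import csv
import io

# A hedged sketch of a LIVD-style catalog: a manufacturer publishes the LOINC
# codes appropriate for each of its IVD test results. Column names and values
# are illustrative, not the normative LIVD fields.
livd_csv = io.StringIO(
    "manufacturer,model,vendor_analyte_code,vendor_analyte_name,loinc_code\n"
    "ExampleCo,Analyzer-1,EX-HGB,Hemoglobin,718-7\n"
)

# Build a lookup from the vendor's local analyte code to the recommended LOINC code.
catalog = {row["vendor_analyte_code"]: row["loinc_code"]
           for row in csv.DictReader(livd_csv)}

# A lab onboarding this instrument can consult the manufacturer-recommended
# code instead of mapping every test by hand.
assert catalog["EX-HGB"] == "718-7"
```

The same lookup, populated from LIDR rather than a single manufacturer's file, is the kind of automated export into LIS, LIMS, and EHR systems that the paragraph above envisions.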

References

  1. SHIELD Community Charter - (Accessed December 27, 2023)

  2. CLSI. Semantic Interoperability for In Vitro Diagnostic Systems. 1st ed. CLSI report AUTO17. Clinical and Laboratory Standards Institute; 2023.

  3. Raymond L, Maillet É, Trudel MC, Marsan J, de Guinea AO, Paré G. Advancing laboratory medicine in hospitals through health information exchange: a survey of specialist physicians in Canada. BMC Med Inform Decis Mak. 2020 Feb 28;20(1):44.

  4. NCCLS. Standard Specification for Transferring Clinical Observations Between Independent Computer Systems. NCCLS document LIS5-A [ISBN 1-56238-493-7]. NCCLS, 940 West Valley Road, Suite 1400, Wayne, Pennsylvania 19087-1898 USA, 2003.

  5. LOINC® from Regenstrief. Home. LOINC®. 2021. https://loinc.org/ (accessed December 27, 2023).

  6. SNOMED CT® starter guide - SNOMED CT® starter guide. SNOMED International. 2021. https://confluence.ihtsdotools.org/display/DOCSTART/SNOMED+CT+Starter+Guide (accessed December 27, 2023).

  7. Stram M, Seheult J, Sinard JH, Campbell WS, Carter AB, de Baca ME, Quinn AM, Luu HS. A survey of LOINC code selection practices among participants of the College of American Pathologists’ CGL and CRT proficiency testing programs. Arch Pathol Lab Med 2020 May;144(5):586-596.

  8. Cholan R, Pappas G, Rehwoldt G, Sills A, Korte E, Appleton K, Scott N, Rubinstein W, Brenner S, Merrick R, Hadden W, Campbell K, Waters M. Encoding laboratory testing data: case studies of the national implementation of HHS requirements and related standards in five laboratories. J Am Med Inform Assoc. 2022 Jul 12;29(8):1372-1380.

  9. Bernstam E, Warner J, Krauss J, Ambinder E, Rubinstein W, Komatsouis G, Miller R, Chen J. Quantitating and assessing interoperability between electronic health records. J Am Med Inform Assoc. 2022; 00(0):1-8.

  10. McDonald CJ, Baik SH, Zheng Z, et al. Mis-mappings between a producer's quantitative test codes and LOINC codes and an algorithm for correcting them. J Am Med Inform Assoc. 2023;30(2):301-307.

  11. Menachemi N, Rahurkar S, Harle CA, Vest JR. The benefits of health information exchange: an updated systematic review. J Am Med Inform Assoc. 2018 Sep 1;25(9):1259-1265.

  12. Hersh WR, Totten AM, Eden KB, Devine B, Gorman P, Kassakian SZ, Woods SS, Daeges M, Pappas M, McDonagh MS. Outcomes From Health Information Exchange: Systematic Review and Future Research Needs. JMIR Med Inform. 2015 Dec 15;3(4):e39.

  13. Kern LM, Grinspan Z, Shapiro JS, Kaushal R. Patients' Use of Multiple Hospitals in a Major US City: Implications for Population Management. Popul Health Manag. 2017 Apr;20(2):99-102.

  14. Alamo T, Reina DG, Mammarella M, Abella A. Covid-19: Open-Data Resources for Monitoring, Modeling, and Forecasting the Epidemic. Electronics. 2020; 9(5):827.

  15. Naudé W, Vinuesa R. Data deprivations, data gaps and digital divides: lessons from the COVID-19 pandemic. Big Data & Society. 2021;8(2).

  16. Uchegbu C, Jing X. The potential adoption benefits and challenges of LOINC codes in a laboratory department: a case study. Health Inf Sci Syst. 2017 Oct 11;5(1):6.

  17. Rychert J. In support of interoperability: A laboratory perspective. Int J Lab Hematol. 2023;45(4):436‐44.

  18. Coronavirus Aid, Relief, and Economic Security Act (2020). https://www.govinfo.gov/content/pkg/COMPS-15754/pdf/COMPS-15754.pdf. (Accessed December 27, 2023).

  19. LIVD – Digital Format for Publication of LOINC to Vendor IVD Test Results, IVD Industry Connectivity Consortium https://ivdconnectivity.org/livd-specification/ (Accessed December 27, 2023)

  20. Systemic Harmonization and Interoperability Enhancement for Laboratory Data (SHIELD) Community Roadmap (Accessed December 27, 2023)

The Business Case for LIDR

Thousands of distinct clinical laboratory tests exist and are performed routinely on patients to inform their medical care.  These tests differ from each other in many ways, ranging from gross features, such as the substances they measure or the types of specimens they analyze, to minor variations, such as the specific testing methods they use, the laboratory instruments they’re performed on, or the number of hours a patient fasts prior to providing a blood sample.  To correctly and safely interpret the results of laboratory tests, clinicians must be aware of all the pertinent and unique features of the tests that were performed on the patients they are treating.  Automated clinical decision-support systems must also have access to the pertinent features of lab tests to reliably match test results against the encoded logic in their rules and guidelines.  Finally, accurate data analytics for population health, clinical trials, observational studies, artificial intelligence, and machine learning also depends on complete and precise information about the individual lab test result values that appear in aggregated data sets.

Within individual laboratories, care is taken to record and report the clinically pertinent features of the lab results that they produce.  However, when test results are electronically reported, this information is often represented in local and idiosyncratic ways by each reporting lab, using local code sets, nomenclatures, and naming conventions.  Different labs may use different codes to refer to what is actually the same test or to what is actually the same feature of a test (such as the units of measure – e.g., “grams/L” vs. “gms/Liter”).  Alternatively, different labs may use the same code to refer to actually different tests or to actually different features of a test (such as the tested analyte – e.g., “CMV” used to represent both “Cytomegalovirus Antibody” and “Cytomegalovirus Antigen”).  When clinical information systems receive test results from different laboratories, therefore, there exists significant potential for confusion regarding the features of such tests and whether their results can be compared, trended, aggregated, and analyzed or whether they are, in fact, different tests whose results should not be pooled in these ways for either clinical or secondary uses. 
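A minimal sketch of the two failure modes just described, using invented local test dictionaries for two hypothetical labs (all codes and entries are illustrative):

```python
# Hypothetical local test dictionaries from two labs; codes are invented.
lab_a = {
    "HGB1": {"test": "Hemoglobin, mass concentration", "units": "grams/L"},
    "CMV":  {"test": "Cytomegalovirus Antibody"},
}
lab_b = {
    "HB":   {"test": "Hemoglobin, mass concentration", "units": "gms/Liter"},
    "CMV":  {"test": "Cytomegalovirus Antigen"},
}

# Failure mode 1: the same test carries different codes, so an aggregator
# matching on local codes alone misses the overlap entirely.
assert "HGB1" not in lab_b and "HB" not in lab_a

# Failure mode 2: the same code names different tests, so matching on local
# codes alone would silently pool incompatible results.
assert lab_a["CMV"]["test"] != lab_b["CMV"]["test"]
```

Neither error is detectable from the codes themselves; only an external, shared representation of what each test actually is can resolve them.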

LOINC Coding and its Limitations

The health IT world has long recognized this problem and attempted to address it.  The best and most widely implemented solution to date is the Logical Observation Identifiers Names and Codes (LOINC) representation system.  The LOINC model (which is thoroughly described elsewhere) seeks to standardize the representation of lab tests by characterizing them with respect to six relevant distinguishing features:  Component, Property, Timing, System, Scale, and Method.  A set of standardized values has been defined by the LOINC model for each of these features, and every lab test may be characterized by assigning it a single value from each set (for example, the value “Creatinine” from the set for Component and the value “Urine” from the set for System, etc.). 

Per the LOINC model, each unique combination of values for the six features constitutes a unique and distinct class of laboratory tests, and those classes that correspond to existing lab tests are assigned standard identifiers called LOINC codes.  In this manner, the LOINC model allows the same tests performed by many different laboratories and represented using many different local codes to be assigned, and represented using, the same standard LOINC code.  The assignment process, based on the six defining and distinguishing features of lab tests per the LOINC model, is intended to ensure that all tests that are clinically the same (regardless of the lab that performed them) are assigned the same LOINC code and all tests that are clinically different are assigned different LOINC codes.  Information systems that have aggregated test results from multiple labs, such as Electronic Health Record (EHR) systems, disease registries, and public health data repositories, can ostensibly rely on the tests’ LOINC codes to represent which results correspond to the same tests and which to different tests.  Figure 2 illustrates this improved situation.

Figure 2. Aggregation of lab results based on mappings to standard LOINC codes.
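The six-axis model just described can be sketched as a simple record type. The field names follow the LOINC axes; the example values (a quantitative urine creatinine measurement with no method specified) are illustrative rather than drawn from an official LOINC release.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LoincTerm:
    # The six distinguishing axes of the LOINC model.
    component: str  # what is measured, e.g. "Creatinine"
    property: str   # kind of quantity, e.g. "MCnc" (mass concentration)
    timing: str     # e.g. "Pt" (point in time)
    system: str     # specimen, e.g. "Urine"
    scale: str      # e.g. "Qn" (quantitative)
    method: str     # empty when no method is specified

# Two local tests that share all six axis values belong to the same LOINC
# class, regardless of the local codes their performing labs happen to use.
a = LoincTerm("Creatinine", "MCnc", "Pt", "Urine", "Qn", "")
b = LoincTerm("Creatinine", "MCnc", "Pt", "Urine", "Qn", "")
assert a == b
```

Equality of the six-axis record is exactly the "same LOINC class" relation: tests differing in any one axis (a different specimen, a different scale) fall into different classes and receive different LOINC codes.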

The LOINC model has defined and named more than 55,000 different classes of clinical laboratory tests, and it represents the most comprehensive effort to date to standardize the representation of such tests based on their relevant clinical features.  Despite this progress, however, LOINC coding alone cannot address issues with consistently coding and distinguishing lab test results from different laboratories.  At least three significant challenges remain.

Problem #1: Inaccurate assignment of LOINC codes by individual laboratories

The effectiveness of the LOINC system depends on the accurate assignment of locally coded tests to LOINC classes (i.e., the accurate “LOINC coding” of locally coded tests).  Errors in LOINC coding may result in the same tests being assigned to different LOINC classes or different tests being assigned to the same LOINC class, i.e., the original naming confusion that the LOINC system is intended to address.  Anecdotal evidence and empirical research have shown that such assignments are done incorrectly by individual laboratories for a not-insignificant proportion of their tests [i, ii, iii, iv].

Correct LOINC mapping requires detailed technical knowledge of both the tests being mapped (laboratory science) and the LOINC information model (informatics science), because differences among tests can be subtle and the rules for correctly assigning LOINC codes can be complex and arcane [v].  For example, certain tests have Component values that are very similar but require mapping to distinct LOINC codes (such as “CYP11B1 gene targeted mutation analysis” [LOINC code 57308-9] versus “CYP11B1 gene mutations tested for” [LOINC code 57311-3]).  Likewise, certain tests have Property values that are very similar and difficult to distinguish, such as Prid (“the presence of a kind of analyte and the specific identity of the analyte if it is present”) versus Type (“the specific analyte in cases when the baseline presence of the analyte is known”) [vi].  It is often too much to ask laboratory technicians and/or health IT staff to discern all of the details of their tests and all of the LOINC classification rules that apply, and the availability of experts in this task is limited.  Further, the LOINC-mapping task must be repeated for hundreds of tests by every individual laboratory, even when many labs perform the same tests in exactly the same ways, which further contributes to the opportunity for error, as well as inconsistency.

Problem #2: Inconsistent assignment of LOINC codes by individual laboratories

For certain tests, there are multiple LOINC class assignments that are technically correct, and different labs may choose to map these tests to different LOINC codes. For example, there are thousands of pairs of LOINC classes that differ only in their values for the Method attribute, such as the pair of codes in Figure 3.  One class in each such pair has no method specified, denoting that it maps to all tests characterized by the combination of Component, Property, Timing, System, and Scale alone.  The other class is identical, except that it specifies a particular Method value, denoting that the class maps only to tests in that subset of the first class performed using the specified method (“IA”, or immunoassay, in the case of Figure 3).  However, these latter tests are also technically described by the first “method-less” class (albeit more generally), and certain labs will choose to map such tests to the more general LOINC code (“1832-5” in the case of Figure 3), whereas other labs will map them to the method-specific LOINC code (“83075-2”).  Identical tests may thereby be represented using different LOINC codes depending on the labs that perform (and map) them, again undermining the standardization intended by the LOINC system. Best practice is to map to the most specific LOINC code reflecting the test details; however, a number of laboratories have misunderstood this, believing that mapping to generic LOINC codes is needed to achieve interoperability.

Figure 3. Two LOINC codes that represent the same test at different levels of specificity, based on whether the method is or is not specified.

A similar problem occurs because over one thousand pairs of LOINC classes differ only in their values for the System attribute, as shown in Figure 4.  One class in these pairs has no particular system specified (i.e., the System value is “XXX”), denoting that it maps to all tests characterized by the combination of Component, Property, Timing, Scale, and Method alone.  The other LOINC class does specify a particular System value (“Ser”, or serum, in the example of Figure 4), again denoting that it maps to only that subset of tests in the first class that are performed on serum specimens.  However, both classes technically describe the test performed on serum, and certain labs that perform this test may choose to map it to the more general LOINC code “6387-5”, whereas others may choose the system-specific LOINC code “6386-7”. 

Again, the potential for labs to assign different LOINC codes to the same tests undermines the intended standardization of the LOINC system.  In these cases, some means is required to ensure that different labs make the same mapping decisions, for example that they all map to the most specific LOINC code that applies to their test.
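One way to operationalize the "map to the most specific applicable code" practice suggested above is to rank technically correct candidate codes by how many optional axes they specify. The data shapes and ranking rule below are illustrative assumptions, not an established algorithm; the candidate pair is the method-less/method-specific pair discussed for Figure 3.

```python
# Two technically correct LOINC candidates for one local test: the
# method-less vs. method-specific pair from the Figure 3 discussion.
candidates = [
    {"loinc": "1832-5",  "method": ""},    # method-less, more general
    {"loinc": "83075-2", "method": "IA"},  # immunoassay-specific
]

def most_specific(cands):
    """Pick the candidate that fills in the most optional axes (Method,
    and System when modeled): a simple sketch of the 'most specific
    applicable code' rule."""
    return max(cands, key=lambda c: sum(bool(c.get(axis))
                                        for axis in ("method", "system")))

assert most_specific(candidates)["loinc"] == "83075-2"
```

A shared, deterministic rule like this, applied identically by every lab (or better, a shared repository that records the single agreed-upon code per test), removes the choice that currently produces divergent mappings.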

Problem #3: Insufficient information within the LOINC data model to make clinically relevant distinctions among similar tests performed at different laboratories

While LOINC codes attempt to assign tests to a category of tests that have the same meaning, LOINC coding alone was never intended to contain all of the information necessary for safe use of laboratory data [Huff].  For certain types of lab tests, there remain clinically relevant differences even among tests that have been correctly and consistently mapped to the same LOINC code, i.e., that share the same values for the LOINC Component, Property, Timing, System, Scale, and Method attributes.  These differences among tests with the same LOINC codes, if unrecognized, can lead to clinically misleading interpretations of lab results, for example when trending results that have been aggregated from multiple labs or when applying a clinical guideline that was developed using test results from one lab to a patient whose tests are performed by a different lab.

These differences occur in both quantitative and qualitative test results.  For quantitative tests, they occur if the level of an analyte varies significantly when measured by different labs, usually because the testing instruments, materials, and/or processes used by the labs are different and have not been mutually calibrated or harmonized.  For example, LOINC code 100677-4 denotes the class of lab tests that measures the log concentration of Epstein-Barr virus (EBV) DNA in serum or plasma using nucleic acid amplification with probe detection (see Figure 5).  Empirical investigation has shown that results for this test on the same reference specimen can vary by an order of magnitude among labs using different instrumentation and/or reagent kits [i].  The LOINC system does not and cannot represent these distinctions in instrumentation and reagent kits and, therefore, represents all of the different labs’ tests with the same LOINC code, despite the tests’ demonstrated differences in measurement.  In the case of the EBV DNA test, these differences can have clinical implications because the management of patients who have undergone solid-organ transplantation varies depending on their specific EBV viral loads. 

In situations such as this, clinicians may need additional information about the instrumentation and reagent kit used to generate the test results they are viewing, or the results may need to indicate whether or not different labs’ methods have been standardized or normalized to produce mutually consistent results.  Although labs usually provide reference ranges with their test results, this information typically reflects only the normal ranges of the tests (close to zero, in the case of EBV viral loads) and may not indicate variations in measurements seen at higher viral loads.  These issues are important for any quantitative test used to trend or track abnormal levels of an analyte over time or used to guide treatment based on the specific abnormal level of an analyte at a point in time.

A separate issue in the reporting of quantitative tests is the use of different units of measure for tests that share the same LOINC codes.  For example, the mass concentration of hemoglobin in whole blood (LOINC code 718-7) is variably reported by different laboratories using the units of measure “g/dL,” “g/L,” “g/100mL,” or “mg/mL” [vii].  Hemoglobin test results reported using these different representations can differ numerically by an order of magnitude (e.g., “2.3 g/dL” vs. “23 mg/mL”), complicating the trending or aggregation of test results from different labs (in the best case), and presenting patient-safety risks if not correctly interpreted by busy clinicians or automated decision-support systems (in the worst case).  Hence, there is value in standardizing the units of measure for reporting quantitative test results by different laboratories.  However, LOINC codes, themselves, do not indicate or prescribe any such standard.  Note that the LOINC model does assign different codes to tests that measure different properties of an analyte (e.g., the mass concentration of hemoglobin versus the substance concentration), but it does not specify a particular unit of measure among the several options that may still exist (e.g., “g/dL” vs. “mg/mL” for mass concentration, or “mmol/L” vs. “µmol/L” for substance concentration).
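The hemoglobin example can be made concrete: results reported under different units must be normalized to a common unit before trending or aggregation. The conversion factors below are exact unit arithmetic; the normalization helper itself is an illustrative sketch, not part of any standard.

```python
# Exact factors for converting each reported hemoglobin unit to g/dL.
TO_G_PER_DL = {
    "g/dL":    1.0,
    "g/L":     0.1,  # 1 g/L = 0.1 g/dL
    "g/100mL": 1.0,  # 100 mL = 1 dL
    "mg/mL":   0.1,  # 1 mg/mL = 1 g/L = 0.1 g/dL
}

def normalize_hgb(value, unit):
    """Convert a hemoglobin mass concentration to g/dL before aggregation."""
    return value * TO_G_PER_DL[unit]

# "2.3 g/dL" and "23 mg/mL" are the same concentration despite the tenfold
# difference in the reported numbers.
assert abs(normalize_hgb(2.3, "g/dL") - normalize_hgb(23, "mg/mL")) < 1e-9
```

The harder, unsolved part is not the arithmetic but agreeing on which unit string each lab reports in the first place; a shared repository of expected units per test is what makes such normalization safe to automate.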

Clinically relevant differences among tests represented by the same LOINC code can also occur for qualitative tests.  In these cases, the nominal or ordinal values that express the results of a qualitative test can vary from lab to lab, based on local conventions or preferences.  When the set of result values produced by one lab is inconsistent with that produced by another, the reliable aggregation and comparison of test results from these labs may be compromised. 

For example, the LOINC code 25145-4 denotes the class of lab tests that assess the presence and level of bacteria in urine sediment samples using light microscopy (see Figure 6).  Although the LOINC system specifies that the result values for this test must be qualitative and ordinal, it does not specify the values themselves, nor how many different values may be reported.  As such, one lab might report the results of this test as “0, 1+, 2+, 3+, 4+”, another may report them as “None, Rare, Few, Moderate, Many,” and yet another as “No Bacteria, Few Bacteria, Moderate Bacteria, Many Bacteria.”  Although all three labs may have correctly LOINC coded their tests as 25145-4, their results may not be directly comparable, even with efforts at terminology mapping.  For example, if the third lab were to report “Few Bacteria,” it would not be clear whether that result corresponded to “Rare” or “Few” as reported by the second lab, since the third lab did not have the option to use the value “Rare.”  Because the six attributes that define LOINC codes do not and cannot specify the sets of reportable values (i.e., “value sets”) for nominal and ordinal test results, tests that have been correctly assigned the same LOINC code may still not be comparable, and such incompatibilities would be opaque to persons and systems attempting to aggregate the tests’ results based on LOINC codes alone.
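The value-set incompatibility described above can be seen directly by comparing the three labs' result vocabularies (the labs are hypothetical; the value sets are the ones in the example):

```python
# Ordinal result value sets used by three hypothetical labs, all correctly
# coded to the same LOINC (25145-4, urine sediment bacteria by microscopy).
lab_1 = ["0", "1+", "2+", "3+", "4+"]
lab_2 = ["None", "Rare", "Few", "Moderate", "Many"]
lab_3 = ["No Bacteria", "Few Bacteria", "Moderate Bacteria", "Many Bacteria"]

# Labs 1 and 2 report five ordinal levels; lab 3 reports only four, so no
# one-to-one mapping exists: lab 3's "Few Bacteria" could align with either
# "Rare" or "Few" in lab 2's scale.
assert len(lab_1) == len(lab_2) == 5
assert len(lab_3) == 4
```

Because the scales have different cardinalities, any crosswalk between them is necessarily lossy; only publishing and sharing the expected value set per test (as LIDR envisions) lets a receiving system detect the mismatch before pooling results.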

Fortunately, these challenges and limitations associated with the LOINC coding system can be addressed by auxiliary resources that facilitate correct and consistent LOINC coding, as well as represent tests at a more granular level than that supported by the six existing attributes of LOINC codes alone.  One such envisioned resource is the Laboratory Interoperability Data Repository (LIDR).  The LIDR is intended as a shared resource that collects, stores, curates, and disseminates the detailed information about laboratory tests that is needed to correctly represent and interpret the tests’ results when they are aggregated across multiple labs. 

  1. Huff SM, Rocha RA, McDonald CJ, De Moor GJ, Fiers T, Bidgood WD Jr, Forrey AW, Francis WG, Tracy WR, Leavelle D, Stalling F, Griffin B, Maloney P, Leland D, Charles L, Hutchins K, Baenziger J (1998). Development of the Logical Observation Identifier Names and Codes (LOINC) vocabulary. J Am Med Inform Assoc, 5(3), 276-92.

Problem #4: Inadequate EHR and HIT term builds

Several aspects of the terms built in LIS and EHR dictionaries can contribute to interoperability issues. Often, the performing lab’s test name is not represented exactly in the EHR; instead, HIT vendor “starter lists” of lab tests or provider-preferred terms are used for test orders and results. Both of these usually diverge from the performing lab’s actual test naming conventions and details. They often include a generic component or test build, but the methodology, specimen information, and other test details outlined above that contribute to the full meaning of a lab test may be missing or found in other fields. There may be “assumed” meaning about test details within an institution, inherent in the local knowledge and use of a test, that is missing from the HIT build and thus from the test when it is exchanged and used beyond that build [iv]. Indeed, a VA study by Wiitala et al. found that, across the VA’s standardized EHR, the same test in some cases had 60 different naming conventions built and in use. This is another factor contributing to non-interoperability of results, and, as noted in that paper, researchers and others consequently miss key information about test meaning.

Another HIT term build issue occurs when multiple different laboratories are interfaced to a particular EHR and more than one version of a laboratory test order or result is performed by one or more of them. Separate, distinct EHR terms are needed to delineate these different tests and their different LOINC codes. Many EHRs instead create a single lab result term to which two or more different lab results and their values are mapped. When the EHR supports only a single LOINC code, the detailed, specific LOINC codes from each performing lab are often not supported in the EHR [cite / see https://pubmed.ncbi.nlm.nih.gov/23192446/ ]. Instead, a generic lab order or result term is built and mapped to a methodless or specimenless LOINC code to match the EHR term. The test meaning and mapping are thus changed as the result passes through these HIT systems, instead of preserving the provenance and details. While the intent is to include these important distinguishing test details in LIVD, the existing state needs to change so that access to these details is supported in the EHR and health ecosystem to prevent this information loss. To be clear, not all tests are affected by these issues, but a number of common lab results used for decision making are.


Wiitala WL, Vincent BM, Burns JA, Prescott HC, Waljee AK, Cohen GR, Iwashyna TJ. Variation in laboratory naming conventions in EHRs within and between hospitals: A nationwide longitudinal study. Med Care. doi: 10.1097/MLR.0000000000000996. PMID: 30394981; PMCID: PMC6417968.


[i] Rychert J, Danziger-Isakov L, Yen-Lieberman B, Storch G, Buller R, Sweet SC, Mehta AK, Cheeseman JA, Heeger P, Rosenberg ES, Fishman JA. Multicenter comparison of laboratory performance in cytomegalovirus and Epstein-Barr virus viral load testing using international standards. Clin Transplant. 2014 Dec;28(12):1416-23.

[i] Lin MC, Vreeman DJ, McDonald CJ, Huff SM. Correctness of Voluntary LOINC Mapping for Laboratory Tests in Three Large Institutions. AMIA Annu Symp Proc. 2010 Nov 13;2010:447-51.

[ii] Drenkhahn C, Ingenerf J. The LOINC Content Model and Its Limitations of Usage in the Laboratory Domain. Stud Health Technol Inform. 2020 Jun 16;270:437-442.

[iii] Stram M, Seheult J, Sinard JH, Campbell WS, Carter AB, de Baca ME, Quinn AM, Luu HS; Members of the Informatics Committee, College of American Pathologists. A Survey of LOINC Code Selection Practices Among Participants of the College of American Pathologists Coagulation (CGL) and Cardiac Markers (CRT) Proficiency Testing Programs. Arch Pathol Lab Med. 2020 May;144(5):586-596.

[iv] McDonald CJ, Baik SH, Zheng Z, Amos L, Luan X, Marsolo K, Qualls L. Mis-mappings between a producer's quantitative test codes and LOINC codes and an algorithm for correcting them. J Am Med Inform Assoc. 2023 Jan 18;30(2):301-307. 

[v] Vreeman DJ.  Top 10 Tips for Mapping to LOINC.  Blog entry, 1/23/2016. https://danielvreeman.com/blog/2016/01/23/top-10-tips-for-mapping-to-loinc/ (Accessed 7/30/2023).

[vi] LOINC Users’ Guide, Sections 2.3.2 and 2.3.3.  Regenstrief Institute, updated 8/8/2022.    https://loinc.org/kb/users-guide/major-parts-of-a-loinc-term/ (Accessed 7/30/2023).

[vii] https://testresult.org/en/useful-information/units-of-measurement (Accessed 12/4/2023)

High-Level Use cases for LIDR

As envisioned, the Laboratory Interoperability Data Repository (LIDR) will form the basis of a new ecosystem to ensure that laboratory test results are represented accurately, consistently, and precisely when shared among labs, health care providers, and other organizations.  Figure 7 illustrates this ecosystem and the role of the LIDR within it.  The participants in the ecosystem are denoted by rectangles, and the exchange of information among these participants is denoted by arrows.  The two rectangles that represent the shared LIDR resource and the set of clinical laboratories that will use the LIDR resource are further specified in terms of the relevant databases and tools that these participants will need to fulfill their roles in the ecosystem.  Note that Figure 7 is conceptual and subject to further refinement as the envisioned ecosystem is realized.

The envisioned functioning of the ecosystem that is depicted in Figure 7 and the role of the shared LIDR repository within that ecosystem are characterized by the set of use cases below.  The use cases are described at a high level to explain the envisioned workflows and their purposes.

Use Case 1: Manufacturers submit LIDR entries to the shared LIDR resource

In vitro diagnostic (IVD) device manufacturers submit LIDR entries that specify the correct LOINC codes and other standard representations that labs should use when they report the results of specific tests performed using the vendors’ devices (“prescriptive” LIDR entries).   These entries uniquely identify each device produced by the manufacturer, the analytes that the device can measure, and the specimens from which the device can measure those analytes.  For each combination of device (method), analyte, scale/property as well as specimen (system) and timing, a submitted LIDR entry specifies the correct LOINC code to assign to the results of the test when it is reported, as well as the recommended unit-of-measure to use (for tests with quantitative results) or value set to use (for tests with qualitative results) when reporting results. 

LIDR entries may also prescribe how specimens for the test should be collected and stored, as well as whether the manufacturer has formally harmonized its version of the test with those of other device manufacturers, such that results will be consistent.  Certain tests from different manufacturers have been harmonized in this manner, and the harmonization of others is underway [[i],[ii]].  In some sense, the LIDR entries that device manufacturers submit for each of their tests are analogous to the “package inserts” that pharmaceutical companies provide with each of the medications they produce.

Note that IVD devices include instruments (complex machines that automatically analyze test specimens), as well as test kits (simple collections of materials and reagents that are used to manually analyze test specimens).  For example, a mass spectrometer is an IVD instrument, whereas a home Covid test is an IVD test kit.  The manufacturers of both types of IVD devices would submit LIDR entries to the shared resource.  Further, tests performed on instruments also require chemical reagents, which are frequently provided in the form of reagent kits.  Test kits and reagent kits are distinct in that test kits are used stand-alone to perform a test manually, whereas reagent kits are used in conjunction with an instrument to perform a test automatically.  LIDR entries for “automated” tests (i.e., tests performed using instruments) would specify both the unique instrument and the unique reagent kit used for that test as approved by the applicable regulatory entity (the FDA in the United States).  LIDR entries for “manual” tests (i.e., tests performed using a test kit and no instrument) will only specify the test kit used.  Figure 8 depicts a provisional set of data elements that would be included in the LIDR entries submitted by IVD device manufacturers.
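A manufacturer-submitted LIDR entry along the lines of Use Case 1 could be sketched as a simple record. The field names, vendor, and UDI values below are illustrative assumptions (the actual data elements are those of Figure 8); the glucose LOINC code, property, and unit follow the examples used elsewhere in this paper.

```python
# Hypothetical manufacturer-submitted ("prescriptive") LIDR entry.
lidr_entry = {
    "manufacturer": "Example Diagnostics",   # hypothetical vendor
    "device_udi": "00812345678901",          # hypothetical instrument UDI
    "device_type": "instrument",             # "instrument" or "test kit"
    "reagent_kit_udi": "00898765432109",     # hypothetical reagent-kit UDI
    "analyte": "Glucose",
    "specimen": "Serum or Plasma",           # LOINC System
    "loinc_code": "2345-7",
    "scale": "Qn",                           # quantitative
    "property": "MCnc",                      # mass concentration
    "unit_of_measure": "mg/dL",              # recommended UCUM unit
    "harmonized": True,                      # harmonization indicator
}

def is_automated(entry):
    """Automated tests specify both an instrument and a reagent kit;
    manual tests specify only a test kit."""
    return entry["device_type"] == "instrument" and bool(entry.get("reagent_kit_udi"))
```

As the text notes, an entry for a manual test-kit test would omit the instrument and reagent-kit identifiers and carry only the test-kit identifier.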

Use Case 2: LIDR administrators review and curate the submitted LIDR entries

A quality-assurance and knowledge-curation process is undertaken within the shared LIDR resource to ensure that IVD device manufacturers have correctly and consistently specified the LOINC codes and other standard representations within their submitted LIDR entries. 

For Correct LOINC Coding

Manufacturers are expected to assign the most specific LOINC code that applies to the test described within each LIDR entry; this means they may include more than one entry per combination of device and LOINC System (e.g., one System-specific LOINC for each specifically allowable specimen type).  Also, for example, the specified units of measure must be consistent with the Property attribute value of the specified LOINC code (e.g., “mg/dL” would not be consistent with the property “Substance concentration,” but rather “Mass concentration”).  Finally, administrators ensure that LIDR entries submitted by different manufacturers for the same combinations of analytes, specimens, and methods all have the same LOINC codes specified, given that consistent LOINC coding of equivalent tests is a core goal of the LIDR resource.
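The Property/unit consistency check described above could be automated along these lines. The mapping table is a small illustrative assumption covering only the kinds of units mentioned in this paper, not the full LOINC/UCUM rule set; "MCnc" and "SCnc" are the LOINC property abbreviations for mass and substance concentration.

```python
# Illustrative unit-to-Property table (assumed, not exhaustive).
PROPERTY_FOR_UNIT = {
    "mg/dL": "MCnc",    # mass concentration
    "g/dL": "MCnc",
    "mmol/L": "SCnc",   # substance concentration
    "umol/L": "SCnc",
}

def unit_consistent_with_property(unit, loinc_property):
    """Return True when the submitted unit of measure agrees with the
    Property attribute of the specified LOINC code. Units not in the
    table pass through for manual review rather than being rejected."""
    expected = PROPERTY_FOR_UNIT.get(unit)
    return expected is None or expected == loinc_property
```

For example, a submitted entry pairing "mg/dL" with a Substance-concentration LOINC would be flagged, matching the text's example.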

For Consistent Units of Measure and Value Sets

Beyond such quality-assurance steps related to LOINC-code assignments, LIDR administrators also identify instances of unnecessary variance among the units of measure (for quantitative tests) and value sets (for qualitative tests) that IVD test manufacturers have specified for otherwise-equivalent tests.  For example, if one manufacturer’s LIDR entry for its automated hemoglobin test specifies a “g/dL” unit of measure, whereas another manufacturer’s LIDR entry for the same test specifies a “mg/mL” unit of measure, the variance would be identified and investigated by LIDR administrators to determine whether the two vendors could standardize on a single unit of measure (depending on the contents of their package inserts, the details of their respective tests, and other factors).  In the medium term, at least, the goal of the LIDR would be to eliminate such unnecessary sources of variance in units of measure and in value sets as much as possible.
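The variance scan described above amounts to grouping submitted entries by the attributes that make tests equivalent and flagging groups whose members disagree. The grouping key below (analyte, specimen, method) is a simplifying assumption for illustration; the hemoglobin example mirrors the one in the text.

```python
from collections import defaultdict

def find_unit_variance(entries):
    """Group otherwise-equivalent manufacturer entries and return only the
    groups whose members specify different units of measure."""
    groups = defaultdict(set)
    for e in entries:
        key = (e["analyte"], e["specimen"], e["method"])
        groups[key].add(e["unit_of_measure"])
    return {key: units for key, units in groups.items() if len(units) > 1}

entries = [
    {"analyte": "Hemoglobin", "specimen": "Blood",
     "method": "Automated count", "unit_of_measure": "g/dL"},   # vendor 1
    {"analyte": "Hemoglobin", "specimen": "Blood",
     "method": "Automated count", "unit_of_measure": "mg/mL"},  # vendor 2
]
flagged = find_unit_variance(entries)  # this pair would be investigated
```

The same pattern applies to qualitative value sets by collecting value-set identifiers instead of units.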

For Context Specificity

For certain types of tests, LIDR administrators will also need to supplement the LIDR entries provided by IVD device manufacturers.  This need particularly applies when the correct assignment of a LOINC code for a test depends on contextual factors that vary from one running of the test to another and which the IVD device manufacturer cannot know at the time it defines the LIDR entry for that test.  For example, an automated IVD device may always measure the mass concentration of glucose in a serum sample, but different LOINC codes apply to this measurement depending on whether the sample was obtained at a random time (LOINC code 2345-7) or whether the specimen was obtained one hour following an oral glucose challenge (LOINC code 20438-8) versus 2.5 hours following such a challenge (LOINC code 26554-6), or whether the oral challenge consisted of lactose (LOINC code 72895-6) rather than glucose, etc.  Device manufacturers are likely to submit LIDR entries for the uses of their tests as specified in their package inserts, rather than for all of the clinically distinct contexts in which their tests are run, contexts that may warrant the assignment of different LOINC codes.  Hence, the curation of LIDR entries for such tests will require the creation of additional, distinct LIDR entries for the different clinical contexts in which the tests may be run (e.g., glucose challenge tests, or calculations that are performed in the LIS rather than by the instrument), because such entries must specify different LOINC codes. 

For Mandatory Data Elements in Result Reporting

Lastly, for each LIDR entry, LIDR administrators designate the subset of data elements that labs must include when electronically exchanging the results of that test.  The subsets are specific to the types of data exchanges that are anticipated and that are described below.  These “must-include” data elements comprise the set needed to fully characterize and distinguish the results of tests when they are displayed to clinicians, trended across time, used for automated decision-support functions, aggregated for data-analytical purposes, and shared with other stakeholders who may perform similar functions.  Note that not all of the data elements in each LIDR entry will need to be included when reporting the results for that test, because certain of the elements are needed only for labs to match their local tests to corresponding LIDR entries so as to identify the correct LOINC code; once a correct LOINC code is assigned, such information (e.g., specimen information) will be included in and communicated via the LOINC code, so it need not be redundantly sent in a separate data element.  Also, certain specimen-collection or specimen-handling instructions that are included in package inserts and useful for labs to be aware of need not be included when reporting the results of tests (or, at least, all tests).  Each LIDR entry will designate which data elements do need to be included with the results of that test, which may vary from test to test, as determined by the LIDR administrators.

LIDR administrators use a suite of tools to analyze and curate the LIDR entries submitted by manufacturers.  When administrators correct errors in LIDR entries, the modified entries are sent back to the manufacturers so that they may update their own records, as well as refine their own processes for specifying LIDR entries (such as correctly mapping to LOINC codes). 

The LIDR tools also maintain a local copy of the LOINC knowledge base, which must be periodically refreshed as the Regenstrief Institute updates the set of LOINC codes.  Finally, a mechanism exists to request new LOINC codes from the Regenstrief Institute when no existing code corresponds to a test submitted by an IVD manufacturer.  In these situations, a local code for temporary use may need to be created in the LIDR’s local copy of the LOINC knowledge base so that it may be assigned to a LIDR entry during the time that the Regenstrief Institute reviews and processes the new-code request.

Use Case 3: Labs retrieve and consult the LIDR entries that correspond to the tests they perform

Participating clinical laboratories consult the LIDR repository and identify the specific entries within it that correspond to the tests that the labs perform.  This matching of the labs’ tests to corresponding LIDR entries is based primarily on the instrument and/or test kit the lab uses to conduct each of its tests, the analyte that the test measures, scale and units of measure if applicable, and the kind of specimen that the test analyzes. In certain cases, the specific laboratory method used by the test is also factored into the matching process (particularly when the instrument and/or test kit used does not, itself, imply a single specific method).  Additionally, certain contextual information available only at the time a test is ordered, such as the patient’s fasting state or post-challenge status, is considered (as necessary) when matching items in the laboratory’s test master to entries in the LIDR repository.

The matching of labs’ tests to LIDR entries may be performed manually, semi-automatically, or fully automatically, depending on the manner in which labs internally represent their own tests and depending on the tools available to labs for automatically matching these representations to LIDR entries.  For example, if a lab uses SNOMED-CT codes to represent the analytes that are measured by tests it performs, then it may be able to automatically map its tests to corresponding entries in the shared LIDR resource that also use SNOMED-CT codes to represent the measured analytes.  Other labs that use free text or coding systems other than SNOMED-CT to represent the analytes measured by their tests, however, may need to identify the corresponding LIDR entries for those tests by manually inspecting the LIDR resource or by building mapping tables between their coding system and the LIDR data model.  A similar dynamic applies to the ways that labs represent the specific instruments and/or test kits that they use vis-à-vis the way that instruments and test kits are represented within LIDR entries (for example, structured Unique Device Identifiers versus text descriptions).
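The fully automatic end of the spectrum described above could look like the following sketch, which assumes both the lab and the shared LIDR resource represent analytes with coded values (the SNOMED CT code and UDI values here are placeholders, not real codes). A test whose analyte is free text falls back to manual review, as the text describes.

```python
def match_test_to_lidr(local_test, lidr_entries):
    """Return the single LIDR entry matching this lab test, or None to
    signal that manual inspection or mapping tables are needed."""
    if not local_test.get("analyte_snomed"):
        return None  # free-text analyte: cannot match automatically
    candidates = [
        e for e in lidr_entries
        if e["analyte_snomed"] == local_test["analyte_snomed"]
        and e["device_udi"] == local_test["device_udi"]
        and e["specimen"] == local_test["specimen"]
    ]
    # Ideally exactly one entry matches; anything else needs human review.
    return candidates[0] if len(candidates) == 1 else None

# Hypothetical shared-LIDR content and one item from a lab's test master.
lidr_entries = [
    {"analyte_snomed": "SCT-GLUCOSE", "device_udi": "UDI-A",
     "specimen": "Serum", "loinc_code": "2345-7"},
]
local_test = {"analyte_snomed": "SCT-GLUCOSE", "device_udi": "UDI-A",
              "specimen": "Serum"}
matched = match_test_to_lidr(local_test, lidr_entries)
```

Contextual factors known only at order time (fasting state, post-challenge status) would be additional key fields in a fuller version of this match.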

In any case, for each test that a participating clinical lab performs and reports, the lab will ideally identify a single corresponding LIDR entry, which will then prescribe the correct LOINC code, units of measure, and other mandatory test descriptors that the lab should use when reporting the results of the test.

Use Case 4: Labs assign to their tests the clinically relevant subset of the corresponding LIDR entries, including LOINC codes

For each of its tests that a lab matches to a corresponding LIDR entry, the lab will (ideally) assign the LIDR-prescribed LOINC code and other descriptors to the representation of the test within its own laboratory information system.  The other descriptors include the unique identifier of the IVD instrument and/or test kit used to perform the test, the unit of measure or qualitative values used to report the results of the test, and any indicator of whether the test has been harmonized with similar tests performed by different laboratories.  In this manner, the lab will create a local, standardized description of each test that it performs, and this description will also be represented using the standard LIDR data model.

In certain cases, participating laboratories may not assign to their tests exactly the same descriptors as recommended in the corresponding LIDR entries.  Reasons for such divergence may include local coding practices that differ from the LIDR-recommended practices and which are difficult or impractical to change in the short run.  For example, a laboratory may currently use a LOINC code with a System (specimen) attribute of “XXX” even though the corresponding LIDR entry recommends a LOINC code indicating the specific System “Serum”.  A lab may do this because it has chosen to indicate the specimen in a separate field within its lab reports rather than within the LOINC code, itself.  In this case, the lab’s local description of that test will still be represented using the LIDR data model, but the value for the test’s LOINC code will differ from that recommended in the shared LIDR resource.  Similarly, a lab may choose to use a unit of measure for one of its tests that differs from the unit of measure recommended by the corresponding LIDR entry (e.g., “g/dL” vs. “mg/mL”), and its local LIDR entry will differ in that regard from the corresponding entry in the shared LIDR resource.  In this sense, the local LIDR entries are “descriptive” of the way that the lab is actually representing the test, as opposed to the “prescriptive” information in the shared LIDR entries, which IVD instrument vendors and LIDR administrators have specified to indicate how labs should represent the test.

Over time, as labs become aware of the standard LIDR-recommended representations of the tests that they perform and they have the opportunity to modify their processes and information systems to conform to those standard representations, the labs’ local (descriptive) LIDR entries will converge with the shared (prescriptive) LIDR entries, and all labs will converge on reporting the same tests identically.  In the meantime, however, it will remain important that labs correctly represent and communicate the manner in which they do, in fact, report their test results (even if it diverges from the LIDR-prescribed manner), so that recipients of the results are able to ascertain whether the results can be directly compared with those produced by other labs.

Use Case 5:  When reporting test results, labs include the clinically relevant subset of the corresponding LIDR entry for each test

When reporting a lab test result to an EHR for primary clinical use, each participating lab includes all of the required data elements from its local LIDR entry for the test that are important to the clinical interpretation of the test’s result.  Beyond data elements that are already routinely reported, such as the unit of measure or LOINC code, other data may include information about the test instrument and/or reagent kit used, details regarding the source of the sample tested, or an indicator of whether the test has been harmonized with similar tests at other labs.  This specific information is provided only when it is important to the interpretation of the test result for primary or secondary purposes.

The required data elements from each test’s LIDR entry could be transmitted alongside the test’s results in one of several ways.  First, a single, structured representation of the LIDR entry could be transmitted within one field of the test-result message (e.g., as an XML or JSON data structure).  Alternatively, only a unique identifier for and access path to the LIDR entry could be transmitted with each test result (e.g., as a URL), which recipients of the test results may then use to retrieve the complete LIDR entry that corresponds to that test if and when needed.  Lastly, the individual data elements from the LIDR entry could be transmitted within separate fields of the lab result message.  However, the standards typically used for lab-result reporting (such as HL7 v2 ORU messages and HL7 FHIR Observation resources) do not currently include fields for all of the individual data elements of LIDR entries.  For example, no field for a harmonization indicator or reagent-kit ID presently exists; such fields may need to be added in future versions of these standards.  
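The first two transmission options can be sketched concretely. The JSON structure and the URL pattern below are illustrative assumptions, not fields of any existing HL7 v2 or FHIR standard; the entry content reuses the glucose example from earlier in this paper.

```python
import json

# Hypothetical local LIDR entry to accompany a reported result.
lidr_entry = {
    "loinc_code": "2345-7",
    "unit_of_measure": "mg/dL",
    "device_udi": "UDI-A",        # placeholder identifier
    "harmonized": True,
}

# Option 1: the whole entry serialized into a single message field.
embedded_field = json.dumps(lidr_entry)

# Option 2: only a pointer; the recipient dereferences it when needed.
# (The hostname and path scheme are hypothetical.)
pointer_field = "https://lidr.example.org/entries/" + lidr_entry["loinc_code"]

# A recipient of Option 1 recovers the full structure by parsing the field.
roundtrip = json.loads(embedded_field)
```

Option 1 makes every result message self-contained at the cost of size; Option 2 keeps messages small but requires recipients to have network access to the shared resource.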

Regardless of how they are transmitted, the LIDR-specified data elements that correspond to reported test results must be structured and coded in a standard way that receiving EHRs can parse and process to inform the correct interpretation, comparison, and aggregation of tests.  For example, EHRs may automatically assess and display the level of sensitivity and specificity for a test result based on the identity of the instrument and/or test kit that was used, as communicated in the LIDR entry.  Similarly, EHRs may automatically determine whether a patient’s test results from different labs can be compared or trended for decision-support purposes based on the instruments, units of measure, and/or harmonization indicators denoted in the tests’ corresponding LIDR entries.  In the absence of a standardized LIDR data model, these functions could not be automated and would require time-consuming and error-prone manual review of LIDR entries, which may not be feasible in busy clinical environments.
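The automated comparability decision described above could be sketched as a simple rule over two LIDR entries: same LOINC code plus either matching units or mutual harmonization. This rule is a deliberate simplification for illustration, not a complete clinical comparability policy.

```python
def comparable(entry_a, entry_b):
    """Can two results be compared or trended, judging only from their
    LIDR entries? (Simplified rule for illustration.)"""
    if entry_a["loinc_code"] != entry_b["loinc_code"]:
        return False
    # Harmonized tests are directly comparable even across instruments.
    if entry_a["harmonized"] and entry_b["harmonized"]:
        return True
    # Otherwise require at least the same unit of measure.
    return entry_a["unit_of_measure"] == entry_b["unit_of_measure"]

entry_a = {"loinc_code": "2345-7", "unit_of_measure": "mg/dL",
           "harmonized": True}
entry_b = {"loinc_code": "2345-7", "unit_of_measure": "mmol/L",
           "harmonized": True}
entry_c = {"loinc_code": "2345-7", "unit_of_measure": "mmol/L",
           "harmonized": False}
```

Without a standardized data model, this decision would instead require manual review of free-text descriptors, which is exactly what the text argues is infeasible in busy clinical environments.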

Use Case 6:  When downstream recipients share tests results for primary and secondary uses, they also include the clinically relevant data from the corresponding LIDR entries.

Although federal CLIA regulations require labs to include certain information with the test results that they report directly to ordering providers, the same is not true when those same results are electronically shared by those providers with other downstream systems, such as other providers’ EHRs, HIEs, data warehouses, population-health registries, public-health authorities, regulators, researchers, etc.  Although recent enhancements to EHR-certification requirements will require providers to include certain of this information when they share lab test results downstream [iii], the required information will not extend to all of the envisioned content in LIDR entries.  For example, the required information will not include unique identifiers for the devices and test kits used to generate the lab results, nor harmonization indicators for the tests that were performed.

As such, for at least certain tests, there will remain value in sending all of the designated data elements from the LIDR entry with each lab test result that is shared by one downstream recipient with another.  As discussed above, the detailed information in LIDR entries is important to correctly and safely interpret clinical test results for primary as well as secondary purposes, and this is true regardless of whether the recipient is the ordering clinician or another downstream party.

As with Use Case 5, however, the required data elements from each test’s LIDR entry could be transmitted alongside the test’s results in one of several ways (e.g., a single composite data structure, a set of distinct fields, or a pointer to a remote LIDR repository).   

Use Case 7:  A trusted certification body assesses whether labs’ reporting of test results conforms to the LIDR entries for those tests.

The purpose of the shared LIDR resource is to ensure that the recipients of every lab test result (whether a human or a computer) have sufficient and correct information to interpret that result and to determine whether it can be compared to and trended with the results of similar tests performed by other laboratories.   Achieving this goal will require diligence on the part of numerous parties.  The device and test-kit manufacturers who submit LIDR entries to the shared resource have a critical role to ensure those entries are correct and complete, as do the administrators of the LIDR resource charged with reviewing and curating the entries.  An equally important role, however, will fall to the clinical laboratories that use the LIDR resource.  These organizations will need to correctly match the tests that they perform to the tests’ corresponding entries in the shared LIDR repository, determine which of the test-coding recommendations specified in these (prescriptive) LIDR entries are currently supported by their lab, document within their own local (descriptive) LIDR entries the actual coding of all of the tests that they perform, report the results of those tests in conformance with their local LIDR entries, and include a copy of the required data elements from their local LIDR entry with each test result.  Further, the laboratories will need to update and maintain all of this information as it changes over time, including when their coding practices (as documented in their local LIDR entries) are updated to conform to the manufacturer’s recommended coding practices (as specified in the shared LIDR entries).

Given the complexity and scope of the laboratories’ tasks, it will be important to have a trusted third party continually assess the labs’ performance on these tasks and certify their progress towards the universal standardization of lab-result data.  Specifically, the trusted certification body (TCB) will assess clinical laboratories’ compliance with respect to two aspects of the LIDR process:

  1. Conformance of labs’ local LIDR entries with the prescriptive recommendations specified in the shared LIDR entries.  For each type of test that a lab performs, the TCB will assess the following:

    • Did the lab correctly match its local test to the corresponding entry in the shared LIDR resource?  If not, was that because the lab erred or because no corresponding LIDR entry yet exists?

    • If a corresponding entry in the shared LIDR resource exists for a local test, did the lab adopt the (prescriptive) contents of that entry in their entirety when creating its local (descriptive) LIDR entry for that test, or did the lab deviate from certain practices specified in the shared LIDR entry, such as the prescribed LOINC code, units of measure, value set, specimen-collection practices, etc.?  If the lab deviated, does it plan to come into conformance at a later time or is there a legitimate reason that the lab does not conform with the prescribed coding?

  2. Conformance of the labs’ reporting of test results with the descriptive specifications in the labs’ local LIDR entries.   For each type of test result that a lab reports, the TCB will assess the following:

    • When reporting lab test results, does the lab include all of the required data elements from the corresponding local (descriptive) LIDR entry with each reported result? 

    • For each data element from the local LIDR entry, does the lab encode the element in conformance with the LIDR data model?  For example, does the lab format the instrument UDIs per one of the agreed-upon standards (e.g., GS1, HIBCC, or ICCBBA)?  Does the lab encode the units of measure per the agreed-upon standard (e.g., UCUM)? 

    • When reporting lab test results, does the lab code the result data in conformance with the local LIDR entries corresponding to each test?  For example, if a local LIDR entry specifies a particular unit of measure or a set of qualitative values be used when reporting a test result, does the result, in fact, use that unit of measure or include a value only from that value set?
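The last of these checks (does a reported result follow its local LIDR entry?) could be sketched as follows. The entry shapes are the illustrative ones used in this paper's examples; "QUAL-1" is a placeholder code, not a real LOINC.

```python
def result_conforms(local_entry, result):
    """TCB-style check: does a reported result use the unit of measure
    (quantitative test) or a value from the value set (qualitative test)
    specified by the lab's local LIDR entry?"""
    if "unit_of_measure" in local_entry:          # quantitative test
        return result.get("unit") == local_entry["unit_of_measure"]
    return result.get("value") in local_entry["value_set"]  # qualitative

# Hypothetical local (descriptive) LIDR entries.
quant_entry = {"loinc_code": "2345-7", "unit_of_measure": "mg/dL"}
qual_entry = {"loinc_code": "QUAL-1",
              "value_set": {"Detected", "Not detected"}}
```

A full TCB assessment would apply analogous checks to UDI formats (GS1, HIBCC, or ICCBBA) and to the presence of each required data element, per the list above.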

 


[i] Little RR, Rohlfing C, Sacks DB. The National Glycohemoglobin Standardization Program: Over 20 Years of Improving Hemoglobin A1c Measurement. Clin Chem. 2019 Jul;65(7):839-848.

[ii] Myers GL, Miller WG. The roadmap for harmonization: status of the International Consortium for Harmonization of Clinical Laboratory Results. Clin Chem Lab Med. 2018 Sep 25;56(10):1667-1672.

[iii] See United States Core Data for Interoperability (USCDI) version 4.   https://www.healthit.gov/sites/isa/files/2023-07/Final-USCDI-Version-4-July-2023-Final.pdf, pp. 17-19.(Accessed 8/15/2023).

 

(See also these more detailed use cases for particular stakeholders, workflows, and sub-domains: )

Scope

Minimal Viable Product (MVP) definition

 

LIDR Data Elements:

IVD Test Ordered: (LOINC) To identify the test ordered (e.g., high-sensitive troponin I, comprehensive metabolic panel, 24-hour urine creatinine) in a standardized manner regardless of local codes and naming conventions.

 

IVD Test Performed: (LOINC) To identify test analyte/observable in a standardized manner regardless of local codes and naming conventions.

 

Specimen information

o Specimen type (SNOMED CT®) at minimum

o Specimen source site (SNOMED CT®)

o Specimen source site topography (SNOMED CT®)

o Specimen collection method (SNOMED CT®)

 

Results

o Quantitative results need to include units of measure (UCUM)

o Qualitative result value set (SNOMED CT®)

 

Test kit identification – A single entry that conveys information on the manufacturer of the reagent kit and method. Not intended as an exhaustive catalog of reagents used in the process of testing. (Unique identification for the test kit could be the UDI for FDA-approved tests or another unique identification system for other types of tests, such as EUAs and LDTs.) This can be in the package insert to allow a guaranteed match between the test being set up in the laboratory and the entry in LIDR for commercially available tests (not LDTs).

 

Equipment identification – A single entry that conveys information on the manufacturer of the instrument from which the result is derived. Not intended to be an exhaustive catalog of instruments used in the process of testing.  (Unique identification for the instrument should be its UDI.)

 

Harmonization indicator for assays that have undergone successful manufacturer harmonization with calibration to an internationally certified and standardized material
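The MVP data elements above can be gathered into a single record. This dataclass sketch uses illustrative field names of my own (the list does not prescribe names), with each field annotated with the coding system the list assigns to it; only the elements the list marks "at minimum" are required here.

```python
from dataclasses import dataclass
from typing import Optional, Set

@dataclass
class LidrMvpEntry:
    test_ordered_loinc: str                   # IVD test ordered (LOINC)
    test_performed_loinc: str                 # IVD test performed (LOINC)
    specimen_type_snomed: str                 # specimen type (SNOMED CT), minimum
    specimen_source_site_snomed: Optional[str] = None       # SNOMED CT
    specimen_source_topography_snomed: Optional[str] = None  # SNOMED CT
    specimen_collection_method_snomed: Optional[str] = None  # SNOMED CT
    unit_of_measure_ucum: Optional[str] = None      # quantitative results (UCUM)
    qualitative_value_set: Optional[Set[str]] = None  # qualitative results (SNOMED CT)
    test_kit_id: Optional[str] = None         # UDI or other unique ID
    equipment_udi: Optional[str] = None       # instrument UDI
    harmonized: bool = False                  # harmonization indicator
```

A quantitative entry would populate `unit_of_measure_ucum` and leave the value set empty; a qualitative entry would do the reverse.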

Functional requirements

The functional requirements need to be understood in the context of the use cases and workflows described above. It is also important to remember that at least six types of transactions, information transfers, and reference-data accesses are described and/or implied by those use cases. They are:

  1. Test request information and patient information sent from a clinical system to the LIS

  2. Information sent from the LIS to testing instruments or devices/test kits (potentially via middleware)

  3. Test results sent from testing instruments or devices back to the LIS (potentially via middleware), or results entered manually from results of a test kit

  4. Test results sent from the LIS back to the clinical system

  5. Test results shared with downstream systems, e.g. public health departments, disease registries, HIEs, and quality improvement and research databases, etc.

  6. Access to detailed test information in LIDR tables that is not sent in messages, but may be needed to support accurate use of the data for patient care, research or surveillance, for example understanding of targets or specific methods used, cross-reactivity observed in clinical trials etc.

The data elements required for each of these information transfers are different and need to be uniquely specified to prevent ambiguity or confusion in the transfer of information. Also note that the transactions between lab systems and instruments are not applicable when the testing is done using a test kit or an in-lab procedure. The specific functional requirements of each of the six information transfers are specified below.

  1. Test requests from the clinical system must support the ability to send the following data elements in the request message

    1. A valid orderable LOINC code (e.g., 24-hour urine panel, basic metabolic panel, method-specific orderables)

    2. Specimen type (SNOMED CT®) (when it is not part of the LOINC code definition)

    3. Specimen source (SNOMED CT®) (when appropriate, especially for microbiology)

    4. Specimen source site (SNOMED CT®) (when appropriate, especially for microbiology)

    5. Specimen source site topography (SNOMED CT®) (when appropriate, especially for microbiology)

    6. Specimen collection method (SNOMED CT®) (when appropriate, especially for microbiology)

    7. Pre-condition (fasting, exercising, pre- or post-treatment or event, peak or trough) and challenge information (if not part of the LOINC code)

    8. Information necessary for accurate assignment of reference range (necessary, but not within the MVP scope of LIDR)

      1. Patient age, sex and/or sex organs, race, relevant medications, known diagnoses, etc., and “ask at order entry” questions and answers such as pre- or post-puberty, pre- or post-menopause, last menstrual period, post-prostatectomy, family history, etc.

    9. Designation of a specific reference lab to be used (as appropriate)

  2. Information sent from LIS to instrument or device

    1. Instrument-specific test code based on the appropriate LOINC code (e.g., sodium substance concentration in urine), from lookup in the LIVD table

  3. Information sent from instrument to LIS

    1. Instrument-specific test code for the test that was performed, or the corresponding LOINC code from the LIVD table

    2. Result value: a number, a titer, a range, a SNOMED CT® code (for ordinal or nominal results), or text

    3. Result unit of measure (for quantitative results)

    4. Specimen error condition (e.g., specimen hemolyzed or clotted)

  4. Information sent from LIS to clinical system

    1. A valid resultable LOINC code, including specimen and method where applicable

    2. Specimen type (SNOMED CT®) (when it is not part of the LOINC code definition)

    3. Specimen source (SNOMED CT®) (when appropriate, especially for microbiology)

    4. Specimen source site (SNOMED CT®) (when appropriate, especially for microbiology)

    5. Specimen source site topography (SNOMED CT®) (when appropriate, especially for microbiology)

    6. Specimen collection method (SNOMED CT®) (when appropriate, especially for microbiology)

    7. Pre-condition (fasting, exercising, pre- or post-treatment or event, peak or trough, and challenge information), if not part of the full resultable LOINC code (which includes specimen and method)

    8. Result value: a number, a titer, a range, or a SNOMED CT® code (for ordinal or nominal results), or text

    9. Result units of measure (for quantitative results)

    10. Result interpretation (high, low, abnormal, etc. as specified by HL7)

    11. UDI or UID for the instrument or test kit

    12. Patient specific reference range

    13. Reagent kit identification

    14. Threshold or cutoff

    15. Indication of screening or confirmation result

    16. Indication of compliance with testing requirements (e.g., SAMHSA)

    17. Laboratory where test was performed, responsible pathologist, and other provenance information as defined by HL7

    18. Harmonization indicator (known equivalence across methods)

    19. Specimen error condition (e.g., specimen hemolyzed or clotted)

    20. Data absent reason

  5. Information from clinical system to downstream systems

    1. Any of the information in step #4 should be available to send to downstream systems. The required information (fields or columns in the LIDR table(s)) must be specified for each of the five transactions above; the information required is different for each situation.

  6. Detailed test information used to support patient care, research or surveillance

    1. Any or all of the information available in the LIDR tables

It is assumed that all the test-specific data needed for interoperable data exchange are contained in the LIDR database. In the ideal workflow, the IVD manufacturer submits its data to FDA for approval in a structured format, so that all the data needed to assign proper codes for the test (LOINC) and results (SNOMED CT or UCUM) are available, along with other data of interest for FDA post-market monitoring and for researchers. If during the submission process it is noted that a new code is needed, the system could send a submission to the respective standards development organization (SDO), including the timeline by which the code is needed. Non-disclosure agreements (NDAs) could also be in place between FDA and the SDOs so that details about a test are not shared until FDA approves it; this would provide the new code before the new IVD test is approved for market. At the time of approval, the data are forwarded to, or made available in, LIDR. This approach would put the review process within FDA (or a multi-agency team under NDA) but would ensure that all coding and the UDI are available at the time of FDA approval. A proposed workflow is shown in Figure 9.

Figure 9. LIDR content creation workflow. 1) and/or 2) The IVD manufacturer submits content via LIVD, or possibly via TINKAR (a potential future FDA submission system), and the submission is automatically checked for correct format: if OK, go to step 3; if not, request a fix from the submitter. 3) LIDR captures metadata about the source (at minimum, who and when). The submission triggers an alert for review (if funded); otherwise the content could be published with the source data visible. The LIDR team then reviews the submission for coding correctness: if codes are incorrect, reach out to the submitter; if codes are missing, collect all needed information, submit it to the SDO (A), and receive the codes back from the SDO (B). When the information is correct, publish it in LIDR. 4) A laboratory reads LIDR information to guide test setup in its system. 5) The laboratory queries for new information.

It is assumed that the information needed to map from LOINC codes to instrument-specific test codes is available in the LIVD table(s). The requirements for LIVD/LIDR files are as follows:

  1. Provide support for IVD vendors to upload their LIVD files as:

    1. an electronic FHIR catalog resource bundle (the current LIVD catalog is a universal-realm implementation guide)

    2. a manual spreadsheet (caveat: uploads must be validated for non-printable extended ASCII characters, which need to be either removed or replaced because these characters can and do play havoc with ETL processing between data warehouses and other electronic messaging formats)

    3. a manual .csv file

  2. Provide support for validation during upload (MVP)

    1. Validate format of the data in the upload (MVP)

    2. Validate codes referenced in the upload

  3. If an uploaded file validates, accept it into a holding area

    1. If the file does not validate, reject the file and send/display an error message (MVP)

    2. LOINC codes used for validation should come from the most current LOINC release or prerelease tables to get the most up-to-date status.

      1. If deprecated codes are being used, create an error in the staging area and list suggested replacement codes. If none exist, escalate to Regenstrief.

      2. Note: Deprecated or discouraged codes may appear in an IVD vendor's LIVD map if the vendor has not maintained or updated its maps; codes that were not deprecated or discouraged in the version the vendor supports may be deprecated or discouraged now.

    3. If discouraged codes are being used, create an error in the staging area. New LOINC codes should be requested from Regenstrief if they do not exist.

  4. Versioning

    1. Support for versioning of each LIDR entry where changes occur

    2. Support the ability for LIDR to contain different versions of code systems, including different versions of the same code system

    3. Each instance of the mapping to a standard code must include the version it comes from

    4. Support subscription for alerts for new content based on manufacturer and / or analyte

    5. Support query for changes from a specified date

  5. UDI requirements

    1. Must support UDI entries in test kit UID or equipment UID columns (query against Global Unique Device Identification Database (GUDID))

    2. Support for multiple UDIs for each “result” where multiple reagents, device(s), etc. are used to produce a single test result and value

    3. Allow representations where UDIs are not yet available (e.g., when SARS-CoV-2 tests first emerged)

  6. There needs to be a QA process prior to publication to ensure:

    1. Correct format of data submitted

    2. Correct content of data, verified by independent expert review (as determined by the governance group) of all terminology mappings (MVP); if review is not performed, that should be visible to consumers of the data

    3. Support manufacturer review if any changes are made during the independent expert review

    4. Resolve discrepancies with vendor

  7. There needs to be a defined process for correction after publication

    1. Support for submitting correction request

    2. Support for re-validation of content after new releases of LOINC, SNOMED CT, and UDI updates

    3. All LOINC, SNOMED CT, and UCUM codes will be validated for all test results and related information, including that the codes are active in the most current version of the terminologies.

  8. Support for notifications of updated content to users

  9. There must be support for a line item audit trail of all changes to LIVD and LIDR tables

  10. There must be support for role based user access restrictions (MVP)

  11. The environment for creation of LIVD and LIDR content must support workflow processes

  12. Support for automated access

    1. By manufacturer, entire catalog only (first step)

    2. By instrument for a given manufacturer (part of one catalog)

    3. By test kit for a given manufacturer (part of one catalog)

    4. By instrument across more than one catalog (for instrument platforms that are open across multiple manufacturers)

    5. If disease-specific LIVD files exist, access to those files

    6. Modes of access that must be supported include:

      1. API calls

      2. User interface to support manual search and download, including the ability to search by test and manufacturer with appropriate filters such as laboratory subspecialty or section

    7. Specific query and tooling capabilities

      1. Tooling to help with selection of LOINC codes and downloading just the mapping(s) that is/are needed (guided decisions based on selection of result scale, allowable specimen, or other factors described in comments)

      2. Capability to type free text in the search bar against LIVD fields (instrument, assay, etc.) and the content therein

      3. Ability to filter both with inclusion and exclusion criteria for each LIVD field

        1. include IVD vendor Model XYZ, but exclude ABC, DEF, etc.

        2. include plasma (which includes both ser/plas and plasma), while excluding urine, blood, etc. assays

      4. Capability to produce a change/differential report, e.g., comparing a vendor's previously available LIVD maps from one year or version against the current/most recent year or version

      5. Capability for a lab user to search for maps by all their vendors and perform an export for their LIS.

      6. Capability for users to store settings of instruments, vendors, and assays under their user ID so they do not have to set up IVD search criteria each time.

      7. Capability for users to upload their maps and instrument information into a tool that compares them against vendor updates, to determine whether anything has changed and whether their maps are current.

    8. Any other needs specific to laboratory professionals that would improve ease of use

  13. Support for user feedback (MVP)

    1. Support email (MVP)

    2. FAQs (MVP)

    3. Helpdesk (this could be part of the implementation of SHIELD standards and should probably not be limited to just the LIVD file)

  14. Support the updated LIVD format for individual lines and associated mappings for all of the data items previously specified.
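The validation requirements above (format checks, scrubbing non-printable extended ASCII characters, and flagging deprecated LOINC codes) can be sketched as follows. The field names and the status table are illustrative assumptions for this sketch, not part of any published LIVD or LIDR specification; in production the status lookup would come from the current LOINC release or prerelease files.

```python
import re

# Illustrative status table; in practice this would be loaded from the
# most current LOINC release or prerelease tables.
LOINC_STATUS = {
    "2951-2": "ACTIVE",      # Sodium [Moles/volume] in Serum or Plasma
    "1234-5": "DEPRECATED",  # placeholder standing in for a deprecated code
}

LOINC_PATTERN = re.compile(r"^\d{1,5}-\d$")  # basic nnnnn-n shape

def scrub_nonprintable(value: str) -> str:
    """Drop non-printable / extended characters that break ETL pipelines."""
    return "".join(ch for ch in value if 32 <= ord(ch) <= 126)

def validate_row(row: dict) -> list:
    """Return a list of validation errors for one uploaded row (empty = OK)."""
    errors = []
    code = row.get("loinc_code", "")
    if not LOINC_PATTERN.match(code):
        errors.append("malformed LOINC code: %r" % code)
    elif LOINC_STATUS.get(code) == "DEPRECATED":
        errors.append("deprecated LOINC code: %s (suggest replacement)" % code)
    for field, value in row.items():
        if value != scrub_nonprintable(value):
            errors.append("non-printable characters in field %r" % field)
    return errors

print(validate_row({"loinc_code": "2951-2", "vendor_analyte_name": "Sodium\x00"}))
```

A row that passes all checks returns an empty error list and would move to the holding area; any non-empty list would trigger the rejection message described in requirement 3.1.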

Roadmap for LIDR

This section aims to describe the phases of work to achieve the goal of LIDR from the current state over time.

With the overarching goal of being a reference for anyone who uses laboratory tests, the approach to creating LIDR will have to account for two things:

  • Create the content

  • Create the means to make it accessible to everyone

The precursors to LIDR content are the LIVD files published by IVD manufacturers or curated for public health reporting purposes by APHL and CDC; the current central distribution mechanism is CDC's webpage (https://www.cdc.gov/csels/dls/livd-codes.html), where these files can be downloaded as Excel spreadsheets.

Step 1 (tooling, ~6 months?): Improve upon the current status by converting all existing LIVD content from spreadsheet format to the FHIR JSON format (http://hl7.org/fhir/uv/livd/) and making it available on a FHIR server, ideally in a place where people already go looking for LOINC (i.e., Regenstrief). As part of this process, develop a tool that can translate between Excel and FHIR formats. Ideally this should also include a prototype of an open API for autoloading LIVD files to this FHIR server.
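The spreadsheet-to-FHIR translation in Step 1 could be sketched as below. The column names loosely follow the published LIVD spreadsheet layout, and the output is a simplified stand-in for the LIVD FHIR catalog bundle rather than a conformant instance of the http://hl7.org/fhir/uv/livd/ implementation guide; the vendor-code system URI is a hypothetical placeholder.

```python
import csv, io, json

# Illustrative LIVD-style rows; real files come from the CDC download page.
LIVD_CSV = """Manufacturer,Model,Vendor Analyte Code,Vendor Analyte Name,LOINC Code
Acme Dx,Analyzer 100,NA-01,Sodium,2951-2
"""

def rows_to_bundle(csv_text):
    """Map spreadsheet rows to a simplified FHIR-style collection Bundle."""
    entries = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        # One DeviceDefinition per row for the instrument...
        entries.append({"resource": {
            "resourceType": "DeviceDefinition",
            "manufacturerString": row["Manufacturer"],
            "modelNumber": row["Model"],
        }})
        # ...and one ObservationDefinition carrying the code mapping.
        entries.append({"resource": {
            "resourceType": "ObservationDefinition",
            "code": {"coding": [
                {"system": "http://loinc.org", "code": row["LOINC Code"]},
                {"system": "urn:example:vendor-codes",  # hypothetical system URI
                 "code": row["Vendor Analyte Code"],
                 "display": row["Vendor Analyte Name"]},
            ]},
        }})
    return {"resourceType": "Bundle", "type": "collection", "entry": entries}

print(json.dumps(rows_to_bundle(LIVD_CSV), indent=2))
```

A round-trip tool would also need the reverse direction (FHIR back to spreadsheet) so that labs that still work from Excel are not left behind.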

Step 2 (content, ~6 months?): Recruit IVD vendors to expand the content available in LIVD format, work with FDA to structure content from existing 510(k) submissions for population into LIDR, and in addition crowdsource content based on published package inserts.

Step 3 (content): Add new data elements to LIVD to create the LIDR format and validate that the needed attributes are present, including clinical context (e.g., challenge tests, timed specimens) and additional test setting requirements (e.g., CLIA complexity).

Additional development will need both initial and continuous funding to improve:

  • Content:

    • Add new data elements to LIDR

    • Expand participation of IVD vendors

    • Transform existing 510(k) data into structured format; alternatively, crowdsource content based on published package inserts

    • Ideally, review content prior to publication

    • Add additional content based on clinical context (e.g., challenge tests, timed specimens)

  • Accessibility - develop tooling that:

    • Models the data elements in machine-readable format

    • Allows HIT systems (including terminology services) to access the content

    • Allows people to access the content
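The system-access tooling above would plausibly be realized as FHIR-style searches against whatever server ends up hosting the catalog content. The sketch below only shows URL construction; the base URL and search parameter names are assumptions for illustration, not an actual endpoint implied by this document.

```python
from urllib.parse import urlencode

# Hypothetical FHIR endpoint; no actual server or parameter names
# are implied by this document.
BASE = "https://fhir.example.org/livd"

def catalog_query(manufacturer, model=None):
    """Build a FHIR-style search URL for one vendor's catalog entries."""
    params = {"manufacturer": manufacturer}
    if model:
        params["model-number"] = model
    return BASE + "/DeviceDefinition?" + urlencode(params)

print(catalog_query("Acme Dx", "Analyzer 100"))
# → https://fhir.example.org/livd/DeviceDefinition?manufacturer=Acme+Dx&model-number=Analyzer+100
```

Scoping the query by manufacturer alone corresponds to whole-catalog access (the first step in the automated-access requirements), while adding the model parameter narrows to a single instrument.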

How to get involved

To learn more about SHIELD, please visit our Confluence site and join our calls: the All SHIELD calls for monthly updates and the LIDR Working Group calls for weekly discussions. If you are an IVD vendor, consider using LIVD files and providing your content to help us grow LIDR content. For questions, please contact us at SHIELDLabCodes@gmail.com.