Brainstorming Notes

  • List Implementation Needs/Preconditions/Assumptions

  • Define Timeline Milestones/Phases/Scope

    • List Goals/Achievements/Measures for each Milestone/Phase/Stage

    • Define metrics so you know whether you’ve met or missed the mark.

  • Define Interoperability

    • Semantic

    • Syntactic

    • What is the goal for each milestone/phase?

  • List code systems involved and whether a precoordinated or postcoordinated approach is used for mapping

    • The same test results may be mapped differently/to different code systems depending on requirements in downstream systems. Consider VRE: per ELR-to-public-health requirements, the LIS and the ELR message to public health map the Vancomycin antibiotic to LOINC, the result value to the SCT “resistant” qualifier code, and the organism Enterococcus faecalis to the SCT organism code. However, EHRs receiving these data need to remap/restructure them (often with an EHR-based remapping table/interface) to create a CDA document for Healthcare Associated Infection (HAI) reporting using the precoordinated Vancomycin Resistant Enterococcus (VRE) SCT organism code. So two different public health systems receive two differently structured and encoded lab results via two different HL7 messaging implementation guides.
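The post- vs. pre-coordinated split above can be sketched as data structures. This is a minimal illustration only: the code values below are placeholders, not real LOINC/SNOMED CT assignments, and the field names are invented for the sketch.

```python
# Post-coordinated (LIS -> public health via ELR): three separate codings.
# All codes are placeholders, NOT real LOINC/SNOMED CT codes.
elr_result = {
    "test":     {"system": "LOINC", "code": "<vanc-susc>",   "display": "Vancomycin [Susceptibility]"},
    "value":    {"system": "SCT",   "code": "<resistant>",   "display": "Resistant"},
    "organism": {"system": "SCT",   "code": "<e-faecalis>",  "display": "Enterococcus faecalis"},
}

# Pre-coordinated (EHR -> NHSN HAI reporting via CDA): one combined concept.
hai_result = {
    "organism": {"system": "SCT", "code": "<vre>",
                 "display": "Vancomycin resistant Enterococcus faecalis"},
}

def is_vre(post):
    """Derive the pre-coordinated VRE concept from the post-coordinated parts."""
    return (post["value"]["display"] == "Resistant"
            and post["test"]["display"].startswith("Vancomycin")
            and post["organism"]["display"].startswith("Enterococcus"))
```

The remapping burden described in the notes is exactly this `is_vre`-style translation, performed today via EHR remapping tables rather than shared logic.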

  • Indicate which information/reporting systems/processes are involved/functionality needed/required in each

    • LIS

    • LIMS

    • EHR

    • Data Warehouses

    • Regulatory Systems (see below)

  • Indicate any gaps in information system vendor functionality needed to support lab data interoperability requirements/standards adoption, reporting, etc.

    • HHS COVID reporting requirements introduced, for the first time, the need for test performers to indicate UDI info.

      • Education was needed to explain what a UDI is, and where it is found.

      • Many COVID tests lacked UDIs/hadn’t been assigned them yet, so they were unavailable.

      • For Lab Developed Tests (LDTs), which accounted for a fair number of COVID tests, UDIs may not exist for reagents/tests developed in house. How should these cases be reported? Also, many different test kits and reagents may be used.

      • Each UDI needs to be associated with the test system validated and used for testing, even when the test build in the LIS for the order and/or results was switched often in response to supply chain issues. One vendor’s system may be used one day/for one batch of testing, then switched to another analyzer when supplies are exhausted.

      • Messaging guidance was developed by the COVID stakeholders (HL7, PH, vendors, labs) as to what functionality is currently available from both senders and receivers, and to determine how the content should look in messages.

        • Should get feedback on how this is working. It would be more desirable long term to have a more fleshed-out way to do this, plus information system vendor functionality to collect, store, and report these data. Recommend reviewing this at a future point. Do all downstream systems have functionality to support these data elements?
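The per-batch UDI association described above (instrument swapped when supplies run out, LDTs with no UDI at all) can be sketched as a simple run log lookup. The data and field names are illustrative assumptions, not any vendor’s schema.

```python
from datetime import date

# Hypothetical log of which validated test system ran each batch of testing.
run_log = [
    {"batch": "B001", "run_date": date(2021, 3, 1), "udi": "UDI-ANALYZER-A"},
    {"batch": "B002", "run_date": date(2021, 3, 2), "udi": "UDI-ANALYZER-B"},  # supply-chain switch
]

def udi_for_batch(batch_id):
    """Resolve the UDI to report for a given batch; None models the LDT/in-house case."""
    for run in run_log:
        if run["batch"] == batch_id:
            return run["udi"]
    return None  # no UDI may exist for in-house reagents/tests
```

The point of the sketch: the UDI must be resolved per run, not statically per LIS test build, because the same order/result build may be fulfilled by different analyzers on different days.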

    • EHR vendor certification criteria/functionality has been a good thing for getting vendors/implementers on the same page. It is also good to allow vendors to customize per their product/marketing requirements.

      • Issue: Some vendors look only at certification requirements instead of the entire workflow, and design based solely upon certification requirements, so some functionality may be missing, less than ideal, etc. How are ONC/other agencies assessing when requirements are met/not met? Dept. of Justice investigations resulted in fines for some EHR vendors not meeting the “intent” of the requirements, raising the question of which requirements must be met. Meanwhile, other systems are certified but cannot meet all requirements. Solution: Increase audits? Information Blocking rules also address some of these aspects. Potential solution: more rigorous definitions/functional requirements/testing/auditing. With regard to lab data, this may run in parallel with lab interoperability findings from SHIELD pilots, etc., to address gaps.

    • Some EHR CPOE functionality (see below in workflow/step):

      • supports a list of terms that are common across all their customers, but not specific to any single performing laboratory. They may or may not correspond to the performing laboratory’s naming conventions. The provider facing name in these EHRs is mapped behind the scenes to the performing laboratory’s test order number so the receiving laboratory knows what the physician desires when the order message is received.

        • Issue: Since the naming functionality is not tied to the actual performing laboratory order menu, the provider may place lab orders that aren’t even on the performing lab’s test menu. Patients have specimens collected and sent to the lab only to have the lab cancel the test because it doesn’t perform it. This is a patient quality and safety issue, along with an informatics quality and data governance issue. Solution: EHR functionality needs to support receipt of an electronic Directory of Service from each performing laboratory with which providers are contracted. The EHR needs to consume/integrate test compendiums and serve them up for use by providers for ordering on each patient instance.

        • Issue: Many provider EHRs have functionality to order a test from the EHR menu and then send the order request with the specimen to the performing lab. However, the EHR is not completely interfaced with the performing lab’s LIS/able to order the tests as named in the LIS. The laboratory has a team of data analysts that “translate and order” the physician order requests in its LIS to print barcode labels and relabel specimens so its instruments/track systems, etc. can perform the testing in an automated fashion. This burdens the laboratory with staffing and providing functionality that should be available in the ordering EHR systems (despite the MU functionality they have attested to and vendors have certified against).
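The canceled-test failure mode above could be caught before the specimen ever ships by validating orders against the performing lab’s directory of service. A minimal sketch, assuming a hypothetical compendium keyed by the lab’s own order codes:

```python
# Hypothetical performing-lab compendium: order code -> lab's own test name.
lab_compendium = {
    "C1234": "Basic Metabolic Panel",
    "C5678": "CBC with Differential",
}

def validate_order(order_code):
    """Reject orders for tests not on the performing lab's menu (can't order tacos at a burger joint)."""
    if order_code not in lab_compendium:
        raise ValueError(f"Order {order_code} is not on the performing lab's menu")
    return lab_compendium[order_code]
```

This is the check that EHR consumption of an electronic Directory of Service would make possible at order-entry time, instead of discovery at specimen receipt.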

      • Issue: Many EHRs have been designed around physician workflow, naming, and needs; ancillary areas like laboratory testing workflows are designed around physician needs, not laboratory interoperability needs. (Ongoing)

      • Issue: Health professionals, especially physicians, often have a different notion of a laboratory test in mind than the medical laboratory professional and/or pathologist. EHR terms for ordering/resulting lab tests have been built around this customer base/these notions, which is why they often differ from performing laboratory terms for orders and results. They often lack the specificity and test details that are important for distinguishing tests which may otherwise be commingled, used together for clinical decision making, or that otherwise make a test different from a clinical decision-making perspective.

        • The “burger analogy.” Laboratory menus are similar to fast food menus

          • A physician can only order tests that are on a specific laboratory’s menu (i.e. can’t order tacos at a burger joint)

          • Physicians often request generic lab orders, not specific names to a particular laboratory

          • EHRs designed by/for physicians

          • Physician wants a Burger. Burger is built in the EHR CPOE order menu. It is mapped behind the scenes to the #3 menu option at one lab/restaurant and the #5 menu option at another lab/restaurant. One may call it a “whopper” and the other calls it a “Big Mac.” (Of course we know these are quite different tests/burgers with different features, prices, methods of cooking, calories, etc.) Yet they are mapped to the same burger results and commingled.

          • A related example may involve a cheeseburger. Physician wants a cheeseburger. Cheeseburger is built in the EHR CPOE order menu and mapped to each performing lab’s order # behind the scenes and sent in the message. Are the cheeseburgers the same at each lab/restaurant? Let’s look under the bun

            • One cheeseburger is flame grilled method and topped with “melted American cheese, crinkle cut pickles, yellow mustard, and ketchup on a toasted sesame seed bun.” (note multiple pickles, no onions, but on a sesame seed bun)

            • The other cheeseburger is made with a fried method and topped with “a tangy pickle, chopped onions, ketchup, mustard, and a slice of melty American cheese.” (note 1 pickle, and includes onions)

          • Taking this one step further, let’s say the burger is included in a value meal which may include fries and/or onion rings. Let’s also say one is trying to LOINC encode these items. Please reference the slide below highlighting the challenges in the EHR and each laboratory’s LIS.

          • [Slide: LOINC encoding challenges in the EHR and each laboratory’s LIS]

          • Note: when a single EHR term is used with different terms and LOINCs from each laboratory, as in the cheeseburger example in the figure, a choice may be needed as to which LOINC is mapped in the EHR (not all EHRs have this functionality/issue if they can support each LOINC from each performing lab). The EHR may map to a more generic LOINC, to one of the two LOINCs, or even to no LOINC as a result.

          • With the Burger example, there’s a similar issue.

          • With the onion rings example, Laboratory A doesn’t have that test/item on its menu, so no LOINC should be involved. The menu item in the EHR may only reflect the item from Lab B’s menu and its LOINC.

          • Our ideal interoperability example occurs with the fries (want fries with that?), where each lab and the EHR use the same terms, same LOINCs and there are many satisfied customers (physicians).

          • This simple example demonstrates many of these laboratory interoperability issues seen today.
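The burger analogy can be written down as a mapping table, which makes the three cases (commingled cheeseburger, single-lab onion rings, aligned fries) concrete. All menu numbers and LOINC strings below are illustrative, not real codes.

```python
# One EHR CPOE term -> per-lab (order number, LOINC), behind the scenes.
# Codes are illustrative placeholders.
ehr_order_map = {
    "Cheeseburger": {"LabA": ("#3", "LOINC-A1"), "LabB": ("#5", "LOINC-B1")},  # commingled
    "Onion Rings":  {"LabB": ("#7", "LOINC-B2")},                              # not on Lab A's menu
    "Fries":        {"LabA": ("#9", "LOINC-F"),  "LabB": ("#9", "LOINC-F")},   # ideal: same code everywhere
}

def labs_agree(ehr_term):
    """True only when every performing lab maps the EHR term to the same LOINC."""
    loincs = {loinc for _, loinc in ehr_order_map[ehr_term].values()}
    return len(loincs) == 1
```

The “fries” row is the interoperability goal: the EHR term, both labs, and the code system all line up, so downstream systems need no remapping.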

          • Major Issue: EHR maintenance for order and result updates (initial menu build and updates) is often manual and a huge burden

            • MU Stage 3 proposed rule included eDOS to require EHRs to download lab compendiums, but it was removed in the final rule

            • ONC ACLA test calculator with practice costs: https://www.healthit.gov/techlab/edos_estimator.html

          • Solutions:

            • Allow/require laboratories to build standardized, encoded laboratory test compendiums to provide to all their customers. (Major labs each do this a bit differently.) (provider/sender) (Compendiums would also meet CLIA/accreditation specimen collection manual requirements. FHIR Service Catalog and HL7 v2.5.1 eDOS specifications are currently available.)

            • Require EHRs to have functionality to automatically/dynamically consume (receive) test compendiums (including encoding, supporting the performing lab’s test naming conventions in messages, and Ask at Order Entry Questions). This shouldn’t require much manual intervention and should be able to perform updates quickly, such as is needed in a pandemic.

            • Require EHRs to use said laboratory test compendium integration functionality in CPOE systems so the physician can place orders and respond to AOEs, which also carry the required encoding with LOINC (SCT for qualitative responses, specimen sources, etc.), sent in the order message interfaced with the performing lab (so it can be received without manual intervention).

            • Require LISs to be able to receive orders electronically (including encoding)

            • Would need to address where physician convenience orders are exploded into individual orders, as some EHRs do this on the EHR side while in other cases it occurs on the laboratory side.
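The “automatically consume a compendium” solution above can be sketched as a small update feed applied to the EHR order catalog. The JSON shape here is a hypothetical simplification, not the actual eDOS or FHIR Service Catalog schema; 94500-6 is a real LOINC for SARS-CoV-2 RNA, used only as an example.

```python
import json

ehr_catalog = {}  # EHR order catalog, keyed by the performing lab's order code

def consume_compendium(payload: str):
    """Apply a compendium add/update feed without a manual menu rebuild."""
    for entry in json.loads(payload):
        ehr_catalog[entry["order_code"]] = {
            "name": entry["name"],        # preserve the performing lab's naming convention
            "loinc": entry.get("loinc"),
            "aoe": entry.get("aoe", []),  # Ask at Order Entry questions travel with the test
        }

# A one-entry feed, e.g. a pandemic-driven update pushed by the lab.
feed = '[{"order_code": "C777", "name": "SARS-CoV-2 RNA", "loinc": "94500-6", "aoe": ["Symptomatic?"]}]'
consume_compendium(feed)
```

Because the feed carries the lab’s own names, codes, and AOEs, the EHR never has to hand-build (or guess at) the performing lab’s menu.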

  • Update of existing SHIELD Pilots

    • what are successes and lessons learned/improvements needed?

  • Indicate which stage/point in total testing process and beyond the requirement/goal/measure is needed/defined, etc. Is it in the:

    • lab compendium/test menu/directory of service/CLIA specimen collection manual (often online)?

    • which is received/integrated/built in the EHR CPOE system test menu for provider orders, and used to generate the test order sent to the performing lab LIS

    • which receives the EHR/other LIS order request and maps it to its own LIS-built order/naming/mapping, creating assignments to the LIS “station/workbench” for manual/automated tests, where order and result placeholders are visible in the LIS (lacking result values until analysis is performed)

    • which sends test (result) requests/requests for analyte/assay to be performed to each instrument/routing of specimens via automated tracks to each instrument, etc. using (IHE LAW), interfaces, lab middleware, specimen tracking systems, storage, processing, etc.

    • which also is received by instrument with specimen info indicating which test(s) is/are performed on the container (specimen, including any preprocessing creating aliquots or derivatives) when at the analyzer/bench.

    • IVD instrument performs testing/analysis of container contents (specimen) and generates result values (direct/calculated); the IVD instrument sends results and values (and pt info/container ID) across the interface to middleware or the LIS directly.

    • Manually performed test results may be directly entered into the LIS/EHR/POCT (point of care testing) system

    • LIS may utilize several results as part of a panel to perform calculations/generate ratios/calculated results, etc. (list examples of calculated results)

      • calcium/creatinine ratio using calcium and creatinine result values from instruments.

      • 24 hr urine calcium rate uses total volume (an ask-at-order-entry question provided by the person collecting the specimen, or measured by the laboratory, and entered in its own LIS result field with its own LOINC) and hours of collection (an ask-at-order-entry question provided by the person collecting or ordering the specimen, entered in its own LIS result field with its own LOINC), coupled with the instrument calcium, once received over the interface, in its own LIS result field with its own LOINC; these are used to calculate the 24 hr urine calcium rate in its own LIS field (with its own LOINC) and reported to EHR/downstream systems.

      • Interpretations.

        • may be pathology interpretation taking into account several test results and their values, such as with CAP Cancer Protocols (i.e. overall diagnosis, interpretation of biomarkers such as er/pr, or interpretations of special stains)

        • may be panel interpretations.

          • several biomarkers, molecular or genomics results, with overall disease, VUS, variant etc interpretation

          • several coag markers in a thrombosis risk panel (factor V Leiden, factor VII) with an overall interpretation provided (high, low, intermediate risk)

          • cardiac risk interpretation as part of lipid panel per clinical guidelines risk assessment reported as own result and result value (and encoded accordingly)
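The calculated-result examples above (the ratio and the 24 hr rate) can be sketched as plain functions. The unit conventions are assumptions for the sketch: calcium and creatinine in mg/dL, urine volume in mL, collection time in hours.

```python
def calcium_creatinine_ratio(calcium_mg_dl, creatinine_mg_dl):
    """Simple ratio calculated from two instrument result values."""
    return calcium_mg_dl / creatinine_mg_dl

def urine_calcium_24h(calcium_mg_dl, total_volume_ml, hours_collected):
    """24 hr urine calcium excretion (mg/24 h), normalized when collection != 24 h.

    Combines an instrument result (calcium concentration) with two AOE-sourced
    values (total volume, hours collected), each of which lives in its own
    LIS result field with its own LOINC.
    """
    total_mg = calcium_mg_dl * (total_volume_ml / 100.0)  # mg/dL x dL = mg
    return total_mg * (24.0 / hours_collected)
```

Each input and the calculated output is a distinct LIS result field; interoperability requires all of them, not just the final number, to be discretely captured and encoded.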

    • LIS results are manually or auto verified and released to downstream systems such as EHR and Public Health.

      • Note: where the LIS and EHR are the same vendor system with a shared database, traditional HL7 messages are not used in the release process when “sending results from LIS to EHR.” Rather, this is performed internally. Several vendors have this structure.

      • Verification triggers messaging to be created (often HL7 v2.5.1, but may be v2.3.1). Unaware of any LISs with HL7 FHIR functionality (to create, send, and receive data) yet. Recommend sticking with v2.5.1 since it is part of regulations.

      • messages sent to public health (per each jurisdiction’s requirements), often via the ELR IG; many HL7 versions and capabilities therein are in use

      • messages sent to EHR via LRI IG (or not)

      • messages sent to data warehouses
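The HL7 v2 result messages released in the steps above carry each coded result in an OBX segment. A minimal stdlib-only parsing sketch follows; a real implementation must follow the full ELR/LRI IG rules, and the segment shown is a simplified example (94500-6 and 260415000 are real LOINC/SCT codes commonly used in COVID ELR, shown here only for illustration).

```python
# A simplified OBX segment from an ORU^R01 result message (v2.5.1 style).
obx = "OBX|1|CWE|94500-6^SARS-CoV-2 RNA^LN||260415000^Not detected^SCT||||||F"

def parse_obx(segment: str):
    """Pull the coded test, coded value, and result status from one OBX segment."""
    fields = segment.split("|")
    test_code, test_name, test_sys = fields[3].split("^")   # OBX-3 observation identifier
    val_code, val_name, val_sys = fields[5].split("^")      # OBX-5 observation value (CWE)
    return {
        "test": (test_code, test_name, test_sys),
        "value": (val_code, val_name, val_sys),
        "status": fields[11],                               # OBX-11 result status, e.g. F = final
    }
```

Note the version gap called out in the notes: specimen detail travels in the SPM segment, which v2.3.1 lacks, which is one concrete reason to standardize on v2.5.1.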

    • EHR receives/consumes the laboratory report of record (see the HL7 EHR-S IG; not widely adopted nor required yet). Once remapping of lab data occurs in the EHR database, lab data are able to be utilized within the EHR for functionality such as:

      • results review by health professionals

      • displays

      • clinical decision support tools

      • reporting internally

      • reporting externally to

        • Public Health for electronic case reporting (eCR)

        • Public Health (NHSN) for HAI reporting

        • Public Health Cancer Reporting (Hospital Cancer Registrar generated from EHR data including pathology/lab data, meds, treatments, etc. )

        • Clinical Quality Measure Reporting (more detail needed here @ Julia?)

        • FDA for device monitoring / regulatory reporting / recalls (more detail needed here)

        • HIEs

        • Research/Clinical Trials

        • other EHRs/systems for Transfers of Care (TOC) CDA including labs and LOINCs

    • Other downstream reporting system use cases on how they consume/utilize lab data (fill in)

      • Public Health eCR

      • Public Health ELR (including how reported to CDC/federal agencies for COVID response)

      • Public Health HAI NHSN

      • Public Health Cancer Registry Reporting

        • Hospital Registrar including pathology/laboratory data/genomics/tumor/biomarkers/ NAACCR data elements sent to central cancer registry (usually state public health)

        • Central Cancer Registry summarizes and sends to CDC NPCR or NIH SEER programs where summarized cancer data reported/utilized

      • Hospital Quality Measures (how they make it to CMS)

      • Physician Quality Measures (how it makes its way to CMS)

      • Payers/Insurance Companies/Medicare (for lab contracts, but also physician quality “assessments”)

      • Disease Registries


  • Proposal/Ideas

    • Phase 0: Assessment of current state

      • Information System Use

        • How many labs use an electronic system (LIS, LIMS, EHR, other portal) to build laboratory orders, results, values?

        • How many labs use paper reporting?

        • How many labs use fax/phone reports?

        • Goal: Get all those using non-electronic reporting onto electronic reporting.

          • How achieved? Policy? Incentives to purchase/install system? Funding to help labs build, etc.

          • How measured? A 50% increase within a year, and a 100% increase in 2 years? What is the gap and how is it filled?

      • Information System functionality

        • Does Information system have basic lab/informatics/coding/messaging functionality?

          • Codesystems

            • LOINC, SNOMED CT,

            • Does information system support functionality for mapping lab orders and results to current LOINC releases and maintenance

            • Does information system support functionality for mapping qualitative result values, specimen types, specimen sources, organisms, collection procedures, etc. to latest (US or Intl) versions of SCT and maintenance

            • Other code systems: ICD, UCUM, CPT, etc.

            • Goal: Have all LIS/EHR/information systems with code system support (by when?). How measured (certification process like EHRs, or another measure)?
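The code-system support goals above imply a measurable completeness check per result record: a LOINC for the test, SCT for a qualitative value, UCUM for a quantitative unit. A minimal sketch with an invented record shape (the field names are assumptions, not a standard):

```python
def missing_codings(result):
    """Return which expected codings are absent from a result record."""
    missing = []
    if not result.get("loinc"):
        missing.append("LOINC")
    if result.get("value_type") == "qualitative" and not result.get("sct_value"):
        missing.append("SCT")
    if result.get("value_type") == "quantitative" and not result.get("ucum_unit"):
        missing.append("UCUM")
    return missing
```

A check like this could back either a certification test or an ongoing measure of code-system adoption across systems.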

          • Capability to build/support discrete orders and results and result values for all lab test info

            • Current State: Some LIS/LIMS vendors are unable to support creation of discrete orders and results for all lab data. Some have “workflow items” that are added on to support billing and documentation but are unable to be mapped to LOINC or SCT, as they are not built as orders, results, and values. Others have capability for test orders, but they are “suppressed” and not released external to the LIS/EHR. This is more common in microbiology and blood bank areas (i.e. micro antibiotic susceptibility panel orders, blood bank antigen testing). It will take time to get functionality, so this may not be in the first phase of LIVD implementations.

            • Goal: get all lab data discretely captured, reported (origin, receipt in, sent out).

            • How achieved/measured? Vendor functionality (certification?). Measure the share of test data that is built discretely as orders/results.

          • Discretely captured lab data (i.e. non text blob, pdf reports)

            • Current State: A fair number of lab results/reports of record are still not discrete, especially those received from labs via PDF, fax, or paper, which are scanned/stored as PDFs. At best, these PDFs can support a single LOINC code (path report, lab report); they are not very computer-processable or semantically interoperable, and the information therein requires manual work/reading, etc.

            • Assess the current state of how much lab data is not very usable because it is non-discrete. Metrics/definitions needed. Assess what is discrete but not encoded (i.e. CAP Cancer Protocols, genomics reports, some micro/esoteric testing). Assess what % is codified and not codified (individual reports or overall orders/results/values).

            • Goal: Work to make % improvements on discretely capturing and reporting data (some is discretely captured, but still reported as text blob/pdf) in all information systems. Goal : Work to make % improvements on encoding discrete data (precondition here).

            • Note: Won’t be 100%, but perhaps we can get to 70% overall. May want to assess by lab area (chem, hem, micro, blood bank, genomics, cytology, pathology, etc.) as it will vary by section. Chem and hem may get closer to a 90% goal, but genomics might be 20% now (harder to make discrete and codify). Pathology may have good discrete coverage, but not as much encoding (a work in progress). Goals should be realistically achievable. Also, some lab data may never be encoded (i.e. no LOINC, not conducive).
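The per-section percentage goals above need a concrete metric. A sketch of one possible definition (record shape and sample data are invented for illustration): % discrete and % discrete-and-LOINC-encoded, tallied by lab section.

```python
from collections import defaultdict

def section_metrics(results):
    """results: iterable of dicts with 'section', 'discrete' (bool), 'loinc' (str or None)."""
    tally = defaultdict(lambda: {"n": 0, "discrete": 0, "encoded": 0})
    for r in results:
        t = tally[r["section"]]
        t["n"] += 1
        t["discrete"] += int(r["discrete"])
        t["encoded"] += int(r["discrete"] and r["loinc"] is not None)
    return {s: {"pct_discrete": 100.0 * t["discrete"] / t["n"],
                "pct_encoded": 100.0 * t["encoded"] / t["n"]}
            for s, t in tally.items()}
```

Reporting these two numbers separately matters because, as noted above, some data is discretely captured yet still unencoded (pathology), while some is neither (scanned PDFs).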

      • Issue/Current State: Many LISs and EHRs have test (order and result) naming conventions that differ from the performing lab’s conventions. This contributes to interoperability problems when these data are shared. The same test may be built/named differently by different physician practices, EHR/LIS/LIMS vendors, and combinations thereof. If the same result from, say, LabCorp, Quest, Mayo, etc. is named differently in different systems, when these results are exchanged via HIEs, sent to public health, FDA, etc., how will end users know whether the tests are the same or different?

        • Most EHRs/LISs/LIMSs receive the lab report of record and preserve a “snapshot” to meet CLIA/interface checks requiring it to be the same as what the performing lab sent. However, almost all receiving systems “remap” the received data elements to their system’s data dictionary and discard the original message. When these data dictionary terms differ from the performing lab’s, name changes occur. This is a huge contributor to variability in test naming conventions.

        • In some cases, remapping to a less specific LOINC may occur: an EHR receiving three lab results (that may not be exactly the same) maps all results/values to an internal database name that is generic for all three and may carry a methodless LOINC, to avoid building each out individually in its test data dictionary. Physicians/downstream users may not care about the details/methods, etc., but they may be vital in other instances; the valuable information provided by each lab should not be lost.

        • Issue: Who determines which order names are the same or different during build? Errors occur when build folks don’t know the differences between lab orders. One is often unable to tell the full meaning of a lab test by name alone; additional info like specimen, units, method, etc. is needed, as discussed on calls.

        • Issue: The same lab result and data from one performing laboratory may be “translated” differently in different EHR systems (either due to different vendor functionality or due to different build naming conventions, as the LIS name is often not used)

        • Another issue: How will Patient Generated Health Data (PGHD) lab data be stored in relation to lab generated data?

          • Patient-generated continuous glucose monitor data from Abbott is now integrated into EHRs. It looks like this is a PDF. If these were discrete data, how would they be integrated, encoded, and kept distinct from laboratory-performed data in all information systems?

          • Consider home/consumer-performed COVID test data too. Many apps are available with test kits to record results, or the patient can log into a portal according to FDA EUAs. However, how are patient results matched with AOEs and other data, stored, and then messaged to public health, providers, and other entities (travel, employers, schools, etc.) to show a negative COVID result value? How are these all encoded with LOINC and SCT?

            • Public health laws for ELR apply to labs, not to consumers or IVD vendors. If messaged to a provider, EHRs aren’t yet able to receive FHIR results (which may be used in the apps/portals). If received by a provider, then eCR reporting would apply to get the data to public health. Work is ongoing in this area with the Waters design-a-thon and the COVID stakeholder group.

            • May wish to focus on COVID interoperability and use of LIVD as an early phase measure too. Collect data on test performers, their use of LIVD, ability to encode correctly, ability to exchange data with public health, etc. Determine how many are using paper-based reporting methods or the APHL template. How many have a LIS? How many are using apps or other information systems for reporting (i.e. Walgreens and CVS using the Epic EHR)? Where are the gaps? What issues have arisen? Is public health receiving all the data? Are there data issues (missing specimen sources)?

            • Issue: Some public health jurisdictions may accept lower requirements than the MU ELR IG requirements, as they don’t want technical requirements to be a barrier to ELR reporting. How many labs are able to message with HL7 v2.5.1? How many only use v2.3.1 and thus aren’t able to send encoded specimen information, since the SPM segment isn’t supported in HL7 v2.3.1? Pre-COVID, public health indicated that no lab onboarding for ELR is without a terminology/encoding issue. That is why public health often requires code system/terminology onboarding early in the onboarding process and won’t let labs progress until it is addressed.

            • With funding to aid public health infrastructure, could an assessment be made of the current ELR/eCR reporting state (which standards folks are using, etc.)? Even if focused on COVID initially, it would provide valuable information as to where labs are nationally (traditional and the newer COVID CLIA labs) and would help identify gaps/needs for longer-term assessment/solutions. In concert with the COVID interoperability questions above, we may wish to assess whether all labs are adopting the same standards for sending COVID results. Are they using LIVD LOINCs or other LOINCs? This may provide good usability feedback on how folks are using the LIVD standard, for the LIVD expansion team to consider whether there are usability issues (how to improve LIVD). What lessons can be learned to apply when expanding to other lab areas?

        • Current State Assessment: How many lab results, orders preserve the exact naming convention from the performing lab and how many have any type of modification/permutation (identified per orderable or resultable test code)? Would LabCorp, Quest, Mayo, ARUP be able to assess via their interfaces (more than just interface checks) with lab and physicians, hospitals, public health, etc.?

        • May involve definition of the source of truth/origin of a lab order, result, and value. Is a CBC with a Plt the same as one without a Plt even though named the same way? May involve an external/objective source-of-truth definition.

        • Goal: Get EHR/downstream systems/users to use the same naming convention for lab orders and results and not modify them. Compare performing lab naming with downstream systems (EHR, public health, FDA systems) to measure the current state and improvements.
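The naming-fidelity comparison proposed above can be reduced to a simple measure: of the result names received from the performing lab, what percentage survive unmodified into the downstream system? A sketch with invented example names:

```python
def naming_fidelity(pairs):
    """pairs: (performing_lab_name, downstream_display_name) tuples.

    Returns the percentage of names the downstream system preserved exactly.
    """
    if not pairs:
        return 0.0
    preserved = sum(1 for lab_name, downstream_name in pairs if lab_name == downstream_name)
    return 100.0 * preserved / len(pairs)
```

Large reference labs could compute this across their interfaces to baseline the current state and then track improvement toward the goal.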

    • Scoping

      • Laboratory Areas: Chemistry (including urinalysis), Hematology (including coag), Microbiology, Blood Bank, Pathology, Cytology, Genomics/Molecular, Histology?, Other subspecialties?

      • Recommend Chemistry and Hematology first, then automated Microbiology, Tumor/Biomarkers, Manual Microbiology, Pathology, Cytology, Blood Bank

    • Policy and Regulatory Aspects

      • Which regulatory policies impact lab coding/informatics/reporting (at source/by performing lab/entity)

        • Public Health ELR

        • COVID HHS reporting

      • Which regulatory policies impact lab quality/informatics/lab report of record, etc (impacting lab/source of truth)

        • CLIA

          • Issue: CLIA has not kept pace with the explosion in technology developments. Solution: Can CLIA folks review new technologies (i.e. FHIR)/proposed approaches to ensure changes aren’t needed prior to implementations? (Karen Dyer consulted on ONC S&I Framework calls to ensure proposed approaches would be CLIA compliant.) Also, training may be needed for inspectors. Updates to CLIA may be needed, as we saw with the COVID pandemic.

          • Note: CLIA says what must be met, but not how. The requirements originated with paper-based processes, but they still need to be met with electronic processes.

            • Using IVD vendor result value package insert requirements / naming for result values (mapping implications) LIVD impacts.

            • 493.1241(e): Requisition/test request in the Laboratory Information System (LIS) must be transcribed/entered accurately. (Does this include using the same request name in the EHR and LIS?)

            • 493.1291: Results and test report elements accurately sent to final report destination.

              • Decimals in correct places; includes reference ranges, units, any interpretations; no truncated data, etc.

              • Does this include the exact test result name sent by the performing laboratory in the first downstream system?

            • Correct report errors and maintain duplicates of original and corrected reports

            • Issue: Many outside the laboratory are NOT familiar with CLIA requirements!

        • Protecting Access to Medicare Act (PAMA) has significant negative impacts on laboratories from financial reimbursement aspect.

          • Issues: Reduced staffing, facilities, and initiatives such as interoperability (some labs don’t have the time/budget to map to LOINC, upgrade standards, etc.). Magnified when the pandemic hit. Some labs have had to close.

        • Solution: Laboratories have been greatly burdened by PAMA and the resulting insurance reductions in reimbursement for testing. Laboratory testing accounts for about 2% of healthcare costs while delivering about 70% of the information in the EHR. It’s already cut thin. Additional reimbursement and funding are badly needed by laboratories across the country for daily operations, and even more is needed to adopt the standards and requirements needed for interoperability.

        • Accreditation requirements (may be more stringent than CLIA alone): CAP, Joint Commission, AABB, etc.

      • Which regulatory polices require lab data coding for downstream uses

        • ONC & CMS: EHRs-HAI, Quality measures, ELR, etc

        • ONC & CMS: Meaningful Use, 21st Century Cures Act/Promoting Interoperability

          • MU:

            • Issue: Was physician-centric, so laboratory requirements exist only as part of the Eligible Hospital (EH) program

              • Electronic Laboratory Reporting (ELR) to Public Health is required as part of EH, but performed in the laboratory.

            • Issue: Many laboratories are ineligible for MU incentives as they are not part of an EH (i.e. independent/reference labs, govt/DOD (own programs), Physician Office Labs (POLS), blood bank/transfusion labs, etc.)

            • Issue: EH hospital laboratories adopted standards like LOINC, SCT, and HL7 v2.5.1 messaging, but other labs have met them voluntarily or not at all (many are still reporting on paper or with HL7 v2.3.1), etc.

            • Recommendation: Provide $$ and assistance to aid laboratories in getting to the same level of standards (i.e. LOINC, SCT, HL7 v2.5.1 messaging) so all lab data can use the same standards to help promote interoperability. This includes Physician Office Labs (POLS), independent labs, blood banks/transfusion services, government (including public health/CDC/federal)/DOD labs

            • May wish to phase/scope to work on high volume testing first (high complexity labs), then work down to those with fewer tests such as PPM (physician performed microscopy) in segmenting by CLIA lab types.

        • MU Issue:  Significant burden on clinical laboratories to meet requirements indirectly for customer needs (i.e. physician/hospital quality measure reporting/MU incentives). Independent labs asked to send LOINC encoded results, etc.

        • MU Issue:  Laboratory outreach business may suffer if physicians take their business to labs which provide LOINC, SNOMED CT needed for their reporting of Clinical Quality Measures (CQMs).  Advantages for interoperable laboratories. This pendulum swung towards some labs initially and then back to others meeting requirements.

        • FDA (pre/post market regulations)

      • Non traditional lab testing settings:

        • Nursing homes (i.e. POCT COVID)

        • Pharmacies (i.e. POCT clinics, COVID)

        • National Guard pop up sites (COVID)

        • Home/Consumer collected testing (performed in CLIA lab) (i.e. COVID, drug testing, genomics testing)

        • Home/Consumer collected testing (performed by consumer w device) (i.e. COVID, home pregnancy test, home urine dipstick manual/device interp, home WBC)

  • For later phases (such as needs for FDA, etc.), it would help to understand requirements for assessing devices, test systems, etc. and interoperability needs. My understanding is much of the data is coming from the EHR (not the LIS) and coupled with meds, and other health data, and may have different interoperability needs.

  • Definitions (may need to move to glossary, but pertinent to scope).

    • Lab test order:  Placed by the physician to request a laboratory analysis on a patient specimen using Electronic Health Record (EHR) Clinical Provider Order Entry (CPOE) functionality.  May serve as the order requisition of record per Clinical Laboratory Improvement Amendments (CLIA).

      • Types

        • May be a single order(able) such as a Calcium level with a single result(able) of Calcium level

        • May be a panel such as a CBC (Complete Blood Count) with results of Hemoglobin, Hematocrit, Platelets, etc.

        • May be a reflexive order such as Urinalysis If (UA If), where a urinalysis is performed and certain result values may trigger the automatic addition of a culture order to the specimen

        • May be a “tiered” order, profile, or convenience panel whereby a single EHR order (i.e. stroke panel) contains child panel orders (CBC, BMP, PT/INR), each with their own children
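
The tiered order above can be sketched as a simple nested structure. This is a minimal illustration only; the panel names and contents are assumed examples, not a definitive build.

```python
# Hypothetical sketch of a "tiered" EHR order: one orderable (a stroke panel)
# containing child panel orders, each with its own resultables.
# Panel names and contents are illustrative only.
stroke_panel = {
    "CBC": ["Hemoglobin", "Hematocrit", "Platelets"],
    "BMP": ["Sodium", "Potassium", "Glucose"],
    "PT/INR": ["Prothrombin time", "INR"],
}

# Flatten to the individual resultables the laboratory will report back:
resultables = [r for panel in stroke_panel.values() for r in panel]
```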

      • Terminologies. Non laboratory professionals often refer to this as a procedure, which includes methods of test collection, testing, radiology or cardiac procedures, etc.

        • CMS has some defined panel orders such as BMP, CBC tied to reimbursement, so may need to call these out/use same definitions.

        • Mapped to:

          • (precoordinated) LOINC order codes, or codes usable as both order/observation codes (for the single orderable/resultables above), but not observation-only codes (as those are for results). (There may be gaps or changes needed to existing LOINC codes if only an observation-only LOINC exists for a test order.)

          • Nordic countries’ code system (NPU)

          • Japanese code system (JLAC10)

      • Messaging: Called Observation Request in Health Level 7 version 2.5.1 (HL7 v2.5.1), carried in the OBR-4 message field. Represented by the FHIR ServiceRequest resource.
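
As a rough sketch, a LOINC-encoded order travels in OBR-4 as a CWE triplet of identifier, text, and coding system. The placer/filler numbers below are made-up assumptions; 58410-2 is the LOINC CBC panel code.

```python
# Hypothetical sketch: placing a LOINC order code into HL7 v2.5.1 OBR-4.
# Placer/filler numbers are illustrative, not from a real interface.
order_code = ("58410-2", "CBC panel - Blood by Automated count", "LN")

obr_4 = "^".join(order_code)  # Identifier^Text^Name of Coding System
obr = "|".join(["OBR", "1", "PLACER123", "FILLER456", obr_4])
# OBR|1|PLACER123|FILLER456|58410-2^CBC panel - Blood by Automated count^LN
```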

    • Lab test result:  Detection or measurement of an analyte in a patient specimen.  Reported back to the provider in response to their order request.

      • Terminologies. Non laboratory professionals often refer to this as an observation.

      • Mapped to:

        • LOINC observation-only codes or codes usable as both order/observation codes (for the single orderable/resultables above), but not order-only codes (as those are for orders)

      • Messaging: Called Observation (or procedure) in the HL7 v2.5.1 OBX-3 message field

        • Note some define observation as both the result and result value

    • Lab test result value:  Value or response of a lab test result.  Can be qualitative, quantitative, narrative, short answer such as an organism, etc. 

      • Terminologies. Some may call this the observation

      • Mapped to:

        • SCT qualifier value for ordinal/qualitative values

        • SCT organism code for organisms

        • Quantitative results are usually not mapped

      • Messaging:

        • Called Observation Value in the HL7 v2.5.1 OBX-5 message field

        • Interpretations may be messaged in the HL7 v2.5.1 OBX-8 message field

        • Note: one LIS vendor reports that 50% of its lab customers report antibiotic susceptibilities as mm (zone size) in the OBX-5 field with an interpretation of Susceptible, Intermediate, or Resistant in the OBX-8 field, while the other 50% report just the interpretation in the OBX-5 field
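
The two reporting patterns can be sketched as follows. The segments are illustrative (made-up set IDs and values; the LOINC shown is assumed to be a gentamicin susceptibility code), and the parsing assumes simple |-delimited fields with no escape handling.

```python
# Hypothetical sketch of the two susceptibility reporting patterns.
def parse_susceptibility(obx_segment: str):
    """Return (OBX-5 observation value, OBX-8 interpretation/abnormal flag)."""
    fields = obx_segment.split("|")   # fields[0] == "OBX", fields[n] == OBX-n
    return fields[5], fields[8]

# Pattern 1: zone size (mm) in OBX-5, interpretation (S/I/R) in OBX-8
seg1 = "OBX|1|NM|18928-2^Gentamicin^LN||16|mm||S|||F"
# parse_susceptibility(seg1) -> ("16", "S")

# Pattern 2: interpretation alone in OBX-5
seg2 = "OBX|1|CE|18928-2^Gentamicin^LN||S^Susceptible^HL70078||||||F"
# parse_susceptibility(seg2) -> ("S^Susceptible^HL70078", "")
```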

  • Terminology Interoperability Issues/Implementation Challenges

    • Code systems

      • LOINC

          ○ Great laboratory orders and results coverage

          ○ Need laboratories to continue to request codes for Laboratory Developed Tests (LDTs), orders, and emerging testing such as genetics and molecular diagnostics. A good example: LOINCs for COVID and Zika tests were created quickly.

          ○ LOINCs are not created for every combination of results in an order panel, so some gaps will remain

          ○ LOINC doesn’t get into lab instrument or kit detail. This is where LIVD helps.

          ○ To avoid combinatorial explosion, LOINCs for genetic testing are modeled differently. Education needed for all.

          ○ Releases twice a year. Best Practice is for laboratories (and all information systems using LOINC) to take updates within 90 days of release.

          ○ Issue: Laboratories may update every year or longer and use deprecated or discouraged codes, impacting interoperability, reporting, analytics, etc.

          • If all information systems are not on the same LOINC version, interface errors may occur. Example: a lab LIS updates to the latest LOINC release, but downstream EHRs, data warehouses, etc. do not. New LOINC codes sent in LIS messages (i.e. COVID) could not be processed in the EHR and caused interface errors requiring manual intervention. Another example may be a lab using deprecated or discouraged LOINCs from a recent release.
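
One mitigation is an interface-side check that triages incoming codes against the locally loaded release instead of hard-erroring the whole message. A minimal sketch, assuming a local status table; the table contents, the "0000-0" placeholder, and the triage labels are illustrative, not a real product feature.

```python
# Hypothetical sketch: triage incoming LOINC codes against the locally
# loaded release rather than failing the message outright.
local_release = {
    "718-7": "ACTIVE",       # Hemoglobin [Mass/volume] in Blood
    "94500-6": "ACTIVE",     # SARS-CoV-2 RNA, NAA with probe detection
    "0000-0": "DEPRECATED",  # made-up placeholder for a retired code
}

def triage_code(loinc: str) -> str:
    status = local_release.get(loinc)
    if status is None:
        return "QUEUE_FOR_REVIEW"  # possibly from a newer release than ours
    if status == "DEPRECATED":
        return "WARN"              # process, but flag for remapping
    return "OK"
```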

          • Maintenance is a challenge for ALL users

          • Issue: There are now three official LOINC descriptions (short, long, or display name) required to be used per licensing. Public Health jurisdictions require the short or long name. Some jurisdictions require only the long name. Some information system vendors support only one name in their messages and databases.

          • LOINC names may change from release to release per LOINC editorial changes. It is vital to know the description version. Issue: Some information systems do not store LOINC descriptions. Rather, they query Regenstrief for the latest description of a LOINC. As a result they may be using a different description than the one mapped to historic lab data at the time. Discouraged and deprecated LOINCs often do not have their descriptions updated by Regenstrief in later releases (editorial may reflect the release at the time the term was deprecated, not changes made to similar terms in later/more recent releases).

          • Issue: Some EHRs and LISs use the LOINC names for their test naming and resulting names and displays. LOINC names should NOT be used for displays, or naming in LIS/EHR, per best practices in the LOINC User Guide. They can be used behind the scenes for a human readable LOINC name for troubleshooting, in the database, in messages, etc. Major issues can result when Regenstrief updates the LOINC name and displays are dependent on these. Some reference labs have had issues with EHR customers and there are some LISs that have this poor usability.

          • Issue: Some apps (i.e. FHIR) using lab results are using LOINC names for displays. Some IT vendors are even directing developers using health functionality to do so, even though it’s advised against in the LOINC User Guide.

            • Education is needed for all users of LOINC to understand Best and Poor Practices to avoid implementation and interoperability issues

            • Given more lab data will originate in consumer focused apps, devices and products, a best practice path for developers is needed. It should address use of LOINC, CLIA (if applicable), SCT, etc for best practice implementations.

            • Term requests should also be available for these devices, apps, etc. for results generated, so they can have LOINC encoded results at the point of origination for their consumer lab data. These often use different methods than traditional lab tests (and different specimens, e.g. capillary or perhaps through the skin). Codes should reflect these test distinctions. Developers can request and use them in the app to encode data behind the scenes.

            • Will be important to get encoded data showing vendor distinctions, especially if they are sent to the EHR and integrated as demonstrated by Abbott’s FreeStyle Libre continuous glucose monitor.

            • With Apple’s lab data enhancements, will they have functionality to integrate more lab data into their devices, and also share it with others (EHRs, LISs)?

          • Often these apps have less “real estate” to display, store, and use lab data, resulting in issues with:

            • Reference ranges from the performing lab omitted or a generic reference range not from the performing lab used

            • Specimen type and/or source omitted.

            • Trending the same lab test together in displays, graphs, etc. How are “same tests” determined (by an app, by a non-lab/non-medical professional)? Could a Hemoglobin A1c and a CBC Hemoglobin be trended together because they are both “hemoglobins”?

            • Scale/Units values reporting. For Temperature with units of degrees, are Fahrenheit values trended/displayed/comingled with Celsius values? 32 has vastly different meanings depending on which scale is used!

            • Pediatric vs Adult values, reference ranges, results

            • Interferences, issues with specimen quality. Investigations with consumer genetics testing have shown dog specimens were not detected as non-human, and results were reported as though from a human specimen.

            • Consumer specimen collection/quality issues/test performance issues. How do we know a COVID swab wasn’t inserted into one’s ear? How do we know a home pregnancy test wasn’t dropped in the toilet?

            • Often these impact not only personal decision making, but public health and clinician decision making. If not regulated or captured as in a clinical lab, what are the potential patient safety impacts of these implementations? Many more issues, but this is enough to get started.
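
The trending concern above can be sketched: grouping by display name comingles distinct analytes, while grouping by LOINC keeps the series separate. The codes are real (4548-4 Hemoglobin A1c, 718-7 blood Hemoglobin), but the result values are illustrative.

```python
# Hypothetical sketch: name-based vs LOINC-based trending of "hemoglobins".
results = [
    {"loinc": "4548-4", "name": "Hemoglobin A1c", "value": 6.1, "units": "%"},
    {"loinc": "718-7",  "name": "Hemoglobin",     "value": 13.9, "units": "g/dL"},
]

# Naive name matching: both land in one "hemoglobin" trend line,
# comingling a percentage with a g/dL concentration.
name_trend = [r for r in results if "hemoglobin" in r["name"].lower()]

# LOINC grouping keeps each analyte (and its units) as its own series.
loinc_trend = {}
for r in results:
    loinc_trend.setdefault(r["loinc"], []).append(r)
```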

    • SCT and HL7 terms differ from laboratory terminology

      • Observations (SCT & HL7) = laboratory results

      • Procedures (SCT & HL7) = laboratory orders and/or collection process, or laboratory results

      • Specimen (SCT & HL7) often referred to as Source in the laboratory (i.e. source of a culture)

      • Source (SCT & HL7) means where (on the body) a specimen was collected.  Referred to as Source site by laboratory professionals. 

      • A Specimen addresses “What is collected?” and Source addresses “Where is it collected?”

      • Specimen is urine, blood, plasma, fluid, etc.

      • Source is head, neck, back, etc.  Likely not submitted to microbiology or the patient would be deceased.

      • Specimen and Source are often confused by laboratory professionals. Specimens built in Source dictionaries and vice versa. (Further education is needed to ensure the data at the origin is built correctly, mapped correctly and messaged correctly for use by downstream systems.)

      • Impacts include which terms are built in which dictionaries, to which codes they are mapped (correctly or incorrectly), how dictionary data is used in the LIS or EHR, and how items populate messages, are utilized in clinical decision support, queries, analytics, etc.
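
The What vs Where split maps onto the HL7 v2.5.1 SPM segment (SPM-4 Specimen Type vs SPM-8 Specimen Source Site). A minimal sketch with assumed SCT codes (119297000 Blood specimen; 774007 Head and neck structure); the segment contents are illustrative.

```python
# Hypothetical sketch: keeping Specimen (What) and Source (Where) in their
# proper SPM fields, rather than confusing the two dictionaries.
specimen_type = "119297000^Blood specimen^SCT"       # What was collected
source_site = "774007^Head and neck structure^SCT"   # Where it was collected

fields = [""] * 9
fields[0] = "SPM"
fields[4] = specimen_type   # SPM-4 Specimen Type
fields[8] = source_site     # SPM-8 Specimen Source Site
spm = "|".join(fields)
# SPM||||119297000^Blood specimen^SCT||||774007^Head and neck structure^SCT
```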

    • Laboratory Mapping for same item may vary by laboratory

      • Some map to generic, methodless LOINC codes and add the method in another field (postcoordinated approach)

      • Others map to detailed LOINC codes (precoordinated approach; Best Practice discussed at LOINC Meetings)

      • Issue: Data queries on a single LOINC may miss results encoded with another LOINC

      • Laboratory In Vitro Diagnostics (LIVD) will reduce this variability, and its negative impacts, in many cases

      • Issue: Laboratories not updating to the latest release of code systems. May have out-of-date codes or descriptions

      • Laboratory interfaces may not have LOINC codes “turned on”, so downstream systems never receive them

      • Issue: No LOINC Police to ensure mappings are appropriate for lab orders and results

      • Would it be possible to require LOINCs for peer-compared method Proficiency Testing to understand which LOINCs are used, whether they are current, etc.?

      • Would it be possible to use LIVD maps/validated maps for Proficiency Testing as a comparison?

      • Feedback to laboratories as to agreements/disagreements?

      • Benefit: gather data on current tests nationally and the LOINCs used for them (and SCT for result values, organisms, etc.). Get data to inform the most common lab orders, results, LOINCs (identify gaps where LOINCs are needed), etc.
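
The query-miss issue above can be sketched as follows. The codes here are made-up placeholders, not real LOINCs, and the value-set workaround is one common mitigation, not a prescribed fix.

```python
# Hypothetical sketch: a query on one LOINC misses equivalent results a
# different lab encoded with another LOINC. Codes are placeholders.
results = [
    {"lab": "Lab A", "loinc": "METHODLESS-1", "value": "Positive"},
    {"lab": "Lab B", "loinc": "METHOD-1",     "value": "Positive"},
]

# Querying the single method-specific code finds only Lab B's result:
naive_hits = [r for r in results if r["loinc"] == "METHOD-1"]

# A value set that groups the equivalent codes finds both:
value_set = {"METHODLESS-1", "METHOD-1"}
grouped_hits = [r for r in results if r["loinc"] in value_set]
```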
