1 Accountability and Key Performance Indicators
The strategies and related actions needed to transform the US laboratory system must be monitored for accountability. Accountability requires the joint action of all stakeholders. This section provides a framework to track milestones and metrics, to analyze case studies, and to provide critical feedback to the program to ensure that progress is being made toward laboratory interoperability. A set of key performance indicators has been created for this effort, and an approach to implementing them is offered.
A mixed-method, participatory evaluation is proposed for SHIELD to support Key Performance Indicators (KPIs) and to hold stakeholders accountable for implementation. The evaluation framework for the SHIELD Strategic Plan adopts the models and approach of the Program Evaluation for Public Health Programs developed by the CDC.[i] This evaluation framework is in broad use in public health and has been used at the FDA.[ii] The SHIELD Evaluation Framework is participatory and is proposed as an ongoing tool to support the long-term management of the SHIELD Strategic Plan. The framework adopted is multi-layered and mixed-method. The first two steps of the six-step framework (stakeholder engagement and program description) are well described in other parts of this report.
The purpose of the proposed evaluation is threefold: to guide and course-correct the ongoing implementation of the strategic plan, to help SHIELD and its supporters hold themselves accountable for investments made, and to provide information as part of SHIELD's communication effort.
The framework for this evaluation effort is layered, participatory, and based on a logic model. A layered evaluation directs attention to a variety of metrics and milestones: global goals (including improving patient safety), the efforts of partners (e.g., federal agencies and private-sector partners), and the programmatic efforts of SHIELD. The evaluation is participatory, meaning that it is ongoing and will help redirect implementation of the plan. To learn from the early stages of implementation and to accommodate changes in the environment, the evaluation will provide important information for adjusting and correcting the future course of efforts. Traditional evaluation has happened at the end of a program; participatory evaluation is integrated into program management. A logic model is also used to organize the evaluation around the transformation of the laboratory process, beginning with the manufacturer, through clinical facilities, to the end users of the data.
Further, the evaluation is linked to specific components of the strategic plan so that accountability is specific to the groups responsible for each activity.
SHIELD accountability planning is based on the CDC's Introduction to Program Evaluation for Public Health Programs, a "how-to" guide for planning and implementing evaluation activities (Figure 6). The first two steps have been accomplished as part of the early development of SHIELD and through the strategic planning process. Steps 3 and 4 are developed in this section. Steps 5 and 6 are referred to here to guide future activities.
1.1 Steps of SHIELD Evaluation
Engage Stakeholders
This section of the report pulls together stakeholder engagement efforts undertaken previously and provides the evaluation framework with an understanding of the value of SHIELD to stakeholders. An evaluation should provide evidence to stakeholders that value has been realized by building and using SHIELD. A description of stakeholders and the value they derive from SHIELD is provided in section x of this Strategic Plan.
Describe the Program
A logic model is proposed that describes SHIELD with inputs, activities, outputs, and outcomes, building on the "Value Chain: data production and flow" diagram also used in the xxx section of this report. Details and references describing the history and development of SHIELD are provided in the Background section of this document.
Focus the Evaluation Design
The evaluation design recommended for SHIELD is multi-layered and mixed-method. The layers of the evaluation are set out in Figure 5. Within the base layer is the "logic model" called for by the CDC evaluation guide, which for SHIELD is the process by which harmonized coding is established in the expanded LIVD file and implemented at each step in the process by which laboratories generate data. The activities of other ecosystem partners must also be evaluated.
Identification of Credible Evidence
Gather credible evidence to strengthen evaluation judgments and the recommendations that follow. These aspects of evidence gathering typically affect perceptions of credibility: indicators, sources, quality, quantity, and logistics. Metrics, milestones, and case studies to evaluate SHIELD are recommended. The information, data, and methods associated with the metrics and milestones are described.
Justification of results
Justify conclusions by linking them to the evidence gathered and judging them against agreed-upon values or standards set by the stakeholders. Justify conclusions based on evidence using these five elements: standards, analysis/synthesis, interpretation, judgment, and recommendations.
Assuring the evaluation is used
Ensure use and share lessons learned through these steps: design, preparation, feedback, follow-up, and dissemination.
1.2 Evaluation Design: Layers, Logic Model and Methods
As described above under Focus the Evaluation Design, the evaluation design recommended for SHIELD is multi-layered and mixed-method; the layers are set out in Figure 5.
1.2.1 Layers
The layers of the evaluation framework are presented in Figure x above.
At the highest level, metrics, milestones, and case studies have been identified that support evaluation of the broad mission of SHIELD.
The External Impact layer focuses evaluation on the activities of specific stakeholders (federal agencies, industry, and major clinical laboratories) for which SHIELD recommends action toward creating interoperability.
Internal Outputs involve actions for which SHIELD itself is responsible.
Program Outcomes relate to activities specific to this strategic plan that support the broader strategic plan effort.
Figure 7. KPIs
1.2.2 Methods
The proposed mixed-method evaluation will track KPIs (metrics and milestones), use case studies, and conduct interviews with those involved in implementation of this Strategic Plan.
Identification of Key Performance Indicators (KPIs) and Credible Evidence
The Key Performance Indicators (KPIs) and associated credible evidence are set out in Table x. The layers described above are used to organize the KPIs.
For this SHIELD Strategic Plan, the term key performance indicator (KPI) is used to encompass metrics, milestones, and case studies. A KPI is a quantifiable measure of performance over time for a specific objective. KPIs provide targets for teams to aim for, milestones to gauge progress, and insights that help people across the organization make better decisions.
Table x

| KPI | Metric | Goal | Data Source |
| --- | --- | --- | --- |
| Mission Impact | | | |
| External Impact | | | |
| Internal Impact | | | |
| Program Outcomes | | | |

Figure 8. KPIs by layers
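To make the table structure concrete, the following is a minimal sketch, in Python, of how KPI rows organized under the four evaluation layers could be tracked. The class, the grouping function, and the example row (LIVD file adoption) are illustrative assumptions for this sketch, not KPIs defined by the plan.

```python
from dataclasses import dataclass
from collections import defaultdict

# The four evaluation layers, ordered broadest to most specific.
LAYERS = ["Mission Impact", "External Impact", "Internal Impact", "Program Outcomes"]

@dataclass
class KPI:
    """One row of the KPI table: the indicator, how it is measured,
    the target, and where the evidence comes from."""
    layer: str
    name: str
    metric: str
    goal: str
    data_source: str

def group_by_layer(kpis):
    """Organize KPI rows under the evaluation layers, preserving
    the broadest-to-most-specific ordering of the framework."""
    grouped = defaultdict(list)
    for k in kpis:
        if k.layer not in LAYERS:
            raise ValueError(f"Unknown layer: {k.layer}")
        grouped[k.layer].append(k)
    return {layer: grouped[layer] for layer in LAYERS}

# Hypothetical example row, for illustration only.
example = KPI(
    layer="Program Outcomes",
    name="LIVD file adoption",
    metric="Number of manufacturers publishing expanded LIVD files",
    goal="Increase year over year",
    data_source="SHIELD program records",
)
report = group_by_layer([example])
```

A structure like this would let the evaluation team populate and review the table programmatically as KPIs are agreed upon.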
1.2.3 Logic Model
The logic model for evaluation follows the clinical laboratory process and the flow of data to users. To achieve interoperability, the SHIELD strategic plan recommends action at each step, and the Evaluation Framework provides a way to create accountability around each of these activities. The logic model for the proposed evaluation is the same model that describes the flow of laboratory data adopted in Section 1 of this document.
The logic model provides the roles of various stakeholders; sequences of activities, some of which are dependent on one another; activities to build (and refine) infrastructure; and pilots in the laboratory enterprise that bring together the various pieces of the project for testing.
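The dependent sequencing described above can be sketched as a small dependency structure. The stage names and the ordering function below are illustrative assumptions drawn from the data-flow description (manufacturer, clinical facilities, end users), not part of the plan itself.

```python
# Minimal sketch of the logic-model flow, assuming three stages drawn
# from the text. Each entry lists the stages whose outputs it depends on.
STAGES = {
    "manufacturer": [],
    "clinical facility": ["manufacturer"],
    "end users of the data": ["clinical facility"],
}

def ordered_stages(stages):
    """Return stages in an order that respects dependencies
    (a simple depth-first topological sort; assumes no cycles)."""
    ordered, seen = [], set()

    def visit(stage):
        if stage in seen:
            return
        for dep in stages[stage]:
            visit(dep)
        seen.add(stage)
        ordered.append(stage)

    for stage in stages:
        visit(stage)
    return ordered
```

A dependency listing of this kind could help the evaluation team check that milestones for later stages are not scheduled before the activities they depend on.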
Figure 7. Logic Model (adaptation of End-to-End Diagram)
1.3 Conducting the Evaluation
The development of this system of KPIs may best be done by a contract firm with experience in conducting evaluations. Although this proposal calls for participatory rather than third-party evaluation, the work should nonetheless be conducted by a group with the appropriate skills and some distance from the programs to ensure objectivity. Annual written reports and quarterly meetings with each aspect of the SHIELD initiative are proposed.
The estimated cost of the evaluation over the 5-year period is $xxx.
[i] CDC. Introduction to Program Evaluation for Public Health Programs: A Self-Study Guide. https://www.cdc.gov/eval/framework/index.htm
[ii] Personal communication, Gregory Pappas. CDRH has used the CDC framework for the CDRH Regulatory Science Metrics project (https://www.fda.gov/medical-devices/science-and-research-medical-devices/cdrh-regulatory-science-priorities). The CDC framework was also used to produce an evaluation plan for NEST, available upon request.