Preventive Care - Appendix C

APPENDIX C:

Developing a Robust Measurement Strategy


 

FIGURE 25: DEVELOPING YOUR MEASUREMENT STRATEGY MILESTONES

Figure 25 illustrates the key milestones in the development of a robust measurement strategy.
 

FIGURE 26: DEFINITION AND EXAMPLES FOR MEASUREMENT STRATEGY MILESTONES

Figure 26 provides guidance on each of these milestones as you work to put in place a robust yet practical measurement strategy to improve outcomes for adults with preventive care needs.


For each milestone below, we provide a definition with general examples, followed by an example for adults with preventive care needs.

Aim(s)

The overall goal(s) of the improvement effort. “What are we trying to accomplish?”

We often recommend sub-aims to focus your team on intermediate goals. You can develop data-informed, specific, measurable, achievable, relevant, time-bound, inclusive and equitable (SMARTIE) goals focused on increasing adherence to specific AAP preventive care guidelines year over year among attributed patients or subpopulations of patients.

Example aim: Identify and address colorectal cancer screening for all patients who are due, increasing the percentage of those screened from x% to y% by [insert date].

Example sub-aim: Increase the percentage of our Black patients who are due for colorectal cancer screening from 33% to 70% by December 2025.


Concept(s)

A general abstract notion (approach, thought, belief or perception) related to the aim(s) of focus.

Example concepts: Access to CRC screening options; patient attitudes about CRC screening; patient reminders; speed and efficiency of your workflows.

Measures

Specific, objective ways to determine the extent to which an aim has been met or to determine whether there has been improvement in the concepts of focus. Measures help us answer the second question in the Model for Improvement: “How will we know that a change is an improvement?” Measures generally fall into one of three types:

  • Outcome measures: Measure the performance of the system(s) of focus and always relate directly to the aim(s). Outcome measures focus on the end results and offer evidence that changes are actually having an impact at the systems level.
  • Process measures: Pertain to the activities, steps or actions taken within the system(s) of focus that are believed to be most strongly related to improving the outcome(s) of focus. These measures help evaluate efficiency, effectiveness and consistency. Process measures are essential for understanding how well the system is working and can be early indicators of improvement.
  • Balancing measures: Look at a system from different directions and evaluate dimensions, such as the effects a change may have on other parts of the system. These also include ways of assessing unintended consequences further upstream or downstream.

See below for example outcome, process and balancing measures; a brief calculation sketch follows the list.

The outcome, process and balancing examples that follow relate to CRC screening and the patient experience of screening; the remaining milestones (operational definitions through analysis and action) use a single, simpler measure tied to patient satisfaction: patient wait time.

  • Outcome measure: Patient satisfaction with your care team support for those undergoing bowel prep for colonoscopy.
  • Outcome measure: Percent of patients undergoing colonoscopy who were supported by the care team during bowel prep and whose prep was insufficient, as assessed by the endoscopist.
  • Process measure: Percent of patients seen at the practice who were due for CRC screening for whom we identified and addressed CRC screening status.
  • Process measure: Percent of patients seen at the practice who were due for CRC screening for whom we identified and addressed CRC screening status and who, within two weeks of the visit, were either dispensed a fecal test, scheduled for endoscopy, or had a documented decline.
  • Balancing measures: Because we are focusing on CRC screening, track the percentage of patients screened for breast or cervical cancer, and the percentage of adults due for influenza and/or pneumococcal vaccines who received them at the time of visit.
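
To make the numerator/denominator arithmetic behind these process measures concrete, the brief sketch below (written in Python purely for illustration) computes the first process measure from a hypothetical visit-level extract. The field names (patient_id, due_for_crc, crc_addressed) are assumptions, not a specification of any particular EHR report.

    # Minimal sketch: a process measure computed as numerator / denominator.
    # The visit-level extract and field names are illustrative assumptions.
    visits = [
        {"patient_id": "A01", "due_for_crc": True,  "crc_addressed": True},
        {"patient_id": "A02", "due_for_crc": True,  "crc_addressed": False},
        {"patient_id": "A03", "due_for_crc": False, "crc_addressed": False},
        {"patient_id": "A04", "due_for_crc": True,  "crc_addressed": True},
    ]

    # Denominator: patients seen who were due for CRC screening.
    denominator = [v for v in visits if v["due_for_crc"]]
    # Numerator: those for whom CRC screening status was identified and addressed.
    numerator = [v for v in denominator if v["crc_addressed"]]

    if denominator:
        rate = 100 * len(numerator) / len(denominator)
        print(f"Process measure: {rate:.1f}% ({len(numerator)}/{len(denominator)})")
    else:
        print("No patients due for CRC screening in this period.")

With the sample data shown, the measure would be reported as 66.7% (2/3); in practice, what counts in the numerator and denominator comes from your operational definitions.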

Operational definitions

A detailed description, in quantifiable terms, of what to measure and the steps to follow to measure it consistently each time and over time. The operational definitions help make the measure clear and unambiguous and often contain criteria for inclusion and exclusion and numerator/denominator.

Patient wait time operational definition: The total amount of time that a patient spends from the moment they arrive at the health center until they are seen by a healthcare provider or receive the intended healthcare service. The practice will track wait time for all patients with scheduled appointments.
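
As an illustration of how an operational definition removes ambiguity about what to measure, the short sketch below (in Python, for illustration only) computes wait time exactly as defined here: the time from patient sign-in to first contact with a healthcare provider, in minutes. The record structure, field names and timestamps are invented for the example.

    from datetime import datetime

    # Sketch of the operational definition: wait time = time from patient sign-in
    # until first contact with a healthcare provider, expressed in minutes.
    def wait_time_minutes(sign_in: datetime, first_contact: datetime) -> float:
        """Return the wait time in minutes per the operational definition."""
        return (first_contact - sign_in).total_seconds() / 60

    # Hypothetical visit record; field names are illustrative assumptions.
    visit = {
        "patient_id": "A01",
        "sign_in": datetime(2025, 3, 4, 9, 2),          # receptionist notes sign-in
        "first_contact": datetime(2025, 3, 4, 9, 27),   # care team opens visit note
    }

    print(f"Wait time: {wait_time_minutes(visit['sign_in'], visit['first_contact']):.0f} minutes")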


Data collection plan

A detailed set of instructions that generally includes:

  • Who (specifically) will collect the data.
  • How (specifically) the data will be collected.
  • Where and how the data will be stored.
  • When the data will be collected.
  • How often (i.e., at what frequency) the data will be collected.

Patient wait time data collection plan:

Who (and how):

  • Receptionist notes the time of patient sign-in in the EHR.
  • Care team member opens the visit note, which automatically records the time of first contact with the patient.

Where and how the data will be stored:

  • Data lives in patient record.
  • Data pulled into the daily wait time report in the EHR.

When the data will be collected:

  • All patient visits, scheduled and unscheduled, upon patient sign-in and first contact with the care team.

How often:

  • Every patient visit. For patients seen by multiple care team members on the same day, only the wait time for the first visit is counted (sketched below).
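
The sketch below (Python, for illustration only) shows how the first-visit rule in this plan might be applied when assembling the daily wait time report; the event records and field names are invented for the example.

    from datetime import datetime

    # Sketch of the daily report logic: for patients seen by multiple care team
    # members on the same day, keep only the first visit's wait time.
    # The events and field names are illustrative assumptions.
    events = [
        {"patient_id": "A01", "sign_in": datetime(2025, 3, 4, 9, 2),
         "first_contact": datetime(2025, 3, 4, 9, 27)},
        {"patient_id": "A01", "sign_in": datetime(2025, 3, 4, 13, 0),    # second visit, same day
         "first_contact": datetime(2025, 3, 4, 13, 40)},
        {"patient_id": "B07", "sign_in": datetime(2025, 3, 4, 10, 15),
         "first_contact": datetime(2025, 3, 4, 10, 24)},
    ]

    # Keep the earliest sign-in per patient for the day.
    first_visits = {}
    for event in sorted(events, key=lambda e: e["sign_in"]):
        first_visits.setdefault(event["patient_id"], event)

    for patient_id, event in first_visits.items():
        wait = (event["first_contact"] - event["sign_in"]).total_seconds() / 60
        print(f"{patient_id}: waited {wait:.0f} minutes")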

Data collection

The process of collecting the agreed upon measures in accordance with the relevant operational definitions and the agreed upon data collection plan.

Patient wait time data collection:

Wait time data are collected semi-automatically: the receptionist records the sign-in time in the EHR, and the time of first contact is captured when the care team member opens the visit note.


Analysis and action

The process of analyzing the data, including instructions for the analysis and visualization of the data, disseminating the data to relevant parties, and using the data to track progress and guide improvement efforts.

Patient wait time analysis and action:

The improvement team reviews wait time data weekly; the care team and panel manager then review the data quarterly or every six months for ongoing monitoring.
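
As one picture of what this review could examine, the sketch below (Python, for illustration only) rolls wait times up into weekly averages, the kind of series a simple run chart for the improvement team might plot; the numbers are invented for the example.

    from statistics import mean

    # Sketch of the analysis step: summarize wait times (in minutes) by week so
    # the improvement team can review a simple run chart. Data are illustrative only.
    weekly_waits = {
        "Week of 2025-03-03": [25, 9, 31, 18, 22],
        "Week of 2025-03-10": [21, 14, 27, 16],
        "Week of 2025-03-17": [17, 12, 19, 23, 15],
    }

    for week, waits in weekly_waits.items():
        print(f"{week}: average wait {mean(waits):.1f} minutes across {len(waits)} visits")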