MIPS: 2019 Scoring
From 2018 to 2019, the weight of the Quality category decreased from 50 percent to 45 percent of the overall MIPS score, while the weight of the Cost category increased from 10 percent to 15 percent. The weights for the Promoting Interoperability and Improvement Activities categories did not change over the same period, nor did the category weights for practices participating in an Alternative Payment Model (APM).
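As a rough sketch, the overall MIPS score is a weighted sum of the four category scores. The weights below are the 2019 figures discussed above; the category scores themselves are invented example values, not real data.

```python
# Sketch: composite MIPS score as a weighted sum of category scores.
# Weights are the 2019 figures; category scores (0-100) are hypothetical.
WEIGHTS_2019 = {
    "quality": 0.45,                     # down from 0.50 in 2018
    "cost": 0.15,                        # up from 0.10 in 2018
    "promoting_interoperability": 0.25,  # unchanged
    "improvement_activities": 0.15,      # unchanged
}

def composite_score(category_scores):
    """Weighted sum of category scores, each on a 0-100 scale."""
    return sum(WEIGHTS_2019[c] * s for c, s in category_scores.items())

example = {
    "quality": 80.0,
    "cost": 60.0,
    "promoting_interoperability": 90.0,
    "improvement_activities": 100.0,
}
print(round(composite_score(example), 1))  # 82.5
```

Note how the weight shift matters: a point gained in Quality now moves the composite by 0.45 instead of 0.50, while a point gained in Cost moves it by 0.15 instead of 0.10.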
Practices and eligible clinicians should also be aware that within the Cost category, they are compared to all providers, not just those within their specialty, although the Centers for Medicare & Medicaid Services (CMS) will make some adjustments for region and specialty.
To earn the full score for the category, practices or eligible clinicians must still earn 100 points, but the number of available measures (and points) has decreased. In prior years, practices could choose among measures; in 2019, all five measures must be either met or excluded.
In addition, for the first time, CMS will round scores to the nearest whole number. For instance, a measure receiving 31 percent rounds down to three points out of 10, whereas a measure reaching 38 percent rounds up to four. Almost all of the measures allow for exclusion, and when a measure is excluded, its possible points are reweighted to another measure, making it still possible to earn the full score for the category.
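The rounding rule above can be sketched as simple arithmetic. Treating a measure's performance percentage as mapping directly to a 0-10 point scale is an assumption made here for illustration; the percentages are the examples from the article.

```python
# Sketch of the rounding rule: a measure's performance percentage maps to
# points out of 10, then rounds to the nearest whole number.
def rounded_points(performance_pct):
    """Convert a performance percentage to whole points out of 10."""
    raw = performance_pct / 10.0  # e.g. 31% -> 3.1 points, 38% -> 3.8 points
    return int(raw + 0.5)         # round half up to the nearest whole point

print(rounded_points(31))  # 3  (3.1 rounds down)
print(rounded_points(38))  # 4  (3.8 rounds up)
```

Half-up rounding is used here rather than Python's built-in `round()`, which rounds ties to the even number; the article does not say how CMS handles exact ties.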
The Quality Reporting Engagement Group recommends trying to meet all measures rather than excluding them; doing so may prepare the practice and its clinicians for future scoring years in which exclusions may not be available. As MIPS progresses, the team anticipates that scoring will become more complex, especially as measure weighting changes.
Quality scores are based on benchmarks, typically made available by CMS in January of the performance year, and scoring differs for every measure in this category. Benchmarks are established using historical data from two years prior to the performance year; if no benchmark is available, CMS will attempt to calculate one from performance data submitted at the end of the year. A given performance rate on one measure does not necessarily earn the same points on another.
Quality scores and deciles also vary by submission method: claims, registry/Qualified Clinical Data Registry (QCDR), or electronic health record (EHR).
The Quality category uses decile scoring. For example (see tables below), if performance on a measure is 23.56 percent, the measure would still receive a score of 3, because CMS awards a minimum of 3 points as long as data completeness and case minimums are met. A performance rate of 99.62 percent may fall into a higher decile range (sometimes the seventh), earning between 7 and 7.9 points as long as the measure is not capped at 7. If the same performance rate is achieved on a topped-out measure with a 7-point cap, the score will reach only 7.
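A sketch of decile scoring follows. The decile boundaries below are invented for illustration and are not real CMS benchmarks; the 3-point floor and the 7-point cap for topped-out measures come from the article.

```python
# Hypothetical decile boundaries: DECILE_STARTS[i] is the performance
# percentage where decile i+3 begins. These values are NOT CMS data.
DECILE_STARTS = [30.0, 50.0, 70.0, 85.0, 99.5, 99.9, 99.95, 100.0]  # deciles 3..10

def measure_points(performance_pct, topped_out=False):
    """Score one Quality measure: 3-point floor, fractional points within
    the decile, and a 7-point cap when the measure is topped out."""
    if performance_pct < DECILE_STARTS[0]:
        points = 3.0  # floor: minimum 3 points when completeness/case minimums are met
    else:
        decile = 3
        for i, start in enumerate(DECILE_STARTS):
            if performance_pct >= start:
                decile = i + 3
        if decile == 10:
            points = 10.0
        else:
            # linear interpolation within the decile gives the fractional part
            low = DECILE_STARTS[decile - 3]
            high = DECILE_STARTS[decile - 2]
            points = decile + (performance_pct - low) / (high - low)
    if topped_out:
        points = min(points, 7.0)
    return points

print(measure_points(23.56))                    # 3.0 (floor)
print(measure_points(99.62, topped_out=True))   # 7.0 (capped)
```

With these made-up boundaries, 99.62 percent lands in the seventh decile and earns roughly 7.3 points uncapped, matching the 7 to 7.9 range described above; with the cap, the same performance earns exactly 7.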
The methodology used to score the Cost category is similar to that used for the Quality category, with decile scoring.
Currently, the Cost category score is calculated from as many measures as meet the case minimum, out of 10 possible measures, eight of which are episode-based. For example, if only one measure can be scored, the entire Cost category score is based on performance on that single measure. If multiple measures meet the case minimum, their average determines the category score. Lower attributed costs equal a higher MIPS cost measure score.
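The averaging described above can be sketched as follows. The measure scores are hypothetical inputs, and the sketch assumes the caller passes in only the measures that met the case minimum.

```python
# Sketch: the Cost category score is the average of the measure scores
# that met the case minimum. If only one measure qualifies, it alone
# determines the category score. Inputs here are hypothetical.
def cost_category_score(qualifying_measure_scores):
    """Average the scores of qualifying measures.
    Returns None when no measure meets the case minimum."""
    if not qualifying_measure_scores:
        return None
    return sum(qualifying_measure_scores) / len(qualifying_measure_scores)

print(cost_category_score([7.2]))                      # single measure: 7.2
print(round(cost_category_score([7.2, 5.4, 8.1]), 1)) # average of three: 6.9
```

Returning `None` for the empty case reflects the general MIPS pattern of reweighting a category that cannot be scored, though the article does not spell out that scenario.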
CMS expects to establish benchmarks for scoring the current episode-based cost measures in future performance years, as data is continuously being collected.
The Quality Reporting Engagement Group, with decades of regulatory experience, can help any practice or eligible clinician with MIPS submissions, from deciding on measures and submission methods to tracking those measures across the year and assisting with the submission process. To learn how they can help your practice, email firstname.lastname@example.org.