A new report highlights that all links in the hospital reporting chain are tragically weak.
In a case of remarkable timing, just last week I was looking at the dark underbelly of hospital reporting, and now a new report has been released regarding poor validation of hospital quality data. The Department of Health and Human Services (HHS) Office of Inspector General (IG) just released its study on how the Centers for Medicare & Medicaid Services (CMS) tracks and investigates suspicious data for hospital-associated infections (HAIs).
To say that the hospital reporting structure and processes are convoluted would be an understatement. Hospitals typically report to three main groups: public reporting agencies (a CMS requirement for reimbursement purposes and, depending on the state, state health departments); private reporting agencies (hospital quality measurement organizations such as the Joint Commission); and prestige reporting outlets (to compete for national awards through groups such as US News & World Report).
CMS requires that data reported for incentive payment be submitted through the Centers for Disease Control and Prevention’s (CDC) National Healthcare Safety Network (NHSN). The CDC also uses these data for national reporting (such as the number of HAIs per state). In a nutshell, hospitals must report their infections to CMS, and those numbers affect the amount of reimbursement they receive. Because the data flow through NHSN, the CDC can also use them to track antibiotic resistance and other trends, which makes the system a convenient hub for HAI data. Notably, CMS and the CDC both depend on the quality of this reported information, which affects not only hospital reimbursement but also the tracking of hospital infection trends and, potentially, future reporting requirements. The data used by CMS and the CDC are therefore only as good as what is reported; if a hospital is not performing quality surveillance for HAIs, or is choosing to report only partial data, CMS and the CDC will be working with inaccurate information.
The recent IG report is particularly damning in that it highlights a darker reality behind this reporting requirement. Despite CMS’s good intentions in trying to decrease HAIs by tying cases to nonpayment, hospitals have been taking advantage of limited CMS validation. The report found that CMS failed to perform in-depth reviews of 96 hospitals that submitted suspicious data patterns in 2013 and 2014.
During its annual data evaluation, CMS is supposed to randomly select 400 participating hospitals and request samples of medical records to evaluate clinical process-of-care measures and HAI measures. CMS is also encouraged to review a targeted sample of up to 200 additional hospitals chosen through several selection criteria. These include "threshold-based criteria" (such as hospitals that fail to report at least half of their HAIs, submit data after the CMS deadline, failed validation the year before, or are newly participating) and "analysis-based criteria" (abnormal or conflicting data patterns, or a rapid change in data patterns). When selecting hospitals for this payment-related data validation, CMS can use analytics to flag hospitals that are outliers on certain measures and need further review.
Unfortunately, the IG report shows that CMS failed to use these measures when it conducted its targeted sample review in 2016 (which covered data from 2013 and 2014). In this review, CMS selected only 49 hospitals, and none of them were chosen using the analysis-based criteria (ie, CMS was not looking for hospitals with aberrant data patterns or suspicious changes in reporting).
This practice goes against the 2015 "Joint Reminder on NHSN Reporting" that the CDC and CMS sent out regarding concerns about dishonest reporting practices such as overculturing, underculturing, and adjudication. The joint reminder was an honest plea to hospitals to ensure the accuracy and completeness of their reported data. The statement highlighted growing concern over a hospital practice of over-testing patients (overculturing) in an effort to classify infections as community-onset rather than hospital-acquired, even when the test used may be overly sensitive. Overculturing can be dangerous in that some tests, such as polymerase chain reaction (PCR), are highly sensitive and pick up any microbial DNA, regardless of whether the organism is alive or dead. This can result in a patient being treated with antimicrobials even without an active infection. Because of this ongoing issue, it is vital that CMS up its game in validating hospital data and accuracy.
This study was released to draw attention to the growing concern that hospitals are “gaming” the system and taking advantage of limited CMS validation. Indeed, a report from Kaiser Health News also draws attention to the lack of uniform standards for reviewing the data that hospitals report. This is especially relevant given that most CMS validation reviews occur 1 to 2 years after the reporting period, and even the 400 hospitals reviewed account for only about 10% of the nation’s hospitals.
Simply put, there is no consistent method for double-checking the data that hospitals report, and frankly, there is a heavy incentive to report fewer infections when those numbers affect reimbursement or prestige. Overall, this latest report points to the tip of the iceberg in terms of shady hospital reporting practices and, even worse, the inadequacy of the CMS response in validating such suspicious data.