Introduction: Artificial intelligence-based modelling has created an opportunity to improve upon existing hospital readmission risk score systems by redefining priorities and uncovering new criteria, but inherent systematic errors known as algorithmic bias can limit applicability. This study evaluated whether racial bias exists in the unplanned readmission risk scores of a novel model prepared for the CMS AI challenge.

Methods: The study population provided by the CMS challenge included Medicare recipients from 2012 (unique beneficiaries n=1,667,362, total claims n=34,233,260). Risk scores for unplanned hospital readmissions were projected on the basis of clinical and demographic criteria, including age, sex, comorbidities, and prior hospitalizations. Algorithmic bias across racial subgroups was assessed using Kernel Density Estimate (KDE) plots and Jensen-Shannon divergence, methods for visualizing probability densities and quantifying similarity between probability distributions. Jensen-Shannon distances between the model's forecast distributions were calculated for each pair of racial groups and scaled relative to mean values within each individual racial group.
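The pairwise comparison described above can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: the score data here are synthetic (drawn from an assumed beta distribution), and the binning grid is a hypothetical choice; only the use of the Jensen-Shannon distance between two racial subgroups' score distributions reflects the stated method.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

rng = np.random.default_rng(0)
# Hypothetical predicted readmission risk scores for two racial subgroups
# (synthetic stand-ins for the model's forecasts; not CMS data)
scores_a = rng.beta(2, 30, size=10_000)
scores_b = rng.beta(2, 30, size=10_000)

# Discretize both score sets onto a shared grid so the distributions
# are comparable bin by bin
bins = np.linspace(0.0, 1.0, 101)
p, _ = np.histogram(scores_a, bins=bins, density=True)
q, _ = np.histogram(scores_b, bins=bins, density=True)

# Jensen-Shannon distance (square root of the JS divergence);
# 0 means identical distributions, 1 means maximally different (base 2)
js = jensenshannon(p, q, base=2)
print(f"Jensen-Shannon distance: {js:.4f}")
```

Because the two synthetic samples come from the same distribution, the computed distance is small, mirroring the near-zero distances the study reports between racial subgroups. A KDE plot of the same scores (e.g. via `scipy.stats.gaussian_kde` or `seaborn.kdeplot`) would show the overlapping density curves used in Figure 1.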

Results: Jensen-Shannon distances between racial subgroups, scaled relative to individual racial groups, ranged from 0 to 0.1; the corresponding probability distributions are depicted as KDE plots (Figure 1). At a predefined "high risk" model threshold of 0.1, false negatives (missed readmission predictions) totaled 159,169 (FN rates of 0.36-0.51% across all racial groups). False positives (incorrect high-risk labels) totaled 5,861,737.
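The thresholded error counts reported above follow from a simple rule: flag any score at or above 0.1 as high risk, then tally flags against observed readmissions. A minimal sketch, again on synthetic scores and outcomes (the 0.1 threshold is from the study; everything else here is assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
# Hypothetical risk scores and actual readmission outcomes
scores = rng.beta(2, 30, size=n)
readmitted = rng.random(n) < 0.05

# Label every beneficiary at or above the study's threshold as "high risk"
THRESHOLD = 0.1
predicted_high = scores >= THRESHOLD

# False negative: readmitted but not flagged (a missed prediction)
# False positive: flagged as high risk but not readmitted
fn = int(np.sum(readmitted & ~predicted_high))
fp = int(np.sum(~readmitted & predicted_high))
fn_rate = fn / n
print(f"FN={fn} ({fn_rate:.2%}), FP={fp}")
```

Lowering the threshold trades false positives for false negatives, which is the trade-off the Discussion weighs: a low threshold catches nearly all true readmissions at the cost of flagging many patients who would not have been readmitted.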

Discussion: Jensen-Shannon distances of 0-0.01 between racial groups and the overlap of our KDE plot curves suggest no significant implicit algorithmic bias against racial subgroups with regard to readmission risk. Our threshold yields minimal false negatives at the expense of greater false positives, a potentially justified trade-off where the cost of failing to identify a high-risk patient exceeds the cost of intervention. The models prepared in this CMS AI challenge submission may be effective, with a low false negative rate, for clinical use in readmission risk assessment for patients regardless of race.