The significant variation in compliance rates (85%, 75%, and 100%) reported by different abstractors reviewing the same physician’s charts suggests inconsistency in how the data are interpreted or recorded. This points to a reliability issue among the abstractors.
Option A (Sampling selection): Sampling selection issues would affect whether the charts chosen are representative, but the question implies the same charts were reviewed, so sampling is not the issue.
Option B (Interrater reliability): This is the correct answer. The NAHQ CPHQ study guide states, “Interrater reliability refers to the consistency of data collection among different reviewers. Significant variation in results, such as differing compliance rates, indicates poor interrater reliability” (Domain 2). The conflicting results (85%, 75%, 100%) suggest abstractors are interpreting or applying the review criteria inconsistently, a common issue addressed through standardized training or clearer criteria.
Option C (Review tool validity): Validity concerns whether the tool measures what it is intended to measure. While a poorly designed tool could contribute, the variation in results points more directly to inconsistent application (reliability) rather than the tool’s design (validity).
Option D (Data definition): Unclear data definitions could contribute to variability, but interrater reliability encompasses this issue, as it includes consistency in applying definitions. The primary problem is the abstractors’ inconsistent results.
CPHQ Objective Reference: Domain 2: Health Data Analytics, Objective 2.2, “Ensure data integrity and reliability,” emphasizes the importance of interrater reliability in maintaining consistent data collection. The NAHQ study guide notes, “Interrater reliability is critical for ensuring data accuracy in performance measurement, such as physician scorecards, and can be improved through training and standardized protocols” (Domain 2).
Rationale: The wide range of compliance rates indicates that abstractors are not consistently applying the review criteria, a hallmark of poor interrater reliability. Addressing this through training or clearer guidelines is essential for data integrity, as per CPHQ principles.
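To make the idea concrete, the sketch below shows one common way interrater reliability could be quantified: Cohen’s kappa between two abstractors scoring the same charts. The statistic is not named in the study guide excerpt, and the chart-level determinations are entirely hypothetical (the question stem gives only the summary rates); the example simply illustrates how agreement corrected for chance can be checked.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same items (e.g., compliant / not compliant)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: proportion of charts where both abstractors agree
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: based on each rater's own category frequencies
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    expected = sum((counts_a[c] / n) * (counts_b[c] / n) for c in categories)
    return (observed - expected) / (1 - expected)

# Hypothetical determinations for 20 charts (1 = compliant, 0 = not compliant).
# Abstractor 1 finds ~85% compliance, abstractor 2 ~75%, echoing the question's rates.
abstractor_1 = [1,1,1,1,0,1,1,1,1,1,1,1,0,1,1,1,1,1,0,1]
abstractor_2 = [1,0,1,1,1,1,0,1,1,1,0,1,1,1,0,1,1,1,0,1]

kappa = cohens_kappa(abstractor_1, abstractor_2)
print(f"Cohen's kappa: {kappa:.2f}")  # a low kappa flags poor interrater reliability
```

In this illustrative data the raw agreement is 70%, yet kappa is close to zero once chance agreement is removed, which is exactly the kind of finding that would prompt the retraining or criteria clarification described in the rationale above.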
[Reference: NAHQ CPHQ Study Guide, Domain 2: Health Data Analytics, Objective 2.2]