Background: Clinical trials and outcomes studies often rely on nonphysicians to abstract complex data from medical records, but the reliability of these data is rarely assessed.
Methods: We used standardized charts of patients with non-Hodgkin lymphoma to assess the reliability of key clinical data elements abstracted by 6 clinical research associates (CRAs), 3 project staff, and 3 medical oncologists. We assessed reliability on 5 variables: MD-reported disease stage; rater-determined disease stage; International Prognostic Index (IPI; low, low-intermediate, intermediate-high, high); Charlson comorbidity index score; and presence of any item from the Charlson index. Intraclass correlation coefficients (ICCs) of 0-0.20 indicated "slight", 0.21-0.40 "fair", 0.41-0.60 "moderate", 0.61-0.80 "substantial", and >0.80 "almost perfect" reliability.
Results: By outcome, the ICC (95% confidence interval) values for MD-reported stage, rater-determined stage, and IPI were 0.86 (0.67, 0.94), 0.82 (0.59, 0.93), and 0.80 (0.55, 0.92), respectively. In contrast, the ICC (95% confidence interval) values for the Charlson score and for the presence of any Charlson comorbidity item were 0.47 (0.03, 0.75) and 0.61 (0.23, 0.83), respectively. Reliability varied by rater group, but no rater group was consistently more reliable than the others.
Conclusions: Trained CRAs abstracted key clinical variables with a very high degree of reliability and performed at a level similar to that of study trainers and oncologists. Elements of the Charlson index were less reliable than the other data types, possibly because of inherent ambiguity in the index itself.