Document Type : Original Article

Authors

1 Lecturer, Health Information Technology, School of Paramedical, Zahedan University of Medical Sciences, Zahedan AND PhD Student, Health Information Management, Tehran University of Medical Sciences, Tehran, Iran

2 Lecturer, Health Information Technology, School of Paramedical, Zahedan University of Medical Sciences, Zahedan, Iran

3 Assistant Professor, Statistics and Epidemiology, School of Health, Zahedan University of Medical Sciences, Zahedan, Iran

4 Lecturer, Health Information Technology, School of Paramedical, Bandarabbas University of Medical Sciences, Bandarabbas, Iran

5 Medical Records, Zahedan University of Medical Sciences, Zahedan, Iran

Abstract

Introduction: Reliability of diagnosis coding is essential for the use of data at national and international
levels. The present study compared the reliability of diagnosis coding with the 10th revision of the
International Classification of Diseases (ICD-10) between two groups of coders.
Methods: Two hundred and forty-five previously coded medical records from five public hospitals affiliated
with Zahedan University of Medical Sciences, Iran, were re-coded by hospital coders and the researcher in the
first half of 2011. Data were collected using a checklist whose validity was confirmed by experts. Intra-rater
reliability (agreement of each coder with his or her own prior codes) and inter-rater reliability (agreement of
the prior codes, or of the coders' current codes, with the researcher's codes) were assessed at different
levels of diagnosis using Cohen's kappa.
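The agreement statistic used in the Methods can be sketched as follows. This is a minimal illustration of Cohen's kappa applied at different ICD-10 character levels; the codes and variable names below are hypothetical examples, not the study's data.

```python
from collections import Counter

def cohen_kappa(codes_a, codes_b):
    """Cohen's kappa for two raters' categorical assignments over the same records."""
    n = len(codes_a)
    # observed agreement: proportion of records on which the raters agree
    po = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # chance agreement: sum over categories of the product of marginal proportions
    ca, cb = Counter(codes_a), Counter(codes_b)
    pe = sum(ca[c] * cb[c] for c in ca.keys() | cb.keys()) / (n * n)
    return (po - pe) / (1 - pe)

# Hypothetical ICD-10 codes assigned by a hospital coder and the
# researcher to the same five records (illustrative only)
coder      = ["I21.0", "J18.9", "E11.9", "I21.0", "K35.8"]
researcher = ["I21.9", "J18.9", "E11.9", "I21.0", "K35.2"]

# Agreement at the three-character category level: truncate codes first
k3 = cohen_kappa([c[:3] for c in coder], [c[:3] for c in researcher])
# Agreement on the full codes (fourth/fifth-character specificity included)
k_full = cohen_kappa(coder, researcher)
```

In this toy example kappa is perfect (1.0) at the three-character level but only moderate (about 0.52) on the full codes, illustrating how agreement can fall as the required level of specificity increases.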
Results: In most cases, intra-rater reliability was almost perfect; only at the level of the first three
characters of the principal diagnosis was the kappa value moderate (K = 0.52). Inter-rater reliability was
likewise above moderate in most cases. Agreement between the researcher's codes and the prior codes on the
medical records was low at the fifth character of the principal diagnosis (K = 0.18), moderate at the fourth
character of the first other diagnosis (K = 0.60), and moderate at the chapter level of the third other
diagnosis (K = 0.54). In addition, the kappa value between the coders and the researcher for the principal
diagnosis was moderate at the first three characters (K = 0.47) and low at the fifth character (K = 0.18).
Conclusion: Reliability of diagnosis coding was appropriate at the chapter and fourth-character levels,
but not at the levels of the first three characters and the fifth character. This could result from errors in
selecting the principal diagnosis when multiple codes apply, and from coders' inattention to the required
level of specificity in coding. Thus, implementation of in-service educational programs for coders seems essential.

Keywords
