Page 57 - Contributed Paper Session (CPS) - Volume 4
CPS 2126: Dr. Rajkumari Sanatombi Devi et al.
Item analysis in the assessment of knowledge in
biostatistics among the postgraduate students of
a medical college in northeast India
Dr. Rajkumari Sanatombi Devi, Dr. V.K Mehta
Sikkim Manipal Institute of Medical Sciences, Sikkim Manipal University, 5th Mile, Tadong,
Gangtok, East Sikkim, Sikkim, Pin: 737102, India
Abstract
The objective of the study was to assess the quality of multiple choice test
items on knowledge of Biostatistics using the difficulty and discriminating
indices, and to find the correlation between the two indices. The test consisted
of 38 items, each having one correct response and three wrong answers. The test
was administered to the postgraduate medical students attending the
research methodology classes conducted by the Department of Community
Medicine, SMIMS. Thirty-one items (81%) were within the acceptable range of
the difficulty index (0.20 to 0.80), and 7 items (18%) were discarded as being
too easy (> 0.80) or too difficult (< 0.20). Twenty-seven items (71%) fell
within the acceptable range (0.20 and above) of discriminating power,
and 7 items had poor discriminating power (< 0.20). Two items each
had zero and negative discriminating power. The mean (SD) of the
difficulty index was 47.53% (20.96%), while for the discriminating index it was
0.28 (0.20). Using the Pearson correlation formula, the two
indices were found to be strongly and positively correlated, significant at the
0.01 level (r = 0.52, P = 0.001). Hence, a significant positive correlation was
observed between these two indices, and the strength of the correlation was
strong in this study.
Keywords
Difficulty index; Discriminating index; Correlation; Multiple choice questions
1. Introduction
The adequacy of a test, whatever its purpose, depends upon the care
with which the items of the test have been chosen (Garrett, 1966). Multiple
choice questions are an efficient tool for identifying strengths and weaknesses
in students, as well as for providing guidance to teachers on their educational
protocols (Tan & McAleer, 2008). According to Ebel (1972), "Item analysis indicates the
difficulty level of each item and discriminates between the better and poorer
examinees. Thus, it helps in selecting and retaining the best test items in the
final draft of the test, rejecting poor items, and also shows the need to review
and modify the items". The difficulty of an item (problem or question) may
be determined in several ways, but the number right, or the proportion of the
group which can solve an item correctly, is the "standard" method for
46 | ISI WSC 2019