COMPARISON OF BINARY DIAGNOSTIC PREDICTORS USING ENTROPY

Authors

  • Kathare Alfred, Department of Mathematics and Computer Sciences, University of Eldoret
  • Otieno Argwings, Department of Mathematics and Computer Sciences, University of Eldoret
  • Kimeli Victor, Department of Mathematics and Computer Sciences, University of Eldoret

DOI:

https://doi.org/10.29121/granthaalayah.v6.i1.2018.1652

Keywords:

Entropy, Binary Diagnostic, Predictors

Abstract [English]

The use of gold standard procedures in screening may be costly, risky or even unethical, and is therefore often unsuitable for large-scale application. In such cases, a more acceptable diagnostic predictor is applied to a sample of subjects alongside the gold standard procedure. The performance of the predictor is then evaluated using the Receiver Operating Characteristic (ROC) curve, and the area under the curve (AUC) provides a summary measure of that performance. The ROC curve, however, embodies a trade-off between sensitivity and specificity, which in most cases are of different clinical significance, and the AUC has been criticized for lacking a coherent interpretation. In this study, we proposed entropy, a summary index of uncertainty, for comparing diagnostic predictors. Noting that a diseased subject who is correctly identified as diseased at a lower cut-off will also be identified at a higher cut-off, we treated the cut-offs of a binary predictor as the time variable of survival analysis. We then derived the entropy of the resulting functions of the diagnostic predictors. Application of the procedure to real data showed that entropy is a strong measure for quantifying the amount of uncertainty contained in a set of cut-offs of a binary diagnostic predictor.
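To make the proposal concrete, here is a minimal sketch, in Python, of how Shannon entropy, H = -Σ p_i log2(p_i), can be computed across the cut-offs of a continuous marker dichotomized against a gold standard. This is only an illustration on assumed simulated data, not the authors' derivation; the helper names (shannon_entropy, entropy_over_cutoffs) and the simulated scores and labels are hypothetical.

import numpy as np

def shannon_entropy(p):
    # Shannon entropy in bits, skipping zero cells to avoid log(0).
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def entropy_over_cutoffs(scores, labels, cutoffs):
    # At each cut-off c, call score >= c "diseased" and compute the entropy
    # of the joint (prediction, gold standard) cell probabilities.
    n = len(scores)
    entropies = []
    for c in cutoffs:
        pred = scores >= c
        cells = np.array([
            np.sum(pred & (labels == 1)),   # true positives
            np.sum(pred & (labels == 0)),   # false positives
            np.sum(~pred & (labels == 1)),  # false negatives
            np.sum(~pred & (labels == 0)),  # true negatives
        ])
        entropies.append(shannon_entropy(cells / n))
    return np.array(entropies)

# Simulated example: a marker that is stochastically higher in the diseased.
rng = np.random.default_rng(42)
labels = rng.integers(0, 2, size=500)              # gold-standard disease status
scores = labels + rng.normal(0.0, 0.8, size=500)   # continuous predictor
cutoffs = np.linspace(scores.min(), scores.max(), 25)
H = entropy_over_cutoffs(scores, labels, cutoffs)
print("least uncertain cut-off:", round(float(cutoffs[H.argmin()]), 3))

In this framing, a predictor whose set of cut-offs yields lower entropy carries less residual uncertainty about disease status; that is the kind of comparison the abstract proposes in place of the AUC.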

Published

2018-01-31

How to Cite

Alfred, K., Argwings, O., & Victor, K. (2018). COMPARISON OF BINARY DIAGNOSTIC PREDICTORS USING ENTROPY. International Journal of Research - GRANTHAALAYAH, 6(1), 440–447. https://doi.org/10.29121/granthaalayah.v6.i1.2018.1652