Presumptive diagnosis of cutaneous leishmaniasis


Abstract

Introduction: Cutaneous Leishmaniasis is a neglected tropical disease caused by a parasite. The most common presumptive diagnostic tool for this disease is the visual examination of the associated skin lesions by medical experts. Here, a mobile application was developed to aid this pre-diagnosis using automatic image recognition software based on a convolutional neural network model.

Material and Methods: A total of 2022 images of cutaneous diseases taken from 2012 to 2018 were used for training. Then, in 2019, machine learning techniques were tested to develop an automatic classification model. Also, a mobile application was developed and tested against specialized human experts to compare its performance.

Results: Transfer learning using the VGG19 model resulted in a classification accuracy of 93%. Moreover, on a randomly selected sample of skin images, the automatic model achieved 99% accuracy, while the ensemble prediction of seven human medical experts averaged 83%.

Conclusion: Mobile skin-monitoring applications are a crucial development for democratizing access to healthcare, especially for neglected tropical diseases. Our results revealed that the image recognition software outperforms human medical experts and can alert possible patients. Future developments of the mobile application will focus on health monitoring of Cutaneous Leishmaniasis patients via community leaders and on promoting treatment adherence.

INTRODUCTION

Cutaneous Leishmaniasis (CL) is considered a neglected disease causing thousands of deaths annually in tropical and subtropical countries [1]. The disease produces skin lesions that can sometimes persist for years. Specialized laboratory techniques to diagnose CL include direct parasitological examination and/or indirect testing with serology and molecular diagnostics [2]. However, CL is endemic to developing countries where healthcare budgets are limited. Additionally, CL is caused by protozoa transmitted in inaccessible rural areas far from health centers. Therefore, the inaccessibility and cost of diagnosis are major barriers to controlling the disease. Currently, the presumptive diagnosis of CL in rural areas is carried out by community leaders who conduct a paper-based survey to determine the risk of infection; the survey consists of a series of questions based on the visual inspection of patients. Therefore, a computer-aided application that uses the visual characteristics of the skin lesions could alert potential CL patients to attend a clinic for diagnosis and treatment. Here, we present a mobile application able to classify skin lesion images using a model based on convolutional neural networks (CNN). The mobile application allows users to take photographs of the affected area and to analyze the associated visual characteristics. Finally, the application produces a numerical value that indicates the probability of infection. The aim of the application is to perform a non-invasive and remote presumptive diagnosis of CL that can impact patients’ treatment in remote and secluded regions where the disease is endemic [1]. To our knowledge, no other automated digital diagnosis for CL exists. Our aim was to eliminate experts’ bias by relying solely on image classification of the wound using computer vision algorithms [3, 4]. In fact, the only m-health applications for CL to date were designed using simple prediction rules based on experts’ visual inspections [5] or on self-care visual examination [6, 7]. This novel technology has the potential to improve early disease detection and appropriate treatment [8, 9] without introducing human perceptual errors. Moreover, the mobile application can perform a presumptive diagnosis of CL without the need for specialized personnel, which can widen the impact of the technology intervention in secluded areas [10, 11].

MATERIAL AND METHODS

As the stimulus dataset, 2022 images were compiled and classified into different classes of cutaneous diseases, such as CL, melanoma, Hansen’s disease, and other cutaneous diseases that are commonly mistaken for CL. The images were taken with different devices, under different lighting conditions, and from different distances and angles between 2012 and 2018. To mitigate the heterogeneity and limited size of the image dataset, transfer learning on a pre-trained CNN model was applied.
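The paper does not describe the preprocessing pipeline, so the following is only a minimal sketch of how such a heterogeneous set of photographs could be standardized for a pre-trained CNN, assuming TensorFlow/Keras; the directory layout ("data/" with one sub-folder per class) and the augmentation settings are illustrative assumptions.

```python
# Illustrative preprocessing sketch (assumed pipeline, not reported in the paper).
from tensorflow.keras.applications.vgg19 import preprocess_input
from tensorflow.keras.preprocessing.image import ImageDataGenerator

IMG_SIZE = (224, 224)  # VGG19 expects fixed-size 224x224 RGB inputs

datagen = ImageDataGenerator(
    preprocessing_function=preprocess_input,  # VGG19-specific channel normalization
    rotation_range=20,                        # mild augmentation to offset the varying
    zoom_range=0.2,                           # devices, distances and angles
    horizontal_flip=True,
    validation_split=0.2,                     # hold out 20% of images for validation
)

train_gen = datagen.flow_from_directory(
    "data/", target_size=IMG_SIZE, batch_size=32,
    class_mode="categorical", subset="training")
val_gen = datagen.flow_from_directory(
    "data/", target_size=IMG_SIZE, batch_size=32,
    class_mode="categorical", subset="validation")
```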

Predictive model

Fig 1 shows a schematic of the transfer learning methodology, in which a CNN model fully trained for task A is used to train a new model to perform task B. A CNN normally requires a large amount of labeled data to train effectively. Transfer learning instead reuses a pre-trained model and extracts its first internal layers, which represent generic mid-level feature detectors. The frozen weights of these first convolutional layers are then connected to a new layer that is specifically trained to perform task B using backpropagation. With this methodology, a model can be trained with far fewer images and still achieve high performance [12]. Therefore, transfer learning using the VGG19 model [13] was implemented and fine-tuned.
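As a concrete illustration of this setup, the sketch below shows VGG19-based transfer learning in TensorFlow/Keras with a frozen convolutional base and a new classification head. The size of the head, the optimizer, and the number of classes are assumptions for illustration; the paper does not report these details.

```python
# Sketch of transfer learning: frozen VGG19 base (task A: ImageNet) plus a new
# classification head trained for task B (lesion classes). Head size, optimizer
# and class count are illustrative assumptions.
from tensorflow.keras.applications import VGG19
from tensorflow.keras import layers, models

NUM_CLASSES = 4  # e.g. CL, melanoma, Hansen's disease, other (assumed)

base = VGG19(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the generic mid-level feature detectors

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# model.fit(train_gen, validation_data=val_gen, epochs=20)
# Fine-tuning: once the head converges, the deepest VGG19 blocks can be unfrozen
# and training continued with a low learning rate.
```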

Fig 1. Transfer learning schematic

Mobile application

A mobile application was developed using the model described in the previous section. The application allowed users to take pictures of skin lesions of patients in remote areas with low healthcare coverage and no internet access. Fig 2 shows a schematic of the mobile application’s three main functionalities: a module containing simple and clear instructions on how to use the application, a photographic storage module that enabled organized record keeping, and an offline prediction module that used the trained model to predict the probability of CL infection. Six community leaders tested the mobile application for perceived engagement, which revealed high focused attention, high perceived usability, and high satisfaction.
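The paper does not state which framework powers the offline prediction module. One common way to run a Keras model on a phone without internet access is to convert it to TensorFlow Lite and bundle the converted file with the application; the sketch below illustrates that option only, and the file names are hypothetical.

```python
# Illustrative only: converting the trained model so predictions can run
# on-device, offline. The actual framework used by the application is not
# reported in the paper; file names below are hypothetical.
import tensorflow as tf

model = tf.keras.models.load_model("cl_vgg19_model.h5")

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # quantize/shrink for mobile use
tflite_model = converter.convert()

with open("cl_model.tflite", "wb") as f:  # bundled with the mobile application
    f.write(tflite_model)
```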

Fig 2. Mobile application schematic

RESULTS

Using stratified cross-validation, our trained model achieved 93% accuracy; the accuracy measure was calculated using Equation 1. Moreover, the sensitivity and specificity of the model were 80% and 96%, respectively. The sensitivity, or hit rate, is defined as the ability of the model to predict true positives and was calculated using Equation 2. Therefore, using image characteristics alone, our model correctly classified 80% of all images with CL given that the patient had the disease. High sensitivity is important for ruling out the disease when the result is negative, which is particularly relevant for a presumptive diagnosis of CL. The specificity of a model, on the other hand, is its ability to predict true negatives (Equation 3). In this case, the model correctly classified 96% of the images associated with non-CL lesions, the true negatives. High specificity is necessary to rule in the disease when the classification result is positive. Our results show that both statistical measures of model performance, sensitivity and specificity, are high, indicating a high diagnostic power.

Accuracy = (True Positives + True Negatives) / Total Population       (1)

Sensitivity = True Positives / (True Positives + False Negatives)       (2)

Specificity = True Negatives / (True Negatives + False Positives)       (3)
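For reference, the three measures can be computed directly from the counts of a binary confusion matrix; the short Python sketch below mirrors Equations 1-3 and uses placeholder counts rather than the study’s data.

```python
# Direct implementation of Equations 1-3 for a binary confusion matrix.
# The example counts are placeholders, not the study's data.
def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)   # Equation 1

def sensitivity(tp, fn):
    return tp / (tp + fn)                    # Equation 2

def specificity(tn, fp):
    return tn / (tn + fp)                    # Equation 3

print(accuracy(tp=80, tn=15, fp=3, fn=2))    # example: 0.95
```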

Seven experienced medical doctors specialized in CL diagnosis were recruited to test their ability to classify the disease correctly, using 100 images randomly selected from the dataset as visual stimuli. Each image was presented to each doctor in random order for classification. On this sample, the automatic model achieved 99% accuracy while, as an ensemble, the human experts’ accuracy was 83% (Table 1). Indeed, the machine learning model outperformed the ensemble of doctors with both higher sensitivity and specificity: the sensitivity of the AI model was 99% against 83% for the medical experts, and its specificity was 83% compared to 50% for the ensemble of medics.

Table 1

Confusion matrix for the results of the AI model compared with medical experts

                         AI Model                 Medical Experts
                         True Condition           True Condition
                         P          N             P          N
Predicted Condition   P  75         4             62         12
                      N  1          20            13         12

DISCUSSION

Experimental results revealed that the predictive model developed to classify CL images from mobile pictures had an outstanding performance when compared to that of the medical experts. This type of technological tool can support medical personnel in deciding when to advise patients to undergo more accurate but invasive diagnostic tests that require specialized equipment. Moreover, the subjective nature of medical visual perception is eliminated when using a technological tool. Indeed, mobile health applications provide unprecedented access to health information for a large population. In particular, neglected diseases such as CL may benefit from technological advances that widen the reach of health interventions by including patients from remote and poor areas [9]. In addition, these technological advancements may induce an increase in self-monitoring for disease treatment and prevention [6]. Finally, the model is limited to the classification of the particular visual characteristics of CL. However, the same technique may be used for other cutaneous diseases with a different set of skin training images. By promoting the use and training of different machine learning models for several skin diseases, a more accurate and complete public health classification tool may be developed [14]. For example, computer-aided diagnostic tools using deep learning image classification have been designed for the most common cutaneous diseases, such as melanoma [15], psoriasis [16], and eczemas [17, 18]. However, as a neglected tropical disease, CL has not been included in these modelling efforts. Our results revealed that transfer learning with a pre-trained CNN model achieved high sensitivity and specificity, comparable to state-of-the-art performance for other skin conditions [18].

CONCLUSION

Although early diagnosis is essential for the treatment of CL patients, the disease is endemic in inaccessible rural areas far from health centers. Therefore, a mobile technology may be used as a presumptive tool to guide possible CL patients to seek specialized medical care if needed. A novel mobile application using a machine learning model for image classification was developed and tested. Experimental results showed that the application outperformed medical experts, demonstrating the great potential of technology as an aid for medical personnel in underserved communities.

AUTHORS’ CONTRIBUTION

All authors contributed to the literature review, design, data collection and analysis, and drafting of the manuscript, and all read and approved the final manuscript.

CONFLICTS OF INTEREST

The authors declare no conflicts of interest regarding the publication of this study.

FINANCIAL DISCLOSURE

No financial interests related to the material of this manuscript have been declared.

References

1. Ramírez JD, Hernández C, León CM, Ayala MS, Flórez C, González C. Taxonomy, diversity, temporal and geographical distribution of Cutaneous Leishmaniasis in Colombia: A retrospective study. Sci Rep 2016;6(1):28266.
2. Rosales-Chilama M, Gongora RE, Valderrama L, Jojoa J, Alexander N, Rubiano LC, et al. Parasitological confirmation and analysis of Leishmania diversity in asymptomatic and subclinical infection following resolution of cutaneous Leishmaniasis. PLOS Negl Trop Dis 2015;9(12):e0004273.
3. Litjens G, Kooi T, Bejnordi BE, Setio AAA, Ciompi F, Ghafoorian M, et al. A survey on deep learning in medical image analysis. Med Image Anal 2017;42:60–88.
4. Oliveira RB, Papa JP, Pereira AS, Tavares JMRS. Computational methods for pigmented skin lesion classification in images: Review and future trends. Neural Computing and Applications 2018;29:613–36.
5. Rubiano L, Alexander NDE, Castillo RM, Martínez ÁJ, Luna JAG, Arango JD, et al. Adaptation and performance of a mobile application for early detection of cutaneous leishmaniasis. PLoS Negl Trop Dis 2021;15(2):e0008989.
6. Nadri K, Sajedi H, Sayahi A, Shahmoradi L. Designing a mobile-based self-care application for patients with cutaneous Leishmaniasis: An effective step in patients’ self-care and participation. Front Health Inform 2020;9(1):29.
7. Nadri K, Shahmoradi L, Sajedi H, Salehi A. Content for self-care app for patients with cutaneous leishmaniasis: Designing a mobile-based self-care app for patients with cutaneous leishmaniasis. Health Policy and Technology 2021;10(1):87–94.
8. Kobets T, Grekov I, Lipoldova M. Leishmaniasis: Prevention, parasite detection and treatment. Curr Med Chem 2012;19(10):1443–74.
9. Bourouis A, Zerdazi A, Feham M, Bouchachia A. m-Health: Skin disease analysis system using smartphone’s camera. Procedia Computer Science 2013;19:1116–20.
10. Kirtava Z, Shulaia T, Kiladze N, Korsantia N, Gogitidze T, Jorjoliani D. e-Health/m-Health services for dermatology outpatients screening for skin cancer and follow-up. International Conference on e-Health Networking, Applications and Services. IEEE; 2016.
11. Moghaddasi H, Mehdizadeh H. Mobile Health for Diagnosis and Management of Skin Lesions. Journal of Health and Biomedical Informatics 2016;3(2):155–65.
12. Weiss K, Khoshgoftaar TM, Wang D. A survey of transfer learning. Journal of Big Data 2016;3(1):9.
13. Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. Comput Biol Learn Soc 2015;2015:1–14.
14. Mahbod A, Schaefer G, Wang C, Ecker R, Ellinge I. Skin lesion classification using hybrid deep neural networks. International Conference on Acoustics, Speech and Signal Processing. IEEE; 2019.
15. Sultana NN, Puhan NB. Recent deep learning methods for melanoma detection: A review. International Conference on Mathematics and Computing. Springer; 2018.
16. Yu K, Syed MN, Bernardis E, Gelfand JM. Machine learning applications in the evaluation and management of Psoriasis: A systematic review. J Psoriasis Psoriatic Arthritis 2020;5(4):147–59.
17. Liu Y, Jain A, Eng C, Way DH, Lee K, Bui P, et al. A deep learning system for differential diagnosis of skin diseases. Nat Med 2020;26(6):900–8.
18. Thomsen K, Christensen AL, Iversen L, Lomholt HB, Winther O. Deep learning for diagnostic binary classification of multiple-lesion skin diseases. Front Med (Lausanne) 2020;7:574329.
