Presumptive diagnosis of cutaneous leishmaniasis
Introduction: Cutaneous Leishmaniasis is a neglected tropical disease caused by protozoan parasites. The most common presumptive diagnostic tool for this disease is the visual examination of the associated skin lesions by medical experts. Here, a mobile application was developed to aid this presumptive diagnosis using automatic image recognition based on a convolutional neural network model.
Material and Methods: A total of 2,022 images of cutaneous diseases, taken from 2012 to 2018, were used for training. Then, in 2019, machine learning techniques were applied to develop an automatic classification model. A mobile application was also developed and tested against specialized human experts to compare performance.
Results: Transfer learning using the VGG19 model produced a classification model with 93% accuracy. Moreover, on a randomly selected sample of skin images, the automatic model achieved 99% accuracy, while the ensemble prediction of seven human medical experts achieved 83%.
Conclusion: Mobile skin monitoring applications are crucial developments for democratizing health access, especially for neglected tropical diseases. Our results revealed that the image recognition software outperforms human medical experts and can alert possible patients. Future development of the mobile application will focus on health monitoring of Cutaneous Leishmaniasis patients via community leaders, aiming to promote treatment adherence.
Cutaneous Leishmaniasis (CL) is considered a neglected disease causing thousands of deaths annually in tropical and subtropical countries. This disease produces skin lesions which can sometimes persist for years. Specialized laboratory techniques to diagnose CL include direct parasitological examination and/or indirect testing with serology and molecular diagnostics. However, CL is endemic to developing countries where healthcare budgets are limited. Additionally, CL is caused by protozoa transmitted in inaccessible rural areas away from health centers. Therefore, the inaccessibility and cost of diagnosis are major barriers to the control of the disease. Currently, the presumptive diagnosis of CL in rural areas is accomplished by community leaders who conduct a paper-based survey to determine the risk of infection. The survey consists of a series of questions based on the visual inspection of patients. Therefore, a computer-aided application that uses visual characteristics of the skin lesions could alert potential CL patients to attend a clinic for diagnosis and treatment. Here, we present a mobile application able to classify skin lesion images using a model based on convolutional neural networks (CNN). The mobile application allows users to take photographs of the affected area and to analyze the associated visual characteristics. Finally, the application produces a numerical value that indicates the probability of infection. The aim of the application is to perform a non-invasive and remote presumptive diagnosis of CL that can impact patients' treatment in remote and secluded regions where the disease is endemic. To our knowledge, no other automated digital diagnosis for CL exists. Our aim was to eliminate experts' bias by relying solely on image classification of the wound using computer vision algorithms [3, 4].
In fact, for CL the only m-health applications to date were designed using simple prediction rules based on experts' visual inspections or based on self-care visual examination [6, 7]. This novel technology has the potential to improve early disease detection and appropriate treatment [8, 9] without introducing human perceptual errors. Moreover, the mobile application can perform a presumptive diagnosis of CL without the need for specialized personnel, which can widen the impact of the technology intervention in secluded areas [10, 11].
MATERIAL AND METHODS
As a stimulus dataset, 2,022 images were compiled and classified into different classes of cutaneous diseases: CL, melanoma, Hansen's disease and other cutaneous diseases commonly mistaken for CL. The images were taken with different devices, in different lighting conditions and from different distances and angles, from 2012 to 2018. To compensate for the heterogeneity and limited size of the image dataset, transfer learning on a pre-trained CNN model was applied.
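A dataset of this kind is commonly organized as one folder per disease class, with each image's label inferred from its parent folder. The sketch below illustrates that convention; the folder names are assumptions, not the authors' actual layout.

```python
# Minimal sketch of folder-per-class labeling for the image dataset.
# The class folder names below are illustrative assumptions.
from pathlib import PurePath

CLASSES = ["cutaneous_leishmaniasis", "melanoma", "hansens_disease", "other"]

def label_from_path(path):
    # Infer the class label from the parent-folder name,
    # e.g. "dataset/melanoma/img_001.jpg" -> "melanoma".
    label = PurePath(path).parent.name
    if label not in CLASSES:
        raise ValueError(f"unknown class folder: {label}")
    return label

print(label_from_path("dataset/melanoma/img_001.jpg"))  # melanoma
```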
Fig 1 shows a schematic of the transfer learning methodology. A fully trained CNN model for task A is used to train a new model to perform task B. Training a CNN from scratch requires a large amount of labeled data. Transfer learning instead takes a pre-trained model and retains its first internal layers (which represent generic mid-level feature detection). The frozen weights of these first convolutional layers are then connected to a layer that is specifically trained to perform task B using back-propagation. With this methodology, a model can be trained with fewer images while achieving high performance. Therefore, transfer learning using the VGG19 model was implemented and fine-tuned.
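The setup described above can be sketched in a Keras/TensorFlow environment as follows. This is an illustrative sketch, not the authors' exact configuration: the head layer sizes and the two-class output are assumptions, and `weights=None` is used here only to keep the sketch self-contained (in practice the ImageNet weights would be loaded).

```python
# Sketch of transfer learning with a frozen VGG19 convolutional base
# and a new classification head trained by back-propagation.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_transfer_model(num_classes=2, input_shape=(224, 224, 3)):
    # Pre-trained convolutional base; use weights="imagenet" in practice.
    base = tf.keras.applications.VGG19(
        include_top=False, weights=None, input_shape=input_shape)
    base.trainable = False  # freeze the generic feature-extraction layers

    # New head trained specifically for the skin-lesion task.
    model = models.Sequential([
        base,
        layers.Flatten(),
        layers.Dense(256, activation="relu"),   # head size is an assumption
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_transfer_model()
```

Only the new head's weights are updated during training, which is why far fewer labeled images suffice.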
A mobile application was developed using the model described in the previous section. The application allowed users to take pictures of skin lesions of patients in remote areas with low healthcare coverage and no internet access. Fig 2 shows a schematic of the mobile application's three main functionalities: a module containing simple and clear instructions on how to use the application, a photographic storage module that enabled organized record keeping, and an offline prediction module that used the trained model to predict the probability of CL infection. Six community leaders tested the mobile application for perceived engagement, which revealed high focused attention, high perceived usability and high satisfaction.
Using stratified cross validation, our trained model achieved 93% accuracy. The accuracy measure was calculated using Equation 1. Moreover, the sensitivity and specificity of the model were 80% and 96%, respectively. The sensitivity, or hit rate, is defined as the ability of the model to predict true positives and was calculated using Equation 2. Therefore, using only image characteristics, our model correctly classified 80% of all images with CL given that the patient had the disease. High sensitivity is important for ruling out disease when there is a negative result, which is particularly important for a presumptive diagnosis of CL. On the other hand, the specificity of a model is its ability to predict true negatives (Equation 3). In this case, the model correctly classified 96% of the images associated with non-CL lesions, the true negatives. High specificity is necessary to rule in the disease when the classification result is positive. Our results show that both statistical measures of model performance, sensitivity and specificity, are high, indicating high diagnostic power.
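The three measures referred to as Equations 1-3 can be computed from the counts of a binary confusion matrix. The sketch below uses their standard definitions; the counts in the example are illustrative, not the study's data.

```python
# Standard definitions of the performance measures (Equations 1-3),
# computed from true/false positive and negative counts.
def accuracy(tp, tn, fp, fn):
    # Equation 1: fraction of all classifications that are correct.
    return (tp + tn) / (tp + tn + fp + fn)

def sensitivity(tp, fn):
    # Equation 2: true-positive rate (hit rate).
    return tp / (tp + fn)

def specificity(tn, fp):
    # Equation 3: true-negative rate.
    return tn / (tn + fp)

# Illustrative counts only (not the study's data):
tp, tn, fp, fn = 80, 96, 4, 20
print(accuracy(tp, tn, fp, fn))   # 0.88
print(sensitivity(tp, fn))        # 0.8
print(specificity(tn, fp))        # 0.96
```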
Seven experienced medical doctors specialized in CL diagnosis were recruited to test their ability to classify the disease correctly using 100 images selected randomly from the dataset as visual stimuli. Each image was presented in random order to each doctor for classification. The automatic model's performance on the selected sample revealed 99% accuracy while, as an ensemble, the human experts' accuracy was 83% (Table 1). Indeed, the machine learning model outperformed the ensemble of doctors with both higher sensitivity and specificity. The AI model's sensitivity was 99% against 83% for the medical experts. Similarly, the AI model's specificity was 83% compared to 50% for the ensemble of medical experts.
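One plausible way to form the ensemble prediction of the seven experts is a simple majority vote per image; the exact aggregation rule used in the study is not specified, so the sketch below is an illustrative assumption.

```python
# Illustrative majority-vote aggregation of the seven experts'
# per-image classifications (assumed rule, not the study's method).
from collections import Counter

def majority_vote(labels):
    # labels: one classification per expert for a single image.
    return Counter(labels).most_common(1)[0][0]

votes = ["CL", "CL", "not-CL", "CL", "CL", "not-CL", "CL"]
print(majority_vote(votes))  # CL
```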
Experimental results revealed that the predictive model developed to classify CL images based on mobile pictures had outstanding performance compared to medical expert performance. This type of technological tool can aid medical personnel in deciding when to advise patients to undergo more accurate but invasive diagnostic tests using specialized equipment. Moreover, the subjective nature of medical visual perception is eliminated when using a technological tool. Indeed, mobile health applications provide unprecedented access to health information for a large population. In particular, neglected diseases such as CL may benefit from technological advances that widen the reach of health interventions by including patients from remote and poor areas. In addition, these technological advancements may induce an increase in self-monitoring for disease treatment and prevention. Finally, the model is limited to the classification of CL's particular visual characteristics. However, the same technique may be used for other cutaneous diseases using a different set of skin training images. By promoting the usage and training of different machine learning models for several skin diseases, a more accurate and complete public health classification tool may be developed. For example, computer-aided diagnostic tools using deep learning image classification have been designed for classification tasks for the most common cutaneous diseases such as melanoma, psoriasis or eczemas [17, 18]. However, as a neglected tropical disease, CL has not been included in these modelling efforts. Our results revealed that using transfer learning with a pre-trained CNN model achieved high sensitivity and specificity comparable to state-of-the-art performance for other skin conditions.
Although early diagnosis is essential for treatment of CL patients, this disease is endemic in inaccessible rural areas away from health centers. Therefore, a mobile technology may be used as a presumptive tool to guide possible CL patients to seek specialized medical care if needed. A novel mobile application using a machine-learning image-classification model was developed and tested. Experimental results showed that the application outperformed medical experts' ability, demonstrating the great potential of technology as an aid to medical personnel in underserved communities.
All authors contributed to the literature review, design, data collection and analysis, and drafting of the manuscript, and all read and approved the final manuscript.
CONFLICTS OF INTEREST
The authors declare no conflicts of interest regarding the publication of this study.
No financial interests related to the material of this manuscript have been declared.