
Our manuscript, "Interpretation of the Outputs of Deep Learning Model Trained with Skin Cancer Dataset" was published as a letter article in the Journal of Investigative Dermatology today (https://www.jidonline.org/article/S0022-202X(18)31992-4/fulltext).

When we train a CNN model, we sometimes get a disappointing Top-1 accuracy. I also suffered from this problem, and I did not understand exactly what was wrong at the time. When an early version of the 12DX paper was reviewed in JAMA Dermatology two years ago, the biggest reason for rejection was the low Top-1 accuracy.


However, unlike general object recognition studies, it is very difficult to evaluate medical research results with Top-1 accuracy alone, and it is important that the AUC can be high even when the Top-1 accuracy is low. If you look carefully, most medical AI studies have reported AUC rather than Top-(n) accuracy.

Because of the small and imbalanced training data in medical research, analyzing each class by its Top-(n) accuracy is inadequate (although the mean Top-(n) accuracy over all classes is meaningful). The Top-(n) accuracy of each class varies whenever we retrain the CNN on an imbalanced dataset. Therefore, we should look at a threshold-corrected value for each class, that is, the ROC curve.
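The threshold dependence can be seen in a toy sketch. The data, scores, and cutoff below are illustrative assumptions, not the paper's code; the point is only that a fixed-threshold accuracy can hide a perfectly ranked classifier on imbalanced data:

```python
# A toy sketch (not the paper's code) of why accuracy at a fixed cutoff
# can mislead on imbalanced data while the AUC stays high.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Imbalanced test set: 90 benign (label 0), 10 malignant (label 1).
y_true = np.array([0] * 90 + [1] * 10)

# Hypothetical model scores: malignant cases rank strictly higher than
# benign ones, but none of them exceed the default 0.5 cutoff.
scores = np.concatenate([
    rng.uniform(0.00, 0.20, 90),   # benign scores
    rng.uniform(0.25, 0.45, 10),   # malignant scores
])

pred = scores >= 0.5                      # fixed-threshold decision
accuracy = np.mean(pred == y_true)        # 0.90, looks acceptable
sensitivity = np.mean(pred[y_true == 1])  # 0.00, every cancer missed

# The ROC curve sweeps every threshold, so ranking quality shows up.
auc = roc_auc_score(y_true, scores)       # 1.00, a perfect ranking

print(accuracy, sensitivity, auc)
```

At the fixed cutoff the sensitivity is zero, yet the AUC of 1.0 shows that a better per-class threshold exists, which is exactly what the ROC analysis recovers.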

With these AUC results, we published "Classification of the Clinical Images for Benign and Malignant Cutaneous Tumors Using a Deep Learning Algorithm" (https://www.jidonline.org/article/S0022-202X(18)30111-8/fulltext).

There was a debate claiming that the 12DX algorithm is not sensitive enough (low Top-1 accuracy) on the ISIC dataset ("Automated Dermatological Diagnosis: Hype or Reality?"; https://www.jidonline.org/article/S0022-202X(18)31991-2/fulltext).


There was an additional problem besides the Top-(n) accuracy issue.

When we analyze a clinical image, judging "whether it is melanoma or not" is easier than matching "which type of cancer it is".

Analyzing the raw output of the AI (CNN) model corresponds to "matching the type of cancer", whereas analyzing the ratio of outputs is appropriate if we want to judge "whether it is cancer or not".

We therefore interpreted the ratio of the melanoma output to the nevus output rather than using the melanoma output alone:

RATIO (Melanoma Index) = melanoma output / (melanoma output + nevus output).

A clinical image of skin cancer consists of a nodular lesion and the background. If we want to concentrate only on the lesion, we need to analyze it with the RATIO above to get more accurate results.
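The ratio interpretation above can be sketched in a few lines. The class names and softmax values here are hypothetical examples, not actual 12DX outputs; the point is that the argmax and the melanoma/nevus ratio can disagree:

```python
# A minimal sketch of the Melanoma Index ratio described above.
# Class names and probabilities are hypothetical, not real model outputs.

def melanoma_index(melanoma_output: float, nevus_output: float) -> float:
    """RATIO = melanoma output / (melanoma output + nevus output)."""
    return melanoma_output / (melanoma_output + nevus_output)

# Example softmax vector over several skin-disease classes; the other
# diagnoses soak up much of the probability mass.
outputs = {"melanoma": 0.30, "nevus": 0.10, "seborrheic_keratosis": 0.35,
           "bcc": 0.15, "other": 0.10}

# Raw argmax ("which cancer is it?") picks seborrheic keratosis here,
top1 = max(outputs, key=outputs.get)

# but the melanoma-vs-nevus ratio ("is it melanoma or not?") is high.
mi = melanoma_index(outputs["melanoma"], outputs["nevus"])

print(top1)          # seborrheic_keratosis
print(f"{mi:.2f}")   # 0.75
```

Because the ratio discards every class except melanoma and nevus, it behaves like a binary melanoma-vs-nevus decision restricted to the lesion itself.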

In the attached photograph, (b) corresponds to "matching which cancer it is" and (a) to judging "whether it is cancer or not".

We made a web demo (http://dx.medicalphoto.org) that shows what conclusion is reached depending on the Top-5 outputs and how they are interpreted.

Posted by WHRIA on 2018.06.02