A SKIN CANCER CLASSIFICATION MODEL INCORPORATING A DISCRIMINATOR INTO THE LADDER NEURAL NETWORK ARCHITECTURE

Authors

DOI:

https://doi.org/10.20535/kpisn.2025.2.320063

Keywords:

semi-supervised learning, ladder neural networks, discriminator, skin cancer classification, HAM10000.

Abstract

Background. Semi-supervised learning (SSL) is one of the most promising areas of deep learning, especially for tasks where label acquisition is a complex and expensive process, such as skin cancer classification. Existing approaches, such as Adversarial Autoencoders (AAE) and Ladder Networks (LN), effectively use unlabeled data, but have limitations in reconstruction and regularization accuracy.

Objective. Development and study of a model for classifying skin cancers based on the integration of a discriminator into the architecture of ladder neural networks.

Methods. The developed model combines the regularizing properties of ladder neural networks with a discriminator-based loss function that evaluates the quality of image reconstructions based on their structure, shape, and key visual features. The experiments were conducted on the HAM10000 dataset with different shares of labeled data (30%, 10%, and 5%).
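The combined objective can be sketched as follows. This is a minimal illustrative PyTorch sketch, not the authors' exact architecture: the fully connected encoder/decoder/discriminator, layer sizes, noise level, loss weights, and function names are all assumptions made for illustration.

```python
# Illustrative sketch: a ladder-style denoising setup whose reconstruction
# quality is additionally judged by a discriminator. All sizes and weights
# below are assumed values, not those used in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, in_dim=3 * 64 * 64, latent_dim=128, n_classes=7, noise_std=0.3):
        super().__init__()
        self.noise_std = noise_std
        self.body = nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU(),
                                  nn.Linear(512, latent_dim), nn.ReLU())
        self.classifier = nn.Linear(latent_dim, n_classes)

    def forward(self, x, corrupt=False):
        if corrupt:  # corrupted path, as in ladder networks
            x = x + torch.randn_like(x) * self.noise_std
        z = self.body(x)
        return z, self.classifier(z)

class Decoder(nn.Module):
    def __init__(self, latent_dim=128, out_dim=3 * 64 * 64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(),
                                  nn.Linear(512, out_dim))

    def forward(self, z):
        return self.body(z)

class Discriminator(nn.Module):
    """Scores how realistic a (possibly reconstructed) image looks."""
    def __init__(self, in_dim=3 * 64 * 64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, 256), nn.LeakyReLU(0.2),
                                  nn.Linear(256, 1))

    def forward(self, x):
        return self.body(x)

def semi_supervised_loss(enc, dec, disc, x_lab, y_lab, x_unlab,
                         w_recon=1.0, w_adv=0.1):
    """Combined loss for the encoder/decoder side (discriminator trained separately)."""
    # Supervised branch: cross-entropy on the labeled batch (corrupted path).
    _, logits = enc(x_lab, corrupt=True)
    loss_sup = F.cross_entropy(logits, y_lab)

    # Unsupervised branch: reconstruct the clean input from the corrupted path.
    z_unlab, _ = enc(x_unlab, corrupt=True)
    x_hat = dec(z_unlab)
    loss_recon = F.mse_loss(x_hat, x_unlab)

    # Discriminator-based reconstruction cost: reconstructions should be
    # scored as realistic by the discriminator (non-saturating GAN-style term).
    d_out = disc(x_hat)
    loss_adv = F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))

    return loss_sup + w_recon * loss_recon + w_adv * loss_adv
```

In this reading, the discriminator term supplements the usual pixel-wise reconstruction cost so that reconstructions are judged by structure and appearance rather than pixel error alone; the weights w_recon and w_adv are hypothetical tuning parameters.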

Results. The experiments showed that the proposed model improved the F1-score for the malignancy class by 4% compared to the baseline ladder network when trained with 5% labeled data. With 30% labeled samples, the F1-score reached 74.8%, only 1% lower than that of the fully supervised model. The relative indicator at 5% labeled data was 0.939, exceeding the corresponding coefficient for STFL (0.877), which confirms the effectiveness of the proposed model's use of unlabeled data.

Conclusions. The proposed model improves on existing semi-supervised learning methods (Ladder Networks, Adversarial Autoencoders) by making efficient use of unlabeled data to regularize the encoder's latent space in the task of skin cancer classification. Prospects for further research include using a discriminator to compare latent spaces and improving the Reconstruction Cost function to extend the approach to other medical image analysis tasks.

References

A. Makhzani, J. Shlens, N. Jaitly, I. Goodfellow, and B. Frey, "Adversarial autoencoders," arXiv preprint arXiv:1511.05644, 2015. [Online]. Available: https://arxiv.org/abs/1511.05644. https://doi.org/10.48550/arXiv.1511.05644

I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial nets," in Advances in Neural Information Processing Systems, vol. 27, 2014, pp. 2672–2680.

R. Tachibana, T. Matsubara, and K. Uehara, "Semi-supervised learning using adversarial networks," in 2016 IEEE/ACIS 15th International Conference on Computer and Information Science (ICIS), Okayama, Japan, 2016, pp. 1–6. https://doi.org/10.1109/icis.2016.7550881

A. Gogna and A. Majumdar, "Semi supervised autoencoder," in Neural Information Processing: 23rd International Conference, ICONIP 2016, Kyoto, Japan, October 16–21, 2016, Proceedings, Part II, Springer International Publishing, 2016. https://doi.org/10.1007/978-3-319-46672-9_10

M. Pezeshki, L. Fan, P. Brakel, A. Courville, and Y. Bengio, "Deconstructing the Ladder Network Architecture," in Proceedings of the 33rd International Conference on Machine Learning, New York, NY, USA, 2016, pp. 2368–2376.

A. Rasmus, H. Valpola, and M. Honkala, "Semi-Supervised Learning with Ladder Networks," in Advances in Neural Information Processing Systems, vol. 28, 2015, pp. 3546–3554.

D. P. Kingma, S. Mohamed, D. J. Rezende, and M. Welling, "Semi-supervised learning with deep generative models," in Advances in Neural Information Processing Systems, vol. 27, 2014, pp. 3581–3589.

Published

2025-07-21