Benchmarking PathCLIP for Pathology Image Analysis.

Abstract

Accurate image classification and retrieval are important for clinical diagnosis and treatment decision-making. The recent contrastive language-image pre-training (CLIP) model has shown remarkable proficiency in understanding natural images. Drawing inspiration from CLIP, the pathology-dedicated CLIP (PathCLIP) has been developed, trained on over 200,000 image-text pairs. While PathCLIP's performance is impressive, its robustness under a wide range of image corruptions remains unknown. Therefore, we conduct an extensive evaluation of PathCLIP on variously corrupted images from the osteosarcoma and WSSS4LUAD datasets. In our experiments, we introduce eleven corruption types at various severity settings: brightness, contrast, defocus, resolution, saturation, hue, markup, deformation, incompleteness, rotation, and flipping. Through these experiments, we find that PathCLIP surpasses OpenAI-CLIP and the pathology language-image pre-training (PLIP) model in zero-shot classification. It is relatively robust to corruptions of contrast, saturation, incompleteness, and orientation. Among the eleven corruptions, hue, markup, deformation, defocus, and resolution cause relatively severe performance fluctuations in PathCLIP, indicating that image quality should be ensured before clinical use. Additionally, we assess the robustness of PathCLIP in image-to-image retrieval, finding that under diverse corruptions it performs less effectively than PLIP on osteosarcoma but better on WSSS4LUAD. Overall, PathCLIP presents impressive zero-shot classification and retrieval performance for pathology images, but appropriate care needs to be taken when using it.

© 2024. The Author(s), under exclusive licence to the Society for Imaging Informatics in Medicine.
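To make the evaluation setup more concrete, the sketch below illustrates CLIP-style zero-shot classification under a single image corruption (brightness), in the spirit of the robustness tests described above. It is a minimal example, not the authors' pipeline: the model ID is a generic public CLIP checkpoint standing in for PathCLIP/PLIP, and the prompts and file name are hypothetical placeholders.

```python
# Minimal sketch of CLIP-style zero-shot classification under a brightness
# corruption. Model ID, prompts, and image path are placeholders, not the
# paper's exact setup; substitute a PathCLIP/PLIP checkpoint and label prompts.

import torch
from PIL import Image, ImageEnhance
from transformers import CLIPModel, CLIPProcessor

MODEL_ID = "openai/clip-vit-base-patch32"  # placeholder; swap in a pathology CLIP model
model = CLIPModel.from_pretrained(MODEL_ID)
processor = CLIPProcessor.from_pretrained(MODEL_ID)
model.eval()

# Hypothetical prompts for an osteosarcoma-style three-class task.
prompts = [
    "an H&E image of viable tumor",
    "an H&E image of necrotic tumor",
    "an H&E image of non-tumor tissue",
]

def corrupt_brightness(image: Image.Image, factor: float) -> Image.Image:
    """One of many possible corruptions: scale brightness by `factor`."""
    return ImageEnhance.Brightness(image).enhance(factor)

def zero_shot_classify(image: Image.Image) -> int:
    """Return the index of the prompt most similar to the image embedding."""
    inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)
    # logits_per_image: (1, num_prompts) image-text similarity scores
    return int(outputs.logits_per_image.argmax(dim=-1).item())

image = Image.open("patch.png").convert("RGB")  # hypothetical pathology patch
for factor in (0.5, 1.0, 1.5):                  # darker, original, brighter
    pred = zero_shot_classify(corrupt_brightness(image, factor))
    print(f"brightness x{factor}: predicted -> {prompts[pred]}")
```

Image-to-image retrieval follows the same pattern: embed a corrupted query patch and a gallery of patches with `model.get_image_features`, then rank gallery items by cosine similarity to the query.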
