Show simple item record

dc.contributor.author: Kose, Kivanc
dc.contributor.author: Bozkurt, Alican
dc.contributor.author: Alessi-Fox, Christi
dc.contributor.author: Gill, Melissa
dc.contributor.author: Longo, Caterina
dc.contributor.author: Pellacani, Giovanni
dc.contributor.author: Dy, Jennifer G
dc.contributor.author: Brooks, Dana H
dc.contributor.author: Rajadhyaksha, Milind
dc.date.accessioned: 2022-11-02T16:46:16Z
dc.date.available: 2022-11-02T16:46:16Z
dc.date.issued: 2020-10-07
dc.identifier.citation: Kose K, Bozkurt A, Alessi-Fox C, Gill M, Longo C, Pellacani G, Dy JG, Brooks DH, Rajadhyaksha M. Segmentation of cellular patterns in confocal images of melanocytic lesions in vivo via a multiscale encoder-decoder network (MED-Net). Med Image Anal. 2021 Jan;67:101841. doi: 10.1016/j.media.2020.101841. Epub 2020 Oct 7. PMID: 33142135; PMCID: PMC7885250.
dc.identifier.eissn: 1361-8423
dc.identifier.doi: 10.1016/j.media.2020.101841
dc.identifier.pmid: 33142135
dc.identifier.uri: http://hdl.handle.net/20.500.12648/7835
dc.description.abstract: In-vivo optical microscopy is advancing into routine clinical practice for non-invasively guiding diagnosis and treatment of cancer and other diseases, and is thus beginning to reduce the need for traditional biopsy. However, reading and analysis of the optical microscopic images are generally still qualitative, relying mainly on visual examination. Here we present an automated semantic segmentation method called "Multiscale Encoder-Decoder Network (MED-Net)" that provides pixel-wise labeling into classes of patterns in a quantitative manner. The novelty in our approach is the modeling of textural patterns at multiple scales (magnifications, resolutions). This mimics the traditional procedure for examining pathology images, which routinely starts with low magnification (low resolution, large field of view) followed by closer inspection of suspicious areas with higher magnification (higher resolution, smaller fields of view). We trained and tested our model on non-overlapping partitions of 117 reflectance confocal microscopy (RCM) mosaics of melanocytic lesions, an extensive dataset for this application, collected at four clinics in the US and two in Italy. With patient-wise cross-validation, we achieved pixel-wise mean sensitivity and specificity of 74% and 92%, respectively, with a 0.74 Dice coefficient over six classes. In a second scenario, we partitioned the data clinic-wise and tested the generalizability of the model across clinics. In this setting, we achieved pixel-wise mean sensitivity and specificity of 77% and 94%, respectively, with a 0.77 Dice coefficient. We compared MED-Net against state-of-the-art semantic segmentation models and achieved better quantitative segmentation performance. Our results also suggest that, due to its nested multiscale architecture, the MED-Net model annotated RCM mosaics more coherently, avoiding unrealistic, fragmented annotations.
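For readers who want to reproduce the pixel-wise evaluation figures quoted in the abstract (mean sensitivity, specificity, and Dice over six classes), the sketch below shows one way such per-class metrics can be computed from predicted and ground-truth label maps. It is a minimal illustration, not code from the MED-Net release; the function name, the six-class default, and the assumption of integer-coded label maps are ours.

    import numpy as np

    def per_class_metrics(pred, truth, num_classes=6):
        """Pixel-wise sensitivity, specificity, and Dice per class.

        pred, truth: integer label maps of identical shape, with values
        in [0, num_classes). Returns a list of (sensitivity, specificity,
        dice) tuples, one per class; averaging over classes yields summary
        numbers of the kind reported in the abstract.
        """
        results = []
        for c in range(num_classes):
            p = (pred == c)           # pixels predicted as class c
            t = (truth == c)          # pixels labeled as class c
            tp = np.logical_and(p, t).sum()
            fp = np.logical_and(p, ~t).sum()
            fn = np.logical_and(~p, t).sum()
            tn = np.logical_and(~p, ~t).sum()
            sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
            specificity = tn / (tn + fp) if (tn + fp) else float("nan")
            dice = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else float("nan")
            results.append((sensitivity, specificity, dice))
        return results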
dc.language.iso: en
dc.relation.url: https://www.sciencedirect.com/science/article/abs/pii/S136184152030205X
dc.rights: Copyright © 2020. Published by Elsevier B.V.
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: Dermatology
dc.subject: In vivo segmentation
dc.subject: Melanocytic lesion
dc.subject: Reflectance confocal microscopy
dc.subject: Semantic segmentation
dc.title: Segmentation of cellular patterns in confocal images of melanocytic lesions in vivo via a multiscale encoder-decoder network (MED-Net).
dc.type: Article/Review
dc.source.journaltitle: Medical image analysis
dc.source.volume: 67
dc.source.beginpage: 101841
dc.source.endpage:
dc.source.country: United States
dc.source.country: Netherlands
dc.description.version: AM
refterms.dateFOA: 2022-11-02T16:46:16Z
dc.description.institution: SUNY Downstate
dc.description.department: Pathology
dc.description.degreelevel: N/A
dc.identifier.journal: Medical image analysis


Files in this item

Name: Publisher version
Name: nihms-1642941.pdf
Size: 1.088 MB
Format: PDF
