Full title: 《A Deep Convolutional Neural Network for segmenting and classifying epithelial and stromal regions in histopathological images》
  Chinese title (my translation): 《针对组织切片中上皮细胞和间质区细胞的分类和分割的深度卷积神经网络》
  Source: http://www.sciencedirect.com/science/article/pii/S0925231216001004

  Related paper: 《结合卷积神经网络和超像素聚类的细胞图像分割方法》 (A cell-image segmentation method combining convolutional neural networks and superpixel clustering)
  Source: http://kns.cnki.net/KCMS/detail/51.1196.TP.20170614.1318.098.html?uid=WEEvREcwSlJHSldRa1FhcEE0NXh1akkwNGNjMXVvQU9sekpLYXRrTkpzbz0=$9A4hF_YAuvQ5obgVAqNKPCYcEjKensW4ggI8Fm4gTkoUKaID8j8gFw!!&v=MTk2NTA3U1pMRzRIOWJNcVk1QlpPcDdZdzlNem1SbjZqNTdUM2ZscVdNMENMTDdSN3FlWU9kdkZ5emxVTHJJSlZZPUx6

  Preface: everything in this post, including the translation of the paper's title, is based on my own understanding and on related material I found online, so I cannot guarantee that my reading of the paper is entirely correct. Please point out any mistakes.

Translation of the original text

  • 0. Abstract
    Epithelial (EP) and stromal (ST) are two types of tissues in histological images. Automated segmentation or classification of EP and ST tissues is important when developing computerized systems for analyzing the tumor microenvironment. In this paper, a Deep Convolutional Neural Network (DCNN) based feature learning approach is presented to automatically segment or classify EP and ST regions from digitized tumor tissue microarrays (TMAs). Current approaches are based on handcrafted feature representations, such as color, texture, and Local Binary Patterns (LBP), in classifying two regions. Compared to handcrafted feature based approaches, which involve task dependent representations, DCNN is an end-to-end feature extractor that may be directly learned from the raw pixel intensity values of EP and ST tissues in a data driven fashion. These high-level features contribute to the construction of a supervised classifier for discriminating the two types of tissues. In this work we compare DCNN based models with three handcrafted feature extraction based approaches on two different datasets which consist of 157 Hematoxylin and Eosin (H&E) stained images of breast cancer and 1376 immunohistochemically (IHC) stained images of colorectal cancer, respectively. The DCNN based feature learning approach was shown to have an F1 classification score of 85%, 89%, and 100%, accuracy (ACC) of 84%, 88%, and 100%, and Matthews Correlation Coefficient (MCC) of 86%, 77%, and 100% on two H&E stained (NKI and VGH) and IHC stained data, respectively. Our DCNN based approach was shown to outperform three handcrafted feature extraction based approaches in terms of the classification of EP and ST regions.

  • Translation
    Epithelial (EP) and stromal (ST) tissue are two tissue types in histological images. Automated segmentation and classification of EP and ST regions is important when developing computerized systems for analyzing the tumor microenvironment. The paper presents a Deep Convolutional Neural Network (DCNN) based feature-learning approach for automatically segmenting or classifying EP and ST regions in digitized tumor tissue microarrays (TMAs). Current approaches to separating the two region types rely on handcrafted feature representations such as color, texture, and Local Binary Patterns (LBP). In contrast to handcrafted-feature approaches, which involve task-dependent representations, the DCNN is an end-to-end feature extractor whose features are learned directly from the raw pixel intensities of EP and ST tissue in a data-driven fashion. These high-level features are then used to build a supervised classifier that discriminates the two tissue types. The paper compares DCNN-based models against three handcrafted-feature approaches on two different datasets: 157 Hematoxylin & Eosin (H&E) stained breast-cancer images and 1376 immunohistochemically (IHC) stained colorectal-cancer images (which means the two datasets differ not only in staining method but also in cancer type). On the two H&E-stained datasets (NKI, from the Netherlands Cancer Institute, and VGH, from Vancouver General Hospital) and the IHC-stained dataset, the DCNN-based approach achieves F1 scores of 85%, 89%, and 100%, accuracies (ACC) of 84%, 88%, and 100%, and Matthews Correlation Coefficients (MCC) of 86%, 77%, and 100%, respectively. For classifying EP and ST regions, the DCNN approach outperforms all three handcrafted-feature baselines.

  • 1. Introduction

  • 2. Previous works
    (The first three paragraphs are omitted.)
    Building on these approaches, in this work, we present a patch based DCNN approach for distinguishing epithelial and stromal compartments within H&E images of breast cancers [8]. Each histologic image is first represented by thousands of cropped sub-images. Two different approaches, involving the use of superpixels (SP) and a fixed-size square window (SW), are used to generate sub-images from H&E and IHC stained images, respectively. Different from color or intensity based features, such as LBP [19] and texture [6], our approach employs architectural features of atomic regions in the tumor and stroma for tissue classification. The DCNN based feature learning is applied to the classification of EP and ST patches on (1) IHC stained histologic images of colorectal cancer and (2) H&E stained images of breast cancer. For simplicity, throughout this paper, we use two different terms, “Classification” and “Segmentation”, to represent the two different applications, respectively. The classification of EP and ST patches of IHC stained images is the easier task, which aims to assign a single label to the respective patch. Segmentation of EP and ST regions is more difficult since it aims to detect the regions of interest (ROIs) and then assign a label to each corresponding ROI. For the classification task, we employed a fixed-size SW to extract candidate sub-images defined via a sliding window scheme. These are then fed to the DCNN for training the network. The flowchart for the classification framework with DCNN is shown in Fig. 2(g)–(k). As the separation of the epithelial and stromal regions from H&E images is a more difficult task, we first employ a superpixel based scheme to over-segment the image into atomic regions. The atomic regions are then resized into fixed-size square images, prior to feeding them to a DCNN for feature learning.

  • Translation
    Building on the studies above, the paper presents a patch-based DCNN approach for distinguishing the epithelial and stromal compartments in H&E-stained breast-cancer images. Each histologic image is first represented by thousands of cropped sub-images. Two schemes, superpixels and a fixed-size square window, are used to generate the sub-images from the H&E- and IHC-stained images, respectively. Unlike color- or intensity-based features such as LBP and texture, this approach uses the architectural features of atomic regions in tumor and stroma for tissue classification. DCNN-based feature learning is applied to classifying EP and ST patches in (1) IHC-stained colorectal-cancer images and (2) H&E-stained breast-cancer images. For simplicity, the paper uses the terms "classification" and "segmentation" for these two applications, respectively. Classifying EP and ST patches of the IHC-stained images (which, as noted above, are cropped with a fixed-size square window) is the easier task, since it only needs to assign a single label to each patch. Segmenting EP and ST regions is more difficult, since it must detect the regions of interest (ROIs) and then assign a label to each ROI. For the classification task, a fixed-size square window extracts candidate sub-images via a sliding-window scheme; these are then fed to the DCNN to train the network. The flowchart of the classification framework is shown in Fig. 2(g)–(k). Since separating epithelial and stromal regions in H&E images is the harder task, a superpixel scheme first over-segments each image into atomic regions; the atomic regions are then resized to fixed-size square images before being fed to the DCNN for feature learning.

  • fig 2
    The illustration of DCNN+SMC approach for Epithelial and Stromal segmentation and classification for H&E (a–f) and IHC (g–k) stained histologic images. The original H&E (a) and IHC (g) stained images are over-segmented into sub-images using a SLIC (b) and fixed-size square window based approach (h), respectively. An exemplar patch (c) is resized into smaller 50 × 50 sub-images (d). The sub-images (d and i) are then fed to a DCNN (e and j) for segmentation and classification of epithelial and stromal regions, shown in panels (f) and (k), respectively. (For interpretation of the references to color in this figure caption, the reader is referred to the web version of this paper.)



  • Translation
    Illustration of the DCNN+SMC (softmax classifier) approach for segmenting and classifying epithelial and stromal tissue in H&E-stained (a–f) and IHC-stained (g–k) histologic images. The original H&E-stained image (a) and IHC-stained image (g) are over-segmented into sub-images using SLIC (b) and a fixed-size square window (h), respectively. An example patch (c) is resized into a smaller 50 × 50 sub-image (d). The sub-images (d and i) are then fed to a DCNN (e and j) for segmentation and classification of the epithelial and stromal regions, as shown in (f) and (k). (For the color references in this caption, the reader is referred to the web version of the paper.)

  • Original text
    The rest of this paper is organized as follows. A detailed description of DCNN is presented in Section 3. The experimental setup and comparative strategies are presented in Section 4. The experiment results and a discussion of the results are reported in Section 5. Concluding remarks are presented in Section 6.

  • Translation
    The rest of the paper is organized as follows: Section 3 gives a detailed description of the DCNN; Section 4 presents the experimental setup and the comparison strategies; Section 5 reports and discusses the experimental results; Section 6 gives concluding remarks.

  • 3. Methods


    fig 1 (the DCNN architecture; its layer parameters are given in Section 4.3)

  • 3.1. The deep convolutional neural networks (DCNN)
    This section describes the DCNN, which consists of two convolutional layers, two max-pooling layers, two fully connected layers, and one output layer.

  • 3.2. The convolutional layer
    This section gives the configuration of the convolutional layers: layer $l$ has $d^{l}_{h}$ linear convolution kernels, each of size $m^{l} \times m^{l}$.
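    Since the section body is omitted here, the following size bookkeeping for a valid convolution may help; it is my reconstruction rather than the paper's notation, but it is consistent with the layer sizes listed later in Section 4.3 (a 32 × 32 input convolved with 5 × 5 kernels gives 28 × 28 feature maps):

\[
n_{\mathrm{out}} = n_{\mathrm{in}} - m^{l} + 1, \qquad \text{e.g. } 32 - 5 + 1 = 28 .
\]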

  • 3.3. The max-pooling layer

  • 3.4. Output layer: softmax classifier
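    The post does not translate this section, but Section 4.3 below refers to its Eq. (1) (the softmax classifier, SMC) and Eq. (2) (the per-region decision). For reference, a standard two-class softmax output layer takes the following form; this is my reconstruction, not the paper's exact notation:

\[
P(y = j \mid \mathbf{x}) = \frac{e^{\mathbf{w}_{j}^{\top}\mathbf{x} + b_{j}}}{\sum_{k \in \{\mathrm{EP},\,\mathrm{ST}\}} e^{\mathbf{w}_{k}^{\top}\mathbf{x} + b_{k}}},
\qquad
\hat{y} = \arg\max_{j \in \{\mathrm{EP},\,\mathrm{ST}\}} P(y = j \mid \mathbf{x}).
\]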

  • Table 2


    The numbers of images used for training and testing, the numbers of training and testing patches obtained with the two superpixel methods, and the numbers of patches obtained with the SW scheme.

  • 3.5. Generating training and testing samples

    Translation
    Table 2 lists the datasets D1 and D2 used to evaluate the work; further details on both are given in Section 4.1.
    To segment the EP and ST compartments of D1, sub-images are obtained with two superpixel methods. The training and testing images are first over-segmented with a superpixel method (the paper says "D1 and D2" here, but I believe D2 should not be included). Each atomic region is then resized to 50 × 50 via bilinear interpolation; Fig. 2(a)–(d) shows this sub-image generation for D1. Two superpixel methods are used, Ncut (Normalized Cut) and SLIC (Simple Linear Iterative Clustering), with the implementations taken from [20,23]. For comparison, a fixed-size sliding window is also applied to the D1 images, over-segmenting them into 50 × 50 sub-images: the window moves row by row from the top-left to the bottom-right corner with a step of 25 pixels, and border padding is used to handle boundary effects.
    To classify the tissue images of D2, the method of [19] is followed: a sliding window splits each image into 80 × 80 square patches. As before, the step is 40 × 40 (I take "the same" to mean that the step is half the patch size), and border padding is again used to avoid boundary effects. Fig. 2(g)–(i) shows the sub-image generation for D2. (A code sketch of both schemes follows below.)
    The training sub-images of D1 and D2 are used to train and optimize the DCNN and the comparison models; the testing sub-images are used for qualitative and quantitative evaluation.
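    As announced above, here is a minimal sketch of both sub-image generation schemes, assuming scikit-image and an H × W × 3 RGB input array. The 50 × 50 / stride-25 and 80 × 80 / stride-40 values come from the text above; `n_segments` and the `reflect` padding mode are my assumptions, since the paper does not specify them.

```python
import numpy as np
from skimage.measure import regionprops
from skimage.segmentation import slic
from skimage.transform import resize

def slic_subimages(img, n_segments=1000, size=50):
    """D1 scheme: over-segment with SLIC, crop the bounding box of each
    atomic region, and resize it to size x size (order=1 is bilinear)."""
    labels = slic(img, n_segments=n_segments, start_label=1)
    subs = []
    for region in regionprops(labels):
        r0, c0, r1, c1 = region.bbox
        subs.append(resize(img[r0:r1, c0:c1], (size, size), order=1))
    return np.stack(subs), labels

def sw_subimages(img, size=50, stride=25):
    """SW scheme: slide a fixed-size square window row by row from the
    top-left corner; borders are padded so the last window still fits."""
    pad_r = (-(img.shape[0] - size)) % stride
    pad_c = (-(img.shape[1] - size)) % stride
    img = np.pad(img, ((0, pad_r), (0, pad_c), (0, 0)), mode="reflect")
    return np.stack([img[r:r + size, c:c + size]
                     for r in range(0, img.shape[0] - size + 1, stride)
                     for c in range(0, img.shape[1] - size + 1, stride)])
```

    For D2, the same sliding-window function would be called with size=80, stride=40.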

  • 4. Experimental Setup
    In order to show the effectiveness of the approach, the DCNN and comparative models are qualitatively and quantitatively evaluated on D1 and D2, respectively.

  • Translation
    To demonstrate the effectiveness of the approach, the DCNN and the comparison models are evaluated qualitatively and quantitatively on D1 and D2, respectively.

  • 4.1. Data set
    4.1.1. Data set 1 (D1)—This data set was downloaded via the links provided in [4]. The data was acquired from two independent cohorts: Netherlands Cancer Institute (NKI) and Vancouver General Hospital (VGH). It consists of 157 rectangular image regions (106 NKI, 51 VGH) in which Epithelial and Stromal regions were manually annotated by pathologists. The images are H&E stained histologic images from breast cancer TMAs. The size of each image is 1128 × 720 pixels at a 20 × optical magnification.

  • Translation
    4.1.1. Dataset D1. This dataset can be downloaded via the links provided in [4]. The data come from two independent cohorts: the Netherlands Cancer Institute (NKI) and Vancouver General Hospital (VGH). It consists of 157 rectangular image regions (106 NKI, 51 VGH) in which the epithelial and stromal regions were manually annotated by pathologists. The images are H&E-stained histologic images from breast-cancer TMAs; each image is 1128 × 720 pixels at 20× optical magnification.

  • Original text
    4.1.2. Data set 2 (D2)—This data was downloaded from the links provided in [19]. The data was originally acquired at the Helsinki University Central Hospital from 1989 to 1998. D2 comprises 27 TMAs of colorectal cancer that were stained with epidermal growth factor receptor (EGFR) antibody and hematoxylin counterstain. The slides were digitized with a whole slide scanner under 20 × magnification. For the study, a total of 1377 rectangular tissue samples (826 EP and 451 ST) were chosen from 643 tumor cores. The tissue samples had been previously manually labeled as EP or ST by expert pathologists. The size of the annotations varied between 93 and 2372 pixels in width and between 94 and 2373 pixels in height. As Table 2 shows, the image patches in both D1 and D2 were approximately evenly divided into training and testing subsets.

  • Translation
    4.1.2. Dataset D2. This dataset can be downloaded via the links provided in [19]. The data were originally acquired at Helsinki University Central Hospital between 1989 and 1998. D2 comprises 27 colorectal-cancer TMAs stained with an epidermal growth factor receptor (EGFR) antibody and a hematoxylin counterstain (i.e., IHC). The slides were digitized with a whole-slide scanner at 20× magnification. For the study, a total of 1377 rectangular tissue samples (826 EP and 451 ST) were chosen from 643 tumor cores; they had previously been manually labeled as EP or ST by expert pathologists. The annotations vary from 93 to 2372 pixels in width and from 94 to 2373 pixels in height. As Table 2 shows, the image patches of both D1 and D2 were split roughly evenly into training and testing subsets.

  • 4.2. Training the DCNN
    We used a coarse-to-fine sweep approach [27] to choose hyper-parameters for the DCNN. Our approach begins with a coarse setting (wide hyperparameter ranges, training only for 1–5 epochs) and moves to more finely tuned settings (narrow ranges, training with many more epochs). The training procedure is based on the CAFFE framework [15].

  • Translation
    A coarse-to-fine sweep [27] is used to choose the DCNN hyper-parameters, starting from coarse settings (wide hyper-parameter ranges, training for only 1–5 epochs) and moving to finer ones (narrow ranges, many more epochs); the exact procedure is not described in more detail. Training is implemented in the Caffe framework [15].
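    Since the exact procedure is not described, here is a minimal sketch of what such a coarse-to-fine sweep typically looks like, using a random log-uniform search over the learning rate; the ranges and the `train_and_score` callback are illustrative assumptions, not values from the paper.

```python
import random

def sample_lr(low_exp, high_exp):
    """Draw a learning rate log-uniformly from [10**low_exp, 10**high_exp]."""
    return 10 ** random.uniform(low_exp, high_exp)

def sweep(train_and_score, trials, lr_exp_range, epochs):
    """Try `trials` random settings; return (best_score, best_lr)."""
    results = []
    for _ in range(trials):
        lr = sample_lr(*lr_exp_range)
        results.append((train_and_score(lr, epochs), lr))
    return max(results)

# Coarse stage: wide range, only a few (1-5) epochs per trial.
# best_score, best_lr = sweep(train_and_score, 20, (-6, -1), epochs=5)
# Fine stage: narrow the range around the coarse winner, train much longer.
# best_score, best_lr = sweep(train_and_score, 20, (-4, -3), epochs=100)
```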

  • 4.3. Parameter setting
    The flowchart illustrated in Fig. 2 is applied to both the comparative and DCNN-based approaches for tissue segmentation and classification. Note however that the different approaches differ in terms of the mechanism for feature extraction.

  • Translation
    The flowchart of Fig. 2 applies to both the DCNN-based and the comparison approaches for tissue segmentation and classification; note, however, that the approaches differ in their feature-extraction mechanisms.
    The parameters of the DCNN in Fig. 1 are set as follows. The first convolutional layer has 20 filters and the second has 50, all of size 5 × 5. The input to the first convolutional layer is 32 × 32, the input to the first pooling layer is 28 × 28, the input to the second convolutional layer is 14 × 14, the input to the second pooling layer is 10 × 10, and its output is 5 × 5. The pooling layers operate on 2 × 2 neighborhoods. In the first fully connected layer (L5), the 50 feature maps of size 5 × 5 are connected to 500 neurons; these 500 neurons are fully connected to the 100 neurons of layer L6, which are in turn fully connected to the output layer. (A code sketch of this architecture follows below.)
    The DCNN is trained with a greedy layer-wise approach, training each layer in sequence. The trained SMC yields a classifier based on Eq. (1); based on Eq. (2), each input region is judged to be EP or ST tissue.
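    A minimal sketch of this layer stack, written in PyTorch for readability (the paper's implementation uses Caffe and MATLAB). The layer sizes follow the description above; the ReLU activations are my assumption, since the post does not state which non-linearity is used.

```python
import torch.nn as nn

# LeNet-style stack matching the sizes above: 32x32 input -> 28x28 (conv1,
# 20 filters) -> 14x14 (pool) -> 10x10 (conv2, 50 filters) -> 5x5 (pool)
# -> 500 -> 100 -> 2 classes (EP vs. ST).
class EpStDCNN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 20, kernel_size=5),   # L1: 20 filters of 5x5
            nn.MaxPool2d(2),                   # L2: 2x2 max-pooling
            nn.Conv2d(20, 50, kernel_size=5),  # L3: 50 filters of 5x5
            nn.MaxPool2d(2),                   # L4: 2x2 max-pooling
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(50 * 5 * 5, 500),   # L5: 50 maps of 5x5 -> 500 neurons
            nn.ReLU(),                    # activation assumed, not stated
            nn.Linear(500, 100),          # L6: 500 -> 100 neurons
            nn.ReLU(),                    # activation assumed, not stated
            nn.Linear(100, num_classes),  # output layer (softmax via loss)
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```

    With `nn.CrossEntropyLoss`, the softmax of Eq. (1) is folded into the loss; at test time, `logits.softmax(dim=1)` gives the per-class confidences used in the Eq. (2) decision. Note that this sketch trains end-to-end with backpropagation rather than with the greedy layer-wise scheme the paper describes.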

  • 4.4. Implementation of DCNN and SVM

  • Translation
    All experiments were run on a PC with a 3.4 GHz CPU, 16 GB of RAM, and an NVIDIA Quadro 2000 graphics card (a card aimed mainly at graphics display), using MATLAB 2014a. As shown in Fig. 1, the network uses two convolutional layers, two pooling layers, two fully connected layers, and one output layer; the convolutional layers use 5 × 5 filters and the pooling layers 2 × 2 windows.
    The SVM is implemented with LIBSVM, using a Gaussian (RBF) kernel whose parameters are chosen by 10-fold cross-validation.
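    A minimal sketch of the SVM stage, assuming the DCNN features have already been extracted into arrays `X_train`/`y_train` (names are mine). scikit-learn's `SVC` wraps LIBSVM; the C and gamma grids are illustrative, since the paper only says the RBF parameters are tuned by 10-fold cross-validation.

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# RBF-kernel SVM on DCNN features, tuned by 10-fold cross-validation.
param_grid = {"C": [0.1, 1, 10, 100], "gamma": [1e-4, 1e-3, 1e-2, 1e-1]}
svm = GridSearchCV(SVC(kernel="rbf", probability=True), param_grid, cv=10)
# svm.fit(X_train, y_train)                       # X: n_samples x n_features
# ep_confidence = svm.predict_proba(X_test)[:, 1] # confidence of class 1 (EP)
```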

  • 4.5. Experimental design

  • Translation
    In Fig. 2(b), the regions outlined in red are the atomic regions produced by the superpixel algorithm. Since the DCNN requires uniformly sized sub-images as input, each atomic region is resized to a 50 × 50 square patch and then fed to the DCNN for model training, segmentation, and classification. Applying this pipeline to all atomic regions, the whole input image is classified into EP and ST tissue by the DCNN plus SMC classifier. For D2, the final classification of each IHC patch is decided by the mean confidence score of all the sub-images within that patch.
    With the SW-based approach, each image pixel is usually segmented twice, and the final result for a pixel is taken as the more confident of the two. To avoid bias in the evaluation, the same patch size (80 × 80) is used to obtain sub-images from D2. (A sketch of both decision rules follows below.)
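    A minimal sketch of the two decision rules above; the functions and the 0.5 threshold are my reconstruction of the description, not code from the paper.

```python
import numpy as np

def classify_patch(ep_scores, threshold=0.5):
    """D2 rule: label an IHC patch by the mean EP confidence of all
    sub-images cropped from it."""
    return "EP" if np.mean(ep_scores) >= threshold else "ST"

def fuse_overlapping(score_a, score_b):
    """D1 SW rule: a pixel covered by two windows keeps the prediction
    that is farther from the decision boundary (the more confident one)."""
    return score_a if abs(score_a - 0.5) >= abs(score_b - 0.5) else score_b
```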

  • 4.6. Comparative strategies

    1. Compare the performance of the DCNN against conventional feature extraction.
    2. Compare the effect of different superpixel methods on the DCNN model.
    3. Compare the effect of different classifiers on the DCNN model.
    4. Compare the effect of the sliding-window size on the DCNN model.
  • 5. Experimental results and discussion

  • fig 3
    Segmentation of epithelial (red) and stromal (green) regions on a tissue image (a) using the different segmentation approaches on D1. (b) The ground truth annotations of the stromal and epithelial regions in (a) by an expert pathologist. The classification results are shown for DCNN-Ncut-SVM (c), DCNN-Ncut-SMC (d), DCNN-SLIC-SVM (e), DCNN-SLIC-SMC (f), DCNN-SW-SVM (g), DCNN-SW-SMC (h), and Color-SW-SVM (i), respectively. (For interpretation of the references to color in this figure caption, the reader is referred to the web version of this paper.)

    Segmentation of a tissue image from D1 with the different approaches; red regions are epithelial (EP) and green regions are stromal (ST). The method used is labeled on each panel. (Note that the D1 data are used here only for segmentation.)
  • fig 4
    The probability maps rendered by the different DCNN based approaches (Columns 3 and 4) and [19] (in Column 2) for classifying EP ((a)–(e) in the left block, Column 1) and ST ((f)–(k) in the right block, Column 1) patches on D2. The false-color (defined by the heat map (l)) of sub-images in Columns 2–4 reflect the confidence score in predicting them as EP/ST regions via Linda [19], DCNN+SVM, and DCNN+SMC, respectively. The various colors in the heat map (l) correspond to the predicted confidence scores (red=EP with 100% likelihood and blue=ST with 100% likelihood). (For interpretation of the references to color in this figure caption, the reader is referred to the web version of this paper.)

    DCNN models with different classifiers (Columns 3 and 4) and the model of Linda [19] (Column 2) classify the EP patches (left block) and the ST patches (right block) from D2. The false colors of the sub-images in Columns 2–4, defined by the heat map (l), reflect the confidence with which Linda [19], DCNN+SVM, and DCNN+SMC predict them to be EP or ST regions. The colors of the heat map correspond to the predicted confidence scores (red = EP with 100% likelihood, blue = ST with 100% likelihood). For the color references in this caption, the reader is referred to the web version of the paper. (I guess the printed version may be black and white.)

  • 5.1. Qualitative results
    The qualitative segmentation results of the DCNN (Fig. 3(c)–(h)) and color feature extraction based (Fig. 3(i)) models for a histological image in D1 (Fig. 3(a)) are shown in Fig. 3. In Fig. 3(b)–(i), green and red regions represent epithelial and stromal regions that were accurately segmented with respect to the pathologist determined ground truth (Fig. 3(b)). The black areas in Fig. 3(b)–(i) were identified as background regions and hence not worth computationally interrogating. As the qualitative results in Fig. 3(c)–(h) suggest, the segmentation results from the SP-based methods are visually different from those of the SW-based methods: the SW-based methods were prone to producing zigzag boundaries, while the SP-based methods produced natural boundaries. Although the SP-based methods produced erroneous boundaries as well, the errors appeared more subtle and less egregious, possibly since the superpixel based algorithms represent a natural partitioning of visual scenes. The results of pixel-wise classification on EP (Fig. 4(a)–(e)) and ST (Fig. 4(f)–(k)) patches in D2 are shown in Fig. 4.
    In Fig. 4(a)–(k), the colors in the heat map (l) correspond to the predicted confidence scores (red=EP with 100% and blue=ST with 100%). The results in Fig. 3 appear to suggest that DCNN based models outperform handcrafted feature extraction based models. Also, DCNN+SMC appears to outperform DCNN+SVM.

  • Translation
    Fig. 3 shows the qualitative segmentation results of the DCNN models (Fig. 3(c)–(h)) and the color-feature-extraction model (Fig. 3(i)) for a histological image from D1 (Fig. 3(a)). In Fig. 3(b)–(i), green and red mark epithelial and stromal regions that were segmented correctly with respect to the pathologist's ground truth (Fig. 3(b)). The black areas of Fig. 3(b)–(i) were identified as background and hence not worth analyzing computationally. As the qualitative results in Fig. 3(c)–(h) suggest, the segmentations of the superpixel-based methods look different from those of the sliding-window-based methods: the SW-based methods tend to produce zigzag boundaries, whereas the SP-based methods produce natural ones. The SP-based methods also make boundary errors, but these appear subtler and less egregious, possibly because superpixel algorithms represent a natural partitioning of visual scenes. Fig. 4 shows the pixel-wise classification results on the EP (Fig. 4(a)–(e)) and ST (Fig. 4(f)–(k)) patches of D2. In Fig. 4(a)–(k), the colors of the heat map (l) correspond to the predicted confidence scores (red = 100% EP, blue = 100% ST). The results in Fig. 3 suggest that the DCNN-based models outperform the handcrafted-feature models, and that DCNN+SMC outperforms DCNN+SVM.

  • 5.2. Quantitative results
    The quantitative performance for tissue segmentation and classification for the different models on D1 and D2 is shown in Table 4. The DCNN based approach yields a perfect result (100%) in terms of True Positive Rate (TPR), True Negative Rate (TNR), Positive Predictive Value (PPV), Negative Predictive Value (NPV), Accuracy (ACC), F1 Score (F1), and Matthews Correlation Coefficient (MCC) and outperforms the approaches described in [6] and [19], respectively. Fig. 5(a) and (b) show the ROC curves corresponding to segmentation accuracy for DCNN-Ncut-SMC, DCNN-SLIC-SMC, DCNN-Ncut-SVM, DCNN-SLIC-SVM, DCNN-SW-SVM, Color-SW-SVM, and Linda [19] on NKI (Fig. 5(a)) and VGH (Fig. 5(b)) of D1. The AUC values suggest that the DCNN based models outperform the handcrafted feature based approaches (the Color-SW-SVM model and the approach in [19]), with DCNN-Ncut-SVM emerging as marginally better than the other models. Fig. 6 shows the histogram for the DCNN-SW-SVM model for EP and ST patch based classification on D2. Fig. 6 is a plot of the number of images (Y-axis) versus the confidence score (X-axis) for the SVM classifier. The two types of image patches appear to be well separated. Finally, in terms of the comparison between the two SP algorithms, Table 4 shows that the performance of Ncut is slightly better than SLIC on D1. Additionally, the SVM classifier slightly outperforms SMC on D1.

  • Translation
    Table 4 gives the quantitative results of the different models for image classification and segmentation on D1 and D2. The DCNN-based approach achieves perfect scores (100%) on True Positive Rate (TPR), True Negative Rate (TNR), Positive Predictive Value (PPV), Negative Predictive Value (NPV), Accuracy (ACC), F1 score (F1), and Matthews Correlation Coefficient (MCC), outperforming the approaches of [6] and [19]. Fig. 5(a) and (b) show the ROC curves for segmentation accuracy of DCNN-Ncut-SMC, DCNN-SLIC-SMC, DCNN-Ncut-SVM, DCNN-SLIC-SVM, DCNN-SW-SVM, Color-SW-SVM, and Linda [19] on the NKI (Fig. 5(a)) and VGH (Fig. 5(b)) portions of D1. The AUC values indicate that the DCNN-based models outperform the handcrafted-feature models (the Color-SW-SVM model and the approach of [19]), with DCNN-Ncut-SVM marginally better than the other models. Fig. 6 shows the histogram of the DCNN-SW-SVM model for EP and ST patch classification on D2: it plots the number of images (Y-axis) against the SVM confidence score (X-axis), and the two types of image patches appear well separated. Finally, comparing the two superpixel algorithms, Table 4 shows that Ncut performs slightly better than SLIC on D1; in addition, the SVM classifier slightly outperforms SMC on D1.
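    For reference, the reported metrics can be computed from patch-level predictions as sketched below (0/1 label arrays with 1 = EP; this is not the authors' evaluation code).

```python
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             matthews_corrcoef)

def report(y_true, y_pred):
    """Compute the seven metrics reported in Table 4."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "TPR": tp / (tp + fn),  # true positive rate (sensitivity)
        "TNR": tn / (tn + fp),  # true negative rate (specificity)
        "PPV": tp / (tp + fp),  # positive predictive value (precision)
        "NPV": tn / (tn + fn),  # negative predictive value
        "ACC": accuracy_score(y_true, y_pred),
        "F1": f1_score(y_true, y_pred),
        "MCC": matthews_corrcoef(y_true, y_pred),
    }
```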

  • 5.3. Sensitivity analysis
    Fig. 7 shows the sensitivity of the window size (X-axis) on the segmentation accuracy (Y-axis) for the DCNN-SW-SVM model on D1. Fig. 7 suggests that the DCNN-SW-SVM model achieves the best AUC value when the window size is around 50 × 50 pixels. As a result, we chose a window size of 50 × 50 for all our subsequent experiments.

  • Translation
    Fig. 7 shows the sensitivity of the segmentation accuracy (Y-axis) to the window size (X-axis) for the DCNN-SW-SVM model on D1: the model achieves its best AUC at a window size of about 50 × 50 pixels, so 50 × 50 is used in all subsequent experiments.
  • 6. Concluding remarks
    In this paper we presented a new Deep Convolutional Neural Network (DCNN) based model for segmentation and classification of epithelial and stromal regions within Hematoxylin and Eosin (H&E) and Immunohistochemistry (IHC) images of breast and colon cancer. DCNN uses a deep architecture to learn complex features in a data-driven fashion and has been shown in multiple applications to outperform the classification accuracy obtained via handcrafted features. We compared the DCNN based models with extant handcrafted features and showed that for the task of separating stroma from epithelium, the DCNN based models consistently outperformed handcrafted feature based models. Future work will entail evaluation of our approach on tissue partitioning for other types of cancers as well.
  • Translation
    The paper presents a new DCNN-based model for segmenting and classifying the epithelial and stromal regions of H&E- and IHC-stained images of breast and colorectal cancer. The DCNN uses a deep architecture to learn complex features in a data-driven fashion, and in multiple applications this has been shown to outperform classification with handcrafted features. Comparing the DCNN-based models with existing handcrafted features on the task of separating stroma from epithelium, the DCNN-based models consistently performed better. Future work will evaluate the approach on tissue partitioning for other cancer types as well.
