Table of Contents

  • Introduction
  • Reading and Inspecting the Data
  • Logistic, SVM, and RandomForest Modeling
    • Data preprocessing
    • Logistic results
    • SVM results
    • RandomForest results
    • Summary of the three models
  • Convolutional Neural Network (CNN) Modeling
    • Building data generators to feed images to the CNN in batches
    • Fine-tuning a pretrained VGG19 on this dataset
    • CNN results
  • Summary

Introduction

  • Goal: detect whether lymph node histopathology slides contain cancer cells
  • Data: Histopathologic Cancer Detection (identifying whether lymph node pathology slides contain cancer cells), a binary image-classification dataset (image size 96 × 96 × 3) from Kaggle, available for download. To reduce computation time, only 12,000 of the 220,025 labeled training images are used in this analysis, split into a training set (8,000 images), a validation set (2,000 images), and a test set (2,000 images); the training set is used to fit the models, the validation set for hyperparameter tuning, and the test set to evaluate predictive performance
  • Prediction models: Logistic regression, SVM, RandomForest, and CNN (built with Keras)
  • Performance metrics: accuracy and area under the ROC curve (AUC); see the short sketch right after this list
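As a quick illustration of how these two metrics can be computed with scikit-learn (a minimal sketch with hypothetical placeholder arrays, not variables from the analysis below):

from sklearn.metrics import accuracy_score, roc_auc_score

y_true = [0, 0, 1, 1]          # hypothetical ground-truth labels
y_pred = [0, 1, 1, 1]          # hard 0/1 predictions
y_prob = [0.2, 0.6, 0.7, 0.9]  # predicted probabilities of class 1

print(accuracy_score(y_true, y_pred))  # fraction of correct predictions -> 0.75
print(roc_auc_score(y_true, y_prob))   # area under the ROC curve -> 1.0 for this toy example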

Python libraries used

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import os
import warnings
from glob import glob
from PIL import Image
from sklearn.preprocessing import MinMaxScaler
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, KFold, GridSearchCV
from sklearn.metrics import roc_curve, auc, accuracy_score

warnings.filterwarnings("ignore")
os.chdir("D:/BigData/histopathologic-cancer-detection")

from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"

Define a function to plot the ROC curve

def plot_roc(y_true, y_pred, predict_prob, label = ""):
    FP, TP, thresholds = roc_curve(y_true, predict_prob)
    roc_auc = auc(FP, TP)
    acc = accuracy_score(y_true, y_pred)
    plt.style.use("ggplot")
    plt.figure(dpi = 120)
    plt.plot(FP, TP, color = "blue")
    plt.plot([0, 1], [0, 1], "r--")
    plt.title(label + " AUC")
    plt.xlabel("FP")
    plt.ylabel("TP")
    plt.text(0.6, 0.35, "AUC = %.3f\nAccuracy = %.3f" % (roc_auc, acc))
    plt.show()

Reading and Inspecting the Data

The train folder contains the labeled pathology images; each image filename corresponds to the id column of train_labels.csv, and the label column indicates whether cancer cells are present.

The training set contains 220,025 images in total.

labels_df = pd.read_csv("train_labels.csv")
labels_df.shape
labels_df.head()
(220025, 2)
id label
0 f38a6374c348f90b587e046aac6079959adf3835 0
1 c18f2d887b7ae4f6742ee445113fa1aef383ed77 1
2 755db6279dae599ebb4d39a9123cce439965282d 0
3 bc3f0c64fb968ff4a8bd33af6971ecae77c75e08 0
4 068aba587a4950175d04c680d38943fd488d6a9d 0

Read the paths of all images in the train folder into a path column.

imgs_path_df = pd.DataFrame(glob("train/*.tif"), columns = ["path"])
imgs_path_df["id"] = imgs_path_df["path"].map(lambda x: x.split("\\")[-1].split(".")[0])
imgs_path_df.shape
imgs_path_df.head()
(220025, 2)
path id
0 train\00001b2b5609af42ab0ab276dd4cd41c3e7745b5... 00001b2b5609af42ab0ab276dd4cd41c3e7745b5
1 train\000020de2aa6193f4c160e398a8edea95b1da598... 000020de2aa6193f4c160e398a8edea95b1da598
2 train\00004aab08381d25d315384d646f5ce413ea24b1... 00004aab08381d25d315384d646f5ce413ea24b1
3 train\0000d563d5cfafc4e68acb7c9829258a298d9b6a... 0000d563d5cfafc4e68acb7c9829258a298d9b6a
4 train\0000da768d06b879e5754c43e2298ce48726f722... 0000da768d06b879e5754c43e2298ce48726f722

Merge the image paths with the image labels.

df = pd.merge(imgs_path_df, labels_df, on = "id", how = "left")
df.shape
df.head()
(220025, 3)
path id label
0 train\00001b2b5609af42ab0ab276dd4cd41c3e7745b5... 00001b2b5609af42ab0ab276dd4cd41c3e7745b5 1
1 train\000020de2aa6193f4c160e398a8edea95b1da598... 000020de2aa6193f4c160e398a8edea95b1da598 0
2 train\00004aab08381d25d315384d646f5ce413ea24b1... 00004aab08381d25d315384d646f5ce413ea24b1 0
3 train\0000d563d5cfafc4e68acb7c9829258a298d9b6a... 0000d563d5cfafc4e68acb7c9829258a298d9b6a 0
4 train\0000da768d06b879e5754c43e2298ce48726f722... 0000da768d06b879e5754c43e2298ce48726f722 1

Select 12,000 of the 220,025 training images for the subsequent analysis: 8,000 as the training set, 2,000 as the validation set, and 2,000 as the test set.

index = np.random.permutation(df.shape[0])
df = df.iloc[index]

train, valid = train_test_split(df[:10000], test_size = 0.2, stratify = df[:10000]["label"], random_state = 0)
test = df.iloc[10000:12000]

print("Number of training images: " + str(train.shape[0]))
print("Number of validation images: " + str(valid.shape[0]))
print("Number of test images: " + str(test.shape[0]))

# Proportion of label 0 and 1 among the 12000 selected images
df[:12000]["label"].value_counts(normalize=True)
Number of training images: 8000
Number of validation images: 2000
Number of test images: 2000
0    0.601167
1    0.398833
Name: label, dtype: float64
print("图像大小为:" + str(np.asarray(Image.open(df.iloc[0]["path"])).shape))
图像大小为:(96, 96, 3)

Display six images containing cancer cells (Positive) and six images without (Negative).

df1 = df[df["label"] == 1]
df2 = df[df["label"] == 0]

fig, ax = plt.subplots(2, 6, figsize = (20, 8), dpi = 100)
fig.suptitle("Histopathologic scans of lymph node sections", fontsize = 20, fontweight = "bold")

random_num1 = list(np.random.randint(0, df1.shape[0] - 1, size = 6))
random_num2 = list(np.random.randint(0, df2.shape[0] - 1, size = 6))

for i, j in enumerate(zip(random_num1, random_num2)):
    j1, j2 = j
    ax[0, i].imshow(np.asarray(Image.open(df1.iloc[j1]["path"])))
    ax[0, i].set_xticks([])
    ax[0, i].set_yticks([])
    ax[0, i].set_title("Positive", fontweight = "bold")
    ax[1, i].imshow(np.asarray(Image.open(df2.iloc[j2]["path"])))
    ax[1, i].set_xticks([])
    ax[1, i].set_yticks([])
    ax[1, i].set_title("Negative", fontweight = "bold")
plt.show()

Logistic, SVM, and RandomForest Modeling

Data preprocessing

Read all the images into arrays and downsize them from [96, 96, 3] to [24, 24, 3].

train_imgs = np.ndarray((train.shape[0], 24, 24, 3))
train_labels = np.ndarray((train.shape[0], ))

valid_imgs = np.ndarray((valid.shape[0], 24, 24, 3))
valid_labels = np.ndarray((valid.shape[0], ))

test_imgs = np.ndarray((test.shape[0], 24, 24, 3))
test_labels = np.ndarray((test.shape[0], ))

for i, j in enumerate(train["path"]):
    img = np.asarray(Image.open(j).resize((24, 24)))
    train_imgs[i] = img
    train_labels[i] = train["label"].iloc[i]

for i, j in enumerate(valid["path"]):
    img = np.asarray(Image.open(j).resize((24, 24)))
    valid_imgs[i] = img
    valid_labels[i] = valid["label"].iloc[i]

for i, j in enumerate(test["path"]):
    img = np.asarray(Image.open(j).resize((24, 24)))
    test_imgs[i] = img
    test_labels[i] = test["label"].iloc[i]

Reshape the arrays from [*, 24, 24, 3] to [*, 24 × 24 × 3], i.e. flatten each image into a 1728-dimensional vector, for use in the Logistic, SVM, and RandomForest models.

# Min-max scaling
scaler = MinMaxScaler()

train_x = train_imgs.reshape(-1, 24 * 24 * 3)
scaler.fit(train_x)
train_x = scaler.transform(train_x)
train_y = train_labels

valid_x = valid_imgs.reshape(-1, 24 * 24 * 3)
valid_x = scaler.transform(valid_x)
valid_y = valid_labels

test_x = test_imgs.reshape(-1, 24 * 24 * 3)
test_x = scaler.transform(test_x)
test_y = test_labels

train_x.shape
train_y.shape
valid_x.shape
valid_y.shape
test_x.shape
test_y.shape
MinMaxScaler(copy=True, feature_range=(0, 1))
(8000, 1728)
(8000,)
(2000, 1728)
(2000,)
(2000, 1728)
(2000,)

Logistic results

%%time
logit = LogisticRegression()
logit.fit(train_x, train_y)

y_pred = logit.predict(test_x)
y_prob = logit.predict_proba(test_x)
plot_roc(test_y, y_pred, y_prob[:, 1], label = "LogisticRegression")
Wall time: 1.34 s

SVM results

%%time
svc = SVC(probability = True)
svc.fit(train_x, train_y)

y_pred = svc.predict(test_x)
y_prob = svc.predict_proba(test_x)
plot_roc(test_y, y_pred, y_prob[:, 1], label = "SVC")
Wall time: 9min 50s

RandomForest results

%%time
rf = RandomForestClassifier()
rf.fit(train_x, train_y)

y_pred = rf.predict(test_x)
y_prob = rf.predict_proba(test_x)
plot_roc(test_y, y_pred, y_prob[:, 1], label = "RandomForest")
Wall time: 26.6 s

Use grid search with 3-fold cross-validation to select the RandomForest hyperparameters (the number of trees and the minimum number of samples per leaf); in total 2 × 3 × 3 = 18 model fits are required.

%%time
kfold = KFold(n_splits=3, random_state=0)
rf_param_grid = {"n_estimators": [100, 300],
                 "min_samples_leaf": [1, 3, 10]}

gsrf = GridSearchCV(RandomForestClassifier(), param_grid=rf_param_grid, cv=kfold,
                    scoring="accuracy", n_jobs=4, verbose=1)
gsrf.fit(np.concatenate([train_x, valid_x], axis=0), np.concatenate([train_y, valid_y], axis=0))
rf_best = gsrf.best_estimator_
Fitting 3 folds for each of 6 candidates, totalling 18 fits
[Parallel(n_jobs=4)]: Using backend LokyBackend with 4 concurrent workers.
[Parallel(n_jobs=4)]: Done  18 out of  18 | elapsed:  4.1min finished
Wall time: 5min 22s
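To see which hyperparameter combination the grid search selected, the fitted GridSearchCV object can be inspected (an optional check; this step is not shown in the original post):

print(gsrf.best_params_)  # the winning {"min_samples_leaf": ..., "n_estimators": ...} combination
print(gsrf.best_score_)   # its mean cross-validated accuracy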

Use the best RandomForest selected by the grid search to predict on the test set.

y_pred = rf_best.predict(test_x)
y_prob = rf_best.predict_proba(test_x)
plot_roc(test_y, y_pred, y_prob[:, 1], label = "RandomForest")

Summary of the three models

  • Logistic regression is clearly the worst of the three, so no hyperparameter tuning is attempted for it; the SVM (RBF kernel) takes by far the longest to train a single model; the RandomForest predicts only slightly worse than the SVM and trains much faster, so it is the model chosen for hyperparameter tuning
  • LogisticRegression: prediction accuracy 0.69, area under the ROC curve (AUC) 0.74
  • SVC with RBF kernel: prediction accuracy 0.76, AUC 0.83
  • RandomForest: accuracy 0.74 and AUC 0.81 before tuning; 0.75 and 0.82 after tuning

Convolutional Neural Network (CNN) Modeling

from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, BatchNormalization, Activation
from keras import optimizers
from keras.applications.vgg19 import VGG19
from keras.callbacks import EarlyStopping, ReduceLROnPlateau
from keras import backend as K
Using TensorFlow backend.
train["label"] = train["label"].astype(str)
valid["label"] = valid["label"].astype(str)
test["label"] = test["label"].astype(str)

Building data generators to feed images to the CNN in batches

# Rescale pixel values to 0-1 and apply data augmentation: horizontal/vertical flips, rotation, zoom, shifts, etc.
train_datagen = ImageDataGenerator(
    rescale = 1 / 255,
    vertical_flip = True,
    horizontal_flip = True,
    rotation_range = 90,
    zoom_range = 0.2,
    width_shift_range = 0.1,
    height_shift_range = 0.1,
    shear_range = 0.05,
    channel_shift_range = 0.1
)

# Read images in batches according to the paths and labels in the train dataframe
train_generator = train_datagen.flow_from_dataframe(
    dataframe = train, directory = None, x_col = "path", y_col = "label",
    target_size = (96, 96), class_mode = "binary", batch_size = 64, seed = 0, shuffle = True
)

valid_datagen = ImageDataGenerator(rescale = 1 / 255)
valid_generator = valid_datagen.flow_from_dataframe(
    dataframe = valid, directory = None, x_col = "path", y_col = "label",
    target_size = (96, 96), class_mode = "binary", batch_size = 64, seed = 0, shuffle = False
)

test_datagen = ImageDataGenerator(rescale = 1 / 255)
test_generator = test_datagen.flow_from_dataframe(
    dataframe = test, directory = None, x_col = "path", y_col = "label",
    target_size = (96, 96), class_mode = "binary", batch_size = 64, seed = 0, shuffle = False
)
Found 8000 validated image filenames belonging to 2 classes.
Found 2000 validated image filenames belonging to 2 classes.
Found 2000 validated image filenames belonging to 2 classes.
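Because the labels were converted to strings and class_mode = "binary" is used, it can be worth confirming how the generator mapped the two classes to 0/1 (an optional check, not shown in the original post):

print(train_generator.class_indices)  # expected to be {"0": 0, "1": 1}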

Fine-tuning a pretrained VGG19 on this dataset

K.clear_session()

# VGG19 pretrained on ImageNet, without the fully connected layers on top
vgg_conv_base = VGG19(weights = "imagenet", include_top = False, input_shape = (96, 96, 3))

vgg = Sequential()
vgg.add(vgg_conv_base)
vgg.add(Flatten())
vgg.add(Dense(128, use_bias = False))
vgg.add(BatchNormalization())
vgg.add(Activation("relu"))
vgg.add(Dropout(0.5))
vgg.add(Dense(1, activation = "sigmoid"))

Inspect the network structure

vgg_conv_base.summary()
Model: "vgg19"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         (None, 96, 96, 3)         0
_________________________________________________________________
block1_conv1 (Conv2D)        (None, 96, 96, 64)        1792
_________________________________________________________________
block1_conv2 (Conv2D)        (None, 96, 96, 64)        36928
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, 48, 48, 64)        0
_________________________________________________________________
block2_conv1 (Conv2D)        (None, 48, 48, 128)       73856
_________________________________________________________________
block2_conv2 (Conv2D)        (None, 48, 48, 128)       147584
_________________________________________________________________
block2_pool (MaxPooling2D)   (None, 24, 24, 128)       0
_________________________________________________________________
block3_conv1 (Conv2D)        (None, 24, 24, 256)       295168
_________________________________________________________________
block3_conv2 (Conv2D)        (None, 24, 24, 256)       590080
_________________________________________________________________
block3_conv3 (Conv2D)        (None, 24, 24, 256)       590080
_________________________________________________________________
block3_conv4 (Conv2D)        (None, 24, 24, 256)       590080
_________________________________________________________________
block3_pool (MaxPooling2D)   (None, 12, 12, 256)       0
_________________________________________________________________
block4_conv1 (Conv2D)        (None, 12, 12, 512)       1180160
_________________________________________________________________
block4_conv2 (Conv2D)        (None, 12, 12, 512)       2359808
_________________________________________________________________
block4_conv3 (Conv2D)        (None, 12, 12, 512)       2359808
_________________________________________________________________
block4_conv4 (Conv2D)        (None, 12, 12, 512)       2359808
_________________________________________________________________
block4_pool (MaxPooling2D)   (None, 6, 6, 512)         0
_________________________________________________________________
block5_conv1 (Conv2D)        (None, 6, 6, 512)         2359808
_________________________________________________________________
block5_conv2 (Conv2D)        (None, 6, 6, 512)         2359808
_________________________________________________________________
block5_conv3 (Conv2D)        (None, 6, 6, 512)         2359808
_________________________________________________________________
block5_conv4 (Conv2D)        (None, 6, 6, 512)         2359808
_________________________________________________________________
block5_pool (MaxPooling2D)   (None, 3, 3, 512)         0
=================================================================
Total params: 20,024,384
Trainable params: 20,024,384
Non-trainable params: 0
_________________________________________________________________
vgg.summary()
Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
vgg19 (Model)                (None, 3, 3, 512)         20024384
_________________________________________________________________
flatten_1 (Flatten)          (None, 4608)              0
_________________________________________________________________
dense_1 (Dense)              (None, 128)               589824
_________________________________________________________________
batch_normalization_1 (Batch (None, 128)               512
_________________________________________________________________
activation_1 (Activation)    (None, 128)               0
_________________________________________________________________
dropout_1 (Dropout)          (None, 128)               0
_________________________________________________________________
dense_2 (Dense)              (None, 1)                 129
=================================================================
Total params: 20,614,849
Trainable params: 20,614,593
Non-trainable params: 256
_________________________________________________________________

Freeze the weights of all layers before block5_conv1

vgg_conv_base.trainable = True
set_trainable = False

for layer in vgg_conv_base.layers:
    if layer.name == "block5_conv1":
        set_trainable = True
    if set_trainable:
        layer.trainable = True
    else:
        layer.trainable = False

Train the model, using early stopping and learning-rate reduction

%%time
vgg.compile(optimizers.Adam(0.001), loss = "binary_crossentropy", metrics = ["accuracy"])

train_step_size = train_generator.n // train_generator.batch_size
valid_step_size = valid_generator.n // valid_generator.batch_size

earlystopper = EarlyStopping(monitor = "val_loss", patience = 3, verbose = 1, restore_best_weights = True)
reducelr = ReduceLROnPlateau(monitor = "val_loss", patience = 3, verbose = 1, factor = 0.1)

vgg_history = vgg.fit_generator(train_generator, steps_per_epoch = train_step_size, epochs = 10,
                                validation_data = valid_generator, validation_steps = valid_step_size,
                                callbacks = [reducelr, earlystopper], verbose = 1)
Epoch 1/10
125/125 [==============================] - 702s 6s/step - loss: 0.4816 - accuracy: 0.7899 - val_loss: 0.5927 - val_accuracy: 0.8019
Epoch 2/10
125/125 [==============================] - 705s 6s/step - loss: 0.4177 - accuracy: 0.8221 - val_loss: 0.4217 - val_accuracy: 0.7686
Epoch 3/10
125/125 [==============================] - 688s 6s/step - loss: 0.4039 - accuracy: 0.8241 - val_loss: 0.3934 - val_accuracy: 0.8311
Epoch 4/10
125/125 [==============================] - 694s 6s/step - loss: 0.3793 - accuracy: 0.8371 - val_loss: 0.5445 - val_accuracy: 0.8466
Epoch 5/10
125/125 [==============================] - 696s 6s/step - loss: 0.3696 - accuracy: 0.8393 - val_loss: 0.4362 - val_accuracy: 0.7955
Epoch 6/10
125/125 [==============================] - 696s 6s/step - loss: 0.3635 - accuracy: 0.8390 - val_loss: 0.2730 - val_accuracy: 0.7820
Epoch 7/10
125/125 [==============================] - 703s 6s/step - loss: 0.3648 - accuracy: 0.8431 - val_loss: 0.3188 - val_accuracy: 0.8492
Epoch 8/10
125/125 [==============================] - 708s 6s/step - loss: 0.3544 - accuracy: 0.8441 - val_loss: 0.4358 - val_accuracy: 0.8590
Epoch 9/10
125/125 [==============================] - 713s 6s/step - loss: 0.3483 - accuracy: 0.8494 - val_loss: 0.3449 - val_accuracy: 0.8461
Epoch 00009: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
Restoring model weights from the end of the best epoch
Epoch 00009: early stopping
Wall time: 1h 45min 6s
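The returned vgg_history object stores the per-epoch metrics shown in the log above and could be used to plot the learning curves (a minimal sketch, not part of the original post):

hist = vgg_history.history
plt.figure(dpi = 120)
plt.plot(hist["loss"], label = "training loss")
plt.plot(hist["val_loss"], label = "validation loss")
plt.xlabel("epoch")
plt.ylabel("binary cross-entropy")
plt.legend()
plt.show()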

CNN results

%%time
filenames = test_generator.filenames
labels = test_generator.labels

vgg_prob = vgg.predict_generator(test_generator).reshape(-1, )
vgg_pred = (vgg_prob > 0.5) * 1

res = pd.DataFrame(np.c_[filenames, vgg_prob, vgg_pred, labels],
                   columns = ["filenames", "vgg_prob", "vgg_pred", "labels"])
res["vgg_prob"] = res["vgg_prob"].astype("float32")
res["vgg_pred"] = res["vgg_pred"].astype(int)
res["labels"] = res["labels"].astype(int)
Wall time: 1min 53s
res.head()
filenames vgg_prob vgg_pred labels
0 train\a1218ce627e2bd0ec1e9b3a3941a5ceff64aa31a... 0.010656 0 0
1 train\5731b50de8c10cdcdf1fb95aaa8868badc1941eb... 0.019585 0 0
2 train\ee2f4057bbdf19eb8929dc5b442fe60d7ee13a30... 0.097148 0 1
3 train\554660e1400ba3c437ff69e0a933d663d94aa123... 0.011726 0 0
4 train\838f5915c6c8c4051955452b5135832318e64c8c... 0.413153 0 1

The CNN's prediction accuracy is 0.80, and its area under the ROC curve (AUC) is 0.92.

plot_roc(res["labels"], res["vgg_pred"], res["vgg_prob"], label = "VGG19")

Plot six test-set images with their true labels and the CNN's predictions.

labels_dict = {"1": "Positive", "0": "Negative"}

res["labels"] = res["labels"].astype(str).map(labels_dict)
res["vgg_pred"] = res["vgg_pred"].astype(str).map(labels_dict)

fig, ax = plt.subplots(1, 6, figsize = (20, 6), dpi = 100)
fig.suptitle("Histopathologic scans of lymph node sections", fontsize = 20, fontweight = "bold")

num_random = np.random.randint(0, res.shape[0] - 1, size = 6)
for i, j in enumerate(num_random):
    ax[i].imshow(np.asarray(Image.open(res.iloc[j]["filenames"])))
    ax[i].set_xticks([])
    ax[i].set_yticks([])
    ax[i].set_title("Label: " + res.iloc[j]["labels"] + "\n" +
                    "Predictive: " + res.iloc[j]["vgg_pred"], fontweight = "bold")
plt.show()

Summary

  • Trained on 8,000 images, the CNN outperforms Logistic regression, SVM, and RandomForest on the test set in both prediction accuracy and area under the ROC curve
  • Compared with accuracy, the area under the ROC curve better reflects a model's discriminative ability: accuracy depends on the chosen probability cutoff, so different cutoffs yield different accuracies; accuracy is also unreliable on highly imbalanced samples, since if 99% of individuals are positive and 1% negative, simply declaring everyone positive already achieves 0.99 accuracy without any model; the AUC summarizes predictive performance across all probability cutoffs and remains reliable on imbalanced samples (see the small numerical sketch after this list)
  • Because a CNN is far more complex than Logistic regression, SVM, or RandomForest, its predictive ability is less likely to saturate on larger training sets; with enough training data one could even train a CNN from scratch, and its performance would likely be better still
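A small numerical sketch of the class-imbalance point above (entirely hypothetical data, not from this dataset): with 99% positive samples, a "model" that always predicts positive reaches 0.99 accuracy, while its AUC stays at 0.5 because a constant score cannot rank positives above negatives.

import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

y_true = np.array([1] * 99 + [0])  # hypothetical extremely imbalanced labels: 99% positive, 1% negative
y_prob = np.ones(100)              # always predict "positive" with probability 1.0
y_pred = (y_prob > 0.5).astype(int)

print(accuracy_score(y_true, y_pred))  # 0.99 despite no real discrimination
print(roc_auc_score(y_true, y_prob))   # 0.5, the AUC of random guessing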
