This is a challenge from RSNA 2017; I downloaded the dataset from Kaggle.

RSNA Bone Age | Kaggle: https://www.kaggle.com/kmader/rsna-bone-age . There is also some code on GitHub, but none of it ran for me because of configuration problems. I also referred to the article 【蓝蜗牛】骨龄检测(一) (CSDN blog): https://blog.csdn.net/weixin_43346901/article/details/99678300
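For reference, the training archive is just a folder of PNGs named <id>.png plus a CSV with columns id, boneage (in months) and male (a boolean). A quick sanity check along these lines can confirm the layout before training (a sketch; adjust the path to wherever the archive was extracted):

import os
import pandas as pd

archive = '/home/user/X-ray/archive/'  # adjust to your extraction path
labels = pd.read_csv(os.path.join(archive, 'boneage-training-dataset.csv'))
print(labels.columns.tolist())         # expected: ['id', 'boneage', 'male']
print(len(labels), 'labelled studies')
missing = [i for i in labels['id']
           if not os.path.exists(os.path.join(archive, 'boneage-training-dataset', str(i) + '.png'))]
print(len(missing), 'images missing from the folder')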

Below is the code I wrote after consulting these references.

First, the Xception network.

from tensorflow.keras.models import Model
from keras_preprocessing.image import ImageDataGenerator
from tensorflow.keras.layers import Dense, Dropout, BatchNormalization
from tensorflow.keras.layers import Input, Conv2D, multiply, LocallyConnected2D, Lambda, Flatten, concatenate
from tensorflow.keras.layers import GlobalAveragePooling2D, AveragePooling2D, MaxPooling2D
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping, ReduceLROnPlateau
from tensorflow.keras import optimizers
from tensorflow.keras.metrics import mean_absolute_error
from tensorflow.keras.applications import Xception
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
import os
import matplotlib.pyplot as plt
# import seaborn as sns
import tensorflow as tf

os.environ["CUDA_VISIBLE_DEVICES"] = '1'  # use GPU with ID=1
config = tf.compat.v1.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.5  # allocate at most 50% of GPU memory
config.gpu_options.allow_growth = True  # allocate dynamically
sess = tf.compat.v1.Session(config=config)

# %%
EPOCHS = 30
LEARNING_RATE = 0.001
BATCH_SIZE_TRAIN = 16  # if OOM occurs, reduce this value (e.g. to 8)
BATCH_SIZE_VAL = 16

# Image parameters
PIXELS = 299  # Xception input size
CHANNELS = 3
IMG_SIZE = (PIXELS, PIXELS)
IMG_DIMS = (PIXELS, PIXELS, CHANNELS)
VALIDATION_FRACTION = 0.25
SEED = 1234

# %%
# Read the data
path = '/home/user/X-ray/archive/'
train_path = path + 'boneage-training-dataset/'  # image folder
# test_path = path + 'boneage-test-dataset/'
df = pd.read_csv(path + 'boneage-training-dataset.csv')  # CSV annotation file
files = [train_path + str(i) + '.png' for i in df['id']]
df['file'] = files
df['exists'] = df['file'].map(os.path.exists)

boneage_mean = df['boneage'].mean()
boneage_div = 2 * df['boneage'].std()
df['boneage_zscore'] = df['boneage'].map(lambda x: (x - boneage_mean) / boneage_div)
df.dropna(inplace=True)
df['gender'] = df['male'].map(lambda x: 1 if x else 0)
df['boneage_category'] = pd.cut(df['boneage'], 10)
# Examine the distribution of age and gender
print("{} images found out of total {} images".format(df['exists'].sum(), df.shape[0]))
column_headers = list(df.columns.values)
print("column_handers = ", column_headers)  # 列标签
print(df.sample(5))  # csv中随机抽取5行# 导出数据 生成csv
test = pd.DataFrame(columns=column_headers, data=df)
test.to_csv('/home/user/tanminhui/X-ray/archive/test_df.csv')  # 如果生成excel,可以用to_excel
# raw_train_df, test_df = train_test_split(df,
#                                          test_size=0.2,
#                                          random_state=2018,
#                                          stratify=df['boneage_category'])
# raw_train_df, valid_df = train_test_split(raw_train_df,
#                                           test_size=0.1,
#                                           random_state=2018,
#                                           stratify=raw_train_df['boneage_category'])
raw_train_df, raw_valid_df = train_test_split(df, test_size=0.25, random_state=1234, stratify=df['boneage_category'])
train_df = raw_train_df.groupby(['boneage_category', 'male']).apply(lambda x: x.sample(500, replace=True)).reset_index(drop=True)
valid_df, test_df = train_test_split(raw_valid_df, test_size=0.25, random_state=1234)

raw_train_df_size = raw_train_df.shape[0]
valid_size = valid_df.shape[0]
test_size = test_df.shape[0]
print("# Training images:   {}".format(raw_train_df))
print("# Validation images: {}".format(valid_size))
print("# Testing images:    {}".format(test_size))optim = optimizers.Nadam(lr=LEARNING_RATE, beta_1=0.9, beta_2=0.999, epsilon=1e-08, schedule_decay=0.0003)
weight_path = "{}_weights.best.hdf5".format('bone_age')
checkpoint = ModelCheckpoint(weight_path, monitor='val_loss', verbose=1, save_best_only=True, mode='min', save_weights_only=True)
reduceLROnPlat = ReduceLROnPlateau(monitor='val_loss', factor=0.8, patience=3, verbose=1, mode='auto', min_delta=0.0001, cooldown=5, min_lr=0.00006)
early = EarlyStopping(monitor="val_loss", mode="min", patience=6)
callbacks_list = [checkpoint, early, reduceLROnPlat]


def gen_2inputs(imgDatGen, df, batch_size, seed, img_size):
    gen_img = imgDatGen.flow_from_dataframe(dataframe=df,
                                            x_col='file', y_col='boneage_zscore',
                                            batch_size=batch_size, seed=seed, shuffle=True, class_mode='other',
                                            target_size=img_size, color_mode='rgb',
                                            drop_duplicates=False)
    gen_gender = imgDatGen.flow_from_dataframe(dataframe=df,
                                               x_col='file', y_col='gender',
                                               batch_size=batch_size, seed=seed, shuffle=True, class_mode='other',
                                               target_size=img_size, color_mode='rgb',
                                               drop_duplicates=False)
    while True:
        X1i = gen_img.next()
        X2i = gen_gender.next()
        yield [X1i[0], X2i[1]], X1i[1]


def test_gen_2inputs(imgDatGen, df, batch_size, img_size):
    gen_img = imgDatGen.flow_from_dataframe(dataframe=df,
                                            x_col='file', y_col='boneage_zscore',
                                            batch_size=batch_size, shuffle=False, class_mode='other',
                                            target_size=img_size, color_mode='rgb',
                                            drop_duplicates=False)
    gen_gender = imgDatGen.flow_from_dataframe(dataframe=df,
                                               x_col='file', y_col='gender',
                                               batch_size=batch_size, shuffle=False, class_mode='other',
                                               target_size=img_size, color_mode='rgb',
                                               drop_duplicates=False)
    while True:
        X1i = gen_img.next()
        X2i = gen_gender.next()
        yield [X1i[0], X2i[1]], X1i[1]


train_idg = ImageDataGenerator(zoom_range=0.2,
                               fill_mode='nearest',
                               rotation_range=25,
                               width_shift_range=0.25,
                               height_shift_range=0.25,
                               vertical_flip=False,
                               horizontal_flip=True,
                               shear_range=0.2,
                               samplewise_center=False,
                               samplewise_std_normalization=False)
val_idg = ImageDataGenerator(width_shift_range=0.25,
                             height_shift_range=0.25,
                             horizontal_flip=True)
test_idg = ImageDataGenerator()

train_flow = gen_2inputs(train_idg, train_df, BATCH_SIZE_TRAIN, SEED, IMG_SIZE)
valid_flow = gen_2inputs(val_idg, valid_df, BATCH_SIZE_VAL, SEED, IMG_SIZE)
test_flow = test_gen_2inputs(test_idg, test_df, 500, IMG_SIZE)


def mae_months(in_gt, in_pred):
    return mean_absolute_error(boneage_div * in_gt, boneage_div * in_pred)


# Build the convolutional neural network
in_layer_img = Input(shape=IMG_DIMS, name='input_img')
in_layer_gender = Input(shape=(1,), name='input_gender')

base = Xception(input_shape=IMG_DIMS, weights='imagenet', include_top=False)
base_out = base(in_layer_img)
# base = GlobalAveragePooling2D()(base)
base = Dropout(0.5)(base_out)
bn_base = BatchNormalization()(base)

con_layer = Conv2D(512, kernel_size=(1, 1), padding='same', activation='relu')(bn_base)
con_layer = Dropout(0.5)(con_layer)  # if this does not run, comment these lines out
con_layer = Conv2D(512, kernel_size=(1, 1), padding='same', activation='relu')(con_layer)  # 64
con_layer = Dropout(0.5)(con_layer)  # if this does not run, comment these lines out

feature_img = GlobalAveragePooling2D()(con_layer)
feature_gender = Dense(32, activation='relu')(in_layer_gender)
feature = concatenate([feature_img, feature_gender], axis=1)

out = Dense(512, activation='relu')(feature)
out = Dropout(0.5)(out)
out = Dense(512, activation='relu')(out)
out = Dropout(0.5)(out)
out = Dense(1, activation='linear')(out)

model = Model(inputs=[in_layer_img, in_layer_gender], outputs=out)
model.compile(loss='mean_absolute_error', optimizer=optim, metrics=[mae_months])
model.summary()

# from keras.utils import plot_model
# plot_model(model, show_shapes=True, to_file='model.png')

BATCH_SIZE_TEST = len(test_df) // 3
STEP_SIZE_TEST = 3
STEP_SIZE_TRAIN = len(train_df) // BATCH_SIZE_TRAIN
STEP_SIZE_VALID = len(valid_df) // BATCH_SIZE_VAL

model_history = model.fit_generator(generator=train_flow,
                                    steps_per_epoch=STEP_SIZE_TRAIN,
                                    validation_data=valid_flow,
                                    validation_steps=STEP_SIZE_VALID,
                                    epochs=EPOCHS,
                                    callbacks=callbacks_list)

loss_history = model_history.history['loss']
history_df = pd.DataFrame.from_dict(model_history.history)
history_df.to_csv('/home/user/X-ray/xception_loss_history.csv')

# %%
# Testing
model.load_weights("bone_age_weights.best.hdf5")
test_X, test_Y = next(test_flow)

# ------------------------------------------------------------------------------------------------
# column_headers_X = list(test_X.columns.values)
# print("column_headers = ", column_headers_X)  # column labels
# print(test_X.sample(5))  # randomly sample 5 rows
# # Export the data to a CSV file
# test_X_CSV = pd.DataFrame(columns=column_headers_X, data=test_X)
# test_X_CSV.to_csv('D:/test_X_CSV.csv')  # use to_excel instead to write an Excel file
#
# column_headers_Y = list(test_Y.columns.values)
# print("column_headers = ", column_headers_Y)  # column labels
# print(test_Y.sample(5))  # randomly sample 5 rows
# # Export the data to a CSV file
# test_Y_CSV = pd.DataFrame(columns=column_headers_Y, data=test_Y)
# test_Y_CSV.to_csv('D:/test_Y_CSV.csv')  # use to_excel instead to write an Excel file
# --------------------------------------------------------------------------------------------------------------------
# plt.style.use("dark_background")
plt.rcParams['font.family'] = 'sans-serif'
plt.rcParams['font.sans-serif'] = 'DejaVu Sans'
pred_Y = boneage_div * model.predict(test_X, batch_size=16, verbose=True) + boneage_mean
test_Y_months = boneage_div * test_Y + boneage_mean

from sklearn.metrics import mean_absolute_error as sk_mae
print("Mean absolute error on test data: " + str(sk_mae(test_Y_months, pred_Y)))

# # Export the data to a CSV file
# column_headers = list(test_Y.columns.values)
# column_headers.append('test_Y_months', 'pred_Y')
# data_data = [test_Y, test_Y_months, pred_Y]
# test_pred = pd.DataFrame(columns=column_headers, data=data_data)
# test_pred.to_csv('D:/test_pred.csv')  # use to_excel instead to write an Excel file

fig, ax1 = plt.subplots(1, 1, figsize=(6, 6))
ax1.plot(test_Y_months, pred_Y, 'b+', label='predictions')
ax1.plot(test_Y_months, test_Y_months, 'r-', label='actual')
ax1.legend()
ax1.set_xlabel('Actual Age (Months)')
ax1.set_ylabel('Predicted Age (Months)')
fig.savefig('test_result.png', dpi=300)

# test_X: [4-D image array, gender array]
print('test_X', test_X)
print('test_Y', test_Y)
print('pred_Y', pred_Y)
print('test_Y_months', test_Y_months)

ord_idx = np.argsort(test_Y)
ord_idx = ord_idx[np.linspace(0, len(ord_idx) - 1, num=8).astype(int)]  # take 8 evenly spaced ones
fig, m_axs = plt.subplots(2, 4, figsize=(16, 32))
for (idx, c_ax) in zip(ord_idx, m_axs.flatten()):
    c_ax.imshow(test_X[0][idx, :, :, 0], cmap='bone')
    title = 'Age: %2.1f\nPredicted Age: %2.1f\nGender: ' % (test_Y_months[idx], pred_Y[idx])
    if test_X[1][idx] == 0:
        title += "Female\n"
    else:
        title += "Male\n"
    c_ax.set_title(title)
    c_ax.axis('off')
plt.show()
model.save('model_xception.h5')  # HDF5 file
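After training, a quick single-image check is handy. Below is a minimal inference sketch, assuming boneage_mean, boneage_div, train_path and IMG_SIZE are still defined in the session; the file name is only a placeholder, and the pixel values are left in the raw 0-255 range because the generators above do not rescale them either.

import numpy as np
from tensorflow.keras.preprocessing.image import load_img, img_to_array

sample_path = train_path + '1377.png'              # placeholder: any id from the training folder
x_img = np.expand_dims(img_to_array(load_img(sample_path, target_size=IMG_SIZE)), axis=0)
x_gender = np.array([[1]])                         # 1 = male, 0 = female (same encoding as df['gender'])

z_pred = model.predict([x_img, x_gender])          # the model outputs a bone-age z-score
months = boneage_div * z_pred[0, 0] + boneage_mean # undo the normalisation from above
print('Predicted bone age: %.1f months' % months)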

Next, the Inception_v3 network.

import numpy as np
import pandas as pd
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # which gpu to use
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
from sklearn.model_selection import train_test_split
from tensorflow.keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img
from tensorflow.keras.applications.inception_v3 import InceptionV3, preprocess_input
from tensorflow.keras.layers import Input, GlobalAveragePooling2D, Dense, Dropout, Flatten, Concatenate
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.metrics import mean_absolute_error
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping, ReduceLROnPlateau
from sklearn.metrics import mean_absolute_error as sk_mae
import matplotlib.pyplot as plt
import tensorflow as tf

config = tf.compat.v1.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.compat.v1.Session(config=config)
# from keras.backend.tensorflow_backend import set_session
# from tensorflow.compat.v1.keras.backend import set_session
# config = tf.ConfigProto()
# config.gpu_options.allow_growth = True  # don't allocate entire vram initially
# set_session(tf.Session(config=config))
# Reading data
print("Reading data...")
img_dir = "/home/user/X-ray/archive/boneage-training-dataset/"
csv_path = "/home/user/X-ray/archive/boneage-training-dataset.csv"
age_df = pd.read_csv(csv_path)
age_df['path'] = age_df['id'].map(lambda x: img_dir + "{}.png".format(x))
age_df['exists'] = age_df['path'].map(os.path.exists)
age_df['gender'] = age_df['male'].map(lambda x: "male" if x else "female")
mu = age_df['boneage'].mean()
sigma = age_df['boneage'].std()
age_df['zscore'] = age_df['boneage'].map(lambda x: (x - mu) / sigma)
age_df.dropna(inplace=True)

# # Check the image size: 1514*2044
# from PIL import Image
# file_path = 'D:/X射线骨龄预测/archive/boneage-training-dataset/1500.png'
# img = Image.open(file_path)
# imgSize = img.size  # size
# w = img.width  # image width
# h = img.height  # image height
# f = img.format  # image format
# print(imgSize)
# print(w, h, f)

# Examine the distribution of age and gender
print("{} images found out of total {} images".format(age_df['exists'].sum(), age_df.shape[0]))
column_headers = list(age_df.columns.values)
print("column_handers = ", column_headers)  # 列标签
print(age_df.sample(5))  # csv中随机抽取5行# # 导出数据 生成csv
# test = pd.DataFrame(columns=column_headers, data=age_df)
# test.to_csv('D:/test.csv')  # 如果生成excel,可以用to_excel# # # 绘出均衡前后骨龄的数量直方图
# # age_df[['boneage', 'zscore']].hist()
# # plt.xlabel('bone age')
# # plt.ylabel('Number of samples')
# # plt.show()
# # # Male/female histogram
# # age_df[['male']]= age_df[['male']].astype(int)
# # age_df[['male']].hist(figsize=(10, 5))
# # plt.show()

# # Get a particular column of age_df
# age_df_col_path_zscore = age_df[['path', 'zscore']]
# age_df_path_zscore = np.array(age_df_col_path_zscore)
# print('age_df_col_path_zscore\n ', age_df_col_path_zscore)
# # age_df[['boneage','male','zscore']].hist()
# # plt.show()
print("Reading complete !!!\n")# Split into training testing and validation datasets
print("Preparing training, testing and validation datasets ...")
age_df['boneage_category'] = pd.cut(age_df['boneage'], 10)  # bin the bone ages into 10 categories
raw_train_df, test_df = train_test_split(age_df,
                                         test_size=0.2,
                                         random_state=2018,
                                         stratify=age_df['boneage_category'])
raw_train_df, valid_df = train_test_split(raw_train_df,
                                          test_size=0.1,
                                          random_state=2018,
                                          stratify=raw_train_df['boneage_category'])
# # raw_train_df[['boneage']].hist(figsize=(10, 5))  # plot (before balancing)
# # plt.xlabel('bone age')
# # plt.ylabel('Number of samples')
# # plt.show()
# # # Male/female histogram
# # raw_train_df[['male']] = raw_train_df[['male']].astype(int)
# # raw_train_df[['male']].hist(figsize=(10, 5))  # plot (before balancing)
# # plt.show()
# #
# raw_train_df_size = raw_train_df.shape[0]
# valid_size = valid_df.shape[0]
# test_size = test_df.shape[0]
# print("# Training images:   {}".format(raw_train_df))
# print("# Validation images: {}".format(valid_size))
# print("# Testing images:    {}".format(test_size))
# Training images: 9076 | Validation images: 1009 | Test images: 2523

# Balance the distribution in the training set
# raw_train_df has 10 boneage_category classes and 2 male classes, i.e. 20 combinations; sampling 500 from each (with replacement) gives 20 * 500 = 10000 samples
train_df = raw_train_df.groupby(['boneage_category', 'male']).apply(lambda x: x.sample(500, replace=True)).reset_index(drop=True)
# print(train_df.sample(5))
# train_df[['boneage']].hist(figsize=(10, 5))  # plot (after balancing)
# plt.title('Equalized image')
# plt.xlabel('bone age')
# plt.ylabel('Number of samples')
# plt.show()
# # Male/female histogram
# train_df[['male']] = train_df[['male']].astype(int)
# train_df[['male']].hist(figsize=(10, 5))  # plot (after balancing)
# plt.title('Equalized male')
# plt.show()
# # Training images: 10000 | Validation images: 1009 | Test images: 2523

train_size = train_df.shape[0]
valid_size = valid_df.shape[0]
test_size = test_df.shape[0]
print("# Training images:   {}".format(train_size))
print("# Validation images: {}".format(valid_size))
print("# Testing images:    {}".format(test_size))# Make training, validation and testing dataset
IMG_SIZE = (299, 299)  # default size for inception_v3img_data_gen = ImageDataGenerator(samplewise_center=False,samplewise_std_normalization=False,horizontal_flip=True,vertical_flip=False,height_shift_range=0.25,width_shift_range=0.25,rotation_range=25,shear_range=0.2,fill_mode='reflect',zoom_range=0.2,preprocessing_function=preprocess_input)def gen_2inputs(imgDatGen, df, batch_size, seed, img_size):gen_img = imgDatGen.flow_from_dataframe(dataframe=df,x_col='path', y_col='zscore',batch_size=batch_size, seed=seed, shuffle=True, class_mode='raw',target_size=img_size, color_mode='rgb')gen_gender = imgDatGen.flow_from_dataframe(dataframe=df,x_col='path', y_col='male',batch_size=batch_size, seed=seed, shuffle=True, class_mode='raw',target_size=img_size, color_mode='rgb')while True:X1i = gen_img.next()X2i = gen_gender.next()yield [X1i[0], X2i[1]], X1i[1]def test_gen_2inputs(imgDatGen, df, batch_size, img_size):gen_img = imgDatGen.flow_from_dataframe(dataframe=df,x_col='path', y_col='zscore',batch_size=batch_size, shuffle=False, class_mode='raw',target_size=img_size, color_mode='rgb')gen_gender = imgDatGen.flow_from_dataframe(dataframe=df,x_col='path', y_col='male',batch_size=batch_size, shuffle=False, class_mode='raw',target_size=img_size, color_mode='rgb')while True:X1i = gen_img.next()X2i = gen_gender.next()yield [X1i[0], X2i[1]], X1i[1]BATCH_SIZE_TRAIN = 16
SEED = 8309
BATCH_SIZE_VAL = 16
train_flow = gen_2inputs(img_data_gen, train_df, BATCH_SIZE_TRAIN, SEED, IMG_SIZE)
valid_flow = gen_2inputs(img_data_gen, valid_df, BATCH_SIZE_VAL, SEED, IMG_SIZE)
test_flow = test_gen_2inputs(img_data_gen, test_df, test_size, IMG_SIZE)

# Model definition
print("Compiling deep model ...")
IMG_SHAPE = (299, 299, 3)
# 1. Two inputs: the raw image and the gender
img = Input(shape=IMG_SHAPE)
gender = Input(shape=(1,))
# 2. Pretrained backbone (image branch)
cnn_vec = InceptionV3(input_shape=IMG_SHAPE, include_top=False, weights='imagenet')(img)
# 3. Backbone output
cnn_vec = GlobalAveragePooling2D()(cnn_vec)
cnn_vec = Dropout(0.2)(cnn_vec)
# 4. Gender input branch
gender_vec = Dense(32, activation='relu')(gender)
# 5. Concatenate the two branches
features = Concatenate(axis=-1)([cnn_vec, gender_vec])

dense_layer = Dense(512, activation='relu')(features)
dense_layer = Dropout(0.2)(dense_layer)
dense_layer = Dense(512, activation='relu')(dense_layer)
dense_layer = Dropout(0.2)(dense_layer)
output_layer = Dense(1, activation='linear')(dense_layer)  # linear is what 16bit did

bone_age_model = Model(inputs=[img, gender], outputs=output_layer)

# # VGG Model definition
# print("Compiling deep model ...")
# img = Input(shape=IMG_SHAPE)
# gender = Input(shape=(1,))
# cnn_vec = VGG16(input_shape=IMG_SHAPE, include_top=False, weights='imagenet')(img)
# cnn_vec = GlobalAveragePooling2D()(cnn_vec)
# cnn_vec = Dropout(0.2)(cnn_vec)
# gender_vec = Dense(32, activation='relu')(gender)
# features = Concatenate(axis=-1)([cnn_vec, gender_vec])
# dense_layer = Dense(1024, activation='relu')(features)
# dense_layer = Dropout(0.2)(dense_layer)
# dense_layer = Dense(1024, activation='relu')(dense_layer)
# dense_layer = Dropout(0.2)(dense_layer)
# output_layer = Dense(1, activation='linear')(dense_layer)  # linear is what 16bit did
# bone_age_model = Model(inputs=[img, gender], outputs=output_layer)


# Compile the model
def mae_months(in_gt, in_pred):
    return mean_absolute_error(mu + sigma * in_gt, mu + sigma * in_pred)


bone_age_model.compile(optimizer='adam', loss='mse', metrics=[mae_months])
bone_age_model.summary()
print("Model compiled !!!\n")

# Training the deep model
print("Training deep model ...")
# Step sizes; a larger step size acts like a higher effective learning rate
EPOCHS = 30
BATCH_SIZE_TEST = len(test_df) // 3
STEP_SIZE_TEST = 3
STEP_SIZE_TRAIN = len(train_df) // BATCH_SIZE_TRAIN
STEP_SIZE_VALID = len(valid_df) // BATCH_SIZE_VAL
from tensorflow.keras import optimizers
# Model Callbacks
weight_path = "{}_weights.best.hdf5".format('bone_age')
checkpoint = ModelCheckpoint(weight_path, monitor='val_loss', verbose=1, save_best_only=True, mode='min',save_weights_only=True)
# tf.keras' Nadam takes learning_rate and has no schedule_decay argument
optim = optimizers.Nadam(learning_rate=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08)
reduceLROnPlat = ReduceLROnPlateau(monitor='val_loss', factor=0.8, patience=3, verbose=1, mode='auto', min_delta=0.0001, cooldown=5, min_lr=0.0006)
early = EarlyStopping(monitor="val_loss", mode="min", patience=10)  # probably needs to be more patient, but kaggle time is limited
callbacks_list = [checkpoint, early, reduceLROnPlat]

bone_age_model_history = bone_age_model.fit_generator(generator=train_flow,
                                                      steps_per_epoch=STEP_SIZE_TRAIN,
                                                      validation_data=valid_flow,
                                                      validation_steps=STEP_SIZE_VALID,
                                                      epochs=EPOCHS,
                                                      callbacks=callbacks_list)

# Save the training loss
loss_history = bone_age_model_history.history['loss']
history_df = pd.DataFrame.from_dict(bone_age_model_history.history)
history_df.to_csv('/home/user/X-ray/incption_v3_loss_history1r.csv')

bone_age_model.load_weights("bone_age_weights.best.hdf5")
print("Training complete !!!\n")

# Evaluate the model on the test dataset
print("Evaluating model on test data ...\n")
print("Preparing testing dataset...")
test_X, test_Y = next(test_flow)  # one big batch
print("Data prepared !!!")
pred_Y = mu + sigma * bone_age_model.predict(x=test_X, batch_size=16, verbose=1) # 25
test_Y_months = mu + sigma * test_Y
print("Mean absolute error on test data: " + str(sk_mae(test_Y_months, pred_Y)))fig, ax1 = plt.subplots(1, 1, figsize=(6, 6))
ax1.plot(test_Y_months, pred_Y, 'r.', label='predictions')
ax1.plot(test_Y_months, test_Y_months, 'b-', label='actual')
ax1.legend()
ax1.set_xlabel('Actual Age (Months)')
ax1.set_ylabel('Predicted Age (Months)')

ord_idx = np.argsort(test_Y)
ord_idx = ord_idx[np.linspace(0, len(ord_idx) - 1, num=8).astype(int)]  # take 8 evenly spaced ones
fig, m_axs = plt.subplots(2, 4, figsize=(16, 32))
for (idx, c_ax) in zip(ord_idx, m_axs.flatten()):
    c_ax.imshow(test_X[0][idx, :, :, 0], cmap='bone')
    title = 'Age: %2.1f\nPredicted Age: %2.1f\nGender: ' % (test_Y_months[idx], pred_Y[idx])
    if test_X[1][idx] == 0:
        title += "Female\n"
    else:
        title += "Male\n"
    c_ax.set_title(title)
    c_ax.axis('off')
plt.show()
bone_age_model.save('bone_age_model_inception.h5')  # HDF5 file
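One caveat when reloading the saved .h5 files later: both models were compiled with the custom mae_months metric, so load_model needs it passed via custom_objects (and mae_months itself closes over mu/sigma, which would have to be redefined in a fresh session). A minimal sketch:

from tensorflow.keras.models import load_model

# mu and sigma must be recomputed (or hard-coded) in a new session before this works,
# since mae_months refers to them.
reloaded = load_model('bone_age_model_inception.h5',
                      custom_objects={'mae_months': mae_months})
reloaded.summary()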
