Residual Networks (Andrew Ng's course)
# UNQ_C1
# GRADED FUNCTION: identity_block

# Imports used by the three graded functions below
import tensorflow as tf
from tensorflow.keras.layers import (Input, ZeroPadding2D, Conv2D, BatchNormalization,
                                     Activation, Add, MaxPooling2D, AveragePooling2D,
                                     Flatten, Dense)
from tensorflow.keras.initializers import random_uniform, glorot_uniform
from tensorflow.keras.models import Model

def identity_block(X, f, filters, training=True, initializer=random_uniform):
    """
    Implementation of the identity block as defined in Figure 4

    Arguments:
    X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
    f -- integer, specifying the shape of the middle CONV's window for the main path
    filters -- python list of integers, defining the number of filters in the CONV layers of the main path
    training -- True: Behave in training mode
                False: Behave in inference mode
    initializer -- to set up the initial weights of a layer. Equals to random uniform initializer

    Returns:
    X -- output of the identity block, tensor of shape (m, n_H, n_W, n_C)
    """
    # Retrieve Filters
    F1, F2, F3 = filters

    # Save the input value. You'll need this later to add back to the main path.
    X_shortcut = X

    # First component of main path
    X = Conv2D(filters=F1, kernel_size=1, strides=(1, 1), padding='valid',
               kernel_initializer=initializer(seed=0))(X)
    X = BatchNormalization(axis=3)(X, training=training)  # Default axis
    X = Activation('relu')(X)

    ### START CODE HERE
    ## Second component of main path (≈3 lines)
    ## Set the padding = 'same'
    X = Conv2D(filters=F2, kernel_size=(f, f), strides=(1, 1), padding='same',
               kernel_initializer=initializer(seed=0))(X)
    X = BatchNormalization(axis=3)(X, training=training)
    X = Activation('relu')(X)

    ## Third component of main path (≈2 lines)
    ## Set the padding = 'valid'
    X = Conv2D(filters=F3, kernel_size=(1, 1), strides=(1, 1), padding='valid',
               kernel_initializer=initializer(seed=0))(X)
    X = BatchNormalization(axis=3)(X, training=training)

    ## Final step: Add shortcut value to main path, and pass it through a RELU activation (≈2 lines)
    X = Add()([X, X_shortcut])
    X = Activation('relu')(X)
    ### END CODE HERE

    return X
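The defining property of the identity block is that the shortcut is added back *unchanged*, so the block can only be used where input and output shapes already agree (F3 must equal the input channel count, and 'same' padding preserves n_H and n_W). A minimal standalone sketch of this shape invariant (not the graded code; layer sizes here are arbitrary small values chosen for illustration):

```python
import tensorflow as tf
from tensorflow.keras.layers import Conv2D, BatchNormalization, Activation, Add
from tensorflow.keras.initializers import random_uniform

# Minimal identity-block sketch: the unchanged shortcut is added back at the
# end, so every layer in the main path must preserve the overall shape.
def tiny_identity_block(X, f, filters):
    F1, F2, F3 = filters
    X_shortcut = X                                    # saved, re-added at the end
    X = Conv2D(F1, 1, padding='valid',
               kernel_initializer=random_uniform(seed=0))(X)
    X = BatchNormalization(axis=3)(X)
    X = Activation('relu')(X)
    X = Conv2D(F2, f, padding='same',                 # 'same' keeps n_H, n_W
               kernel_initializer=random_uniform(seed=0))(X)
    X = BatchNormalization(axis=3)(X)
    X = Activation('relu')(X)
    X = Conv2D(F3, 1, padding='valid',                # F3 must equal n_C_prev
               kernel_initializer=random_uniform(seed=0))(X)
    X = BatchNormalization(axis=3)(X)
    X = Add()([X, X_shortcut])                        # shapes must agree here
    return Activation('relu')(X)

X = tf.random.normal((1, 4, 4, 8))
out = tiny_identity_block(X, 3, [2, 2, 8])            # F3 = 8 = input channels
print(out.shape)  # (1, 4, 4, 8): identical to the input
```

If F3 differed from the input channel count, the `Add()` call would fail with a shape mismatch; that is exactly the case the convolutional block below handles.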
# UNQ_C2
# GRADED FUNCTION: convolutional_block

def convolutional_block(X, f, filters, s=2, training=True, initializer=glorot_uniform):
    """
    Implementation of the convolutional block as defined in Figure 4

    Arguments:
    X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
    f -- integer, specifying the shape of the middle CONV's window for the main path
    filters -- python list of integers, defining the number of filters in the CONV layers of the main path
    s -- Integer, specifying the stride to be used
    training -- True: Behave in training mode
                False: Behave in inference mode
    initializer -- to set up the initial weights of a layer. Equals to Glorot uniform initializer, also called Xavier uniform initializer.

    Returns:
    X -- output of the convolutional block, tensor of shape (m, n_H, n_W, n_C)
    """
    # Retrieve Filters
    F1, F2, F3 = filters

    # Save the input value
    X_shortcut = X

    ##### MAIN PATH #####
    # First component of main path
    X = Conv2D(filters=F1, kernel_size=1, strides=(s, s), padding='valid',
               kernel_initializer=initializer(seed=0))(X)
    X = BatchNormalization(axis=3)(X, training=training)
    X = Activation('relu')(X)

    ### START CODE HERE
    ## Second component of main path (≈3 lines)
    X = Conv2D(F2, (f, f), strides=(1, 1), padding='same',
               kernel_initializer=initializer(seed=0))(X)
    X = BatchNormalization(axis=3)(X, training=training)
    X = Activation('relu')(X)

    ## Third component of main path (≈2 lines)
    X = Conv2D(F3, (1, 1), strides=(1, 1), padding='valid',
               kernel_initializer=initializer(seed=0))(X)
    X = BatchNormalization(axis=3)(X, training=training)

    ##### SHORTCUT PATH ##### (≈2 lines)
    X_shortcut = Conv2D(F3, (1, 1), strides=(s, s), padding='valid',
                        kernel_initializer=initializer(seed=0))(X_shortcut)
    X_shortcut = BatchNormalization(axis=3)(X_shortcut, training=training)
    ### END CODE HERE

    # Final step: Add shortcut value to main path (Use this order [X, X_shortcut]), and pass it through a RELU activation
    X = Add()([X, X_shortcut])
    X = Activation('relu')(X)

    return X
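The difference from the identity block is the projection on the shortcut path: with stride s the main path shrinks the spatial size and ends with F3 channels, so the shortcut must go through its own strided 1x1 conv before the Add. A self-contained sketch of just that shape alignment (illustrative sizes, not the graded code):

```python
import tensorflow as tf
from tensorflow.keras.layers import Conv2D, Activation, Add
from tensorflow.keras.initializers import glorot_uniform

# With stride 2, a 'valid' 1x1 conv maps spatial size n to (n - 1)//2 + 1.
# Both branches must apply the same stride and output the same channel count
# (F3) for the element-wise Add to be valid.
X = tf.random.normal((1, 4, 4, 3))
main = Conv2D(8, 1, strides=2, padding='valid',
              kernel_initializer=glorot_uniform(seed=0))(X)
shortcut = Conv2D(8, 1, strides=2, padding='valid',
                  kernel_initializer=glorot_uniform(seed=0))(X)
out = Activation('relu')(Add()([main, shortcut]))
print(out.shape)  # (1, 2, 2, 8): both branches halved and projected to 8 channels
```

Without the shortcut conv, the Add would try to combine a (1, 2, 2, 8) tensor with the original (1, 4, 4, 3) input and fail.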
# UNQ_C3
# GRADED FUNCTION: ResNet50

def ResNet50(input_shape=(64, 64, 3), classes=6):
    """
    Stage-wise implementation of the architecture of the popular ResNet50:
    CONV2D -> BATCHNORM -> RELU -> MAXPOOL -> CONVBLOCK -> IDBLOCK*2 -> CONVBLOCK -> IDBLOCK*3
    -> CONVBLOCK -> IDBLOCK*5 -> CONVBLOCK -> IDBLOCK*2 -> AVGPOOL -> FLATTEN -> DENSE

    Arguments:
    input_shape -- shape of the images of the dataset
    classes -- integer, number of classes

    Returns:
    model -- a Model() instance in Keras
    """
    # Define the input as a tensor with shape input_shape
    X_input = Input(input_shape)

    # Zero-Padding
    X = ZeroPadding2D((3, 3))(X_input)

    # Stage 1
    X = Conv2D(64, (7, 7), strides=(2, 2), kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3)(X)
    X = Activation('relu')(X)
    X = MaxPooling2D((3, 3), strides=(2, 2))(X)

    # Stage 2
    X = convolutional_block(X, f=3, filters=[64, 64, 256], s=1)
    X = identity_block(X, 3, [64, 64, 256])
    X = identity_block(X, 3, [64, 64, 256])

    ### START CODE HERE
    ## Stage 3 (≈4 lines)
    X = convolutional_block(X, f=3, filters=[128, 128, 512], s=2)
    X = identity_block(X, 3, [128, 128, 512])
    X = identity_block(X, 3, [128, 128, 512])
    X = identity_block(X, 3, [128, 128, 512])

    ## Stage 4 (≈6 lines)
    X = convolutional_block(X, f=3, filters=[256, 256, 1024], s=2)
    X = identity_block(X, 3, [256, 256, 1024])
    X = identity_block(X, 3, [256, 256, 1024])
    X = identity_block(X, 3, [256, 256, 1024])
    X = identity_block(X, 3, [256, 256, 1024])
    X = identity_block(X, 3, [256, 256, 1024])

    ## Stage 5 (≈3 lines)
    X = convolutional_block(X, f=3, filters=[512, 512, 2048], s=2)
    X = identity_block(X, 3, [512, 512, 2048])
    X = identity_block(X, 3, [512, 512, 2048])

    ## AVGPOOL (≈1 line). Use "X = AveragePooling2D(...)(X)"
    X = AveragePooling2D(pool_size=(2, 2), padding='same')(X)
    ### END CODE HERE

    # output layer
    X = Flatten()(X)
    X = Dense(classes, activation='softmax', kernel_initializer=glorot_uniform(seed=0))(X)

    # Create model
    model = Model(inputs=X_input, outputs=X)

    return model
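As a sanity check on the stage layout, the spatial size of a 64x64 input can be traced by hand through the network. Each strided 'valid' conv or pool maps size n to (n - k)//s + 1, and the 'same'-padded average pool divides by its pool size. A small arithmetic sketch (assuming the default input_shape above):

```python
# Trace of the spatial dimension through ResNet50 for a 64x64 input.
# 'valid' conv/pool of kernel k and stride s: out = (n - k)//s + 1
def conv_out(n, k, s):
    return (n - k) // s + 1

n = 64 + 2 * 3          # ZeroPadding2D((3, 3))     -> 70
n = conv_out(n, 7, 2)   # Stage 1: conv 7x7, s=2    -> 32
n = conv_out(n, 3, 2)   # Stage 1: max pool 3x3, s=2 -> 15
n = conv_out(n, 1, 1)   # Stage 2: conv block, s=1  -> 15
n = conv_out(n, 1, 2)   # Stage 3: conv block, s=2  -> 8
n = conv_out(n, 1, 2)   # Stage 4: conv block, s=2  -> 4
n = conv_out(n, 1, 2)   # Stage 5: conv block, s=2  -> 2
n = n // 2              # AveragePooling2D(2, 'same') -> 1
print(n)  # 1
```

So Flatten receives a 1x1x2048 tensor, i.e. a 2048-dimensional feature vector per image, which the final Dense softmax maps to the 6 classes.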