Implementing 1D-CNN, 2D-CNN, LeNet-5, and VGGNet-16 with TensorFlow 2.x

A 1D-CNN uses one-dimensional convolutions (common for sequence data).
The 2D-CNN here stacks two convolutional layers plus a pooling layer.
LeNet-5 consists of two convolution + pooling stages followed by three fully connected layers.
VGGNet-16 is divided into eight stages in total: five convolutional blocks, each ending in a pooling layer, plus three fully connected layers.
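As a quick sanity check on the shapes in the 1-D model below, the 'valid' convolution and pooling length formulas can be worked through by hand (the helper name here is ours, for illustration only):

```python
def conv_out_len(n, k, stride=1, padding='valid'):
    """Sequence length after a Conv1D/MaxPooling1D layer (Keras semantics)."""
    if padding == 'same':
        return -(-n // stride)  # ceil division
    return (n - k) // stride + 1

# oneD_cNNmodel: 32 input steps, Conv1D(kernel 7) -> 26 steps,
# MaxPooling1D(3) (stride defaults to the pool size) -> 8 steps,
# Conv1D(kernel 7) -> 2 steps, then GlobalAveragePooling1D collapses them.
steps = conv_out_len(32, 7)        # 26
steps = conv_out_len(steps, 3, 3)  # 8
steps = conv_out_len(steps, 7)     # 2
print(steps)  # 2
```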

import time

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# train_x, train_y, test_x, test_y are assumed to be loaded elsewhere as globals.


def LeNet_CNNmodel():
    model = keras.models.Sequential([
        layers.Conv2D(filters=64, kernel_size=(3, 3), padding='same',
                      input_shape=(16, 16, 1), activation='relu'),
        layers.MaxPooling2D(pool_size=(2, 2), padding='same'),
        layers.Conv2D(filters=32, kernel_size=(3, 3), padding='same',
                      activation='relu'),
        layers.MaxPooling2D(pool_size=(2, 2), padding='same'),
        # layers.Dropout(0.25),
        # (5, 5, 16) > 400
        layers.Flatten(),
        layers.Dense(256, activation='relu'),
        # layers.Dropout(0.5),
        # layers.Dense(84, activation='relu'),
        layers.Dense(128, activation='relu'),
        # layers.Dropout(0.5),
        layers.Dense(12, activation='softmax'),
    ])
    # Compile model
    model.compile(loss="sparse_categorical_crossentropy", optimizer='adam',
                  metrics=['accuracy'])
    return model


def LeNet_CNN():
    t1 = time.time()
    model = LeNet_CNNmodel()
    X_train = tf.reshape(train_x, [-1, 16, 16, 1])
    X_test = tf.reshape(test_x, [-1, 16, 16, 1])
    model.summary()
    # nb_epoch was renamed to epochs in Keras 2; class_weight=class_weight
    # could also be passed here to handle imbalanced classes.
    history = model.fit(X_train, train_y, validation_data=(X_test, test_y),
                        epochs=25, batch_size=128, verbose=2)
    scores = model.evaluate(X_test, test_y, verbose=0)
    t2 = time.time()
    pred_y = model.predict(X_test)
    print(scores)
    print("Baseline Error: %.2f%%" % (100 - scores[1] * 100), t2 - t1)
    print(history.history)
    return scores, pred_y


# simple_CNN()
def oneD_cNNmodel():
    model = keras.models.Sequential([
        layers.Conv1D(50, 7, input_shape=(32, 8), activation='relu'),
        layers.MaxPooling1D(3),
        layers.Conv1D(50, 7, activation='relu'),
        layers.GlobalAveragePooling1D(),
        # layers.Dropout(0.5),
        layers.Dense(12, activation='softmax'),
    ])
    model.compile(loss="sparse_categorical_crossentropy", optimizer='adam',
                  metrics=['accuracy'])
    return model


def oneD_cNN():
    t1 = time.time()
    model = oneD_cNNmodel()
    X_train = tf.reshape(train_x, [-1, 32, 8])
    X_test = tf.reshape(test_x, [-1, 32, 8])
    model.summary()
    history = model.fit(X_train, train_y, validation_data=(X_test, test_y),
                        epochs=25, batch_size=128)
    scores = model.evaluate(X_test, test_y, verbose=0)
    t2 = time.time()
    pred_y = model.predict(X_test)
    print(scores)
    print("Baseline Error: %.2f%%" % (100 - scores[1] * 100), t2 - t1)
    print(history.history)
    return scores, pred_y


def two_CNNmodel():
    model = keras.models.Sequential([
        layers.Conv2D(64, kernel_size=(3, 3), padding='same',
                      input_shape=(16, 16, 1), activation='relu'),
        layers.Conv2D(32, kernel_size=(3, 3), padding='same', activation='relu'),
        layers.MaxPooling2D(pool_size=(2, 2), padding='same'),
        layers.Flatten(),
        layers.Dense(128, activation='relu'),
        layers.Dropout(0.5),
        layers.Dense(12, activation='softmax'),
    ])
    model.compile(loss="sparse_categorical_crossentropy", optimizer='adam',
                  metrics=['accuracy'])
    return model


def twoD_CNN():
    t1 = time.time()
    model = two_CNNmodel()
    X_train = tf.reshape(train_x, [-1, 16, 16, 1])
    X_test = tf.reshape(test_x, [-1, 16, 16, 1])
    model.summary()
    history = model.fit(X_train, train_y, validation_data=(X_test, test_y),
                        epochs=25, batch_size=128, verbose=2)
    scores = model.evaluate(X_test, test_y, verbose=0)
    t2 = time.time()
    pred_y = model.predict(X_test)
    print(scores)
    print("Baseline Error: %.2f%%" % (100 - scores[1] * 100), t2 - t1)
    print(history.history)
    return scores, pred_y


def VGGNet16_model():
    model = keras.models.Sequential([
        # block 1
        layers.Conv2D(64, (3, 3), activation='relu', padding='same',
                      input_shape=(16, 16, 1)),
        layers.Conv2D(64, (3, 3), activation='relu', padding='same'),
        layers.MaxPooling2D(pool_size=(2, 2), padding='same'),
        # block 2
        layers.Conv2D(128, (3, 3), activation='relu', padding='same'),
        layers.Conv2D(128, (3, 3), activation='relu', padding='same'),
        layers.MaxPooling2D(pool_size=(2, 2), padding='same'),
        # block 3
        layers.Conv2D(256, (3, 3), activation='relu', padding='same'),
        layers.Conv2D(256, (3, 3), activation='relu', padding='same'),
        layers.Conv2D(256, (3, 3), activation='relu', padding='same'),
        layers.MaxPooling2D(pool_size=(2, 2), padding='same'),
        # block 4
        layers.Conv2D(512, (3, 3), activation='relu', padding='same'),
        layers.Conv2D(512, (3, 3), activation='relu', padding='same'),
        layers.Conv2D(512, (3, 3), activation='relu', padding='same'),
        layers.MaxPooling2D(pool_size=(2, 2), padding='same'),
        # block 5
        layers.Conv2D(512, (3, 3), activation='relu', padding='same'),
        layers.Conv2D(512, (3, 3), activation='relu', padding='same'),
        layers.Conv2D(512, (3, 3), activation='relu', padding='same'),
        layers.MaxPooling2D(pool_size=(2, 2), padding='same'),
        # layers.Dropout(0.25),
        layers.Flatten(),
        layers.Dense(256, activation='relu'),
        # layers.Dropout(0.5),
        # layers.Dense(84, activation='relu'),
        layers.Dense(128, activation='relu'),
        # layers.Dropout(0.5),
        layers.Dense(12, activation='softmax'),
    ])
    # Compile model
    model.compile(loss="sparse_categorical_crossentropy", optimizer='adam',
                  metrics=['accuracy'])
    return model


def VGGNet16():
    t1 = time.time()
    model = VGGNet16_model()  # fixed: the original called the undefined VGG16_Model()
    X_train = tf.reshape(train_x, [-1, 16, 16, 1])
    X_test = tf.reshape(test_x, [-1, 16, 16, 1])
    model.summary()
    history = model.fit(X_train, train_y, validation_data=(X_test, test_y),
                        epochs=25, batch_size=128, verbose=2)
    scores = model.evaluate(X_test, test_y, verbose=0)
    t2 = time.time()
    pred_y = model.predict(X_test)
    print(scores)
    print("Baseline Error: %.2f%%" % (100 - scores[1] * 100), t2 - t1)
    print(history.history)
    return scores, pred_y
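Because every pooling layer in these models uses padding='same' with the default stride, the spatial size halves (with ceiling) at each stage. A short sketch of that arithmetic (the helper name is ours, not from the code above):

```python
import math

def pool_same(n, pool=2):
    """Spatial size after MaxPooling2D(pool, padding='same'), default stride."""
    return math.ceil(n / pool)

# LeNet_CNNmodel: 16 -> 8 -> 4, so Flatten() sees 4*4*32 = 512 features.
# VGGNet16_model: five pools take 16 -> 8 -> 4 -> 2 -> 1 -> 1, so the final
# feature map on these 16x16 inputs is already 1x1x512 before Flatten().
size = 16
for _ in range(5):
    size = pool_same(size)
print(size)  # 1
```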
