Keras Deep Learning Tutorial

What is Keras?

Keras is a high-level neural networks API. It is written in Python and can run on top of TensorFlow, Theano, or CNTK. It was developed with one idea in mind:

Being able to go from idea to result with the least possible delay is key to doing good research.

Keras is a user-friendly, extensible, and modular library which makes prototyping easy and fast. It supports convolutional networks, recurrent networks, and even combinations of the two.

Initial development of Keras was part of the research for project ONEIROS (Open-ended Neuro-Electronic Intelligent Robot Operating System).

Why Keras?

There are countless deep-learning frameworks available today, but there are a few areas in which Keras proves better than the alternatives.

Keras minimizes the number of user actions required for common use cases, and when the user makes an error it provides clear, actionable feedback. This makes Keras easy to learn and use.

When you want to put your Keras models to use in an application, you need to deploy them on other platforms, which is comparatively easy if you are using Keras. It also supports multiple backends and allows portability across backends, i.e. you can train with one backend and load the model with another.
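
As a quick sketch of that portability (assuming h5py is installed; the tiny model and the file name my_model.h5 are purely illustrative), a model trained under one backend can be saved to disk and later reloaded in an environment whose ~/.keras/keras.json points at a different backend:

from keras.models import Sequential, load_model
from keras.layers import Dense

# Any compiled model will do; a tiny one keeps the example short
model = Sequential()
model.add(Dense(1, input_dim=4, activation='sigmoid'))
model.compile(optimizer='sgd', loss='binary_crossentropy')

# Save the architecture, weights, and optimizer state to a single HDF5 file
model.save('my_model.h5')

# Later, possibly with a different backend configured in ~/.keras/keras.json
restored = load_model('my_model.h5')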

It also has strong built-in support for multiple GPUs, and it supports distributed training as well.
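
For instance, Keras 2 provides a multi_gpu_model utility that replicates a model across several GPUs (a minimal sketch, assuming a machine with two GPUs and a Keras 2.x release where this utility is available):

from keras.models import Sequential
from keras.layers import Dense
from keras.utils import multi_gpu_model

# Build the model as usual
model = Sequential()
model.add(Dense(10, activation='softmax', input_dim=100))

# Replicate the model on 2 GPUs; each GPU processes a slice of every batch
parallel_model = multi_gpu_model(model, gpus=2)
parallel_model.compile(loss='categorical_crossentropy', optimizer='rmsprop')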

Keras Tutorial

Installing Keras

We need to install one of the backend engines before we actually get to installing Keras itself. Go ahead and install TensorFlow, Theano, or CNTK.
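
For example, the TensorFlow backend can be installed with pip as shown below (TensorFlow is just one option; Theano and CNTK ship their own pip packages):

pip install tensorflow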

Now we are ready to install Keras. We can either use pip or clone the repository from GitHub. To install using pip, open a terminal and run the following command:

pip install keras

In case the pip installation doesn’t work, or you want another method, you can clone the git repository using:

git clone https://github.com/keras-team/keras.git

Once cloned, move into the cloned directory and run:

sudo python setup.py install

Using Keras

To use Keras in any of your Python scripts, simply import it with:

import keras

Densely Connected Network

A Sequential model would probably be a better choice for creating such a network, but we are just getting started, so it is better to begin with something really simple (here, the functional API):

from keras.layers import Input, Dense
from keras.models import Model

# This returns a tensor
inputs = Input(shape=(784,))

# A layer instance is callable on a tensor, and returns a tensor
x = Dense(64, activation='relu')(inputs)
x = Dense(64, activation='relu')(x)
predictions = Dense(10, activation='softmax')(x)

# This creates a model that includes
# the Input layer and three Dense layers
model = Model(inputs=inputs, outputs=predictions)
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])

Now that you have seen how to create a simple densely connected network model, you can train it with your training data and use it in your deep learning module.
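
As a minimal sketch (using random NumPy arrays as stand-ins for real training data), training and evaluating the model above might look like this:

import numpy as np
from keras.utils import to_categorical

# Stand-in data: 1000 samples with 784 features and 10 one-hot classes
data = np.random.random((1000, 784))
labels = to_categorical(np.random.randint(10, size=(1000, 1)), num_classes=10)

model.fit(data, labels, epochs=5, batch_size=32)
loss, accuracy = model.evaluate(data, labels, batch_size=128)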

Sequential Model

The Model is the core data structure of Keras. The simplest type of model is a linear stack of layers, which we call a Sequential model. Let’s get our hands on some code and try to build one:

# import required modules
from keras.models import Sequential
from keras.layers import Dense
import numpy as np

# Create a model
model = Sequential()

# Stack layers
model.add(Dense(units=64, activation='relu', input_dim=100))
model.add(Dense(units=10, activation='softmax'))

# Configure the learning process
model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])

# Create NumPy arrays with random values; use your training or test data here
x_train = np.random.random((64, 100))
y_train = np.random.random((64, 10))
x_test = np.random.random((64, 100))
y_test = np.random.random((64, 10))

# Train using NumPy arrays
model.fit(x_train, y_train, epochs=5, batch_size=32)

# Evaluate on existing data
loss_and_metrics = model.evaluate(x_test, y_test, batch_size=128)

# Generate predictions on new data
classes = model.predict(x_test, batch_size=128)

Let’s run the program to see the results; with the default verbosity, Keras prints the loss and accuracy for each training epoch.

Let’s try a few more models and see how to create them, for example a residual connection on a convolution layer:

import keras
from keras.layers import Conv2D, Input

# input tensor for a 3-channel 256x256 image
x = Input(shape=(256, 256, 3))
# 3x3 conv with 3 output channels (same as input channels)
y = Conv2D(3, (3, 3), padding='same')(x)
# this returns x + y
z = keras.layers.add([x, y])

Shared Vision Model

A shared vision model helps classify whether two MNIST digits are the same digit or different digits by reusing the same image-processing module on two inputs. Let’s create one as shown below.

from keras.layers import Conv2D, MaxPooling2D, Input, Dense, Flatten
from keras.models import Model
import keras
# First, define the vision modules
digit_input = Input(shape=(27, 27, 1))
x = Conv2D(64, (3, 3))(digit_input)
x = Conv2D(64, (3, 3))(x)
x = MaxPooling2D((2, 2))(x)
out = Flatten()(x)
vision_model = Model(digit_input, out)
# Then define the tell-digits-apart model
digit_a = Input(shape=(27, 27, 1))
digit_b = Input(shape=(27, 27, 1))
# The vision model will be shared, weights and all
out_a = vision_model(digit_a)
out_b = vision_model(digit_b)
concatenated = keras.layers.concatenate([out_a, out_b])
out = Dense(1, activation='sigmoid')(concatenated)
classification_model = Model([digit_a, digit_b], out)
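
The code above only builds the model; to actually train it you would compile it with a binary cross-entropy loss and fit it on pairs of digit images labelled same/different. A minimal sketch with random stand-in data (real inputs would be 27x27 MNIST crops):

import numpy as np

classification_model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])

# Stand-in data: 100 pairs of 27x27 single-channel images with 0/1 labels
pairs_a = np.random.random((100, 27, 27, 1))
pairs_b = np.random.random((100, 27, 27, 1))
same_or_not = np.random.randint(2, size=(100, 1))

classification_model.fit([pairs_a, pairs_b], same_or_not, epochs=2, batch_size=16)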

Visual Question Answering Model

Let’s create a model that can choose the correct one-word answer to a natural-language question about a picture.

This can be done by encoding the question and the image into two separate vectors, concatenating the two, and training a logistic regression on top over some vocabulary of potential answers. Let’s try the model:

import keras
from keras.layers import Conv2D, MaxPooling2D, Flatten
from keras.layers import Input, LSTM, Embedding, Dense
from keras.models import Model, Sequential

# First, let's define a vision model using a Sequential model.
# This model will encode an image into a vector.
vision_model = Sequential()
vision_model.add(Conv2D(64, (3, 3), activation='relu', padding='same', input_shape=(224, 224, 3)))
vision_model.add(Conv2D(64, (3, 3), activation='relu'))
vision_model.add(MaxPooling2D((2, 2)))
vision_model.add(Conv2D(128, (3, 3), activation='relu', padding='same'))
vision_model.add(Conv2D(128, (3, 3), activation='relu'))
vision_model.add(MaxPooling2D((2, 2)))
vision_model.add(Conv2D(256, (3, 3), activation='relu', padding='same'))
vision_model.add(Conv2D(256, (3, 3), activation='relu'))
vision_model.add(Conv2D(256, (3, 3), activation='relu'))
vision_model.add(MaxPooling2D((2, 2)))
vision_model.add(Flatten())

# Now let's get a tensor with the output of our vision model:
image_input = Input(shape=(224, 224, 3))
encoded_image = vision_model(image_input)

# Next, let's define a language model to encode the question into a vector.
# Each question will be at most 100 words long,
# and we will index words as integers from 1 to 9999.
question_input = Input(shape=(100,), dtype='int32')
embedded_question = Embedding(input_dim=10000, output_dim=256, input_length=100)(question_input)
encoded_question = LSTM(256)(embedded_question)

# Let's concatenate the question vector and the image vector:
merged = keras.layers.concatenate([encoded_question, encoded_image])

# And let's train a logistic regression over 1000 words on top:
output = Dense(1000, activation='softmax')(merged)

# This is our final model:
vqa_model = Model(inputs=[image_input, question_input], outputs=output)

# The next stage would be training this model on actual data.
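
As the final comment hints, the model still needs to be compiled and trained. A sketch using random stand-in arrays (real data would be images, tokenized questions, and one-hot answers over the 1000-word answer vocabulary):

import numpy as np
from keras.utils import to_categorical

vqa_model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])

# Stand-in data: 8 images, 8 tokenized questions, 8 one-hot answers
images = np.random.random((8, 224, 224, 3))
questions = np.random.randint(1, 10000, size=(8, 100))
answers = to_categorical(np.random.randint(1000, size=(8,)), num_classes=1000)

vqa_model.fit([images, questions], answers, epochs=1, batch_size=4)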

If you want to learn more about Visual Question Answering (VQA), check out this beginner’s guide to VQA.

Training a Neural Network

Now that we have seen how to build different models using Keras, let’s put everything together and work through a complete example. The following example trains a neural network on the MNIST dataset:

import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.optimizers import RMSprop

batch_size = 128
num_classes = 10
epochs = 20

# the data, shuffled and split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()

x_train = x_train.reshape(60000, 784)
x_test = x_test.reshape(10000, 784)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')

# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)

model = Sequential()
model.add(Dense(512, activation='relu', input_shape=(784,)))
model.add(Dropout(0.2))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(num_classes, activation='softmax'))
model.summary()

# Compile model
model.compile(loss='categorical_crossentropy',
              optimizer=RMSprop(),
              metrics=['accuracy'])

# Train and evaluate
history = model.fit(x_train, y_train,
                    batch_size=batch_size,
                    epochs=epochs,
                    verbose=1,
                    validation_data=(x_test, y_test))
score = model.evaluate(x_test, y_test, verbose=0)

# Print the results
print('Test loss:', score[0])
print('Test accuracy:', score[1])

Let’s run this example and wait for the results. Depending on your machine, it might take a few minutes for the program to finish executing; the final lines print the test loss and test accuracy.

Conclusion

In this tutorial, we saw that Keras is a powerful framework that makes it easy for the user to create prototypes, and to do so very quickly. We also saw how different models can be created using Keras; these models can be used for feature extraction, fine-tuning, and prediction. Finally, we saw how to train a neural network using Keras.

Keras has grown in popularity alongside other frameworks and is one of the most popular frameworks on Kaggle.

Translated from: https://www.journaldev.com/18314/keras-deep-learning-tutorial
