This example lets you take a photo of a flower, send it to a designated WeChat account, and receive an automatic reply naming the flower type. You will of course need a Python runtime and the TensorFlow toolchain installed first; a quick web search turns up plenty of blog posts covering the setup. My environment is as follows:

C:\Users\Administrator.WIN7-1609091712>python
Python 3.6.1 (v3.6.1:69c0db5, Mar 21 2017, 18:41:36) [MSC v.1900 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> tf.__version__
'1.4.0'
>>> tf.__path__
['D:\\Program Files\\Python\\Python36\\lib\\site-packages\\tensorflow']
>>>

Implementation steps:

1. Download the flower sample dataset

First, download the flowers sample data:

Link: https://pan.baidu.com/s/1Sh3f09K27RV64q0yxC-GHw   Password: zx7x

Save it to D:\study\tensorflow\flower_photos. A quick check of the folder layout is sketched below.
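The training script in step 2 assigns each image a label equal to the index of its class subfolder, so it is worth confirming the layout first. A minimal check, not part of the original scripts; the standard flower_photos archive should contain the subfolders daisy, dandelion, roses, sunflowers and tulips:

import os

# List the class subfolders and the label index each one will receive.
# read_img() in step 2 enumerates the folders returned by os.listdir,
# so the index-to-class mapping used later in flower_dict follows this order.
path = 'D:/study/tensorflow/flower_photos/'
folders = [d for d in os.listdir(path) if os.path.isdir(path + d)]
for idx, name in enumerate(folders):
    print(idx, name)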

2. Train a recognition model and save it

Create a new Python file flower.py with the following content:

from skimage import io,transform
import glob
import os
import tensorflow as tf
import numpy as np
import time

# dataset path
path='D:/study/tensorflow/flower_photos/'
# where to save the model
model_path='D:/study/tensorflow/flower/model.ckpt'

# resize all images to 100*100
w=100
h=100
c=3

# read the images
def read_img(path):
    cate=[path+x for x in os.listdir(path) if os.path.isdir(path+x)]
    imgs=[]
    labels=[]
    for idx,folder in enumerate(cate):
        for im in glob.glob(folder+'/*.jpg'):
            print('reading the images:%s'%(im))
            img=io.imread(im)
            img=transform.resize(img,(w,h))
            imgs.append(img)
            labels.append(idx)
    return np.asarray(imgs,np.float32),np.asarray(labels,np.int32)
data,label=read_img(path)

# shuffle the samples
num_example=data.shape[0]
arr=np.arange(num_example)
np.random.shuffle(arr)
data=data[arr]
label=label[arr]

# split the data into a training set and a validation set
ratio=0.8
s=np.int(num_example*ratio)
x_train=data[:s]
y_train=label[:s]
x_val=data[s:]
y_val=label[s:]

#-----------------Build the network----------------------
# placeholders
x=tf.placeholder(tf.float32,shape=[None,w,h,c],name='x')
y_=tf.placeholder(tf.int32,shape=[None,],name='y_')

def inference(input_tensor, train, regularizer):
    with tf.variable_scope('layer1-conv1'):
        conv1_weights = tf.get_variable("weight",[5,5,3,32],initializer=tf.truncated_normal_initializer(stddev=0.1))
        conv1_biases = tf.get_variable("bias", [32], initializer=tf.constant_initializer(0.0))
        conv1 = tf.nn.conv2d(input_tensor, conv1_weights, strides=[1, 1, 1, 1], padding='SAME')
        relu1 = tf.nn.relu(tf.nn.bias_add(conv1, conv1_biases))

    with tf.name_scope("layer2-pool1"):
        pool1 = tf.nn.max_pool(relu1, ksize = [1,2,2,1],strides=[1,2,2,1],padding="VALID")

    with tf.variable_scope("layer3-conv2"):
        conv2_weights = tf.get_variable("weight",[5,5,32,64],initializer=tf.truncated_normal_initializer(stddev=0.1))
        conv2_biases = tf.get_variable("bias", [64], initializer=tf.constant_initializer(0.0))
        conv2 = tf.nn.conv2d(pool1, conv2_weights, strides=[1, 1, 1, 1], padding='SAME')
        relu2 = tf.nn.relu(tf.nn.bias_add(conv2, conv2_biases))

    with tf.name_scope("layer4-pool2"):
        pool2 = tf.nn.max_pool(relu2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')

    with tf.variable_scope("layer5-conv3"):
        conv3_weights = tf.get_variable("weight",[3,3,64,128],initializer=tf.truncated_normal_initializer(stddev=0.1))
        conv3_biases = tf.get_variable("bias", [128], initializer=tf.constant_initializer(0.0))
        conv3 = tf.nn.conv2d(pool2, conv3_weights, strides=[1, 1, 1, 1], padding='SAME')
        relu3 = tf.nn.relu(tf.nn.bias_add(conv3, conv3_biases))

    with tf.name_scope("layer6-pool3"):
        pool3 = tf.nn.max_pool(relu3, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')

    with tf.variable_scope("layer7-conv4"):
        conv4_weights = tf.get_variable("weight",[3,3,128,128],initializer=tf.truncated_normal_initializer(stddev=0.1))
        conv4_biases = tf.get_variable("bias", [128], initializer=tf.constant_initializer(0.0))
        conv4 = tf.nn.conv2d(pool3, conv4_weights, strides=[1, 1, 1, 1], padding='SAME')
        relu4 = tf.nn.relu(tf.nn.bias_add(conv4, conv4_biases))

    with tf.name_scope("layer8-pool4"):
        pool4 = tf.nn.max_pool(relu4, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
        nodes = 6*6*128
        reshaped = tf.reshape(pool4,[-1,nodes])

    with tf.variable_scope('layer9-fc1'):
        fc1_weights = tf.get_variable("weight", [nodes, 1024],initializer=tf.truncated_normal_initializer(stddev=0.1))
        if regularizer != None: tf.add_to_collection('losses', regularizer(fc1_weights))
        fc1_biases = tf.get_variable("bias", [1024], initializer=tf.constant_initializer(0.1))
        fc1 = tf.nn.relu(tf.matmul(reshaped, fc1_weights) + fc1_biases)
        if train: fc1 = tf.nn.dropout(fc1, 0.5)

    with tf.variable_scope('layer10-fc2'):
        fc2_weights = tf.get_variable("weight", [1024, 512],initializer=tf.truncated_normal_initializer(stddev=0.1))
        if regularizer != None: tf.add_to_collection('losses', regularizer(fc2_weights))
        fc2_biases = tf.get_variable("bias", [512], initializer=tf.constant_initializer(0.1))
        fc2 = tf.nn.relu(tf.matmul(fc1, fc2_weights) + fc2_biases)
        if train: fc2 = tf.nn.dropout(fc2, 0.5)

    with tf.variable_scope('layer11-fc3'):
        fc3_weights = tf.get_variable("weight", [512, 5],initializer=tf.truncated_normal_initializer(stddev=0.1))
        if regularizer != None: tf.add_to_collection('losses', regularizer(fc3_weights))
        fc3_biases = tf.get_variable("bias", [5], initializer=tf.constant_initializer(0.1))
        logit = tf.matmul(fc2, fc3_weights) + fc3_biases

    return logit
#---------------------------End of network---------------------------
regularizer = tf.contrib.layers.l2_regularizer(0.0001)
logits = inference(x,False,regularizer)

# (small trick) multiply logits by 1 and give the result the name 'logits_eval',
# so the output tensor can be fetched by name when the model is loaded later
b = tf.constant(value=1,dtype=tf.float32)
logits_eval = tf.multiply(logits,b,name='logits_eval')

loss=tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=y_)
train_op=tf.train.AdamOptimizer(learning_rate=0.001).minimize(loss)
correct_prediction = tf.equal(tf.cast(tf.argmax(logits,1),tf.int32), y_)
acc= tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

# helper that yields the data one mini-batch at a time
def minibatches(inputs=None, targets=None, batch_size=None, shuffle=False):
    assert len(inputs) == len(targets)
    if shuffle:
        indices = np.arange(len(inputs))
        np.random.shuffle(indices)
    for start_idx in range(0, len(inputs) - batch_size + 1, batch_size):
        if shuffle:
            excerpt = indices[start_idx:start_idx + batch_size]
        else:
            excerpt = slice(start_idx, start_idx + batch_size)
        yield inputs[excerpt], targets[excerpt]

n_epoch=10
batch_size=64
saver=tf.train.Saver()
sess=tf.Session()
sess.run(tf.global_variables_initializer())
for epoch in range(n_epoch):
    start_time = time.time()

    # training
    train_loss, train_acc, n_batch = 0, 0, 0
    for x_train_a, y_train_a in minibatches(x_train, y_train, batch_size, shuffle=True):
        _,err,ac=sess.run([train_op,loss,acc], feed_dict={x: x_train_a, y_: y_train_a})
        train_loss += err; train_acc += ac; n_batch += 1
    print("   train loss: %f" % (np.sum(train_loss)/ n_batch))
    print("   train acc: %f" % (np.sum(train_acc)/ n_batch))

    # validation
    val_loss, val_acc, n_batch = 0, 0, 0
    for x_val_a, y_val_a in minibatches(x_val, y_val, batch_size, shuffle=False):
        err, ac = sess.run([loss,acc], feed_dict={x: x_val_a, y_: y_val_a})
        val_loss += err; val_acc += ac; n_batch += 1
    print("   validation loss: %f" % (np.sum(val_loss)/ n_batch))
    print("   validation acc: %f" % (np.sum(val_acc)/ n_batch))
saver.save(sess,model_path)
sess.close()
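One detail worth spelling out: the line nodes = 6*6*128 in inference comes from tracking the feature-map size through the network. The SAME-padded convolutions keep the 100x100 input size, and each of the four 2x2 VALID max-pools halves it, rounding down. The tiny sketch below is illustrative only, not part of flower.py:

# how 100x100 shrinks to 6x6 before the first fully connected layer
size = 100
for _ in range(4):        # four max-pool layers; the SAME convs keep the size
    size = size // 2      # 100 -> 50 -> 25 -> 12 -> 6
print(size * size * 128)  # 4608, i.e. nodes = 6*6*128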

Run python flower.py:

D:\study\tensorflow>python flower.py
reading the images:D:/study/tensorflow/flower_photos/daisy\100080576_f52e8ee070_n.jpg
D:\Program Files\Python\Python36\lib\site-packages\skimage\transform\_warps.py:84: UserWarning: The default mode, 'constant', will be changed to 'reflect' in skimage 0.15.
  warn("The default mode, 'constant', will be changed to 'reflect' in "
reading the images:D:/study/tensorflow/flower_photos/daisy\10140303196_b88d3d6cec.jpg
reading the images:D:/study/tensorflow/flower_photos/daisy\10172379554_b296050f82_n.jpg

After training finishes, the trained model is written to the flower directory as four files:

D:\study\tensorflow\flower>tree /f
Folder PATH listing
Volume serial number is FEAC-6D64
D:.
    checkpoint
    model.ckpt.data-00000-of-00001
    model.ckpt.index
    model.ckpt.meta
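If you want to confirm the checkpoint is readable before moving on, here is a minimal sketch (not from the original post) that lists the saved variables with TensorFlow's checkpoint reader:

import tensorflow as tf

# assumes the model was saved under D:/study/tensorflow/flower as shown above
ckpt = tf.train.latest_checkpoint('D:/study/tensorflow/flower')
reader = tf.train.NewCheckpointReader(ckpt)
for name, shape in reader.get_variable_to_shape_map().items():
    print(name, shape)   # e.g. layer1-conv1/weight [5, 5, 3, 32]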

3. Install the itchat and scikit-image packages

pip install itchat

pip install scikit-image

If the installation runs into network problems, you can use the Lantern proxy.

itchat is an open-source interface to personal WeChat accounts; it lets you drive WeChat from Python with very little code.

scikit-image (a.k.a. skimage) is a collection of algorithms for image processing and computer vision.

See the itchat documentation and the scikit-image website for details.
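As a quick smoke test that both packages installed correctly (a rough sketch; 'test.jpg' is a placeholder for any local image):

from skimage import io, transform
import itchat

img = io.imread('test.jpg')                     # placeholder: any local JPEG
print(transform.resize(img, (100, 100)).shape)  # expect (100, 100, 3) for an RGB image
print('itchat imported OK')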

4. Code that calls the trained model: imagerec.py

import sys, getopt
from skimage import io, transform
import tensorflow as tf
import numpy as np

flower_dict = {0: '菊花', 1: '蒲公英', 2: '玫瑰', 3: '向日葵', 4: '郁金香'}  # daisy, dandelion, rose, sunflower, tulip

def main(argv):
    inputfile = ""
    try:
        opts, args = getopt.getopt(argv, "hi:o:", ["infile=", "outfile="])
    except getopt.GetoptError:
        print('Error: test_arg.py -i <inputfile> -o <outputfile>')
        print('   or: test_arg.py --infile=<inputfile> --outfile=<outputfile>')
        sys.exit(2)
    for opt, arg in opts:
        if opt == "-h":
            print('test_arg.py -i <inputfile> -o <outputfile>')
            print('or: test_arg.py --infile=<inputfile> --outfile=<outputfile>')
            sys.exit()
        elif opt in ("-i", "--infile"):
            inputfile = arg
            print(inputfile)
            print(recgnize(inputfile))

def read_one_image(path):
    img = io.imread(path)
    img = transform.resize(img, (100, 100))
    return np.asarray(img)

def recgnize(filename):
    with tf.Session() as sess:
        data = []
        data.append(read_one_image(filename))

        saver = tf.train.import_meta_graph('D:/study/tensorflow/flower/model.ckpt.meta')
        saver.restore(sess, tf.train.latest_checkpoint('D:/study/tensorflow/flower/'))

        graph = tf.get_default_graph()
        x = graph.get_tensor_by_name("x:0")
        feed_dict = {x: data}

        logits = graph.get_tensor_by_name("logits_eval:0")
        classification_result = sess.run(logits, feed_dict)

        # print the prediction matrix
        #print(classification_result)
        # print the index of the largest value in each row of the prediction matrix
        #print(tf.argmax(classification_result, 1).eval())
        # map the index to a flower class via the dictionary
        output = []
        output = tf.argmax(classification_result, 1).eval()
        for i in range(len(output)):
            return flower_dict[output[i]]

if __name__ == "__main__":
    main(sys.argv[1:])
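Before wiring it into WeChat, you can test the model directly from the command line; 'test.jpg' below is a placeholder for any local flower photo:

D:\study\tensorflow>python imagerec.py -i test.jpg

The script prints the file path followed by the predicted class name from flower_dict.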

5. WeChat integration: wx.py

import itchat, time
from itchat.content import *
from imagerec import recgnize

@itchat.msg_register([TEXT, MAP, CARD, NOTE, SHARING])
def text_reply(msg):
    # echo text messages back, prefixed with '回复' ("Reply")
    msg.user.send('%s: %s' % ('回复', msg["Text"]))

@itchat.msg_register([PICTURE, RECORDING, ATTACHMENT, VIDEO])
def download_files(msg):
    msg.download(msg.fileName)
    type1 = recgnize(msg.fileName)
    print(type1)
    typeSymbol = {
        PICTURE: 'img',
        VIDEO: 'vid', }.get(msg.type, 'fil')
    msg.user.send(type1)
    #return '@%s@%s' % (typeSymbol, msg.fileName)

@itchat.msg_register(FRIENDS)
def add_friend(msg):
    msg.user.verify()
    msg.user.send('Nice to meet you!')

itchat.auto_login()
itchat.run(True)
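One optional tweak that is not in the original script: itchat can cache the login session, so you do not have to rescan the QR code every time you restart wx.py.

itchat.auto_login(hotReload=True)   # caches the session (itchat.pkl) between runs
itchat.run(True)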

Run python wx.py:

Scan the generated QR.png with WeChat and confirm the login; your WeChat account can now recognize flowers.

Have a friend send you a picture of a rose, and the result looks like this:

Give it a try yourself, and feel free to get in touch if you run into problems!
