https://www.cnblogs.com/dudu1992/p/8908081.html

Download the CIFAR-10 dataset: http://www.cs.toronto.edu/~kriz/cifar.html

Select cifar-10-python.tar.gz and download it.
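If you would rather fetch and unpack the archive from a script, a minimal sketch is below (the download URL and extraction target are assumptions; the archive unpacks into a cifar-10-batches-py/ directory):

# Sketch: download and extract cifar-10-python.tar.gz (URL and paths assumed).
import os
import tarfile
import urllib.request

URL = 'http://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz'
ARCHIVE = 'cifar-10-python.tar.gz'

if not os.path.exists(ARCHIVE):
    urllib.request.urlretrieve(URL, ARCHIVE)  # download the tarball
with tarfile.open(ARCHIVE, 'r:gz') as tar:
    tar.extractall('.')  # creates ./cifar-10-batches-py/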

1 Create main.py

import tensorflow as tf
import os
import scipy.misc
import cifar10_input

def inputs_origin(data_dir):
    # data_batch_1 ... data_batch_5 are the five training batches.
    filenames = [os.path.join(data_dir, 'data_batch_%d' % i)
                 for i in range(1, 6)]
    for f in filenames:
        print(f)
        if not tf.gfile.Exists(f):
            raise ValueError('Failed to find file: ' + f)
    # Queue of filenames; read_cifar10 decodes one record at a time.
    filenames_queue = tf.train.string_input_producer(filenames)
    read_input = cifar10_input.read_cifar10(filenames_queue)
    reshaped_image = tf.cast(read_input.uint8image, tf.float32)
    print(reshaped_image)
    return reshaped_image


if __name__ == '__main__':
    with tf.Session() as sess:
        reshaped_image = inputs_origin('cifar-10-batches-py')
        # Start the queue runners that feed the filename/record queues.
        threads = tf.train.start_queue_runners(sess=sess)
        print(threads)
        sess.run(tf.global_variables_initializer())
        if not os.path.exists('cifar-10-batches-py/raw/'):
            os.makedirs('cifar-10-batches-py/raw/')
        # Save the first 30 decoded images as .jpg files.
        for i in range(30):
            image = sess.run(reshaped_image)
            scipy.misc.toimage(image).save('cifar-10-batches-py/raw/%d.jpg' % i)
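Note that scipy.misc.toimage has been removed from recent SciPy releases. If it is missing in your environment, the final save loop can be replaced with a Pillow-based version; this is only a sketch and assumes Pillow is installed (the decoded values are already in the 0..255 range, so a plain uint8 cast is enough):

# Sketch: replacement for the scipy.misc.toimage save loop, using Pillow.
# Assumes Pillow is installed; sess and reshaped_image come from main.py above.
import numpy as np
from PIL import Image

for i in range(30):
    image = sess.run(reshaped_image)  # float32 HxWx3 array with values 0..255
    Image.fromarray(image.astype(np.uint8)).save(
        'cifar-10-batches-py/raw/%d.jpg' % i)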

2 Create cifar10_input.py


from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import os

from six.moves import xrange  # pylint: disable=redefined-builtin
import tensorflow as tf

# Process images of this size. Note that this differs from the original CIFAR
# image size of 32 x 32. If one alters this number, then the entire model
# architecture will change and any model would need to be retrained.
IMAGE_SIZE = 24

# Global constants describing the CIFAR-10 data set.
NUM_CLASSES = 10
NUM_EXAMPLES_PER_EPOCH_FOR_TRAIN = 50000
NUM_EXAMPLES_PER_EPOCH_FOR_EVAL = 10000


def read_cifar10(filename_queue):
  """Reads and parses examples from CIFAR10 data files.

  Recommendation: if you want N-way read parallelism, call this function
  N times.  This will give you N independent Readers reading different
  files & positions within those files, which will give better mixing of
  examples.

  Args:
    filename_queue: A queue of strings with the filenames to read from.

  Returns:
    An object representing a single example, with the following fields:
      height: number of rows in the result (32)
      width: number of columns in the result (32)
      depth: number of color channels in the result (3)
      key: a scalar string Tensor describing the filename & record number
        for this example.
      label: an int32 Tensor with the label in the range 0..9.
      uint8image: a [height, width, depth] uint8 Tensor with the image data
  """

  class CIFAR10Record(object):
    pass
  result = CIFAR10Record()

  # Dimensions of the images in the CIFAR-10 dataset.
  # See http://www.cs.toronto.edu/~kriz/cifar.html for a description of the
  # input format.
  label_bytes = 1  # 2 for CIFAR-100
  result.height = 32
  result.width = 32
  result.depth = 3
  image_bytes = result.height * result.width * result.depth
  # Every record consists of a label followed by the image, with a
  # fixed number of bytes for each.
  record_bytes = label_bytes + image_bytes

  # Read a record, getting filenames from the filename_queue.  No
  # header or footer in the CIFAR-10 format, so we leave header_bytes
  # and footer_bytes at their default of 0.
  reader = tf.FixedLengthRecordReader(record_bytes=record_bytes)
  result.key, value = reader.read(filename_queue)

  # Convert from a string to a vector of uint8 that is record_bytes long.
  record_bytes = tf.decode_raw(value, tf.uint8)

  # The first bytes represent the label, which we convert from uint8->int32.
  result.label = tf.cast(
      tf.strided_slice(record_bytes, [0], [label_bytes]), tf.int32)

  # The remaining bytes after the label represent the image, which we reshape
  # from [depth * height * width] to [depth, height, width].
  depth_major = tf.reshape(
      tf.strided_slice(record_bytes, [label_bytes],
                       [label_bytes + image_bytes]),
      [result.depth, result.height, result.width])
  # Convert from [depth, height, width] to [height, width, depth].
  result.uint8image = tf.transpose(depth_major, [1, 2, 0])

  return result


def _generate_image_and_label_batch(image, label, min_queue_examples,
                                    batch_size, shuffle):
  """Construct a queued batch of images and labels.

  Args:
    image: 3-D Tensor of [height, width, 3] of type.float32.
    label: 1-D Tensor of type.int32
    min_queue_examples: int32, minimum number of samples to retain
      in the queue that provides of batches of examples.
    batch_size: Number of images per batch.
    shuffle: boolean indicating whether to use a shuffling queue.

  Returns:
    images: Images. 4D tensor of [batch_size, height, width, 3] size.
    labels: Labels. 1D tensor of [batch_size] size.
  """
  # Create a queue that shuffles the examples, and then
  # read 'batch_size' images + labels from the example queue.
  num_preprocess_threads = 16
  if shuffle:
    images, label_batch = tf.train.shuffle_batch(
        [image, label],
        batch_size=batch_size,
        num_threads=num_preprocess_threads,
        capacity=min_queue_examples + 3 * batch_size,
        min_after_dequeue=min_queue_examples)
  else:
    images, label_batch = tf.train.batch(
        [image, label],
        batch_size=batch_size,
        num_threads=num_preprocess_threads,
        capacity=min_queue_examples + 3 * batch_size)

  # Display the training images in the visualizer.
  tf.summary.image('images', images)

  return images, tf.reshape(label_batch, [batch_size])


def distorted_inputs(data_dir, batch_size):
  """Construct distorted input for CIFAR training using the Reader ops.

  Args:
    data_dir: Path to the CIFAR-10 data directory.
    batch_size: Number of images per batch.

  Returns:
    images: Images. 4D tensor of [batch_size, IMAGE_SIZE, IMAGE_SIZE, 3] size.
    labels: Labels. 1D tensor of [batch_size] size.
  """
  filenames = [os.path.join(data_dir, 'data_batch_%d.bin' % i)
               for i in xrange(1, 6)]
  for f in filenames:
    if not tf.gfile.Exists(f):
      raise ValueError('Failed to find file: ' + f)

  # Create a queue that produces the filenames to read.
  filename_queue = tf.train.string_input_producer(filenames)

  # Read examples from files in the filename queue.
  read_input = read_cifar10(filename_queue)
  reshaped_image = tf.cast(read_input.uint8image, tf.float32)

  height = IMAGE_SIZE
  width = IMAGE_SIZE

  # Image processing for training the network. Note the many random
  # distortions applied to the image.

  # Randomly crop a [height, width] section of the image.
  distorted_image = tf.random_crop(reshaped_image, [height, width, 3])

  # Randomly flip the image horizontally.
  distorted_image = tf.image.random_flip_left_right(distorted_image)

  # Because these operations are not commutative, consider randomizing
  # the order their operation.
  distorted_image = tf.image.random_brightness(distorted_image,
                                               max_delta=63)
  distorted_image = tf.image.random_contrast(distorted_image,
                                             lower=0.2, upper=1.8)

  # Subtract off the mean and divide by the variance of the pixels.
  float_image = tf.image.per_image_standardization(distorted_image)

  # Set the shapes of tensors.
  float_image.set_shape([height, width, 3])
  read_input.label.set_shape([1])

  # Ensure that the random shuffling has good mixing properties.
  min_fraction_of_examples_in_queue = 0.4
  min_queue_examples = int(NUM_EXAMPLES_PER_EPOCH_FOR_TRAIN *
                           min_fraction_of_examples_in_queue)
  print('Filling queue with %d CIFAR images before starting to train. '
        'This will take a few minutes.' % min_queue_examples)

  # Generate a batch of images and labels by building up a queue of examples.
  return _generate_image_and_label_batch(float_image, read_input.label,
                                         min_queue_examples, batch_size,
                                         shuffle=True)


def inputs(eval_data, data_dir, batch_size):
  """Construct input for CIFAR evaluation using the Reader ops.

  Args:
    eval_data: bool, indicating if one should use the train or eval data set.
    data_dir: Path to the CIFAR-10 data directory.
    batch_size: Number of images per batch.

  Returns:
    images: Images. 4D tensor of [batch_size, IMAGE_SIZE, IMAGE_SIZE, 3] size.
    labels: Labels. 1D tensor of [batch_size] size.
  """
  if not eval_data:
    filenames = [os.path.join(data_dir, 'data_batch_%d.bin' % i)
                 for i in xrange(1, 6)]
    num_examples_per_epoch = NUM_EXAMPLES_PER_EPOCH_FOR_TRAIN
  else:
    filenames = [os.path.join(data_dir, 'test_batch.bin')]
    num_examples_per_epoch = NUM_EXAMPLES_PER_EPOCH_FOR_EVAL

  for f in filenames:
    if not tf.gfile.Exists(f):
      raise ValueError('Failed to find file: ' + f)

  # Create a queue that produces the filenames to read.
  filename_queue = tf.train.string_input_producer(filenames)

  # Read examples from files in the filename queue.
  read_input = read_cifar10(filename_queue)
  reshaped_image = tf.cast(read_input.uint8image, tf.float32)

  height = IMAGE_SIZE
  width = IMAGE_SIZE

  # Image processing for evaluation.
  # Crop the central [height, width] of the image.
  resized_image = tf.image.resize_image_with_crop_or_pad(reshaped_image,
                                                         width, height)

  # Subtract off the mean and divide by the variance of the pixels.
  float_image = tf.image.per_image_standardization(resized_image)

  # Set the shapes of tensors.
  float_image.set_shape([height, width, 3])
  read_input.label.set_shape([1])

  # Ensure that the random shuffling has good mixing properties.
  min_fraction_of_examples_in_queue = 0.4
  min_queue_examples = int(num_examples_per_epoch *
                           min_fraction_of_examples_in_queue)

  # Generate a batch of images and labels by building up a queue of examples.
  return _generate_image_and_label_batch(float_image, read_input.label,
                                         min_queue_examples, batch_size,
                                         shuffle=False)
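For reference, distorted_inputs and inputs read the binary release of the dataset (data_batch_*.bin / test_batch.bin). A minimal sketch of pulling one shuffled training batch through the queue pipeline follows; the data directory cifar-10-batches-bin and batch_size=128 are assumptions:

# Sketch: fetch one training batch via the queue pipeline (path and batch size assumed).
import tensorflow as tf
import cifar10_input

images, labels = cifar10_input.distorted_inputs('cifar-10-batches-bin',
                                                batch_size=128)
with tf.Session() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    img_batch, label_batch = sess.run([images, labels])
    print(img_batch.shape, label_batch.shape)  # (128, 24, 24, 3) (128,)
    coord.request_stop()
    coord.join(threads)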

Some of the saved images (from cifar-10-batches-py/raw/):
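A quick way to preview a few of the saved files (matplotlib and Pillow assumed):

# Sketch: show the first 10 extracted .jpg files in a 2x5 grid.
import matplotlib.pyplot as plt
from PIL import Image

fig, axes = plt.subplots(2, 5, figsize=(10, 4))
for i, ax in enumerate(axes.flat):
    ax.imshow(Image.open('cifar-10-batches-py/raw/%d.jpg' % i))
    ax.set_title(str(i))
    ax.axis('off')
plt.show()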

 
