1. The Modification

The main block of the official TensorFlow example mnist_with_summaries.py builds its default paths the Linux way (under /tmp), so when the script is run on Windows the generated log directory path comes out wrong.

The main block of the official example looks like this:
if __name__ == '__main__':
  parser = argparse.ArgumentParser()
  parser.add_argument('--fake_data', nargs='?', const=True, type=bool,
                      default=False,
                      help='If true, uses fake data for unit testing.')
  parser.add_argument('--max_steps', type=int, default=1000,
                      help='Number of steps to run trainer.')
  parser.add_argument('--learning_rate', type=float, default=0.001,
                      help='Initial learning rate')
  parser.add_argument('--dropout', type=float, default=0.9,
                      help='Keep probability for training dropout.')
  parser.add_argument('--data_dir', type=str,
                      default=os.path.join(os.getenv('TEST_TMPDIR', '/tmp'),
                                           'tensorflow/mnist/input_data'),
                      help='Directory for storing input data')
  parser.add_argument('--log_dir', type=str,
                      default=os.path.join(os.getenv('TEST_TMPDIR', '/tmp'),
                                           'tensorflow/mnist/logs/mnist_with_summaries'),
                      help='Summaries log directory')
  FLAGS, unparsed = parser.parse_known_args()
  tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
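To see what that default actually produces on Windows, here is a minimal check (assuming the TEST_TMPDIR environment variable is not set; the exact output depends on your drive and environment):

import os

# On Windows this prints something like '/tmp\tensorflow/mnist/logs/mnist_with_summaries':
# a mixed-separator path rooted at the current drive (e.g. C:\tmp\...),
# nowhere near the script or your data.
default_log_dir = os.path.join(os.getenv('TEST_TMPDIR', '/tmp'),
                               'tensorflow/mnist/logs/mnist_with_summaries')
print(default_log_dir)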
Change the path arguments in the code above to the following, so that the folder 'tensorflow/mnist/logs/mnist_with_summaries' is created in the same directory as the current 'mnist_with_summaries.py' file:
parser.add_argument('--data_dir', type=str,
                    default=os.path.join(os.getcwd(),
                                         'tensorflow\\mnist\\input_data'),
                    help='Directory for storing input data')
parser.add_argument('--log_dir', type=str,
                    default=os.path.join(os.getcwd(),
                                         'tensorflow\\mnist\\logs\\mnist_with_summaries'),
                    help='Summaries log directory')
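One caveat worth noting: os.getcwd() returns the directory the Python interpreter was launched from, so the folders only land next to mnist_with_summaries.py if you run the script from its own directory. If you want the paths tied to the script file itself no matter where you launch it from, a possible variant (a sketch, not part of the official example; script_dir is a name introduced here) is:

# Directory containing this .py file, independent of the current working directory.
script_dir = os.path.dirname(os.path.abspath(__file__))

parser.add_argument('--data_dir', type=str,
                    default=os.path.join(script_dir, 'tensorflow', 'mnist',
                                         'input_data'),
                    help='Directory for storing input data')
parser.add_argument('--log_dir', type=str,
                    default=os.path.join(script_dir, 'tensorflow', 'mnist',
                                         'logs', 'mnist_with_summaries'),
                    help='Summaries log directory')

Passing the path components to os.path.join separately also avoids hard-coding the Windows '\\' separator.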
The complete mnist_with_summaries.py for Windows then looks like this:
# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the 'License');
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an 'AS IS' BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""A simple MNIST classifier which displays summaries in TensorBoard.This is an unimpressive MNIST model, but it is a good example of using
tf.name_scope to make a graph legible in the TensorBoard graph explorer, and of
naming summary tags so that they are grouped meaningfully in TensorBoard.It demonstrates the functionality of every TensorBoard dashboard.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_functionimport argparse
import os
import sysimport tensorflow as tffrom tensorflow.examples.tutorials.mnist import input_dataFLAGS = Nonedef train():# Import datamnist = input_data.read_data_sets(FLAGS.data_dir,fake_data=FLAGS.fake_data)sess = tf.InteractiveSession()# Create a multilayer model.# Input placeholderswith tf.name_scope('input'):x = tf.placeholder(tf.float32, [None, 784], name='x-input')y_ = tf.placeholder(tf.int64, [None], name='y-input')with tf.name_scope('input_reshape'):image_shaped_input = tf.reshape(x, [-1, 28, 28, 1])tf.summary.image('input', image_shaped_input, 10)# We can't initialize these variables to 0 - the network will get stuck.def weight_variable(shape):"""Create a weight variable with appropriate initialization."""initial = tf.truncated_normal(shape, stddev=0.1)return tf.Variable(initial)def bias_variable(shape):"""Create a bias variable with appropriate initialization."""initial = tf.constant(0.1, shape=shape)return tf.Variable(initial)def variable_summaries(var):"""Attach a lot of summaries to a Tensor (for TensorBoard visualization)."""with tf.name_scope('summaries'):mean = tf.reduce_mean(var)tf.summary.scalar('mean', mean)with tf.name_scope('stddev'):stddev = tf.sqrt(tf.reduce_mean(tf.square(var - mean)))tf.summary.scalar('stddev', stddev)tf.summary.scalar('max', tf.reduce_max(var))tf.summary.scalar('min', tf.reduce_min(var))tf.summary.histogram('histogram', var)def nn_layer(input_tensor, input_dim, output_dim, layer_name, act=tf.nn.relu):"""Reusable code for making a simple neural net layer.It does a matrix multiply, bias add, and then uses ReLU to nonlinearize.It also sets up name scoping so that the resultant graph is easy to read,and adds a number of summary ops."""# Adding a name scope ensures logical grouping of the layers in the graph.with tf.name_scope(layer_name):# This Variable will hold the state of the weights for the layerwith tf.name_scope('weights'):weights = weight_variable([input_dim, output_dim])variable_summaries(weights)with tf.name_scope('biases'):biases = bias_variable([output_dim])variable_summaries(biases)with tf.name_scope('Wx_plus_b'):preactivate = tf.matmul(input_tensor, weights) + biasestf.summary.histogram('pre_activations', preactivate)activations = act(preactivate, name='activation')tf.summary.histogram('activations', activations)return activationshidden1 = nn_layer(x, 784, 500, 'layer1')with tf.name_scope('dropout'):keep_prob = tf.placeholder(tf.float32)tf.summary.scalar('dropout_keep_probability', keep_prob)dropped = tf.nn.dropout(hidden1, keep_prob)# Do not apply softmax activation yet, see below.y = nn_layer(dropped, 500, 10, 'layer2', act=tf.identity)with tf.name_scope('cross_entropy'):# The raw formulation of cross-entropy,## tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(tf.softmax(y)),#                               reduction_indices=[1]))## can be numerically unstable.## So here we use tf.losses.sparse_softmax_cross_entropy on the# raw logit outputs of the nn_layer above, and then average across# the batch.with tf.name_scope('total'):cross_entropy = tf.losses.sparse_softmax_cross_entropy(labels=y_, logits=y)tf.summary.scalar('cross_entropy', cross_entropy)with tf.name_scope('train'):train_step = tf.train.AdamOptimizer(FLAGS.learning_rate).minimize(cross_entropy)with tf.name_scope('accuracy'):with tf.name_scope('correct_prediction'):correct_prediction = tf.equal(tf.argmax(y, 1), y_)with tf.name_scope('accuracy'):accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))tf.summary.scalar('accuracy', accuracy)# Merge all the summaries and write them 
out to# /tmp/tensorflow/mnist/logs/mnist_with_summaries (by default)merged = tf.summary.merge_all()train_writer = tf.summary.FileWriter(FLAGS.log_dir + '/train', sess.graph)test_writer = tf.summary.FileWriter(FLAGS.log_dir + '/test')tf.global_variables_initializer().run()# Train the model, and also write summaries.# Every 10th step, measure test-set accuracy, and write test summaries# All other steps, run train_step on training data, & add training summariesdef feed_dict(train):"""Make a TensorFlow feed_dict: maps data onto Tensor placeholders."""if train or FLAGS.fake_data:xs, ys = mnist.train.next_batch(100, fake_data=FLAGS.fake_data)k = FLAGS.dropoutelse:xs, ys = mnist.test.images, mnist.test.labelsk = 1.0return {x: xs, y_: ys, keep_prob: k}for i in range(FLAGS.max_steps):if i % 10 == 0:  # Record summaries and test-set accuracysummary, acc = sess.run([merged, accuracy], feed_dict=feed_dict(False))test_writer.add_summary(summary, i)print('Accuracy at step %s: %s' % (i, acc))else:  # Record train set summaries, and trainif i % 100 == 99:  # Record execution statsrun_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)run_metadata = tf.RunMetadata()summary, _ = sess.run([merged, train_step],feed_dict=feed_dict(True),options=run_options,run_metadata=run_metadata)train_writer.add_run_metadata(run_metadata, 'step%03d' % i)train_writer.add_summary(summary, i)print('Adding run metadata for', i)else:  # Record a summarysummary, _ = sess.run([merged, train_step], feed_dict=feed_dict(True))train_writer.add_summary(summary, i)train_writer.close()test_writer.close()def main(_):if tf.gfile.Exists(FLAGS.log_dir):tf.gfile.DeleteRecursively(FLAGS.log_dir)tf.gfile.MakeDirs(FLAGS.log_dir)train()if __name__ == '__main__':parser = argparse.ArgumentParser()parser.add_argument('--fake_data', nargs='?', const=True, type=bool,default=False,help='If true, uses fake data for unit testing.')parser.add_argument('--max_steps', type=int, default=1000,help='Number of steps to run trainer.')parser.add_argument('--learning_rate', type=float, default=0.001,help='Initial learning rate')parser.add_argument('--dropout', type=float, default=0.9,help='Keep probability for training dropout.')parser.add_argument('--data_dir',type=str,default=os.path.join(os.getcwd(),'tensorflow\\mnist\\input_data'),help='Directory for storing input data')parser.add_argument('--log_dir',type=str,default=os.path.join(os.getcwd(),'tensorflow\\mnist\\logs\\mnist_with_summaries'),help='Summaries log directory')FLAGS, unparsed = parser.parse_known_args()tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
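Before starting TensorBoard, run the script once so the event files exist. An example invocation from cmd (the first path is a placeholder for wherever you saved the file):

cd <directory containing mnist_with_summaries.py>
python mnist_with_summaries.py

When training finishes, the folder tensorflow\mnist\logs\mnist_with_summaries, with train and test subfolders, is created under the directory you ran the script from.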

2. Running TensorBoard

Go to the directory containing the 'mnist_with_summaries.py' you just ran, then descend into 'tensorflow\mnist\logs\mnist_with_summaries'. Copy the absolute path of the 'mnist_with_summaries' folder.
Open cmd and run tensorboard --logdir=<absolute path of mnist_with_summaries>. Do not put spaces around the equals sign, and if the path contains spaces wrap it in double quotes (not single quotes) on Windows. cmd then prints a line such as TensorBoard 1.6.0 at http://xxx:6006 (Press CTRL+C to quit); open that URL in a browser to view TensorBoard.
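For example, if the log folder ended up at D:\work\tensorflow\mnist\logs\mnist_with_summaries (a placeholder path; substitute the absolute path you copied), the command is:

tensorboard --logdir=D:\work\tensorflow\mnist\logs\mnist_with_summaries

TensorBoard then lists train and test as separate runs, since the script writes each into its own subfolder.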
