TF DCGAN: A Full Record of Running DCGAN on a Custom Dataset with TensorFlow and Generating Images

Contents

Sample images from the training dataset

Output results

1. Output with default parameters

2. Output with option=0 and option=1 visualization

Full record of the training process


Sample images from the training dataset

As an example, the training set consists of a large number of Japanese-style anime images collected from the web.
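
A quick way to check that such a custom dataset is laid out the way the training script expects (the ./data/anime path and *.jpg pattern below are assumptions, chosen to match the dataset and input_fname_pattern flags that appear in the training log further down):

```python
import glob
import os

# Hypothetical layout check: images for --dataset=anime are assumed to live under
# ./data/anime and to be matched by the --input_fname_pattern glob (both are examples).
dataset_dir = os.path.join("./data", "anime")
files = glob.glob(os.path.join(dataset_dir, "*.jpg"))
print("found %d training images in %s" % (len(files), dataset_dir))
```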

Output results

1. Output with default parameters

Sample grids saved during training: train_00_0099, train_00_0399, train_00_0599, train_00_0799, and train_01_0099 (the filenames encode epoch and step, i.e. epoch 0 at steps 99, 399, 599, and 799, and epoch 1 at step 99).

2. Output with option=0 and option=1 visualization

Images produced with the option=0 visualization method

Images produced with the option=1 visualization method

Interpolation visualization in the latent space of the GAN model
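
The interpolation grid is produced by walking between two random latent codes and decoding every intermediate point with the generator. A minimal sketch of that idea (function and variable names are illustrative, not taken from the original code):

```python
import numpy as np

def interpolate_z(z_start, z_end, steps=8):
    """Linear interpolation between two latent vectors; feeding each row to the
    generator yields the kind of latent-space interpolation grid shown above."""
    alphas = np.linspace(0.0, 1.0, steps)[:, None]     # shape (steps, 1)
    return (1.0 - alphas) * z_start + alphas * z_end   # shape (steps, z_dim)

# Example: 8 evenly spaced points between two random 100-d latent codes
z_a = np.random.uniform(-1, 1, 100)
z_b = np.random.uniform(-1, 1, 100)
z_path = interpolate_z(z_a, z_b)   # feed each row of z_path to the generator
```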

Full record of the training process

Wall-clock time of the recorded run: 15:18–19:10

Start training……
{'batch_size': <absl.flags._flag.Flag object at 0x000002C943CD16A0>,
 'beta1': <absl.flags._flag.Flag object at 0x000002C9463D5F60>,
 'checkpoint_dir': <absl.flags._flag.Flag object at 0x000002C946422CC0>,
 'crop': <absl.flags._flag.BooleanFlag object at 0x000002C946422E10>,
 'dataset': <absl.flags._flag.Flag object at 0x000002C946422BA8>,
 'epoch': <absl.flags._flag.Flag object at 0x000002C93CA90320>,
 'h': <tensorflow.python.platform.app._HelpFlag object at 0x000002C946422EF0>,
 'help': <tensorflow.python.platform.app._HelpFlag object at 0x000002C946422EF0>,
 'helpfull': <tensorflow.python.platform.app._HelpfullFlag object at 0x000002C946422F60>,
 'helpshort': <tensorflow.python.platform.app._HelpshortFlag object at 0x000002C946422FD0>,
 'input_fname_pattern': <absl.flags._flag.Flag object at 0x000002C946422C18>,
 'input_height': <absl.flags._flag.Flag object at 0x000002C943CD1B38>,
 'input_width': <absl.flags._flag.Flag object at 0x000002C946422A20>,
 'learning_rate': <absl.flags._flag.Flag object at 0x000002C93E5E7DA0>,
 'output_height': <absl.flags._flag.Flag object at 0x000002C946422A90>,
 'output_width': <absl.flags._flag.Flag object at 0x000002C946422B38>,
 'sample_dir': <absl.flags._flag.Flag object at 0x000002C946422D30>,
 'train': <absl.flags._flag.BooleanFlag object at 0x000002C946422D68>,
 'train_size': <absl.flags._flag.Flag object at 0x000002C943CD10F0>,
 'visualize': <absl.flags._flag.BooleanFlag object at 0x000002C946422E80>}
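
The dictionary above is simply the parsed FLAGS object printed before training starts. A sketch of how such flags are typically declared with tf.app.flags; the flag names come from the printout, while the default values and help strings here are assumptions:

```python
import tensorflow as tf  # TensorFlow 1.x

# Flag names taken from the FLAGS printout above; defaults are placeholders.
flags = tf.app.flags
flags.DEFINE_integer("epoch", 25, "number of training epochs")
flags.DEFINE_float("learning_rate", 0.0002, "Adam learning rate")
flags.DEFINE_float("beta1", 0.5, "Adam beta1 momentum term")
flags.DEFINE_integer("batch_size", 64, "batch size")
flags.DEFINE_integer("input_height", 96, "height of the input images")
flags.DEFINE_integer("output_height", 48, "height of the generated images")
flags.DEFINE_string("dataset", "anime", "folder name under ./data")
flags.DEFINE_string("input_fname_pattern", "*.jpg", "glob pattern for input images")
flags.DEFINE_string("checkpoint_dir", "checkpoint", "directory for checkpoints")
flags.DEFINE_string("sample_dir", "samples", "directory for generated sample grids")
flags.DEFINE_boolean("train", True, "True for training, False for testing")
flags.DEFINE_boolean("crop", True, "True to center-crop the input images")
flags.DEFINE_boolean("visualize", False, "True to run the visualization options")
FLAGS = flags.FLAGS
```
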
2018-10-06 15:18:41.635062: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
---------
Variables: name (type shape) [size]
---------
generator/g_h0_lin/Matrix:0 (float32_ref 100x4608) [460800, bytes: 1843200]
generator/g_h0_lin/bias:0 (float32_ref 4608) [4608, bytes: 18432]
generator/g_bn0/beta:0 (float32_ref 512) [512, bytes: 2048]
generator/g_bn0/gamma:0 (float32_ref 512) [512, bytes: 2048]
generator/g_h1/w:0 (float32_ref 5x5x256x512) [3276800, bytes: 13107200]
generator/g_h1/biases:0 (float32_ref 256) [256, bytes: 1024]
generator/g_bn1/beta:0 (float32_ref 256) [256, bytes: 1024]
generator/g_bn1/gamma:0 (float32_ref 256) [256, bytes: 1024]
generator/g_h2/w:0 (float32_ref 5x5x128x256) [819200, bytes: 3276800]
generator/g_h2/biases:0 (float32_ref 128) [128, bytes: 512]
generator/g_bn2/beta:0 (float32_ref 128) [128, bytes: 512]
generator/g_bn2/gamma:0 (float32_ref 128) [128, bytes: 512]
generator/g_h3/w:0 (float32_ref 5x5x64x128) [204800, bytes: 819200]
generator/g_h3/biases:0 (float32_ref 64) [64, bytes: 256]
generator/g_bn3/beta:0 (float32_ref 64) [64, bytes: 256]
generator/g_bn3/gamma:0 (float32_ref 64) [64, bytes: 256]
generator/g_h4/w:0 (float32_ref 5x5x3x64) [4800, bytes: 19200]
generator/g_h4/biases:0 (float32_ref 3) [3, bytes: 12]
discriminator/d_h0_conv/w:0 (float32_ref 5x5x3x64) [4800, bytes: 19200]
discriminator/d_h0_conv/biases:0 (float32_ref 64) [64, bytes: 256]
discriminator/d_h1_conv/w:0 (float32_ref 5x5x64x128) [204800, bytes: 819200]
discriminator/d_h1_conv/biases:0 (float32_ref 128) [128, bytes: 512]
discriminator/d_bn1/beta:0 (float32_ref 128) [128, bytes: 512]
discriminator/d_bn1/gamma:0 (float32_ref 128) [128, bytes: 512]
discriminator/d_h2_conv/w:0 (float32_ref 5x5x128x256) [819200, bytes: 3276800]
discriminator/d_h2_conv/biases:0 (float32_ref 256) [256, bytes: 1024]
discriminator/d_bn2/beta:0 (float32_ref 256) [256, bytes: 1024]
discriminator/d_bn2/gamma:0 (float32_ref 256) [256, bytes: 1024]
discriminator/d_h3_conv/w:0 (float32_ref 5x5x256x512) [3276800, bytes: 13107200]
discriminator/d_h3_conv/biases:0 (float32_ref 512) [512, bytes: 2048]
discriminator/d_bn3/beta:0 (float32_ref 512) [512, bytes: 2048]
discriminator/d_bn3/gamma:0 (float32_ref 512) [512, bytes: 2048]
discriminator/d_h4_lin/Matrix:0 (float32_ref 4608x1) [4608, bytes: 18432]
discriminator/d_h4_lin/bias:0 (float32_ref 1) [1, bytes: 4]
Total size of variables: 9086340
Total bytes of variables: 36345360
[*] Reading checkpoints...
[*] Failed to find a checkpoint
[!] Load failed...
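
The variable list above implies the network topology: the generator maps a 100-dimensional z through a 100x4608 linear layer (4608 = 3x3x512) and then four stride-2 5x5 transposed convolutions (512→256→128→64→3 channels), which gives a 48x48x3 image assuming "same" padding; the discriminator mirrors this with four stride-2 convolutions followed by a 4608x1 linear output. A hedged reconstruction of the generator using tf.layers (not necessarily the author's exact implementation):

```python
import tensorflow as tf  # TensorFlow 1.x

def generator(z, is_training=True):
    """Reconstruction implied by the variable shapes above:
    z (100-d) -> 3*3*512 dense -> 6x6x256 -> 12x12x128 -> 24x24x64 -> 48x48x3."""
    with tf.variable_scope("generator"):
        h = tf.layers.dense(z, 3 * 3 * 512)                  # g_h0_lin (100x4608)
        h = tf.reshape(h, [-1, 3, 3, 512])
        h = tf.nn.relu(tf.layers.batch_normalization(h, training=is_training))  # g_bn0
        for filters, name in [(256, "g_h1"), (128, "g_h2"), (64, "g_h3")]:
            h = tf.layers.conv2d_transpose(h, filters, 5, strides=2,
                                           padding="same", name=name)  # 5x5 deconv, stride 2
            h = tf.nn.relu(tf.layers.batch_normalization(h, training=is_training))
        h = tf.layers.conv2d_transpose(h, 3, 5, strides=2,
                                       padding="same", name="g_h4")    # final 48x48x3
        return tf.nn.tanh(h)                                 # pixel values in [-1, 1]
```
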
Epoch: [ 0] [   0/ 800] time: 14.9779, d_loss: 5.05348301, g_loss: 0.00766894
Epoch: [ 0] [   1/ 800] time: 28.0542, d_loss: 4.82881641, g_loss: 0.01297333
Epoch: [ 0] [   2/ 800] time: 40.2559, d_loss: 3.48951864, g_loss: 0.07677600
Epoch: [ 0] [   3/ 800] time: 53.2987, d_loss: 4.46177912, g_loss: 0.01912572
Epoch: [ 0] [   4/ 800] time: 66.6449, d_loss: 3.76898527, g_loss: 0.06732680
Epoch: [ 0] [   5/ 800] time: 80.2566, d_loss: 3.12670279, g_loss: 0.12792118
Epoch: [ 0] [   6/ 800] time: 94.6307, d_loss: 3.61706448, g_loss: 0.05859204
Epoch: [ 0] [   7/ 800] time: 108.9309, d_loss: 2.67836666, g_loss: 0.26883626
Epoch: [ 0] [   8/ 800] time: 122.1341, d_loss: 3.90734839, g_loss: 0.05641707
Epoch: [ 0] [   9/ 800] time: 135.7154, d_loss: 1.87382483, g_loss: 1.13096261
Epoch: [ 0] [  10/ 800] time: 148.9689, d_loss: 6.14149714, g_loss: 0.00330601
……
Epoch: [ 0] [  80/ 800] time: 1174.5982, d_loss: 2.07529640, g_loss: 0.39124209
Epoch: [ 0] [  81/ 800] time: 1192.4455, d_loss: 2.01820517, g_loss: 0.43641573
Epoch: [ 0] [  82/ 800] time: 1210.1161, d_loss: 2.14325690, g_loss: 0.41077107
Epoch: [ 0] [  83/ 800] time: 1226.0585, d_loss: 2.06479096, g_loss: 0.49251628
Epoch: [ 0] [  84/ 800] time: 1242.0143, d_loss: 2.23370504, g_loss: 0.43198395
Epoch: [ 0] [  85/ 800] time: 1257.2267, d_loss: 2.12133884, g_loss: 0.49163312
Epoch: [ 0] [  86/ 800] time: 1272.5151, d_loss: 2.12812853, g_loss: 0.45083773
Epoch: [ 0] [  87/ 800] time: 1289.7231, d_loss: 1.85827374, g_loss: 0.54915452
Epoch: [ 0] [  88/ 800] time: 1305.7893, d_loss: 1.75407577, g_loss: 0.59886670
Epoch: [ 0] [  89/ 800] time: 1324.8202, d_loss: 1.92280674, g_loss: 0.43640304
Epoch: [ 0] [  90/ 800] time: 1342.7920, d_loss: 1.90137959, g_loss: 0.45802355
Epoch: [ 0] [  91/ 800] time: 1361.9827, d_loss: 1.85933983, g_loss: 0.47512102
Epoch: [ 0] [  92/ 800] time: 1376.7853, d_loss: 1.83109379, g_loss: 0.53952801
Epoch: [ 0] [  93/ 800] time: 1391.9553, d_loss: 1.89624429, g_loss: 0.48314875
Epoch: [ 0] [  94/ 800] time: 1405.7957, d_loss: 1.95725751, g_loss: 0.50201762
Epoch: [ 0] [  95/ 800] time: 1419.8575, d_loss: 2.04467034, g_loss: 0.47200602
Epoch: [ 0] [  96/ 800] time: 1432.6235, d_loss: 1.86375761, g_loss: 0.63056684
Epoch: [ 0] [  97/ 800] time: 1446.1109, d_loss: 1.75833380, g_loss: 0.68587345
Epoch: [ 0] [  98/ 800] time: 1459.7021, d_loss: 1.61311054, g_loss: 0.56521410
Epoch: [ 0] [  99/ 800] time: 1473.4438, d_loss: 1.63083386, g_loss: 0.55198652
[Sample] d_loss: 1.56934571, g_loss: 0.58893394
Epoch: [ 0] [ 100/ 800] time: 1490.8011, d_loss: 2.02212882, g_loss: 0.38942879
Epoch: [ 0] [ 101/ 800] time: 1504.8573, d_loss: 2.08615398, g_loss: 0.41869015
Epoch: [ 0] [ 102/ 800] time: 1520.3561, d_loss: 1.94494843, g_loss: 0.52331185
Epoch: [ 0] [ 103/ 800] time: 1534.8911, d_loss: 1.68799090, g_loss: 0.57893807
Epoch: [ 0] [ 104/ 800] time: 1550.2059, d_loss: 1.73278153, g_loss: 0.55513334
Epoch: [ 0] [ 105/ 800] time: 1564.4857, d_loss: 1.66107357, g_loss: 0.58009803
Epoch: [ 0] [ 106/ 800] time: 1577.7365, d_loss: 1.62651777, g_loss: 0.68608046
Epoch: [ 0] [ 107/ 800] time: 1591.2906, d_loss: 1.68899119, g_loss: 0.64795619
Epoch: [ 0] [ 108/ 800] time: 1604.4354, d_loss: 1.64453030, g_loss: 0.66518682
Epoch: [ 0] [ 109/ 800] time: 1618.1593, d_loss: 1.56328249, g_loss: 0.66451979
Epoch: [ 0] [ 110/ 800] time: 1633.1294, d_loss: 1.51543558, g_loss: 0.77611113
……
Epoch: [ 0] [ 160/ 800] time: 2385.2872, d_loss: 1.92123890, g_loss: 0.45402479
Epoch: [ 0] [ 161/ 800] time: 2400.4567, d_loss: 1.78833413, g_loss: 0.53086638
Epoch: [ 0] [ 162/ 800] time: 2415.2647, d_loss: 1.57849348, g_loss: 0.71513641
Epoch: [ 0] [ 163/ 800] time: 2429.8398, d_loss: 1.67605543, g_loss: 0.65658081
Epoch: [ 0] [ 164/ 800] time: 2447.2616, d_loss: 1.41697562, g_loss: 0.69170052
Epoch: [ 0] [ 165/ 800] time: 2462.9209, d_loss: 1.37472379, g_loss: 0.81910974
Epoch: [ 0] [ 166/ 800] time: 2479.5134, d_loss: 1.52106404, g_loss: 0.65593958
Epoch: [ 0] [ 167/ 800] time: 2499.4337, d_loss: 1.48481750, g_loss: 0.56352514
Epoch: [ 0] [ 168/ 800] time: 2515.0022, d_loss: 1.51672626, g_loss: 0.61658454
Epoch: [ 0] [ 169/ 800] time: 2529.4996, d_loss: 1.60589409, g_loss: 0.63836646
Epoch: [ 0] [ 170/ 800] time: 2543.3981, d_loss: 1.44772625, g_loss: 0.65181255
……
Epoch: [ 0] [ 190/ 800] time: 2825.9758, d_loss: 1.47412062, g_loss: 0.54513580
Epoch: [ 0] [ 191/ 800] time: 2838.9723, d_loss: 1.55055904, g_loss: 0.58368361
Epoch: [ 0] [ 192/ 800] time: 2852.2630, d_loss: 1.59510207, g_loss: 0.66829801
Epoch: [ 0] [ 193/ 800] time: 2866.4205, d_loss: 1.46519923, g_loss: 0.61558247
Epoch: [ 0] [ 194/ 800] time: 2879.9993, d_loss: 1.32191777, g_loss: 0.80541551
Epoch: [ 0] [ 195/ 800] time: 2893.4340, d_loss: 1.01147175, g_loss: 1.06913197
Epoch: [ 0] [ 196/ 800] time: 2906.5733, d_loss: 0.93962598, g_loss: 0.83171976
Epoch: [ 0] [ 197/ 800] time: 2920.1912, d_loss: 1.17017913, g_loss: 0.67285419
Epoch: [ 0] [ 198/ 800] time: 2933.5356, d_loss: 1.59560084, g_loss: 0.56722575
Epoch: [ 0] [ 199/ 800] time: 2947.0078, d_loss: 1.79016471, g_loss: 0.63441348
[Sample] d_loss: 1.81597352, g_loss: 0.72201991
Epoch: [ 0] [ 200/ 800] time: 2962.8138, d_loss: 1.84360504, g_loss: 0.68355072
Epoch: [ 0] [ 201/ 800] time: 2976.0156, d_loss: 1.79623175, g_loss: 0.82725859
Epoch: [ 0] [ 202/ 800] time: 2990.1701, d_loss: 1.84564495, g_loss: 0.36759761
Epoch: [ 0] [ 203/ 800] time: 3003.2376, d_loss: 1.33034515, g_loss: 1.12043190
Epoch: [ 0] [ 204/ 800] time: 3016.9012, d_loss: 1.43244946, g_loss: 0.60710204
Epoch: [ 0] [ 205/ 800] time: 3031.2064, d_loss: 1.77543664, g_loss: 0.37925830
Epoch: [ 0] [ 206/ 800] time: 3044.6623, d_loss: 1.38716245, g_loss: 0.79690325
Epoch: [ 0] [ 207/ 800] time: 3058.6295, d_loss: 1.41732562, g_loss: 0.71504021
Epoch: [ 0] [ 208/ 800] time: 3075.1982, d_loss: 1.48065066, g_loss: 0.58098531
Epoch: [ 0] [ 209/ 800] time: 3092.2044, d_loss: 1.39409590, g_loss: 0.85311776
Epoch: [ 0] [ 210/ 800] time: 3106.7110, d_loss: 1.55829871, g_loss: 0.71159673
……
Epoch: [ 0] [ 250/ 800] time: 3706.7694, d_loss: 1.48207712, g_loss: 0.62254345
Epoch: [ 0] [ 251/ 800] time: 3722.4864, d_loss: 1.43726230, g_loss: 0.59676802
Epoch: [ 0] [ 252/ 800] time: 3739.1110, d_loss: 1.39565313, g_loss: 0.61483824
Epoch: [ 0] [ 253/ 800] time: 3753.9008, d_loss: 1.64175820, g_loss: 0.55743980
Epoch: [ 0] [ 254/ 800] time: 3768.4591, d_loss: 2.25337219, g_loss: 0.39440048
Epoch: [ 0] [ 255/ 800] time: 3784.3170, d_loss: 2.21880293, g_loss: 0.43557072
Epoch: [ 0] [ 256/ 800] time: 3799.8508, d_loss: 1.92927480, g_loss: 0.60396165
Epoch: [ 0] [ 257/ 800] time: 3819.0884, d_loss: 1.54789436, g_loss: 0.62363708
Epoch: [ 0] [ 258/ 800] time: 3835.9283, d_loss: 1.45292878, g_loss: 0.78123999
Epoch: [ 0] [ 259/ 800] time: 3851.7583, d_loss: 1.38242722, g_loss: 0.71697128
Epoch: [ 0] [ 260/ 800] time: 3867.8912, d_loss: 1.42830288, g_loss: 0.72657067
……
Epoch: [ 0] [ 290/ 800] time: 4347.6360, d_loss: 1.51859045, g_loss: 0.63133144
Epoch: [ 0] [ 291/ 800] time: 4362.6835, d_loss: 1.51562345, g_loss: 0.63072002
Epoch: [ 0] [ 292/ 800] time: 4376.7609, d_loss: 1.51966012, g_loss: 0.68376446
Epoch: [ 0] [ 293/ 800] time: 4391.5809, d_loss: 1.46159744, g_loss: 0.77321720
Epoch: [ 0] [ 294/ 800] time: 4405.9471, d_loss: 1.51635325, g_loss: 0.64838612
Epoch: [ 0] [ 295/ 800] time: 4421.1065, d_loss: 1.63491082, g_loss: 0.59127223
Epoch: [ 0] [ 296/ 800] time: 4436.1505, d_loss: 1.56633282, g_loss: 0.63173258
Epoch: [ 0] [ 297/ 800] time: 4451.4322, d_loss: 1.73018694, g_loss: 0.64139992
Epoch: [ 0] [ 298/ 800] time: 4466.8813, d_loss: 1.60332918, g_loss: 0.64779305
Epoch: [ 0] [ 299/ 800] time: 4482.4206, d_loss: 1.30365634, g_loss: 0.69317293
[Sample] d_loss: 1.52858722, g_loss: 0.66097701
Epoch: [ 0] [ 300/ 800] time: 4501.5354, d_loss: 1.54065537, g_loss: 0.61486077
Epoch: [ 0] [ 301/ 800] time: 4517.9595, d_loss: 1.40912437, g_loss: 0.62744296
Epoch: [ 0] [ 302/ 800] time: 4532.8548, d_loss: 1.83548975, g_loss: 0.48546115
Epoch: [ 0] [ 303/ 800] time: 4548.5219, d_loss: 1.78749907, g_loss: 0.54208493
Epoch: [ 0] [ 304/ 800] time: 4565.8423, d_loss: 1.59532309, g_loss: 0.70925272
Epoch: [ 0] [ 305/ 800] time: 4582.6995, d_loss: 1.55741489, g_loss: 0.69813800
Epoch: [ 0] [ 306/ 800] time: 4598.1985, d_loss: 1.46890306, g_loss: 0.65037167
Epoch: [ 0] [ 307/ 800] time: 4613.3077, d_loss: 1.47391725, g_loss: 0.66135353
Epoch: [ 0] [ 308/ 800] time: 4628.6944, d_loss: 1.47143006, g_loss: 0.68910688
Epoch: [ 0] [ 309/ 800] time: 4643.7088, d_loss: 1.49028301, g_loss: 0.67232418
Epoch: [ 0] [ 310/ 800] time: 4659.5347, d_loss: 1.59941697, g_loss: 0.67055005
……
Epoch: [ 0] [ 350/ 800] time: 5263.0389, d_loss: 1.52133381, g_loss: 0.66190934
Epoch: [ 0] [ 351/ 800] time: 5277.8903, d_loss: 1.50694644, g_loss: 0.57145911
Epoch: [ 0] [ 352/ 800] time: 5292.1188, d_loss: 1.70610642, g_loss: 0.49781984
Epoch: [ 0] [ 353/ 800] time: 5306.9107, d_loss: 1.77215934, g_loss: 0.58978939
Epoch: [ 0] [ 354/ 800] time: 5321.5938, d_loss: 1.74831009, g_loss: 0.67320079
Epoch: [ 0] [ 355/ 800] time: 5336.4302, d_loss: 1.59669852, g_loss: 0.68336225
Epoch: [ 0] [ 356/ 800] time: 5351.4221, d_loss: 1.46689534, g_loss: 0.84482712
Epoch: [ 0] [ 357/ 800] time: 5367.1353, d_loss: 1.38674009, g_loss: 0.78510588
Epoch: [ 0] [ 358/ 800] time: 5384.3114, d_loss: 1.30605173, g_loss: 0.85381281
Epoch: [ 0] [ 359/ 800] time: 5398.4569, d_loss: 1.29629779, g_loss: 0.81868672
Epoch: [ 0] [ 360/ 800] time: 5413.3162, d_loss: 1.21817279, g_loss: 0.80424130
Epoch: [ 0] [ 361/ 800] time: 5427.5560, d_loss: 1.35527205, g_loss: 0.67310977
Epoch: [ 0] [ 362/ 800] time: 5441.6695, d_loss: 1.40627885, g_loss: 0.67996454
Epoch: [ 0] [ 363/ 800] time: 5459.0163, d_loss: 1.33116567, g_loss: 0.73797810
Epoch: [ 0] [ 364/ 800] time: 5478.6128, d_loss: 1.29250467, g_loss: 0.82915306
Epoch: [ 0] [ 365/ 800] time: 5495.0862, d_loss: 1.37827444, g_loss: 0.73634720
Epoch: [ 0] [ 366/ 800] time: 5514.7329, d_loss: 1.35434794, g_loss: 0.60365015
Epoch: [ 0] [ 367/ 800] time: 5529.6542, d_loss: 1.53991985, g_loss: 0.62364745
Epoch: [ 0] [ 368/ 800] time: 5543.7427, d_loss: 1.72570002, g_loss: 0.62098628
Epoch: [ 0] [ 369/ 800] time: 5561.1792, d_loss: 1.73738861, g_loss: 0.55012739
Epoch: [ 0] [ 370/ 800] time: 5575.9147, d_loss: 1.58512247, g_loss: 0.55001098
Epoch: [ 0] [ 371/ 800] time: 5592.3616, d_loss: 1.59266281, g_loss: 0.69175625
……
Epoch: [ 0] [ 399/ 800] ……
Epoch: [ 0] [ 499/ 800] ……
Epoch: [ 0] [ 599/ 800] ……
Epoch: [ 0] [ 699/ 800] ……
Epoch: [ 0] [ 799/ 800] ……
Epoch: [ 1] [ 99/ 800]
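
For reference, the d_loss and g_loss columns in this log are the standard DCGAN sigmoid cross-entropy losses. A minimal sketch of how they are usually computed (the logit tensor names below are placeholders, not identifiers from the original code):

```python
import tensorflow as tf  # TensorFlow 1.x

def gan_losses(d_logits_real, d_logits_fake):
    """Standard DCGAN losses; matches the d_loss/g_loss columns above in form only,
    the exact implementation is assumed."""
    def xent(logits, labels):
        return tf.reduce_mean(
            tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))

    d_loss_real = xent(d_logits_real, tf.ones_like(d_logits_real))   # real images -> 1
    d_loss_fake = xent(d_logits_fake, tf.zeros_like(d_logits_fake))  # generated images -> 0
    d_loss = d_loss_real + d_loss_fake
    g_loss = xent(d_logits_fake, tf.ones_like(d_logits_fake))        # generator tries to fool D
    return d_loss, g_loss
```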
