Image Classification

In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll apply what you've learned to build convolutional, max pooling, dropout, and fully connected layers. At the end, you'll see the neural network's predictions on sample images.

Get the Data

Run the following cell to download the CIFAR-10 dataset for Python.

In [9]:
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile

cifar10_dataset_folder_path = 'cifar-10-batches-py'

# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
    tar_gz_path = floyd_cifar10_location
else:
    tar_gz_path = 'cifar-10-python.tar.gz'

class DLProgress(tqdm):
    last_block = 0

    def hook(self, block_num=1, block_size=1, total_size=None):
        self.total = total_size
        self.update((block_num - self.last_block) * block_size)
        self.last_block = block_num

if not isfile(tar_gz_path):
    with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
        urlretrieve(
            'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
            tar_gz_path,
            pbar.hook)

if not isdir(cifar10_dataset_folder_path):
    with tarfile.open(tar_gz_path) as tar:
        tar.extractall()
        tar.close()


tests.test_folder_path(cifar10_dataset_folder_path)
All files found!

Explore the Data

The dataset is broken into batches to prevent your machine from running out of memory while computing. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, and so on. Each batch contains labels and images that belong to one of the following classes:

  • airplane
  • automobile
  • bird
  • cat
  • deer
  • dog
  • frog
  • horse
  • ship
  • truck

Understanding a dataset is part of making predictions on the data. Explore the code cell below by changing batch_id and sample_id. batch_id is the ID of a batch in the dataset (1-5). sample_id is the ID of an image/label pair within that batch.

Ask yourself: "What are the possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Questions like these will help you preprocess the data and get more accurate predictions.

In [10]:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import helper
import numpy as np

# Explore the dataset
batch_id = 1
sample_id = 4
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
Stats of batch 1:
Samples: 10000
Label Counts: {0: 1005, 1: 974, 2: 1032, 3: 1016, 4: 999, 5: 937, 6: 1030, 7: 1001, 8: 1025, 9: 981}
First 20 Labels: [6, 9, 9, 4, 1, 1, 2, 7, 8, 3, 4, 7, 7, 2, 9, 9, 9, 3, 2, 6]

Example of Image 4:
Image - Min Value: 0 Max Value: 254
Image - Shape: (32, 32, 3)
Label - Label Id: 1 Name: automobile

Implement Preprocessing Functions

Normalize

In the cell below, implement the normalize function to take image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The returned object should have the same shape as x.

In [11]:
def normalize(x):
    """
    Normalize a list of sample image data in the range of 0 to 1
    : x: List of image data.  The image shape is (32, 32, 3)
    : return: Numpy array of normalize data
    """
    # TODO: Implement Function
    # Min-max scale the image data so all values fall in the range [0, 1]
    return (x - x.min()) / (x.max() - x.min())


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_normalize(normalize)
Tests Passed
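
As an optional sanity check (not part of the project tests; the random batch below is made up), you can confirm that normalize maps an arbitrary image batch into [0, 1] without changing its shape:

import numpy as np

sample_images = np.random.randint(0, 256, size=(2, 32, 32, 3)).astype(np.float64)
normalized_images = normalize(sample_images)
# Scaling against the batch's own min/max means the result spans exactly [0.0, 1.0]
print(normalized_images.min(), normalized_images.max(), normalized_images.shape)
# Expected: 0.0 1.0 (2, 32, 32, 3)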

One-hot Encode

Just like the previous code cell, you'll implement a preprocessing function. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded Numpy array. The possible values of the labels are 0 to 9. The one-hot encoding function should return the same encoding for each value on every call to one_hot_encode. Make sure to save the map of encodings outside the function.

Hint: Don't reinvent the wheel.

In [14]:
from sklearn.preprocessing import LabelBinarizer

# Fit the encoder once, outside the function, so every call uses the same label-to-vector mapping
label_binarizer = LabelBinarizer().fit(range(10))

def one_hot_encode(x):
    """
    One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
    : x: List of sample Labels
    : return: Numpy array of one-hot encoded labels
    """
    # TODO: Implement Function
    return label_binarizer.transform(x)


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_one_hot_encode(one_hot_encode)
Tests Passed
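
As a quick, optional illustration (not part of the tests): each label id maps to the matching row of a 10x10 identity matrix, and repeated calls return the same encoding:

print(one_hot_encode([0, 3, 9]))
# [[1 0 0 0 0 0 0 0 0 0]
#  [0 0 0 1 0 0 0 0 0 0]
#  [0 0 0 0 0 0 0 0 0 1]]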

Randomize Data

As you saw when exploring the data above, the order of the samples is already randomized. Randomizing it again wouldn't hurt, but it isn't necessary for this dataset. If you ever do need to reshuffle, see the sketch below.
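
The key point when reshuffling is to permute the features and labels together, so every image stays paired with its own label. A minimal sketch, assuming features and labels are Numpy arrays of equal length:

import numpy as np

def shuffle_in_unison(features, labels):
    # One shared permutation keeps each image aligned with its label
    permutation = np.random.permutation(len(features))
    return features[permutation], labels[permutation]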

Preprocess All the Data and Save It

Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.

In [15]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)

Checkpoint

This is your first checkpoint. If you ever decide to come back to this notebook or have to restart it, you can start from here. The preprocessed data has been saved to disk.

In [16]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import pickle
import problem_unittests as tests
import helper

# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))

Build the Network

For the neural network, you'll build each layer as a function. Most of the code you've seen so far has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and to catch simple mistakes with our unit tests before you submit the project.

Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut for this part of the project. For the next couple of problems, you can build each layer with classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages, except for the layer in the "Convolution and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's layer abstractions, so it's easy to pick up.

However, if you want to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones in TF Layers! For example, instead of the TF Layers version of conv2d, tf.layers.conv2d, you can use the TF Neural Network version of conv2d, tf.nn.conv2d.

Let's get started!

Input

The neural network needs to read the image data, the one-hot encoded labels, and the dropout keep probability. Implement the following functions:

  • Implement neural_net_image_input
    • Return a TF Placeholder
    • Set the shape using image_shape with the batch size set to None
    • Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder
  • Implement neural_net_label_input
    • Return a TF Placeholder
    • Set the shape using n_classes with the batch size set to None
    • Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder
  • Implement neural_net_keep_prob_input
    • Return a TF Placeholder for the dropout keep probability
    • Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder

These names will be used at the end of the project to load the saved model.

Note: None for a shape in TensorFlow means the dimension can be a dynamic size.

In [18]:
import tensorflow as tf

def neural_net_image_input(image_shape):
    """
    Return a Tensor for a batch of image input
    : image_shape: Shape of the images
    : return: Tensor for image input.
    """
    # TODO: Implement Function
    shape=[None,*image_shape]
    return tf.placeholder(tf.float32,shape=shape,name="x")


def neural_net_label_input(n_classes):
    """
    Return a Tensor for a batch of label input
    : n_classes: Number of classes
    : return: Tensor for label input.
    """
    # TODO: Implement Function
    shape=[None,n_classes]
    return tf.placeholder(tf.float32,shape=shape,name="y")


def neural_net_keep_prob_input():
    """
    Return a Tensor for keep probability
    : return: Tensor for keep probability.
    """
    # TODO: Implement Function
    return tf.placeholder(tf.float32,name="keep_prob")


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
Image Input Tests Passed.
Label Input Tests Passed.
Keep Prob Tests Passed.
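
As an optional quick check (it resets the default graph), you can confirm the placeholder names and shapes that the rest of the notebook relies on:

tf.reset_default_graph()
print(neural_net_image_input((32, 32, 3)))  # Tensor("x:0", shape=(?, 32, 32, 3), dtype=float32)
print(neural_net_label_input(10))           # Tensor("y:0", shape=(?, 10), dtype=float32)
print(neural_net_keep_prob_input())         # Tensor("keep_prob:0", dtype=float32)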

Convolution and Max Pooling Layer

Convolution layers have a lot of success with images. For this code cell, implement the function conv2d_maxpool to apply convolution and then max pooling:

  • Create the weight and bias using conv_ksize, conv_num_outputs, and the shape of x_tensor.
  • Apply a convolution to x_tensor using the weight and conv_strides.
    • We recommend you use same padding, but you're welcome to use any other padding.
  • Add the bias
  • Add a nonlinear activation to the convolution
  • Apply max pooling using pool_ksize and pool_strides
    • We recommend you use same padding, but you're welcome to use any other padding.

Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.

In [19]:
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
    """
    Apply convolution then max pooling to x_tensor
    :param x_tensor: TensorFlow Tensor
    :param conv_num_outputs: Number of outputs for the convolutional layer
    :param conv_ksize: kernal size 2-D Tuple for the convolutional layer
    :param conv_strides: Stride 2-D Tuple for convolution
    :param pool_ksize: kernal size 2-D Tuple for pool
    :param pool_strides: Stride 2-D Tuple for pool
    : return: A tensor that represents convolution and max pooling of x_tensor
    """
    # TODO: Implement Function
    input_channels=x_tensor.get_shape().as_list()[-1]
    f_w=tf.Variable(tf.truncated_normal([conv_ksize[0],conv_ksize[1],input_channels,conv_num_outputs],stddev=0.1,dtype=tf.float32))
    f_b=tf.Variable(tf.zeros(conv_num_outputs))
    f_strides=[1,conv_strides[0],conv_strides[1],1]
    conv_layer=tf.nn.conv2d(x_tensor,f_w,f_strides,'SAME')
    conv_layer=tf.nn.bias_add(conv_layer,f_b)
    
    relu_layer=tf.nn.relu(conv_layer)
    
    p_size=[1,pool_ksize[0],pool_ksize[1],1]
    p_strides=[1,pool_strides[0],pool_strides[1],1]
    
    pool_layer=tf.nn.max_pool(relu_layer,p_size,p_strides,'SAME')
    
    return pool_layer 


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_con_pool(conv2d_maxpool)
Tests Passed
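
An optional shape check with arbitrary sizes (it only adds nodes to the default graph): with 'SAME' padding, a convolution stride of 1 keeps the 32x32 spatial size, and 2x2 pooling with stride 2 halves it:

check_input = tf.placeholder(tf.float32, [None, 32, 32, 3])
check_conv = conv2d_maxpool(check_input, 16, (3, 3), (1, 1), (2, 2), (2, 2))
print(check_conv.get_shape().as_list())  # Expected: [None, 16, 16, 16]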

Flatten Layer

Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.

In [20]:
import numpy as np
def flatten(x_tensor):
    """
    Flatten x_tensor to (Batch Size, Flattened Image Size)
    : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
    : return: A tensor of size (Batch Size, Flattened Image Size).
    """
    # TODO: Implement Function
    shape = x_tensor.get_shape().as_list()     
    dim = np.prod(shape[1:])         
    return tf.reshape(x_tensor,[-1,dim])


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_flatten(flatten)
Tests Passed
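
Optional shape check with arbitrary dimensions: a (Batch Size, 8, 8, 32) tensor should flatten to (Batch Size, 8*8*32) = (Batch Size, 2048):

check_4d = tf.placeholder(tf.float32, [None, 8, 8, 32])
print(flatten(check_4d).get_shape().as_list())  # Expected: [None, 2048]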

Fully-Connected Layer

Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.

In [21]:
def fully_conn(x_tensor, num_outputs):
    """
    Apply a fully connected layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of output that the new tensor should be.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    # TODO: Implement Function
    f_w=tf.Variable(tf.truncated_normal([x_tensor.get_shape().as_list()[-1],num_outputs],stddev=0.1,dtype=tf.float32))
    f_b=tf.Variable(tf.zeros(num_outputs))
    f_output=tf.nn.relu(tf.nn.bias_add(tf.matmul(x_tensor,f_w),f_b))
    
    return f_output


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_fully_conn(fully_conn)
Tests Passed
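
Optional shape check with arbitrary sizes: the fully connected layer maps (Batch Size, 2048) to (Batch Size, num_outputs):

check_2d = tf.placeholder(tf.float32, [None, 2048])
print(fully_conn(check_2d, 128).get_shape().as_list())  # Expected: [None, 128]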

Output Layer

Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.

Note: Activation, softmax, or cross entropy should not be applied to this layer.

In [22]:
def output(x_tensor, num_outputs):
    """
    Apply a output layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of output that the new tensor should be.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    # TODO: Implement Function
    o_w=tf.Variable(tf.truncated_normal([x_tensor.get_shape().as_list()[-1],num_outputs],stddev=0.1,dtype=tf.float32))
    o_b=tf.Variable(tf.zeros(num_outputs))
    o_output=tf.nn.bias_add(tf.matmul(x_tensor,o_w),o_b)
    return o_output


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_output(output)
Tests Passed

Create Convolutional Model

Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to build this model:

  • Apply 1, 2, or 3 Convolution and Max Pool layers
  • Apply a Flatten Layer
  • Apply 1, 2, or 3 Fully Connected Layers
  • Apply an Output Layer
  • Return the output
  • Apply TensorFlow's Dropout to one or more layers in the model using keep_prob
In [80]:
def conv_net(x, keep_prob):
    """
    Create a convolutional neural network model
    : x: Placeholder tensor that holds image data.
    : keep_prob: Placeholder tensor that hold dropout keep probability.
    : return: Tensor that represents logits
    """
    # TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
    #    Play around with different number of outputs, kernel size and stride
    # Function Definition from Above:
    #    conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
    conv_layer1=conv2d_maxpool(x,32,(2,2),(1,1),(2,2),(1,1))
    d_layer1=tf.nn.dropout(conv_layer1,keep_prob)
    conv_layer2=conv2d_maxpool(d_layer1,64,(4,4),(1,1),(2,2),(1,1))
    d_layer2=tf.nn.dropout(conv_layer2,keep_prob)
    # TODO: Apply a Flatten Layer
    # Function Definition from Above:
    #   flatten(x_tensor)
    flatten_layer=flatten(d_layer2)
    
    # TODO: Apply 1, 2, or 3 Fully Connected Layers
    #    Play around with different number of outputs
    # Function Definition from Above:
    #   fully_conn(x_tensor, num_outputs)
    full_layer1=fully_conn(flatten_layer,128)
    d_layer3=tf.nn.dropout(full_layer1,keep_prob)
    full_layer2=fully_conn(d_layer3,64)
    
    # TODO: Apply an Output Layer
    #    Set this to the number of classes
    # Function Definition from Above:
    #   output(x_tensor, num_outputs)
    output_layer=output(full_layer2,10)
    
    # TODO: return output
    return output_layer


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""

##############################
## Build the Neural Network ##
##############################

# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()

# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()

# Model
logits = conv_net(x, keep_prob)

# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')

# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)

# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')

tests.test_conv_net(conv_net)
Neural Network Built!
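
As an optional, rough gauge of model size (run while the graph built above is still the default graph), you can count the trainable parameters; the exact number depends on the architecture you chose in conv_net:

total_parameters = sum(np.prod(v.get_shape().as_list()) for v in tf.trainable_variables())
print('Trainable parameters: {}'.format(int(total_parameters)))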

Train the Neural Network

Single Optimization

Implement the function train_neural_network to do a single optimization step. The optimization should use optimizer to optimize in session with a feed_dict of the following:

  • x for image input
  • y for labels
  • keep_prob for the dropout keep probability

This function will be called for each batch, so tf.global_variables_initializer() has already been called.

Note: Nothing needs to be returned. This function is only optimizing the neural network.

In [81]:
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
    """
    Optimize the session on a batch of images and labels
    : session: Current TensorFlow session
    : optimizer: TensorFlow optimizer function
    : keep_probability: keep probability
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    """
    # TODO: Implement Function
    session.run(optimizer,feed_dict={keep_prob:keep_probability,x:feature_batch,y:label_batch})


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_train_nn(train_neural_network)
Tests Passed

Show Stats

Implement the function print_stats to print the loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate the validation accuracy. Use a keep probability of 1.0 when calculating the loss and validation accuracy.

In [82]:
def print_stats(session, feature_batch, label_batch, cost, accuracy):
    """
    Print information about loss and validation accuracy
    : session: Current TensorFlow session
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    : cost: TensorFlow cost function
    : accuracy: TensorFlow accuracy function
    """
    # TODO: Implement Function
    # Use the session passed in (not a global), with dropout disabled (keep_prob = 1.0)
    batch_cost = session.run(cost, feed_dict={keep_prob: 1.0, x: feature_batch, y: label_batch})
    valid_acc = session.run(accuracy, feed_dict={keep_prob: 1.0, x: valid_features, y: valid_labels})
    print('cost:{:.2f},accuracy:{:.2f}'.format(batch_cost, valid_acc))

Hyperparameters

Tune the following parameters:

  • Set epochs to the number of iterations until the network stops learning or starts overfitting
  • Set batch_size to the highest number that your machine has memory for. Most people set it to a common memory size:

    • 64
    • 128
    • 256
    • ...
  • Set keep_probability to the probability of keeping a node when using dropout
In [91]:
# TODO: Tune Parameters
epochs = 50
batch_size = 256
keep_probability = 0.5

Train on a Single CIFAR-10 Batch

Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This saves time while you iterate on the model to get a better accuracy. Once the final validation accuracy reaches 50% or greater, run the model on all the data in the next section.

In [92]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
print('Checking the Training on a Single Batch...')
config=tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction=0.9
with tf.device('/gpu:1'):
    with tf.Session(config=config) as sess:
        # Initializing the variables
        sess.run(tf.global_variables_initializer())

        # Training cycle
        for epoch in range(epochs):
            batch_i = 1
            for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
                train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
            print('Epoch {:>2}, CIFAR-10 Batch {}:  '.format(epoch + 1, batch_i), end='')
            print_stats(sess, batch_features, batch_labels, cost, accuracy)
Checking the Training on a Single Batch...
Epoch  1, CIFAR-10 Batch 1:  cost:2.31,accuracy:0.10
Epoch  2, CIFAR-10 Batch 1:  cost:2.30,accuracy:0.16
Epoch  3, CIFAR-10 Batch 1:  cost:2.25,accuracy:0.20
Epoch  4, CIFAR-10 Batch 1:  cost:2.24,accuracy:0.19
Epoch  5, CIFAR-10 Batch 1:  cost:2.13,accuracy:0.27
Epoch  6, CIFAR-10 Batch 1:  cost:2.04,accuracy:0.32
Epoch  7, CIFAR-10 Batch 1:  cost:1.97,accuracy:0.32
Epoch  8, CIFAR-10 Batch 1:  cost:1.88,accuracy:0.36
Epoch  9, CIFAR-10 Batch 1:  cost:1.83,accuracy:0.38
Epoch 10, CIFAR-10 Batch 1:  cost:1.78,accuracy:0.39
Epoch 11, CIFAR-10 Batch 1:  cost:1.75,accuracy:0.40
Epoch 12, CIFAR-10 Batch 1:  cost:1.68,accuracy:0.42
Epoch 13, CIFAR-10 Batch 1:  cost:1.64,accuracy:0.44
Epoch 14, CIFAR-10 Batch 1:  cost:1.60,accuracy:0.44
Epoch 15, CIFAR-10 Batch 1:  cost:1.58,accuracy:0.43
Epoch 16, CIFAR-10 Batch 1:  cost:1.53,accuracy:0.45
Epoch 17, CIFAR-10 Batch 1:  cost:1.46,accuracy:0.48
Epoch 18, CIFAR-10 Batch 1:  cost:1.43,accuracy:0.47
Epoch 19, CIFAR-10 Batch 1:  cost:1.40,accuracy:0.48
Epoch 20, CIFAR-10 Batch 1:  cost:1.38,accuracy:0.48
Epoch 21, CIFAR-10 Batch 1:  cost:1.33,accuracy:0.49
Epoch 22, CIFAR-10 Batch 1:  cost:1.33,accuracy:0.49
Epoch 23, CIFAR-10 Batch 1:  cost:1.34,accuracy:0.47
Epoch 24, CIFAR-10 Batch 1:  cost:1.23,accuracy:0.51
Epoch 25, CIFAR-10 Batch 1:  cost:1.15,accuracy:0.52
Epoch 26, CIFAR-10 Batch 1:  cost:1.14,accuracy:0.52
Epoch 27, CIFAR-10 Batch 1:  cost:1.15,accuracy:0.52
Epoch 28, CIFAR-10 Batch 1:  cost:1.12,accuracy:0.52
Epoch 29, CIFAR-10 Batch 1:  cost:1.06,accuracy:0.53
Epoch 30, CIFAR-10 Batch 1:  cost:1.06,accuracy:0.54
Epoch 31, CIFAR-10 Batch 1:  cost:1.02,accuracy:0.54
Epoch 32, CIFAR-10 Batch 1:  cost:1.03,accuracy:0.54
Epoch 33, CIFAR-10 Batch 1:  cost:0.97,accuracy:0.53
Epoch 34, CIFAR-10 Batch 1:  cost:0.96,accuracy:0.54
Epoch 35, CIFAR-10 Batch 1:  cost:0.95,accuracy:0.54
Epoch 36, CIFAR-10 Batch 1:  cost:0.91,accuracy:0.54
Epoch 37, CIFAR-10 Batch 1:  cost:0.89,accuracy:0.56
Epoch 38, CIFAR-10 Batch 1:  cost:0.82,accuracy:0.54
Epoch 39, CIFAR-10 Batch 1:  cost:0.84,accuracy:0.55
Epoch 40, CIFAR-10 Batch 1:  cost:0.79,accuracy:0.56
Epoch 41, CIFAR-10 Batch 1:  cost:0.76,accuracy:0.56
Epoch 42, CIFAR-10 Batch 1:  cost:0.75,accuracy:0.56
Epoch 43, CIFAR-10 Batch 1:  cost:0.72,accuracy:0.56
Epoch 44, CIFAR-10 Batch 1:  cost:0.71,accuracy:0.56
Epoch 45, CIFAR-10 Batch 1:  cost:0.70,accuracy:0.56
Epoch 46, CIFAR-10 Batch 1:  cost:0.67,accuracy:0.56
Epoch 47, CIFAR-10 Batch 1:  cost:0.66,accuracy:0.56
Epoch 48, CIFAR-10 Batch 1:  cost:0.64,accuracy:0.57
Epoch 49, CIFAR-10 Batch 1:  cost:0.62,accuracy:0.56
Epoch 50, CIFAR-10 Batch 1:  cost:0.61,accuracy:0.56

Fully Train the Model

Now that you got a decent accuracy with a single CIFAR-10 batch, try it with all five batches.

In [93]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_model_path = './image_classification'

print('Training...')
with tf.Session() as sess:
    # Initializing the variables
    sess.run(tf.global_variables_initializer())
    
    # Training cycle
    for epoch in range(epochs):
        # Loop over all batches
        n_batches = 5
        for batch_i in range(1, n_batches + 1):
            for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
                train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
            print('Epoch {:>2}, CIFAR-10 Batch {}:  '.format(epoch + 1, batch_i), end='')
            print_stats(sess, batch_features, batch_labels, cost, accuracy)
            
    # Save Model
    saver = tf.train.Saver()
    save_path = saver.save(sess, save_model_path)
Training...
Epoch  1, CIFAR-10 Batch 1:  cost:2.30,accuracy:0.10
Epoch  1, CIFAR-10 Batch 2:  cost:2.31,accuracy:0.10
Epoch  1, CIFAR-10 Batch 3:  cost:2.30,accuracy:0.10
Epoch  1, CIFAR-10 Batch 4:  cost:2.30,accuracy:0.11
Epoch  1, CIFAR-10 Batch 5:  cost:2.27,accuracy:0.15
Epoch  2, CIFAR-10 Batch 1:  cost:2.24,accuracy:0.20
Epoch  2, CIFAR-10 Batch 2:  cost:2.19,accuracy:0.21
Epoch  2, CIFAR-10 Batch 3:  cost:2.04,accuracy:0.22
Epoch  2, CIFAR-10 Batch 4:  cost:2.06,accuracy:0.26
Epoch  2, CIFAR-10 Batch 5:  cost:2.05,accuracy:0.23
Epoch  3, CIFAR-10 Batch 1:  cost:2.01,accuracy:0.31
Epoch  3, CIFAR-10 Batch 2:  cost:1.87,accuracy:0.30
Epoch  3, CIFAR-10 Batch 3:  cost:1.78,accuracy:0.32
Epoch  3, CIFAR-10 Batch 4:  cost:1.92,accuracy:0.35
Epoch  3, CIFAR-10 Batch 5:  cost:1.84,accuracy:0.35
Epoch  4, CIFAR-10 Batch 1:  cost:1.86,accuracy:0.40
Epoch  4, CIFAR-10 Batch 2:  cost:1.68,accuracy:0.40
Epoch  4, CIFAR-10 Batch 3:  cost:1.63,accuracy:0.41
Epoch  4, CIFAR-10 Batch 4:  cost:1.75,accuracy:0.44
Epoch  4, CIFAR-10 Batch 5:  cost:1.74,accuracy:0.40
Epoch  5, CIFAR-10 Batch 1:  cost:1.79,accuracy:0.44
Epoch  5, CIFAR-10 Batch 2:  cost:1.60,accuracy:0.44
Epoch  5, CIFAR-10 Batch 3:  cost:1.52,accuracy:0.44
Epoch  5, CIFAR-10 Batch 4:  cost:1.65,accuracy:0.45
Epoch  5, CIFAR-10 Batch 5:  cost:1.60,accuracy:0.46
Epoch  6, CIFAR-10 Batch 1:  cost:1.67,accuracy:0.46
Epoch  6, CIFAR-10 Batch 2:  cost:1.49,accuracy:0.46
Epoch  6, CIFAR-10 Batch 3:  cost:1.41,accuracy:0.47
Epoch  6, CIFAR-10 Batch 4:  cost:1.55,accuracy:0.51
Epoch  6, CIFAR-10 Batch 5:  cost:1.57,accuracy:0.50
Epoch  7, CIFAR-10 Batch 1:  cost:1.50,accuracy:0.51
Epoch  7, CIFAR-10 Batch 2:  cost:1.45,accuracy:0.47
Epoch  7, CIFAR-10 Batch 3:  cost:1.36,accuracy:0.51
Epoch  7, CIFAR-10 Batch 4:  cost:1.46,accuracy:0.51
Epoch  7, CIFAR-10 Batch 5:  cost:1.49,accuracy:0.51
Epoch  8, CIFAR-10 Batch 1:  cost:1.45,accuracy:0.51
Epoch  8, CIFAR-10 Batch 2:  cost:1.37,accuracy:0.52
Epoch  8, CIFAR-10 Batch 3:  cost:1.34,accuracy:0.52
Epoch  8, CIFAR-10 Batch 4:  cost:1.36,accuracy:0.54
Epoch  8, CIFAR-10 Batch 5:  cost:1.42,accuracy:0.54
Epoch  9, CIFAR-10 Batch 1:  cost:1.36,accuracy:0.53
Epoch  9, CIFAR-10 Batch 2:  cost:1.33,accuracy:0.52
Epoch  9, CIFAR-10 Batch 3:  cost:1.25,accuracy:0.55
Epoch  9, CIFAR-10 Batch 4:  cost:1.37,accuracy:0.56
Epoch  9, CIFAR-10 Batch 5:  cost:1.34,accuracy:0.54
Epoch 10, CIFAR-10 Batch 1:  cost:1.28,accuracy:0.56
Epoch 10, CIFAR-10 Batch 2:  cost:1.31,accuracy:0.54
Epoch 10, CIFAR-10 Batch 3:  cost:1.25,accuracy:0.56
Epoch 10, CIFAR-10 Batch 4:  cost:1.31,accuracy:0.56
Epoch 10, CIFAR-10 Batch 5:  cost:1.29,accuracy:0.56
Epoch 11, CIFAR-10 Batch 1:  cost:1.22,accuracy:0.55
Epoch 11, CIFAR-10 Batch 2:  cost:1.20,accuracy:0.53
Epoch 11, CIFAR-10 Batch 3:  cost:1.15,accuracy:0.57
Epoch 11, CIFAR-10 Batch 4:  cost:1.23,accuracy:0.58
Epoch 11, CIFAR-10 Batch 5:  cost:1.26,accuracy:0.57
Epoch 12, CIFAR-10 Batch 1:  cost:1.19,accuracy:0.55
Epoch 12, CIFAR-10 Batch 2:  cost:1.18,accuracy:0.56
Epoch 12, CIFAR-10 Batch 3:  cost:1.11,accuracy:0.58
Epoch 12, CIFAR-10 Batch 4:  cost:1.22,accuracy:0.60
Epoch 12, CIFAR-10 Batch 5:  cost:1.17,accuracy:0.59
Epoch 13, CIFAR-10 Batch 1:  cost:1.16,accuracy:0.57
Epoch 13, CIFAR-10 Batch 2:  cost:1.14,accuracy:0.57
Epoch 13, CIFAR-10 Batch 3:  cost:1.12,accuracy:0.57
Epoch 13, CIFAR-10 Batch 4:  cost:1.12,accuracy:0.60
Epoch 13, CIFAR-10 Batch 5:  cost:1.10,accuracy:0.60
Epoch 14, CIFAR-10 Batch 1:  cost:1.09,accuracy:0.58
Epoch 14, CIFAR-10 Batch 2:  cost:1.10,accuracy:0.57
Epoch 14, CIFAR-10 Batch 3:  cost:1.08,accuracy:0.57
Epoch 14, CIFAR-10 Batch 4:  cost:1.11,accuracy:0.61
Epoch 14, CIFAR-10 Batch 5:  cost:1.08,accuracy:0.60
Epoch 15, CIFAR-10 Batch 1:  cost:1.07,accuracy:0.58
Epoch 15, CIFAR-10 Batch 2:  cost:1.06,accuracy:0.58
Epoch 15, CIFAR-10 Batch 3:  cost:0.99,accuracy:0.58
Epoch 15, CIFAR-10 Batch 4:  cost:1.04,accuracy:0.59
Epoch 15, CIFAR-10 Batch 5:  cost:1.11,accuracy:0.59
Epoch 16, CIFAR-10 Batch 1:  cost:1.00,accuracy:0.60
Epoch 16, CIFAR-10 Batch 2:  cost:1.07,accuracy:0.58
Epoch 16, CIFAR-10 Batch 3:  cost:1.00,accuracy:0.59
Epoch 16, CIFAR-10 Batch 4:  cost:0.98,accuracy:0.61
Epoch 16, CIFAR-10 Batch 5:  cost:1.01,accuracy:0.60
Epoch 17, CIFAR-10 Batch 1:  cost:1.00,accuracy:0.58
Epoch 17, CIFAR-10 Batch 2:  cost:0.98,accuracy:0.60
Epoch 17, CIFAR-10 Batch 3:  cost:0.92,accuracy:0.60
Epoch 17, CIFAR-10 Batch 4:  cost:0.97,accuracy:0.60
Epoch 17, CIFAR-10 Batch 5:  cost:1.02,accuracy:0.60
Epoch 18, CIFAR-10 Batch 1:  cost:0.98,accuracy:0.58
Epoch 18, CIFAR-10 Batch 2:  cost:0.99,accuracy:0.57
Epoch 18, CIFAR-10 Batch 3:  cost:0.89,accuracy:0.60
Epoch 18, CIFAR-10 Batch 4:  cost:0.96,accuracy:0.61
Epoch 18, CIFAR-10 Batch 5:  cost:0.97,accuracy:0.62
Epoch 19, CIFAR-10 Batch 1:  cost:0.95,accuracy:0.61
Epoch 19, CIFAR-10 Batch 2:  cost:0.98,accuracy:0.60
Epoch 19, CIFAR-10 Batch 3:  cost:0.87,accuracy:0.61
Epoch 19, CIFAR-10 Batch 4:  cost:0.88,accuracy:0.62
Epoch 19, CIFAR-10 Batch 5:  cost:0.97,accuracy:0.61
Epoch 20, CIFAR-10 Batch 1:  cost:0.91,accuracy:0.62
Epoch 20, CIFAR-10 Batch 2:  cost:0.95,accuracy:0.61
Epoch 20, CIFAR-10 Batch 3:  cost:0.82,accuracy:0.63
Epoch 20, CIFAR-10 Batch 4:  cost:0.85,accuracy:0.63
Epoch 20, CIFAR-10 Batch 5:  cost:0.94,accuracy:0.62
Epoch 21, CIFAR-10 Batch 1:  cost:0.85,accuracy:0.61
Epoch 21, CIFAR-10 Batch 2:  cost:0.89,accuracy:0.62
Epoch 21, CIFAR-10 Batch 3:  cost:0.83,accuracy:0.63
Epoch 21, CIFAR-10 Batch 4:  cost:0.81,accuracy:0.65
Epoch 21, CIFAR-10 Batch 5:  cost:0.87,accuracy:0.63
Epoch 22, CIFAR-10 Batch 1:  cost:0.86,accuracy:0.63
Epoch 22, CIFAR-10 Batch 2:  cost:0.86,accuracy:0.63
Epoch 22, CIFAR-10 Batch 3:  cost:0.80,accuracy:0.63
Epoch 22, CIFAR-10 Batch 4:  cost:0.78,accuracy:0.64
Epoch 22, CIFAR-10 Batch 5:  cost:0.84,accuracy:0.63
Epoch 23, CIFAR-10 Batch 1:  cost:0.80,accuracy:0.63
Epoch 23, CIFAR-10 Batch 2:  cost:0.81,accuracy:0.64
Epoch 23, CIFAR-10 Batch 3:  cost:0.82,accuracy:0.63
Epoch 23, CIFAR-10 Batch 4:  cost:0.73,accuracy:0.65
Epoch 23, CIFAR-10 Batch 5:  cost:0.79,accuracy:0.65
Epoch 24, CIFAR-10 Batch 1:  cost:0.77,accuracy:0.65
Epoch 24, CIFAR-10 Batch 2:  cost:0.85,accuracy:0.64
Epoch 24, CIFAR-10 Batch 3:  cost:0.73,accuracy:0.65
Epoch 24, CIFAR-10 Batch 4:  cost:0.74,accuracy:0.65
Epoch 24, CIFAR-10 Batch 5:  cost:0.75,accuracy:0.65
Epoch 25, CIFAR-10 Batch 1:  cost:0.72,accuracy:0.64
Epoch 25, CIFAR-10 Batch 2:  cost:0.79,accuracy:0.62
Epoch 25, CIFAR-10 Batch 3:  cost:0.75,accuracy:0.64
Epoch 25, CIFAR-10 Batch 4:  cost:0.65,accuracy:0.66
Epoch 25, CIFAR-10 Batch 5:  cost:0.75,accuracy:0.65
Epoch 26, CIFAR-10 Batch 1:  cost:0.77,accuracy:0.63
Epoch 26, CIFAR-10 Batch 2:  cost:0.77,accuracy:0.63
Epoch 26, CIFAR-10 Batch 3:  cost:0.64,accuracy:0.66
Epoch 26, CIFAR-10 Batch 4:  cost:0.64,accuracy:0.66
Epoch 26, CIFAR-10 Batch 5:  cost:0.70,accuracy:0.65
Epoch 27, CIFAR-10 Batch 1:  cost:0.70,accuracy:0.65
Epoch 27, CIFAR-10 Batch 2:  cost:0.71,accuracy:0.64
Epoch 27, CIFAR-10 Batch 3:  cost:0.59,accuracy:0.67
Epoch 27, CIFAR-10 Batch 4:  cost:0.62,accuracy:0.66
Epoch 27, CIFAR-10 Batch 5:  cost:0.67,accuracy:0.66
Epoch 28, CIFAR-10 Batch 1:  cost:0.66,accuracy:0.65
Epoch 28, CIFAR-10 Batch 2:  cost:0.66,accuracy:0.64
Epoch 28, CIFAR-10 Batch 3:  cost:0.56,accuracy:0.67
Epoch 28, CIFAR-10 Batch 4:  cost:0.60,accuracy:0.66
Epoch 28, CIFAR-10 Batch 5:  cost:0.69,accuracy:0.66
Epoch 29, CIFAR-10 Batch 1:  cost:0.62,accuracy:0.67
Epoch 29, CIFAR-10 Batch 2:  cost:0.62,accuracy:0.66
Epoch 29, CIFAR-10 Batch 3:  cost:0.60,accuracy:0.66
Epoch 29, CIFAR-10 Batch 4:  cost:0.60,accuracy:0.67
Epoch 29, CIFAR-10 Batch 5:  cost:0.62,accuracy:0.67
Epoch 30, CIFAR-10 Batch 1:  cost:0.63,accuracy:0.67
Epoch 30, CIFAR-10 Batch 2:  cost:0.64,accuracy:0.63
Epoch 30, CIFAR-10 Batch 3:  cost:0.54,accuracy:0.68
Epoch 30, CIFAR-10 Batch 4:  cost:0.54,accuracy:0.67
Epoch 30, CIFAR-10 Batch 5:  cost:0.65,accuracy:0.66
Epoch 31, CIFAR-10 Batch 1:  cost:0.62,accuracy:0.66
Epoch 31, CIFAR-10 Batch 2:  cost:0.60,accuracy:0.65
Epoch 31, CIFAR-10 Batch 3:  cost:0.51,accuracy:0.68
Epoch 31, CIFAR-10 Batch 4:  cost:0.48,accuracy:0.67
Epoch 31, CIFAR-10 Batch 5:  cost:0.63,accuracy:0.66
Epoch 32, CIFAR-10 Batch 1:  cost:0.53,accuracy:0.66
Epoch 32, CIFAR-10 Batch 2:  cost:0.59,accuracy:0.67
Epoch 32, CIFAR-10 Batch 3:  cost:0.46,accuracy:0.68
Epoch 32, CIFAR-10 Batch 4:  cost:0.47,accuracy:0.67
Epoch 32, CIFAR-10 Batch 5:  cost:0.59,accuracy:0.66
Epoch 33, CIFAR-10 Batch 1:  cost:0.54,accuracy:0.66
Epoch 33, CIFAR-10 Batch 2:  cost:0.53,accuracy:0.67
Epoch 33, CIFAR-10 Batch 3:  cost:0.48,accuracy:0.69
Epoch 33, CIFAR-10 Batch 4:  cost:0.46,accuracy:0.68
Epoch 33, CIFAR-10 Batch 5:  cost:0.55,accuracy:0.67
Epoch 34, CIFAR-10 Batch 1:  cost:0.52,accuracy:0.68
Epoch 34, CIFAR-10 Batch 2:  cost:0.46,accuracy:0.67
Epoch 34, CIFAR-10 Batch 3:  cost:0.49,accuracy:0.68
Epoch 34, CIFAR-10 Batch 4:  cost:0.43,accuracy:0.68
Epoch 34, CIFAR-10 Batch 5:  cost:0.46,accuracy:0.68
Epoch 35, CIFAR-10 Batch 1:  cost:0.51,accuracy:0.68
Epoch 35, CIFAR-10 Batch 2:  cost:0.46,accuracy:0.66
Epoch 35, CIFAR-10 Batch 3:  cost:0.39,accuracy:0.69
Epoch 35, CIFAR-10 Batch 4:  cost:0.42,accuracy:0.68
Epoch 35, CIFAR-10 Batch 5:  cost:0.47,accuracy:0.68
Epoch 36, CIFAR-10 Batch 1:  cost:0.48,accuracy:0.68
Epoch 36, CIFAR-10 Batch 2:  cost:0.44,accuracy:0.67
Epoch 36, CIFAR-10 Batch 3:  cost:0.40,accuracy:0.69
Epoch 36, CIFAR-10 Batch 4:  cost:0.43,accuracy:0.67
Epoch 36, CIFAR-10 Batch 5:  cost:0.46,accuracy:0.68
Epoch 37, CIFAR-10 Batch 1:  cost:0.49,accuracy:0.67
Epoch 37, CIFAR-10 Batch 2:  cost:0.49,accuracy:0.66
Epoch 37, CIFAR-10 Batch 3:  cost:0.35,accuracy:0.68
Epoch 37, CIFAR-10 Batch 4:  cost:0.37,accuracy:0.69
Epoch 37, CIFAR-10 Batch 5:  cost:0.47,accuracy:0.67
Epoch 38, CIFAR-10 Batch 1:  cost:0.45,accuracy:0.68
Epoch 38, CIFAR-10 Batch 2:  cost:0.46,accuracy:0.65
Epoch 38, CIFAR-10 Batch 3:  cost:0.32,accuracy:0.69
Epoch 38, CIFAR-10 Batch 4:  cost:0.37,accuracy:0.68
Epoch 38, CIFAR-10 Batch 5:  cost:0.48,accuracy:0.67
Epoch 39, CIFAR-10 Batch 1:  cost:0.44,accuracy:0.68
Epoch 39, CIFAR-10 Batch 2:  cost:0.41,accuracy:0.67
Epoch 39, CIFAR-10 Batch 3:  cost:0.32,accuracy:0.71
Epoch 39, CIFAR-10 Batch 4:  cost:0.35,accuracy:0.68
Epoch 39, CIFAR-10 Batch 5:  cost:0.40,accuracy:0.69
Epoch 40, CIFAR-10 Batch 1:  cost:0.44,accuracy:0.68
Epoch 40, CIFAR-10 Batch 2:  cost:0.35,accuracy:0.68
Epoch 40, CIFAR-10 Batch 3:  cost:0.35,accuracy:0.70
Epoch 40, CIFAR-10 Batch 4:  cost:0.33,accuracy:0.68
Epoch 40, CIFAR-10 Batch 5:  cost:0.37,accuracy:0.69
Epoch 41, CIFAR-10 Batch 1:  cost:0.45,accuracy:0.69
Epoch 41, CIFAR-10 Batch 2:  cost:0.34,accuracy:0.67
Epoch 41, CIFAR-10 Batch 3:  cost:0.27,accuracy:0.70
Epoch 41, CIFAR-10 Batch 4:  cost:0.31,accuracy:0.69
Epoch 41, CIFAR-10 Batch 5:  cost:0.38,accuracy:0.68
Epoch 42, CIFAR-10 Batch 1:  cost:0.39,accuracy:0.68
Epoch 42, CIFAR-10 Batch 2:  cost:0.31,accuracy:0.69
Epoch 42, CIFAR-10 Batch 3:  cost:0.28,accuracy:0.70
Epoch 42, CIFAR-10 Batch 4:  cost:0.32,accuracy:0.68
Epoch 42, CIFAR-10 Batch 5:  cost:0.32,accuracy:0.69
Epoch 43, CIFAR-10 Batch 1:  cost:0.42,accuracy:0.70
Epoch 43, CIFAR-10 Batch 2:  cost:0.32,accuracy:0.69
Epoch 43, CIFAR-10 Batch 3:  cost:0.26,accuracy:0.71
Epoch 43, CIFAR-10 Batch 4:  cost:0.31,accuracy:0.68
Epoch 43, CIFAR-10 Batch 5:  cost:0.32,accuracy:0.69
Epoch 44, CIFAR-10 Batch 1:  cost:0.39,accuracy:0.70
Epoch 44, CIFAR-10 Batch 2:  cost:0.33,accuracy:0.68
Epoch 44, CIFAR-10 Batch 3:  cost:0.28,accuracy:0.70
Epoch 44, CIFAR-10 Batch 4:  cost:0.25,accuracy:0.69
Epoch 44, CIFAR-10 Batch 5:  cost:0.32,accuracy:0.68
Epoch 45, CIFAR-10 Batch 1:  cost:0.38,accuracy:0.70
Epoch 45, CIFAR-10 Batch 2:  cost:0.28,accuracy:0.70
Epoch 45, CIFAR-10 Batch 3:  cost:0.26,accuracy:0.71
Epoch 45, CIFAR-10 Batch 4:  cost:0.29,accuracy:0.69
Epoch 45, CIFAR-10 Batch 5:  cost:0.31,accuracy:0.70
Epoch 46, CIFAR-10 Batch 1:  cost:0.35,accuracy:0.70
Epoch 46, CIFAR-10 Batch 2:  cost:0.26,accuracy:0.69
Epoch 46, CIFAR-10 Batch 3:  cost:0.23,accuracy:0.71
Epoch 46, CIFAR-10 Batch 4:  cost:0.28,accuracy:0.68
Epoch 46, CIFAR-10 Batch 5:  cost:0.26,accuracy:0.70
Epoch 47, CIFAR-10 Batch 1:  cost:0.34,accuracy:0.69
Epoch 47, CIFAR-10 Batch 2:  cost:0.24,accuracy:0.70
Epoch 47, CIFAR-10 Batch 3:  cost:0.21,accuracy:0.71
Epoch 47, CIFAR-10 Batch 4:  cost:0.24,accuracy:0.70
Epoch 47, CIFAR-10 Batch 5:  cost:0.27,accuracy:0.69
Epoch 48, CIFAR-10 Batch 1:  cost:0.33,accuracy:0.70
Epoch 48, CIFAR-10 Batch 2:  cost:0.23,accuracy:0.69
Epoch 48, CIFAR-10 Batch 3:  cost:0.25,accuracy:0.71
Epoch 48, CIFAR-10 Batch 4:  cost:0.24,accuracy:0.68
Epoch 48, CIFAR-10 Batch 5:  cost:0.28,accuracy:0.68
Epoch 49, CIFAR-10 Batch 1:  cost:0.35,accuracy:0.70
Epoch 49, CIFAR-10 Batch 2:  cost:0.24,accuracy:0.68
Epoch 49, CIFAR-10 Batch 3:  cost:0.24,accuracy:0.70
Epoch 49, CIFAR-10 Batch 4:  cost:0.28,accuracy:0.66
Epoch 49, CIFAR-10 Batch 5:  cost:0.28,accuracy:0.69
Epoch 50, CIFAR-10 Batch 1:  cost:0.31,accuracy:0.70
Epoch 50, CIFAR-10 Batch 2:  cost:0.21,accuracy:0.70
Epoch 50, CIFAR-10 Batch 3:  cost:0.22,accuracy:0.70
Epoch 50, CIFAR-10 Batch 4:  cost:0.21,accuracy:0.68
Epoch 50, CIFAR-10 Batch 5:  cost:0.22,accuracy:0.69

Checkpoint

The model has been saved to disk.

Test Model

Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.

In [94]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import tensorflow as tf
import pickle
import helper
import random

# Set batch size if not already set
try:
    if batch_size:
        pass
except NameError:
    batch_size = 64

save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3

def test_model():
    """
    Test the saved model against the test dataset
    """

    test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
    loaded_graph = tf.Graph()

    with tf.Session(graph=loaded_graph) as sess:
        # Load model
        loader = tf.train.import_meta_graph(save_model_path + '.meta')
        loader.restore(sess, save_model_path)

        # Get Tensors from loaded model
        loaded_x = loaded_graph.get_tensor_by_name('x:0')
        loaded_y = loaded_graph.get_tensor_by_name('y:0')
        loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
        loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
        loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
        
        # Get accuracy in batches for memory limitations
        test_batch_acc_total = 0
        test_batch_count = 0
        
        for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
            test_batch_acc_total += sess.run(
                loaded_acc,
                feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
            test_batch_count += 1

        print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))

        # Print Random Samples
        random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
        random_test_predictions = sess.run(
            tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
            feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
        helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)


test_model()
INFO:tensorflow:Restoring parameters from ./image_classification
Testing Accuracy: 0.6859375

Why 50-80% Accuracy?

You might be wondering why you can't get an accuracy any higher. First things first: 50% isn't bad for a simple CNN, given that pure guessing would only get you 10% accuracy. That said, you might notice people are getting scores well above 80%. That's because we haven't taught you everything there is to know about neural networks yet; there are a few more techniques still to cover.

Submitting This Project

When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_image_classification.ipynb" and save it as an HTML file under "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files with your submission.

In [ ]: