The Mathematical Foundations of Deep Learning

First, prepare the data

In [32]:
import numpy as np
x = np.array([[0, 0, 1],
              [0, 1, 1],
              [1, 0, 1],
              [1, 1, 1]])

y = np.array([[0],
              [1],
              [1],
              [0]])
print("x=", x)

print("y=", y)
x= [[0 0 1]
 [0 1 1]
 [1 0 1]
 [1 1 1]]
y= [[0]
 [1]
 [1]
 [0]]
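The target above is the XOR of the first two input columns (the third input column is a constant 1 acting as a bias). A quick sketch to confirm this:

```python
import numpy as np

x = np.array([[0, 0, 1],
              [0, 1, 1],
              [1, 0, 1],
              [1, 1, 1]])
y = np.array([[0], [1], [1], [0]])

# y is the element-wise XOR of the first two input columns.
xor = (x[:, 0] ^ x[:, 1]).reshape(-1, 1)
print(np.array_equal(xor, y))  # True
```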

Build the model

In [36]:
num_epochs = 600000
syn0 = 2*np.random.random((3,4)) - 1
syn1 = 2*np.random.random((4,1)) - 1
print("syn0=",syn0)
print("syn1=",syn1)

# Define the sigmoid function and its derivative
def nonlin(x, deriv=False):
    # The derivative can be written this way because x here is already the
    # sigmoid's output s: sigmoid'(z) = s * (1 - s).
    if deriv:
        return x * (1 - x)

    return 1 / (1 + np.exp(-x))

xx=np.arange(-10,10,0.1)
ys=nonlin(xx)
yd=nonlin(xx)*(1-nonlin(xx))

import matplotlib.pyplot as plt
plt.plot(xx,ys)
plt.title("sigmoid")
plt.show()
plt.plot(xx,yd)
plt.title("derivative of sigmoid")
plt.show()
syn0= [[-0.96426842  0.0244601  -0.04091192 -0.54321767]
 [ 0.36686209 -0.67076889 -0.26320355 -0.81425823]
 [-0.53655715 -0.31986942  0.20432336  0.40184354]]
syn1= [[-0.99192681]
 [ 0.24650373]
 [ 0.40585751]
 [ 0.68321834]]
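The expression `2*np.random.random(...) - 1` used above draws each initial weight uniformly from [-1, 1), giving a zero-mean initialization. A quick check (the seed and sample size are my choices, for reproducibility):

```python
import numpy as np

np.random.seed(0)  # fixed seed: my choice, for reproducibility
w = 2 * np.random.random((3, 4)) - 1

# All weights fall in [-1, 1) ...
print(w.min() >= -1 and w.max() < 1)  # True

# ... and the distribution is centered on zero.
sample = 2 * np.random.random(100000) - 1
print(abs(sample.mean()) < 0.01)  # True
```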

As the plots show, the sigmoid's derivative never exceeds 0.25, so each layer of backpropagation scales the gradient down to at most a quarter of its size (losing at least 75% per layer). When the network is deep, this easily leads to vanishing gradients.
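Both claims can be verified numerically: the "derivative from the output" trick in `nonlin` matches the analytic derivative, and that derivative peaks at 0.25. A minimal sketch:

```python
import numpy as np

def nonlin(x, deriv=False):
    if deriv:
        return x * (1 - x)
    return 1 / (1 + np.exp(-x))

z = np.linspace(-10, 10, 201)
s = nonlin(z)

# s * (1 - s) agrees with the analytic derivative exp(-z) / (1 + exp(-z))^2.
analytic = np.exp(-z) / (1 + np.exp(-z)) ** 2
print(np.allclose(nonlin(s, deriv=True), analytic))  # True

# The derivative peaks at 0.25 (at z = 0), so each sigmoid layer scales the
# backpropagated gradient by at most 1/4.
print(nonlin(s, deriv=True).max())  # 0.25
print(0.25 ** 10)  # after 10 layers the factor is at most ~9.5e-07
```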

Train the model

In [39]:
for j in range(num_epochs):
    # Forward pass through the three-layer network: input k0, hidden k1, output k2
    k0 = x
    k1 = nonlin(np.dot(k0, syn0))
    k2 = nonlin(np.dot(k1, syn1))

    # Output error
    k2_error = y - k2

    if (j % 100000) == 0:
        print("Error:", np.mean(np.abs(k2_error)))
        print("k2:", k2)

    # Output delta: error scaled by the sigmoid's slope
    k2_delta = k2_error * nonlin(k2, deriv=True)
    # Propagate the error back to the hidden layer
    k1_error = k2_delta.dot(syn1.T)
    # Hidden-layer delta
    k1_delta = k1_error * nonlin(k1, deriv=True)
    # Update the weights
    syn1 += k1.T.dot(k2_delta)
    syn0 += k0.T.dot(k1_delta)
Error: 0.000954941374078
k2: [[  8.86208575e-04]
 [  9.99277898e-01]
 [  9.98927522e-01]
 [  1.13897673e-03]]
Error: 0.000916569602339
k2: [[  8.51217625e-04]
 [  9.99309542e-01]
 [  9.98968663e-01]
 [  1.09326541e-03]]
Error: 0.000882420307181
k2: [[  8.20057144e-04]
 [  9.99337635e-01]
 [  9.99005332e-01]
 [  1.05259044e-03]]
Error: 0.000851773532862
k2: [[  7.92075922e-04]
 [  9.99362788e-01]
 [  9.99038286e-01]
 [  1.01609228e-03]]
Error: 0.00082407008728
k2: [[  7.66767870e-04]
 [  9.99385476e-01]
 [  9.99068115e-01]
 [  9.83103487e-04]]
Error: 0.000798868170643
k2: [[  7.43732932e-04]
 [  9.99406073e-01]
 [  9.99095285e-01]
 [  9.53096817e-04]]

Conclusion

After 600,000 rounds of backpropagation training, the final prediction k2 essentially matches the target y. This is the basic mechanism by which a neural network learns.
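The whole experiment can be reproduced end to end. A self-contained sketch (the fixed seed and the shorter 10,000-epoch run are my choices; far fewer epochs than above already suffice on this toy problem) that thresholds k2 at 0.5 to recover y:

```python
import numpy as np

def nonlin(x, deriv=False):
    if deriv:
        return x * (1 - x)
    return 1 / (1 + np.exp(-x))

x = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]])
y = np.array([[0], [1], [1], [0]])

np.random.seed(1)  # fixed seed: my choice, for reproducibility
syn0 = 2 * np.random.random((3, 4)) - 1
syn1 = 2 * np.random.random((4, 1)) - 1

for _ in range(10000):  # 10k epochs are already enough here
    k1 = nonlin(x.dot(syn0))
    k2 = nonlin(k1.dot(syn1))
    k2_delta = (y - k2) * nonlin(k2, deriv=True)
    k1_delta = k2_delta.dot(syn1.T) * nonlin(k1, deriv=True)
    syn1 += k1.T.dot(k2_delta)
    syn0 += x.T.dot(k1_delta)

# Rounding the network output at 0.5 recovers the XOR targets exactly.
pred = (k2 > 0.5).astype(int)
print(np.array_equal(pred, y))  # True
```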

References

how_to_do_math_for_deep_learning
