TF之LiR: Implementing the Linear Regression Machine-Learning Algorithm with TensorFlow
Contents

Output
Code design
Output

Epoch: 6300 cost= 0.076938324 W= 0.25199208 b= 0.8008495
……
Epoch: 10000 cost= 0.076965131 W= 0.24998894 b= 0.80145526
Epoch: 10000 cost= 0.076942705 W= 0.25047526 b= 0.80151606
Epoch: 10000 cost= 0.076929517 W= 0.25114807 b= 0.801635
Epoch: 10000 cost= 0.076958008 W= 0.25011322 b= 0.8015234
Epoch: 10000 cost= 0.076990739 W= 0.24960834 b= 0.80136055
Optimizer Finished!
Training cost= 0.07699074 W= 0.24960834 b= 0.80136055
Testing... (Mean square loss Comparison)
Testing cost= 0.07910849
Absolute mean square loss difference: 0.002117753

Code design

# -*- coding: utf-8 -*-
# TF之LiR: linear regression with TensorFlow
import tensorflow as tf
import numpy
import matplotlib.pyplot as plt

rng = numpy.random

# Hyperparameters
learning_rate = 0.01
training_epochs = 10000
display_step = 50  # print a log line every 50 epochs

# Training data (values elided in the original post)
train_X = numpy.asarray([……])
train_Y = numpy.asarray([……])
n_samples = train_X.shape[0]
print("train_X:", train_X)
print("train_Y:", train_Y)

# Placeholders for the input features and targets
X = tf.placeholder("float")
Y = tf.placeholder("float")

# Model weight and bias; declared as Variables because they are updated during training
W = tf.Variable(rng.randn(), name="weight")
b = tf.Variable(rng.randn(), name="bias")

# Linear regression model: pred = W*x + b
pred = tf.add(tf.multiply(X, W), b)
# Cost is the mean squared error: sum((pred - Y)^2) / (2 * n_samples)
cost = tf.reduce_sum(tf.pow(pred - Y, 2)) / (2 * n_samples)
# Gradient descent; minimize() updates W and b automatically
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

init = tf.global_variables_initializer()  # initializes all variables when run in a session

# Training
with tf.Session() as sess:
    sess.run(init)
    for epoch in range(training_epochs):
        # Feed the training data one sample at a time
        for (x, y) in zip(train_X, train_Y):
            sess.run(optimizer, feed_dict={X: x, Y: y})
        # Log every display_step epochs
        if (epoch + 1) % display_step == 0:
            c = sess.run(cost, feed_dict={X: train_X, Y: train_Y})
            print("Epoch:", "%04d" % (epoch + 1), "cost=", "{:.9f}".format(c),
                  "W=", sess.run(W), "b=", sess.run(b))

    print("Optimizer Finished!")
    training_cost = sess.run(cost, feed_dict={X: train_X, Y: train_Y})
    print("Training cost=", training_cost, "W=", sess.run(W), "b=", sess.run(b))

    # Plot the training data and the fitted line
    plt.rcParams['font.sans-serif'] = ['SimHei']
    plt.subplot(121)
    plt.plot(train_X, train_Y, 'ro', label='Original data')
    plt.plot(train_X, sess.run(W) * train_X + sess.run(b), label='Fitted line')
    plt.legend()
    plt.title("TF之LiR:Original data")

    # Test samples
    test_X = numpy.asarray([6.83, 4.668, 8.9, 7.91, 5.7, 8.7, 3.1, 2.1])
    test_Y = numpy.asarray([1.84, 2.273, 3.2, 2.831, 2.92, 3.24, 1.35, 1.03])

    print("Testing... (Mean square loss Comparison)")
    # Same cost function as above, evaluated on the test set
    testing_cost = sess.run(tf.reduce_sum(tf.pow(pred - Y, 2)) / (2 * test_X.shape[0]),
                            feed_dict={X: test_X, Y: test_Y})
    print("Testing cost=", testing_cost)
    print("Absolute mean square loss difference:", abs(training_cost - testing_cost))

    # Plot the test data against the fitted line
    plt.subplot(122)
    plt.plot(test_X, test_Y, 'bo', label='Testing data')
    plt.plot(train_X, sess.run(W) * train_X + sess.run(b), label='Fitted line')
    plt.legend()
    plt.title("TF之LiR:Testing data")
    plt.show()
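The code above targets the TensorFlow 1.x API (tf.placeholder, tf.Session, tf.train.GradientDescentOptimizer), which no longer exists in TensorFlow 2.x. For readers on TF 2.x, here is a minimal sketch of the same gradient-descent fit using tf.GradientTape. The train_X/train_Y values are made-up placeholders (the post elides its own data), and this version updates on the full batch each epoch instead of one sample at a time.

import numpy as np
import tensorflow as tf

# Illustrative data only; the original post's train_X/train_Y are elided
train_X = np.asarray([3.3, 4.4, 5.5, 6.71, 6.93, 4.168, 9.779, 6.182], dtype=np.float32)
train_Y = np.asarray([1.7, 2.76, 2.09, 3.19, 1.694, 1.573, 3.366, 2.596], dtype=np.float32)

learning_rate = 0.01
training_epochs = 1000

W = tf.Variable(tf.random.normal([]), name="weight")
b = tf.Variable(tf.random.normal([]), name="bias")
optimizer = tf.optimizers.SGD(learning_rate)

for epoch in range(training_epochs):
    with tf.GradientTape() as tape:
        pred = W * train_X + b
        # Same cost convention as the TF1 code: sum((pred - y)^2) / (2n)
        cost = tf.reduce_sum(tf.square(pred - train_Y)) / (2 * train_X.shape[0])
    # Compute d(cost)/dW and d(cost)/db, then take one gradient-descent step
    grads = tape.gradient(cost, [W, b])
    optimizer.apply_gradients(zip(grads, [W, b]))

print("cost=", cost.numpy(), "W=", W.numpy(), "b=", b.numpy())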
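Plain linear regression also has a closed-form least-squares solution, so the gradient-descent result can be sanity-checked without TensorFlow. Below is a sketch using numpy.polyfit, again on illustrative data rather than the post's:

import numpy as np

# Illustrative data; substitute the real train_X/train_Y to check the fit above
train_X = np.asarray([3.3, 4.4, 5.5, 6.71, 6.93, 4.168, 9.779, 6.182])
train_Y = np.asarray([1.7, 2.76, 2.09, 3.19, 1.694, 1.573, 3.366, 2.596])

# Degree-1 polynomial fit y = W*x + b (ordinary least squares)
W, b = np.polyfit(train_X, train_Y, 1)
print("closed-form W=", W, "b=", b)

# Evaluate the cost with the same convention as the post: sum((pred - y)^2) / (2n)
pred = W * train_X + b
cost = np.sum((pred - train_Y) ** 2) / (2 * train_X.shape[0])
print("cost=", cost)

As the number of gradient-descent epochs grows, the learned W and b should converge toward these closed-form values.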