Implementing an RBF Neural Network with TensorFlow
This post implements an RBF (radial basis function) neural network with TensorFlow.

Main contents:

1. Steps to implement an RBF neural network
2. RBF neural network classification with TensorFlow
**1. Steps to implement an RBF neural network**
(1) Set the number of hidden-layer neurons to hidden = 20 (the count here is an arbitrary choice) and pick a center point for each neuron. The centers are chosen as follows: for each feature of the input samples x, take the maximum max and minimum min, divide the range into hidden equal parts of width (max - min)/hidden, and place the i-th center at c_i = min + i * (max - min)/hidden, for i = 0, 1, ..., hidden - 1.
(2) Compute the hidden-layer values with a radial basis function, phi = e^(-beta * ||x - c||^2).
(3) Compute the output-layer values with softmax (a minimal NumPy sketch of all three steps follows this list).
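As a minimal NumPy sketch of steps (1)-(3), assuming a Gaussian radial basis function and two output classes; the function name `rbf_forward` and its random output weights are illustrative only, not part of the TensorFlow program below:

```python
import numpy as np

def rbf_forward(x, hidden=20, beta=1.0):
    # (1) spread `hidden` centers evenly between each feature's min and max
    t_min, t_max = x.min(axis=0), x.max(axis=0)
    centers = np.array([t_min + i * (t_max - t_min) / hidden
                        for i in range(hidden)])             # (hidden, n_features)
    # (2) Gaussian radial basis: phi = exp(-beta * ||x - c||^2)
    sq_dist = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    phi = np.exp(-beta * sq_dist)                            # (n_samples, hidden)
    # (3) linear output layer followed by softmax (random weights, shapes only)
    w = np.random.normal(size=(hidden, 2))
    b = np.random.normal(size=(1, 2))
    logits = phi @ w + b
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)                  # class probabilities
```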
**2. RBF neural network classification with TensorFlow**
```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
from sklearn.preprocessing import OneHotEncoder
from sklearn.preprocessing import LabelEncoder

# TensorFlow 2.x is installed; this line lets the 1.x-style graph code below
# run without errors under 2.x
tf.compat.v1.disable_eager_execution()


class RbfClassification:
    def __init__(self, learning_rate=0.01, hidden=10):
        # learning rate
        self.lr = learning_rate
        # number of hidden-layer neurons
        self.hidden = hidden

    def center_mat(self, x):
        # per-feature maximum and minimum of x
        t_max = np.max(x, axis=0)
        t_min = np.min(x, axis=0)
        # divide [min, max] into self.hidden parts, spacing the centers evenly
        center = []
        for i in range(self.hidden):
            center.append(i * ((t_max - t_min) / self.hidden) + t_min)
        center = np.array(center)
        # compute ||x - c||^2 and store the results in mat
        mat = []
        for i in x:
            temp = []
            for j in center - i:
                temp.append(np.dot(j, j.T))
            mat.append(temp)
        return np.array(mat)

    def run(self, x, y):
        # 1. define the inputs and outputs
        self.x = tf.compat.v1.placeholder(tf.float32, [None, x.shape[1]])
        self.y = tf.compat.v1.placeholder(tf.float32, [None, y.shape[1]])
        self.mat = tf.compat.v1.placeholder(tf.float32, [None, self.hidden])
        # 2. input layer to hidden layer
        beta = tf.Variable(tf.random.normal([1, self.hidden]))  # radial basis function: e^(-beta * ||x-c||^2)
        L1 = tf.math.exp(self.mat * -beta)  # hidden-layer values from the radial basis function
        # 3. hidden layer to output layer
        weight = tf.Variable(tf.random.normal([self.hidden, self.y.shape[1]]))  # weights
        basic = tf.Variable(tf.random.normal([1, self.y.shape[1]]))  # biases
        output = tf.matmul(L1, weight) + basic  # output values
        # 4. loss function
        loss = tf.reduce_mean(
            tf.compat.v1.nn.softmax_cross_entropy_with_logits(labels=self.y, logits=output))
        # 5. gradient descent
        train_step = tf.compat.v1.train.GradientDescentOptimizer(self.lr).minimize(loss)
        with tf.compat.v1.Session() as sess:
            # initialize the variables
            sess.run(tf.compat.v1.global_variables_initializer())
            # compute mat once and store it in m1
            m1 = self.center_mat(x)
            for step in range(20):
                sess.run(train_step, feed_dict={self.x: x, self.y: y, self.mat: m1})
                result = sess.run([loss, output], feed_dict={self.x: x, self.y: y, self.mat: m1})
                # print the loss at every step
                print(f"step:{step} loss:{result[0]}")
            # after training, print the final output values
            print(f"output:{result[1]}")
            # compute the accuracy
            acc = tf.reduce_mean(
                tf.cast(tf.equal(tf.argmax(self.y, 1), tf.argmax(result[1], 1)), tf.float32))
            re = sess.run(acc, feed_dict={self.x: x, self.y: y})
            print(f"accuracy:{re}")
```
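For reference, the same training loop could also be written in native TF2 eager style, without placeholders or sessions. The sketch below is an assumption, not part of the original program: it takes the precomputed ||x - c||^2 matrix as input and only runs if `tf.compat.v1.disable_eager_execution()` has not been called.

```python
import tensorflow as tf

def train_rbf_tf2(mat, y, lr=0.2, steps=20):
    """Sketch: train beta, weights, and bias with GradientTape (eager mode only)."""
    mat = tf.constant(mat, tf.float32)   # precomputed ||x - c||^2, shape (n, hidden)
    y = tf.constant(y, tf.float32)       # one-hot labels, shape (n, n_classes)
    beta = tf.Variable(tf.random.normal([1, mat.shape[1]]))
    w = tf.Variable(tf.random.normal([mat.shape[1], y.shape[1]]))
    b = tf.Variable(tf.random.normal([1, y.shape[1]]))
    opt = tf.keras.optimizers.SGD(learning_rate=lr)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            hidden = tf.exp(-beta * mat)                     # RBF layer
            logits = hidden @ w + b                          # output layer
            loss = tf.reduce_mean(
                tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=logits))
        opt.apply_gradients(zip(tape.gradient(loss, [beta, w, b]), [beta, w, b]))
    return beta, w, b
```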
Verify the program's correctness with some generated data:

```python
def main():
    # generate the data: two Gaussian clusters, 20 points each
    x = []
    y = []
    for i in range(2):
        for _ in range(20):
            a = np.random.normal(i + 5, 0.2)
            b = np.random.normal(i, 0.2)
            x.append([a, b])
            y.append([i])
    x = np.array(x)
    # one-hot encode y and convert the result to an array
    oht = OneHotEncoder()
    y = oht.fit_transform(y).toarray()
    # run the RBF classifier written above
    lc = RbfClassification(0.2, 20)
    lc.run(x, y)


if __name__ == "__main__":
    main()
```

Program output:
```
step:0 loss:0.009847460314631462
step:1 loss:0.0063354759477078915
step:2 loss:0.004663033410906792
step:3 loss:0.00367681123316288
step:4 loss:0.003026006743311882
step:5 loss:0.002564833965152502
step:6 loss:0.0022214394994080067
step:7 loss:0.001956184161826968
step:8 loss:0.0017454007174819708
step:9 loss:0.0015740934759378433
step:10 loss:0.0014322480419650674
step:11 loss:0.0013130262959748507
step:12 loss:0.0012114696437492967
step:13 loss:0.0011240066960453987
step:14 loss:0.0010479569900780916
step:15 loss:0.0009812603238970041
step:16 loss:0.0009223271044902503
step:17 loss:0.000869911746121943
step:18 loss:0.0008229954401031137
step:19 loss:0.0007807918009348214
output:[[ -5.6905594 -23.674717 ]
 [ -8.617398 -38.798042 ]
 [ -4.1444187 -15.849449 ]
 [ -3.6620772 -13.654512 ]
 [ -8.246327 -37.815224 ]
 [ -4.8025513 -19.621326 ]
 [ -4.6801014 -19.308613 ]
 [ -4.043401 -12.66901 ]
 [ -3.275519 -11.271512 ]
 [ -8.364632 -36.719036 ]
 [ -3.449597 -12.38932 ]
 [ -4.537184 -18.557596 ]
 [ -3.8763468 -14.292524 ]
 [ -4.3595047 -16.981241 ]
 [ -4.385331 -17.0463 ]
 [ -3.290766 -11.16351 ]
 [ -6.2468643 -27.3551 ]
 [ -4.4771547 -18.061419 ]
 [ -7.7329607 -33.897655 ]
 [ -4.5932465 -18.70145 ]
 [ -4.0903406 2.3125944 ]
 [-33.720146 8.19898 ]
 [ -6.0941734 2.4650843 ]
 [ -7.911108 8.0174885 ]
 [-14.664846 9.801966 ]
 [ -3.646073 1.3106902 ]
 [ -3.3612907 0.48594096]
 [-26.83491 9.89762 ]
 [ -5.46649 5.226462 ]
 [-17.203758 11.449418 ]
 [ -5.746872 5.3967757 ]
 [ -7.02769 6.4736304 ]
 [ -4.9688196 4.3449683 ]
 [ -8.87957 8.680702 ]
 [ -9.566407 9.235068 ]
 [-40.259476 6.399553 ]
 [ -9.986042 7.9442725 ]
 [ -7.6550283 7.5912905 ]
 [ -5.986595 5.7204523 ]
 [-11.090441 9.66671 ]]
accuracy:1.0
```