
This Is How You Should Learn TensorFlow -- 6 (Multi-layer Neural Networks)

Published: 2025/4/5

I. Linear and nonlinear problems

1. Linear problems

A hospital wants to use a neural network to classify its existing cases. Each sample x has two features: the patient's age x1 and the tumor size x2 (x = [x1, x2]); the corresponding label is benign or malignant (0 or 1).

Binary classification:

(1) Generate the dataset

```python
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
from sklearn.utils import shuffle


# Simulate data points
def generate(sample_size, mean, cov, diff, regression):
    num_classes = 2  # len(diff)
    samples_per_class = int(sample_size / 2)

    X0 = np.random.multivariate_normal(mean, cov, samples_per_class)
    Y0 = np.zeros(samples_per_class)

    for ci, d in enumerate(diff):
        # ci=0, d=3
        X1 = np.random.multivariate_normal(mean + d, cov, samples_per_class)
        Y1 = (ci + 1) * np.ones(samples_per_class)

        X0 = np.concatenate((X0, X1))
        Y0 = np.concatenate((Y0, Y1))

    if regression == False:  # one-hot: turn label 0 into the vector [1, 0]
        Y0 = np.reshape(Y0, [-1, 1])
        class_ind = [Y0 == class_number for class_number in range(num_classes)]
        Y0 = np.asarray(np.hstack(class_ind), dtype=np.float32)
    X, Y = shuffle(X0, Y0)

    return X, Y


input_dim = 2
np.random.seed(10)
num_classes = 2
mean = np.random.randn(num_classes)
cov = np.eye(num_classes)
X, Y = generate(1000, mean, cov, [3.0], True)
colors = ['r' if l == 0 else 'b' for l in Y[:]]
plt.scatter(X[:, 0], X[:, 1], c=colors)
plt.xlabel("Scaled age (in yrs)")
plt.ylabel("Tumor size (in cm)")
plt.show()
```
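The one-hot branch in `generate` builds the encoding by stacking boolean masks, one per class, rather than calling an encoder. A minimal NumPy illustration of the same trick on a tiny hand-made label column (my own example, not the book's data):

```python
import numpy as np

labels = np.array([[0], [1], [1], [0]])  # column vector of class indices, shape (N, 1)
num_classes = 2

# Column k of the one-hot matrix is the mask (label == k)
class_ind = [labels == k for k in range(num_classes)]        # two (N, 1) boolean masks
onehot = np.asarray(np.hstack(class_ind), dtype=np.float32)  # shape (N, num_classes)
```

Note the labels must be a column vector: `np.hstack` of 1-D masks would concatenate them end to end instead of side by side.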
(2) Build the network model

```python
lab_dim = 1
# tf Graph input
input_features = tf.placeholder(tf.float32, [None, input_dim])
input_lables = tf.placeholder(tf.float32, [None, lab_dim])
# Set model weights
W = tf.Variable(tf.random_normal([input_dim, lab_dim]), name="weight")
b = tf.Variable(tf.zeros([lab_dim]), name="bias")

output = tf.nn.sigmoid(tf.matmul(input_features, W) + b)
cross_entropy = -(input_lables * tf.log(output) + (1 - input_lables) * tf.log(1 - output))
ser = tf.square(input_lables - output)
loss = tf.reduce_mean(cross_entropy)
err = tf.reduce_mean(ser)
optimizer = tf.train.AdamOptimizer(0.04)  # prefer Adam: it converges fast and adapts the step per parameter
train = optimizer.minimize(loss)  # let the optimizer train
```
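The hand-written `cross_entropy` above can overflow to infinity when `output` saturates at 0 or 1 (`log(0)`). A small NumPy sketch (my own illustration, not part of the book's code) comparing the naive formula with the numerically stable identity on the raw logit, `max(z, 0) - z*y + log(1 + exp(-|z|))`, which is how `tf.nn.sigmoid_cross_entropy_with_logits` avoids the problem:

```python
import numpy as np

def naive_xent(y, p):
    # Direct translation of the graph above: -(y*log(p) + (1-y)*log(1-p))
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def stable_xent(y, z):
    # Equivalent stable form computed from the logit z instead of the probability
    return np.maximum(z, 0) - z * y + np.log1p(np.exp(-np.abs(z)))

z = np.array([-2.0, -0.5, 0.5, 2.0])  # logits
y = np.array([0.0, 1.0, 1.0, 0.0])    # labels
p = 1.0 / (1.0 + np.exp(-z))          # sigmoid(z)

assert np.allclose(naive_xent(y, p), stable_xent(y, z))
```

The stable form stays finite even for extreme logits where `naive_xent` would return `inf`.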
(3) Train

```python
maxEpochs = 50
minibatchSize = 25

# Launch the session
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    for epoch in range(maxEpochs):
        sumerr = 0
        for i in range(np.int32(len(Y) / minibatchSize)):
            x1 = X[i * minibatchSize:(i + 1) * minibatchSize, :]
            y1 = np.reshape(Y[i * minibatchSize:(i + 1) * minibatchSize], [-1, 1])
            _, lossval, outputval, errval = sess.run(
                [train, loss, output, err],
                feed_dict={input_features: x1, input_lables: y1})
            sumerr = sumerr + errval

        # Average errval over the number of batches in the epoch
        print("Epoch:", '%04d' % (epoch + 1),
              "cost=", "{:.9f}".format(lossval),
              "err=", sumerr / (len(Y) / minibatchSize))
```
(4) Visualize

```python
    # Still inside the session from step (3):
    train_X, train_Y = generate(100, mean, cov, [3.0], True)
    colors = ['r' if l == 0 else 'b' for l in train_Y[:]]
    plt.scatter(train_X[:, 0], train_X[:, 1], c=colors)

    # Decision boundary: x1*w1 + x2*w2 + b = 0
    # i.e.               x2 = -x1*w1/w2 - b/w2   (a line a*x + b*y + c = 0)
    x = np.linspace(-1, 8, 200)
    y = -x * (sess.run(W)[0] / sess.run(W)[1]) - sess.run(b) / sess.run(W)[1]
    plt.plot(x, y, label='Fitted line')
    plt.legend()
    plt.show()
```
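The boundary formula in the comments, x2 = -x1·w1/w2 - b/w2, is exactly the set of points where the model's sigmoid outputs 0.5. A quick NumPy check with made-up weights (w1, w2, b here are arbitrary illustrative values, not the trained ones):

```python
import numpy as np

w1, w2, b = 1.5, -2.0, 0.7       # arbitrary example weights, not trained values
x1 = np.linspace(-1, 8, 5)
x2 = -x1 * w1 / w2 - b / w2      # points on the claimed decision line

logit = x1 * w1 + x2 * w2 + b
prob = 1.0 / (1.0 + np.exp(-logit))

assert np.allclose(logit, 0.0)   # on the line, the logit vanishes
assert np.allclose(prob, 0.5)    # so the sigmoid gives exactly 0.5 there
```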
Multi-class classification:

```python
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

from sklearn.utils import shuffle
from matplotlib.colors import colorConverter, ListedColormap

# A dynamic version of the one-hot encoding used above
from sklearn.preprocessing import OneHotEncoder
def onehot(y, start, end):
    ohe = OneHotEncoder()
    a = np.linspace(start, end - 1, end - start)
    b = np.reshape(a, [-1, 1]).astype(np.int32)
    ohe.fit(b)
    c = ohe.transform(y).toarray()
    return c


def generate(sample_size, num_classes, diff, regression=False):
    np.random.seed(10)
    mean = np.random.randn(2)
    cov = np.eye(2)

    # len(diff)
    samples_per_class = int(sample_size / num_classes)

    X0 = np.random.multivariate_normal(mean, cov, samples_per_class)
    Y0 = np.zeros(samples_per_class)

    for ci, d in enumerate(diff):
        X1 = np.random.multivariate_normal(mean + d, cov, samples_per_class)
        Y1 = (ci + 1) * np.ones(samples_per_class)

        X0 = np.concatenate((X0, X1))
        Y0 = np.concatenate((Y0, Y1))

    if regression == False:  # one-hot: turn label 0 into the vector [1, 0, 0]
        Y0 = np.reshape(Y0, [-1, 1])
        Y0 = onehot(Y0.astype(np.int32), 0, num_classes)
    X, Y = shuffle(X0, Y0)
    return X, Y


# Ensure we always get the same amount of randomness
np.random.seed(10)

input_dim = 2
num_classes = 3
X, Y = generate(2000, num_classes, [[3.0], [3.0, 0]], False)
aa = [np.argmax(l) for l in Y]
colors = ['r' if l == 0 else 'b' if l == 1 else 'y' for l in aa[:]]

plt.scatter(X[:, 0], X[:, 1], c=colors)
plt.xlabel("Scaled age (in yrs)")
plt.ylabel("Tumor size (in cm)")
plt.show()

lab_dim = num_classes
# tf Graph input
input_features = tf.placeholder(tf.float32, [None, input_dim])
input_lables = tf.placeholder(tf.float32, [None, lab_dim])
# Set model weights
W = tf.Variable(tf.random_normal([input_dim, lab_dim]), name="weight")
b = tf.Variable(tf.zeros([lab_dim]), name="bias")
output = tf.matmul(input_features, W) + b

z = tf.nn.softmax(output)

a1 = tf.argmax(tf.nn.softmax(output), axis=1)  # index of the largest value in each row
b1 = tf.argmax(input_lables, axis=1)
err = tf.count_nonzero(a1 - b1)  # nonzero differences are misclassified samples

cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=input_lables, logits=output)
loss = tf.reduce_mean(cross_entropy)  # taking the mean of the cross entropy matters

optimizer = tf.train.AdamOptimizer(0.04)  # prefer Adam: it converges fast and adapts the step per parameter
train = optimizer.minimize(loss)  # let the optimizer train

maxEpochs = 50
minibatchSize = 25

# Launch the session
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    for epoch in range(maxEpochs):
        sumerr = 0
        for i in range(np.int32(len(Y) / minibatchSize)):
            x1 = X[i * minibatchSize:(i + 1) * minibatchSize, :]
            y1 = Y[i * minibatchSize:(i + 1) * minibatchSize, :]

            _, lossval, outputval, errval = sess.run(
                [train, loss, output, err],
                feed_dict={input_features: x1, input_lables: y1})
            sumerr = sumerr + (errval / minibatchSize)

        # Average the per-batch error rate over the number of batches
        print("Epoch:", '%04d' % (epoch + 1),
              "cost=", "{:.9f}".format(lossval),
              "err=", sumerr / (len(Y) / minibatchSize))

    train_X, train_Y = generate(200, num_classes, [[3.0], [3.0, 0]], False)
    aa = [np.argmax(l) for l in train_Y]
    colors = ['r' if l == 0 else 'b' if l == 1 else 'y' for l in aa[:]]
    plt.scatter(train_X[:, 0], train_X[:, 1], c=colors)

    x = np.linspace(-1, 8, 200)

    y = -x * (sess.run(W)[0][0] / sess.run(W)[1][0]) - sess.run(b)[0] / sess.run(W)[1][0]
    plt.plot(x, y, label='first line', lw=3)

    y = -x * (sess.run(W)[0][1] / sess.run(W)[1][1]) - sess.run(b)[1] / sess.run(W)[1][1]
    plt.plot(x, y, label='second line', lw=2)

    y = -x * (sess.run(W)[0][2] / sess.run(W)[1][2]) - sess.run(b)[2] / sess.run(W)[1][2]
    plt.plot(x, y, label='third line', lw=1)

    plt.legend()
    plt.show()
    print(sess.run(W), sess.run(b))

    train_X, train_Y = generate(200, num_classes, [[3.0], [3.0, 0]], False)
    aa = [np.argmax(l) for l in train_Y]
    colors = ['r' if l == 0 else 'b' if l == 1 else 'y' for l in aa[:]]
    plt.scatter(train_X[:, 0], train_X[:, 1], c=colors)

    nb_of_xs = 200
    xs1 = np.linspace(-1, 8, num=nb_of_xs)
    xs2 = np.linspace(-1, 8, num=nb_of_xs)
    xx, yy = np.meshgrid(xs1, xs2)  # create the grid
    # Initialize and fill the classification plane
    classification_plane = np.zeros((nb_of_xs, nb_of_xs))
    for i in range(nb_of_xs):
        for j in range(nb_of_xs):
            classification_plane[i, j] = sess.run(
                a1, feed_dict={input_features: [[xx[i, j], yy[i, j]]]})

    # Create a color map to show the classification color of each grid point
    cmap = ListedColormap([
        colorConverter.to_rgba('r', alpha=0.30),
        colorConverter.to_rgba('b', alpha=0.30),
        colorConverter.to_rgba('y', alpha=0.30)])
    # Plot the classification plane with decision boundary and input samples
    plt.contourf(xx, yy, classification_plane, cmap=cmap)
    plt.show()
```
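The error metric above (`a1`, `b1`, `err`) counts mismatched argmax indices between predictions and one-hot labels. The same computation in plain NumPy, on a tiny hand-made batch (the logits and labels are illustrative values only):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))  # shift rows for numerical stability
    return e / e.sum(axis=1, keepdims=True)

logits = np.array([[2.0, 0.5, 0.1],
                   [0.2, 1.8, 0.3],
                   [0.1, 0.4, 0.2]])
labels = np.array([[1, 0, 0],
                   [0, 1, 0],
                   [0, 0, 1]], dtype=np.float32)

a1 = np.argmax(softmax(logits), axis=1)  # predicted class per row
b1 = np.argmax(labels, axis=1)           # true class per row
err = np.count_nonzero(a1 - b1)          # rows where the two disagree
```

Here the third row is predicted as class 1 while its label is class 2, so `err` is 1. Note that softmax is monotone, so the argmax of the raw logits would give the same predictions.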



2. Nonlinear problems: fitting with a hidden-layer network

```python
import tensorflow as tf
import numpy as np

# Network structure: 2-d input --> 2-unit hidden layer --> 1-d output

learning_rate = 1e-4
n_input = 2
n_label = 1
n_hidden = 2


x = tf.placeholder(tf.float32, [None, n_input])
y = tf.placeholder(tf.float32, [None, n_label])

weights = {
    'h1': tf.Variable(tf.truncated_normal([n_input, n_hidden], stddev=0.1)),
    'h2': tf.Variable(tf.random_normal([n_hidden, n_label], stddev=0.1))
    }
biases = {
    'h1': tf.Variable(tf.zeros([n_hidden])),
    'h2': tf.Variable(tf.zeros([n_label]))
    }


layer_1 = tf.nn.relu(tf.add(tf.matmul(x, weights['h1']), biases['h1']))
# y_pred = tf.nn.tanh(tf.add(tf.matmul(layer_1, weights['h2']), biases['h2']))
# y_pred = tf.nn.relu(tf.add(tf.matmul(layer_1, weights['h2']), biases['h2']))  # can get stuck in a local optimum
# y_pred = tf.nn.sigmoid(tf.add(tf.matmul(layer_1, weights['h2']), biases['h2']))

# Leaky ReLU output: converges (around 40000 iterations)
layer2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['h2'])
y_pred = tf.maximum(layer2, 0.01 * layer2)

loss = tf.reduce_mean((y_pred - y) ** 2)
train_step = tf.train.AdamOptimizer(learning_rate).minimize(loss)

# Generate the XOR data
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
Y = [[0], [1], [1], [0]]
X = np.array(X).astype('float32')
Y = np.array(Y).astype('int16')

# Load
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())

# Train
for i in range(10000):
    sess.run(train_step, feed_dict={x: X, y: Y})

# Predictions after the 10000 training steps
print(sess.run(y_pred, feed_dict={x: X}))

# Inspect the hidden-layer output
print(sess.run(layer_1, feed_dict={x: X}))
```
II. Problems that can arise while training a network model

1. Underfitting

The model fails to fit even the training data: its predictions remain far from the true values.

Remedy: add more nodes per layer, or add more layers.

2. Overfitting

The model fits the training data too closely and generalizes poorly to unseen samples.

Remedies: early stopping, dataset augmentation, regularization, dropout.
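Of the remedies listed, dropout is the one with a mechanism worth sketching: during training it randomly zeroes activations and rescales the survivors so the expected activation is unchanged. A minimal NumPy sketch of inverted dropout (my own illustration; TensorFlow's `tf.nn.dropout` applies the same 1/keep_prob rescaling):

```python
import numpy as np

def dropout(x, keep_prob, rng):
    # Inverted dropout: zero each unit with probability (1 - keep_prob),
    # and scale the kept units by 1/keep_prob to preserve the expectation
    mask = rng.uniform(size=x.shape) < keep_prob
    return np.where(mask, x / keep_prob, 0.0), mask

rng = np.random.RandomState(0)
x = np.ones((4, 5))
y, mask = dropout(x, keep_prob=0.8, rng=rng)

assert np.all(y[~mask] == 0.0)          # dropped units are exactly zero
assert np.allclose(y[mask], 1.0 / 0.8)  # kept units are scaled up
```

At inference time dropout is disabled entirely; the rescaling during training is what makes that possible without adjusting the weights.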


