A question about a handwritten digit recognition neural network in Python


"""network.py~~~~~~~~~~

A module to implement the stochastic gradient descent learningalgorithm for a feedforward neural network. Gradients are calculatedusing backpropagation. Note that I have focused on making the codesimple, easily readable, and easily modifiable. It is not optimized,and omits many desirable features."""

#### Libraries
# Standard library
import random

# Third-party libraries
import numpy as np


class Network(object):

    def __init__(self, sizes):
        """The list ``sizes`` contains the number of neurons in the
        respective layers of the network. For example, if the list
        was [2, 3, 1] then it would be a three-layer network, with the
        first layer containing 2 neurons, the second layer 3 neurons,
        and the third layer 1 neuron. The biases and weights for the
        network are initialized randomly, using a Gaussian
        distribution with mean 0, and variance 1. Note that the first
        layer is assumed to be an input layer, and by convention we
        won't set any biases for those neurons, since biases are only
        ever used in computing the outputs from later layers."""
        self.num_layers = len(sizes)
        self.sizes = sizes
        self.biases = [np.random.randn(y, 1) for y in sizes[1:]]
        self.weights = [np.random.randn(y, x)
                        for x, y in zip(sizes[:-1], sizes[1:])]

    def feedforward(self, a):
        """Return the output of the network if ``a`` is input."""
        for b, w in zip(self.biases, self.weights):
            a = sigmoid(np.dot(w, a) + b)
        return a

    def SGD(self, training_data, epochs, mini_batch_size, eta,
            test_data=None):
        """Train the neural network using mini-batch stochastic
        gradient descent. The ``training_data`` is a list of tuples
        ``(x, y)`` representing the training inputs and the desired
        outputs. The other non-optional parameters are
        self-explanatory. If ``test_data`` is provided then the
        network will be evaluated against the test data after each
        epoch, and partial progress printed out. This is useful for
        tracking progress, but slows things down substantially."""
        if test_data:
            n_test = len(test_data)
        n = len(training_data)
        for j in range(epochs):
            random.shuffle(training_data)
            mini_batches = [
                training_data[k:k+mini_batch_size]
                for k in range(0, n, mini_batch_size)]
            for mini_batch in mini_batches:
                self.update_mini_batch(mini_batch, eta)
            if test_data:
                print("Epoch {0}: {1} / {2}".format(
                    j, self.evaluate(test_data), n_test))
            else:
                print("Epoch {0} complete".format(j))

    def update_mini_batch(self, mini_batch, eta):
        """Update the network's weights and biases by applying
        gradient descent using backpropagation to a single mini batch.
        The ``mini_batch`` is a list of tuples ``(x, y)``, and ``eta``
        is the learning rate."""
        nabla_b = [np.zeros(b.shape) for b in self.biases]
        nabla_w = [np.zeros(w.shape) for w in self.weights]
        for x, y in mini_batch:
            delta_nabla_b, delta_nabla_w = self.backprop(x, y)
            nabla_b = [nb+dnb for nb, dnb in zip(nabla_b, delta_nabla_b)]
            nabla_w = [nw+dnw for nw, dnw in zip(nabla_w, delta_nabla_w)]
        self.weights = [w-(eta/len(mini_batch))*nw
                        for w, nw in zip(self.weights, nabla_w)]
        self.biases = [b-(eta/len(mini_batch))*nb
                       for b, nb in zip(self.biases, nabla_b)]
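    # The backward pass in ``backprop`` below computes the error
    # ``delta`` layer by layer, following the standard backpropagation
    # equations. For the output layer,
    #     delta = (a_out - y) * sigma'(z_out)
    # which is ``cost_derivative(...) * sigmoid_prime(...)`` in the
    # code; for each earlier layer,
    #     delta = (W_next^T . delta_next) * sigma'(z),
    # and the gradients are dC/db = delta and dC/dw = delta . a_prev^T.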
    def backprop(self, x, y):
        """Return a tuple ``(nabla_b, nabla_w)`` representing the
        gradient for the cost function C_x. ``nabla_b`` and
        ``nabla_w`` are layer-by-layer lists of numpy arrays, similar
        to ``self.biases`` and ``self.weights``."""
        nabla_b = [np.zeros(b.shape) for b in self.biases]
        nabla_w = [np.zeros(w.shape) for w in self.weights]
        # feedforward
        activation = x
        activations = [x]  # list to store all the activations, layer by layer
        zs = []  # list to store all the z vectors, layer by layer
        for b, w in zip(self.biases, self.weights):
            z = np.dot(w, activation) + b
            zs.append(z)
            activation = sigmoid(z)
            activations.append(activation)
        # backward pass
        delta = self.cost_derivative(activations[-1], y) * \
            sigmoid_prime(zs[-1])
        nabla_b[-1] = delta
        nabla_w[-1] = np.dot(delta, activations[-2].transpose())
        # Note that the variable l in the loop below is used a little
        # differently to the notation in Chapter 2 of the book. Here,
        # l = 1 means the last layer of neurons, l = 2 is the
        # second-last layer, and so on. It's a renumbering of the
        # scheme in the book, used here to take advantage of the fact
        # that Python can use negative indices in lists.
        for l in range(2, self.num_layers):
            z = zs[-l]
            sp = sigmoid_prime(z)
            delta = np.dot(self.weights[-l+1].transpose(), delta) * sp
            nabla_b[-l] = delta
            nabla_w[-l] = np.dot(delta, activations[-l-1].transpose())
        return (nabla_b, nabla_w)

    def evaluate(self, test_data):
        """Return the number of test inputs for which the neural
        network outputs the correct result. Note that the neural
        network's output is assumed to be the index of whichever
        neuron in the final layer has the highest activation."""
        test_results = [(np.argmax(self.feedforward(x)), y)
                        for (x, y) in test_data]
        return sum(int(x == y) for (x, y) in test_results)

    def cost_derivative(self, output_activations, y):
        """Return the vector of partial derivatives \partial C_x /
        \partial a for the output activations."""
        return (output_activations - y)


#### Miscellaneous functions
def sigmoid(z):
    """The sigmoid function."""
    return 1.0 / (1.0 + np.exp(-z))


def sigmoid_prime(z):
    """Derivative of the sigmoid function."""
    # sigma'(z) = sigma(z) * (1 - sigma(z)), which follows from
    # differentiating sigma(z) = 1 / (1 + e^{-z}).
    return sigmoid(z) * (1 - sigmoid(z))
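For anyone trying to run this, here is a minimal usage sketch. It assumes the code above is saved as network.py and that the mnist_loader helper module from the book's code repository (github.com/mnielsen/neural-networks-and-deep-learning) is available; any loader that yields (x, y) tuples with x a 784x1 numpy array, y a 10x1 one-hot vector for training data, and y an integer label for test data would work the same way:

import mnist_loader  # helper from the book's repo, not part of network.py
from network import Network

# load_data_wrapper() returns (training_data, validation_data, test_data).
# In Python 3 ports these may come back as zip objects, so convert to
# lists before use (len() and random.shuffle() both require lists).
training_data, validation_data, test_data = mnist_loader.load_data_wrapper()
training_data, test_data = list(training_data), list(test_data)

# 784 input pixels, 30 hidden neurons, 10 output classes.
net = Network([784, 30, 10])
# 30 epochs, mini-batch size 10, learning rate eta = 3.0.
net.SGD(training_data, 30, 10, 3.0, test_data=test_data)

With these hyperparameters, taken from Chapter 1 of the book, the network typically reaches roughly 95% accuracy on the 10,000 test images.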
