Readers should already know the basics of neural networks; for background, see introductions to deep learning and computer vision covering score functions, loss functions, forward propagation, back-propagation, and activation functions.

This article walks through the code for implementing a neural network from scratch, and verifies it on handwritten-digit recognition, where it basically meets the requirements of a working network.
1. Data preprocessing

The implementation depends on two helper modules: `Neural_Network_Lab.utils.features` (data preprocessing) and `Neural_Network_Lab.utils.hypothesis` (the sigmoid activation and its gradient). The preprocessing code normalizes the features, optionally appends sinusoid or polynomial features, and prepends a bias column of ones. Reconstructed below; `generate_polynomials` splits the features in two (`features_split[1]`, `dataset_1.shape`, etc.) and follows the same pattern as `generate_sinusoids`, so it is omitted here:

```python
# Neural_Network_Lab/utils/features.py
import numpy as np

def normalize(features):
    features_normalized = np.copy(features).astype(float)
    features_mean = np.mean(features, 0)
    features_deviation = np.std(features, 0)
    if features.shape[0] > 1:
        features_normalized -= features_mean
    # guard against division by zero
    features_deviation[features_deviation == 0] = 1
    features_normalized /= features_deviation
    return features_normalized, features_mean, features_deviation

def generate_sinusoids(dataset, sinusoid_degree):
    # sin(degree * x) features
    num_examples = dataset.shape[0]
    sinusoids = np.empty((num_examples, 0))
    for degree in range(1, sinusoid_degree + 1):
        sinusoids = np.concatenate((sinusoids, np.sin(degree * dataset)), axis=1)
    return sinusoids

def prepare_for_training(data, polynomial_degree=0, sinusoid_degree=0, normalize_data=True):
    num_examples = data.shape[0]
    data_processed = np.copy(data)
    data_normalized = data_processed
    features_mean = 0
    features_deviation = 0
    if normalize_data:
        (data_normalized, features_mean, features_deviation) = normalize(data_processed)
        data_processed = data_normalized
    if sinusoid_degree > 0:
        sinusoids = generate_sinusoids(data_normalized, sinusoid_degree)
        data_processed = np.concatenate((data_processed, sinusoids), axis=1)
    if polynomial_degree > 0:
        polynomials = generate_polynomials(data_normalized, polynomial_degree)
        data_processed = np.concatenate((data_processed, polynomials), axis=1)
    # prepend a column of ones (the bias feature)
    data_processed = np.hstack((np.ones((num_examples, 1)), data_processed))
    return data_processed, features_mean, features_deviation
```
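The class below also imports `sigmoid` and `sigmoid_gradient` from `Neural_Network_Lab.utils.hypothesis`. That file is not shown in the original; a minimal sketch consistent with how the class uses the two functions:

```python
import numpy as np

def sigmoid(matrix):
    # element-wise logistic function
    return 1 / (1 + np.exp(-matrix))

def sigmoid_gradient(matrix):
    # derivative of the sigmoid: s(x) * (1 - s(x))
    return sigmoid(matrix) * (1 - sigmoid(matrix))
```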
2. Initialization

Initialize the data, the labels, the network architecture (given as a list: a three-layer network `[784, 25, 10]` means 784 input neurons, 25 hidden neurons, and 10 output neurons), and whether the data should be normalized.

```python
class MultilayerPerceptron:
    def __init__(self, data, labels, layers, normalize_data=False):
        data_processed = prepare_for_training(data, normalize_data=normalize_data)[0]
        self.data = data_processed
        self.labels = labels
        self.layers = layers            # e.g. [784, 25, 10]
        self.normalize_data = normalize_data
        self.thetas = MultilayerPerceptron.thetas_init(layers)
```

3. The training function
Given a number of iterations and a learning rate, gradient descent updates the weight matrices and returns the final weights together with the loss history. Matrices are awkward to update directly, so they are first flattened into a single vector.

```python
    def train(self, max_iterations=1000, alpha=0.1):
        # matrices are hard to update in place; flatten them into one vector
        unrolled_theta = MultilayerPerceptron.thetas_unroll(self.thetas)
        (optimized_theta, cost_history) = MultilayerPerceptron.gradient_descent(
            self.data, self.labels, unrolled_theta, self.layers, max_iterations, alpha)
        self.thetas = MultilayerPerceptron.thetas_roll(optimized_theta, self.layers)
        return self.thetas, cost_history

    @staticmethod
    def thetas_init(layers):
        thetas = {}
        for layer_index in range(len(layers) - 1):
            in_count = layers[layer_index]
            out_count = layers[layer_index + 1]
            # keep initial values small; the extra column holds the bias term,
            # one bias weight per output neuron
            thetas[layer_index] = np.random.rand(out_count, in_count + 1) * 0.05
        return thetas
```
```python
    @staticmethod
    def thetas_unroll(thetas):
        # concatenate all weight matrices into a single vector
        num_theta_layers = len(thetas)
        unrolled_theta = np.array([])
        for theta_layer_index in range(num_theta_layers):
            unrolled_theta = np.hstack((unrolled_theta, thetas[theta_layer_index].flatten()))
        return unrolled_theta
```
```python
    @staticmethod
    def thetas_roll(unrolled_theta, layers):
        # inverse of thetas_unroll: cut the vector back into weight matrices
        num_layers = len(layers)
        thetas = {}
        unrolled_shift = 0
        for layer_index in range(num_layers - 1):
            thetas_height = layers[layer_index + 1]
            thetas_width = layers[layer_index] + 1   # +1 for the bias column
            thetas_volume = thetas_height * thetas_width
            start_index = unrolled_shift
            end_index = unrolled_shift + thetas_volume
            layer_theta_unrolled = unrolled_theta[start_index:end_index]
            thetas[layer_index] = layer_theta_unrolled.reshape((thetas_height, thetas_width))
            unrolled_shift += thetas_volume
        return thetas
```
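The unroll/roll pair is pure flatten-and-reshape bookkeeping. A standalone round-trip check (using local copies of the two functions) confirms that every matrix comes back with its original shape and contents:

```python
import numpy as np

def thetas_unroll(thetas):
    unrolled = np.array([])
    for i in range(len(thetas)):
        unrolled = np.hstack((unrolled, thetas[i].flatten()))
    return unrolled

def thetas_roll(unrolled_theta, layers):
    thetas, shift = {}, 0
    for i in range(len(layers) - 1):
        height, width = layers[i + 1], layers[i] + 1
        volume = height * width
        thetas[i] = unrolled_theta[shift:shift + volume].reshape((height, width))
        shift += volume
    return thetas

layers = [784, 25, 10]
np.random.seed(0)
original = {0: np.random.rand(25, 785), 1: np.random.rand(10, 26)}
vector = thetas_unroll(original)                 # 25*785 + 10*26 = 19885 entries
restored = thetas_roll(vector, layers)
round_trip_ok = all(np.array_equal(original[i], restored[i]) for i in original)
print(vector.shape, round_trip_ok)
```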
Forward propagation multiplies each layer's activations by that layer's weights, applies the sigmoid, and prepends the bias unit before moving on:

```python
    @staticmethod
    def feedforward_propagation(data, thetas, layers):
        num_layers = len(layers)
        num_examples = data.shape[0]
        in_layer_activation = data
        for layer_index in range(num_layers - 1):
            theta = thetas[layer_index]
            out_layer_activation = sigmoid(np.dot(in_layer_activation, theta.T))
            # prepend the bias column
            out_layer_activation = np.hstack((np.ones((num_examples, 1)), out_layer_activation))
            in_layer_activation = out_layer_activation
        # return the output layer, without the bias term
        return in_layer_activation[:, 1:]
```
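To see the shapes involved, here is a standalone pass through a small `[4, 3, 2]` network (hypothetical sizes) with random weights; the input already carries its bias column, each weight matrix has an extra bias column, and the final slice drops the bias unit:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

np.random.seed(1)
layers = [4, 3, 2]
# each weight matrix is (out_count, in_count + 1): the +1 column is the bias
thetas = {0: np.random.rand(3, 5), 1: np.random.rand(2, 4)}
# 6 examples, 4 features, bias column of ones prepended -> shape (6, 5)
data = np.hstack((np.ones((6, 1)), np.random.rand(6, 4)))

in_layer_activation = data
for layer_index in range(len(layers) - 1):
    out_layer_activation = sigmoid(np.dot(in_layer_activation, thetas[layer_index].T))
    out_layer_activation = np.hstack((np.ones((6, 1)), out_layer_activation))
    in_layer_activation = out_layer_activation
predictions = in_layer_activation[:, 1:]   # drop the bias unit
print(predictions.shape)
```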
The cost function first runs forward propagation, then builds a one-hot label matrix (each sample's label becomes a 0/1 vector) and computes the cross-entropy loss:

```python
    @staticmethod
    def cost_function(data, labels, thetas, layers):
        num_examples = data.shape[0]
        num_labels = layers[-1]
        predictions = MultilayerPerceptron.feedforward_propagation(data, thetas, layers)
        # build one-hot labels: every sample's label becomes a 0/1 vector
        bitwise_labels = np.zeros((num_examples, num_labels))
        for example_index in range(num_examples):
            bitwise_labels[example_index][labels[example_index][0]] = 1
        bit_set_cost = np.sum(np.log(predictions[bitwise_labels == 1]))
        bit_not_set_cost = np.sum(np.log(1 - predictions[bitwise_labels == 0]))
        cost = (-1 / num_examples) * (bit_set_cost + bit_not_set_cost)
        return cost
```
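The one-hot construction in the loop above, shown on a tiny label column (hypothetical labels, 4 classes):

```python
import numpy as np

labels = np.array([[3], [0], [1]])   # three samples with classes 3, 0, 1
num_labels = 4
bitwise_labels = np.zeros((labels.shape[0], num_labels))
for example_index in range(labels.shape[0]):
    # set a 1 in the column given by the sample's class index
    bitwise_labels[example_index][labels[example_index][0]] = 1
print(bitwise_labels)
```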
To update the parameter matrices during gradient descent, we must implement back-propagation. Applying the chain-rule formulas above yields the gradients:

```python
    @staticmethod
    def back_propagation(data, labels, thetas, layers):
        num_layers = len(layers)
        (num_examples, num_features) = data.shape
        num_label_types = layers[-1]
        deltas = {}
        # one gradient accumulator per weight matrix
        for layer_index in range(num_layers - 1):
            in_count = layers[layer_index]
            out_count = layers[layer_index + 1]
            deltas[layer_index] = np.zeros((out_count, in_count + 1))
        for example_index in range(num_examples):
            layers_inputs = {}
            layers_activations = {}
            layers_activation = data[example_index, :].reshape((num_features, 1))
            layers_activations[0] = layers_activation
            # forward pass, caching each layer's input and activation
            for layer_index in range(num_layers - 1):
                layer_theta = thetas[layer_index]
                layer_input = np.dot(layer_theta, layers_activation)
                # apply the activation and prepend a bias unit
                layers_activation = np.vstack((np.array([[1]]), sigmoid(layer_input)))
                layers_inputs[layer_index + 1] = layer_input          # next layer's input
                layers_activations[layer_index + 1] = layers_activation  # next layer's activation
            output_layer_activation = layers_activation[1:, :]
            delta = {}
            # difference between the network output and the one-hot label
            bitwise_label = np.zeros((num_label_types, 1))
            bitwise_label[labels[example_index][0]] = 1
            delta[num_layers - 1] = output_layer_activation - bitwise_label
            # propagate the error backwards through the hidden layers
            for layer_index in range(num_layers - 2, 0, -1):
                layer_theta = thetas[layer_index]
                next_delta = delta[layer_index + 1]
                layer_input = np.vstack((np.array([[1]]), layers_inputs[layer_index]))
                # apply the chain-rule formula
                delta[layer_index] = np.dot(layer_theta.T, next_delta) * sigmoid_gradient(layer_input)
                # filter out the bias component
                delta[layer_index] = delta[layer_index][1:, :]
            # accumulate the gradient contributions
            for layer_index in range(num_layers - 1):
                layer_delta = np.dot(delta[layer_index + 1], layers_activations[layer_index].T)
                deltas[layer_index] = deltas[layer_index] + layer_delta
        # average over all examples
        for layer_index in range(num_layers - 1):
            deltas[layer_index] = deltas[layer_index] * (1 / num_examples)
        return deltas
```
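A quick way to trust the delta formulas is to compare the analytic gradient of the cross-entropy cost against a numerical finite-difference estimate. A minimal check for a single sigmoid output unit, the output-layer case where delta = prediction − label (toy values, not from the article):

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

np.random.seed(2)
x = np.random.rand(3)        # input activation
theta = np.random.rand(3)    # weights
y = 1.0                      # true label

def cost(t):
    p = sigmoid(np.dot(t, x))
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

# analytic gradient from the back-propagation formula: (prediction - label) * input
analytic = (sigmoid(np.dot(theta, x)) - y) * x

# numerical gradient via central differences
eps = 1e-6
numerical = np.zeros_like(theta)
for i in range(3):
    t_plus, t_minus = theta.copy(), theta.copy()
    t_plus[i] += eps
    t_minus[i] -= eps
    numerical[i] = (cost(t_plus) - cost(t_minus)) / (2 * eps)

max_error = np.abs(analytic - numerical).max()
print(max_error < 1e-6)
```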
A single gradient step rolls the parameter vector back into matrices, runs back-propagation (BP), and unrolls the resulting gradients:

```python
    @staticmethod
    def gradient_step(data, labels, optimized_theta, layers):
        theta = MultilayerPerceptron.thetas_roll(optimized_theta, layers)
        # back-propagation (BP)
        thetas_rolled_gradients = MultilayerPerceptron.back_propagation(data, labels, theta, layers)
        thetas_unrolled_gradients = MultilayerPerceptron.thetas_unroll(thetas_rolled_gradients)
        return thetas_unrolled_gradients
```
Gradient descent then records the loss at each iteration and moves the parameters against the gradient:

```python
    @staticmethod
    def gradient_descent(data, labels, unrolled_theta, layers, max_iterations, alpha):
        optimized_theta = unrolled_theta
        cost_history = []
        for _ in range(max_iterations):
            # 1. current loss
            cost = MultilayerPerceptron.cost_function(
                data, labels, MultilayerPerceptron.thetas_roll(optimized_theta, layers), layers)
            cost_history.append(cost)
            # 2. gradient from back-propagation
            theta_gradient = MultilayerPerceptron.gradient_step(data, labels, optimized_theta, layers)
            # 3. parameter update
            optimized_theta = optimized_theta - alpha * theta_gradient
        return optimized_theta, cost_history
```
Prediction preprocesses the data the same way as training, runs forward propagation, and returns the index of the largest output per sample:

```python
    def predict(self, data):
        data_processed = prepare_for_training(data, normalize_data=self.normalize_data)[0]
        num_examples = data_processed.shape[0]
        predictions = MultilayerPerceptron.feedforward_propagation(
            data_processed, self.thetas, self.layers)
        return np.argmax(predictions, axis=1).reshape((num_examples, 1))
```
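Since `predict` returns a `(num_examples, 1)` column of class indices, accuracy is a simple element-wise comparison (hypothetical prediction/label values for illustration):

```python
import numpy as np

y_true = np.array([[1], [7], [3], [3]])
y_pred = np.array([[1], [7], [2], [3]])   # shape matches what predict returns
# fraction of matching rows, as a percentage
accuracy = np.sum(y_pred == y_true) / y_true.shape[0] * 100
print(accuracy)
```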
Putting it all together, the complete `Multilayer_Perceptron.py`:

```python
import numpy as np
from Neural_Network_Lab.utils.features import prepare_for_training
from Neural_Network_Lab.utils.hypothesis import sigmoid, sigmoid_gradient


class MultilayerPerceptron:
    def __init__(self, data, labels, layers, normalize_data=False):
        data_processed = prepare_for_training(data, normalize_data=normalize_data)[0]
        self.data = data_processed
        self.labels = labels
        self.layers = layers
        self.normalize_data = normalize_data
        self.thetas = MultilayerPerceptron.thetas_init(layers)

    def predict(self, data):
        data_processed = prepare_for_training(data, normalize_data=self.normalize_data)[0]
        num_examples = data_processed.shape[0]
        predictions = MultilayerPerceptron.feedforward_propagation(
            data_processed, self.thetas, self.layers)
        return np.argmax(predictions, axis=1).reshape((num_examples, 1))

    def train(self, max_iterations=1000, alpha=0.1):
        unrolled_theta = MultilayerPerceptron.thetas_unroll(self.thetas)
        (optimized_theta, cost_history) = MultilayerPerceptron.gradient_descent(
            self.data, self.labels, unrolled_theta, self.layers, max_iterations, alpha)
        self.thetas = MultilayerPerceptron.thetas_roll(optimized_theta, self.layers)
        return self.thetas, cost_history

    @staticmethod
    def gradient_descent(data, labels, unrolled_theta, layers, max_iterations, alpha):
        optimized_theta = unrolled_theta
        cost_history = []
        for _ in range(max_iterations):
            cost = MultilayerPerceptron.cost_function(
                data, labels, MultilayerPerceptron.thetas_roll(optimized_theta, layers), layers)
            cost_history.append(cost)
            theta_gradient = MultilayerPerceptron.gradient_step(data, labels, optimized_theta, layers)
            optimized_theta = optimized_theta - alpha * theta_gradient
        return optimized_theta, cost_history

    @staticmethod
    def gradient_step(data, labels, optimized_theta, layers):
        theta = MultilayerPerceptron.thetas_roll(optimized_theta, layers)
        thetas_rolled_gradients = MultilayerPerceptron.back_propagation(data, labels, theta, layers)
        return MultilayerPerceptron.thetas_unroll(thetas_rolled_gradients)

    @staticmethod
    def back_propagation(data, labels, thetas, layers):
        num_layers = len(layers)
        (num_examples, num_features) = data.shape
        num_label_types = layers[-1]
        deltas = {}
        for layer_index in range(num_layers - 1):
            deltas[layer_index] = np.zeros((layers[layer_index + 1], layers[layer_index] + 1))
        for example_index in range(num_examples):
            layers_inputs = {}
            layers_activations = {}
            layers_activation = data[example_index, :].reshape((num_features, 1))
            layers_activations[0] = layers_activation
            for layer_index in range(num_layers - 1):
                layer_theta = thetas[layer_index]
                layer_input = np.dot(layer_theta, layers_activation)
                layers_activation = np.vstack((np.array([[1]]), sigmoid(layer_input)))
                layers_inputs[layer_index + 1] = layer_input
                layers_activations[layer_index + 1] = layers_activation
            output_layer_activation = layers_activation[1:, :]
            delta = {}
            bitwise_label = np.zeros((num_label_types, 1))
            bitwise_label[labels[example_index][0]] = 1
            delta[num_layers - 1] = output_layer_activation - bitwise_label
            for layer_index in range(num_layers - 2, 0, -1):
                layer_theta = thetas[layer_index]
                next_delta = delta[layer_index + 1]
                layer_input = np.vstack((np.array([[1]]), layers_inputs[layer_index]))
                delta[layer_index] = np.dot(layer_theta.T, next_delta) * sigmoid_gradient(layer_input)
                delta[layer_index] = delta[layer_index][1:, :]
            for layer_index in range(num_layers - 1):
                layer_delta = np.dot(delta[layer_index + 1], layers_activations[layer_index].T)
                deltas[layer_index] = deltas[layer_index] + layer_delta
        for layer_index in range(num_layers - 1):
            deltas[layer_index] = deltas[layer_index] * (1 / num_examples)
        return deltas

    @staticmethod
    def cost_function(data, labels, thetas, layers):
        num_examples = data.shape[0]
        num_labels = layers[-1]
        predictions = MultilayerPerceptron.feedforward_propagation(data, thetas, layers)
        bitwise_labels = np.zeros((num_examples, num_labels))
        for example_index in range(num_examples):
            bitwise_labels[example_index][labels[example_index][0]] = 1
        bit_set_cost = np.sum(np.log(predictions[bitwise_labels == 1]))
        bit_not_set_cost = np.sum(np.log(1 - predictions[bitwise_labels == 0]))
        return (-1 / num_examples) * (bit_set_cost + bit_not_set_cost)

    @staticmethod
    def feedforward_propagation(data, thetas, layers):
        num_layers = len(layers)
        num_examples = data.shape[0]
        in_layer_activation = data
        for layer_index in range(num_layers - 1):
            theta = thetas[layer_index]
            out_layer_activation = sigmoid(np.dot(in_layer_activation, theta.T))
            out_layer_activation = np.hstack((np.ones((num_examples, 1)), out_layer_activation))
            in_layer_activation = out_layer_activation
        return in_layer_activation[:, 1:]

    @staticmethod
    def thetas_roll(unrolled_theta, layers):
        thetas = {}
        unrolled_shift = 0
        for layer_index in range(len(layers) - 1):
            thetas_height = layers[layer_index + 1]
            thetas_width = layers[layer_index] + 1
            thetas_volume = thetas_height * thetas_width
            layer_theta_unrolled = unrolled_theta[unrolled_shift:unrolled_shift + thetas_volume]
            thetas[layer_index] = layer_theta_unrolled.reshape((thetas_height, thetas_width))
            unrolled_shift += thetas_volume
        return thetas

    @staticmethod
    def thetas_unroll(thetas):
        unrolled_theta = np.array([])
        for theta_layer_index in range(len(thetas)):
            unrolled_theta = np.hstack((unrolled_theta, thetas[theta_layer_index].flatten()))
        return unrolled_theta

    @staticmethod
    def thetas_init(layers):
        thetas = {}
        for layer_index in range(len(layers) - 1):
            thetas[layer_index] = np.random.rand(layers[layer_index + 1], layers[layer_index] + 1) * 0.05
        return thetas
```
The dataset contains ten thousand samples: the first column is the label, and each remaining column is a pixel value; every image has 28*28 = 784 pixels.

```python
import math
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from Neural_Network_Lab.Multilayer_Perceptron import MultilayerPerceptron

data = pd.read_csv('../Neural_Network_Lab/data/mnist-demo.csv')

# display some of the samples
numbers_to_display = 25
num_cells = math.ceil(math.sqrt(numbers_to_display))
plt.figure(figsize=(10, 10))
for plot_index in range(numbers_to_display):
    digit = data[plot_index:plot_index + 1].values
    digit_label = digit[0][0]
    digit_pixels = digit[0][1:]
    image_size = int(math.sqrt(digit_pixels.shape[0]))
    frame = digit_pixels.reshape((image_size, image_size))
    plt.subplot(num_cells, num_cells, plot_index + 1)
    plt.imshow(frame, cmap='Greys')
    plt.title(digit_label)
plt.subplots_adjust(wspace=0.5, hspace=0.5)
plt.show()

# split into training and test sets
train_data = data.sample(frac=0.8)
test_data = data.drop(train_data.index)
train_data = train_data.values
test_data = test_data.values
num_training_examples = 5000
X_train = train_data[:num_training_examples, 1:]
y_train = train_data[:num_training_examples, [0]]
X_test = test_data[:, 1:]
y_test = test_data[:, [0]]

layers = [784, 25, 10]
normalize_data = True
max_iterations = 500
alpha = 0.1

multilayerperceptron = MultilayerPerceptron(X_train, y_train, layers, normalize_data)
(thetas, cost_history) = multilayerperceptron.train(max_iterations, alpha)
plt.plot(range(len(cost_history)), cost_history)
plt.show()

# accuracy on the training and test sets
y_train_predictions = multilayerperceptron.predict(X_train)
y_test_predictions = multilayerperceptron.predict(X_test)
train_p = np.sum(y_train_predictions == y_train) / y_train.shape[0] * 100
test_p = np.sum(y_test_predictions == y_test) / y_test.shape[0] * 100
print('Training accuracy:', train_p)
print('Test accuracy:', test_p)

# visualize predictions: green if correct, red if wrong
numbers_to_display = 64
num_cells = math.ceil(math.sqrt(numbers_to_display))
plt.figure(figsize=(15, 15))
for plot_index in range(numbers_to_display):
    digit_label = y_test[plot_index, 0]
    digit_pixels = X_test[plot_index, :]
    predicted_label = y_test_predictions[plot_index][0]
    image_size = int(math.sqrt(digit_pixels.shape[0]))
    frame = digit_pixels.reshape((image_size, image_size))
    color_map = 'Greens' if predicted_label == digit_label else 'Reds'
    plt.subplot(num_cells, num_cells, plot_index + 1)
    plt.imshow(frame, cmap=color_map)
    plt.title(predicted_label)
    plt.tick_params(axis='both', which='both', bottom=False, left=False,
                    labelbottom=False, labelleft=False)
plt.subplots_adjust(wspace=0.5, hspace=0.5)
plt.show()
```

The accuracy here is not high; readers are encouraged to tune the hyperparameters themselves, for example the number of iterations or the network architecture.