PSO Algorithm for Optimizing BP Neural Network Code (Optimization Algorithms for BP Neural Networks)
Hello everyone! Today the editors at 創(chuàng)意嶺 will walk you through questions about PSO-optimized BP neural network code. Below is our compilation of answers on the topic; let's take a look.
Contents:
1. C++ BP neural network: I wrote a program myself, but something seems off and the accuracy is very poor; looking for help
Nice work! How about also writing a PSO routine or something to optimize it...
2. Looking for MATLAB code for a BP-neural-network-based image restoration algorithm
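The answer below appears to be a training script built on the DeepLearnToolbox API (it calls nnsetup, nntrain and nnff); the file paths are the original poster's and will need to be adapted to your own data.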
function Solar_SAE
tic;
n = 300;   % number of training images
m = 20;    % number of test images
train_x = [];
test_x = [];
for i = 1:n
    %filename = strcat(['D:\Program Files\MATLAB\R2012a\work\DeepLearn\Solar_SAE\64_64_3train\' num2str(i,'%03d') '.bmp']);
    %filename = strcat(['E:\matlab\work\c0\TrainImage' num2str(i,'%03d') '.bmp']);
    filename = strcat(['E:\image restoration\3-(' num2str(i) ')-4.jpg']);
    b = imread(filename);
    %c = rgb2gray(b);
    c = b;   % note: if the jpg is RGB, rgb2gray above may need to be re-enabled before reshaping
    [ImageRow, ImageCol] = size(c);
    c = reshape(c,[1,ImageRow*ImageCol]);   % flatten each image to one row
    train_x = [train_x; c];
end
for i = 1:m
    %filename = strcat(['D:\Program Files\MATLAB\R2012a\work\DeepLearn\Solar_SAE\64_64_3test\' num2str(i,'%03d') '.bmp']);
    %filename = strcat(['E:\matlab\work\c0\TestImage' num2str(i+100,'%03d') '-1.bmp']);
    filename = strcat(['E:\image restoration\3-(' num2str(i+100) ').jpg']);
    b = imread(filename);
    %c = rgb2gray(b);
    c = b;
    [ImageRow, ImageCol] = size(c);
    c = reshape(c,[1,ImageRow*ImageCol]);
    test_x = [test_x; c];
end
train_x = double(train_x)/255;   % scale pixels to [0,1]
test_x = double(test_x)/255;
%train_y = double(train_y);
%test_y = double(test_y);
% Setup and train a stacked denoising autoencoder (SDAE)
rng(0);
%sae = saesetup([4096 500 200 50]);
%sae.ae{1}.activation_function = 'sigm';
%sae.ae{1}.learningRate = 0.5;
%sae.ae{1}.inputZeroMaskedFraction = 0.0;
%sae.ae{2}.activation_function = 'sigm';
%sae.ae{2}.learningRate = 0.5
%%sae.ae{2}.inputZeroMaskedFraction = 0.0;
%sae.ae{3}.activation_function = 'sigm';
%sae.ae{3}.learningRate = 0.5;
%sae.ae{3}.inputZeroMaskedFraction = 0.0;
%sae.ae{4}.activation_function = 'sigm';
%sae.ae{4}.learningRate = 0.5;
%sae.ae{4}.inputZeroMaskedFraction = 0.0;
%opts.numepochs = 10;
%opts.batchsize = 50;
%sae = saetrain(sae, train_x, opts);
%visualize(sae.ae{1}.W{1}(:,2:end)');
% Use the SDAE to initialize a FFNN
nn = nnsetup([4096 1500 500 200 50 200 500 1500 4096]);   % autoencoder-shaped net: 64x64 = 4096 inputs reconstructed at the output
nn.activation_function = 'sigm';
nn.learningRate = 0.03;
nn.output = 'linear';   % output unit: 'sigm' (=logistic), 'softmax' or 'linear'
%add pretrained weights
%nn.W{1} = sae.ae{1}.W{1};
%nn.W{2} = sae.ae{2}.W{1};
%nn.W{3} = sae.ae{3}.W{1};
%nn.W{4} = sae.ae{3}.W{2};
%nn.W{5} = sae.ae{2}.W{2};
%nn.W{6} = sae.ae{1}.W{2};
%nn.W{7} = sae.ae{2}.W{2};
%nn.W{8} = sae.ae{1}.W{2};
% Train the FFNN as a reconstructor: input and target are both train_x
opts.numepochs = 30;
opts.batchsize = 150;
tx = test_x(14,:);
nn1 = nnff(nn,tx,tx);            % forward pass through the untrained net
ty1 = reshape(nn1.a{9},64,64);   % kept for before/after comparison
nn = nntrain(nn, train_x, train_x, opts);
toc;
tic;
nn2 = nnff(nn,tx,tx);            % forward pass through the trained net
toc;
tic;
ty2 = reshape(nn2.a{9},64,64);   % reconstructed test image
tx = reshape(tx,64,64);
tz = tx - ty2;                   % residual between input and reconstruction
tz = im2bw(tz,0.1);
%imshow(tx);
%figure,imshow(ty2);
%figure,imshow(tz);
ty = cat(2,tx,ty2,tz);           % input, reconstruction and residual side by side
montage(ty);
filename3 = 'E:\image restoration\3.jpg';
e = imread(filename3);
f = rgb2gray(e);
f = imresize(f,[64,64]);
%imshow(ty2);
f = double(f)/255;
[PSNR, MSE] = psnr(ty2,f)        % compare the reconstruction against the reference image
imwrite(ty2,'E:\image restoration\bptest.jpg','jpg');
toc;
%visualize(ty);
%[er, bad] = nntest(nn, tx, tx);
%assert(er < 0.1, 'Too big error');
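One caveat: the script ends with [PSNR, MSE] = psnr(ty2,f), but the Image Processing Toolbox psnr returns [peaksnr, snr] rather than an MSE, so the original author was presumably calling a custom helper. A minimal sketch consistent with the call signature above (assuming both images are double in [0,1]) might look like:

function [PSNR, MSE] = psnr(img, ref)
% Hypothetical helper matching the [PSNR, MSE] = psnr(...) call above;
% img and ref are same-size double images with values in [0,1].
MSE  = mean((img(:) - ref(:)).^2);
PSNR = 10 * log10(1 / MSE);   % peak signal value is 1 for [0,1] images
end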
3. I built a mathematical model of boiler parameters with a BP neural network, then used particle swarm optimization to search for optimal parameters; the objective function being optimized is the boiler...
PSO here is just the method used to train the network: when PSO terminates, the particle with the best fitness encodes the best network, i.e. the network you have finished training. You say the result differs every run; that is expected, because PSO is a stochastic search algorithm whose initialization and search trajectory are both random, so each training run will give a different result. As for which solution is best: you defined a fitness function, and the best particle returned by each run is, by definition, the best for that run. The runs differ, but each one is the best result of its own run, so you can take any one of them.
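To make this concrete, here is a minimal MATLAB sketch (not the answerer's code; every name in it is illustrative) of PSO searching the flattened weight vector of a one-hidden-layer network, with training-set MSE as the fitness function:

function [bestWB, bestFit] = pso_bp_sketch(X, T, nHidden)
% X: nIn-by-N inputs, T: nOut-by-N targets, both double.
nIn = size(X,1);
nOut = size(T,1);
dim = nHidden*(nIn+1) + nOut*(nHidden+1);   % all weights and biases, flattened
nPart = 30; maxIter = 200;
w = 0.72; c1 = 1.49; c2 = 1.49;             % commonly used PSO constants
P = 0.5*randn(nPart, dim);                  % particle positions = candidate weight vectors
V = zeros(nPart, dim);
pbest = P; pbestFit = inf(nPart,1);
gbest = P(1,:); gbestFit = inf;
for it = 1:maxIter
    for i = 1:nPart
        f = wb_mse(P(i,:), X, T, nIn, nHidden, nOut);   % fitness = training MSE
        if f < pbestFit(i), pbestFit(i) = f; pbest(i,:) = P(i,:); end
        if f < gbestFit, gbestFit = f; gbest = P(i,:); end
    end
    r1 = rand(nPart, dim); r2 = rand(nPart, dim);
    V = w*V + c1*r1.*(pbest - P) + c2*r2.*(repmat(gbest,nPart,1) - P);
    P = P + V;
end
bestWB = gbest; bestFit = gbestFit;          % the best particle encodes the trained network
end

function mse = wb_mse(wb, X, T, nIn, nH, nOut)
% Unpack the flat vector into layer matrices, forward-propagate, return MSE.
k = 0;
W1 = reshape(wb(k+1 : k+nH*nIn), nH, nIn);   k = k + nH*nIn;
b1 = wb(k+1 : k+nH)';                        k = k + nH;
W2 = reshape(wb(k+1 : k+nOut*nH), nOut, nH); k = k + nOut*nH;
b2 = wb(k+1 : k+nOut)';
H = 1 ./ (1 + exp(-bsxfun(@plus, W1*X, b1)));   % sigmoid hidden layer
Y = bsxfun(@plus, W2*H, b2);                    % linear output
mse = mean((Y(:) - T(:)).^2);
end

Because P is initialized randomly, two runs will generally return different bestWB vectors with similar fitness, which is exactly the run-to-run variation described above.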
4. How the training functions (traingdm, trainlm, trainbr) of MATLAB's BP neural network training algorithms are implemented, and the corresponding VC source code
VC source code? You must be joking...
Here is the M-code for trainlm:
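In brief, Levenberg-Marquardt blends gradient descent with Gauss-Newton: each step solves (J'J + mu*I)*dWB = -J'e for the weight update dWB, where J is the Jacobian of the errors e, and it adapts the damping factor mu, decreasing it after a step that improves performance and increasing it after one that does not. You can see exactly this in the train_network subfunction below, in the line dWB = -(jj+ii*mu) \ je.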
function [out1,out2] = trainlm(varargin)
%TRAINLM Levenberg-Marquardt backpropagation.
%
% <a href="matlab:doc trainlm">trainlm</a> is a network training function that updates weight and
% bias states according to Levenberg-Marquardt optimization.
%
% <a href="matlab:doc trainlm">trainlm</a> is often the fastest backpropagation algorithm in the toolbox,
% and is highly recommended as a first choice supervised algorithm,
% although it does require more memory than other algorithms.
%
% [NET,TR] = <a href="matlab:doc trainlm">trainlm</a>(NET,X,T) takes a network NET, input data X
% and target data T and returns the network after training it, and a
% training record TR.
%
% [NET,TR] = <a href="matlab:doc trainlm">trainlm</a>(NET,X,T,Xi,Ai,EW) takes additional optional
% arguments suitable for training dynamic networks and training with
% error weights. Xi and Ai are the initial input and layer delays states
% respectively and EW defines error weights used to indicate
% the relative importance of each target value.
%
% Training occurs according to training parameters, with default values.
% Any or all of these can be overridden with parameter name/value argument
% pairs appended to the input argument list, or by appending a structure
% argument with fields having one or more of these names.
% show 25 Epochs between displays
% showCommandLine 0 generate command line output
% showWindow 1 show training GUI
% epochs 100 Maximum number of epochs to train
% goal 0 Performance goal
% max_fail 5 Maximum validation failures
% min_grad 1e-10 Minimum performance gradient
% mu 0.001 Initial Mu
% mu_dec 0.1 Mu decrease factor
% mu_inc 10 Mu increase factor
% mu_max 1e10 Maximum Mu
% time inf Maximum time to train in seconds
%
% To make this the default training function for a network, and view
% and/or change parameter settings, use these two properties:
%
% net.<a href="matlab:doc nnproperty.net_trainFcn">trainFcn</a> = 'trainlm';
% net.<a href="matlab:doc nnproperty.net_trainParam">trainParam</a>
%
% See also trainscg, feedforwardnet, narxnet.
% Mark Beale, 11-31-97, ODJ 11/20/98
% Updated by Orlando De Jesús, Martin Hagan, Dynamic Training 7-20-05
% Copyright 1992-2010 The MathWorks, Inc.
% $Revision: 1.1.6.11.2.2 $ $Date: 2010/07/23 15:40:16 $
%% =======================================================
% BOILERPLATE_START
% This code is the same for all Training Functions.
persistent INFO;
if isempty(INFO), INFO = get_info; end
nnassert.minargs(nargin,1);
in1 = varargin{1};
if ischar(in1)
    switch (in1)
        case 'info'
            out1 = INFO;
        case 'check_param'
            nnassert.minargs(nargin,2);
            param = varargin{2};
            err = nntest.param(INFO.parameters,param);
            if isempty(err)
                err = check_param(param);
            end
            if nargout > 0
                out1 = err;
            elseif ~isempty(err)
                nnerr.throw('Type',err);
            end
        otherwise,
            try
                out1 = eval(['INFO.' in1]);
            catch me, nnerr.throw(['Unrecognized first argument: ''' in1 ''''])
            end
    end
    return
end
nnassert.minargs(nargin,2);
net = nn.hints(nntype.network('format',in1,'NET'));
oldTrainFcn = net.trainFcn;
oldTrainParam = net.trainParam;
if ~strcmp(net.trainFcn,mfilename)
    net.trainFcn = mfilename;
    net.trainParam = INFO.defaultParam;
end
[args,param] = nnparam.extract_param(varargin(2:end),net.trainParam);
err = nntest.param(INFO.parameters,param);
if ~isempty(err), nnerr.throw(nnerr.value(err,'NET.trainParam')); end
if INFO.isSupervised && isempty(net.performFcn) % TODO - fill in MSE
    nnerr.throw('Training function is supervised but NET.performFcn is undefined.');
end
if INFO.usesGradient && isempty(net.derivFcn) % TODO - fill in
    nnerr.throw('Training function uses derivatives but NET.derivFcn is undefined.');
end
if net.hint.zeroDelay, nnerr.throw('NET contains a zero-delay loop.'); end
[X,T,Xi,Ai,EW] = nnmisc.defaults(args,{},{},{},{},{1});
X = nntype.data('format',X,'Inputs X');
T = nntype.data('format',T,'Targets T');
Xi = nntype.data('format',Xi,'Input states Xi');
Ai = nntype.data('format',Ai,'Layer states Ai');
EW = nntype.nndata_pos('format',EW,'Error weights EW');
% Prepare Data
[net,data,tr,~,err] = nntraining.setup(net,mfilename,X,Xi,Ai,T,EW);
if ~isempty(err), nnerr.throw('Args',err), end
% Train
net = struct(net);
fcns = nn.subfcns(net);
[net,tr] = train_network(net,tr,data,fcns,param);
tr = nntraining.tr_clip(tr);
if isfield(tr,'perf')
    tr.best_perf = tr.perf(tr.best_epoch+1);
end
if isfield(tr,'vperf')
    tr.best_vperf = tr.vperf(tr.best_epoch+1);
end
if isfield(tr,'tperf')
    tr.best_tperf = tr.tperf(tr.best_epoch+1);
end
net.trainFcn = oldTrainFcn;
net.trainParam = oldTrainParam;
out1 = network(net);
out2 = tr;
end
% BOILERPLATE_END
%% =======================================================
% TODO - MU => MU_START
% TODO - alternate parameter names (i.e. MU for MU_START)
function info = get_info()
info = nnfcnTraining(mfilename,'Levenberg-Marquardt',7.0,true,true,...
[ ...
nnetParamInfo('showWindow','Show Training Window Feedback','nntype.bool_scalar',true,...
'Display training window during training.'), ...
nnetParamInfo('showCommandLine','Show Command Line Feedback','nntype.bool_scalar',false,...
'Generate command line output during training.'), ...
nnetParamInfo('show','Command Line Frequency','nntype.strict_pos_int_inf_scalar',25,...
'Frequency to update command line.'), ...
...
nnetParamInfo('epochs','Maximum Epochs','nntype.pos_int_scalar',1000,...
'Maximum number of training iterations before training is stopped.'), ...
nnetParamInfo('time','Maximum Training Time','nntype.pos_inf_scalar',inf,...
'Maximum time in seconds before training is stopped.'), ...
...
nnetParamInfo('goal','Performance Goal','nntype.pos_scalar',0,...
'Performance goal.'), ...
nnetParamInfo('min_grad','Minimum Gradient','nntype.pos_scalar',1e-5,...
'Minimum performance gradient before training is stopped.'), ...
nnetParamInfo('max_fail','Maximum Validation Checks','nntype.strict_pos_int_scalar',6,...
'Maximum number of validation checks before training is stopped.'), ...
...
nnetParamInfo('mu','Mu','nntype.pos_scalar',0.001,...
'Mu.'), ...
nnetParamInfo('mu_dec','Mu Decrease Ratio','nntype.real_0_to_1',0.1,...
'Ratio to decrease mu.'), ...
nnetParamInfo('mu_inc','Mu Increase Ratio','nntype.over1',10,...
'Ratio to increase mu.'), ...
nnetParamInfo('mu_max','Maximum mu','nntype.strict_pos_scalar',1e10,...
'Maximum mu before training is stopped.'), ...
], ...
[ ...
nntraining.state_info('gradient','Gradient','continuous','log') ...
nntraining.state_info('mu','Mu','continuous','log') ...
nntraining.state_info('val_fail','Validation Checks','discrete','linear') ...
]);
end
function err = check_param(param)
err = '';
end
function [net,tr] = train_network(net,tr,data,fcns,param)
% Checks
if isempty(net.performFcn)
    warning('nnet:trainlm:Performance',nnwarning.empty_performfcn_corrected);
    net.performFcn = 'mse';
    net.performParam = mse('defaultParam');
    tr.performFcn = net.performFcn;
    tr.performParam = net.performParam;
end
if isempty(strmatch(net.performFcn,{'sse','mse'},'exact'))
    warning('nnet:trainlm:Performance',nnwarning.nonjacobian_performfcn_replaced);
    net.performFcn = 'mse';
    net.performParam = mse('defaultParam');
    tr.performFcn = net.performFcn;
    tr.performParam = net.performParam;
end
% Initialize
startTime = clock;
original_net = net;
[perf,vperf,tperf,je,jj,gradient] = nntraining.perfs_jejj(net,data,fcns);
[best,val_fail] = nntraining.validation_start(net,perf,vperf);
WB = getwb(net);
lengthWB = length(WB);
ii = sparse(1:lengthWB,1:lengthWB,ones(1,lengthWB));
mu = param.mu;
% Training Record
tr.best_epoch = 0;
tr.goal = param.goal;
tr.states = {'epoch','time','perf','vperf','tperf','mu','gradient','val_fail'};
% Status
status = ...
[ ...
nntraining.status('Epoch','iterations','linear','discrete',0,param.epochs,0), ...
nntraining.status('Time','seconds','linear','discrete',0,param.time,0), ...
nntraining.status('Performance','','log','continuous',perf,param.goal,perf) ...
nntraining.status('Gradient','','log','continuous',gradient,param.min_grad,gradient) ...
nntraining.status('Mu','','log','continuous',mu,param.mu_max,mu) ...
nntraining.status('Validation Checks','','linear','discrete',0,param.max_fail,0) ...
];
nn_train_feedback('start',net,status);
% Train
for epoch = 0:param.epochs
    % Stopping Criteria
    current_time = etime(clock,startTime);
    [userStop,userCancel] = nntraintool('check');
    if userStop, tr.stop = 'User stop.'; net = best.net;
    elseif userCancel, tr.stop = 'User cancel.'; net = original_net;
    elseif (perf <= param.goal), tr.stop = 'Performance goal met.'; net = best.net;
    elseif (epoch == param.epochs), tr.stop = 'Maximum epoch reached.'; net = best.net;
    elseif (current_time >= param.time), tr.stop = 'Maximum time elapsed.'; net = best.net;
    elseif (gradient <= param.min_grad), tr.stop = 'Minimum gradient reached.'; net = best.net;
    elseif (mu >= param.mu_max), tr.stop = 'Maximum MU reached.'; net = best.net;
    elseif (val_fail >= param.max_fail), tr.stop = 'Validation stop.'; net = best.net;
    end
    % Feedback
    tr = nntraining.tr_update(tr,[epoch current_time perf vperf tperf mu gradient val_fail]);
    nn_train_feedback('update',net,status,tr,data, ...
        [epoch,current_time,best.perf,gradient,mu,val_fail]);
    % Stop
    if ~isempty(tr.stop), break, end
    % Levenberg Marquardt: solve (J'J + mu*I) dWB = -J'e, retrying with larger
    % mu until the step both succeeds numerically and reduces the performance
    while (mu <= param.mu_max)
        % CHECK FOR SINGULAR MATRIX
        [msgstr,msgid] = lastwarn;
        lastwarn('MATLAB:nothing','MATLAB:nothing')
        warnstate = warning('off','all');
        dWB = -(jj+ii*mu) \ je;
        [~,msgid1] = lastwarn;
        flag_inv = isequal(msgid1,'MATLAB:nothing');
        if flag_inv, lastwarn(msgstr,msgid); end;
        warning(warnstate)
        WB2 = WB + dWB;
        net2 = setwb(net,WB2);
        perf2 = nntraining.train_perf(net2,data,fcns);
        % TODO - possible speed enhancement
        % - retain intermediate variables for Memory Reduction = 1
        if (perf2 < perf) && flag_inv
            WB = WB2; net = net2;
            mu = max(mu*param.mu_dec,1e-20);   % successful step: relax damping
            break
        end
        mu = mu * param.mu_inc;                % failed step: increase damping
    end
    % Validation
    [perf,vperf,tperf,je,jj,gradient] = nntraining.perfs_jejj(net,data,fcns);
    [best,tr,val_fail] = nntraining.validation(best,tr,val_fail,net,perf,vperf,epoch);
end
end
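For completeness, a minimal usage sketch with the standard toolbox API (the data here is made up for illustration):

x = rand(4, 100);                    % 4 inputs, 100 samples
t = sum(x, 1);                       % toy scalar target
net = feedforwardnet(10, 'trainlm'); % one hidden layer of 10 neurons
net.trainParam.epochs = 200;
net.trainParam.goal = 1e-5;
[net, tr] = train(net, x, t);        % train() dispatches to trainlm
y = net(x);                          % network outputs after training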
That covers the questions about PSO algorithms for optimizing BP neural network code. We hope this helps; if you have more related questions, you can also contact our support team for further answers.