MATLAB Bearing Fault Diagnosis Model Based on the Gramian Angular Field and 2DCNN-BiGRU
Contents
0 Introduction
1 Gramian Angular Field Principle
2 2DCNN-BiGRU Network Structure
3 Application Example
3.1 Data Preparation
3.2 Gramian Angular Field Data Extraction
3.3 Building the Network Model (the key part)
3.4 Using the Trained Network
4 Conclusion
0 Introduction
To diagnose rolling-bearing faults effectively, this post combines the strengths of convolutional neural networks (CNN) in image feature extraction and classification with the Gramian Angular Field (GAF), which converts the one-dimensional vibration signal of a rolling bearing into a two-dimensional image. The transformation preserves the complete information of the signal as well as its temporal dependence. On this basis, a diagnosis model built from a convolutional neural network and a bidirectional gated recurrent unit (BiGRU) is proposed. The converted two-dimensional images are used as the model input: the CNN extracts spatial features of the image, the BiGRU then captures the temporal features, and a classifier finally performs the pattern recognition.
1 Gramian Angular Field Principle
For the detailed principle, see the article linked below; in addition, 【blog】 provides Python code. The transformation is briefly summarized below.
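As a short summary (the standard GAF construction, consistent with the PPA/MinMaxScaler/GADF/GASF helper functions in Section 3.2): the series is rescaled to [-1, 1], each value is mapped to a polar angle, and the two images are formed from angular sums and differences:

$$\tilde{x}_i = \frac{(x_i-\max(X))+(x_i-\min(X))}{\max(X)-\min(X)} \in [-1,1], \qquad \phi_i = \arccos(\tilde{x}_i)$$

$$\mathrm{GASF}_{ij} = \cos(\phi_i+\phi_j), \qquad \mathrm{GADF}_{ij} = \sin(\phi_i-\phi_j)$$

In this post a piecewise aggregate approximation (the PPA helper, window size 2) is applied first to shorten the series before building the N-by-N image.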
2 2DCNN-BiGRU Network Structure
The structure is straightforward: the vibration signal is first converted into a GADF or GASF image, which is resized to 100*100*3 and fed into the 2DCNN-BiGRU for feature extraction; a fully connected layer then acts as the classification layer, and the cross-entropy loss is used during training.
Building this kind of network in MATLAB is somewhat involved: the Deep Learning Toolbox has no built-in BiGRU layer, so I constructed the BiGRU layers manually by following 【here】. Incidentally, the MATLAB Deep Learning Toolbox ships with many classic networks together with example data that can be run directly. In my code the concrete implementation builds a CNN and a BiGRU separately: the data is first fed into the CNN, and the extracted features are then reshaped into the BiGRU input format and passed to the BiGRU.
%% Model construction
cnn = [
    imageInputLayer([100 100 3],'Normalization','none','Name','cnn_channel')
    convolution2dLayer(3,32,'Padding','same','Name','conv1')
    maxPooling2dLayer(2,'Stride',2,'Padding','same','Name','pool1')
    dropoutLayer(0.5,'Name','dropout1')
    reluLayer('Name','relu1')
    convolution2dLayer(3,64,'Padding','same','Name','conv2')
    maxPooling2dLayer(2,'Stride',2,'Padding','same','Name','pool2')
    dropoutLayer(0.5,'Name','dropout2')
    reluLayer('Name','relu2')
    convolution2dLayer(3,64,'Padding','same','Name','conv3')
    maxPooling2dLayer(2,'Stride',2,'Padding','same','Name','pool3')
    dropoutLayer(0.5,'Name','dropout3')
    reluLayer('Name','relu3')
    convolution2dLayer(3,64,'Padding','same','Name','conv4')
    maxPooling2dLayer(2,'Stride',2,'Padding','same','Name','pool4')
    dropoutLayer(0.5,'Name','dropout4')
    reluLayer('Name','relu4')
    ]; % fullyConnectedLayer(384,'Name','fc1')
lgraph1 = layerGraph(cnn);
dlnet1 = dlnetwork(lgraph1);
% BiGRU layers and output layer; the BiGRU construction follows
% https://ww2.mathworks.cn/support/search.html/answers/1567853-is-a-bi-gru-available-bidirectional-gated-recurrent-unit-gru-or-a-way-to-implement-a-bi-gru.html?fq[]=asset_type_name:answer&fq[]=category:deeplearning/deep-learning-with-time-series-sequences-and-text&page=1
lg = layerGraph();
lg = addLayers(lg, [
    sequenceInputLayer(448, "Name", "bigruinput")
    gruLayer(512, 'OutputMode', 'sequence', "Name", "gru1")
    concatenationLayer(1, 2, "Name", "cat1")
    gruLayer(512, 'OutputMode', 'last', "Name", "gru3")
    concatenationLayer(1, 2, "Name", "cat2")
    fullyConnectedLayer(10, 'Name', 'output')
    ]);
lg = addLayers(lg, [
    FlipLayer("flip1")
    gruLayer(512, 'OutputMode', 'sequence', "Name", "gru2")
    FlipLayer("flip2")
    ]);
lg = addLayers(lg, [
    FlipLayer("flip3")
    gruLayer(512, 'OutputMode', 'last', "Name", "gru4")
    ]);
lg = connectLayers(lg, "bigruinput", "flip1");
lg = connectLayers(lg, "flip2", "cat1/in2");
lg = connectLayers(lg, "cat1", "flip3");
lg = connectLayers(lg, "gru4", "cat2/in2");
dlnet2 = dlnetwork(lg);
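The sequenceInputLayer(448) and the later reshape in modelGradients assume that, for a 100*100*3 input, the CNN branch ends with 7*7*64 feature maps (the post notes that the paper reports 6*6*64, while MATLAB's 'same' padding gives 7*7*64). If you change the input size or the pooling layers, a quick sketch like the following (using the dlnet1 built above and a dummy mini-batch) can confirm the size before wiring the two networks together:
%% Optional: confirm the CNN output size assumed by sequenceInputLayer(448)
X0 = dlarray(rand(100,100,3,2,'single'),'SSCB'); % dummy mini-batch of 2 images
Y0 = predict(dlnet1,X0);
size(Y0)   % expected: 7 7 64 2, which is reshaped to [7*64, 7, batch] before the BiGRU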
3 Application Example
3.1 Data Preparation
The Case Western Reserve University (CWRU) bearing fault dataset is used, specifically the 48 kHz / 0 HP data, covering 10 fault classes (with the normal condition treated as a special fault class). After segmentation each sample contains 864 sampling points, with 200 samples per class, giving 2000 samples in total. The data used in this post can be downloaded 【here】. Finally, the samples are split into training, validation, and test sets at a 7:2:1 ratio.
%% Data preprocessing (train / validation / test split)
clc;clear;close all
%% Load the raw data
load 0HP/48k_Drive_End_B007_0_122;    a1=X122_DE_time'; %1
load 0HP/48k_Drive_End_B014_0_189;    a2=X189_DE_time'; %2
load 0HP/48k_Drive_End_B021_0_226;    a3=X226_DE_time'; %3
load 0HP/48k_Drive_End_IR007_0_109;   a4=X109_DE_time'; %4
load 0HP/48k_Drive_End_IR014_0_174;   a5=X173_DE_time'; %5
load 0HP/48k_Drive_End_IR021_0_213;   a6=X213_DE_time'; %6
load 0HP/48k_Drive_End_OR007@6_0_135; a7=X135_DE_time'; %7
load 0HP/48k_Drive_End_OR014@6_0_201; a8=X201_DE_time'; %8
load 0HP/48k_Drive_End_OR021@6_0_238; a9=X238_DE_time'; %9
load 0HP/normal_0_97;                 a10=X097_DE_time';%10
%%
N=200;
L=864; % take N samples per condition, each of length L
data=[];label=[];
for i=1:10
    if i==1;ori_data=a1;end
    if i==2;ori_data=a2;end
    if i==3;ori_data=a3;end
    if i==4;ori_data=a4;end
    if i==5;ori_data=a5;end
    if i==6;ori_data=a6;end
    if i==7;ori_data=a7;end
    if i==8;ori_data=a8;end
    if i==9;ori_data=a9;end
    if i==10;ori_data=a10;end
    for j=1:N
        start_point=randi(length(ori_data)-L); % pick a random start point
        end_point=start_point+L-1;
        data=[data; ori_data(start_point:end_point)];
        label=[label; i];
    end
end
%% Convert the labels to one-hot encoding
N=size(data,1);
output=zeros(N,10);
for i = 1:N
    output(i,label(i))=1;
end
%% Split into training / validation / test sets at a 7:2:1 ratio
n=randperm(N);
m1=round(0.7*N);
m2=round(0.9*N);
train_X=data(n(1:m1),:);
train_Y=output(n(1:m1),:);
valid_X=data(n(m1+1:m2),:);
valid_Y=output(n(m1+1:m2),:);
test_X=data(n(m2+1:end),:);
test_Y=output(n(m2+1:end),:);
save data_process train_X train_Y valid_X valid_Y test_X test_Y
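As a quick optional sanity check (not part of the original script), you can reload data_process.mat and confirm the split sizes and class balance:
%% Optional sanity check on the saved split
load data_process
disp([size(train_X,1) size(valid_X,1) size(test_X,1)]) % expected: 1400 400 200
[~,train_label]=max(train_Y,[],2);                     % decode the one-hot labels
histcounts(train_label,0.5:1:10.5)                     % number of training samples per class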
3.2 Gramian Angular Field Data Extraction
%% Use the Gramian Angular Field (GAF) to convert time series data into images
clear;clc;close all
%% Load a signal
load data_process.mat
s=train_X(1,:);
N=864;
fs=48000;
t=0:1/fs:(N-1)/fs;
%%
figure
plot(t,s)
xlabel('t/s')
ylabel('Amplitude')
grid on
%%
window_size = 2;                         % window size of 2
transformer = PPA(s,window_size);        % piecewise aggregate approximation
transformer = MinMaxScaler(transformer); % rescale to [-1,1]
arccos_X = acos(transformer);            % map to polar angles
X=GADF(arccos_X);
figure
imagesc(t,t,abs(X)/max(max(abs(X))));
set(gca,'YDir','normal')
axis ij; % y axis increases from top to bottom
xlabel('Time t/s');
ylabel('Time t/s');
title('GADF');
X=GASF(arccos_X);
figure
imagesc(t,t,abs(X)/max(max(abs(X))));
set(gca,'YDir','normal')
axis ij; % y axis increases from top to bottom
xlabel('Time t/s');
ylabel('Time t/s');
title('GASF');
Below are the functions needed to compute the angular fields. Save them as four separate function files, otherwise MATLAB may sometimes fail to find them.
function y=PPA(x,window)
if size(x,1)==1
    x=x';
end
while mod(size(x,1),window)~=0
    x=[x; 0];
end
y=reshape(x,[window,length(x)/window])';
y=sum(y,2)/window;
end

function y=MinMaxScaler(data)
max_data = max(data);
min_data = min(data);
y = ((data-max_data)+(data-min_data))/(max_data-min_data); % rescaled to [-1,1]
end

function X=GADF(arccos_X)
X = ones(length(arccos_X),length(arccos_X));
for i = 1:length(arccos_X)
    for j = 1:length(arccos_X)
        X(i,j) = arccos_X(i)-arccos_X(j);
    end
end
X=sin(X);
end

function X=GASF(arccos_X)
X = ones(length(arccos_X),length(arccos_X));
for i = 1:length(arccos_X)
    for j = 1:length(arccos_X)
        X(i,j) = arccos_X(i)+arccos_X(j);
    end
end
X=cos(X);
end
The GADF and GASF images obtained for this example are shown in the figures above.
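As a quick optional check of the helper functions above (not from the original post): the GASF should be symmetric, since cos(a+b)=cos(b+a), and the GADF antisymmetric, since sin(a-b)=-sin(b-a):
%% Optional property check of the GAF helpers
x = randn(1,64);                    % arbitrary test signal
phi = acos(MinMaxScaler(PPA(x,2))); % same pipeline as above
Xs = GASF(phi); Xd = GADF(phi);
max(abs(Xs-Xs'),[],'all')           % ~0: GASF is symmetric
max(abs(Xd+Xd'),[],'all')           % ~0: GADF is antisymmetric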
Next, write a loop that computes and saves the Gramian angular field image for every sample of the training, validation, and test sets. Here we extract the GADF images and use multiple parallel workers to generate them, which is a bit faster. The code is as follows:
%% Batch-generate the GADF images
clear;clc;close all;format compact
if exist('GSDF')
    rmdir('GSDF','s')
end
mkdir('GSDF');
mkdir('GSDF/train_img/');
mkdir('GSDF/valid_img/');
mkdir('GSDF/test_img/');
%% Load the signals
load data_process.mat
N=864;
fs=48000;
t=0:1/fs:(N-1)/fs;
window_size = 2; % window size of 2
%% Training set
[m,n]=size(train_X);
[~,label]=max(train_Y,[],2);
p=parpool(maxNumCompThreads); % many images to generate, so use multiple workers to save time
% If your machine does not support parallel pools and this errors out, remove p=parpool(maxNumCompThreads),
% delete(p) and delete(gcp('nocreate')), and change the three parfor loops into for loops.
parfor i=1:m
    transformer = PPA(train_X(i,:),window_size); % piecewise aggregate approximation
    transformer = MinMaxScaler(transformer);     % rescale to [-1,1]
    arccos_X = acos(transformer);                % map to polar angles
    coefs=GADF(arccos_X);
    imagesc(t,t,abs(coefs)/max(max(abs(coefs))));
    % set(gcf,'PaperPositionMode','auto');
    set(gca,'position',[0 0 1 1])
    set(gcf,'position',[0,0,250,250]);
    fname=['GSDF/train_img/',num2str(i),'-',num2str(label(i)),'.jpg'];
    saveas(gcf,fname);
end
%% Validation set
[m,n]=size(valid_X);
[~,label]=max(valid_Y,[],2);
parfor i=1:m
    transformer = PPA(valid_X(i,:),window_size); % piecewise aggregate approximation
    transformer = MinMaxScaler(transformer);     % rescale to [-1,1]
    arccos_X = acos(transformer);                % map to polar angles
    coefs=GADF(arccos_X);
    imagesc(t,t,abs(coefs)/max(max(abs(coefs))));
    set(gca,'position',[0 0 1 1])
    set(gcf,'position',[0,0,250,250]);
    fname=['GSDF/valid_img/',num2str(i),'-',num2str(label(i)),'.jpg'];
    saveas(gcf,fname);
end
%% Test set
[m,n]=size(test_X);
[~,label]=max(test_Y,[],2);
parfor i=1:m
    transformer = PPA(test_X(i,:),window_size);  % piecewise aggregate approximation
    transformer = MinMaxScaler(transformer);     % rescale to [-1,1]
    arccos_X = acos(transformer);                % map to polar angles
    coefs=GADF(arccos_X);
    imagesc(t,t,abs(coefs)/max(max(abs(coefs))));
    set(gca,'position',[0 0 1 1])
    set(gcf,'position',[0,0,250,250]);
    fname=['GSDF/test_img/',num2str(i),'-',num2str(label(i)),'.jpg'];
    saveas(gcf,fname);
end
delete(p);
delete(gcp('nocreate'))
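Before training, it can be worth confirming that the images were written as expected. This is an optional check, based on the index-label.jpg naming used above:
%% Optional check of the generated image folders
list = dir('GSDF/train_img/*.jpg');
numel(list)                               % should equal the number of training samples (1400 here)
S = regexp(list(1).name,'\-|\.','split'); % file names follow the pattern index-label.jpg
disp(['class label of the first file: ',S{2}])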
3.3 Building the Network Model (the key part)
Building a composite network in MATLAB is somewhat involved. The main script is shown below:
%% 1. Clear the workspace
clc;clear;close all;format compact
%% 2. Load the data
disp('Reading the training-set images for model training')
[traindata,traintarget]=readimgdata('GSDF/train_img',[100 100],1);
disp('Reading the validation-set images for validation during training')
[XValidation,YValidation]=readimgdata('GSDF/valid_img',[100 100],1);
trainT = onehotencode(traintarget',2)';
validT = onehotencode(YValidation',2)';
%% Model construction
cnn = [
    imageInputLayer([100 100 3],'Normalization','none','Name','cnn_channel')
    convolution2dLayer(3,32,'Padding','same','Name','conv1')
    maxPooling2dLayer(2,'Stride',2,'Padding','same','Name','pool1')
    dropoutLayer(0.5,'Name','dropout1')
    reluLayer('Name','relu1')
    convolution2dLayer(3,64,'Padding','same','Name','conv2')
    maxPooling2dLayer(2,'Stride',2,'Padding','same','Name','pool2')
    dropoutLayer(0.5,'Name','dropout2')
    reluLayer('Name','relu2')
    convolution2dLayer(3,64,'Padding','same','Name','conv3')
    maxPooling2dLayer(2,'Stride',2,'Padding','same','Name','pool3')
    dropoutLayer(0.5,'Name','dropout3')
    reluLayer('Name','relu3')
    convolution2dLayer(3,64,'Padding','same','Name','conv4')
    maxPooling2dLayer(2,'Stride',2,'Padding','same','Name','pool4')
    dropoutLayer(0.5,'Name','dropout4')
    reluLayer('Name','relu4')
    ]; % fullyConnectedLayer(384,'Name','fc1')
lgraph1 = layerGraph(cnn);
dlnet1 = dlnetwork(lgraph1);
% BiGRU layers and output layer; the BiGRU construction follows
% https://ww2.mathworks.cn/support/search.html/answers/1567853-is-a-bi-gru-available-bidirectional-gated-recurrent-unit-gru-or-a-way-to-implement-a-bi-gru.html?fq[]=asset_type_name:answer&fq[]=category:deeplearning/deep-learning-with-time-series-sequences-and-text&page=1
lg = layerGraph();
lg = addLayers(lg, [
    sequenceInputLayer(448, "Name", "bigruinput")
    gruLayer(512, 'OutputMode', 'sequence', "Name", "gru1")
    concatenationLayer(1, 2, "Name", "cat1")
    gruLayer(512, 'OutputMode', 'last', "Name", "gru3")
    concatenationLayer(1, 2, "Name", "cat2")
    fullyConnectedLayer(10, 'Name', 'output')
    ]);
lg = addLayers(lg, [
    FlipLayer("flip1")
    gruLayer(512, 'OutputMode', 'sequence', "Name", "gru2")
    FlipLayer("flip2")
    ]);
lg = addLayers(lg, [
    FlipLayer("flip3")
    gruLayer(512, 'OutputMode', 'last', "Name", "gru4")
    ]);
lg = connectLayers(lg, "bigruinput", "flip1");
lg = connectLayers(lg, "flip2", "cat1/in2");
lg = connectLayers(lg, "cat1", "flip3");
lg = connectLayers(lg, "gru4", "cat2/in2");
dlnet2 = dlnetwork(lg);
%% Training hyperparameters
numEpochs = 150;      % number of epochs
miniBatchSize = 128;  % batch size
learnRate = 0.01;     % initial learning rate
decay = 0.9;          % learning rate decay
momentum = 0.9;       % momentum
% Create a figure to show the training loss in real time
figure
lineLossTrain = animatedline('Color',[0.85 0.325 0.098]);
ylim([0 inf])
xlabel("Iteration")
ylabel("Loss")
grid on
%% Training
train_again=1; % 1: retrain the model from scratch; 0: load the trained network
if train_again==1
    velocity1 = [];
    velocity2 = [];
    iteration = 0;
    start = tic;
    trainloss=zeros(numEpochs,1);
    for epoch = 1:numEpochs
        % At each epoch, shuffle the data and then loop over all mini-batches
        [x,y]=shuffle_data(traindata,trainT);
        batchs=ceil(size(y,2)/miniBatchSize); % total number of batches
        all_loss=0;
        for i=1:batchs
            iteration = iteration + 1;
            % read the i-th mini-batch
            [dX,T] = next_batch(x,y,miniBatchSize,i);
            % compute the gradients and the loss
            dX=gpuArray(dX);
            T=gpuArray(T);
            [gradients1, gradients2,loss] = dlfeval(@modelGradients,dlnet1,dlnet2,dX,T,miniBatchSize);
            % update the network parameters with the SGDM optimizer
            [dlnet1.Learnables, velocity1] = sgdmupdate(dlnet1.Learnables, gradients1, velocity1, learnRate, momentum);
            [dlnet2.Learnables, velocity2] = sgdmupdate(dlnet2.Learnables, gradients2, velocity2, learnRate, momentum);
            all_loss = all_loss+loss;
        end
        if mod(epoch,100)==0
            % learning rate decay
            learnRate = learnRate*decay;
        end
        trainloss(epoch)=all_loss;
        % update the live loss plot
        D = duration(0,0,toc(start),'Format','hh:mm:ss');
        addpoints(lineLossTrain,epoch,all_loss/size(y,2))
        title("Epoch: " + epoch + ", Elapsed: " + string(D))
        drawnow
    end
    save result/net dlnet1 dlnet2
else
    load result/net
end
%% Test-set accuracy
[XValidation,YValidation]=readimgdata('GSDF/test_img',[100 100],1);
pred=modelPredict(dlnet1,dlnet2,XValidation,miniBatchSize);
label=onehotdecode(pred,[1,2,3,4,5,6,7,8,9,10],1);
acc=sum(label==YValidation)/length(label)
figure
cm = confusionchart(YValidation,label,'Normalization','row-normalized');
cm.XLabel = 'Predicted class';
cm.YLabel = 'True class';
save result/cnnigruresult acc
%% Feature visualization
xb = dlarray(XValidation,'SSCB');
xb=gpuArray(xb);
dlYPred1 = predict(dlnet1,xb);
dlYPred1=reshape(dlYPred1,[7*64,7,size(xb,4)]);
dlYPred2=dlarray(dlYPred1,'CTB');
testFeatures= predict(dlnet2,dlYPred2);
testFeatures=gather(extractdata(testFeatures));
Y = tsne(testFeatures');
figure
gscatter(Y(:,1),Y(:,2),YValidation)
legend('1','2','3','4','5','6','7','8','9','10')
title('t-SNE visualization of the test-set features')
Next, define the sub-functions that are needed:
1. Sub-function for reading the image data
function [data,label]=readimgdata(path,size,regular)
tic
list=dir([path,'/','*.jpg']);
N=length(list);
for i=1:N
    % read each image and resize it to the requested size (100*100*3 here)
    filepath=[path,'/',list(i).name];
    img=imread(filepath);
    img=double(imresize(img,size));
    data(:,:,:,i)=img;
    % parse the label from the file name, e.g. 10-1.jpg is the 10th image with class label 1
    S1 = regexp(list(i).name, '\-', 'split');
    S=regexp(S1{2}, '\.', 'split');
    label(:,i)=str2num(S{1});
end
if regular
    data=data/255;
end
label=categorical(label);
disp('Time spent reading the images:')
toc
2. Sub-function for shuffling the images
function [x,y]=shuffle_data(x,y)
n=length(y);
a=randperm(n);
x=x(:,:,:,a);
y=y(:,a);
end
3. Function that converts a mini-batch into the CNN input format
function [xb,yb] = next_batch(x,y,bs,i)
start=(i-1)*bs+1;
ennd=start+bs-1;
if ennd>length(y)
    % the last batch is incomplete: take the final bs samples instead
    start=length(y)-bs+1;
    ennd=length(y);
end
xb=x(:,:,:,start:ennd);
yb=y(:,start:ennd);
% convert to dlarray, the input format required by the Deep Learning Toolbox
xb = dlarray(xb,'SSCB');
yb = dlarray(yb,'CB');
end
4. FlipLayer, the sub-function used to build the BiGRU. For the detailed idea, see 【here】; the code is:
classdef FlipLayer < nnet.layer.Layer
    methods
        function layer = FlipLayer(name)
            layer.Name = name;
        end
        function Y = predict(~, X)
            Y = flip(X,3); % flip along the third (time) dimension of the sequence data
        end
    end
end
5. The gradient function used for the backward update of the network parameters: it feeds the data through the CNN, reshapes the CNN output into the BiGRU input format, passes it through the BiGRU, and finally computes the cross-entropy loss between the BiGRU output and the true labels. The code is:
function [gradients1, gradients2,loss] = modelGradients(dlnet1,dlnet2,X,T,miniBatchSize)
% forward pass through the CNN branch
dlYPred1 = forward(dlnet1,X);
% size(dlYPred1)
% time-distributed reshape: the paper reports 6*6*64 here, but MATLAB gives 7*7*64
dlYPred1=reshape(dlYPred1,[7*64,7,miniBatchSize]);
dlYPred1=dlarray(dlYPred1,'CTB');
% forward pass through the BiGRU branch
output= forward(dlnet2,dlYPred1);
% size(output)
output=dlarray(output,'CB');
output = softmax(output);
loss = crossentropy(output,T);
% compute the gradients
[gradients1,gradients2] = dlgradient(loss,dlnet1.Learnables,dlnet2.Learnables);
loss = double(gather(extractdata(loss)));
end
6. Model prediction function
function pred=modelPredict(dlnet1,dlnet2,data,batchsize)
N_samples=size(data,4);
batchs=ceil(N_samples/batchsize); % total number of batches
pred=[];
for i =1:batchs
    xb = dlarray(data(:,:,:,(i-1)*batchsize+1:min(i*batchsize,N_samples)),'SSCB');
    xb=gpuArray(xb);
    dlYPred1 = predict(dlnet1,xb);
    dlYPred1=reshape(dlYPred1,[7*64,7,size(xb,4)]);
    dlYPred1=dlarray(dlYPred1,'CTB');
    output= predict(dlnet2,dlYPred1);
    output=extractdata(output); % convert the dlarray back to an ordinary array
    pred =[pred softmax(output)];
end
end
3.4 Using the Trained Network
clc;clear;close all;format compact
%% Load the trained network
load result/net
%% Read one signal sample as the test sample, convert it to an image, and feed it to the trained classifier
load data_process
data=test_X(1,:);
N=864;
fs=48000;
t=0:1/fs:(N-1)/fs;
%%
window_size = 2;                         % window size of 2
transformer = PPA(data,window_size);     % piecewise aggregate approximation
transformer = MinMaxScaler(transformer); % rescale to [-1,1]
arccos_X = acos(transformer);            % map to polar angles
X=GADF(arccos_X);
imagesc(t,t,abs(X)/max(max(abs(X))));
set(gca,'position',[0 0 1 1])
set(gcf,'position',[0,0,250,250]);
saveas(gcf,'test.jpg');
%% Convert to the CNN input format
testD=double(imresize(imread('test.jpg'),[100,100]))/255;
pred=modelPredict(dlnet1,dlnet2,testD,1);
label=onehotdecode(pred,[1,2,3,4,5,6,7,8,9,10],1);
disp(['Predicted class of the test sample: ',num2str(double(label))])
4 Conclusion
More examples are available in the 【column】. Your likes are my motivation to keep updating the MATLAB神经网络1000个案例分析 (MATLAB Neural Network: 1000 Case Studies) series.