DDcGAN_codes
Modifications to train.py:
1. Replace os.mkdir with os.makedirs.
Reason: os.mkdir cannot create nested (multi-level) directories.
try:
    os.makedirs(f'./weights/{project_name}/')
    os.makedirs(f'./weights/{project_name}/Generator/')
    os.makedirs(f'./weights/{project_name}/Discriminator/')
except OSError:
    pass
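Since Python 3.2, os.makedirs also accepts exist_ok=True, which tolerates already-existing directories and makes the try/except wrapper unnecessary. A minimal sketch of the same layout (project_name is an example value here):

```python
import os

project_name = 'GAN_G1_D2'  # example value; use your actual project name

# makedirs creates all missing parents, so creating the two leaf
# directories also creates ./weights/{project_name}/ itself;
# exist_ok=True means no try/except is needed on re-runs.
for sub in ('Generator', 'Discriminator'):
    os.makedirs(os.path.join('./weights', project_name, sub), exist_ok=True)
```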
2. Checkpoint saving
In the original code the model trains for 100 epochs and saves weights every epoch; I changed it to save once every 10 epochs. The modified code:
if epoch % 10 == 0:
    torch.save(Generator, f'./weights/{project_name}/Generator/Generator_{epoch}.pth')
    torch.save(Discriminator, f'./weights/{project_name}/Discriminator/Discriminator_{epoch}.pth')
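As an aside, torch.save(Generator, ...) pickles the entire module, which ties the checkpoint file to the exact class definition; saving Generator.state_dict() is the more portable convention. The save schedule itself can be sketched in plain Python (should_save is a hypothetical helper, with the actual model saving stubbed out):

```python
def should_save(epoch, total_epochs=100, every=10):
    """Save a checkpoint every `every` epochs, plus the final epoch."""
    return epoch % every == 0 or epoch == total_epochs - 1

# With epochs numbered 0..99 this saves at 0, 10, ..., 90 and also
# at the final epoch 99, so the last model is never lost.
saved_epochs = [e for e in range(100) if should_save(e)]
print(saved_epochs)
```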
At this point, the weight files are generated under the root directory, for both the Discriminator and the Generator.
Modifications to test.py
import torch
from PIL import Image
from torchvision import transforms
import sys
sys.path.append(".")
from core.model import build_model
from core.utils import load_config
# from core.model.build import build_model
# from core.utils.config import load_config
# config = load_config('../config/Pan-GAN.yaml')
config = load_config('./config/GAN_G1_D2.yaml')
GAN_Model = build_model(config)
vis_img = Image.open('./demo/test_vis.jpg')
inf_img = Image.open('./demo/test_inf.jpg')
# trans = transforms.Compose([transforms.Resize((256, 256)), transforms.ToTensor()])
trans = transforms.Compose([transforms.Resize((512, 512)), transforms.ToTensor()])
vis_img = trans(vis_img)
inf_img = trans(inf_img)
data = {'Vis': vis_img.unsqueeze(0), 'Inf': inf_img.unsqueeze(0)}
GAN_Model.Generator.load_state_dict(torch.load('./weights/GAN_G1_D2/Generator/Generator_50.pth').state_dict())
Generator_feats, Discriminator_feats, confidence = GAN_Model(data)
untrans = transforms.Compose([transforms.ToPILImage()])
img = untrans(Generator_feats['Generator_1'][0])
print(img.size)
img.save('test_result.jpg')
The main errors encountered were as follows:
1. ModuleNotFoundError: No module named 'core'
The corresponding fix:
import sys
sys.path.append(".")
from core.model import build_model
from core.utils import load_config
# from core.model.build import build_model
# from core.utils.config import load_config
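Note that sys.path.append(".") resolves imports relative to the current working directory, so test.py only finds the `core` package when launched from the project root. A slightly more robust sketch inserts an absolute path instead:

```python
import os
import sys

# Resolve the project root to an absolute path so imports keep working
# even if the working directory changes later. In a script you could use
# os.path.dirname(os.path.abspath(__file__)) instead of '.'.
project_root = os.path.abspath('.')
if project_root not in sys.path:
    sys.path.insert(0, project_root)
```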
2. FileNotFoundError: [Errno 2] No such file or directory: '../config/GAN_G1_D2.yaml'
The corresponding fix:
# config = load_config('../config/Pan-GAN.yaml')
config = load_config('./config/GAN_G1_D2.yaml')
GAN_Model = build_model(config)
vis_img = Image.open('./demo/test_vis.jpg')
inf_img = Image.open('./demo/test_inf.jpg')
3. No such file or directory: './weights/Generator/Generator_100.pth'
The weight file path was wrong; correcting it fixes the error. The corrected path: weights/GAN_G1_D2/Generator/Generator_100.pth
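Before hard-coding a checkpoint path, it can help to list what actually exists on disk; a small sketch (paths follow the layout from the train.py changes above):

```python
import glob
import os

ckpt_dir = './weights/GAN_G1_D2/Generator'
# Sort numerically by epoch so 'Generator_100.pth' sorts after 'Generator_20.pth'
checkpoints = sorted(
    glob.glob(os.path.join(ckpt_dir, 'Generator_*.pth')),
    key=lambda p: int(os.path.splitext(os.path.basename(p))[0].split('_')[-1]),
)
print(checkpoints[-1] if checkpoints else f'No checkpoints under {ckpt_dir}')
```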
4. RuntimeError: Calculated padded input size per channel: (2 x 2). Kernel size: (4 x 4). Kernel size can't be greater than actual input size
The fix: change the 256×256 resize to 512×512.
# trans = transforms.Compose([transforms.Resize((256, 256)), transforms.ToTensor()])
trans = transforms.Compose([transforms.Resize((512, 512)), transforms.ToTensor()])
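The error arises because each stride-2 convolution roughly halves the spatial size, and eventually the feature map becomes smaller than the 4×4 kernel. A sketch of the size arithmetic, assuming the common kernel=4, stride=2, padding=1 downsampling block (the actual network architecture may differ):

```python
def conv_out(size, kernel=4, stride=2, padding=1):
    """Spatial size after one conv layer (standard output-size formula)."""
    return (size + 2 * padding - kernel) // stride + 1

def trace(size, layers=7):
    """Track the feature-map size through a stack of downsampling layers."""
    sizes = [size]
    for _ in range(layers):
        sizes.append(conv_out(sizes[-1]))
    return sizes

# A 256 input shrinks to 2x2 after seven halvings -- smaller than a 4x4
# kernel -- while a 512 input still leaves a 4x4 map at the same depth.
print(trace(256))  # [256, 128, 64, 32, 16, 8, 4, 2]
print(trace(512))  # [512, 256, 128, 64, 32, 16, 8, 4]
```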
5. img = untrans(Generator_feats['Generator'][0]) raises KeyError: 'Generator'
The fix is to change 'Generator' to 'Generator_1'.
# img = untrans(Generator_feats['Generator'][0])
img = untrans(Generator_feats['Generator_1'][0])
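A KeyError on a returned dict is easiest to diagnose by printing the available keys before indexing; a toy sketch (the dict contents here are placeholders standing in for the real model output):

```python
# Placeholder for the dict the model actually returns; in this project the
# generator output is stored under 'Generator_1', not 'Generator'.
Generator_feats = {'Generator_1': 'fused-image tensor (placeholder)'}

# Inspect the keys before indexing to find the correct one
print(list(Generator_feats.keys()))
key = 'Generator' if 'Generator' in Generator_feats else 'Generator_1'
```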
Correction:
Drawing on the relevant code in the debug function, I added some operations to the test code:
GAN_Model = build_model(config)
vis_img = Image.open('./demo/test_vis.jpg')
inf_img = Image.open('./demo/test_inf.jpg')
# trans = transforms.Compose([transforms.Resize((256, 256)), transforms.ToTensor()])
trans = transforms.Compose([transforms.Resize((512, 512)), transforms.ToTensor()])
vis_img = trans(vis_img)
inf_img = trans(inf_img)
data = {'Vis': vis_img.unsqueeze(0), 'Inf': inf_img.unsqueeze(0)}
GAN_Model.Generator.load_state_dict(torch.load('./weights/GAN_G1_D2/Generator/Generator_100.pth').state_dict()) #./weights/GAN_G1_D2/Generator/Generator_50.pth
Generator_feats, Discriminator_feats, confidence = GAN_Model(data)

# ------ Added: denormalize the output so the image displays correctly ------
# Generator_Train_config holds the mean/std used during training
# (loaded from the config; not shown in this snippet)
mean = Generator_Train_config['mean']
std = Generator_Train_config['std']
mean_t = torch.FloatTensor(mean).view(3, 1, 1).expand(vis_img.shape)
std_t = torch.FloatTensor(std).view(3, 1, 1).expand(vis_img.shape)
Generator_feats['Generator_1'][0] = Generator_feats['Generator_1'][0] * std_t + mean_t
untrans = transforms.Compose([transforms.ToPILImage()])
img = untrans(Generator_feats['Generator_1'][0])
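The added lines undo the per-channel normalization (x * std + mean) before converting to a PIL image. The per-pixel arithmetic, plus the clamp to [0, 1] that float-to-image conversion expects (values outside that range can produce artifacts), can be sketched in plain Python; the mean/std values below are hypothetical, the real ones come from Generator_Train_config:

```python
def denormalize(pixel, mean, std):
    """Undo per-channel normalization and clamp to the displayable [0, 1] range."""
    return [min(max(p * s + m, 0.0), 1.0) for p, s, m in zip(pixel, std, mean)]

# Hypothetical per-channel statistics for illustration
mean = [0.5, 0.5, 0.5]
std = [0.5, 0.5, 0.5]

# The out-of-range 1.8 is clamped to 1.0; result is approximately [0.6, 0.3, 1.0]
print(denormalize([0.2, -0.4, 1.8], mean, std))
```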
In the project root directory there is a debug folder where you can view the fusion results at different epochs.