Week J2: ResNet50V2 — Hands-on Implementation and Analysis

  • 🍨 This post is a learning-record blog for the 🔗365天深度学习训练营 (365-day deep learning training camp)
  • 🍖 Original author: K同学啊

    Table of Contents

    • I. Preliminaries
      • 1. ResNetV2 vs. the original ResNet structure
      • 2. Experiments with different shortcut structures
      • 3. Experiments with activation placement
    • II. Model Reproduction
      • 1. Set up the GPU
      • 2. Load the data
      • 3. Data preprocessing
      • 4. Build the model
        • 1. Residual Block
        • 2. Stacking Residual Blocks
        • 3. ResNet50V2 architecture
        • 4. Training and testing functions
        • 5. Model training
        • 6. Visualizing the results
    • III. Summary

Environment:
Language: Python 3.8.0
Editor: Jupyter Notebook
Deep learning framework: PyTorch

I. Preliminaries

1. ResNetV2 vs. the original ResNet structure

[Figure: residual units of (a) the original ResNet and (b) the proposed ResNetV2 pre-activation design]
Key change: (a) "original" denotes the residual unit of the original ResNet, and (b) "proposed" denotes the new ResNetV2 unit. The main difference is the ordering of operations: in (a) each branch applies the convolution first, then BN and the activation, with a final ReLU after the addition; in (b) BN and the activation are applied before the convolution (pre-activation), and the post-addition ReLU is moved inside the residual branch, so the shortcut path stays a pure identity.

Result of the change: the authors compared the two structures on CIFAR-10 with a 1001-layer ResNet. The proposed pre-activation unit (b) achieves a clearly lower test error of 4.92%, versus 7.61% for the original unit (a).
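
To make the ordering difference concrete, here is a minimal PyTorch sketch of the two layouts (my own illustration, not code from the post; the channel count and 3×3 convolutions are placeholders):

import torch
import torch.nn as nn

class PostActUnit(nn.Module):
    """(a) original: conv -> BN -> ReLU in the branch, ReLU applied after the addition."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1   = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2   = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + x)           # activation also sits on the shortcut path

class PreActUnit(nn.Module):
    """(b) proposed: BN -> ReLU -> conv in the branch, nothing after the addition."""
    def __init__(self, channels):
        super().__init__()
        self.bn1   = nn.BatchNorm2d(channels)
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2   = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)

    def forward(self, x):
        out = self.conv1(torch.relu(self.bn1(x)))
        out = self.conv2(torch.relu(self.bn2(out)))
        return out + x                        # shortcut stays a clean identity mapping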

2. Experiments with different shortcut structures

[Figure: shortcut-connection variants (a)–(f) examined in the paper]
In (b)–(f) the shortcut connection is impeded by different components. To keep the illustration simple the BN layers are not drawn; all units here place BN right after the weight layers. Panels (a)–(f) show the authors' different designs for the shortcut part of the residual structure, and the results for each variant are summarized in the table below:
[Table: classification error on CIFAR-10 for each shortcut variant]
The authors evaluated ResNet-110 with each shortcut variant on CIFAR-10 and found that the original structure (a), i.e. the pure identity mapping, performs best.
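
As a rough illustration of what the table compares (again my own sketch, not from the post), the weaker variants amount to putting something on the shortcut path, while the winning choice leaves it untouched:

import torch
import torch.nn as nn

def residual_forward(x, branch, shortcut=lambda t: t):
    # y = shortcut(x) + F(x); the default shortcut is the identity mapping
    return shortcut(x) + branch(x)

branch = nn.Sequential(nn.Conv2d(64, 64, 3, padding=1), nn.ReLU())
x = torch.randn(1, 64, 32, 32)

y_identity = residual_forward(x, branch)                                 # (a) identity mapping, the best variant
y_scaled   = residual_forward(x, branch, shortcut=lambda t: 0.5 * t)     # constant-scaling variant
y_conv1x1  = residual_forward(x, branch, shortcut=nn.Conv2d(64, 64, 1))  # 1x1 conv shortcut variant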

3. Experiments with activation placement

[Figure: activation-ordering variants (a)–(e) and their classification error on CIFAR-10]
The best result comes from (e) full pre-activation, followed by (a) the original ordering.

II. Model Reproduction

1. Set up the GPU

import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, TensorDataset

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
device

2. Load the data

import matplotlib.pyplot as plt
import os, PIL, pathlib
import numpy as np

data_dir = './bird_photos'
data_dir = pathlib.Path(data_dir)                # wrap the path so we can use glob()
data_path = list(data_dir.glob('*'))             # one sub-folder per class
classeNames = [path.name for path in data_path]  # folder names are the class names
classeNames

3. Data preprocessing

import torchvision
from torchvision import transforms, datasets

train_transforms = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(
        mean=[0.485, 0.456, 0.406],
        std=[0.229, 0.224, 0.225]
        )
])

total_data = datasets.ImageFolder('./bird_photos', transform=train_transforms)

# Split the dataset into training and test sets
train_size = int(0.8 * len(total_data))
test_size = len(total_data) - train_size

train_dataset, test_dataset = torch.utils.data.random_split(total_data, [train_size, test_size])

batch_size = 8
train_dl = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
test_dl = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size, shuffle=True)
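
Before building the model, it can help to pull one batch and confirm the pipeline produces the expected shapes (a small check I added; it is not in the original post):

# Peek at a single batch: images should be [batch, 3, 224, 224], labels [batch]
imgs, labels = next(iter(train_dl))
print(imgs.shape, labels.shape)
print(len(train_dataset), len(test_dataset), total_data.classes)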

4. Build the model

Note: ResNet50V2, ResNet101V2 and ResNet152V2 are built in exactly the same way; they differ only in how many residual blocks are stacked in each stage.
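
For reference, the per-stage block counts below are the standard ones used by the Keras/torchvision reference implementations; this small table is only an illustration and is not used by the code that follows:

# Number of residual blocks in stages conv2_x .. conv5_x for the V2 family
RESNET_V2_BLOCKS = {
    'resnet50v2':  [3, 4, 6, 3],
    'resnet101v2': [3, 4, 23, 3],
    'resnet152v2': [3, 8, 36, 3],
}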

1. Residual Block

import torch
import torch.nn as nn

class Block2(nn.Module):
    def __init__(self, in_channels, filters, kernel_size=3, stride=1, conv_shortcut=False):
        super(Block2, self).__init__()
        self.conv_shortcut = conv_shortcut

        # Pre-activation: BN + ReLU applied to the block input
        self.bn1 = nn.BatchNorm2d(in_channels)
        self.relu = nn.ReLU(inplace=True)

        if conv_shortcut:
            # 1x1 conv shortcut, used when the channel count (and possibly the stride) changes
            self.shortcut = nn.Conv2d(in_channels, 4 * filters, kernel_size=1, stride=stride, bias=False)
        else:
            if stride > 1:
                # Identity shortcut that only down-samples spatially
                self.shortcut = nn.MaxPool2d(kernel_size=1, stride=stride)
            else:
                self.shortcut = nn.Identity()

        self.conv1 = nn.Conv2d(in_channels, filters, kernel_size=1, stride=1, bias=False)
        self.bn2 = nn.BatchNorm2d(filters)
        self.relu2 = nn.ReLU(inplace=True)

        self.padding = nn.ZeroPad2d(1)   # explicit padding before the 3x3 conv (which uses padding=0)

        self.conv2 = nn.Conv2d(filters, filters, kernel_size=kernel_size, stride=stride, padding=0, bias=False)
        self.bn3 = nn.BatchNorm2d(filters)
        self.relu3 = nn.ReLU(inplace=True)

        self.conv3 = nn.Conv2d(filters, 4 * filters, kernel_size=1, bias=False)

    def forward(self, x):
        preact = self.bn1(x)
        preact = self.relu(preact)

        if self.conv_shortcut:
            shortcut = self.shortcut(preact)
        else:
            shortcut = self.shortcut(x)

        out = self.conv1(preact)
        out = self.bn2(out)
        out = self.relu2(out)
        out = self.padding(out)
        out = self.conv2(out)
        out = self.bn3(out)
        out = self.relu3(out)

        out = self.conv3(out)

        out += shortcut

        return out
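
A quick shape check for the block above (my own sketch): with a conv shortcut the channel count expands to 4 × filters, and with stride=2 the spatial size is halved:

x = torch.randn(1, 64, 56, 56)
block = Block2(in_channels=64, filters=64, conv_shortcut=True)
print(block(x).shape)                  # torch.Size([1, 256, 56, 56])

down = Block2(in_channels=256, filters=128, stride=2, conv_shortcut=True)
print(down(block(x)).shape)            # torch.Size([1, 512, 28, 28])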

2. Stacking Residual Blocks

import torch.nn as nn

class Stack2(nn.Module):
    def __init__(self, block, in_channels, filters, blocks, stride1=2):
        super(Stack2, self).__init__()
        self.layers = nn.ModuleList()
        # First block uses a conv shortcut to expand the channels to 4 * filters
        self.layers.append(block(in_channels, filters, conv_shortcut=True))

        # Middle blocks keep identity shortcuts
        for i in range(2, blocks):
            self.layers.append(block(4 * filters, filters))

        # Last block down-samples with stride1, as in the Keras ResNetV2 stack
        self.layers.append(block(4 * filters, filters, stride=stride1))

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
        return x
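
Likewise, a stack of three blocks (my own check, not in the original) expands the channels in its first block and halves the spatial size in its last one:

stack = Stack2(Block2, in_channels=64, filters=64, blocks=3, stride1=2)
print(stack(torch.randn(1, 64, 56, 56)).shape)   # torch.Size([1, 256, 28, 28])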

3. ResNet50V2 architecture

[Figure: ResNet50V2 overall architecture diagram]

The code is as follows:

import torch
import torch.nn as nn
import torch.nn.functional as F

class ResNet50V2(nn.Module):
    def __init__(self, num_classes=1000, include_top=True, preact=False, pooling='avg'):
        super(ResNet50V2, self).__init__()
        self.include_top = include_top
        self.preact = preact

        # conv1: pad explicitly, so the convolution itself uses padding=0
        self.conv1_pad = nn.ZeroPad2d((3, 3, 3, 3))
        self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=0, bias=False)
        self.bn1 = nn.BatchNorm2d(64)
        self.relu = nn.ReLU(inplace=True)

        # conv2_x: pad explicitly again, then max-pool with padding=0
        self.pool1_pad = nn.ZeroPad2d((1, 1, 1, 1))
        self.pool1 = nn.MaxPool2d(kernel_size=3, stride=2, padding=0)

        # Residual Blocks (stack layers)
        self.layer1 = self._make_stack_layer(64, 64, 3, stride=1, name='conv2')
        self.layer2 = self._make_stack_layer(64*4, 128, 4, stride=2, name='conv3')
        self.layer3 = self._make_stack_layer(128*4, 256, 6, stride=2, name='conv4')
        self.layer4 = self._make_stack_layer(256*4, 512, 3, stride=2, name='conv5')

        # BatchNorm and relu for post-processing
        self.post_bn = nn.BatchNorm2d(512 * 4)
        self.post_relu = nn.ReLU(inplace=True)

        # Pooling and Fully Connected Layer
        if include_top:
            self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
            self.fc = nn.Linear(512 * 4, num_classes)
        else:
            if pooling == 'avg':
                self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
            elif pooling == 'max':
                self.avgpool = nn.AdaptiveMaxPool2d((1, 1))

    def _make_stack_layer(self, in_planes, planes, blocks, stride=1, name=None):
        layers = []
        # First block with shortcut
        layers.append(Bottleneck(in_planes, planes, stride, conv_shortcut=True))

        # Remaining blocks
        for _ in range(1, blocks):
            layers.append(Bottleneck(planes * 4, planes))
        return nn.Sequential(*layers)

    def forward(self, x):
        # Initial layers (conv1)
        x = self.conv1_pad(x)
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)

        # MaxPool layer (conv2_x)
        x = self.pool1_pad(x)
        x = self.pool1(x)

        # Residual blocks (stack layers)
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)

        # Optional post-bn and relu if preact is True
        if self.preact:
            x = self.post_bn(x)
            x = self.post_relu(x)

        # Pooling layer
        x = self.avgpool(x)
        x = torch.flatten(x, 1)

        if self.include_top:
            x = self.fc(x)

        return x

# Bottleneck Block used in ResNet
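# (Note: as written, this block uses the classic post-activation ordering,
#  i.e. BN/ReLU after each conv and a ReLU after the addition.)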
class Bottleneck(nn.Module):
    expansion = 4

    def __init__(self, in_planes, planes, stride=1, conv_shortcut=False):
        super(Bottleneck, self).__init__()
        self.conv_shortcut = conv_shortcut

        self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=1, bias=False)
        self.bn1 = nn.BatchNorm2d(planes)

        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(planes)

        self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False)
        self.bn3 = nn.BatchNorm2d(planes * 4)

        if self.conv_shortcut:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_planes, planes * 4, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(planes * 4)
            )
        else:
            self.shortcut = nn.Identity()

    def forward(self, x):
        shortcut = self.shortcut(x)

        out = self.conv1(x)
        out = self.bn1(out)
        out = F.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)
        out = F.relu(out)

        out = self.conv3(out)
        out = self.bn3(out)

        out += shortcut
        out = F.relu(out)

        return out

# Instantiate the model
def ResNet50V2_instance(include_top=True, num_classes=1000, preact=False, pooling='avg'):
    return ResNet50V2(num_classes=num_classes, include_top=include_top, preact=preact, pooling=pooling)

model = ResNet50V2_instance()
print(model)
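
A quick sanity check before training (my own addition, not in the original post): run one dummy batch through the network and count the parameters. The classifier head defaults to 1000 outputs; for the bird dataset you would normally pass num_classes=len(classeNames) instead.

# Dummy forward pass and rough parameter count
with torch.no_grad():
    out = model(torch.randn(1, 3, 224, 224))
print(out.shape)                                                    # torch.Size([1, 1000])
print(f"{sum(p.numel() for p in model.parameters()) / 1e6:.1f}M parameters")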

4. Training and testing functions

# Training loop
def train(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)  # size of the training set
    num_batches = len(dataloader)   # number of batches (size / batch_size, rounded up)

    train_loss, train_acc = 0, 0  # accumulated loss and number of correct predictions

    for X, y in dataloader:  # fetch a batch of images and their labels
        X, y = X.to(device), y.to(device)

        # Compute the prediction error
        pred = model(X)          # network output
        loss = loss_fn(pred, y)  # loss between the prediction and the ground-truth labels

        # Backpropagation
        optimizer.zero_grad()  # reset the gradients
        loss.backward()        # backward pass
        optimizer.step()       # update the parameters

        # Accumulate accuracy and loss
        train_acc  += (pred.argmax(1) == y).type(torch.float).sum().item()
        train_loss += loss.item()

    train_acc  /= size
    train_loss /= num_batches

    return train_acc, train_loss


def test(dataloader, model, loss_fn):
    size        = len(dataloader.dataset)  # size of the test set
    num_batches = len(dataloader)          # number of batches (size / batch_size, rounded up)
    test_loss, test_acc = 0, 0

    # No gradient tracking during evaluation, which saves memory and compute
    with torch.no_grad():
        for imgs, target in dataloader:
            imgs, target = imgs.to(device), target.to(device)

            # Compute the loss
            target_pred = model(imgs)
            loss        = loss_fn(target_pred, target)

            test_loss += loss.item()
            test_acc  += (target_pred.argmax(1) == target).type(torch.float).sum().item()

    test_acc  /= size
    test_loss /= num_batches

    return test_acc, test_loss

5. Model training

import copy

model      = model.to(device)      # move the model to the same device as the data
optimizer  = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn    = nn.CrossEntropyLoss() # loss function

epochs     = 10

train_loss = []
train_acc  = []
test_loss  = []
test_acc   = []

for epoch in range(epochs):

    model.train()
    epoch_train_acc, epoch_train_loss = train(train_dl, model, loss_fn, optimizer)

    model.eval()
    epoch_test_acc, epoch_test_loss = test(test_dl, model, loss_fn)

    train_acc.append(epoch_train_acc)
    train_loss.append(epoch_train_loss)
    test_acc.append(epoch_test_acc)
    test_loss.append(epoch_test_loss)

    # Read the current learning rate
    lr = optimizer.state_dict()['param_groups'][0]['lr']

    template = ('Epoch:{:2d}, Train_acc:{:.1f}%, Train_loss:{:.3f}, Test_acc:{:.1f}%, Test_loss:{:.3f}, Lr:{:.2E}')
    print(template.format(epoch+1, epoch_train_acc*100, epoch_train_loss,
                          epoch_test_acc*100, epoch_test_loss, lr))

Epoch: 1, Train_acc:51.1%, Train_loss:1.920, Test_acc:61.9%, Test_loss:0.954, Lr:1.00E-04
Epoch: 2, Train_acc:69.5%, Train_loss:0.829, Test_acc:73.5%, Test_loss:1.099, Lr:1.00E-04
Epoch: 3, Train_acc:75.9%, Train_loss:0.638, Test_acc:62.8%, Test_loss:1.229, Lr:1.00E-04
Epoch: 4, Train_acc:81.2%, Train_loss:0.476, Test_acc:77.9%, Test_loss:0.494, Lr:1.00E-04
Epoch: 5, Train_acc:89.2%, Train_loss:0.363, Test_acc:78.8%, Test_loss:0.605, Lr:1.00E-04
Epoch: 6, Train_acc:87.4%, Train_loss:0.373, Test_acc:84.1%, Test_loss:0.495, Lr:1.00E-04
Epoch: 7, Train_acc:90.0%, Train_loss:0.318, Test_acc:78.8%, Test_loss:0.885, Lr:1.00E-04
Epoch: 8, Train_acc:92.7%, Train_loss:0.215, Test_acc:84.1%, Test_loss:0.475, Lr:1.00E-04
Epoch: 9, Train_acc:91.4%, Train_loss:0.248, Test_acc:87.6%, Test_loss:0.643, Lr:1.00E-04
Epoch:10, Train_acc:89.6%, Train_loss:0.282, Test_acc:78.8%, Test_loss:0.553, Lr:1.00E-04
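
The copy module imported above is never used in the loop as written; a common follow-up (sketched here as my own suggestion, with a made-up checkpoint filename) is to snapshot the weights whenever the test accuracy improves and save the best checkpoint at the end:

best_acc = 0.0                                     # initialise before the epoch loop

# Inside the epoch loop, right after epoch_test_acc is computed:
if epoch_test_acc > best_acc:
    best_acc = epoch_test_acc
    best_model_wts = copy.deepcopy(model.state_dict())

# After the loop: restore and persist the best checkpoint
model.load_state_dict(best_model_wts)
torch.save(best_model_wts, 'best_resnet50v2.pth')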

6. Visualizing the results

# coding=utf-8
import matplotlib.pyplot as plt
# Suppress warnings
import warnings
warnings.filterwarnings("ignore")               # ignore warning messages
plt.rcParams['figure.dpi']         = 100        # figure resolution

epochs_range = range(epochs)

plt.figure(figsize=(12, 3))
plt.subplot(1, 2, 1)

plt.plot(epochs_range, train_acc, label='Training Accuracy')
plt.plot(epochs_range, test_acc, label='Test Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, train_loss, label='Training Loss')
plt.plot(epochs_range, test_loss, label='Test Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()

[Figure: training and test accuracy/loss curves over the 10 epochs]

III. Summary

This week I studied the structural differences between ResNetV2 and the original ResNet (pre-activation ordering and identity shortcuts) and reproduced ResNet50V2 in PyTorch on the bird-classification dataset.

