3. How to Add a New Evaluation Algorithm

3.1. Where Attack Algorithms Are Stored

├── EvalBox
│   ├── Attack
│   │   ├── __init__.py
│   │   ├── attack.py
│   │   ├── fgsm.py
│   │   ├── pgd.py
│   │   ├── ....py
├── Models
├── utils
├── test
├── Datasets

3.2. Extension Example: The FGSM Algorithm

The FGSM algorithm is located at:

~/AI-Testing/audio/EvalBox/Attack/fgsm.py

Source code of the FGSM algorithm:

import torch
import torch.nn as nn
from audio.EvalBox.Attack.attack import Attacker
from audio.EvalBox.Attack.utils import target_sentence_to_label


class FGSMAttacker(Attacker):
    def __init__(self, model, device, **kwargs):
        super(FGSMAttacker, self).__init__(model, device)
        self._parse_params(**kwargs)
        # CTC loss between the model output and the target transcription.
        self.criterion = nn.CTCLoss()

    def _parse_params(self, **kwargs):
        # Perturbation budget for the single FGSM step.
        self.eps = kwargs.get("eps", 0.025)

    def generate(self, sounds, targets):
        # Encode the target sentence as a CTC label sequence.
        targets = target_sentence_to_label(targets)
        targets = targets.view(1, -1).to(self.device).detach()
        target_lengths = torch.IntTensor([targets.shape[1]]).view(1, -1)
        # Attack a gradient-tracking copy of the input waveform.
        advs = sounds.clone().detach().to(self.device).requires_grad_(True)
        self.model.zero_grad()
        # cuDNN RNNs only support backward in training mode, so disable
        # cuDNN while computing gradients through the recognizer.
        with torch.backends.cudnn.flags(enabled=False):
            out, output_sizes = self.model(advs)
            # CTCLoss expects log-probabilities with shape (T, N, C).
            out = out.transpose(0, 1).log()
            loss = self.criterion(out, targets, output_sizes, target_lengths)
            loss.backward()
            # Zero out NaN gradients to keep the update well defined.
            data_grad = advs.grad.data.nan_to_num(nan=0)
            # Targeted FGSM step: descend the loss toward the target sentence.
            advs = advs - self.eps * data_grad.sign()
            advs = advs.detach().requires_grad_(True)
        # Clamp back into the valid waveform range [-1, 1].
        advs = torch.clamp(advs, min=-1, max=1)
        return advs
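
For reference, a minimal usage sketch follows. The model loader name, the input shape, and the target sentence here are hypothetical placeholders; any real model is assumed to return (output, output_sizes) from its forward pass, as generate above expects.

import torch
from audio.EvalBox.Attack.fgsm import FGSMAttacker

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Hypothetical loader: substitute your own ASR model wrapper here.
model = load_pretrained_asr_model().to(device).eval()

attacker = FGSMAttacker(model, device, eps=0.025)

# Dummy one-second, 16 kHz waveform batch scaled into [-1, 1].
sounds = torch.rand(1, 16000) * 2 - 1
adv_sounds = attacker.generate(sounds, "OPEN THE DOOR")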

3.3. Extension Notes

  1. Implement your own attack algorithm as a class that inherits from the base Attacker class (a skeleton is sketched after this list).

  2. Place the file containing the new attack algorithm, e.g. new_attack_method.py, in the following directory:

~/AI-Testing/audio/EvalBox/Attack/

  3. In the __init__.py file under the directory from step 2, add a reference to your attack class:

from .attack import Attacker
from .fgsm import FGSMAttacker
...
from .new_attack_method import NEW_ATTACK_METHOD
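
Putting the three steps together, a skeleton for new_attack_method.py might look like the following. The class name NewAttackMethodAttacker and the alpha hyperparameter are illustrative assumptions, not names defined by the framework.

import torch
from audio.EvalBox.Attack.attack import Attacker


class NewAttackMethodAttacker(Attacker):
    # Skeleton of a custom attack; names here are illustrative assumptions.
    def __init__(self, model, device, **kwargs):
        super(NewAttackMethodAttacker, self).__init__(model, device)
        self._parse_params(**kwargs)

    def _parse_params(self, **kwargs):
        # Read hyperparameters from kwargs, mirroring FGSMAttacker above.
        self.alpha = kwargs.get("alpha", 0.01)  # assumed hyperparameter

    def generate(self, sounds, targets):
        # Build and return the adversarial examples here.
        advs = sounds.clone().detach().to(self.device)
        # ... attack logic goes here ...
        return torch.clamp(advs, min=-1, max=1)

With this file in place, the final import line in __init__.py above should reference the class you actually defined.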