【Transformer】TNT: Transformer iN Transformer
Table of Contents
- 1. Background
- 2. Motivation
- 3. Method
  - 3.1 Transformer in Transformer
  - 3.2 Network Architecture
- 4. Results
- 5. Code
Paper: https://arxiv.org/pdf/2103.00112.pdf
Code: https://github.com/huawei-noah/CV-Backbones/tree/master/tnt_pytorch
1. Background
The Transformer is a network architecture built mainly on the attention mechanism and is able to extract features from its input. To apply it in computer vision, the input image is first divided evenly into patches, whose features and mutual relations are then extracted. However, natural images contain rich detail such as texture and color, and the patch granularity used by current methods is still too coarse to mine features at different scales and locations.
2. Motivation
Computer vision takes raw images as input, so there is a large semantic gap between the input pixels and the desired output. ViT therefore splits an image into patches that form a sequence, and uses attention to model the relations between the different patches in order to build the image representation.
Although ViT and its variants extract image features well enough for recognition tasks, the images in a dataset are highly diverse. Splitting an image into patches captures the relations and similarities between patches, but the smaller sub-patches inside different patches also show high similarity to each other.
Inspired by this observation, the authors propose a finer-grained way of dividing the image to generate the visual sequence and improve accuracy.
3. Method
In this paper the authors argue that the attention inside each patch is also very important to the network's performance, and they propose a new architecture, TNT:
- Treat each 16x16 patch as a "visual sentence".
- Split each patch further into finer 4x4 sub-patches, treated as "visual words".
- Within each sentence, compute the attention between every word and the other words.
- Aggregate the sentence-level and word-level features to strengthen the representation.
- TNT reaches 81.5% top-1 accuracy on ImageNet, about 1.7% higher than the previous SOTA.
3.1 Transformer in Transformer
1. The input 2D image is split into $n$ patches.
2. Each patch is split into $m$ sub-patches; $x^{i,j}$ denotes the $j$-th visual word of the $i$-th visual sentence.
3. Visual sentences and visual words are processed separately.
The inner transformer block models the relations between the words of a sentence, while the outer transformer block captures the relations between sentences.
Stacking the TNT block $L$ times yields the transformer-in-transformer network. As in ViT, a classification token serves as the representation of the whole image. A minimal sketch of the sentence/word splitting is shown below.
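The following toy snippet (not the authors' code; shapes follow the paper's defaults of 16x16 patches and 4x4 sub-patches) shows how a 224x224 image turns into 196 visual sentences with 16 visual words each:

```python
import torch
import torch.nn.functional as F

img = torch.randn(1, 3, 224, 224)                      # B, C, H, W

# Visual sentences: 16x16 patches -> 14*14 = 196 tokens of dim 3*16*16 = 768
sentences = F.unfold(img, kernel_size=16, stride=16)   # [1, 768, 196]
sentences = sentences.transpose(1, 2)                  # [1, 196, 768]

# Visual words: split every patch again into 4x4 sub-patches
# -> 16 words of dim 3*4*4 = 48 per sentence
patches = sentences.reshape(1 * 196, 3, 16, 16)        # [196, 3, 16, 16]
words = F.unfold(patches, kernel_size=4, stride=4)     # [196, 48, 16]
words = words.transpose(1, 2)                          # [196, 16, 48]

print(sentences.shape, words.shape)
```

The official `PatchEmbed` in the code at the end of this post produces the word tokens with a strided 7x7 convolution (`stride=4`) instead of a second unfold, but the resulting layout, 196 sentences of 16 words with 48 dimensions each, is the same.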
① Word embedding
Each visual word is embedded with a linear projection, giving the word sequence $Y$ below.
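The equation rendered in the original post can be reconstructed from the paper's notation ($\mathrm{Vec}$ flattens a sub-patch, $\mathrm{FC}$ is a fully-connected layer):

$$Y^i = [y^{i,1}, y^{i,2}, \dots, y^{i,m}], \qquad y^{i,j} = \mathrm{FC}(\mathrm{Vec}(x^{i,j})), \quad y^{i,j} \in \mathbb{R}^{c}$$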
A transformer is then used to model the relations between the words; $L$ is the total number of blocks, and the input of the first block is the $Y^i$ defined above, i.e. $Y_0^i = Y^i$.
After the transformer, all word embeddings of the image can be written as $Y_l = [Y_l^1, Y_l^2, \dots, Y_l^n]$. This can be seen as an inner transformer block, denoted $T_{in}$, which builds pairwise relations among all the words inside each sentence.
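The inner block itself is a standard pre-norm transformer layer; reconstructed from the paper (this is the `inner_attn` / `inner_mlp` path in the `Block` class below):

$$Y_l'^{\,i} = Y_{l-1}^i + \mathrm{MSA}(\mathrm{LN}(Y_{l-1}^i))$$
$$Y_l^i = Y_l'^{\,i} + \mathrm{MLP}(\mathrm{LN}(Y_l'^{\,i})), \qquad l = 1, 2, \dots, L$$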
For example, within a patch cut from a face image, the word corresponding to one eye is more strongly related to the word of the other eye, and less related to a forehead word.
② Sentence embedding
The authors keep a set of sentence embedding memories to store the sentence-level sequence representation, where $Z_{class}$ is the class token used for classification:
At every layer, the sequence of word embeddings is linearly transformed and added to its sentence embedding, with $W$ and $b$ the weight and bias of the transformation.
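The two equations missing above can be reconstructed from the paper ($\mathrm{Vec}$ flattens all word embeddings of a sentence into one vector):

$$Z_0 = [Z_{class}, Z_0^1, Z_0^2, \dots, Z_0^n] \in \mathbb{R}^{(n+1)\times d}$$
$$Z_{l-1}^i \leftarrow Z_{l-1}^i + \mathrm{Vec}(Y_{l-1}^i)\, W_{l-1} + b_{l-1}$$

In the code below this corresponds to the `proj_norm2(proj(proj_norm1(...)))` path inside `Block.forward`, which adds the projected inner tokens onto the outer tokens.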
A standard transformer is then used for feature extraction, i.e. an outer transformer block $T_{out}$ models the relations between the different sentence embeddings.
In summary, both the input and the output of a TNT block consist of word embeddings and sentence embeddings, as shown in Fig. 1(b), so the whole TNT block can be expressed as follows.
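A reconstruction of the outer block and of the overall TNT-block expression from the paper:

$$Z_l' = Z_{l-1} + \mathrm{MSA}(\mathrm{LN}(Z_{l-1})), \qquad Z_l = Z_l' + \mathrm{MLP}(\mathrm{LN}(Z_l'))$$
$$Y_l,\ Z_l = \mathrm{TNT}(Y_{l-1},\ Z_{l-1})$$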
③ Position encoding
Position encodings are added to both the words and the sentences to preserve spatial information; learnable 1D position encodings are used here.
For sentences, every sentence is assigned a position encoding, where $E_{sentence} \in \mathbb{R}^{(n+1)\times d}$ is the sentence position encoding.
For the words inside a sentence, a position encoding is also added to each word; $E_{word}$ is the word position encoding, and it is shared across all sentences.
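Written out (a reconstruction; these correspond to `outer_pos` and `inner_pos` in the code at the end of this post):

$$Z_0 \leftarrow Z_0 + E_{sentence}, \qquad E_{sentence} \in \mathbb{R}^{(n+1)\times d}$$
$$Y_0^i \leftarrow Y_0^i + E_{word}, \qquad E_{word} \in \mathbb{R}^{m\times c}, \quad i = 1, 2, \dots, n$$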
3.2 Network Architecture
- patch size: 16x16
- sub-patch size: 4x4, i.e. m = 16 words per patch
- two configurations: TNT-S (Small, 23.8M parameters) and TNT-B (Base, 65.6M parameters)
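Assuming the tnt.py listed in Section 5 is importable, the two configurations can be built through its registered factory functions; the per-model hyper-parameters (TNT-S: outer_dim=384, inner_dim=24, 6 outer / 4 inner heads; TNT-B: outer_dim=640, inner_dim=40, 10 outer / 4 inner heads; both depth 12) can be read directly from those functions.

```python
from tnt import tnt_s_patch16_224, tnt_b_patch16_224

tnt_s = tnt_s_patch16_224()   # ~23.8M parameters
tnt_b = tnt_b_patch16_224()   # ~65.6M parameters
for name, m in [('TNT-S', tnt_s), ('TNT-B', tnt_b)]:
    print(name, sum(p.numel() for p in m.parameters()) / 1e6, 'M params')
```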
4. Results
Ablation study:
Visualization:
1. Feature map visualization
① The feature maps learned by DeiT and TNT are visualized as follows: Fig. 3(a) shows the feature maps of the 1st, 6th and 12th blocks, and TNT preserves position information noticeably better.
Fig. 3(b) uses t-SNE to visualize all 384 feature maps of the 12th block. The TNT feature maps are more diverse and carry richer information, which can be attributed to the inner transformer's modeling of local details.
② Beyond the patch level, the pixel-level embeddings are also visualized in Fig. 4: for each patch, the word embeddings are reshaped according to their spatial positions and then averaged over the channel dimension, giving averaged feature maps of size 14x14. Local details are well preserved in the shallow layers, while the deeper features become increasingly abstract.
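A rough sketch of that reshape-and-average step (not the paper's script; shapes assume the default `TNT()` configuration of 196 patches, 16 words and 48 channels for a single image):

```python
import torch

inner_tokens = torch.randn(196, 16, 48)            # (patches, words, channels)
per_word = inner_tokens.mean(dim=-1)               # average over the channel dim -> [196, 16]
word_grid = per_word.reshape(14, 14, 4, 4)         # 14x14 patch grid, 4x4 words per patch
pixel_map = word_grid.permute(0, 2, 1, 3).reshape(56, 56)  # mosaic of all word positions
patch_map = per_word.mean(dim=1).reshape(14, 14)   # one value per patch -> 14x14 map
print(pixel_map.shape, patch_map.shape)
```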
2. Attention map visualization
A TNT block contains two different self-attentions, the inner self-attention and the outer self-attention. Fig. 5 shows the attention maps of different queries in the inner transformer.
① Visual word visualization
For a given visual word, the words that look more similar to it receive higher attention values, which means their features interact more closely with the query; ViT and DeiT do not show this property.
② Visual sentence visualization
Fig. 6 shows the attention maps from one patch to all other patches. As the layers get deeper, more patches respond, because information across patches becomes better related in the deeper layers.
At block 12, TNT attends to the useful patches, while DeiT still attends to patches unrelated to the panda.
3. Attention between the class token and the patches
Fig. 7 visualizes the relations between the class token and all image patches; the output feature attends more strongly to the patches related to the target object.
5. Code
```python
import torch
from tnt import TNT

tnt = TNT()
tnt.eval()
inputs = torch.randn(1, 3, 224, 224)
logits = tnt(inputs)
```

tnt.py is as follows:

```python
import torch
import torch.nn as nn
from functools import partial
import math

from timm.data import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD
from timm.models.helpers import load_pretrained
from timm.models.layers import DropPath, to_2tuple, trunc_normal_
from timm.models.resnet import resnet26d, resnet50d
from timm.models.registry import register_model


def _cfg(url='', **kwargs):
    return {
        'url': url,
        'num_classes': 1000, 'input_size': (3, 224, 224), 'pool_size': None,
        'crop_pct': .9, 'interpolation': 'bicubic',
        'mean': IMAGENET_DEFAULT_MEAN, 'std': IMAGENET_DEFAULT_STD,
        'first_conv': 'patch_embed.proj', 'classifier': 'head',
        **kwargs
    }


default_cfgs = {
    'tnt_s_patch16_224': _cfg(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5)),
    'tnt_b_patch16_224': _cfg(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5)),
}


def make_divisible(v, divisor=8, min_value=None):
    min_value = min_value or divisor
    new_v = max(min_value, int(v + divisor / 2) // divisor * divisor)
    # Make sure that round down does not go down by more than 10%.
    if new_v < 0.9 * v:
        new_v += divisor
    return new_v


class Mlp(nn.Module):
    def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
        super().__init__()
        out_features = out_features or in_features
        hidden_features = hidden_features or in_features
        self.fc1 = nn.Linear(in_features, hidden_features)
        self.act = act_layer()
        self.fc2 = nn.Linear(hidden_features, out_features)
        self.drop = nn.Dropout(drop)

    def forward(self, x):
        x = self.fc1(x)
        x = self.act(x)
        x = self.drop(x)
        x = self.fc2(x)
        x = self.drop(x)
        return x


class SE(nn.Module):
    def __init__(self, dim, hidden_ratio=None):
        super().__init__()
        hidden_ratio = hidden_ratio or 1
        self.dim = dim
        hidden_dim = int(dim * hidden_ratio)
        self.fc = nn.Sequential(
            nn.LayerNorm(dim),
            nn.Linear(dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, dim),
            nn.Tanh()
        )

    def forward(self, x):
        a = x.mean(dim=1, keepdim=True)  # B, 1, C
        a = self.fc(a)
        x = a * x
        return x


class Attention(nn.Module):
    def __init__(self, dim, hidden_dim, num_heads=8, qkv_bias=False, qk_scale=None, attn_drop=0., proj_drop=0.):
        super().__init__()
        self.hidden_dim = hidden_dim
        self.num_heads = num_heads
        head_dim = hidden_dim // num_heads
        self.head_dim = head_dim
        # NOTE scale factor was wrong in my original version, can set manually to be compat with prev weights
        self.scale = qk_scale or head_dim ** -0.5

        self.qk = nn.Linear(dim, hidden_dim * 2, bias=qkv_bias)
        self.v = nn.Linear(dim, dim, bias=qkv_bias)
        self.attn_drop = nn.Dropout(attn_drop, inplace=True)
        self.proj = nn.Linear(dim, dim)
        self.proj_drop = nn.Dropout(proj_drop, inplace=True)

    def forward(self, x):
        B, N, C = x.shape
        qk = self.qk(x).reshape(B, N, 2, self.num_heads, self.head_dim).permute(2, 0, 3, 1, 4)
        q, k = qk[0], qk[1]  # make torchscript happy (cannot use tensor as tuple)
        v = self.v(x).reshape(B, N, self.num_heads, -1).permute(0, 2, 1, 3)

        attn = (q @ k.transpose(-2, -1)) * self.scale
        attn = attn.softmax(dim=-1)
        attn = self.attn_drop(attn)

        x = (attn @ v).transpose(1, 2).reshape(B, N, -1)
        x = self.proj(x)
        x = self.proj_drop(x)
        return x


class Block(nn.Module):
    """ TNT Block
    """
    def __init__(self, outer_dim, inner_dim, outer_num_heads, inner_num_heads, num_words, mlp_ratio=4.,
                 qkv_bias=False, qk_scale=None, drop=0., attn_drop=0., drop_path=0., act_layer=nn.GELU,
                 norm_layer=nn.LayerNorm, se=0):
        super().__init__()
        self.has_inner = inner_dim > 0
        if self.has_inner:
            # Inner
            self.inner_norm1 = norm_layer(inner_dim)
            self.inner_attn = Attention(
                inner_dim, inner_dim, num_heads=inner_num_heads, qkv_bias=qkv_bias,
                qk_scale=qk_scale, attn_drop=attn_drop, proj_drop=drop)
            self.inner_norm2 = norm_layer(inner_dim)
            self.inner_mlp = Mlp(in_features=inner_dim, hidden_features=int(inner_dim * mlp_ratio),
                                 out_features=inner_dim, act_layer=act_layer, drop=drop)

            self.proj_norm1 = norm_layer(num_words * inner_dim)
            self.proj = nn.Linear(num_words * inner_dim, outer_dim, bias=False)
            self.proj_norm2 = norm_layer(outer_dim)
        # Outer
        self.outer_norm1 = norm_layer(outer_dim)
        self.outer_attn = Attention(
            outer_dim, outer_dim, num_heads=outer_num_heads, qkv_bias=qkv_bias,
            qk_scale=qk_scale, attn_drop=attn_drop, proj_drop=drop)
        self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
        self.outer_norm2 = norm_layer(outer_dim)
        self.outer_mlp = Mlp(in_features=outer_dim, hidden_features=int(outer_dim * mlp_ratio),
                             out_features=outer_dim, act_layer=act_layer, drop=drop)
        # SE
        self.se = se
        self.se_layer = None
        if self.se > 0:
            self.se_layer = SE(outer_dim, 0.25)

    def forward(self, inner_tokens, outer_tokens):
        if self.has_inner:
            # Inner transformer: word-level self-attention within each sentence
            inner_tokens = inner_tokens + self.drop_path(self.inner_attn(self.inner_norm1(inner_tokens)))  # B*N, k*k, c
            inner_tokens = inner_tokens + self.drop_path(self.inner_mlp(self.inner_norm2(inner_tokens)))   # B*N, k*k, c
            # Project the word embeddings of each sentence and add them to the sentence embedding
            B, N, C = outer_tokens.size()
            outer_tokens[:, 1:] = outer_tokens[:, 1:] + self.proj_norm2(
                self.proj(self.proj_norm1(inner_tokens.reshape(B, N - 1, -1))))  # B, N, C
        if self.se > 0:
            outer_tokens = outer_tokens + self.drop_path(self.outer_attn(self.outer_norm1(outer_tokens)))
            tmp_ = self.outer_mlp(self.outer_norm2(outer_tokens))
            outer_tokens = outer_tokens + self.drop_path(tmp_ + self.se_layer(tmp_))
        else:
            # Outer transformer: sentence-level self-attention
            outer_tokens = outer_tokens + self.drop_path(self.outer_attn(self.outer_norm1(outer_tokens)))
            outer_tokens = outer_tokens + self.drop_path(self.outer_mlp(self.outer_norm2(outer_tokens)))
        return inner_tokens, outer_tokens


class PatchEmbed(nn.Module):
    """ Image to Visual Word Embedding
    """
    def __init__(self, img_size=224, patch_size=16, in_chans=3, outer_dim=768, inner_dim=24, inner_stride=4):
        super().__init__()
        img_size = to_2tuple(img_size)
        patch_size = to_2tuple(patch_size)
        num_patches = (img_size[1] // patch_size[1]) * (img_size[0] // patch_size[0])
        self.img_size = img_size
        self.patch_size = patch_size
        self.num_patches = num_patches
        self.inner_dim = inner_dim
        self.num_words = math.ceil(patch_size[0] / inner_stride) * math.ceil(patch_size[1] / inner_stride)

        self.unfold = nn.Unfold(kernel_size=patch_size, stride=patch_size)
        self.proj = nn.Conv2d(in_chans, inner_dim, kernel_size=7, padding=3, stride=inner_stride)

    def forward(self, x):
        B, C, H, W = x.shape
        # FIXME look at relaxing size constraints
        assert H == self.img_size[0] and W == self.img_size[1], \
            f"Input image size ({H}*{W}) doesn't match model ({self.img_size[0]}*{self.img_size[1]})."
        # self.unfold = Unfold(kernel_size=(16, 16), dilation=1, padding=0, stride=(16, 16))
        # rows = C * kernel_h * kernel_w; each column holds the pixels covered by one window
        x = self.unfold(x)  # B, Ck2, N  [1, 768, 196]
        x = x.transpose(1, 2).reshape(B * self.num_patches, C, *self.patch_size)  # B*N, C, 16, 16  [196, 3, 16, 16]
        x = self.proj(x)  # [196, 48, 4, 4]
        x = x.reshape(B * self.num_patches, self.inner_dim, -1).transpose(1, 2)  # [196, 16, 48]
        return x


class TNT(nn.Module):
    """ TNT (Transformer in Transformer) for computer vision
    """
    def __init__(self, img_size=224, patch_size=16, in_chans=3, num_classes=1000, outer_dim=768, inner_dim=48,
                 depth=12, outer_num_heads=12, inner_num_heads=4, mlp_ratio=4., qkv_bias=False, qk_scale=None,
                 drop_rate=0., attn_drop_rate=0., drop_path_rate=0., norm_layer=nn.LayerNorm, inner_stride=4, se=0):
        super().__init__()
        self.num_classes = num_classes  # 1000
        self.num_features = self.outer_dim = outer_dim  # num_features for consistency with other models

        self.patch_embed = PatchEmbed(
            img_size=img_size, patch_size=patch_size, in_chans=in_chans, outer_dim=outer_dim,
            inner_dim=inner_dim, inner_stride=inner_stride)
        self.num_patches = num_patches = self.patch_embed.num_patches  # 196
        num_words = self.patch_embed.num_words  # 16

        self.proj_norm1 = norm_layer(num_words * inner_dim)  # LayerNorm((768,), eps=1e-05, elementwise_affine=True)
        self.proj = nn.Linear(num_words * inner_dim, outer_dim)  # Linear(in_features=768, out_features=768, bias=True)
        self.proj_norm2 = norm_layer(outer_dim)  # LayerNorm((768,), eps=1e-05, elementwise_affine=True)

        self.cls_token = nn.Parameter(torch.zeros(1, 1, outer_dim))  # [1, 1, 768]
        self.outer_tokens = nn.Parameter(torch.zeros(1, num_patches, outer_dim), requires_grad=False)  # [1, 196, 768]
        self.outer_pos = nn.Parameter(torch.zeros(1, num_patches + 1, outer_dim))  # [1, 197, 768]
        self.inner_pos = nn.Parameter(torch.zeros(1, num_words, inner_dim))  # [1, 16, 48]
        self.pos_drop = nn.Dropout(p=drop_rate)

        dpr = [x.item() for x in torch.linspace(0, drop_path_rate, depth)]  # stochastic depth decay rule
        vanilla_idxs = []
        blocks = []
        for i in range(depth):
            if i in vanilla_idxs:
                blocks.append(Block(
                    outer_dim=outer_dim, inner_dim=-1, outer_num_heads=outer_num_heads, inner_num_heads=inner_num_heads,
                    num_words=num_words, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale, drop=drop_rate,
                    attn_drop=attn_drop_rate, drop_path=dpr[i], norm_layer=norm_layer, se=se))
            else:
                blocks.append(Block(
                    outer_dim=outer_dim, inner_dim=inner_dim, outer_num_heads=outer_num_heads, inner_num_heads=inner_num_heads,
                    num_words=num_words, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale, drop=drop_rate,
                    attn_drop=attn_drop_rate, drop_path=dpr[i], norm_layer=norm_layer, se=se))
        self.blocks = nn.ModuleList(blocks)
        self.norm = norm_layer(outer_dim)

        # NOTE as per official impl, we could have a pre-logits representation dense layer + tanh here
        # self.repr = nn.Linear(outer_dim, representation_size)
        # self.repr_act = nn.Tanh()

        # Classifier head
        self.head = nn.Linear(outer_dim, num_classes) if num_classes > 0 else nn.Identity()  # Linear(in_features=768, out_features=1000, bias=True)

        trunc_normal_(self.cls_token, std=.02)
        trunc_normal_(self.outer_pos, std=.02)
        trunc_normal_(self.inner_pos, std=.02)
        self.apply(self._init_weights)

    def _init_weights(self, m):
        if isinstance(m, nn.Linear):
            trunc_normal_(m.weight, std=.02)
            if isinstance(m, nn.Linear) and m.bias is not None:
                nn.init.constant_(m.bias, 0)
        elif isinstance(m, nn.LayerNorm):
            nn.init.constant_(m.bias, 0)
            nn.init.constant_(m.weight, 1.0)

    @torch.jit.ignore
    def no_weight_decay(self):
        return {'outer_pos', 'inner_pos', 'cls_token'}

    def get_classifier(self):
        return self.head

    def reset_classifier(self, num_classes, global_pool=''):
        self.num_classes = num_classes
        self.head = nn.Linear(self.outer_dim, num_classes) if num_classes > 0 else nn.Identity()

    def forward_features(self, x):
        # x.shape = [1, 3, 224, 224]
        B = x.shape[0]  # 1
        inner_tokens = self.patch_embed(x) + self.inner_pos  # self.patch_embed(x) = [196, 16, 48], self.inner_pos = [1, 16, 48]

        outer_tokens = self.proj_norm2(self.proj(self.proj_norm1(inner_tokens.reshape(B, self.num_patches, -1))))  # [1, 196, 768]
        outer_tokens = torch.cat((self.cls_token.expand(B, -1, -1), outer_tokens), dim=1)  # [1, 197, 768]

        outer_tokens = outer_tokens + self.outer_pos  # [1, 197, 768]
        outer_tokens = self.pos_drop(outer_tokens)  # [1, 197, 768]

        for blk in self.blocks:
            inner_tokens, outer_tokens = blk(inner_tokens, outer_tokens)  # inner_tokens.shape=[196, 16, 48], outer_tokens.shape=[1, 197, 768]

        outer_tokens = self.norm(outer_tokens)
        return outer_tokens[:, 0]  # [1, 768]

    def forward(self, x):
        x = self.forward_features(x)  # [1, 768]
        x = self.head(x)
        return x


def _conv_filter(state_dict, patch_size=16):
    """ convert patch embedding weight from manual patchify + linear proj to conv"""
    out_dict = {}
    for k, v in state_dict.items():
        if 'patch_embed.proj.weight' in k:
            v = v.reshape((v.shape[0], 3, patch_size, patch_size))
        out_dict[k] = v
    return out_dict


@register_model
def tnt_s_patch16_224(pretrained=False, **kwargs):
    patch_size = 16
    inner_stride = 4
    outer_dim = 384
    inner_dim = 24
    outer_num_heads = 6
    inner_num_heads = 4
    outer_dim = make_divisible(outer_dim, outer_num_heads)
    inner_dim = make_divisible(inner_dim, inner_num_heads)
    model = TNT(img_size=224, patch_size=patch_size, outer_dim=outer_dim, inner_dim=inner_dim, depth=12,
                outer_num_heads=outer_num_heads, inner_num_heads=inner_num_heads, qkv_bias=False,
                inner_stride=inner_stride, **kwargs)
    model.default_cfg = default_cfgs['tnt_s_patch16_224']
    if pretrained:
        load_pretrained(
            model, num_classes=model.num_classes, in_chans=kwargs.get('in_chans', 3), filter_fn=_conv_filter)
    return model


@register_model
def tnt_b_patch16_224(pretrained=False, **kwargs):
    patch_size = 16
    inner_stride = 4
    outer_dim = 640
    inner_dim = 40
    outer_num_heads = 10
    inner_num_heads = 4
    outer_dim = make_divisible(outer_dim, outer_num_heads)
    inner_dim = make_divisible(inner_dim, inner_num_heads)
    model = TNT(img_size=224, patch_size=patch_size, outer_dim=outer_dim, inner_dim=inner_dim, depth=12,
                outer_num_heads=outer_num_heads, inner_num_heads=inner_num_heads, qkv_bias=False,
                inner_stride=inner_stride, **kwargs)
    model.default_cfg = default_cfgs['tnt_b_patch16_224']
    if pretrained:
        load_pretrained(
            model, num_classes=model.num_classes, in_chans=kwargs.get('in_chans', 3), filter_fn=_conv_filter)
    return model
```

The printed structure of `self.blocks` (for the default `TNT()` above) is:
```
ModuleList(
  (0): Block(
    (inner_norm1): LayerNorm((48,), eps=1e-05, elementwise_affine=True)
    (inner_attn): Attention(
      (qk): Linear(in_features=48, out_features=96, bias=False)
      (v): Linear(in_features=48, out_features=48, bias=False)
      (attn_drop): Dropout(p=0.0, inplace=True)
      (proj): Linear(in_features=48, out_features=48, bias=True)
      (proj_drop): Dropout(p=0.0, inplace=True)
    )
    (inner_norm2): LayerNorm((48,), eps=1e-05, elementwise_affine=True)
    (inner_mlp): Mlp(
      (fc1): Linear(in_features=48, out_features=192, bias=True)
      (act): GELU()
      (fc2): Linear(in_features=192, out_features=48, bias=True)
      (drop): Dropout(p=0.0, inplace=False)
    )
    (proj_norm1): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
    (proj): Linear(in_features=768, out_features=768, bias=False)
    (proj_norm2): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
    (outer_norm1): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
    (outer_attn): Attention(
      (qk): Linear(in_features=768, out_features=1536, bias=False)
      (v): Linear(in_features=768, out_features=768, bias=False)
      (attn_drop): Dropout(p=0.0, inplace=True)
      (proj): Linear(in_features=768, out_features=768, bias=True)
      (proj_drop): Dropout(p=0.0, inplace=True)
    )
    (drop_path): Identity()
    (outer_norm2): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
    (outer_mlp): Mlp(
      (fc1): Linear(in_features=768, out_features=3072, bias=True)
      (act): GELU()
      (fc2): Linear(in_features=3072, out_features=768, bias=True)
      (drop): Dropout(p=0.0, inplace=False)
    )
  )
  (1)-(11): eleven further Block modules with exactly the same structure as (0)
)
```