
The Annotated Transformer

Apr 3, 2018

from IPython.display import Image
Image(filename='images/aiayn.png')

The Transformer from "Attention is All You Need" has been on a lot of people's minds over the last year. Besides producing major improvements in translation quality, it provides a new architecture for many other NLP tasks. The paper itself is very clearly written, but the conventional wisdom has been that it is quite difficult to implement correctly.

In this post I present an “annotated” version of the paper in the form of a line-by-line implementation. I have reordered and deleted some sections from the original paper and added comments throughout. This document itself is a working notebook, and should be a completely usable implementation. In total there are 400 lines of library code which can process 27,000 tokens per second on 4 GPUs.

To follow along you will first need to install PyTorch. The complete notebook is also available on github or on Google Colab with free GPUs.

Note this is merely a starting point for researchers and interested developers. The code here is based heavily on our OpenNMT packages. (If helpful feel free to cite.) For other full-service implementations of the model check out Tensor2Tensor (tensorflow) and Sockeye (mxnet).

  • Alexander Rush (@harvardnlp or srush@seas.harvard.edu), with help from Vincent Nguyen and Guillaume Klein

Prelims

# !pip install http://download.pytorch.org/whl/cu80/torch-0.3.0.post4-cp36-cp36m-linux_x86_64.whl numpy matplotlib spacy torchtext seaborn
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import math, copy, time
from torch.autograd import Variable
import matplotlib.pyplot as plt
import seaborn
seaborn.set_context(context="talk")
%matplotlib inline

Table of Contents

  • Prelims
  • Background
  • Model Architecture
    • Encoder and Decoder Stacks
      • Encoder
      • Decoder
      • Attention
      • Applications of Attention in our Model
    • Position-wise Feed-Forward Networks
    • Embeddings and Softmax
    • Positional Encoding
    • Full Model
  • Training
    • Batches and Masking
    • Training Loop
    • Training Data and Batching
    • Hardware and Schedule
    • Optimizer
    • Regularization
      • Label Smoothing
  • A First Example
    • Synthetic Data
    • Loss Computation
    • Greedy Decoding
  • A Real World Example
    • Data Loading
    • Iterators
    • Multi-GPU Training
    • Training the System
  • Additional Components: BPE, Search, Averaging
  • Results
    • Attention Visualization
  • Conclusion

My comments are blockquoted. The main text is all from the paper itself.

Background

The goal of reducing sequential computation also forms the foundation of the Extended Neural GPU, ByteNet and ConvS2S, all of which use convolutional neural networks as basic building block, computing hidden representations in parallel for all input and output positions. In these models, the number of operations required to relate signals from two arbitrary input or output positions grows in the distance between positions, linearly for ConvS2S and logarithmically for ByteNet. This makes it more difficult to learn dependencies between distant positions. In the Transformer this is reduced to a constant number of operations, albeit at the cost of reduced effective resolution due to averaging attention-weighted positions, an effect we counteract with Multi-Head Attention.

Self-attention, sometimes called intra-attention, is an attention mechanism relating different positions of a single sequence in order to compute a representation of the sequence. Self-attention has been used successfully in a variety of tasks including reading comprehension, abstractive summarization, textual entailment and learning task-independent sentence representations. End-to-end memory networks are based on a recurrent attention mechanism instead of sequence-aligned recurrence and have been shown to perform well on simple-language question answering and language modeling tasks.

To the best of our knowledge, however, the Transformer is the first transduction model relying entirely on self-attention to compute representations of its input and output without using sequence aligned RNNs or convolution.

Model Architecture

Most competitive neural sequence transduction models have an encoder-decoder structure (cite). Here, the encoder maps an input sequence of symbol representations (x_1, …, x_n) to a sequence of continuous representations z = (z_1, …, z_n). Given z, the decoder then generates an output sequence (y_1, …, y_m) of symbols one element at a time. At each step the model is auto-regressive (cite), consuming the previously generated symbols as additional input when generating the next.

class EncoderDecoder(nn.Module):
    """
    A standard Encoder-Decoder architecture. Base for this and many
    other models.
    """
    def __init__(self, encoder, decoder, src_embed, tgt_embed, generator):
        super(EncoderDecoder, self).__init__()
        self.encoder = encoder
        self.decoder = decoder
        self.src_embed = src_embed
        self.tgt_embed = tgt_embed
        self.generator = generator

    def forward(self, src, tgt, src_mask, tgt_mask):
        "Take in and process masked src and target sequences."
        return self.decode(self.encode(src, src_mask), src_mask,
                           tgt, tgt_mask)

    def encode(self, src, src_mask):
        return self.encoder(self.src_embed(src), src_mask)

    def decode(self, memory, src_mask, tgt, tgt_mask):
        return self.decoder(self.tgt_embed(tgt), memory, src_mask, tgt_mask)

class Generator(nn.Module):
    "Define standard linear + softmax generation step."
    def __init__(self, d_model, vocab):
        super(Generator, self).__init__()
        self.proj = nn.Linear(d_model, vocab)

    def forward(self, x):
        return F.log_softmax(self.proj(x), dim=-1)

The Transformer follows this overall architecture using stacked self-attention and point-wise, fully connected layers for both the encoder and decoder, shown in the left and right halves of Figure 1, respectively.

Image(filename='images/ModalNet-21.png')

Encoder and Decoder Stacks

Encoder

The encoder is composed of a stack of N = 6 identical layers.

def clones(module, N):
    "Produce N identical layers."
    return nn.ModuleList([copy.deepcopy(module) for _ in range(N)])

class Encoder(nn.Module):
    "Core encoder is a stack of N layers"
    def __init__(self, layer, N):
        super(Encoder, self).__init__()
        self.layers = clones(layer, N)
        self.norm = LayerNorm(layer.size)

    def forward(self, x, mask):
        "Pass the input (and mask) through each layer in turn."
        for layer in self.layers:
            x = layer(x, mask)
        return self.norm(x)

We employ a residual connection (cite) around each of the two sub-layers, followed by layer normalization (cite).

class LayerNorm(nn.Module):
    "Construct a layernorm module (See citation for details)."
    def __init__(self, features, eps=1e-6):
        super(LayerNorm, self).__init__()
        self.a_2 = nn.Parameter(torch.ones(features))
        self.b_2 = nn.Parameter(torch.zeros(features))
        self.eps = eps

    def forward(self, x):
        mean = x.mean(-1, keepdim=True)
        std = x.std(-1, keepdim=True)
        return self.a_2 * (x - mean) / (std + self.eps) + self.b_2

That is, the output of each sub-layer is LayerNorm(x + Sublayer(x)), where Sublayer(x) is the function implemented by the sub-layer itself. We apply dropout (cite) to the output of each sub-layer, before it is added to the sub-layer input and normalized.

To facilitate these residual connections, all sub-layers in the model, as well as the embedding layers, produce outputs of dimension d_model = 512.

class SublayerConnection(nn.Module):
    """
    A residual connection followed by a layer norm.
    Note for code simplicity the norm is first as opposed to last.
    """
    def __init__(self, size, dropout):
        super(SublayerConnection, self).__init__()
        self.norm = LayerNorm(size)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x, sublayer):
        "Apply residual connection to any sublayer with the same size."
        return x + self.dropout(sublayer(self.norm(x)))

Each layer has two sub-layers. The first is a multi-head self-attention mechanism, and the second is a simple, position-wise fully connected feed-forward network.

class EncoderLayer(nn.Module):
    "Encoder is made up of self-attn and feed forward (defined below)"
    def __init__(self, size, self_attn, feed_forward, dropout):
        super(EncoderLayer, self).__init__()
        self.self_attn = self_attn
        self.feed_forward = feed_forward
        self.sublayer = clones(SublayerConnection(size, dropout), 2)
        self.size = size

    def forward(self, x, mask):
        "Follow Figure 1 (left) for connections."
        x = self.sublayer[0](x, lambda x: self.self_attn(x, x, x, mask))
        return self.sublayer[1](x, self.feed_forward)

Decoder

The decoder is also composed of a stack of N = 6 identical layers.

class Decoder(nn.Module):
    "Generic N layer decoder with masking."
    def __init__(self, layer, N):
        super(Decoder, self).__init__()
        self.layers = clones(layer, N)
        self.norm = LayerNorm(layer.size)

    def forward(self, x, memory, src_mask, tgt_mask):
        for layer in self.layers:
            x = layer(x, memory, src_mask, tgt_mask)
        return self.norm(x)

In addition to the two sub-layers in each encoder layer, the decoder inserts a third sub-layer, which performs multi-head attention over the output of the encoder stack. Similar to the encoder, we employ residual connections around each of the sub-layers, followed by layer normalization.

class DecoderLayer(nn.Module):
    "Decoder is made of self-attn, src-attn, and feed forward (defined below)"
    def __init__(self, size, self_attn, src_attn, feed_forward, dropout):
        super(DecoderLayer, self).__init__()
        self.size = size
        self.self_attn = self_attn
        self.src_attn = src_attn
        self.feed_forward = feed_forward
        self.sublayer = clones(SublayerConnection(size, dropout), 3)

    def forward(self, x, memory, src_mask, tgt_mask):
        "Follow Figure 1 (right) for connections."
        m = memory
        x = self.sublayer[0](x, lambda x: self.self_attn(x, x, x, tgt_mask))
        x = self.sublayer[1](x, lambda x: self.src_attn(x, m, m, src_mask))
        return self.sublayer[2](x, self.feed_forward)

We also modify the self-attention sub-layer in the decoder stack to prevent positions from attending to subsequent positions. This masking, combined with the fact that the output embeddings are offset by one position, ensures that the predictions for position i can depend only on the known outputs at positions less than i.

def subsequent_mask(size):
    "Mask out subsequent positions."
    attn_shape = (1, size, size)
    subsequent_mask = np.triu(np.ones(attn_shape), k=1).astype('uint8')
    return torch.from_numpy(subsequent_mask) == 0

Below, the attention mask shows the positions each tgt word (row) is allowed to look at (column). Words are blocked from attending to future words during training.

plt.figure(figsize=(5,5))
plt.imshow(subsequent_mask(20)[0])
None

Attention

An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key.

We call our particular attention "Scaled Dot-Product Attention". The input consists of queries and keys of dimension d_k, and values of dimension d_v. We compute the dot products of the query with all keys, divide each by √d_k, and apply a softmax function to obtain the weights on the values.

Image(filename='images/ModalNet-19.png')

In practice, we compute the attention function on a set of queries simultaneously, packed together into a matrix Q. The keys and values are also packed together into matrices K and V. We compute the matrix of outputs as:

Attention(Q, K, V) = softmax(QK^T / √d_k) V

def attention(query, key, value, mask=None, dropout=None):
    "Compute 'Scaled Dot Product Attention'"
    d_k = query.size(-1)
    scores = torch.matmul(query, key.transpose(-2, -1)) \
             / math.sqrt(d_k)
    if mask is not None:
        scores = scores.masked_fill(mask == 0, -1e9)
    p_attn = F.softmax(scores, dim=-1)
    if dropout is not None:
        p_attn = dropout(p_attn)
    return torch.matmul(p_attn, value), p_attn

The two most commonly used attention functions are additive attention (cite), and dot-product (multiplicative) attention. Dot-product attention is identical to our algorithm, except for the scaling factor of 1/√d_k. Additive attention computes the compatibility function using a feed-forward network with a single hidden layer. While the two are similar in theoretical complexity, dot-product attention is much faster and more space-efficient in practice, since it can be implemented using highly optimized matrix multiplication code.

While for small values of d_k the two mechanisms perform similarly, additive attention outperforms dot product attention without scaling for larger values of d_k (cite). We suspect that for large values of d_k, the dot products grow large in magnitude, pushing the softmax function into regions where it has extremely small gradients. (To illustrate why the dot products get large, assume that the components of q and k are independent random variables with mean 0 and variance 1. Then their dot product, q·k = Σ_{i=1}^{d_k} q_i k_i, has mean 0 and variance d_k.) To counteract this effect, we scale the dot products by 1/√d_k.
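As a quick sanity check (not part of the original notebook), the minimal sketch below empirically confirms that the standard deviation of unscaled dot products grows roughly like √d_k, while the scaled scores stay near unit scale.

# Sketch (my addition): the std of q·k grows like sqrt(d_k); scaling by 1/sqrt(d_k) fixes it.
for d_k in [16, 64, 256]:
    q = torch.randn(10000, d_k)   # components ~ N(0, 1)
    k = torch.randn(10000, d_k)
    dots = (q * k).sum(-1)
    print(d_k, float(dots.std()), float((dots / d_k ** 0.5).std()))
# The unscaled stds come out near 4, 8, 16 (= sqrt(d_k)); the scaled stds are near 1.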

Image(filename='images/ModalNet-20.png')

Multi-head attention allows the model to jointly attend to information from different representation subspaces at different positions. With a single attention head, averaging inhibits this.

MultiHead(Q, K, V) = Concat(head_1, ..., head_h) W^O,  where head_i = Attention(QW_i^Q, KW_i^K, VW_i^V)

Where the projections are parameter matrices W_i^Q ∈ R^{d_model × d_k}, W_i^K ∈ R^{d_model × d_k}, W_i^V ∈ R^{d_model × d_v} and W^O ∈ R^{h·d_v × d_model}. In this work we employ h = 8 parallel attention layers, or heads. For each of these we use d_k = d_v = d_model / h = 64. Due to the reduced dimension of each head, the total computational cost is similar to that of single-head attention with full dimensionality.

class MultiHeadedAttention(nn.Module):
    def __init__(self, h, d_model, dropout=0.1):
        "Take in model size and number of heads."
        super(MultiHeadedAttention, self).__init__()
        assert d_model % h == 0
        # We assume d_v always equals d_k
        self.d_k = d_model // h
        self.h = h
        self.linears = clones(nn.Linear(d_model, d_model), 4)
        self.attn = None
        self.dropout = nn.Dropout(p=dropout)

    def forward(self, query, key, value, mask=None):
        "Implements Figure 2"
        if mask is not None:
            # Same mask applied to all h heads.
            mask = mask.unsqueeze(1)
        nbatches = query.size(0)

        # 1) Do all the linear projections in batch from d_model => h x d_k
        query, key, value = \
            [l(x).view(nbatches, -1, self.h, self.d_k).transpose(1, 2)
             for l, x in zip(self.linears, (query, key, value))]

        # 2) Apply attention on all the projected vectors in batch.
        x, self.attn = attention(query, key, value, mask=mask,
                                 dropout=self.dropout)

        # 3) "Concat" using a view and apply a final linear.
        x = x.transpose(1, 2).contiguous() \
             .view(nbatches, -1, self.h * self.d_k)
        return self.linears[-1](x)

Applications of Attention in our Model

The Transformer uses multi-head attention in three different ways: 1) In "encoder-decoder attention" layers, the queries come from the previous decoder layer, and the memory keys and values come from the output of the encoder. This allows every position in the decoder to attend over all positions in the input sequence. This mimics the typical encoder-decoder attention mechanisms in sequence-to-sequence models such as (cite).

2) The encoder contains self-attention layers. In a self-attention layer all of the keys, values and queries come from the same place, in this case, the output of the previous layer in the encoder. Each position in the encoder can attend to all positions in the previous layer of the encoder.

3) Similarly, self-attention layers in the decoder allow each position in the decoder to attend to all positions in the decoder up to and including that position. We need to prevent leftward information flow in the decoder to preserve the auto-regressive property. We implement this inside of scaled dot-product attention by masking out (setting to −∞) all values in the input of the softmax which correspond to illegal connections.
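As a small illustration (not from the original post), setting a score to a very large negative value before the softmax drives its attention weight to essentially zero, which is exactly what the masked_fill call in attention() above does:

# Sketch (my addition): the effect of masking on the softmax weights.
scores = Variable(torch.FloatTensor([[1.0, 2.0, 3.0]]))
mask = torch.ByteTensor([[1, 1, 0]])                      # the last position is "illegal"
masked = scores.masked_fill(Variable(mask) == 0, -1e9)    # same trick as in attention()
print(F.softmax(masked, dim=-1))                          # roughly [[0.27, 0.73, 0.00]]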

Position-wise Feed-Forward Networks

In addition to attention sub-layers, each of the layers in our encoder and decoder contains a fully connected feed-forward network, which is applied to each position separately and identically. This consists of two linear transformations with a ReLU activation in between.

FFN(x) = max(0, xW_1 + b_1) W_2 + b_2

While the linear transformations are the same across different positions, they use different parameters from layer to layer. Another way of describing this is as two convolutions with kernel size 1. The dimensionality of input and output is d_model = 512, and the inner-layer has dimensionality d_ff = 2048.
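To make the "two convolutions with kernel size 1" remark concrete, here is a minimal sketch (not part of the original notebook) of the same computation written with nn.Conv1d; it is mathematically equivalent to the two nn.Linear layers used below, just with a (batch, channels, length) layout.

# Sketch (my addition): the position-wise FFN as two kernel-size-1 convolutions.
# Equivalent to Linear(512, 2048) -> ReLU -> Linear(2048, 512) applied at every position.
conv_ffn = nn.Sequential(
    nn.Conv1d(512, 2048, kernel_size=1),
    nn.ReLU(),
    nn.Conv1d(2048, 512, kernel_size=1),
)
x = torch.randn(2, 512, 10)        # (batch, d_model, seq_len) layout expected by Conv1d
print(conv_ffn(x).size())          # torch.Size([2, 512, 10])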

class PositionwiseFeedForward(nn.Module):
    "Implements FFN equation."
    def __init__(self, d_model, d_ff, dropout=0.1):
        super(PositionwiseFeedForward, self).__init__()
        self.w_1 = nn.Linear(d_model, d_ff)
        self.w_2 = nn.Linear(d_ff, d_model)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x):
        return self.w_2(self.dropout(F.relu(self.w_1(x))))

Embeddings and Softmax

Similarly to other sequence transduction models, we use learned embeddings to convert the input tokens and output tokens to vectors of dimension d_model. We also use the usual learned linear transformation and softmax function to convert the decoder output to predicted next-token probabilities. In our model, we share the same weight matrix between the two embedding layers and the pre-softmax linear transformation, similar to (cite). In the embedding layers, we multiply those weights by √d_model.

class Embeddings(nn.Module):
    def __init__(self, d_model, vocab):
        super(Embeddings, self).__init__()
        self.lut = nn.Embedding(vocab, d_model)
        self.d_model = d_model

    def forward(self, x):
        return self.lut(x) * math.sqrt(self.d_model)

Positional Encoding

Since our model contains no recurrence and no convolution, in order for the model to make use of the order of the sequence, we must inject some information about the relative or absolute position of the tokens in the sequence. To this end, we add "positional encodings" to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension d_model as the embeddings, so that the two can be summed. There are many choices of positional encodings, learned and fixed (cite).

In this work, we use sine and cosine functions of different frequencies:

PE(pos, 2i) = sin(pos / 10000^{2i/d_model})

PE(pos, 2i+1) = cos(pos / 10000^{2i/d_model})

where pos is the position and i is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from 2π to 10000 · 2π. We chose this function because we hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset k, PE_{pos+k} can be represented as a linear function of PE_{pos}.
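A short sketch of why that linearity holds (my addition, not from the post): writing ω_i = 10000^{-2i/d_model}, the angle-addition identities show that the (sin, cos) pair at position pos + k is a rotation of the pair at position pos, i.e. a fixed linear map that depends only on the offset k:

% Sketch: for frequency \omega_i = 10000^{-2i/d_{model}} and fixed offset k,
\begin{pmatrix} \sin(\omega_i (pos+k)) \\ \cos(\omega_i (pos+k)) \end{pmatrix}
=
\begin{pmatrix} \cos(\omega_i k) & \sin(\omega_i k) \\ -\sin(\omega_i k) & \cos(\omega_i k) \end{pmatrix}
\begin{pmatrix} \sin(\omega_i \, pos) \\ \cos(\omega_i \, pos) \end{pmatrix}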

In addition, we apply dropout to the sums of the embeddings and the positional encodings in both the encoder and decoder stacks. For the base model, we use a rate of P_drop = 0.1.

class PositionalEncoding(nn.Module):
    "Implement the PE function."
    def __init__(self, d_model, dropout, max_len=5000):
        super(PositionalEncoding, self).__init__()
        self.dropout = nn.Dropout(p=dropout)

        # Compute the positional encodings once in log space.
        pe = torch.zeros(max_len, d_model)
        position = torch.arange(0, max_len).unsqueeze(1)
        div_term = torch.exp(torch.arange(0, d_model, 2) *
                             -(math.log(10000.0) / d_model))
        pe[:, 0::2] = torch.sin(position * div_term)
        pe[:, 1::2] = torch.cos(position * div_term)
        pe = pe.unsqueeze(0)
        self.register_buffer('pe', pe)

    def forward(self, x):
        x = x + Variable(self.pe[:, :x.size(1)],
                         requires_grad=False)
        return self.dropout(x)

Below, the positional encoding adds a sine wave based on position. The frequency and offset of the wave are different for each dimension.

plt.figure(figsize=(15, 5))
pe = PositionalEncoding(20, 0)
y = pe.forward(Variable(torch.zeros(1, 100, 20)))
plt.plot(np.arange(100), y[0, :, 4:8].data.numpy())
plt.legend(["dim %d"%p for p in [4,5,6,7]])
None

We also experimented with using learned positional embeddings (cite) instead, and found that the two versions produced nearly identical results. We chose the sinusoidal version because it may allow the model to extrapolate to sequence lengths longer than the ones encountered during training.

Full Model

Here we define a function that takes in hyperparameters and produces a full model.

def make_model(src_vocab, tgt_vocab, N=6,
               d_model=512, d_ff=2048, h=8, dropout=0.1):
    "Helper: Construct a model from hyperparameters."
    c = copy.deepcopy
    attn = MultiHeadedAttention(h, d_model)
    ff = PositionwiseFeedForward(d_model, d_ff, dropout)
    position = PositionalEncoding(d_model, dropout)
    model = EncoderDecoder(
        Encoder(EncoderLayer(d_model, c(attn), c(ff), dropout), N),
        Decoder(DecoderLayer(d_model, c(attn), c(attn), c(ff), dropout), N),
        nn.Sequential(Embeddings(d_model, src_vocab), c(position)),
        nn.Sequential(Embeddings(d_model, tgt_vocab), c(position)),
        Generator(d_model, tgt_vocab))

    # This was important from their code.
    # Initialize parameters with Glorot / fan_avg.
    for p in model.parameters():
        if p.dim() > 1:
            nn.init.xavier_uniform(p)
    return model

# Small example model.
tmp_model = make_model(10, 10, 2)
None

Training

This section describes the training regime for our models.

We stop for a quick interlude to introduce some of the tools needed to train a standard encoder decoder model. First we define a batch object that holds the src and target sentences for training, as well as constructing the masks.

Batches and Masking

class Batch:
    "Object for holding a batch of data with mask during training."
    def __init__(self, src, trg=None, pad=0):
        self.src = src
        self.src_mask = (src != pad).unsqueeze(-2)
        if trg is not None:
            self.trg = trg[:, :-1]
            self.trg_y = trg[:, 1:]
            self.trg_mask = \
                self.make_std_mask(self.trg, pad)
            self.ntokens = (self.trg_y != pad).data.sum()

    @staticmethod
    def make_std_mask(tgt, pad):
        "Create a mask to hide padding and future words."
        tgt_mask = (tgt != pad).unsqueeze(-2)
        tgt_mask = tgt_mask & Variable(
            subsequent_mask(tgt.size(-1)).type_as(tgt_mask.data))
        return tgt_mask
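As a small illustration (my addition, using the classes above with made-up toy data): for a padded target batch, trg_mask combines the padding mask with subsequent_mask, so each row can only attend to non-pad positions up to and including itself.

# Sketch (my addition): what the Batch masks look like for a toy padded example.
toy_src = torch.LongTensor([[1, 3, 4, 0, 0]])     # 0 is the pad index
toy_trg = torch.LongTensor([[1, 5, 6, 0, 0]])
b = Batch(Variable(toy_src), Variable(toy_trg), pad=0)
print(b.src_mask)    # 1 x 1 x 5: marks the non-pad source positions
print(b.trg_mask)    # 1 x 4 x 4: lower-triangular AND non-pad target positions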

Next we create a generic training and scoring function to keep track of loss. We pass in a generic loss compute function that also handles parameter updates.

Training Loop

def run_epoch(data_iter, model, loss_compute):
    "Standard Training and Logging Function"
    start = time.time()
    total_tokens = 0
    total_loss = 0
    tokens = 0
    for i, batch in enumerate(data_iter):
        out = model.forward(batch.src, batch.trg,
                            batch.src_mask, batch.trg_mask)
        loss = loss_compute(out, batch.trg_y, batch.ntokens)
        total_loss += loss
        total_tokens += batch.ntokens
        tokens += batch.ntokens
        if i % 50 == 1:
            elapsed = time.time() - start
            print("Epoch Step: %d Loss: %f Tokens per Sec: %f" %
                  (i, loss / batch.ntokens, tokens / elapsed))
            start = time.time()
            tokens = 0
    return total_loss / total_tokens

Training Data and Batching

We trained on the standard WMT 2014 English-German dataset consisting of about 4.5 million sentence pairs. Sentences were encoded using byte-pair encoding, which has a shared source-target vocabulary of about 37000 tokens. For English-French, we used the significantly larger WMT 2014 English-French dataset consisting of 36M sentences and split tokens into a 32000 word-piece vocabulary.

Sentence pairs were batched together by approximate sequence length. Each training batch contained a set of sentence pairs containing approximately 25000 source tokens and 25000 target tokens.

We will use torchtext for batching. This is discussed in more detail below. Here we create batches in a torchtext function that ensures our batch size, padded to the maximum sentence length in the batch, does not surpass a threshold (25000 if we have 8 GPUs).

global max_src_in_batch, max_tgt_in_batch
def batch_size_fn(new, count, sofar):
    "Keep augmenting batch and calculate total number of tokens + padding."
    global max_src_in_batch, max_tgt_in_batch
    if count == 1:
        max_src_in_batch = 0
        max_tgt_in_batch = 0
    max_src_in_batch = max(max_src_in_batch, len(new.src))
    max_tgt_in_batch = max(max_tgt_in_batch, len(new.trg) + 2)
    src_elements = count * max_src_in_batch
    tgt_elements = count * max_tgt_in_batch
    return max(src_elements, tgt_elements)

Hardware and Schedule

We trained our models on one machine with 8 NVIDIA P100 GPUs. For our base models using the hyperparameters described throughout the paper, each training step took about 0.4 seconds. We trained the base models for a total of 100,000 steps or 12 hours. For our big models, step time was 1.0 seconds. The big models were trained for 300,000 steps (3.5 days).

Optimizer

We used the Adam optimizer (cite) with β_1 = 0.9, β_2 = 0.98 and ε = 10^{-9}. We varied the learning rate over the course of training, according to the formula:

lrate = d_model^{-0.5} · min(step_num^{-0.5}, step_num · warmup_steps^{-1.5})

This corresponds to increasing the learning rate linearly for the first warmup_steps training steps, and decreasing it thereafter proportionally to the inverse square root of the step number. We used warmup_steps = 4000.

Note: this part is very important. You need to train the model with this learning rate setup.

class NoamOpt:
    "Optim wrapper that implements rate."
    def __init__(self, model_size, factor, warmup, optimizer):
        self.optimizer = optimizer
        self._step = 0
        self.warmup = warmup
        self.factor = factor
        self.model_size = model_size
        self._rate = 0

    def step(self):
        "Update parameters and rate"
        self._step += 1
        rate = self.rate()
        for p in self.optimizer.param_groups:
            p['lr'] = rate
        self._rate = rate
        self.optimizer.step()

    def rate(self, step=None):
        "Implement `lrate` above"
        if step is None:
            step = self._step
        return self.factor * \
            (self.model_size ** (-0.5) *
             min(step ** (-0.5), step * self.warmup ** (-1.5)))

def get_std_opt(model):
    return NoamOpt(model.src_embed[0].d_model, 2, 4000,
                   torch.optim.Adam(model.parameters(), lr=0, betas=(0.9, 0.98), eps=1e-9))
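As a quick worked value (my addition, using the NoamOpt class above): with d_model = 512, factor = 1 and warmup = 4000, the warmup and decay terms meet at step 4000, giving a peak learning rate of 512^{-0.5} · 4000^{-0.5} ≈ 7.0e-4, after which the rate decays as step^{-0.5}.

# Sketch (my addition): peak and decayed learning rates for the base configuration.
check_opt = NoamOpt(512, 1, 4000, None)
print(check_opt.rate(4000))    # ~0.000699 = 512**-0.5 * 4000**-0.5 (the peak)
print(check_opt.rate(16000))   # ~0.00035, i.e. decayed by a factor of 2 after 4x more steps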

Example curves of this learning rate schedule for different model sizes and warmup settings.

# Three settings of the lrate hyperparameters.
opts = [NoamOpt(512, 1, 4000, None),
        NoamOpt(512, 1, 8000, None),
        NoamOpt(256, 1, 4000, None)]
plt.plot(np.arange(1, 20000), [[opt.rate(i) for opt in opts] for i in range(1, 20000)])
plt.legend(["512:4000", "512:8000", "256:4000"])
None

Regularization

Label Smoothing

During training, we employed label smoothing of value ε_ls = 0.1 (cite). This hurts perplexity, as the model learns to be more unsure, but improves accuracy and BLEU score.

We implement label smoothing using the KL div loss. Instead of using a one-hot target distribution, we create a distribution that has confidence of the correct word and the rest of the smoothing mass distributed throughout the vocabulary.

class LabelSmoothing(nn.Module):
    "Implement label smoothing."
    def __init__(self, size, padding_idx, smoothing=0.0):
        super(LabelSmoothing, self).__init__()
        self.criterion = nn.KLDivLoss(size_average=False)
        self.padding_idx = padding_idx
        self.confidence = 1.0 - smoothing
        self.smoothing = smoothing
        self.size = size
        self.true_dist = None

    def forward(self, x, target):
        assert x.size(1) == self.size
        true_dist = x.data.clone()
        true_dist.fill_(self.smoothing / (self.size - 2))
        true_dist.scatter_(1, target.data.unsqueeze(1), self.confidence)
        true_dist[:, self.padding_idx] = 0
        mask = torch.nonzero(target.data == self.padding_idx)
        if mask.dim() > 0:
            true_dist.index_fill_(0, mask.squeeze(), 0.0)
        self.true_dist = true_dist
        return self.criterion(x, Variable(true_dist, requires_grad=False))

Here we can see an example of how the mass is distributed to the words based on confidence.

# Example of label smoothing.
crit = LabelSmoothing(5, 0, 0.4)
predict = torch.FloatTensor([[0, 0.2, 0.7, 0.1, 0],
                             [0, 0.2, 0.7, 0.1, 0],
                             [0, 0.2, 0.7, 0.1, 0]])
v = crit(Variable(predict.log()),
         Variable(torch.LongTensor([2, 1, 0])))

# Show the target distributions expected by the system.
plt.imshow(crit.true_dist)
None

Label smoothing actually starts to penalize the model if it gets very confident about a given choice.

crit = LabelSmoothing(5, 0, 0.1)
def loss(x):
    d = x + 3 * 1
    predict = torch.FloatTensor([[0, x / d, 1 / d, 1 / d, 1 / d],
                                 ])
    #print(predict)
    return crit(Variable(predict.log()),
                Variable(torch.LongTensor([1]))).data[0]
plt.plot(np.arange(1, 100), [loss(x) for x in range(1, 100)])
None

A First Example

We can begin by trying out a simple copy-task. Given a random set of input symbols from a small vocabulary, the goal is to generate back those same symbols.

Synthetic Data

def data_gen(V, batch, nbatches):
    "Generate random data for a src-tgt copy task."
    for i in range(nbatches):
        data = torch.from_numpy(np.random.randint(1, V, size=(batch, 10)))
        data[:, 0] = 1
        src = Variable(data, requires_grad=False)
        tgt = Variable(data, requires_grad=False)
        yield Batch(src, tgt, 0)

Loss Computation

class SimpleLossCompute:
    "A simple loss compute and train function."
    def __init__(self, generator, criterion, opt=None):
        self.generator = generator
        self.criterion = criterion
        self.opt = opt

    def __call__(self, x, y, norm):
        x = self.generator(x)
        loss = self.criterion(x.contiguous().view(-1, x.size(-1)),
                              y.contiguous().view(-1)) / norm
        loss.backward()
        if self.opt is not None:
            self.opt.step()
            self.opt.optimizer.zero_grad()
        return loss.data[0] * norm

Greedy Decoding

# Train the simple copy task.
V = 11
criterion = LabelSmoothing(size=V, padding_idx=0, smoothing=0.0)
model = make_model(V, V, N=2)
model_opt = NoamOpt(model.src_embed[0].d_model, 1, 400,
                    torch.optim.Adam(model.parameters(), lr=0, betas=(0.9, 0.98), eps=1e-9))

for epoch in range(10):
    model.train()
    run_epoch(data_gen(V, 30, 20), model,
              SimpleLossCompute(model.generator, criterion, model_opt))
    model.eval()
    print(run_epoch(data_gen(V, 30, 5), model,
                    SimpleLossCompute(model.generator, criterion, None)))

Epoch Step: 1 Loss: 3.023465 Tokens per Sec: 403.074173
Epoch Step: 1 Loss: 1.920030 Tokens per Sec: 641.689380
1.9274832487106324
Epoch Step: 1 Loss: 1.940011 Tokens per Sec: 432.003378
Epoch Step: 1 Loss: 1.699767 Tokens per Sec: 641.979665
1.657595729827881
Epoch Step: 1 Loss: 1.860276 Tokens per Sec: 433.320240
Epoch Step: 1 Loss: 1.546011 Tokens per Sec: 640.537198
1.4888023376464843
Epoch Step: 1 Loss: 1.682198 Tokens per Sec: 432.092305
Epoch Step: 1 Loss: 1.313169 Tokens per Sec: 639.441857
1.3485562801361084
Epoch Step: 1 Loss: 1.278768 Tokens per Sec: 433.568756
Epoch Step: 1 Loss: 1.062384 Tokens per Sec: 642.542067
0.9853351473808288
Epoch Step: 1 Loss: 1.269471 Tokens per Sec: 433.388727
Epoch Step: 1 Loss: 0.590709 Tokens per Sec: 642.862135
0.5686767101287842
Epoch Step: 1 Loss: 0.997076 Tokens per Sec: 433.009746
Epoch Step: 1 Loss: 0.343118 Tokens per Sec: 642.288427
0.34273059368133546
Epoch Step: 1 Loss: 0.459483 Tokens per Sec: 434.594030
Epoch Step: 1 Loss: 0.290385 Tokens per Sec: 642.519464
0.2612409472465515
Epoch Step: 1 Loss: 1.031042 Tokens per Sec: 434.557008
Epoch Step: 1 Loss: 0.437069 Tokens per Sec: 643.630322
0.4323212027549744
Epoch Step: 1 Loss: 0.617165 Tokens per Sec: 436.652626
Epoch Step: 1 Loss: 0.258793 Tokens per Sec: 644.372296
0.27331129014492034

This code predicts a translation using greedy decoding for simplicity.

def greedy_decode(model, src, src_mask, max_len, start_symbol):
    memory = model.encode(src, src_mask)
    ys = torch.ones(1, 1).fill_(start_symbol).type_as(src.data)
    for i in range(max_len-1):
        out = model.decode(memory, src_mask,
                           Variable(ys),
                           Variable(subsequent_mask(ys.size(1))
                                    .type_as(src.data)))
        prob = model.generator(out[:, -1])
        _, next_word = torch.max(prob, dim=1)
        next_word = next_word.data[0]
        ys = torch.cat([ys,
                        torch.ones(1, 1).type_as(src.data).fill_(next_word)], dim=1)
    return ys

model.eval()
src = Variable(torch.LongTensor([[1,2,3,4,5,6,7,8,9,10]]))
src_mask = Variable(torch.ones(1, 1, 10))
print(greedy_decode(model, src, src_mask, max_len=10, start_symbol=1))

    1     2     3     4     5     6     7     8     9    10
[torch.LongTensor of size 1x10]

A Real World Example

Now we consider a real-world example using the IWSLT German-English Translation task. This task is much smaller than the WMT task considered in the paper, but it illustrates the whole system. We also show how to use multi-gpu processing to make it really fast.

#!pip install torchtext spacy
#!python -m spacy download en
#!python -m spacy download de

Data Loading

We will load the dataset using torchtext and spacy for tokenization.

# For data loading.
from torchtext import data, datasets

if True:
    import spacy
    spacy_de = spacy.load('de')
    spacy_en = spacy.load('en')

    def tokenize_de(text):
        return [tok.text for tok in spacy_de.tokenizer(text)]

    def tokenize_en(text):
        return [tok.text for tok in spacy_en.tokenizer(text)]

    BOS_WORD = '<s>'
    EOS_WORD = '</s>'
    BLANK_WORD = "<blank>"
    SRC = data.Field(tokenize=tokenize_de, pad_token=BLANK_WORD)
    TGT = data.Field(tokenize=tokenize_en, init_token=BOS_WORD,
                     eos_token=EOS_WORD, pad_token=BLANK_WORD)

    MAX_LEN = 100
    train, val, test = datasets.IWSLT.splits(
        exts=('.de', '.en'), fields=(SRC, TGT),
        filter_pred=lambda x: len(vars(x)['src']) <= MAX_LEN and
            len(vars(x)['trg']) <= MAX_LEN)
    MIN_FREQ = 2
    SRC.build_vocab(train.src, min_freq=MIN_FREQ)
    TGT.build_vocab(train.trg, min_freq=MIN_FREQ)

Batching matters a ton for speed. We want to have very evenly divided batches, with absolutely minimal padding. To do this we have to hack a bit around the default torchtext batching. This code patches their default batching to make sure we search over enough sentences to find tight batches.

Iterators

class MyIterator(data.Iterator):
    def create_batches(self):
        if self.train:
            def pool(d, random_shuffler):
                for p in data.batch(d, self.batch_size * 100):
                    p_batch = data.batch(
                        sorted(p, key=self.sort_key),
                        self.batch_size, self.batch_size_fn)
                    for b in random_shuffler(list(p_batch)):
                        yield b
            self.batches = pool(self.data(), self.random_shuffler)
        else:
            self.batches = []
            for b in data.batch(self.data(), self.batch_size,
                                self.batch_size_fn):
                self.batches.append(sorted(b, key=self.sort_key))

def rebatch(pad_idx, batch):
    "Fix order in torchtext to match ours"
    src, trg = batch.src.transpose(0, 1), batch.trg.transpose(0, 1)
    return Batch(src, trg, pad_idx)

Multi-GPU Training

Finally to really target fast training, we will use multi-gpu. This code implements multi-gpu word generation. It is not specific to transformer so I won’t go into too much detail. The idea is to split up word generation at training time into chunks to be processed in parallel across many different gpus. We do this using pytorch parallel primitives:

  • replicate - split modules onto different gpus.
  • scatter - split batches onto different gpus
  • parallel_apply - apply module to batches on different gpus
  • gather - pull scattered data back onto one gpu.
  • nn.DataParallel - a special module wrapper that calls these all before evaluating.
# Skip if not interested in multigpu.
class MultiGPULossCompute:
    "A multi-gpu loss compute and train function."
    def __init__(self, generator, criterion, devices, opt=None, chunk_size=5):
        # Send out to different gpus.
        self.generator = generator
        self.criterion = nn.parallel.replicate(criterion,
                                               devices=devices)
        self.opt = opt
        self.devices = devices
        self.chunk_size = chunk_size

    def __call__(self, out, targets, normalize):
        total = 0.0
        generator = nn.parallel.replicate(self.generator,
                                          devices=self.devices)
        out_scatter = nn.parallel.scatter(out,
                                          target_gpus=self.devices)
        out_grad = [[] for _ in out_scatter]
        targets = nn.parallel.scatter(targets,
                                      target_gpus=self.devices)

        # Divide generating into chunks.
        chunk_size = self.chunk_size
        for i in range(0, out_scatter[0].size(1), chunk_size):
            # Predict distributions
            out_column = [[Variable(o[:, i:i+chunk_size].data,
                                    requires_grad=self.opt is not None)]
                          for o in out_scatter]
            gen = nn.parallel.parallel_apply(generator, out_column)

            # Compute loss.
            y = [(g.contiguous().view(-1, g.size(-1)),
                  t[:, i:i+chunk_size].contiguous().view(-1))
                 for g, t in zip(gen, targets)]
            loss = nn.parallel.parallel_apply(self.criterion, y)

            # Sum and normalize loss
            l = nn.parallel.gather(loss,
                                   target_device=self.devices[0])
            l = l.sum()[0] / normalize
            total += l.data[0]

            # Backprop loss to output of transformer
            if self.opt is not None:
                l.backward()
                for j, l in enumerate(loss):
                    out_grad[j].append(out_column[j][0].grad.data.clone())

        # Backprop all loss through transformer.
        if self.opt is not None:
            out_grad = [Variable(torch.cat(og, dim=1)) for og in out_grad]
            o1 = out
            o2 = nn.parallel.gather(out_grad,
                                    target_device=self.devices[0])
            o1.backward(gradient=o2)
            self.opt.step()
            self.opt.optimizer.zero_grad()
        return total * normalize

Now we create our model, criterion, optimizer, data iterators, and parallelization.

# GPUs to use
devices = [0, 1, 2, 3]
if True:
    pad_idx = TGT.vocab.stoi["<blank>"]
    model = make_model(len(SRC.vocab), len(TGT.vocab), N=6)
    model.cuda()
    criterion = LabelSmoothing(size=len(TGT.vocab), padding_idx=pad_idx, smoothing=0.1)
    criterion.cuda()
    BATCH_SIZE = 12000
    train_iter = MyIterator(train, batch_size=BATCH_SIZE, device=0,
                            repeat=False, sort_key=lambda x: (len(x.src), len(x.trg)),
                            batch_size_fn=batch_size_fn, train=True)
    valid_iter = MyIterator(val, batch_size=BATCH_SIZE, device=0,
                            repeat=False, sort_key=lambda x: (len(x.src), len(x.trg)),
                            batch_size_fn=batch_size_fn, train=False)
    model_par = nn.DataParallel(model, device_ids=devices)
None

Now we train the model. I will play with the warmup steps a bit, but everything else uses the default parameters. On an AWS p3.8xlarge with 4 Tesla V100s, this runs at ~27,000 tokens per second with a batch size of 12,000.

Training the System

#!wget https://s3.amazonaws.com/opennmt-models/iwslt.pt
if False:
    model_opt = NoamOpt(model.src_embed[0].d_model, 1, 2000,
                        torch.optim.Adam(model.parameters(), lr=0, betas=(0.9, 0.98), eps=1e-9))
    for epoch in range(10):
        model_par.train()
        run_epoch((rebatch(pad_idx, b) for b in train_iter),
                  model_par,
                  MultiGPULossCompute(model.generator, criterion,
                                      devices=devices, opt=model_opt))
        model_par.eval()
        loss = run_epoch((rebatch(pad_idx, b) for b in valid_iter),
                         model_par,
                         MultiGPULossCompute(model.generator, criterion,
                                             devices=devices, opt=None))
        print(loss)
else:
    model = torch.load("iwslt.pt")

Once trained we can decode the model to produce a set of translations. Here we simply translate the first sentence in the validation set. This dataset is pretty small so the translations with greedy search are reasonably accurate.

for i, batch in enumerate(valid_iter):
    src = batch.src.transpose(0, 1)[:1]
    src_mask = (src != SRC.vocab.stoi["<blank>"]).unsqueeze(-2)
    out = greedy_decode(model, src, src_mask,
                        max_len=60, start_symbol=TGT.vocab.stoi["<s>"])
    print("Translation:", end="\t")
    for i in range(1, out.size(1)):
        sym = TGT.vocab.itos[out[0, i]]
        if sym == "</s>": break
        print(sym, end=" ")
    print()
    print("Target:", end="\t")
    for i in range(1, batch.trg.size(0)):
        sym = TGT.vocab.itos[batch.trg.data[i, 0]]
        if sym == "</s>": break
        print(sym, end=" ")
    print()
    break

Translation:	<unk> <unk> . In my language , that means , thank you very much .
Gold:	<unk> <unk> . It means in my language , thank you very much .

Additional Components: BPE, Search, Averaging

So this mostly covers the transformer model itself. There are four aspects that we didn't cover explicitly. We also have all these additional features implemented in OpenNMT-py.

1) BPE / Word-piece: We can use a library to first preprocess the data into subword units. See Rico Sennrich's subword-nmt implementation. These models will transform the training data to look like this:

▁Die ▁Protokoll datei ▁kann ▁ heimlich ▁per ▁E - Mail ▁oder ▁FTP ▁an ▁einen ▁bestimmte n ▁Empfänger ▁gesendet ▁werden .

2) Shared Embeddings: When using BPE with shared vocabulary we can share the same weight vectors between the source / target / generator. See (cite) for details. To add this to the model simply do this:

if False:
    # Tie the source embedding, target embedding, and pre-softmax projection weights.
    model.src_embed[0].lut.weight = model.tgt_embed[0].lut.weight
    model.generator.proj.weight = model.tgt_embed[0].lut.weight

3) Beam Search: This is a bit too complicated to cover here. See OpenNMT-py for a pytorch implementation.

4) Model Averaging: The paper averages the last k checkpoints to create an ensembling effect. We can do this after the fact if we have a bunch of models:

def average(model, models):
    "Average models into model"
    for ps in zip(*[m.parameters() for m in [model] + models]):
        # Copy the mean of the checkpoint parameters into the target model.
        ps[0].data.copy_(torch.stack([p.data for p in ps[1:]]).mean(0))

Results

On the WMT 2014 English-to-German translation task, the big transformer model (Transformer (big) in Table 2) outperforms the best previously reported models (including ensembles) by more than 2.0 BLEU, establishing a new state-of-the-art BLEU score of 28.4. The configuration of this model is listed in the bottom line of Table 3. Training took 3.5 days on 8 P100 GPUs. Even our base model surpasses all previously published models and ensembles, at a fraction of the training cost of any of the competitive models.

On the WMT 2014 English-to-French translation task, our big model achieves a BLEU score of 41.0, outperforming all of the previously published single models, at less than 1/4 the training cost of the previous state-of-the-art model. The Transformer (big) model trained for English-to-French used dropout rate Pdrop = 0.1, instead of 0.3.

Image(filename="images/results.png")

The code we have written here is a version of the base model. There are fully trained versions of this system available here (Example Models).

With the additional extensions in the last section, the OpenNMT-py replication gets to 26.9 on EN-DE WMT. Here I have loaded in those parameters to our reimplementation.

!wget https://s3.amazonaws.com/opennmt-models/en-de-model.pt

model, SRC, TGT = torch.load("en-de-model.pt")
model.eval()
sent = "▁The ▁log ▁file ▁can ▁be ▁sent ▁secret ly ▁with ▁email ▁or ▁FTP ▁to ▁a ▁specified ▁receiver".split()
src = torch.LongTensor([[SRC.stoi[w] for w in sent]])
src = Variable(src)
src_mask = (src != SRC.stoi["<blank>"]).unsqueeze(-2)
out = greedy_decode(model, src, src_mask,
                    max_len=60, start_symbol=TGT.stoi["<s>"])
print("Translation:", end="\t")
trans = "<s> "
for i in range(1, out.size(1)):
    sym = TGT.itos[out[0, i]]
    if sym == "</s>": break
    trans += sym + " "
print(trans)

Translation:	<s> ▁Die ▁Protokoll datei ▁kann ▁ heimlich ▁per ▁E - Mail ▁oder ▁FTP ▁an ▁einen ▁bestimmte n ▁Empfänger ▁gesendet ▁werden .

Attention Visualization

Even with a greedy decoder the translation looks pretty good. We can further visualize it to see what is happening at each layer of the attention.

tgt_sent = trans.split()
def draw(data, x, y, ax):
    seaborn.heatmap(data,
                    xticklabels=x, square=True, yticklabels=y, vmin=0.0, vmax=1.0,
                    cbar=False, ax=ax)

for layer in range(1, 6, 2):
    fig, axs = plt.subplots(1, 4, figsize=(20, 10))
    print("Encoder Layer", layer+1)
    for h in range(4):
        draw(model.encoder.layers[layer].self_attn.attn[0, h].data,
             sent, sent if h == 0 else [], ax=axs[h])
    plt.show()

for layer in range(1, 6, 2):
    fig, axs = plt.subplots(1, 4, figsize=(20, 10))
    print("Decoder Self Layer", layer+1)
    for h in range(4):
        draw(model.decoder.layers[layer].self_attn.attn[0, h].data[:len(tgt_sent), :len(tgt_sent)],
             tgt_sent, tgt_sent if h == 0 else [], ax=axs[h])
    plt.show()
    print("Decoder Src Layer", layer+1)
    fig, axs = plt.subplots(1, 4, figsize=(20, 10))
    for h in range(4):
        # Cross-attention of the decoder over the source sentence.
        draw(model.decoder.layers[layer].src_attn.attn[0, h].data[:len(tgt_sent), :len(sent)],
             sent, tgt_sent if h == 0 else [], ax=axs[h])
    plt.show()

Encoder Layer 2

Encoder Layer 4

Encoder Layer 6

Decoder Self Layer 2

Decoder Src Layer 2

Decoder Self Layer 4

Decoder Src Layer 4

Decoder Self Layer 6

Decoder Src Layer 6

Conclusion

Hopefully this code is useful for future research. Please reach out if you have any issues. If you find this code helpful, also check out our other OpenNMT tools.

@inproceedings{opennmt,
  author    = {Guillaume Klein and Yoon Kim and Yuntian Deng and
               Jean Senellart and Alexander M. Rush},
  title     = {OpenNMT: Open-Source Toolkit for Neural Machine Translation},
  booktitle = {Proc. ACL},
  year      = {2017},
  url       = {https://doi.org/10.18653/v1/P17-4012},
  doi       = {10.18653/v1/P17-4012}
}

Cheers, srush

久久精品国产一区二区三区肥胖 | 东京热男人av天堂 | 日产精品高潮呻吟av久久 | 国产 精品 自在自线 | 女人被男人躁得好爽免费视频 | 久久久久免费看成人影片 | 国产av剧情md精品麻豆 | 一本一道久久综合久久 | 国产欧美精品一区二区三区 | 精品人妻人人做人人爽 | 成人精品一区二区三区中文字幕 | 国产精品美女久久久久av爽李琼 | 国产成人久久精品流白浆 | 99久久精品日本一区二区免费 | 中文字幕乱妇无码av在线 | 无码一区二区三区在线观看 | 中文精品无码中文字幕无码专区 | 久久国产精品精品国产色婷婷 | 欧美自拍另类欧美综合图片区 | 国产偷自视频区视频 | www国产亚洲精品久久久日本 | 欧美日本免费一区二区三区 | 少妇被黑人到高潮喷出白浆 | 伊人久久大香线蕉亚洲 | 99久久精品日本一区二区免费 | 荫蒂添的好舒服视频囗交 | 国产精品视频免费播放 | 日韩精品成人一区二区三区 | 久久久av男人的天堂 | 国产精品亚洲lv粉色 | 久久人人爽人人人人片 | 亚洲午夜久久久影院 | 久久99热只有频精品8 | 亚洲人交乣女bbw | 久久97精品久久久久久久不卡 | 久久人人爽人人爽人人片av高清 | 天堂亚洲免费视频 | 亚洲伊人久久精品影院 | av在线亚洲欧洲日产一区二区 | 久久精品人妻少妇一区二区三区 | 爽爽影院免费观看 | 久久久久久国产精品无码下载 | 青青青手机频在线观看 | 久久精品女人的天堂av | 国产香蕉97碰碰久久人人 | 男人的天堂2018无码 | 久久久中文字幕日本无吗 | 国产成人综合在线女婷五月99播放 | 国产亚洲人成a在线v网站 | 亚洲第一网站男人都懂 | 日韩av激情在线观看 | 亚洲爆乳精品无码一区二区三区 | 国产一区二区三区影院 | 对白脏话肉麻粗话av | 亚洲欧美日韩成人高清在线一区 | 免费网站看v片在线18禁无码 | 日日鲁鲁鲁夜夜爽爽狠狠 | 国产无遮挡又黄又爽免费视频 | 久久精品中文字幕一区 | 成人精品天堂一区二区三区 | 国产色xx群视频射精 | 欧美丰满熟妇xxxx | 欧美丰满熟妇xxxx | 欧美亚洲日韩国产人成在线播放 | 亚洲成a人片在线观看无码3d | 国产性生大片免费观看性 | 日本饥渴人妻欲求不满 | 国内老熟妇对白xxxxhd | 国产97在线 | 亚洲 | 国产人成高清在线视频99最全资源 | 99国产精品白浆在线观看免费 | 真人与拘做受免费视频一 | 亚洲精品久久久久久一区二区 | 丰满岳乱妇在线观看中字无码 | 国产精品爱久久久久久久 | 亚洲天堂2017无码 | 乱码午夜-极国产极内射 | 骚片av蜜桃精品一区 | 成人性做爰aaa片免费看 | 久久www免费人成人片 | 香港三级日本三级妇三级 | 激情五月综合色婷婷一区二区 | 久久国产精品_国产精品 | 蜜桃av抽搐高潮一区二区 | 丁香啪啪综合成人亚洲 | 亚洲国产欧美在线成人 | 装睡被陌生人摸出水好爽 | 夜夜影院未满十八勿进 | 欧美日韩久久久精品a片 | 免费国产黄网站在线观看 | 国产av人人夜夜澡人人爽麻豆 | 国产一区二区三区四区五区加勒比 | 久久亚洲精品中文字幕无男同 | 麻豆md0077饥渴少妇 | 免费中文字幕日韩欧美 | 国产精品久久久久久亚洲毛片 | 水蜜桃色314在线观看 | 久久久久久av无码免费看大片 | 四十如虎的丰满熟妇啪啪 | 久久久久久久女国产乱让韩 | 99久久久无码国产精品免费 | 东京一本一道一二三区 | 无码av最新清无码专区吞精 | 鲁大师影院在线观看 | 鲁一鲁av2019在线 | 欧美野外疯狂做受xxxx高潮 | 性欧美牲交在线视频 | 亚洲成a人片在线观看无码 | 人人澡人人透人人爽 | 亚洲成av人片天堂网无码】 | 一本一道久久综合久久 | 色五月丁香五月综合五月 | 国产极品视觉盛宴 | 久久久久免费看成人影片 | 国产成人综合色在线观看网站 | 性生交片免费无码看人 | 久久人人97超碰a片精品 | 日本熟妇大屁股人妻 | 午夜性刺激在线视频免费 | 国产九九九九九九九a片 | 国产卡一卡二卡三 | 亚洲精品国产精品乱码视色 | 少妇性俱乐部纵欲狂欢电影 | 亚洲热妇无码av在线播放 | 国产精品美女久久久久av爽李琼 | 欧洲欧美人成视频在线 | 国产精品.xx视频.xxtv | 少妇激情av一区二区 | 国产一区二区三区影院 | 精品一二三区久久aaa片 | 97色伦图片97综合影院 | 人妻天天爽夜夜爽一区二区 | 色婷婷久久一区二区三区麻豆 | 成 人 免费观看网站 | 欧美人妻一区二区三区 | 色综合视频一区二区三区 | 婷婷丁香六月激情综合啪 | 久久精品丝袜高跟鞋 | 色欲人妻aaaaaaa无码 | 极品尤物被啪到呻吟喷水 | 久久人人爽人人人人片 | 国产一区二区三区影院 | 波多野结衣aⅴ在线 | 欧美三级不卡在线观看 | 无码人妻久久一区二区三区不卡 | 久久aⅴ免费观看 | 久久久精品国产sm最大网站 | 中文字幕av日韩精品一区二区 | 久久亚洲精品中文字幕无男同 | 色五月五月丁香亚洲综合网 | 国产农村妇女aaaaa视频 撕开奶罩揉吮奶头视频 | 夜夜影院未满十八勿进 | 国产精品久久久久久无码 | 亚洲人成网站免费播放 | 亚洲综合在线一区二区三区 | 欧美激情一区二区三区成人 | 久久久久免费精品国产 | 中文字幕无码免费久久9一区9 | 久久午夜夜伦鲁鲁片无码免费 | 男女作爱免费网站 | 一本色道久久综合狠狠躁 | 亚洲精品一区国产 | 性色欲网站人妻丰满中文久久不卡 | 久久综合给合久久狠狠狠97色 | 国产乱码精品一品二品 | 亚洲精品一区二区三区在线观看 | 欧美国产亚洲日韩在线二区 | 成人毛片一区二区 | 四虎影视成人永久免费观看视频 | 人人妻人人澡人人爽欧美精品 | 亚洲人交乣女bbw | a片免费视频在线观看 | 欧美放荡的少妇 | 久在线观看福利视频 | 国产网红无码精品视频 | 久久精品女人天堂av免费观看 | 无码av免费一区二区三区试看 | 中文字幕av无码一区二区三区电影 | 亚洲一区二区三区国产精华液 | 久久人人爽人人人人片 | 亚洲国产精品成人久久蜜臀 | 草草网站影院白丝内射 | 亚洲精品中文字幕久久久久 | 正在播放东北夫妻内射 | 日本又色又爽又黄的a片18禁 | 国产av一区二区精品久久凹凸 | 久久久国产精品无码免费专区 | 全黄性性激高免费视频 | 一本精品99久久精品77 | 欧美国产日产一区二区 | 一本久久a久久精品vr综合 | 无码av岛国片在线播放 | 在线视频网站www色 | 精品久久久久久人妻无码中文字幕 | 噜噜噜亚洲色成人网站 | 永久免费观看美女裸体的网站 | 天堂а√在线地址中文在线 | 精品夜夜澡人妻无码av蜜桃 | 最近的中文字幕在线看视频 | 亚洲欧洲日本综合aⅴ在线 | 无码毛片视频一区二区本码 | 成 人 免费观看网站 | 人人超人人超碰超国产 | 77777熟女视频在线观看 а天堂中文在线官网 | 人妻天天爽夜夜爽一区二区 | 国产农村妇女aaaaa视频 撕开奶罩揉吮奶头视频 | 亚洲精品国产第一综合99久久 | 5858s亚洲色大成网站www | 国产在热线精品视频 | 国产精品亚洲五月天高清 | 爆乳一区二区三区无码 | 无码国产色欲xxxxx视频 | 亚洲中文无码av永久不收费 | 欧美精品国产综合久久 | 国产在线aaa片一区二区99 | 无码一区二区三区在线 | 亚洲 欧美 激情 小说 另类 | 久久 国产 尿 小便 嘘嘘 | 精品无人国产偷自产在线 | 老头边吃奶边弄进去呻吟 | 国产又粗又硬又大爽黄老大爷视 | 国产精品igao视频网 | 午夜熟女插插xx免费视频 | 欧洲精品码一区二区三区免费看 | 漂亮人妻洗澡被公强 日日躁 | 在线а√天堂中文官网 | 露脸叫床粗话东北少妇 | 丰满诱人的人妻3 | 国产精品香蕉在线观看 | 亚洲欧美中文字幕5发布 | 色偷偷人人澡人人爽人人模 | 欧美变态另类xxxx | 精品久久久久久亚洲精品 | 狠狠色丁香久久婷婷综合五月 | 精品无人区无码乱码毛片国产 | 性生交大片免费看女人按摩摩 | 男女下面进入的视频免费午夜 | 国产精品鲁鲁鲁 | 强开小婷嫩苞又嫩又紧视频 | 精品久久久无码人妻字幂 | 国产办公室秘书无码精品99 | 
无码国内精品人妻少妇 | 波多野结衣av一区二区全免费观看 | 综合人妻久久一区二区精品 | 国产农村乱对白刺激视频 | 一本久久伊人热热精品中文字幕 | 亚洲无人区午夜福利码高清完整版 | 国产精品亚洲综合色区韩国 | 色综合久久久无码网中文 | 老熟妇乱子伦牲交视频 | 国产精品对白交换视频 | 免费无码av一区二区 | 亚洲欧美精品aaaaaa片 | 无码人妻精品一区二区三区不卡 | 草草网站影院白丝内射 | 免费无码av一区二区 | 欧美人与物videos另类 | 两性色午夜免费视频 | 国产超级va在线观看视频 | 久久久国产精品无码免费专区 | 中文精品无码中文字幕无码专区 | 99久久精品无码一区二区毛片 | 日韩无码专区 | 日韩av无码一区二区三区 | 国产欧美亚洲精品a | 日本高清一区免费中文视频 | a在线亚洲男人的天堂 | 国产欧美精品一区二区三区 | 欧美日韩一区二区三区自拍 | 亚洲国产精品一区二区美利坚 | 亚洲 a v无 码免 费 成 人 a v | 熟妇人妻无码xxx视频 | 99在线 | 亚洲 | 又紧又大又爽精品一区二区 | 亚洲精品国产第一综合99久久 | 天堂亚洲免费视频 | 男人扒开女人内裤强吻桶进去 | 草草网站影院白丝内射 | 曰本女人与公拘交酡免费视频 | 国产色精品久久人妻 | 青青草原综合久久大伊人精品 | 香港三级日本三级妇三级 | 国产无遮挡又黄又爽免费视频 | 午夜精品一区二区三区的区别 | 精品人妻中文字幕有码在线 | 嫩b人妻精品一区二区三区 | 狠狠cao日日穞夜夜穞av | 天堂亚洲免费视频 | 真人与拘做受免费视频 | 少妇被黑人到高潮喷出白浆 | 无码乱肉视频免费大全合集 | 少妇无码av无码专区在线观看 | 日日天日日夜日日摸 | 日本又色又爽又黄的a片18禁 | 亚洲精品国产第一综合99久久 | 超碰97人人射妻 | 日日夜夜撸啊撸 | 久久97精品久久久久久久不卡 | 2020久久香蕉国产线看观看 | 成人一在线视频日韩国产 | www一区二区www免费 | 国产精品无码久久av | 欧洲美熟女乱又伦 | 色 综合 欧美 亚洲 国产 | 青青青爽视频在线观看 | 亚洲国产精品久久人人爱 | 麻花豆传媒剧国产免费mv在线 | 午夜熟女插插xx免费视频 | 少妇高潮喷潮久久久影院 | 日本高清一区免费中文视频 | 亚洲精品综合五月久久小说 | 娇妻被黑人粗大高潮白浆 | 中文字幕亚洲情99在线 | 人人妻人人澡人人爽精品欧美 | 偷窥日本少妇撒尿chinese | 色婷婷av一区二区三区之红樱桃 | 精品久久久久久人妻无码中文字幕 | 精品久久综合1区2区3区激情 | 丰满人妻精品国产99aⅴ | 粉嫩少妇内射浓精videos | 久久久精品456亚洲影院 | 亚洲综合伊人久久大杳蕉 | 国内精品一区二区三区不卡 | 欧美成人家庭影院 | 亚洲欧洲中文日韩av乱码 | 国产精品人妻一区二区三区四 | 九月婷婷人人澡人人添人人爽 | 日本精品人妻无码免费大全 | 国产在线aaa片一区二区99 | 久久成人a毛片免费观看网站 | 色情久久久av熟女人妻网站 | 最近的中文字幕在线看视频 | 国产激情精品一区二区三区 | 国产口爆吞精在线视频 | 国产va免费精品观看 | 亚洲精品国产a久久久久久 | 免费观看又污又黄的网站 | 免费人成在线观看网站 | 日韩精品一区二区av在线 | 国产莉萝无码av在线播放 | 精品亚洲韩国一区二区三区 | 在线视频网站www色 | 人人澡人人妻人人爽人人蜜桃 | 国产三级精品三级男人的天堂 | 伊人久久大香线蕉亚洲 | 成人无码视频在线观看网站 | 精品一区二区三区波多野结衣 | 粗大的内捧猛烈进出视频 | 欧美日本精品一区二区三区 | 亚洲国产综合无码一区 | 亚洲国产精品一区二区第一页 | 亚洲а∨天堂久久精品2021 | a在线亚洲男人的天堂 | 丰满人妻翻云覆雨呻吟视频 | 亚洲精品鲁一鲁一区二区三区 | 无码人妻黑人中文字幕 | 久久久久av无码免费网 | 一二三四在线观看免费视频 | 动漫av网站免费观看 | 精品少妇爆乳无码av无码专区 | 日本精品少妇一区二区三区 | 欧美日韩在线亚洲综合国产人 | 欧美日韩一区二区免费视频 | 久激情内射婷内射蜜桃人妖 | 自拍偷自拍亚洲精品10p | 精品国产乱码久久久久乱码 | 人妻体内射精一区二区三四 | 亚洲综合久久一区二区 | 疯狂三人交性欧美 | 亚洲天堂2017无码 | 香蕉久久久久久av成人 | 亚洲人成网站色7799 | 精品无码一区二区三区的天堂 | 国产精品人妻一区二区三区四 | 成人精品视频一区二区三区尤物 | 免费观看又污又黄的网站 | 中文无码精品a∨在线观看不卡 | 久久精品99久久香蕉国产色戒 | 亚洲国产成人a精品不卡在线 | 在线观看国产午夜福利片 | 日本乱人伦片中文三区 | 亚洲国产精品一区二区美利坚 | 欧美真人作爱免费视频 | 亚洲成熟女人毛毛耸耸多 | 无码一区二区三区在线 | 日本一卡2卡3卡4卡无卡免费网站 国产一区二区三区影院 | 成人精品视频一区二区 | 色欲av亚洲一区无码少妇 | 大肉大捧一进一出好爽视频 | 在线天堂新版最新版在线8 | 无码吃奶揉捏奶头高潮视频 | 51国偷自产一区二区三区 | 美女毛片一区二区三区四区 | 少妇无码av无码专区在线观看 | 亚洲爆乳无码专区 | 国产午夜无码精品免费看 | 国产福利视频一区二区 | 亚洲毛片av日韩av无码 | 婷婷色婷婷开心五月四房播播 | 国产无av码在线观看 | 青草视频在线播放 | 亚洲精品鲁一鲁一区二区三区 | 国产精品无套呻吟在线 | 精品一区二区三区无码免费视频 | 四虎影视成人永久免费观看视频 | 好屌草这里只有精品 | 国内精品人妻无码久久久影院蜜桃 | 亚洲国产精品一区二区第一页 | 国产精品二区一区二区aⅴ污介绍 | 国产精品久久久 | 国产成人亚洲综合无码 | 国产舌乚八伦偷品w中 | 亚洲 a v无 码免 费 成 人 a v | 久久精品国产精品国产精品污 | 四虎4hu永久免费 | 久久婷婷五月综合色国产香蕉 | 亚洲精品国产精品乱码视色 | 无遮挡啪啪摇乳动态图 | 欧美熟妇另类久久久久久不卡 | 日本护士毛茸茸高潮 | 小鲜肉自慰网站xnxx | 99久久精品国产一区二区蜜芽 | 久久精品人人做人人综合试看 | 国产成人精品优优av | 女人高潮内射99精品 | 欧美怡红院免费全部视频 | 中文无码成人免费视频在线观看 | 免费无码一区二区三区蜜桃大 | 欧美自拍另类欧美综合图片区 | 欧美乱妇无乱码大黄a片 | 国产情侣作爱视频免费观看 | 一本色道久久综合亚洲精品不卡 | 精品国产av色一区二区深夜久久 | 特级做a爰片毛片免费69 | 天天躁夜夜躁狠狠是什么心态 | 国产情侣作爱视频免费观看 | а√资源新版在线天堂 | 久久综合给久久狠狠97色 | 荫蒂添的好舒服视频囗交 | 久久精品国产大片免费观看 | 国产成人无码a区在线观看视频app | 久久久久久亚洲精品a片成人 | 国产在线无码精品电影网 | 帮老师解开蕾丝奶罩吸乳网站 | 国产成人人人97超碰超爽8 | 久久久精品人妻久久影视 | 伊人久久婷婷五月综合97色 | 一本色道婷婷久久欧美 | 欧美日本日韩 | 国产一区二区三区四区五区加勒比 | 永久免费精品精品永久-夜色 | 18黄暴禁片在线观看 | 国产三级久久久精品麻豆三级 | 国产高清不卡无码视频 | 99riav国产精品视频 | 中文字幕无码免费久久99 | 人妻熟女一区 | 国产卡一卡二卡三 | 国产一精品一av一免费 | 亚洲精品一区二区三区大桥未久 | 精品国偷自产在线视频 | 十八禁视频网站在线观看 | 日韩亚洲欧美精品综合 | 香港三级日本三级妇三级 | 欧美日韩一区二区综合 | 成人aaa片一区国产精品 | 国产成人精品久久亚洲高清不卡 | 亚拍精品一区二区三区探花 | 99久久人妻精品免费二区 | 青草视频在线播放 | 久久精品丝袜高跟鞋 | 欧美性色19p | 国产香蕉尹人综合在线观看 | 亚洲成a人一区二区三区 | 欧美国产日产一区二区 | 玩弄中年熟妇正在播放 | 色综合久久88色综合天天 | 熟女体下毛毛黑森林 | 日本xxxx色视频在线观看免费 | 国产日产欧产精品精品app | 午夜福利一区二区三区在线观看 |