Extracting, transforming and selecting features
This section covers algorithms for working with features, roughly divided into these groups:
- Extraction: Extracting features from “raw” data
- Transformation: Scaling, converting, or modifying features
- Selection: Selecting a subset from a larger set of features
- Locality Sensitive Hashing (LSH): This class of algorithms combines aspects of feature transformation with other algorithms.
Table of Contents
- Feature Extractors
  - TF-IDF
  - Word2Vec
  - CountVectorizer
  - FeatureHasher
- Feature Transformers
  - Tokenizer
  - StopWordsRemover
  - n-gram
  - Binarizer
  - PCA
  - PolynomialExpansion
  - Discrete Cosine Transform (DCT)
  - StringIndexer
  - IndexToString
  - OneHotEncoder
  - VectorIndexer
  - Interaction
  - Normalizer
  - StandardScaler
  - RobustScaler
  - MinMaxScaler
  - MaxAbsScaler
  - Bucketizer
  - ElementwiseProduct
  - SQLTransformer
  - VectorAssembler
  - VectorSizeHint
  - QuantileDiscretizer
  - Imputer
- Feature Selectors
  - VectorSlicer
  - RFormula
  - ChiSqSelector
  - UnivariateFeatureSelector
  - VarianceThresholdSelector
- Locality Sensitive Hashing
  - LSH Operations
    - Feature Transformation
    - Approximate Similarity Join
    - Approximate Nearest Neighbor Search
  - LSH Algorithms
    - Bucketed Random Projection for Euclidean Distance
    - MinHash for Jaccard Distance
Feature Extractors
TF-IDF
Term frequency-inverse document frequency (TF-IDF) is a feature vectorization method widely used in text mining to reflect the importance of a term to a document in the corpus. Denote a term by t, a document by d, and the corpus by D. Term frequency TF(t,d) is the number of times that term t appears in document d, while document frequency DF(t,D) is the number of documents that contains term t. If we only use term frequency to measure the importance, it is very easy to over-emphasize terms that appear very often but carry little information about the document, e.g. “a”, “the”, and “of”. If a term appears very often across the corpus, it means it doesn’t carry special information about a particular document. Inverse document frequency is a numerical measure of how much information a term provides:
\[ IDF(t, D) = \log \frac{|D| + 1}{DF(t, D) + 1} \]
where |D| is the total number of documents in the corpus. Since logarithm is used, if a term appears in all documents, its IDF value becomes 0. Note that a smoothing term is applied to avoid dividing by zero for terms outside the corpus. The TF-IDF measure is simply the product of TF and IDF:
\[ TFIDF(t, d, D) = TF(t, d) \cdot IDF(t, D) \]
There are several variants on the definition of term frequency and document frequency. In MLlib, we separate TF and IDF to make them flexible.
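As a quick worked example of the definitions above (the counts here are made up, and log is taken as the natural logarithm): suppose the corpus has |D| = 4 documents, and a term t appears 3 times in document d and occurs in 2 of the 4 documents. Then
\[ IDF(t, D) = \log \frac{4 + 1}{2 + 1} = \log \frac{5}{3} \approx 0.51, \qquad TFIDF(t, d, D) = 3 \times 0.51 \approx 1.53 \]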
TF: Both HashingTF and CountVectorizer can be used to generate the term frequency vectors.
HashingTF is a Transformer which takes sets of terms and converts those sets into fixed-length feature vectors. In text processing, a “set of terms” might be a bag of words. HashingTF utilizes the hashing trick. A raw feature is mapped into an index (term) by applying a hash function. The hash function used here is MurmurHash 3. Then term frequencies are calculated based on the mapped indices. This approach avoids the need to compute a global term-to-index map, which can be expensive for a large corpus, but it suffers from potential hash collisions, where different raw features may become the same term after hashing. To reduce the chance of collision, we can increase the target feature dimension, i.e. the number of buckets of the hash table. Since a simple modulo on the hashed value is used to determine the vector index, it is advisable to use a power of two as the feature dimension, otherwise the features will not be mapped evenly to the vector indices. The default feature dimension is 2^18 = 262,144. An optional binary toggle parameter controls term frequency counts. When set to true all nonzero frequency counts are set to 1. This is especially useful for discrete probabilistic models that model binary, rather than integer, counts.
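Both the feature dimension and the binary toggle are exposed as setters on HashingTF; the snippet below is a minimal sketch (the "words" input column is illustrative) combining a power-of-two dimension with binary term counts.

import org.apache.spark.ml.feature.HashingTF

// Minimal sketch: hash a hypothetical "words" column (a Seq[String] per row)
// into 2^14 buckets and report every nonzero term count as 1.0.
val binaryHashingTF = new HashingTF()
  .setInputCol("words")
  .setOutputCol("binaryFeatures")
  .setNumFeatures(1 << 14) // a power of two, so hashes map evenly onto vector indices
  .setBinary(true)         // nonzero frequency counts become 1.0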
CountVectorizer converts text documents to vectors of term counts. Refer to CountVectorizer for more details.
IDF: IDF is an Estimator which is fit on a dataset and produces an IDFModel. The IDFModel takes feature vectors (generally created from HashingTF or CountVectorizer) and scales each feature. Intuitively, it down-weights features which appear frequently in a corpus.
Note: spark.ml doesn’t provide tools for text segmentation. We refer users to the Stanford NLP Group and scalanlp/chalk.
Examples
In the following code segment, we start with a set of sentences. We split each sentence into words using Tokenizer. For each sentence (bag of words), we use HashingTF to hash the sentence into a feature vector. We use IDF to rescale the feature vectors; this generally improves performance when using text as features. Our feature vectors could then be passed to a learning algorithm.

Refer to the HashingTF Scala docs and the IDF Scala docs for more details on the API.
import org.apache.spark.ml.feature.{HashingTF, IDF, Tokenizer}

val sentenceData = spark.createDataFrame(Seq(
(0.0, "Hi I heard about Spark"),
(0.0, "I wish Java could use case classes"),
(1.0, "Logistic regression models are neat")
)).toDF("label", "sentence")

val tokenizer = new Tokenizer().setInputCol("sentence").setOutputCol("words")
val wordsData = tokenizer.transform(sentenceData)

val hashingTF = new HashingTF()
.setInputCol("words").setOutputCol("rawFeatures").setNumFeatures(20)

val featurizedData = hashingTF.transform(wordsData)
// alternatively, CountVectorizer can also be used to get term frequency vectors

val idf = new IDF().setInputCol("rawFeatures").setOutputCol("features")
val idfModel = idf.fit(featurizedData)

val rescaledData = idfModel.transform(featurizedData)
rescaledData.select("label", "features").show()
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/TfIdfExample.scala” in the Spark repo.
Word2Vec
Word2Vec is an Estimator which takes sequences of words representing documents and trains a Word2VecModel. The model maps each word to a unique fixed-size vector. The Word2VecModel transforms each document into a vector using the average of all words in the document; this vector can then be used as features for prediction, document similarity calculations, etc. Please refer to the MLlib user guide on Word2Vec for more details.
Examples
In the following code segment, we start with a set of documents, each of which is represented as a sequence of words. For each document, we transform it into a feature vector. This feature vector could then be passed to a learning algorithm.
Refer to the Word2Vec Scala docs for more details on the API.
import org.apache.spark.ml.feature.Word2Vec
import org.apache.spark.ml.linalg.Vector
import org.apache.spark.sql.Row

// Input data: Each row is a bag of words from a sentence or document.
val documentDF = spark.createDataFrame(Seq(
"Hi I heard about Spark".split(" "),
"I wish Java could use case classes".split(" "),
"Logistic regression models are neat".split(" ")
).map(Tuple1.apply)).toDF("text")

// Learn a mapping from words to Vectors.
val word2Vec = new Word2Vec()
.setInputCol("text")
.setOutputCol("result")
.setVectorSize(3)
.setMinCount(0)
val model = word2Vec.fit(documentDF)

val result = model.transform(documentDF)
result.collect().foreach { case Row(text: Seq[_], features: Vector) =>
println(s"Text: [${text.mkString(", ")}] => \nVector: $features\n") }
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/Word2VecExample.scala” in the Spark repo.
CountVectorizer
CountVectorizer and CountVectorizerModel aim to help convert a collection of text documents to vectors of token counts. When an a-priori dictionary is not available, CountVectorizer can be used as an Estimator to extract the vocabulary, and generates a CountVectorizerModel. The model produces sparse representations for the documents over the vocabulary, which can then be passed to other algorithms like LDA.
During the fitting process, CountVectorizer will select the top vocabSize words ordered by term frequency across the corpus. An optional parameter minDF also affects the fitting process by specifying the minimum number (or fraction if < 1.0) of documents a term must appear in to be included in the vocabulary. Another optional binary toggle parameter controls the output vector. If set to true all nonzero counts are set to 1. This is especially useful for discrete probabilistic models that model binary, rather than integer, counts.
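CountVectorizer exposes the binary toggle as a setter as well; below is a minimal sketch (column names are illustrative) of a vectorizer whose output counts are capped at 1.0.

import org.apache.spark.ml.feature.CountVectorizer

// Minimal sketch: build a vocabulary from a hypothetical "words" column and
// emit 1.0 for every token that occurs at least once in a document.
val binaryCv = new CountVectorizer()
  .setInputCol("words")
  .setOutputCol("binaryFeatures")
  .setMinDF(1.0)
  .setBinary(true)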
Examples
Assume that we have the following DataFrame with columns id and texts:

id | texts
---|------
0 | Array("a", "b", "c")
1 | Array("a", "b", "b", "c", "a")

each row in texts is a document of type Array[String]. Invoking fit of CountVectorizer produces a CountVectorizerModel with vocabulary (a, b, c). Then the output column “vector” after transformation contains:

id | texts | vector
---|-------|-------
0 | Array("a", "b", "c") | (3,[0,1,2],[1.0,1.0,1.0])
1 | Array("a", "b", "b", "c", "a") | (3,[0,1,2],[2.0,2.0,1.0])

Each vector represents the token counts of the document over the vocabulary.
Refer to the CountVectorizer Scala docs and the CountVectorizerModel Scala docs for more details on the API.
import org.apache.spark.ml.feature.{CountVectorizer, CountVectorizerModel}

val df = spark.createDataFrame(Seq(
(0, Array("a", "b", "c")),
(1, Array("a", "b", "b", "c", "a"))
)).toDF("id", "words")

// fit a CountVectorizerModel from the corpus
val cvModel: CountVectorizerModel = new CountVectorizer()
.setInputCol("words")
.setOutputCol("features")
.setVocabSize(3)
.setMinDF(2)
.fit(df)

// alternatively, define CountVectorizerModel with a-priori vocabulary
val cvm = new CountVectorizerModel(Array("a", "b", "c"))
.setInputCol("words")
.setOutputCol("features")

cvModel.transform(df).show(false)
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/CountVectorizerExample.scala” in the Spark repo.
FeatureHasher
Feature hashing projects a set of categorical or numerical features into a feature vector of specified dimension (typically substantially smaller than that of the original feature space). This is done using the hashing trick to map features to indices in the feature vector.
The FeatureHasher transformer operates on multiple columns. Each column may contain either numeric or categorical features. Behavior and handling of column data types is as follows:
- Numeric columns: For numeric features, the hash value of the column name is used to map the feature value to its index in the feature vector. By default, numeric features are not treated as categorical (even when they are integers). To treat them as categorical, specify the relevant columns using the categoricalCols parameter (see the sketch below).
- String columns: For categorical features, the hash value of the string “column_name=value” is used to map to the vector index, with an indicator value of 1.0. Thus, categorical features are “one-hot” encoded (similarly to using OneHotEncoder with dropLast=false).
- Boolean columns: Boolean values are treated in the same way as string columns. That is, boolean features are represented as “column_name=true” or “column_name=false”, with an indicator value of 1.0.
Null (missing) values are ignored (implicitly zero in the resulting feature vector).
The hash function used here is also the MurmurHash 3 used in HashingTF. Since a simple modulo on the hashed value is used to determine the vector index, it is advisable to use a power of two as the numFeatures parameter; otherwise the features will not be mapped evenly to the vector indices.
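As a minimal sketch of the categoricalCols parameter mentioned above (reusing the column names from the example that follows), a numeric column can be forced to be hashed as a categorical feature:

import org.apache.spark.ml.feature.FeatureHasher

// Minimal sketch: hash "real" as a categorical feature ("real=2.2"-style terms)
// instead of using its numeric value directly.
val hasherWithCategoricals = new FeatureHasher()
  .setInputCols("real", "bool", "stringNum", "string")
  .setCategoricalCols(Array("real"))
  .setOutputCol("features")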

Examples
Assume that we have a DataFrame with 4 input columns real, bool, stringNum, and string. These different data types as input will illustrate the behavior of the transform to produce a column of feature vectors.

real | bool | stringNum | string
-----|------|-----------|-------
2.2 | true | 1 | foo
3.3 | false | 2 | bar
4.4 | false | 3 | baz
5.5 | false | 4 | foo

Then the output of FeatureHasher.transform on this DataFrame is:

real | bool | stringNum | string | features
-----|------|-----------|--------|---------
2.2 | true | 1 | foo | (262144,[51871,63643,174475,253195],[1.0,1.0,2.2,1.0])
3.3 | false | 2 | bar | (262144,[6031,80619,140467,174475],[1.0,1.0,1.0,3.3])
4.4 | false | 3 | baz | (262144,[24279,140467,174475,196810],[1.0,1.0,4.4,1.0])
5.5 | false | 4 | foo | (262144,[63643,140467,168512,174475],[1.0,1.0,1.0,5.5])

The resulting feature vectors could then be passed to a learning algorithm.
Refer to the FeatureHasher Scala docs for more details on the API.
import org.apache.spark.ml.feature.FeatureHasher

val dataset = spark.createDataFrame(Seq(
(2.2, true, "1", "foo"),
(3.3, false, "2", "bar"),
(4.4, false, "3", "baz"),
(5.5, false, "4", "foo")
)).toDF("real", "bool", "stringNum", "string")

val hasher = new FeatureHasher()
.setInputCols("real", "bool", "stringNum", "string")
.setOutputCol("features")

val featurized = hasher.transform(dataset)
featurized.show(false)
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/FeatureHasherExample.scala” in the Spark repo.
Feature Transformers
Tokenizer
Tokenization is the process of taking text (such as a sentence) and breaking it into individual terms (usually words). A simple Tokenizer class provides this functionality. The example below shows how to split sentences into sequences of words.
RegexTokenizer allows more advanced tokenization based on regular expression (regex) matching. By default, the parameter “pattern” (regex, default: “\s+”) is used as delimiters to split the input text. Alternatively, users can set parameter “gaps” to false indicating the regex “pattern” denotes “tokens” rather than splitting gaps, and find all matching occurrences as the tokenization result.
Examples
Refer to the Tokenizer Scala docs and the RegexTokenizer Scala docs for more details on the API.
import org.apache.spark.ml.feature.{RegexTokenizer, Tokenizer}
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

val sentenceDataFrame = spark.createDataFrame(Seq(
(0, "Hi I heard about Spark"),
(1, "I wish Java could use case classes"),
(2, "Logistic,regression,models,are,neat")
)).toDF("id", "sentence")

val tokenizer = new Tokenizer().setInputCol("sentence").setOutputCol("words")
val regexTokenizer = new RegexTokenizer()
.setInputCol("sentence")
.setOutputCol("words")
.setPattern("\\W") // alternatively .setPattern("\\w+").setGaps(false)

val countTokens = udf { (words: Seq[String]) => words.length }

val tokenized = tokenizer.transform(sentenceDataFrame)
tokenized.select("sentence", "words")
.withColumn("tokens", countTokens(col("words"))).show(false)

val regexTokenized = regexTokenizer.transform(sentenceDataFrame)
regexTokenized.select("sentence", "words")
.withColumn("tokens", countTokens(col("words"))).show(false)
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/TokenizerExample.scala” in the Spark repo.
StopWordsRemover
Stop words are words which should be excluded from the input, typically because the words appear frequently and don’t carry as much meaning.
StopWordsRemover takes as input a sequence of strings (e.g. the output of a Tokenizer) and drops all the stop words from the input sequences. The list of stopwords is specified by the stopWords parameter. Default stop words for some languages are accessible by calling StopWordsRemover.loadDefaultStopWords(language), for which available options are “danish”, “dutch”, “english”, “finnish”, “french”, “german”, “hungarian”, “italian”, “norwegian”, “portuguese”, “russian”, “spanish”, “swedish” and “turkish”. A boolean parameter caseSensitive indicates if the matches should be case sensitive (false by default).
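The default lists can also be loaded and extended by hand; the following is a minimal sketch (the extra words are made up) of supplying a custom stop word list:

import org.apache.spark.ml.feature.StopWordsRemover

// Minimal sketch: start from the built-in English list and append custom words.
val customStopWords = StopWordsRemover.loadDefaultStopWords("english") ++ Array("spark", "mllib")

val customRemover = new StopWordsRemover()
  .setInputCol("raw")
  .setOutputCol("filtered")
  .setStopWords(customStopWords)
  .setCaseSensitive(false)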
Examples
Assume that we have the following DataFrame with columns id and raw:

id | raw
---|----
0 | [I, saw, the, red, balloon]
1 | [Mary, had, a, little, lamb]

Applying StopWordsRemover with raw as the input column and filtered as the output column, we should get the following:

id | raw | filtered
---|-----|---------
0 | [I, saw, the, red, balloon] | [saw, red, balloon]
1 | [Mary, had, a, little, lamb] | [Mary, little, lamb]

In filtered, the stop words “I”, “the”, “had”, and “a” have been filtered out.
Refer to the StopWordsRemover Scala docs for more details on the API.
import org.apache.spark.ml.feature.StopWordsRemover

val remover = new StopWordsRemover()
.setInputCol("raw")
.setOutputCol("filtered")

val dataSet = spark.createDataFrame(Seq(
(0, Seq("I", "saw", "the", "red", "balloon")),
(1, Seq("Mary", "had", "a", "little", "lamb"))
)).toDF("id", "raw")

remover.transform(dataSet).show(false)
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/StopWordsRemoverExample.scala” in the Spark repo.
n-gram
An n-gram is a sequence of n tokens (typically words) for some integer n. The NGram class can be used to transform input features into n-grams.
NGram takes as input a sequence of strings (e.g. the output of a Tokenizer). The parameter n is used to determine the number of terms in each n-gram. The output will consist of a sequence of n-grams where each n-gram is represented by a space-delimited string of n consecutive words. If the input sequence contains fewer than n strings, no output is produced.
Examples
Refer to the NGram Scala docs for more details on the API.
import org.apache.spark.ml.feature.NGram

val wordDataFrame = spark.createDataFrame(Seq(
(0, Array("Hi", "I", "heard", "about", "Spark")),
(1, Array("I", "wish", "Java", "could", "use", "case", "classes")),
(2, Array("Logistic", "regression", "models", "are", "neat"))
)).toDF("id", "words")

val ngram = new NGram().setN(2).setInputCol("words").setOutputCol("ngrams")

val ngramDataFrame = ngram.transform(wordDataFrame)
ngramDataFrame.select("ngrams").show(false)
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/NGramExample.scala” in the Spark repo.
Binarizer
Binarization is the process of thresholding numerical features to binary (0/1) features.
Binarizer takes the common parameters inputCol and outputCol, as well as the threshold for binarization. Feature values greater than the threshold are binarized to 1.0; values equal to or less than the threshold are binarized to 0.0. Both Vector and Double types are supported for inputCol.
Examples
Refer to the Binarizer Scala docs for more details on the API.
import org.apache.spark.ml.feature.Binarizer

val data = Array((0, 0.1), (1, 0.8), (2, 0.2))
val dataFrame = spark.createDataFrame(data).toDF("id", "feature")

val binarizer: Binarizer = new Binarizer()
.setInputCol("feature")
.setOutputCol("binarized_feature")
.setThreshold(0.5)

val binarizedDataFrame = binarizer.transform(dataFrame)

println(s"Binarizer output with Threshold = ${binarizer.getThreshold}")
binarizedDataFrame.show()
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/BinarizerExample.scala” in the Spark repo.
PCA
PCA is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components. A PCA class trains a model to project vectors to a low-dimensional space using PCA. The example below shows how to project 5-dimensional feature vectors into 3-dimensional principal components.
Examples
Refer to the PCA Scala docs for more details on the API.
import org.apache.spark.ml.feature.PCA
import org.apache.spark.ml.linalg.Vectors

val data = Array(
Vectors.sparse(5, Seq((1, 1.0), (3, 7.0))),
Vectors.dense(2.0, 0.0, 3.0, 4.0, 5.0),
Vectors.dense(4.0, 0.0, 0.0, 6.0, 7.0)
)
val df = spark.createDataFrame(data.map(Tuple1.apply)).toDF("features")

val pca = new PCA()
.setInputCol("features")
.setOutputCol("pcaFeatures")
.setK(3)
.fit(df)

val result = pca.transform(df).select("pcaFeatures")
result.show(false)
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/PCAExample.scala” in the Spark repo.
PolynomialExpansion
Polynomial expansion is the process of expanding your features into a polynomial space, which is formulated by an n-degree combination of original dimensions. A PolynomialExpansion class provides this functionality. The example below shows how to expand your features into a 3-degree polynomial space.
Examples
Refer to the PolynomialExpansion Scala docs for more details on the API.
import org.apache.spark.ml.feature.PolynomialExpansion
import org.apache.spark.ml.linalg.Vectors

val data = Array(
Vectors.dense(2.0, 1.0),
Vectors.dense(0.0, 0.0),
Vectors.dense(3.0, -1.0)
)
val df = spark.createDataFrame(data.map(Tuple1.apply)).toDF("features")

val polyExpansion = new PolynomialExpansion()
.setInputCol("features")
.setOutputCol("polyFeatures")
.setDegree(3)

val polyDF = polyExpansion.transform(df)
polyDF.show(false)
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/PolynomialExpansionExample.scala” in the Spark repo.
Discrete Cosine Transform (DCT)
The Discrete Cosine Transform transforms a length N real-valued sequence in the time domain into another length N real-valued sequence in the frequency domain. A DCT class provides this functionality, implementing the DCT-II and scaling the result by 1/√2 such that the representing matrix for the transform is unitary. No shift is applied to the transformed sequence (e.g. the 0th element of the transformed sequence is the 0th DCT coefficient and not the N/2th).
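For reference, the (unscaled) DCT-II that underlies this transformer is the standard textbook definition below; as noted above, the implementation additionally scales the result so that the representing matrix is unitary.
\[ X_k = \sum_{n=0}^{N-1} x_n \cos\!\left[ \frac{\pi}{N} \left( n + \tfrac{1}{2} \right) k \right], \qquad k = 0, \ldots, N-1 \]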
Examples
Refer to the DCT Scala docs for more details on the API.
import org.apache.spark.ml.feature.DCT
import org.apache.spark.ml.linalg.Vectors

val data = Seq(
Vectors.dense(0.0, 1.0, -2.0, 3.0),
Vectors.dense(-1.0, 2.0, 4.0, -7.0),
Vectors.dense(14.0, -2.0, -5.0, 1.0))

val df = spark.createDataFrame(data.map(Tuple1.apply)).toDF("features")

val dct = new DCT()
.setInputCol("features")
.setOutputCol("featuresDCT")
.setInverse(false)

val dctDf = dct.transform(df)
dctDf.select("featuresDCT").show(false)
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/DCTExample.scala” in the Spark repo.
StringIndexer
StringIndexer encodes a string column of labels to a column of label indices. StringIndexer can encode multiple columns. The indices are in [0, numLabels), and four ordering options are supported: “frequencyDesc”: descending order by label frequency (most frequent label assigned 0), “frequencyAsc”: ascending order by label frequency (least frequent label assigned 0), “alphabetDesc”: descending alphabetical order, and “alphabetAsc”: ascending alphabetical order (default = “frequencyDesc”). Note that in case of equal frequency when under “frequencyDesc”/”frequencyAsc”, the strings are further sorted by alphabet.
The unseen labels will be put at index numLabels if user chooses to keep them. If the input column is numeric, we cast it to string and index the string values. When downstream pipeline components such as Estimator or Transformer make use of this string-indexed label, you must set the input column of the component to this string-indexed column name. In many cases, you can set the input column with setInputCol.
Examples
Assume that we have the following DataFrame with columns id and category:

id | category
---|---------
0 | a
1 | b
2 | c
3 | a
4 | a
5 | c

category is a string column with three labels: “a”, “b”, and “c”. Applying StringIndexer with category as the input column and categoryIndex as the output column, we should get the following:

id | category | categoryIndex
---|----------|--------------
0 | a | 0.0
1 | b | 2.0
2 | c | 1.0
3 | a | 0.0
4 | a | 0.0
5 | c | 1.0

“a” gets index 0 because it is the most frequent, followed by “c” with index 1 and “b” with index 2.
Additionally, there are three strategies regarding how StringIndexer will handle unseen labels when you have fit a StringIndexer on one dataset and then use it to transform another:
- throw an exception (which is the default)
- skip the row containing the unseen label entirely
- put unseen labels in a special additional bucket, at index numLabels
Examples
Let’s go back to our previous example but this time reuse our previously defined StringIndexer on the following dataset:

id | category
---|---------
0 | a
1 | b
2 | c
3 | d
4 | e

If you've not set how StringIndexer handles unseen labels or set it to "error", an exception will be thrown. However, if you had called setHandleInvalid("skip"), the following dataset will be generated:

id | category | categoryIndex
---|----------|--------------
0 | a | 0.0
1 | b | 2.0
2 | c | 1.0

Notice that the rows containing “d” or “e” do not appear.
If you call setHandleInvalid("keep"), the following dataset will be generated:

id | category | categoryIndex
---|----------|--------------
0 | a | 0.0
1 | b | 2.0
2 | c | 1.0
3 | d | 3.0
4 | e | 3.0

Notice that the rows containing "d" or "e" are mapped to index "3.0".
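A minimal sketch of opting into the "keep" strategy described above (column names follow this example):

import org.apache.spark.ml.feature.StringIndexer

// Minimal sketch: unseen labels encountered at transform time go into an
// extra bucket at index numLabels instead of raising an exception.
val keepIndexer = new StringIndexer()
  .setInputCol("category")
  .setOutputCol("categoryIndex")
  .setHandleInvalid("keep")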
Refer to the StringIndexer Scala docs for more details on the API.
import org.apache.spark.ml.feature.StringIndexer

val df = spark.createDataFrame(
Seq((0, "a"), (1, "b"), (2, "c"), (3, "a"), (4, "a"), (5, "c"))
).toDF("id", "category")

val indexer = new StringIndexer()
.setInputCol("category")
.setOutputCol("categoryIndex")

val indexed = indexer.fit(df).transform(df)
indexed.show()
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/StringIndexerExample.scala” in the Spark repo.
IndexToString
Symmetrically to StringIndexer, IndexToString maps a column of label indices back to a column containing the original labels as strings. A common use case is to produce indices from labels with StringIndexer, train a model with those indices and retrieve the original labels from the column of predicted indices with IndexToString. However, you are free to supply your own labels.
Examples
Building on the StringIndexer example, let’s assume we have the following DataFrame with columns id and categoryIndex:

id | categoryIndex
---|--------------
0 | 0.0
1 | 2.0
2 | 1.0
3 | 0.0
4 | 0.0
5 | 1.0

Applying IndexToString with categoryIndex as the input column, originalCategory as the output column, we are able to retrieve our original labels (they will be inferred from the columns’ metadata):

id | categoryIndex | originalCategory
---|---------------|-----------------
0 | 0.0 | a
1 | 2.0 | b
2 | 1.0 | c
3 | 0.0 | a
4 | 0.0 | a
5 | 1.0 | c

Refer to the IndexToString Scala docs for more details on the API.
import org.apache.spark.ml.attribute.Attribute
import org.apache.spark.ml.feature.{IndexToString, StringIndexer}

val df = spark.createDataFrame(Seq(
(0, "a"),
(1, "b"),
(2, "c"),
(3, "a"),
(4, "a"),
(5, "c")
)).toDF("id", "category")

val indexer = new StringIndexer()
.setInputCol("category")
.setOutputCol("categoryIndex")
.fit(df)
val indexed = indexer.transform(df)

println(s"Transformed string column '${indexer.getInputCol}' " +
s"to indexed column '${indexer.getOutputCol}'")
indexed.show()

val inputColSchema = indexed.schema(indexer.getOutputCol)
println(s"StringIndexer will store labels in output column metadata: " +
s"${Attribute.fromStructField(inputColSchema).toString}\n")

val converter = new IndexToString()
.setInputCol("categoryIndex")
.setOutputCol("originalCategory")

val converted = converter.transform(indexed)

println(s"Transformed indexed column '${converter.getInputCol}' back to original string " +
s"column '${converter.getOutputCol}' using labels in metadata")
converted.select("id", "categoryIndex", "originalCategory").show()
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/IndexToStringExample.scala” in the Spark repo.
OneHotEncoder
One-hot encoding maps a categorical feature, represented as a label index, to a binary vector with at most a single one-value indicating the presence of a specific feature value from among the set of all feature values. This encoding allows algorithms which expect continuous features, such as Logistic Regression, to use categorical features. For string type input data, it is common to encode categorical features using StringIndexer first.
OneHotEncoder can transform multiple columns, returning an one-hot-encoded output vector column for each input column. It is common to merge these vectors into a single feature vector using VectorAssembler.
OneHotEncoder supports the handleInvalid parameter to choose how to handle invalid input during transforming data. Available options include ‘keep’ (any invalid inputs are assigned to an extra categorical index) and ‘error’ (throw an error).
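A minimal sketch of the handleInvalid option, together with the related dropLast parameter (which controls whether the last category is dropped from the encoding; column names follow the example below):

import org.apache.spark.ml.feature.OneHotEncoder

// Minimal sketch: keep invalid category indices in an extra vector position
// and retain the last category instead of dropping it.
val tolerantEncoder = new OneHotEncoder()
  .setInputCols(Array("categoryIndex1", "categoryIndex2"))
  .setOutputCols(Array("categoryVec1", "categoryVec2"))
  .setHandleInvalid("keep")
  .setDropLast(false)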
Examples
Refer to the OneHotEncoder Scala docs for more details on the API.
import org.apache.spark.ml.feature.OneHotEncoder

val df = spark.createDataFrame(Seq(
(0.0, 1.0),
(1.0, 0.0),
(2.0, 1.0),
(0.0, 2.0),
(0.0, 1.0),
(2.0, 0.0)
)).toDF("categoryIndex1", "categoryIndex2")

val encoder = new OneHotEncoder()
.setInputCols(Array("categoryIndex1", "categoryIndex2"))
.setOutputCols(Array("categoryVec1", "categoryVec2"))
val model = encoder.fit(df)

val encoded = model.transform(df)
encoded.show()
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/OneHotEncoderExample.scala” in the Spark repo.
VectorIndexer
VectorIndexer helps index categorical features in datasets of Vectors. It can both automatically decide which features are categorical and convert original values to category indices. Specifically, it does the following:

1. Take an input column of type Vector and a parameter maxCategories.
2. Decide which features should be categorical based on the number of distinct values, where features with at most maxCategories are declared categorical.
3. Compute 0-based category indices for each categorical feature.
4. Index categorical features and transform original feature values to indices.
Indexing categorical features allows algorithms such as Decision Trees and Tree Ensembles to treat categorical features appropriately, improving performance.
Examples
In the example below, we read in a dataset of labeled points and then use VectorIndexer to decide which features should be treated as categorical. We transform the categorical feature values to their indices. This transformed data could then be passed to algorithms such as DecisionTreeRegressor that handle categorical features.
Refer to the VectorIndexer Scala docs for more details on the API.
import org.apache.spark.ml.feature.VectorIndexer

val data = spark.read.format("libsvm").load("data/mllib/sample_libsvm_data.txt")

val indexer = new VectorIndexer()
.setInputCol("features")
.setOutputCol("indexed")
.setMaxCategories(10)

val indexerModel = indexer.fit(data)

val categoricalFeatures: Set[Int] = indexerModel.categoryMaps.keys.toSet
println(s"Chose ${categoricalFeatures.size} " +
s"categorical features: ${categoricalFeatures.mkString(", ")}")

// Create new column "indexed" with categorical values transformed to indices
val indexedData = indexerModel.transform(data)
indexedData.show()
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/VectorIndexerExample.scala” in the Spark repo.
Interaction
Interaction is a Transformer which takes vector or double-valued columns, and generates a single vector column that contains the product of all combinations of one value from each input column.
For example, if you have 2 vector type columns each of which has 3 dimensions as input columns, then you’ll get a 9-dimensional vector as the output column.
Examples
Assume that we have the following DataFrame with the columns “id1”, “vec1”, and “vec2”:

id1 | vec1 | vec2
----|------|-----
1 | [1.0,2.0,3.0] | [8.0,4.0,5.0]
2 | [4.0,3.0,8.0] | [7.0,9.0,8.0]
3 | [6.0,1.0,9.0] | [2.0,3.0,6.0]
4 | [10.0,8.0,6.0] | [9.0,4.0,5.0]
5 | [9.0,2.0,7.0] | [10.0,7.0,3.0]
6 | [1.0,1.0,4.0] | [2.0,8.0,4.0]

Applying Interaction with those input columns, then interactedCol as the output column contains:

id1 | vec1 | vec2 | interactedCol
----|------|------|--------------
1 | [1.0,2.0,3.0] | [8.0,4.0,5.0] | [8.0,4.0,5.0,16.0,8.0,10.0,24.0,12.0,15.0]
2 | [4.0,3.0,8.0] | [7.0,9.0,8.0] | [56.0,72.0,64.0,42.0,54.0,48.0,112.0,144.0,128.0]
3 | [6.0,1.0,9.0] | [2.0,3.0,6.0] | [36.0,54.0,108.0,6.0,9.0,18.0,54.0,81.0,162.0]
4 | [10.0,8.0,6.0] | [9.0,4.0,5.0] | [360.0,160.0,200.0,288.0,128.0,160.0,216.0,96.0,120.0]
5 | [9.0,2.0,7.0] | [10.0,7.0,3.0] | [450.0,315.0,135.0,100.0,70.0,30.0,350.0,245.0,105.0]
6 | [1.0,1.0,4.0] | [2.0,8.0,4.0] | [12.0,48.0,24.0,12.0,48.0,24.0,48.0,192.0,96.0]

Refer to the Interaction Scala docs for more details on the API.
import org.apache.spark.ml.feature.Interaction
import org.apache.spark.ml.feature.VectorAssembler

val df = spark.createDataFrame(Seq(
(1, 1, 2, 3, 8, 4, 5),
(2, 4, 3, 8, 7, 9, 8),
(3, 6, 1, 9, 2, 3, 6),
(4, 10, 8, 6, 9, 4, 5),
(5, 9, 2, 7, 10, 7, 3),
(6, 1, 1, 4, 2, 8, 4)
)).toDF("id1", "id2", "id3", "id4", "id5", "id6", "id7")

val assembler1 = new VectorAssembler().
setInputCols(Array("id2", "id3", "id4")).
setOutputCol("vec1")

val assembled1 = assembler1.transform(df)

val assembler2 = new VectorAssembler().
setInputCols(Array("id5", "id6", "id7")).
setOutputCol("vec2")

val assembled2 = assembler2.transform(assembled1).select("id1", "vec1", "vec2")

val interaction = new Interaction()
.setInputCols(Array("id1", "vec1", "vec2"))
.setOutputCol("interactedCol")

val interacted = interaction.transform(assembled2)

interacted.show(truncate = false)
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/InteractionExample.scala” in the Spark repo.
Normalizer
Normalizer is a Transformer which transforms a dataset of Vector rows, normalizing each Vector to have unit norm. It takes parameter p, which specifies the p-norm used for normalization. (p = 2 by default.) This normalization can help standardize your input data and improve the behavior of learning algorithms.
Examples
The following example demonstrates how to load a dataset in libsvm format and then normalize each row to have unit L^1 norm and unit L^∞ norm.
Refer to the Normalizer Scala docs for more details on the API.
import org.apache.spark.ml.feature.Normalizer
import org.apache.spark.ml.linalg.Vectors

val dataFrame = spark.createDataFrame(Seq(
(0, Vectors.dense(1.0, 0.5, -1.0)),
(1, Vectors.dense(2.0, 1.0, 1.0)),
(2, Vectors.dense(4.0, 10.0, 2.0))
)).toDF("id", "features")

// Normalize each Vector using L^1 norm.
val normalizer = new Normalizer()
.setInputCol("features")
.setOutputCol("normFeatures")
.setP(1.0)

val l1NormData = normalizer.transform(dataFrame)
println("Normalized using L^1 norm")
l1NormData.show()

// Normalize each Vector using L^inf norm.
val lInfNormData = normalizer.transform(dataFrame, normalizer.p -> Double.PositiveInfinity)
println("Normalized using L^inf norm")
lInfNormData.show()
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/NormalizerExample.scala” in the Spark repo.
StandardScaler
StandardScaler transforms a dataset of Vector rows, normalizing each feature to have unit standard deviation and/or zero mean. It takes parameters:
- withStd: True by default. Scales the data to unit standard deviation.
- withMean: False by default. Centers the data with mean before scaling. It will build a dense output, so take care when applying to sparse input.
StandardScaler is an Estimator which can be fit on a dataset to produce a StandardScalerModel; this amounts to computing summary statistics. The model can then transform a Vector column in a dataset to have unit standard deviation and/or zero mean features.
Note that if the standard deviation of a feature is zero, it will return default 0.0 value in the Vector for that feature.
Examples
The following example demonstrates how to load a dataset in libsvm format and then normalize each feature to have unit standard deviation.
Refer to the StandardScaler Scala docs for more details on the API.
import org.apache.spark.ml.feature.StandardScaler

val dataFrame = spark.read.format("libsvm").load("data/mllib/sample_libsvm_data.txt")

val scaler = new StandardScaler()
.setInputCol("features")
.setOutputCol("scaledFeatures")
.setWithStd(true)
.setWithMean(false)

// Compute summary statistics by fitting the StandardScaler.
val scalerModel = scaler.fit(dataFrame)

// Normalize each feature to have unit standard deviation.
val scaledData = scalerModel.transform(dataFrame)
scaledData.show()
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/StandardScalerExample.scala” in the Spark repo.
RobustScaler
RobustScaler transforms a dataset of Vector rows, removing the median and scaling the data according to a specific quantile range (by default the IQR: Interquartile Range, quantile range between the 1st quartile and the 3rd quartile). Its behavior is quite similar to StandardScaler, however the median and the quantile range are used instead of mean and standard deviation, which make it robust to outliers. It takes parameters:
- lower: 0.25 by default. Lower quantile to calculate quantile range, shared by all features.
- upper: 0.75 by default. Upper quantile to calculate quantile range, shared by all features.
- withScaling: True by default. Scales the data to quantile range.
- withCentering: False by default. Centers the data with median before scaling. It will build a dense output, so take care when applying to sparse input.
RobustScaler is an Estimator which can be fit on a dataset to produce a RobustScalerModel; this amounts to computing quantile statistics. The model can then transform a Vector column in a dataset to have unit quantile range and/or zero median features.
Note that if the quantile range of a feature is zero, it will return default 0.0 value in the Vector for that feature.
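In effect, with both centering and scaling enabled, each feature value is transformed roughly as below (a sketch restating the description above, where median(E) is the per-feature median and Q_lower(E), Q_upper(E) are its lower and upper quantiles):
\[ Rescaled(e_i) = \frac{e_i - median(E)}{Q_{upper}(E) - Q_{lower}(E)} \]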
Examples
The following example demonstrates how to load a dataset in libsvm format and then normalize each feature to have unit quantile range.
Refer to the RobustScaler Scala docs for more details on the API.
import org.apache.spark.ml.feature.RobustScaler

val dataFrame = spark.read.format("libsvm").load("data/mllib/sample_libsvm_data.txt")

val scaler = new RobustScaler()
.setInputCol("features")
.setOutputCol("scaledFeatures")
.setWithScaling(true)
.setWithCentering(false)
.setLower(0.25)
.setUpper(0.75)

// Compute summary statistics by fitting the RobustScaler.
val scalerModel = scaler.fit(dataFrame)

// Transform each feature to have unit quantile range.
val scaledData = scalerModel.transform(dataFrame)
scaledData.show()
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/RobustScalerExample.scala” in the Spark repo.
MinMaxScaler
MinMaxScaler transforms a dataset of Vector rows, rescaling each feature to a specific range (often [0, 1]). It takes parameters:
- min: 0.0 by default. Lower bound after transformation, shared by all features.
- max: 1.0 by default. Upper bound after transformation, shared by all features.
MinMaxScaler computes summary statistics on a data set and produces a MinMaxScalerModel. The model can then transform each feature individually such that it is in the given range.
The rescaled value for a feature E is calculated as,
\[ Rescaled(e_i) = \frac{e_i - E_{min}}{E_{max} - E_{min}} \cdot (max - min) + min \]
For the case \( E_{max} == E_{min} \), \( Rescaled(e_i) = 0.5 \cdot (max + min) \).
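For intuition, a worked instance of the formula with made-up numbers: a feature column containing 2.0, 4.0 and 10.0, rescaled to the default range [0, 1], has E_min = 2 and E_max = 10, so the middle value maps to
\[ Rescaled(4.0) = \frac{4 - 2}{10 - 2} \cdot (1 - 0) + 0 = 0.25 \]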
Note that since zero values will probably be transformed to non-zero values, output of the transformer will be DenseVector even for sparse input.
Examples
The following example demonstrates how to load a dataset in libsvm format and then rescale each feature to [0, 1].
Refer to the MinMaxScaler Scala docs and the MinMaxScalerModel Scala docs for more details on the API.
import org.apache.spark.ml.feature.MinMaxScaler
import org.apache.spark.ml.linalg.Vectors

val dataFrame = spark.createDataFrame(Seq(
(0, Vectors.dense(1.0, 0.1, -1.0)),
(1, Vectors.dense(2.0, 1.1, 1.0)),
(2, Vectors.dense(3.0, 10.1, 3.0))
)).toDF("id", "features")

val scaler = new MinMaxScaler()
.setInputCol("features")
.setOutputCol("scaledFeatures")

// Compute summary statistics and generate MinMaxScalerModel
val scalerModel = scaler.fit(dataFrame)

// rescale each feature to range [min, max].
val scaledData = scalerModel.transform(dataFrame)
println(s"Features scaled to range: [${scaler.getMin}, ${scaler.getMax}]")
scaledData.select("features", "scaledFeatures").show()
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/MinMaxScalerExample.scala” in the Spark repo.
MaxAbsScaler
MaxAbsScaler transforms a dataset of Vector rows, rescaling each feature to range [-1, 1] by dividing through the maximum absolute value in each feature. It does not shift/center the data, and thus does not destroy any sparsity.
MaxAbsScaler computes summary statistics on a data set and produces a MaxAbsScalerModel. The model can then transform each feature individually to range [-1, 1].
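Equivalently, each value of a feature E is divided by that feature's maximum absolute value (a restatement of the description above):
\[ Rescaled(e_i) = \frac{e_i}{\max_j |e_j|} \]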
Examples
The following example demonstrates how to load a dataset in libsvm format and then rescale each feature to [-1, 1].
Refer to the MaxAbsScaler Scala docs and the MaxAbsScalerModel Scala docs for more details on the API.
import org.apache.spark.ml.feature.MaxAbsScaler
import org.apache.spark.ml.linalg.Vectors

val dataFrame = spark.createDataFrame(Seq(
(0, Vectors.dense(1.0, 0.1, -8.0)),
(1, Vectors.dense(2.0, 1.0, -4.0)),
(2, Vectors.dense(4.0, 10.0, 8.0))
)).toDF("id", "features")

val scaler = new MaxAbsScaler()
.setInputCol("features")
.setOutputCol("scaledFeatures")

// Compute summary statistics and generate MaxAbsScalerModel
val scalerModel = scaler.fit(dataFrame)

// rescale each feature to range [-1, 1]
val scaledData = scalerModel.transform(dataFrame)
scaledData.select("features", "scaledFeatures").show()
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/MaxAbsScalerExample.scala” in the Spark repo.
Bucketizer
Bucketizer transforms a column of continuous features to a column of feature buckets, where the buckets are specified by users. It takes a parameter:
- splits: Parameter for mapping continuous features into buckets. With n+1 splits, there are n buckets. A bucket defined by splits x,y holds values in the range [x,y) except the last bucket, which also includes y. Splits should be strictly increasing. Values at -inf, inf must be explicitly provided to cover all Double values; otherwise, values outside the splits specified will be treated as errors. Two examples of splits are Array(Double.NegativeInfinity, 0.0, 1.0, Double.PositiveInfinity) and Array(0.0, 1.0, 2.0).
Note that if you have no idea of the upper and lower bounds of the targeted column, you should add Double.NegativeInfinity and Double.PositiveInfinity as the bounds of your splits to prevent a potential out of Bucketizer bounds exception.
Note also that the splits that you provided have to be in strictly increasing order, i.e. s0 < s1 < s2 < … < sn.
More details can be found in the API docs for Bucketizer.
Examples
The following example demonstrates how to bucketize a column of Doubles into another index-wised column.
Refer to the Bucketizer Scala docs for more details on the API.
import org.apache.spark.ml.feature.Bucketizer

val splits = Array(Double.NegativeInfinity, -0.5, 0.0, 0.5, Double.PositiveInfinity)

val data = Array(-999.9, -0.5, -0.3, 0.0, 0.2, 999.9)
val dataFrame = spark.createDataFrame(data.map(Tuple1.apply)).toDF("features")

val bucketizer = new Bucketizer()
.setInputCol("features")
.setOutputCol("bucketedFeatures")
.setSplits(splits)

// Transform original data into its bucket index.
val bucketedData = bucketizer.transform(dataFrame)

println(s"Bucketizer output with ${bucketizer.getSplits.length-1} buckets")
bucketedData.show()

val splitsArray = Array(
Array(Double.NegativeInfinity, -0.5, 0.0, 0.5, Double.PositiveInfinity),
Array(Double.NegativeInfinity, -0.3, 0.0, 0.3, Double.PositiveInfinity))

val data2 = Array(
(-999.9, -999.9),
(-0.5, -0.2),
(-0.3, -0.1),
(0.0, 0.0),
(0.2, 0.4),
(999.9, 999.9))
val dataFrame2 = spark.createDataFrame(data2).toDF("features1", "features2")

val bucketizer2 = new Bucketizer()
.setInputCols(Array("features1", "features2"))
.setOutputCols(Array("bucketedFeatures1", "bucketedFeatures2"))
.setSplitsArray(splitsArray)

// Transform original data into its bucket index.
val bucketedData2 = bucketizer2.transform(dataFrame2)

println(s"Bucketizer output with [" +
s"${bucketizer2.getSplitsArray(0).length-1}, " +
s"${bucketizer2.getSplitsArray(1).length-1}] buckets for each input column")
bucketedData2.show()
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/BucketizerExample.scala” in the Spark repo.
ElementwiseProduct
ElementwiseProduct multiplies each input vector by a provided “weight” vector, using element-wise multiplication. In other words, it scales each column of the dataset by a scalar multiplier. This represents the Hadamard product between the input vector, v and transforming vector, w, to yield a result vector.
\[ \begin{pmatrix} v_1 \\ \vdots \\ v_N \end{pmatrix} \circ \begin{pmatrix} w_1 \\ \vdots \\ w_N \end{pmatrix} = \begin{pmatrix} v_1 w_1 \\ \vdots \\ v_N w_N \end{pmatrix} \]
Examples
This example below demonstrates how to transform vectors using a transforming vector value.
Refer to the ElementwiseProduct Scala docs for more details on the API.
import org.apache.spark.ml.feature.ElementwiseProduct
import org.apache.spark.ml.linalg.Vectors

// Create some vector data; also works for sparse vectors
val dataFrame = spark.createDataFrame(Seq(
("a", Vectors.dense(1.0, 2.0, 3.0)),
("b", Vectors.dense(4.0, 5.0, 6.0)))).toDF("id", "vector")

val transformingVector = Vectors.dense(0.0, 1.0, 2.0)
val transformer = new ElementwiseProduct()
.setScalingVec(transformingVector)
.setInputCol("vector")
.setOutputCol("transformedVector")

// Batch transform the vectors to create new column:
transformer.transform(dataFrame).show()
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/ElementwiseProductExample.scala” in the Spark repo.
SQLTransformer
SQLTransformer implements the transformations which are defined by SQL statement. Currently, we only support SQL syntax like "SELECT ... FROM __THIS__ ..." where "__THIS__" represents the underlying table of the input dataset. The select clause specifies the fields, constants, and expressions to display in the output, and can be any select clause that Spark SQL supports. Users can also use Spark SQL built-in function and UDFs to operate on these selected columns. For example, SQLTransformer supports statements like:
- SELECT a, a + b AS a_b FROM __THIS__
- SELECT a, SQRT(b) AS b_sqrt FROM __THIS__ where a > 5
- SELECT a, b, SUM(c) AS c_sum FROM __THIS__ GROUP BY a, b
Examples
Assume that we have the following DataFrame with columns id, v1 and v2:

id | v1 | v2
---|----|----
0 | 1.0 | 3.0
2 | 2.0 | 5.0

This is the output of the SQLTransformer with statement "SELECT *, (v1 + v2) AS v3, (v1 * v2) AS v4 FROM __THIS__":

id | v1 | v2 | v3 | v4
---|----|----|----|----
0 | 1.0 | 3.0 | 4.0 | 3.0
2 | 2.0 | 5.0 | 7.0 | 10.0

Refer to the SQLTransformer Scala docs for more details on the API.
import org.apache.spark.ml.feature.SQLTransformer

val df = spark.createDataFrame(
Seq((0, 1.0, 3.0), (2, 2.0, 5.0))).toDF("id", "v1", "v2")

val sqlTrans = new SQLTransformer().setStatement(
"SELECT *, (v1 + v2) AS v3, (v1 * v2) AS v4 FROM __THIS__")

sqlTrans.transform(df).show()
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/SQLTransformerExample.scala” in the Spark repo.
VectorAssembler
VectorAssembler is a transformer that combines a given list of columns into a single vector column. It is useful for combining raw features and features generated by different feature transformers into a single feature vector, in order to train ML models like logistic regression and decision trees. VectorAssembler accepts the following input column types: all numeric types, boolean type, and vector type. In each row, the values of the input columns will be concatenated into a vector in the specified order.
Examples
Assume that we have a DataFrame with the columns id, hour, mobile, userFeatures, and clicked:

id | hour | mobile | userFeatures | clicked
---|------|--------|--------------|--------
0 | 18 | 1.0 | [0.0, 10.0, 0.5] | 1.0

userFeatures is a vector column that contains three user features. We want to combine hour, mobile, and userFeatures into a single feature vector called features and use it to predict clicked or not. If we set VectorAssembler’s input columns to hour, mobile, and userFeatures and output column to features, after transformation we should get the following DataFrame:

id | hour | mobile | userFeatures | clicked | features
---|------|--------|--------------|---------|---------
0 | 18 | 1.0 | [0.0, 10.0, 0.5] | 1.0 | [18.0, 1.0, 0.0, 10.0, 0.5]

Refer to the VectorAssembler Scala docs for more details on the API.
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.ml.linalg.Vectors

val dataset = spark.createDataFrame(
Seq((0, 18, 1.0, Vectors.dense(0.0, 10.0, 0.5), 1.0))
).toDF("id", "hour", "mobile", "userFeatures", "clicked")

val assembler = new VectorAssembler()
.setInputCols(Array("hour", "mobile", "userFeatures"))
.setOutputCol("features")

val output = assembler.transform(dataset)
println("Assembled columns 'hour', 'mobile', 'userFeatures' to vector column 'features'")
output.select("features", "clicked").show(false)
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/VectorAssemblerExample.scala” in the Spark repo.
VectorSizeHint
It can sometimes be useful to explicitly specify the size of the vectors for a column of VectorType. For example, VectorAssembler uses size information from its input columns to produce size information and metadata for its output column. While in some cases this information can be obtained by inspecting the contents of the column, in a streaming dataframe the contents are not available until the stream is started. VectorSizeHint allows a user to explicitly specify the vector size for a column so that VectorAssembler, or other transformers that might need to know vector size, can use that column as an input.
To use VectorSizeHint a user must set the inputCol and size parameters. Applying this transformer to a dataframe produces a new dataframe with updated metadata for inputCol specifying the vector size. Downstream operations on the resulting dataframe can get this size using the metadata.
VectorSizeHint can also take an optional handleInvalid parameter which controls its behaviour when the vector column contains nulls or vectors of the wrong size. By default handleInvalid is set to “error”, indicating an exception should be thrown. This parameter can also be set to “skip”, indicating that rows containing invalid values should be filtered out from the resulting dataframe, or “optimistic”, indicating that the column should not be checked for invalid values and all rows should be kept. Note that the use of “optimistic” can cause the resulting dataframe to be in an inconsistent state, meaning the metadata for the column VectorSizeHint was applied to does not match the contents of that column. Users should take care to avoid this kind of inconsistent state.
Refer to the VectorSizeHint Scala docs for more details on the API.
import org.apache.spark.ml.feature.{VectorAssembler, VectorSizeHint}
import org.apache.spark.ml.linalg.Vectors

val dataset = spark.createDataFrame(
Seq(
(0, 18, 1.0, Vectors.dense(0.0, 10.0, 0.5), 1.0),
(0, 18, 1.0, Vectors.dense(0.0, 10.0), 0.0))
).toDF("id", "hour", "mobile", "userFeatures", "clicked")

val sizeHint = new VectorSizeHint()
.setInputCol("userFeatures")
.setHandleInvalid("skip")
.setSize(3)

val datasetWithSize = sizeHint.transform(dataset)
println("Rows where 'userFeatures' is not the right size are filtered out")
datasetWithSize.show(false)

val assembler = new VectorAssembler()
.setInputCols(Array("hour", "mobile", "userFeatures"))
.setOutputCol("features")

// This dataframe can be used by downstream transformers as before
val output = assembler.transform(datasetWithSize)
println("Assembled columns 'hour', 'mobile', 'userFeatures' to vector column 'features'")
output.select("features", "clicked").show(false)
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/VectorSizeHintExample.scala” in the Spark repo.
QuantileDiscretizer
QuantileDiscretizer takes a column with continuous features and outputs a column with binned categorical features. The number of bins is set by the numBuckets parameter. It is possible that the number of buckets used will be smaller than this value, for example, if there are too few distinct values of the input to create enough distinct quantiles.
NaN values: NaN values will be removed from the column during QuantileDiscretizer fitting. This will produce a Bucketizer model for making predictions. During the transformation, Bucketizer will raise an error when it finds NaN values in the dataset, but the user can also choose to either keep or remove NaN values within the dataset by setting handleInvalid. If the user chooses to keep NaN values, they will be handled specially and placed into their own bucket, for example, if 4 buckets are used, then non-NaN data will be put into buckets[0-3], but NaNs will be counted in a special bucket[4].
Algorithm: The bin ranges are chosen using an approximate algorithm (see the documentation for approxQuantile for a detailed description). The precision of the approximation can be controlled with the relativeError parameter. When set to zero, exact quantiles are calculated (Note: Computing exact quantiles is an expensive operation). The lower and upper bin bounds will be -Infinity and +Infinity covering all real values.
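A minimal sketch of the handleInvalid and relativeError parameters discussed above (the values chosen here are illustrative; column names follow the example below):

import org.apache.spark.ml.feature.QuantileDiscretizer

// Minimal sketch: place NaN values in their own extra bucket and loosen the
// approximate-quantile precision via relativeError.
val discretizerWithOptions = new QuantileDiscretizer()
  .setInputCol("hour")
  .setOutputCol("result")
  .setNumBuckets(3)
  .setRelativeError(0.01)
  .setHandleInvalid("keep")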
Examples
Assume that we have a DataFrame with the columns id, hour:

 id | hour
----|------
 0  | 18.0
 1  | 19.0
 2  | 8.0
 3  | 5.0
 4  | 2.2

hour is a continuous feature with Double type. We want to turn the continuous feature into a categorical one. Given numBuckets = 3, we should get the following DataFrame:

 id | hour | result
----|------|-------
 0  | 18.0 | 2.0
 1  | 19.0 | 2.0
 2  | 8.0  | 1.0
 3  | 5.0  | 1.0
 4  | 2.2  | 0.0

? Scala
? Java
? Python
Refer to the QuantileDiscretizer Scala docs for more details on the API.
import org.apache.spark.ml.feature.QuantileDiscretizer

val data = Array((0, 18.0), (1, 19.0), (2, 8.0), (3, 5.0), (4, 2.2))
val df = spark.createDataFrame(data).toDF("id", "hour")

val discretizer = new QuantileDiscretizer()
  .setInputCol("hour")
  .setOutputCol("result")
  .setNumBuckets(3)

val result = discretizer.fit(df).transform(df)
result.show(false)
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/QuantileDiscretizerExample.scala” in the Spark repo.
Imputer
The Imputer estimator completes missing values in a dataset, either using the mean or the median of the columns in which the missing values are located. The input columns should be of numeric type. Currently Imputer does not support categorical features and possibly creates incorrect values for columns containing categorical features. Imputer can impute custom values other than ‘NaN’ by .setMissingValue(custom_value). For example, .setMissingValue(0) will impute all occurrences of (0).
Note all null values in the input columns are treated as missing, and so are also imputed.
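A minimal sketch of the options mentioned above; it assumes the setStrategy and setMissingValue setters on Imputer, with column names taken from the example below.
import org.apache.spark.ml.feature.Imputer

// Use the column median as the surrogate value, and treat 0 (instead of NaN)
// as the marker for a missing value.
val medianImputer = new Imputer()
  .setInputCols(Array("a", "b"))
  .setOutputCols(Array("out_a", "out_b"))
  .setStrategy("median")
  .setMissingValue(0)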
Examples
Suppose that we have a DataFrame with the columns a and b:
a | b
------------|-----------
1.0 | Double.NaN
2.0 | Double.NaN
Double.NaN | 3.0
4.0 | 4.0
5.0 | 5.0
In this example, Imputer will replace all occurrences of Double.NaN (the default for the missing value) with the mean (the default imputation strategy) computed from the other values in the corresponding columns. In this example, the surrogate values for columns a and b are 3.0 and 4.0 respectively. After transformation, the missing values in the output columns will be replaced by the surrogate value for the relevant column.
a | b | out_a | out_b
------------|------------|-------|-------
1.0 | Double.NaN | 1.0 | 4.0
2.0 | Double.NaN | 2.0 | 4.0
Double.NaN | 3.0 | 3.0 | 3.0
4.0 | 4.0 | 4.0 | 4.0
5.0 | 5.0 | 5.0 | 5.0
? Scala
? Java
? Python
Refer to the Imputer Scala docs for more details on the API.
import org.apache.spark.ml.feature.Imputer

val df = spark.createDataFrame(Seq(
  (1.0, Double.NaN),
  (2.0, Double.NaN),
  (Double.NaN, 3.0),
  (4.0, 4.0),
  (5.0, 5.0)
)).toDF("a", "b")

val imputer = new Imputer()
  .setInputCols(Array("a", "b"))
  .setOutputCols(Array("out_a", "out_b"))

val model = imputer.fit(df)
model.transform(df).show()
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/ImputerExample.scala” in the Spark repo.
Feature Selectors
VectorSlicer
VectorSlicer is a transformer that takes a feature vector and outputs a new feature vector with a sub-array of the original features. It is useful for extracting features from a vector column.
VectorSlicer accepts a vector column with specified indices, then outputs a new vector column whose values are selected via those indices. There are two types of indices,

  1. Integer indices that represent the indices into the vector, setIndices().
  2. String indices that represent the names of features into the vector, setNames(). This requires the vector column to have an AttributeGroup since the implementation matches on the name field of an Attribute.
Specification by integer and string are both acceptable. Moreover, you can use integer index and string name simultaneously. At least one feature must be selected. Duplicate features are not allowed, so there can be no overlap between selected indices and names. Note that if names of features are selected, an exception will be thrown if empty input attributes are encountered.
The output vector will order features with the selected indices first (in the order given), followed by the selected names (in the order given).
Examples
Suppose that we have a DataFrame with the column userFeatures:

 userFeatures
------------------
 [0.0, 10.0, 0.5]

userFeatures is a vector column that contains three user features. Assume that the first column of userFeatures is all zeros, so we want to remove it and select only the last two columns. The VectorSlicer selects the last two elements with setIndices(1, 2) then produces a new vector column named features:

 userFeatures     | features
------------------|-------------
 [0.0, 10.0, 0.5] | [10.0, 0.5]

Suppose also that we have potential input attributes for the userFeatures, i.e. ["f1", "f2", "f3"], then we can use setNames("f2", "f3") to select them.

 userFeatures       | features
--------------------|--------------
 [0.0, 10.0, 0.5]   | [10.0, 0.5]
 ["f1", "f2", "f3"] | ["f2", "f3"]

? Scala
? Java
? Python
Refer to the VectorSlicer Scala docs for more details on the API.
import java.util.Arrays

import org.apache.spark.ml.attribute.{Attribute, AttributeGroup, NumericAttribute}
import org.apache.spark.ml.feature.VectorSlicer
import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.sql.{Row, SparkSession}
import org.apache.spark.sql.types.StructType

val data = Arrays.asList(
  Row(Vectors.sparse(3, Seq((0, -2.0), (1, 2.3)))),
  Row(Vectors.dense(-2.0, 2.3, 0.0))
)

val defaultAttr = NumericAttribute.defaultAttr
val attrs = Array("f1", "f2", "f3").map(defaultAttr.withName)
val attrGroup = new AttributeGroup("userFeatures", attrs.asInstanceOf[Array[Attribute]])

val dataset = spark.createDataFrame(data, StructType(Array(attrGroup.toStructField())))

val slicer = new VectorSlicer().setInputCol("userFeatures").setOutputCol("features")

slicer.setIndices(Array(1)).setNames(Array("f3"))
// or slicer.setIndices(Array(1, 2)), or slicer.setNames(Array("f2", "f3"))

val output = slicer.transform(dataset)
output.show(false)
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/VectorSlicerExample.scala” in the Spark repo.
RFormula
RFormula selects columns specified by an R model formula. Currently we support a limited subset of the R operators, including ‘~’, ‘.’, ‘:’, ‘+’, and ‘-‘. The basic operators are:
? ~ separate target and terms
? + concat terms, “+ 0” means removing intercept
? - remove a term, “- 1” means removing intercept
? : interaction (multiplication for numeric values, or binarized categorical values)
? . all columns except target
Suppose a and b are double columns, we use the following simple examples to illustrate the effect of RFormula:
? y ~ a + b means model y ~ w0 + w1 * a + w2 * b where w0 is the intercept and w1, w2 are coefficients.
? y ~ a + b + a:b - 1 means model y ~ w1 * a + w2 * b + w3 * a * b where w1, w2, w3 are coefficients.
RFormula produces a vector column of features and a double or string column of label. Like when formulas are used in R for linear regression, numeric columns will be cast to doubles. As to string input columns, they will first be transformed with StringIndexer using ordering determined by stringOrderType, and the last category after ordering is dropped, then the doubles will be one-hot encoded.
Suppose a string feature column containing values {‘b’, ‘a’, ‘b’, ‘a’, ‘c’, ‘b’}, we set stringOrderType to control the encoding:

 stringOrderType | Category mapped to 0 by StringIndexer | Category dropped by RFormula
-----------------|---------------------------------------|-----------------------------------
 'frequencyDesc' | most frequent category ('b')          | least frequent category ('c')
 'frequencyAsc'  | least frequent category ('c')         | most frequent category ('b')
 'alphabetDesc'  | last alphabetical category ('c')      | first alphabetical category ('a')
 'alphabetAsc'   | first alphabetical category ('a')     | last alphabetical category ('c')

If the label column is of type string, it will be first transformed to double with StringIndexer using frequencyDesc ordering. If the label column does not exist in the DataFrame, the output label column will be created from the specified response variable in the formula.
Note: The ordering option stringOrderType is NOT used for the label column. When the label column is indexed, it uses the default descending frequency ordering in StringIndexer.
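As a minimal sketch of overriding the category ordering shown in the table above; this assumes the ordering is exposed on RFormula through a setStringIndexerOrderType setter (the stringOrderType parameter referred to above), with the formula and columns matching the example below.
import org.apache.spark.ml.feature.RFormula

// Order string categories alphabetically ascending, so 'a' maps to index 0 and the
// last alphabetical category is the one dropped by the one-hot encoding step.
val orderedFormula = new RFormula()
  .setFormula("clicked ~ country + hour")
  .setStringIndexerOrderType("alphabetAsc")
  .setFeaturesCol("features")
  .setLabelCol("label")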
Examples
Assume that we have a DataFrame with the columns id, country, hour, and clicked:

 id | country | hour | clicked
----|---------|------|--------
 7  | "US"    | 18   | 1.0
 8  | "CA"    | 12   | 0.0
 9  | "NZ"    | 15   | 0.0

If we use RFormula with a formula string of clicked ~ country + hour, which indicates that we want to predict clicked based on country and hour, after transformation we should get the following DataFrame:

 id | country | hour | clicked | features         | label
----|---------|------|---------|------------------|------
 7  | "US"    | 18   | 1.0     | [0.0, 0.0, 18.0] | 1.0
 8  | "CA"    | 12   | 0.0     | [0.0, 1.0, 12.0] | 0.0
 9  | "NZ"    | 15   | 0.0     | [1.0, 0.0, 15.0] | 0.0

? Scala
? Java
? Python
Refer to the RFormula Scala docs for more details on the API.
import org.apache.spark.ml.feature.RFormula

val dataset = spark.createDataFrame(Seq(
  (7, "US", 18, 1.0),
  (8, "CA", 12, 0.0),
  (9, "NZ", 15, 0.0)
)).toDF("id", "country", "hour", "clicked")

val formula = new RFormula()
  .setFormula("clicked ~ country + hour")
  .setFeaturesCol("features")
  .setLabelCol("label")

val output = formula.fit(dataset).transform(dataset)
output.select("features", "label").show()
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/RFormulaExample.scala” in the Spark repo.
ChiSqSelector
ChiSqSelector stands for Chi-Squared feature selection. It operates on labeled data with categorical features. ChiSqSelector uses the Chi-Squared test of independence to decide which features to choose. It supports five selection methods: numTopFeatures, percentile, fpr, fdr, fwe:
? numTopFeatures chooses a fixed number of top features according to a chi-squared test. This is akin to yielding the features with the most predictive power.
? percentile is similar to numTopFeatures but chooses a fraction of all features instead of a fixed number.
? fpr chooses all features whose p-values are below a threshold, thus controlling the false positive rate of selection.
? fdr uses the Benjamini-Hochberg procedure to choose all features whose false discovery rate is below a threshold.
? fwe chooses all features whose p-values are below a threshold. The threshold is scaled by 1/numFeatures, thus controlling the family-wise error rate of selection.
By default, the selection method is numTopFeatures, with the default number of top features set to 50. The user can choose a selection method using setSelectorType.
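A minimal sketch of choosing a different selection method; it assumes setSelectorType together with the per-method setter (here setFpr) on ChiSqSelector, with an illustrative threshold.
import org.apache.spark.ml.feature.ChiSqSelector

// Select every feature whose chi-squared p-value is below 0.05, instead of a fixed
// number of top features.
val fprSelector = new ChiSqSelector()
  .setSelectorType("fpr")
  .setFpr(0.05)
  .setFeaturesCol("features")
  .setLabelCol("clicked")
  .setOutputCol("selectedFeatures")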
Examples
Assume that we have a DataFrame with the columns id, features, and clicked, which is used as our target to be predicted:

 id | features              | clicked
----|-----------------------|--------
 7  | [0.0, 0.0, 18.0, 1.0] | 1.0
 8  | [0.0, 1.0, 12.0, 0.0] | 0.0
 9  | [1.0, 0.0, 15.0, 0.1] | 0.0

If we use ChiSqSelector with numTopFeatures = 1, then according to our label clicked the last column in our features is chosen as the most useful feature:

 id | features              | clicked | selectedFeatures
----|-----------------------|---------|-----------------
 7  | [0.0, 0.0, 18.0, 1.0] | 1.0     | [1.0]
 8  | [0.0, 1.0, 12.0, 0.0] | 0.0     | [0.0]
 9  | [1.0, 0.0, 15.0, 0.1] | 0.0     | [0.1]

? Scala
? Java
? Python
Refer to the ChiSqSelector Scala docs for more details on the API.
import org.apache.spark.ml.feature.ChiSqSelector
import org.apache.spark.ml.linalg.Vectors

val data = Seq(
  (7, Vectors.dense(0.0, 0.0, 18.0, 1.0), 1.0),
  (8, Vectors.dense(0.0, 1.0, 12.0, 0.0), 0.0),
  (9, Vectors.dense(1.0, 0.0, 15.0, 0.1), 0.0)
)

val df = spark.createDataset(data).toDF("id", "features", "clicked")

val selector = new ChiSqSelector()
  .setNumTopFeatures(1)
  .setFeaturesCol("features")
  .setLabelCol("clicked")
  .setOutputCol("selectedFeatures")

val result = selector.fit(df).transform(df)

println(s"ChiSqSelector output with top ${selector.getNumTopFeatures} features selected")
result.show()
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/ChiSqSelectorExample.scala” in the Spark repo.
UnivariateFeatureSelector
UnivariateFeatureSelector operates on categorical/continuous labels with categorical/continuous features. Users can set featureType and labelType, and Spark will pick the score function to use based on the specified featureType and labelType.

 featureType | labelType   | score function
-------------|-------------|-------------------------
 categorical | categorical | chi-squared (chi2)
 continuous  | categorical | ANOVATest (f_classif)
 continuous  | continuous  | F-value (f_regression)

It supports five selection modes: numTopFeatures, percentile, fpr, fdr, fwe:
? numTopFeatures chooses a fixed number of top features.
? percentile is similar to numTopFeatures but chooses a fraction of all features instead of a fixed number.
? fpr chooses all features whose p-values are below a threshold, thus controlling the false positive rate of selection.
? fdr uses the Benjamini-Hochberg procedure to choose all features whose false discovery rate is below a threshold.
? fwe chooses all features whose p-values are below a threshold. The threshold is scaled by 1/numFeatures, thus controlling the family-wise error rate of selection.
By default, the selection mode is numTopFeatures, with the default selectionThreshold set to 50.
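A minimal sketch of using the percentile mode instead of numTopFeatures; it assumes selectionThreshold is then read as a fraction of all features (the 0.5 below is illustrative).
import org.apache.spark.ml.feature.UnivariateFeatureSelector

// Keep the top half of the features, ranked by the score function picked for
// continuous features with a categorical label (f_classif).
val percentileSelector = new UnivariateFeatureSelector()
  .setFeatureType("continuous")
  .setLabelType("categorical")
  .setSelectionMode("percentile")
  .setSelectionThreshold(0.5)
  .setFeaturesCol("features")
  .setLabelCol("label")
  .setOutputCol("selectedFeatures")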
Examples
Assume that we have a DataFrame with the columns id, features, and label, which is used as our target to be predicted:

 id | features                       | label
----|--------------------------------|------
 1  | [1.7, 4.4, 7.6, 5.8, 9.6, 2.3] | 3.0
 2  | [8.8, 7.3, 5.7, 7.3, 2.2, 4.1] | 2.0
 3  | [1.2, 9.5, 2.5, 3.1, 8.7, 2.5] | 3.0
 4  | [3.7, 9.2, 6.1, 4.1, 7.5, 3.8] | 2.0
 5  | [8.9, 5.2, 7.8, 8.3, 5.2, 3.0] | 4.0
 6  | [7.9, 8.5, 9.2, 4.0, 9.4, 2.1] | 4.0

If we set featureType to continuous and labelType to categorical with numTopFeatures = 1, the last column in our features is chosen as the most useful feature:

 id | features                       | label | selectedFeatures
----|--------------------------------|-------|-----------------
 1  | [1.7, 4.4, 7.6, 5.8, 9.6, 2.3] | 3.0   | [2.3]
 2  | [8.8, 7.3, 5.7, 7.3, 2.2, 4.1] | 2.0   | [4.1]
 3  | [1.2, 9.5, 2.5, 3.1, 8.7, 2.5] | 3.0   | [2.5]
 4  | [3.7, 9.2, 6.1, 4.1, 7.5, 3.8] | 2.0   | [3.8]
 5  | [8.9, 5.2, 7.8, 8.3, 5.2, 3.0] | 4.0   | [3.0]
 6  | [7.9, 8.5, 9.2, 4.0, 9.4, 2.1] | 4.0   | [2.1]

? Scala
? Java
? Python
Refer to the UnivariateFeatureSelector Scala docs for more details on the API.
import org.apache.spark.ml.feature.UnivariateFeatureSelector
import org.apache.spark.ml.linalg.Vectors

val data = Seq(
  (1, Vectors.dense(1.7, 4.4, 7.6, 5.8, 9.6, 2.3), 3.0),
  (2, Vectors.dense(8.8, 7.3, 5.7, 7.3, 2.2, 4.1), 2.0),
  (3, Vectors.dense(1.2, 9.5, 2.5, 3.1, 8.7, 2.5), 3.0),
  (4, Vectors.dense(3.7, 9.2, 6.1, 4.1, 7.5, 3.8), 2.0),
  (5, Vectors.dense(8.9, 5.2, 7.8, 8.3, 5.2, 3.0), 4.0),
  (6, Vectors.dense(7.9, 8.5, 9.2, 4.0, 9.4, 2.1), 4.0)
)

val df = spark.createDataset(data).toDF("id", "features", "label")

val selector = new UnivariateFeatureSelector()
  .setFeatureType("continuous")
  .setLabelType("categorical")
  .setSelectionMode("numTopFeatures")
  .setSelectionThreshold(1)
  .setFeaturesCol("features")
  .setLabelCol("label")
  .setOutputCol("selectedFeatures")

val result = selector.fit(df).transform(df)

println(s"UnivariateFeatureSelector output with top ${selector.getSelectionThreshold}" +
  s" features selected using f_classif")
result.show()
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/UnivariateFeatureSelectorExample.scala” in the Spark repo.
VarianceThresholdSelector
VarianceThresholdSelector is a selector that removes low-variance features. Features with a variance not greater than the varianceThreshold will be removed. If not set, varianceThreshold defaults to 0, which means only features with variance 0 (i.e. features that have the same value in all samples) will be removed.
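The variances quoted in the example below appear to be unbiased sample variances (squared deviations divided by n - 1); a minimal plain-Scala sketch that reproduces the first quoted value from the first feature column of the example:
// Values of the first feature across the six example rows below: 6, 0, 0, 0, 8, 8.
val firstFeature = Seq(6.0, 0.0, 0.0, 0.0, 8.0, 8.0)
val mean = firstFeature.sum / firstFeature.size
// Unbiased sample variance: divide the summed squared deviations by (n - 1).
val sampleVariance = firstFeature.map(x => math.pow(x - mean, 2)).sum / (firstFeature.size - 1)
println(f"$sampleVariance%.2f") // prints 16.67, matching the table below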
Examples
Assume that we have a DataFrame with the columns id and features:

 id | features
----|-------------------------------
 1  | [6.0, 7.0, 0.0, 7.0, 6.0, 0.0]
 2  | [0.0, 9.0, 6.0, 0.0, 5.0, 9.0]
 3  | [0.0, 9.0, 3.0, 0.0, 5.0, 5.0]
 4  | [0.0, 9.0, 8.0, 5.0, 6.0, 4.0]
 5  | [8.0, 9.0, 6.0, 5.0, 4.0, 4.0]
 6  | [8.0, 9.0, 6.0, 0.0, 0.0, 0.0]

The variances of the 6 features are 16.67, 0.67, 8.17, 10.17, 5.07, and 11.47, respectively. If we use VarianceThresholdSelector with varianceThreshold = 8.0, then the features with variance <= 8.0 are removed:

 id | features                       | selectedFeatures
----|--------------------------------|---------------------
 1  | [6.0, 7.0, 0.0, 7.0, 6.0, 0.0] | [6.0, 0.0, 7.0, 0.0]
 2  | [0.0, 9.0, 6.0, 0.0, 5.0, 9.0] | [0.0, 6.0, 0.0, 9.0]
 3  | [0.0, 9.0, 3.0, 0.0, 5.0, 5.0] | [0.0, 3.0, 0.0, 5.0]
 4  | [0.0, 9.0, 8.0, 5.0, 6.0, 4.0] | [0.0, 8.0, 5.0, 4.0]
 5  | [8.0, 9.0, 6.0, 5.0, 4.0, 4.0] | [8.0, 6.0, 5.0, 4.0]
 6  | [8.0, 9.0, 6.0, 0.0, 0.0, 0.0] | [8.0, 6.0, 0.0, 0.0]

? Scala
? Java
? Python
Refer to the VarianceThresholdSelector Scala docs for more details on the API.
import org.apache.spark.ml.feature.VarianceThresholdSelector
import org.apache.spark.ml.linalg.Vectors

val data = Seq(
  (1, Vectors.dense(6.0, 7.0, 0.0, 7.0, 6.0, 0.0)),
  (2, Vectors.dense(0.0, 9.0, 6.0, 0.0, 5.0, 9.0)),
  (3, Vectors.dense(0.0, 9.0, 3.0, 0.0, 5.0, 5.0)),
  (4, Vectors.dense(0.0, 9.0, 8.0, 5.0, 6.0, 4.0)),
  (5, Vectors.dense(8.0, 9.0, 6.0, 5.0, 4.0, 4.0)),
  (6, Vectors.dense(8.0, 9.0, 6.0, 0.0, 0.0, 0.0))
)

val df = spark.createDataset(data).toDF("id", "features")

val selector = new VarianceThresholdSelector()
  .setVarianceThreshold(8.0)
  .setFeaturesCol("features")
  .setOutputCol("selectedFeatures")

val result = selector.fit(df).transform(df)

println(s"Output: Features with variance lower than" +
  s" ${selector.getVarianceThreshold} are removed.")
result.show()
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/VarianceThresholdSelectorExample.scala” in the Spark repo.
Locality Sensitive Hashing
Locality Sensitive Hashing (LSH) is an important class of hashing techniques, which is commonly used in clustering, approximate nearest neighbor search and outlier detection with large datasets.
The general idea of LSH is to use a family of functions (“LSH families”) to hash data points into buckets, so that the data points which are close to each other are in the same buckets with high probability, while data points that are far away from each other are very likely in different buckets. An LSH family is formally defined as follows.
In a metric space (M, d), where M is a set and d is a distance function on M, an LSH family is a family of functions h that satisfy the following properties:
$$\forall p, q \in M,\quad d(p,q) \leq r_1 \Rightarrow \Pr(h(p) = h(q)) \geq p_1$$
$$\forall p, q \in M,\quad d(p,q) \geq r_2 \Rightarrow \Pr(h(p) = h(q)) \leq p_2$$
This LSH family is called (r1, r2, p1, p2)-sensitive.
In Spark, different LSH families are implemented in separate classes (e.g., MinHash), and APIs for feature transformation, approximate similarity join and approximate nearest neighbor are provided in each class.
In LSH, we define a false positive as a pair of distant input features (with $d(p,q) \geq r_2$) which are hashed into the same bucket, and we define a false negative as a pair of nearby features (with $d(p,q) \leq r_1$) which are hashed into different buckets.
LSH Operations
We describe the major types of operations which LSH can be used for. A fitted LSH model has methods for each of these operations.
Feature Transformation
Feature transformation is the basic functionality to add hashed values as a new column. This can be useful for dimensionality reduction. Users can specify input and output column names by setting inputCol and outputCol.
LSH also supports multiple LSH hash tables. Users can specify the number of hash tables by setting numHashTables. This is also used for OR-amplification in approximate similarity join and approximate nearest neighbor. Increasing the number of hash tables will increase the accuracy but will also increase communication cost and running time.
The type of outputCol is Seq[Vector] where the dimension of the array equals numHashTables, and the dimensions of the vectors are currently set to 1. In future releases, we will implement AND-amplification so that users can specify the dimensions of these vectors.
Approximate Similarity Join
Approximate similarity join takes two datasets and approximately returns pairs of rows in the datasets whose distance is smaller than a user-defined threshold. Approximate similarity join supports both joining two different datasets and self-joining. Self-joining will produce some duplicate pairs.
Approximate similarity join accepts both transformed and untransformed datasets as input. If an untransformed dataset is used, it will be transformed automatically. In this case, the hash signature will be created as outputCol.
In the joined dataset, the origin datasets can be queried in datasetA and datasetB. A distance column will be added to the output dataset to show the true distance between each pair of rows returned.
Approximate Nearest Neighbor Search
Approximate nearest neighbor search takes a dataset (of feature vectors) and a key (a single feature vector), and it approximately returns a specified number of rows in the dataset that are closest to the vector.
Approximate nearest neighbor search accepts both transformed and untransformed datasets as input. If an untransformed dataset is used, it will be transformed automatically. In this case, the hash signature will be created as outputCol.
A distance column will be added to the output dataset to show the true distance between each output row and the searched key.
Note: Approximate nearest neighbor search will return fewer than k rows when there are not enough candidates in the hash bucket.
LSH Algorithms
Bucketed Random Projection for Euclidean Distance
Bucketed Random Projection is an LSH family for Euclidean distance. The Euclidean distance is defined as follows:
$$d(\mathbf{x}, \mathbf{y}) = \sqrt{\sum_i (x_i - y_i)^2}$$
Its LSH family projects feature vectors $\mathbf{x}$ onto a random unit vector $\mathbf{v}$ and partitions the projected results into hash buckets:
$$h(\mathbf{x}) = \left\lfloor \frac{\mathbf{x} \cdot \mathbf{v}}{r} \right\rfloor$$
where r is a user-defined bucket length. The bucket length can be used to control the average size of hash buckets (and thus the number of buckets). A larger bucket length (i.e., fewer buckets) increases the probability of features being hashed to the same bucket (increasing the numbers of true and false positives).
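For instance, with a purely illustrative projected value $\mathbf{x} \cdot \mathbf{v} = 3.2$ and bucket length $r = 2.0$, the feature lands in bucket
$$h(\mathbf{x}) = \left\lfloor \frac{3.2}{2.0} \right\rfloor = 1.$$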
Bucketed Random Projection accepts arbitrary vectors as input features, and supports both sparse and dense vectors.
? Scala
? Java
? Python
Refer to the BucketedRandomProjectionLSH Scala docs for more details on the API.
import org.apache.spark.ml.feature.BucketedRandomProjectionLSH
import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

val dfA = spark.createDataFrame(Seq(
  (0, Vectors.dense(1.0, 1.0)),
  (1, Vectors.dense(1.0, -1.0)),
  (2, Vectors.dense(-1.0, -1.0)),
  (3, Vectors.dense(-1.0, 1.0))
)).toDF("id", "features")

val dfB = spark.createDataFrame(Seq(
  (4, Vectors.dense(1.0, 0.0)),
  (5, Vectors.dense(-1.0, 0.0)),
  (6, Vectors.dense(0.0, 1.0)),
  (7, Vectors.dense(0.0, -1.0))
)).toDF("id", "features")

val key = Vectors.dense(1.0, 0.0)

val brp = new BucketedRandomProjectionLSH()
  .setBucketLength(2.0)
  .setNumHashTables(3)
  .setInputCol("features")
  .setOutputCol("hashes")

val model = brp.fit(dfA)

// Feature Transformation
println("The hashed dataset where hashed values are stored in the column 'hashes':")
model.transform(dfA).show()

// Compute the locality sensitive hashes for the input rows, then perform approximate
// similarity join.
// We could avoid computing hashes by passing in the already-transformed dataset, e.g.
// model.approxSimilarityJoin(transformedA, transformedB, 1.5)
println("Approximately joining dfA and dfB on Euclidean distance smaller than 1.5:")
model.approxSimilarityJoin(dfA, dfB, 1.5, "EuclideanDistance")
  .select(col("datasetA.id").alias("idA"),
    col("datasetB.id").alias("idB"),
    col("EuclideanDistance")).show()

// Compute the locality sensitive hashes for the input rows, then perform approximate nearest
// neighbor search.
// We could avoid computing hashes by passing in the already-transformed dataset, e.g.
// model.approxNearestNeighbors(transformedA, key, 2)
println("Approximately searching dfA for 2 nearest neighbors of the key:")
model.approxNearestNeighbors(dfA, key, 2).show()
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/BucketedRandomProjectionLSHExample.scala” in the Spark repo.
MinHash for Jaccard Distance
MinHash is an LSH family for Jaccard distance where input features are sets of natural numbers. Jaccard distance of two sets is defined by the cardinality of their intersection and union:
$$d(A, B) = 1 - \frac{|A \cap B|}{|A \cup B|}$$
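For illustration, with hypothetical sets $A = \{1, 2, 3\}$ and $B = \{2, 3, 4, 5\}$, the intersection has 2 elements and the union has 5, so
$$d(A, B) = 1 - \frac{2}{5} = 0.6.$$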
MinHash applies a random hash function g to each element in the set and takes the minimum of all hashed values:
$$h(A) = \min_{a \in A} g(a)$$
The input sets for MinHash are represented as binary vectors, where the vector indices represent the elements themselves and the non-zero values in the vector represent the presence of that element in the set. While both dense and sparse vectors are supported, typically sparse vectors are recommended for efficiency. For example, Vectors.sparse(10, Array[(2, 1.0), (3, 1.0), (5, 1.0)]) means there are 10 elements in the space. This set contains elem 2, elem 3 and elem 5. All non-zero values are treated as binary “1” values.
Note: Empty sets cannot be transformed by MinHash, which means any input vector must have at least 1 non-zero entry.
? Scala
? Java
? Python
Refer to the MinHashLSH Scala docs for more details on the API.
import org.apache.spark.ml.feature.MinHashLSH
import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

val dfA = spark.createDataFrame(Seq(
  (0, Vectors.sparse(6, Seq((0, 1.0), (1, 1.0), (2, 1.0)))),
  (1, Vectors.sparse(6, Seq((2, 1.0), (3, 1.0), (4, 1.0)))),
  (2, Vectors.sparse(6, Seq((0, 1.0), (2, 1.0), (4, 1.0))))
)).toDF("id", "features")

val dfB = spark.createDataFrame(Seq(
  (3, Vectors.sparse(6, Seq((1, 1.0), (3, 1.0), (5, 1.0)))),
  (4, Vectors.sparse(6, Seq((2, 1.0), (3, 1.0), (5, 1.0)))),
  (5, Vectors.sparse(6, Seq((1, 1.0), (2, 1.0), (4, 1.0))))
)).toDF("id", "features")

val key = Vectors.sparse(6, Seq((1, 1.0), (3, 1.0)))

val mh = new MinHashLSH()
  .setNumHashTables(5)
  .setInputCol("features")
  .setOutputCol("hashes")

val model = mh.fit(dfA)

// Feature Transformation
println("The hashed dataset where hashed values are stored in the column 'hashes':")
model.transform(dfA).show()

// Compute the locality sensitive hashes for the input rows, then perform approximate
// similarity join.
// We could avoid computing hashes by passing in the already-transformed dataset, e.g.
// model.approxSimilarityJoin(transformedA, transformedB, 0.6)
println("Approximately joining dfA and dfB on Jaccard distance smaller than 0.6:")
model.approxSimilarityJoin(dfA, dfB, 0.6, "JaccardDistance")
  .select(col("datasetA.id").alias("idA"),
    col("datasetB.id").alias("idB"),
    col("JaccardDistance")).show()

// Compute the locality sensitive hashes for the input rows, then perform approximate nearest
// neighbor search.
// We could avoid computing hashes by passing in the already-transformed dataset, e.g.
// model.approxNearestNeighbors(transformedA, key, 2)
// It may return less than 2 rows when not enough approximate near-neighbor candidates are
// found.
println("Approximately searching dfA for 2 nearest neighbors of the key:")
model.approxNearestNeighbors(dfA, key, 2).show()
Find full example code at “examples/src/main/scala/org/apache/spark/examples/ml/MinHashLSHExample.scala” in the Spark repo.
