
Logistic regression (reposted)


Original article: https://en.wikipedia.org/wiki/Logistic_regression

In statistics, logistic regression, or logit regression, or logit model[1] is a regression model where the dependent variable (DV) is categorical.

Logistic regression was developed by statistician David Cox in 1958.[2][3] The binary logistic model is used to estimate the probability of a binary response based on one or more predictor (or independent) variables (features). As such it is not a classification method. It could be called a qualitative response/discrete choice model in the terminology of economics.

Logistic regression measures the relationship between the categorical dependent variable and one or more independent variables by estimating probabilities using a logistic function, which is the cumulative distribution function of the logistic distribution. Thus, it treats the same set of problems as probit regression using similar techniques, with the latter using a cumulative normal distribution curve instead. Equivalently, in the latent variable interpretations of these two methods, the first assumes a standard logistic distribution of errors and the second a standard normal distribution of errors.[citation needed]

Logistic regression can be seen as a special case of the generalized linear model and thus analogous to linear regression. The model of logistic regression, however, is based on quite different assumptions (about the relationship between dependent and independent variables) from those of linear regression. In particular the key differences of these two models can be seen in the following two features of logistic regression. First, the conditional distribution {\displaystyle y\mid x} is a Bernoulli distribution rather than a Gaussian distribution, because the dependent variable is binary. Second, the predicted values are probabilities and are therefore restricted to (0,1) through the logistic distribution function, because logistic regression predicts the probability of particular outcomes.

Logistic regression is an alternative to Fisher's 1936 method, linear discriminant analysis.[4] If the assumptions of linear discriminant analysis hold, the conditioning can be reversed to produce logistic regression. The converse is not true, however, because logistic regression does not require the multivariate normal assumption of discriminant analysis.[citation needed]


Contents

  • 1 Fields and example applications
    • 1.1 Example: Probability of passing an exam versus hours of study
  • 2 Basics
  • 3 Latent variable interpretation
  • 4 Logistic function, odds, odds ratio, and logit
    • 4.1 Definition of the logistic function
    • 4.2 Definition of the inverse of the logistic function
    • 4.3 Interpretation of these terms
    • 4.4 Definition of the odds
    • 4.5 Definition of the odds ratio
    • 4.6 Multiple explanatory variables
  • 5 Model fitting
    • 5.1 Estimation
      • 5.1.1 Maximum likelihood estimation
    • 5.2 Evaluating goodness of fit
      • 5.2.1 Deviance and likelihood ratio tests
      • 5.2.2 Pseudo-R2s
      • 5.2.3 Hosmer–Lemeshow test
  • 6 Coefficients
    • 6.1 Likelihood ratio test
    • 6.2 Wald statistic
    • 6.3 Case-control sampling
  • 7 Formal mathematical specification
    • 7.1 Setup
    • 7.2 As a generalized linear model
    • 7.3 As a latent-variable model
    • 7.4 As a two-way latent-variable model
      • 7.4.1 Example
    • 7.5 As a "log-linear" model
    • 7.6 As a single-layer perceptron
    • 7.7 In terms of binomial data
  • 8 Bayesian logistic regression
    • 8.1 Gibbs sampling with an approximating distribution
  • 9 Extensions
  • 10 Software
  • 11 See also
  • 12 References
  • 13 Further reading
  • 14 External links


Fields and example applications

Logistic regression is used widely in many fields, including the medical and social sciences. For example, the Trauma and Injury Severity Score (TRISS), which is widely used to predict mortality in injured patients, was originally developed by Boyd et al. using logistic regression.[5] Many other medical scales used to assess severity of a patient have been developed using logistic regression.[6][7][8][9] Logistic regression may be used to predict whether a patient has a given disease (e.g. diabetes; coronary heart disease), based on observed characteristics of the patient (age, sex, body mass index, results of various blood tests, etc.).[1][10] Another example might be to predict whether an American voter will vote Democratic or Republican, based on age, income, sex, race, state of residence, votes in previous elections, etc.[11] The technique can also be used in engineering, especially for predicting the probability of failure of a given process, system or product.[12][13] It is also used in marketing applications such as prediction of a customer's propensity to purchase a product or halt a subscription, etc.[citation needed] In economics it can be used to predict the likelihood of a person's choosing to be in the labor force, and a business application would be to predict the likelihood of a homeowner defaulting on a mortgage. Conditional random fields, an extension of logistic regression to sequential data, are used in natural language processing.

Example: Probability of passing an exam versus hours of study

A group of 20 students spend between 0 and 6 hours studying for an exam. How does the number of hours spent studying affect the probability that the student will pass the exam?

The table shows the number of hours each student spent studying, and whether they passed (1) or failed (0).

Hours   0.50  0.75  1.00  1.25  1.50  1.75  1.75  2.00  2.25  2.50  2.75  3.00  3.25  3.50  4.00  4.25  4.50  4.75  5.00  5.50
Pass    0     0     0     0     0     0     1     0     1     0     1     0     1     0     1     1     1     1     1     1

The graph shows the probability of passing the exam versus the number of hours studying, with the logistic regression curve fitted to the data.

Graph of a logistic regression curve showing probability of passing an exam versus hours studying

The logistic regression analysis gives the following output.

            Coefficient   Std. Error   z-value   P-value (Wald)
Intercept   -4.0777       1.7610       -2.316    0.0206
Hours        1.5046       0.6287        2.393    0.0167

The output indicates that hours studying is significantly associated with the probability of passing the exam (p = 0.0167, Wald test). The output also provides the coefficients for Intercept = -4.0777 and Hours = 1.5046. These coefficients are entered in the logistic regression equation to estimate the probability of passing the exam:

  • Probability of passing exam = 1/(1 + exp(-(-4.0777 + 1.5046*Hours)))

For example, for a student who studies 2 hours, entering the value Hours = 2 into the equation gives an estimated probability of passing the exam of p = 0.26:

  • Probability of passing exam = 1/(1 + exp(-(-4.0777 + 1.5046*2))) = 0.26.

Similarly, for a student who studies 4 hours, the estimated probability of passing the exam is p = 0.87:

  • Probability of passing exam = 1/(1 + exp(-(-4.0777 + 1.5046*4))) = 0.87.

This table shows the probability of passing the exam for several values of hours studying.

Hours of study   Probability of passing exam
1                0.07
2                0.26
3                0.61
4                0.87
5                0.97
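
These probabilities can be reproduced directly from the fitted equation. The following is a minimal Python sketch (the coefficients are taken from the regression output above; the function name is chosen here for illustration):

```python
import math

# Fitted coefficients from the logistic regression output above.
INTERCEPT = -4.0777
SLOPE = 1.5046  # per hour of study

def pass_probability(hours: float) -> float:
    """Estimated probability of passing: 1/(1 + exp(-(b0 + b1*hours)))."""
    return 1.0 / (1.0 + math.exp(-(INTERCEPT + SLOPE * hours)))

for h in (1, 2, 3, 4, 5):
    print(h, round(pass_probability(h), 2))
# prints 0.07, 0.26, 0.61, 0.87, 0.97 -- matching the table above
```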

The output from the logistic regression analysis gives a p-value of p=0.0167, which is based on the Wald z-score. Rather than the Wald method, the recommended method to calculate the p-value for logistic regression is the Likelihood Ratio Test (LRT), which for this data gives p=0.0006.

Basics

Logistic regression can be binomial, ordinal or multinomial. Binomial or binary logistic regression deals with situations in which the observed outcome for a dependent variable can have only two possible types (for example, "dead" vs. "alive" or "win" vs. "loss"). Multinomial logistic regression deals with situations where the outcome can have three or more possible types (e.g., "disease A" vs. "disease B" vs. "disease C") that are not ordered. Ordinal logistic regression deals with dependent variables that are ordered. In binary logistic regression, the outcome is usually coded as "0" or "1", as this leads to the most straightforward interpretation.[14] If a particular observed outcome for the dependent variable is the noteworthy possible outcome (referred to as a "success" or a "case") it is usually coded as "1" and the contrary outcome (referred to as a "failure" or a "noncase") as "0". Logistic regression is used to predict the odds of being a case based on the values of the independent variables (predictors). The odds are defined as the probability that a particular outcome is a case divided by the probability that it is a noncase.

Like other forms of regression analysis, logistic regression makes use of one or more predictor variables that may be either continuous or categorical. Unlike ordinary linear regression, however, logistic regression is used for predicting binary dependent variables (treating the dependent variable as the outcome of a Bernoulli trial) rather than a continuous outcome. Given this difference, the assumptions of linear regression are violated. In particular, the residuals cannot be normally distributed. In addition, linear regression may make nonsensical predictions for a binary dependent variable. What is needed is a way to convert a binary variable into a continuous one that can take on any real value (negative or positive). To do that, logistic regression first takes the odds of the event happening for different levels of each independent variable, then takes the ratio of those odds (which is continuous but cannot be negative) and then takes the logarithm of that ratio. The result, referred to as the logit (or log-odds), is a continuous criterion that serves as a transformed version of the dependent variable.

Thus the logit transformation is referred to as the link function in logistic regression—although the dependent variable in logistic regression is binomial, the logit is the continuous criterion upon which linear regression is conducted.[14]

The logit of success is then fitted to the predictors using linear regression analysis. The predicted value of the logit is converted back into predicted odds via the inverse of the natural logarithm, namely the exponential function. Thus, although the observed dependent variable in logistic regression is a zero-or-one variable, the logistic regression estimates the odds, as a continuous variable, that the dependent variable is a success (a case). In some applications the odds are all that is needed. In others, a specific yes-or-no prediction is needed for whether the dependent variable is or is not a case; this categorical prediction can be based on the computed odds of a success, with predicted odds above some chosen cutoff value being translated into a prediction of a success.

Latent variable interpretation

The logistic regression can be understood simply as finding the {\displaystyle \beta } parameters that best fit:

{\displaystyle y=1} if {\displaystyle \beta _{0}+\beta _{1}x+\epsilon >0}
{\displaystyle y=0}, otherwise

where {\displaystyle \epsilon } is an error distributed by the standard logistic distribution. (If the standard normal distribution is used instead, it is a probit regression.)

The associated latent variable is {\displaystyle y\prime =\beta _{0}+\beta _{1}x+\epsilon }. The error term {\displaystyle \epsilon } is not observed, and so {\displaystyle y\prime } is also unobservable, hence termed "latent". (The observed data are values of {\displaystyle y} and {\displaystyle x}.) Unlike ordinary regression, however, the {\displaystyle \beta } parameters cannot be expressed by any direct formula of the {\displaystyle y} and {\displaystyle x} values in the observed data. Instead they are to be found by an iterative search process, usually implemented by a software program, that finds the maximum of a complicated "likelihood expression" that is a function of all of the observed {\displaystyle y} and {\displaystyle x} values. The estimation approach is explained below.

Logistic function, odds, odds ratio, and logit

Figure 1. The standard logistic function {\displaystyle \sigma (t)}; note that {\displaystyle \sigma (t)\in (0,1)} for all {\displaystyle t}.

Definition of the logistic function

An explanation of logistic regression can begin with an explanation of the standard logistic function. The logistic function is useful because it can take an input with any value from negative to positive infinity, whereas the output always takes values between zero and one[14] and hence is interpretable as a probability. The logistic function {\displaystyle \sigma (t)} is defined as follows:

{\displaystyle \sigma (t)={\frac {e^{t}}{e^{t}+1}}={\frac {1}{1+e^{-t}}}}

A graph of the logistic function on the t-interval (-6,6) is shown in Figure 1.
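
In code, the function is one line; a minimal sketch (the name sigma is chosen here to match the notation above):

```python
import math

def sigma(t: float) -> float:
    """Standard logistic function: maps any real t into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-t))

print(sigma(-6.0), sigma(0.0), sigma(6.0))  # ~0.0025, 0.5, ~0.9975
```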

Let us assume that {\displaystyle t} is a linear function of a single explanatory variable {\displaystyle x} (the case where {\displaystyle t} is a linear combination of multiple explanatory variables is treated similarly). We can then express {\displaystyle t} as follows:

{\displaystyle t=\beta _{0}+\beta _{1}x}

And the logistic function can now be written as:

{\displaystyle F(x)={\frac {1}{1+e^{-(\beta _{0}+\beta _{1}x)}}}}

Note that {\displaystyle F(x)} is interpreted as the probability of the dependent variable equaling a "success" or "case" rather than a failure or non-case. It is clear that the response variables {\displaystyle Y_{i}} are not identically distributed: {\displaystyle P(Y_{i}=1\mid X)} differs from one data point {\displaystyle X_{i}} to another, though they are independent given the design matrix {\displaystyle X} and the shared parameters {\displaystyle \beta }.[1]

Definition of the inverse of the logistic function

We can now define the inverse of the logistic function, {\displaystyle g}, the logit (log odds):

{\displaystyle g(F(x))=\ln \left({\frac {F(x)}{1-F(x)}}\right)=\beta _{0}+\beta _{1}x,}

and equivalently, after exponentiating both sides:

{\displaystyle {\frac {F(x)}{1-F(x)}}=e^{\beta _{0}+\beta _{1}x}.}
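
The inverse relationship is easy to verify numerically; a minimal sketch (function names chosen here for illustration):

```python
import math

def logistic(t: float) -> float:
    return 1.0 / (1.0 + math.exp(-t))

def logit(p: float) -> float:
    """Log-odds ln(p / (1 - p)): the inverse of the logistic function on (0, 1)."""
    return math.log(p / (1.0 - p))

t = 0.8
print(logit(logistic(t)))  # recovers 0.8, i.e. g(F(t)) = t
```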

Interpretation of these terms

In the above equations, the terms are as follows:

  • {\displaystyle g(\cdot )} refers to the logit function. The equation for {\displaystyle g(F(x))} illustrates that the logit (i.e., log-odds or natural logarithm of the odds) is equivalent to the linear regression expression.
  • {\displaystyle \ln } denotes the natural logarithm.
  • {\displaystyle F(x)} is the probability that the dependent variable equals a case, given some linear combination of the predictors. The formula for {\displaystyle F(x)} illustrates that the probability of the dependent variable equaling a case is equal to the value of the logistic function of the linear regression expression. This is important in that it shows that the value of the linear regression expression can vary from negative to positive infinity and yet, after transformation, the resulting expression for the probability {\displaystyle F(x)} ranges between 0 and 1.
  • {\displaystyle \beta _{0}} is the intercept from the linear regression equation (the value of the criterion when the predictor is equal to zero).
  • {\displaystyle \beta _{1}x} is the regression coefficient multiplied by some value of the predictor.
  • {\displaystyle e} denotes the base of the natural logarithm, used in the exponential function.

Definition of the odds

The odds of the dependent variable equaling a case (given some linear combination {\displaystyle x} of the predictors) is equivalent to the exponential function of the linear regression expression. This illustrates how the logit serves as a link function between the probability and the linear regression expression. Given that the logit ranges between negative and positive infinity, it provides an adequate criterion upon which to conduct linear regression and the logit is easily converted back into the odds.[14]

So we define odds of the dependent variable equaling a case (given some linear combination {\displaystyle x} of the predictors) as follows:

{\displaystyle {\text{odds}}=e^{\beta _{0}+\beta _{1}x}.}

Definition of the odds ratio

For a continuous independent variable the odds ratio can be defined as:

{\displaystyle \mathrm {OR} ={\frac {\operatorname {odds} (x+1)}{\operatorname {odds} (x)}}={\frac {\frac {F(x+1)}{1-F(x+1)}}{\frac {F(x)}{1-F(x)}}}={\frac {e^{\beta _{0}+\beta _{1}(x+1)}}{e^{\beta _{0}+\beta _{1}x}}}=e^{\beta _{1}}}

This exponential relationship provides an interpretation for {\displaystyle \beta _{1}}: The odds multiply by {\displaystyle e^{\beta _{1}}} for every 1-unit increase in x.[15]

For a binary independent variable the odds ratio is defined as {\displaystyle {\frac {ad}{bc}}} where a, b, c and d are cells in a 2x2 contingency table.[16]
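
The constant-odds-ratio property is easy to make concrete; a small sketch reusing the exam-example coefficients from above (any other values would do):

```python
import math

beta0, beta1 = -4.0777, 1.5046  # exam-example coefficients

def odds(x: float) -> float:
    """Odds of a case at predictor value x: exp(beta0 + beta1*x)."""
    return math.exp(beta0 + beta1 * x)

# The ratio of odds one unit apart is the same wherever it is evaluated:
print(odds(3.0) / odds(2.0))  # e^beta1 ~ 4.50
print(odds(5.0) / odds(4.0))  # identical
print(math.exp(beta1))        # the odds ratio directly
```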

Multiple explanatory variables

If there are multiple explanatory variables, the above expression {\displaystyle \beta _{0}+\beta _{1}x} can be revised to {\displaystyle \beta _{0}+\beta _{1}x_{1}+\beta _{2}x_{2}+\cdots +\beta _{m}x_{m}.} Then when this is used in the equation relating the logged odds of a success to the values of the predictors, the linear regression will be a multiple regression with m explanators; the parameters {\displaystyle \beta _{j}} for all j = 0, 1, 2, ..., m are all estimated.

Model fitting

Estimation

Because the model can be expressed as a generalized linear model (see below), for 0<p<1, ordinary least squares can suffice, with R-squared as the measure of goodness of fit in the fitting space. When p=0 or 1, more complex methods are required.[citation needed]

Maximum likelihood estimation

The regression coefficients are usually estimated using maximum likelihood estimation.[17] Unlike linear regression with normally distributed residuals, it is not possible to find a closed-form expression for the coefficient values that maximize the likelihood function, so that an iterative process must be used instead; for example Newton's method. This process begins with a tentative solution, revises it slightly to see if it can be improved, and repeats this revision until improvement is minute, at which point the process is said to have converged.[18]
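
As an illustration, here is a minimal Newton's-method fit in Python (a sketch, not a production implementation; fit_logistic is a name chosen here). Run on the exam data from the example above, it reproduces the quoted coefficients:

```python
import numpy as np

def fit_logistic(X, y, n_iter=25, tol=1e-10):
    """Fit logistic regression by Newton's method (equivalently, IRLS).

    X: (n, m+1) design matrix whose first column is all ones (intercept).
    y: (n,) array of 0/1 outcomes.  Returns the coefficient vector beta.
    """
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))  # current predicted probabilities
        W = p * (1.0 - p)                    # Bernoulli variances = IRLS weights
        grad = X.T @ (y - p)                 # gradient of the log-likelihood
        hess = X.T @ (X * W[:, None])        # observed information (negative Hessian)
        step = np.linalg.solve(hess, grad)   # Newton step
        beta += step
        if np.max(np.abs(step)) < tol:       # converged
            break
    return beta

# Exam data from the example above: hours studied, pass (1) / fail (0).
hours = np.array([0.50, 0.75, 1.00, 1.25, 1.50, 1.75, 1.75, 2.00, 2.25, 2.50,
                  2.75, 3.00, 3.25, 3.50, 4.00, 4.25, 4.50, 4.75, 5.00, 5.50])
passed = np.array([0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1])
X = np.column_stack([np.ones_like(hours), hours])
print(fit_logistic(X, passed))  # approx. [-4.0777, 1.5046]
```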

In some instances the model may not reach convergence. Nonconvergence of a model indicates that the coefficients are not meaningful because the iterative process was unable to find appropriate solutions. A failure to converge may occur for a number of reasons: having a large ratio of predictors to cases, multicollinearity, sparseness, or complete separation.

  • Having a large ratio of variables to cases results in an overly conservative Wald statistic (discussed below) and can lead to nonconvergence.
  • Multicollinearity refers to unacceptably high correlations between predictors. As multicollinearity increases, coefficients remain unbiased but standard errors increase and the likelihood of model convergence decreases.[17] To detect multicollinearity amongst the predictors, one can conduct a linear regression analysis with the predictors of interest for the sole purpose of examining the tolerance statistic[17] used to assess whether multicollinearity is unacceptably high.
  • Sparseness in the data refers to having a large proportion of empty cells (cells with zero counts). Zero cell counts are particularly problematic with categorical predictors. With continuous predictors, the model can infer values for the zero cell counts, but this is not the case with categorical predictors. The model will not converge with zero cell counts for categorical predictors because the natural logarithm of zero is an undefined value, so that final solutions to the model cannot be reached. To remedy this problem, researchers may collapse categories in a theoretically meaningful way or add a constant to all cells.[17]
  • Another numerical problem that may lead to a lack of convergence is complete separation, which refers to the instance in which the predictors perfectly predict the criterion – all cases are accurately classified. In such instances, one should reexamine the data, as there is likely some kind of error.[14]

As a rule of thumb, logistic regression models require a minimum of about 10 events per explanatory variable (where event denotes the cases belonging to the less frequent category in the dependent variable).[19]

Evaluating goodness of fit

Discrimination in linear regression models is generally measured using R2. Since this has no direct analog in logistic regression, various methods[20]:ch.21 including the following can be used instead.

Deviance and likelihood ratio tests

In linear regression analysis, one is concerned with partitioning variance via the sum of squares calculations – variance in the criterion is essentially divided into variance accounted for by the predictors and residual variance. In logistic regression analysis, deviance is used in lieu of sum of squares calculations.[21] Deviance is analogous to the sum of squares calculations in linear regression[14] and is a measure of the lack of fit to the data in a logistic regression model.[21] When a "saturated" model is available (a model with a theoretically perfect fit), deviance is calculated by comparing a given model with the saturated model.[14] This computation gives the likelihood-ratio test:[14]

{\displaystyle D=-2\ln {\frac {\text{likelihood of the fitted model}}{\text{likelihood of the saturated model}}}.}

In the above equation D represents the deviance and ln represents the natural logarithm. The log of this likelihood ratio (the ratio of the fitted model to the saturated model) will produce a negative value, hence the need for a negative sign. D can be shown to follow an approximate chi-squared distribution.[14] Smaller values indicate better fit as the fitted model deviates less from the saturated model. When assessed upon a chi-square distribution, nonsignificant chi-square values indicate very little unexplained variance and thus, good model fit. Conversely, a significant chi-square value indicates that a significant amount of the variance is unexplained.

When the saturated model is not available (a common case), deviance is calculated simply as -2·(log likelihood of the fitted model), and the reference to the saturated model's log likelihood can be removed from all that follows without harm.

Two measures of deviance are particularly important in logistic regression: null deviance and model deviance. The null deviance represents the difference between a model with only the intercept (which means "no predictors") and the saturated model. The model deviance represents the difference between a model with at least one predictor and the saturated model.[21] In this respect, the null model provides a baseline upon which to compare predictor models. Given that deviance is a measure of the difference between a given model and the saturated model, smaller values indicate better fit. Thus, to assess the contribution of a predictor or set of predictors, one can subtract the model deviance from the null deviance and assess the difference on a {\displaystyle \chi _{s-p}^{2},} chi-square distribution with degrees of freedom[14] equal to the difference in the number of parameters estimated.

Let

{\displaystyle {\begin{aligned}D_{\text{null}}&=-2\ln {\frac {\text{likelihood of null model}}{\text{likelihood of the saturated model}}}\\D_{\text{fitted}}&=-2\ln {\frac {\text{likelihood of fitted model}}{\text{likelihood of the saturated model}}}.\end{aligned}}}

Then the difference of both is:

{\displaystyle {\begin{aligned}D_{\text{null}}-D_{\text{fitted}}&=-2\left(\ln {\frac {\text{likelihood of null model}}{\text{likelihood of the saturated model}}}-\ln {\frac {\text{likelihood of fitted model}}{\text{likelihood of the saturated model}}}\right)\\&=-2\ln {\frac {\frac {\text{likelihood of null model}}{\text{likelihood of the saturated model}}}{\frac {\text{likelihood of fitted model}}{\text{likelihood of the saturated model}}}}\\&=-2\ln {\frac {\text{likelihood of the null model}}{\text{likelihood of fitted model}}}.\end{aligned}}}

If the model deviance is significantly smaller than the null deviance then one can conclude that the predictor or set of predictors significantly improved model fit. This is analogous to the F-test used in linear regression analysis to assess the significance of prediction.[21]
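
For the exam example, the null-versus-fitted comparison can be sketched as follows (Python with NumPy/SciPy; for ungrouped binary data the saturated model has likelihood 1, so each deviance reduces to -2 times that model's log-likelihood):

```python
import numpy as np
from scipy import stats

# Exam data and fitted coefficients from the example above.
hours = np.array([0.50, 0.75, 1.00, 1.25, 1.50, 1.75, 1.75, 2.00, 2.25, 2.50,
                  2.75, 3.00, 3.25, 3.50, 4.00, 4.25, 4.50, 4.75, 5.00, 5.50])
passed = np.array([0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1])

def log_lik(p):
    """Bernoulli log-likelihood for predicted probabilities p."""
    return np.sum(passed * np.log(p) + (1 - passed) * np.log(1 - p))

p_fitted = 1 / (1 + np.exp(-(-4.0777 + 1.5046 * hours)))
p_null = np.full_like(hours, passed.mean())  # intercept-only model: the base rate

D_fitted = -2 * log_lik(p_fitted)
D_null = -2 * log_lik(p_null)

lr = D_null - D_fitted                 # likelihood-ratio statistic
print(lr, stats.chi2.sf(lr, df=1))     # p ~ 0.0006, as quoted in the exam example
```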

Pseudo-R2s

In linear regression the squared multiple correlation, R2, is used to assess goodness of fit as it represents the proportion of variance in the criterion that is explained by the predictors.[21] In logistic regression analysis, there is no agreed upon analogous measure, but there are several competing measures each with limitations.[21][22]

Four of the most commonly used indices and one less commonly used one are examined on this page:

  • Likelihood ratio R2L
  • Cox and Snell R2CS
  • Nagelkerke R2N
  • McFadden R2McF
  • Tjur R2T

R2L is given by[21]

{\displaystyle R_{\text{L}}^{2}={\frac {D_{\text{null}}-D_{\text{fitted}}}{D_{\text{null}}}}.}

This is the most analogous index to the squared multiple correlation in linear regression.[17] It represents the proportional reduction in the deviance, wherein the deviance is treated as a measure of variation analogous but not identical to the variance in linear regression analysis.[17] One limitation of the likelihood ratio R2 is that it is not monotonically related to the odds ratio,[21] meaning that it does not necessarily increase as the odds ratio increases and does not necessarily decrease as the odds ratio decreases.

R2CS is an alternative index of goodness of fit related to the R2 value from linear regression.[22] It is given by:

{\displaystyle R_{\text{CS}}^{2}=1-\left({\frac {L_{0}}{L_{M}}}\right)^{2/n}}.

where LM and L0 are the likelihoods for the model being fitted and the null model, respectively. The Cox and Snell index is problematic as its maximum value is {\displaystyle 1-L_{0}^{2/n}}. The highest this upper bound can be is 0.75, but it can easily be as low as 0.48 when the marginal proportion of cases is small.[22]

R2N provides a correction to the Cox and Snell R2 so that the maximum value is equal to 1. Nevertheless, the Cox and Snell and likelihood ratio R2s show greater agreement with each other than either does with the Nagelkerke R2.[21] Of course, this might not be the case for values exceeding .75 as the Cox and Snell index is capped at this value. The likelihood ratio R2 is often preferred to the alternatives as it is most analogous to R2 in linear regression, is independent of the base rate (both Cox and Snell and Nagelkerke R2s increase as the proportion of cases increase from 0 to .5) and varies between 0 and 1.

R2McF is defined as

{\displaystyle R_{\text{McF}}^{2}=1-{\frac {\ln(L_{M})}{\ln(L_{0})}}},

and is preferred over R2CS by Allison.[22] The two expressions R2McF and R2CS are then related respectively by,

{\displaystyle {\begin{matrix}R_{\text{CS}}^{2}=1-L_{0}^{\frac {2R_{\text{McF}}^{2}}{n}}\\[1.5em]R_{\text{McF}}^{2}={\dfrac {n}{2}}\cdot {\dfrac {\ln(1-R_{\text{CS}}^{2})}{\ln(L_{0})}}\end{matrix}}}

However, Allison now prefers R2T, which is a relatively new measure developed by Tjur.[23] It can be calculated in two steps:[22]

  • For each level of the dependent variable, find the mean of the predicted probabilities of an event.
  • Take the absolute value of the difference between these means.

A word of caution is in order when interpreting pseudo-R2 statistics. The reason these indices of fit are referred to as pseudo R2 is that they do not represent the proportionate reduction in error as the R2 in linear regression does.[21] Linear regression assumes homoscedasticity, that the error variance is the same for all values of the criterion. Logistic regression will always be heteroscedastic – the error variances differ for each value of the predicted score. For each value of the predicted score there would be a different value of the proportionate reduction in error. Therefore, it is inappropriate to think of R2 as a proportionate reduction in error in a universal sense in logistic regression.[21]
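
For the exam example, the indices above can be computed directly; a sketch reusing the data and fitted coefficients from the earlier snippets (note that for ungrouped binary data, where the saturated log-likelihood is zero, the likelihood-ratio and McFadden indices coincide):

```python
import numpy as np

hours = np.array([0.50, 0.75, 1.00, 1.25, 1.50, 1.75, 1.75, 2.00, 2.25, 2.50,
                  2.75, 3.00, 3.25, 3.50, 4.00, 4.25, 4.50, 4.75, 5.00, 5.50])
passed = np.array([0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1])
p = 1 / (1 + np.exp(-(-4.0777 + 1.5046 * hours)))   # fitted probabilities

n = len(passed)
pbar = passed.mean()
ll_m = np.sum(passed * np.log(p) + (1 - passed) * np.log(1 - p))  # ln(L_M)
ll_0 = n * (pbar * np.log(pbar) + (1 - pbar) * np.log(1 - pbar))  # ln(L_0)

r2_mcf = 1 - ll_m / ll_0                                  # McFadden
r2_cs = 1 - np.exp((2 / n) * (ll_0 - ll_m))               # Cox and Snell
r2_n = r2_cs / (1 - np.exp((2 / n) * ll_0))               # Nagelkerke
r2_tjur = abs(p[passed == 1].mean() - p[passed == 0].mean())  # Tjur
print(r2_mcf, r2_cs, r2_n, r2_tjur)
```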

Hosmer–Lemeshow test

The Hosmer–Lemeshow test uses a test statistic that asymptotically follows a {\displaystyle \chi ^{2}} distribution to assess whether or not the observed event rates match expected event rates in subgroups of the model population. This test is considered to be obsolete by some statisticians because of its dependence on arbitrary binning of predicted probabilities and relatively low power.[24]

Coefficients

After fitting the model, it is likely that researchers will want to examine the contribution of individual predictors. To do so, they will want to examine the regression coefficients. In linear regression, the regression coefficients represent the change in the criterion for each unit change in the predictor.[21] In logistic regression, however, the regression coefficients represent the change in the logit for each unit change in the predictor. Given that the logit is not intuitive, researchers are likely to focus on a predictor's effect on the exponential function of the regression coefficient – the odds ratio (see definition). In linear regression, the significance of a regression coefficient is assessed by computing a t test. In logistic regression, there are several different tests designed to assess the significance of an individual predictor, most notably the likelihood ratio test and the Wald statistic.

Likelihood ratio test

The likelihood-ratio test discussed above to assess model fit is also the recommended procedure to assess the contribution of individual "predictors" to a given model.[14][17][21] In the case of a single predictor model, one simply compares the deviance of the predictor model with that of the null model on a chi-square distribution with a single degree of freedom. If the predictor model has a significantly smaller deviance (cf. chi-square using the difference in degrees of freedom of the two models), then one can conclude that there is a significant association between the "predictor" and the outcome. Although some common statistical packages (e.g. SPSS) do provide likelihood ratio test statistics, without this computationally intensive test it would be more difficult to assess the contribution of individual predictors in the multiple logistic regression case. To assess the contribution of individual predictors one can enter the predictors hierarchically, comparing each new model with the previous to determine the contribution of each predictor.[21] There is some debate among statisticians about the appropriateness of so-called "stepwise" procedures. The fear is that they may not preserve nominal statistical properties and may become misleading.[1]

Wald statistic

Alternatively, when assessing the contribution of individual predictors in a given model, one may examine the significance of the Wald statistic. The Wald statistic, analogous to the t-test in linear regression, is used to assess the significance of coefficients. The Wald statistic is the ratio of the square of the regression coefficient to the square of the standard error of the coefficient and is asymptotically distributed as a chi-square distribution.[17]

{\displaystyle W_{j}={\frac {B_{j}^{2}}{SE_{B_{j}}^{2}}}}

Although several statistical packages (e.g., SPSS, SAS) report the Wald statistic to assess the contribution of individual predictors, the Wald statistic has limitations. When the regression coefficient is large, the standard error of the regression coefficient also tends to be large, increasing the probability of Type II error. The Wald statistic also tends to be biased when data are sparse.[21]
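
For the exam example, the Wald statistic for the Hours coefficient can be checked in a couple of lines (a sketch using SciPy; the coefficient and standard error are those from the regression output above):

```python
from scipy import stats

B, SE = 1.5046, 0.6287      # Hours coefficient and its standard error, from above
W = B**2 / SE**2            # squared coefficient over squared standard error
print(W, stats.chi2.sf(W, df=1))  # W ~ 5.73, p ~ 0.0167, matching the Wald p-value
```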

Case-control sampling

Suppose cases are rare. Then we might wish to sample them more frequently than their prevalence in the population. For example, suppose there is a disease that affects 1 person in 10,000 and to collect our data we need to do a complete physical. It may be too expensive to do thousands of physicals of healthy people in order to obtain data for only a few diseased individuals. Thus, we may evaluate more diseased individuals. This is also called unbalanced data. As a rule of thumb, sampling controls at a rate of five times the number of cases will produce sufficient control data.[25]

If we form a logistic model from such data, if the model is correct, the {\displaystyle \beta _{j}} parameters are all correct except for {\displaystyle \beta _{0}}. We can correct {\displaystyle \beta _{0}} if we know the true prevalence as follows:[25]

{\displaystyle {\hat {\beta _{0}^{*}}}={\hat {\beta _{0}}}+\log {{\pi } \over {1-\pi }}-\log {{\tilde {\pi }} \over {1-{\tilde {\pi }}}}}

where {\displaystyle \pi } is the true prevalence and {\displaystyle {\tilde {\pi }}} is the prevalence in the sample.
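
The correction is a one-liner; a sketch (the numbers in the usage line are hypothetical):

```python
import math

def corrected_intercept(beta0_hat: float, pi_true: float, pi_sample: float) -> float:
    """Shift a case-control intercept back to the population scale.

    pi_true   -- true prevalence of cases in the population
    pi_sample -- proportion of cases in the (oversampled) study data
    """
    return (beta0_hat
            + math.log(pi_true / (1 - pi_true))
            - math.log(pi_sample / (1 - pi_sample)))

# Hypothetical example: fitted intercept -2.0, true prevalence 1/10,000,
# but cases were oversampled to make up 1/6 of the study data.
print(corrected_intercept(-2.0, 1e-4, 1 / 6))
```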

Formal mathematical specification

There are various equivalent specifications of logistic regression, which fit into different types of more general models. These different specifications allow for different sorts of useful generalizations.

Setup

The basic setup of logistic regression is the same as for standard linear regression.

It is assumed that we have a series of N observed data points. Each data point i consists of a set of m explanatory variables x1,i ... xm,i (also called independent variables, predictor variables, input variables, features, or attributes), and an associated binary-valued outcome variable Yi (also known as a dependent variable, response variable, output variable, outcome variable or class variable), i.e. it can assume only the two possible values 0 (often meaning "no" or "failure") or 1 (often meaning "yes" or "success"). The goal of logistic regression is to explain the relationship between the explanatory variables and the outcome, so that an outcome can be predicted for a new set of explanatory variables.

Some examples:

  • The observed outcomes are the presence or absence of a given disease (e.g. diabetes) in a set of patients, and the explanatory variables might be characteristics of the patients thought to be pertinent (sex, race, age, blood pressure, body-mass index, etc.).
  • The observed outcomes are the votes (e.g. Democratic or Republican) of a set of people in an election, and the explanatory variables are the demographic characteristics of each person (e.g. sex, race, age, income, etc.). In such a case, one of the two outcomes is arbitrarily coded as 1, and the other as 0.

As in linear regression, the outcome variables Yi are assumed to depend on the explanatory variables x1,i ... xm,i.

Explanatory variables

As shown in the above examples, the explanatory variables may be of any type: real-valued, binary, categorical, etc. The main distinction is between continuous variables (such as income, age and blood pressure) and discrete variables (such as sex or race). Discrete variables referring to more than two possible choices are typically coded using dummy variables (or indicator variables), that is, separate explanatory variables taking the value 0 or 1 are created for each possible value of the discrete variable, with a 1 meaning "variable does have the given value" and a 0 meaning "variable does not have that value". For example, a four-way discrete variable of blood type with the possible values "A, B, AB, O" can be converted to four separate two-way dummy variables, "is-A, is-B, is-AB, is-O", where only one of them has the value 1 and all the rest have the value 0. This allows for separate regression coefficients to be matched for each possible value of the discrete variable. (In a case like this, only three of the four dummy variables are independent of each other, in the sense that once the values of three of the variables are known, the fourth is automatically determined. Thus, it is necessary to encode only three of the four possibilities as dummy variables. This also means that when all four possibilities are encoded, the overall model is not identifiable in the absence of additional constraints such as a regularization constraint. Theoretically, this could cause problems, but in reality almost all logistic regression models are fitted with regularization constraints.)

Outcome variables

Formally, the outcomes Yi are described as being Bernoulli-distributed data, where each outcome is determined by an unobserved probability pi that is specific to the outcome at hand, but related to the explanatory variables. This can be expressed in any of the following equivalent forms:

{\displaystyle {\begin{aligned}Y_{i}\mid x_{1,i},\ldots ,x_{m,i}\ &\sim \operatorname {Bernoulli} (p_{i})\\\mathbb {E} [Y_{i}\mid x_{1,i},\ldots ,x_{m,i}]&=p_{i}\\\Pr(Y_{i}=y\mid x_{1,i},\ldots ,x_{m,i})&={\begin{cases}p_{i}&{\text{if }}y=1\\1-p_{i}&{\text{if }}y=0\end{cases}}\\\Pr(Y_{i}=y\mid x_{1,i},\ldots ,x_{m,i})&=p_{i}^{y}(1-p_{i})^{(1-y)}\end{aligned}}}

The meanings of these four lines are:

  • The first line expresses the probability distribution of each Yi: Conditioned on the explanatory variables, it follows a Bernoulli distribution with parameter pi, the probability of the outcome of 1 for trial i. As noted above, each separate trial has its own probability of success, just as each trial has its own explanatory variables. The probability of success pi is not observed, only the outcome of an individual Bernoulli trial using that probability.
  • The second line expresses the fact that the expected value of each Yi is equal to the probability of success pi, which is a general property of the Bernoulli distribution. In other words, if we run a large number of Bernoulli trials using the same probability of success pi, then take the average of all the 1 and 0 outcomes, then the result would be close to pi. This is because doing an average this way simply computes the proportion of successes seen, which we expect to converge to the underlying probability of success.
  • The third line writes out the probability mass function of the Bernoulli distribution, specifying the probability of seeing each of the two possible outcomes.
  • The fourth line is another way of writing the probability mass function, which avoids having to write separate cases and is more convenient for certain types of calculations. This relies on the fact that Yi can take only the value 0 or 1. In each case, one of the exponents will be 1, "choosing" the value under it, while the other is 0, "canceling out" the value under it. Hence, the outcome is either pi or 1 − pi, as in the previous line.

Linear predictor function

The basic idea of logistic regression is to use the mechanism already developed for linear regression by modeling the probability pi using a linear predictor function, i.e. a linear combination of the explanatory variables and a set of regression coefficients that are specific to the model at hand but the same for all trials. The linear predictor function {\displaystyle f(i)} for a particular data point i is written as:

{\displaystyle f(i)=\beta _{0}+\beta _{1}x_{1,i}+\cdots +\beta _{m}x_{m,i},}

where {\displaystyle \beta _{0},\ldots ,\beta _{m}} are regression coefficients indicating the relative effect of a particular explanatory variable on the outcome.

The model is usually put into a more compact form as follows:

  • The regression coefficients β0, β1, ..., βm are grouped into a single vector β of size m + 1.
  • For each data point i, an additional explanatory pseudo-variable x0,i is added, with a fixed value of 1, corresponding to the intercept coefficient β0.
  • The resulting explanatory variables x0,i, x1,i, ..., xm,i are then grouped into a single vector Xi of size m + 1.

This makes it possible to write the linear predictor function as follows:

{\displaystyle f(i)={\boldsymbol {\beta }}\cdot \mathbf {X} _{i},}

using the notation for a dot product between two vectors.

As a generalized linear model

The particular model used by logistic regression, which distinguishes it from standard linear regression and from other types of regression analysis used for binary-valued outcomes, is the way the probability of a particular outcome is linked to the linear predictor function:

{\displaystyle \operatorname {logit} (\mathbb {E} [Y_{i}\mid x_{1,i},\ldots ,x_{m,i}])=\operatorname {logit} (p_{i})=\ln \left({\frac {p_{i}}{1-p_{i}}}\right)=\beta _{0}+\beta _{1}x_{1,i}+\cdots +\beta _{m}x_{m,i}}

Written using the more compact notation described above, this is:

{\displaystyle \operatorname {logit} (\mathbb {E} [Y_{i}\mid \mathbf {X} _{i}])=\operatorname {logit} (p_{i})=\ln \left({\frac {p_{i}}{1-p_{i}}}\right)={\boldsymbol {\beta }}\cdot \mathbf {X} _{i}}

This formulation expresses logistic regression as a type of generalized linear model, which predicts variables with various types of probability distributions by fitting a linear predictor function of the above form to some sort of arbitrary transformation of the expected value of the variable.

The intuition for transforming using the logit function (the natural log of the odds) was explained above. It also has the practical effect of converting the probability (which is bounded to be between 0 and 1) to a variable that ranges over {\displaystyle (-\infty ,+\infty )} — thereby matching the potential range of the linear prediction function on the right side of the equation.

Note that both the probabilities pi and the regression coefficients are unobserved, and the means of determining them is not part of the model itself. They are typically determined by some sort of optimization procedure, e.g. maximum likelihood estimation, that finds values that best fit the observed data (i.e. that give the most accurate predictions for the data already observed), usually subject to regularization conditions that seek to exclude unlikely values, e.g. extremely large values for any of the regression coefficients. The use of a regularization condition is equivalent to doing maximum a posteriori (MAP) estimation, an extension of maximum likelihood. (Regularization is most commonly done using a squared regularizing function, which is equivalent to placing a zero-mean Gaussian prior distribution on the coefficients, but other regularizers are also possible.) Whether or not regularization is used, it is usually not possible to find a closed-form solution; instead, an iterative numerical method must be used, such as iteratively reweighted least squares (IRLS) or, more commonly these days, a quasi-Newton method such as the L-BFGS method.

The interpretation of the βj parameter estimates is as the additive effect on the log of the odds for a unit change in the jth explanatory variable. In the case of a dichotomous explanatory variable, for instance gender, {\displaystyle e^{\beta }} is the estimate of the odds of having the outcome for, say, males compared with females.

An equivalent formula uses the inverse of the logit function, which is the logistic function, i.e.:

{\displaystyle \mathbb {E} [Y_{i}\mid \mathbf {X} _{i}]=p_{i}=\operatorname {logit} ^{-1}({\boldsymbol {\beta }}\cdot \mathbf {X} _{i})={\frac {1}{1+e^{-{\boldsymbol {\beta }}\cdot \mathbf {X} _{i}}}}}

The formula can also be written as a probability distribution (specifically, using a probability mass function):

{\displaystyle \operatorname {Pr} (Y_{i}=y\mid \mathbf {X} _{i})={p_{i}}^{y}(1-p_{i})^{1-y}=\left({\frac {e^{{\boldsymbol {\beta }}\cdot \mathbf {X} _{i}}}{1+e^{{\boldsymbol {\beta }}\cdot \mathbf {X} _{i}}}}\right)^{y}\left(1-{\frac {e^{{\boldsymbol {\beta }}\cdot \mathbf {X} _{i}}}{1+e^{{\boldsymbol {\beta }}\cdot \mathbf {X} _{i}}}}\right)^{1-y}={\frac {e^{{\boldsymbol {\beta }}\cdot \mathbf {X} _{i}\cdot y}}{1+e^{{\boldsymbol {\beta }}\cdot \mathbf {X} _{i}}}}}
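
In code, the compact vector form is a dot product followed by the inverse logit; a small sketch using the exam-example coefficients from above:

```python
import numpy as np

beta = np.array([-4.0777, 1.5046])  # [beta_0, beta_1], exam-example coefficients
X_i = np.array([1.0, 2.0])          # x_0 = 1 is the intercept pseudo-variable; x_1 = hours

p_i = 1.0 / (1.0 + np.exp(-(beta @ X_i)))  # E[Y_i | X_i] = logit^{-1}(beta . X_i)

def pmf(y: int) -> float:
    """Pr(Y_i = y | X_i) in the compact p^y (1-p)^(1-y) form."""
    return p_i**y * (1.0 - p_i)**(1 - y)

print(p_i, pmf(1), pmf(0))  # pmf(1) == p_i and pmf(0) == 1 - p_i
```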

As a latent-variable model

The above model has an equivalent formulation as a latent-variable model. This formulation is common in the theory of discrete choice models, and makes it easier to extend to certain more complicated models with multiple, correlated choices, as well as to compare logistic regression to the closely related probit model.

Imagine that, for each trial i, there is a continuous latent variable Yi* (i.e. an unobserved random variable) that is distributed as follows:

{\displaystyle Y_{i}^{\ast }={\boldsymbol {\beta }}\cdot \mathbf {X} _{i}+\varepsilon \,}

where

{\displaystyle \varepsilon \sim \operatorname {Logistic} (0,1)\,}

i.e. the latent variable can be written directly in terms of the linear predictor function and an additive random error variable that is distributed according to a standard logistic distribution.

Then Yi can be viewed as an indicator for whether this latent variable is positive:

{\displaystyle Y_{i}={\begin{cases}1&{\text{if }}Y_{i}^{\ast }>0\ {\text{ i.e. }}-\varepsilon <{\boldsymbol {\beta }}\cdot \mathbf {X} _{i},\\0&{\text{otherwise.}}\end{cases}}}

The choice of modeling the error variable specifically with a standard logistic distribution, rather than a general logistic distribution with the location and scale set to arbitrary values, seems restrictive, but in fact it is not. It must be kept in mind that we can choose the regression coefficients ourselves, and very often can use them to offset changes in the parameters of the error variable's distribution. For example, a logistic error-variable distribution with a non-zero location parameter μ (which sets the mean) is equivalent to a distribution with a zero location parameter, where μ has been added to the intercept coefficient. Both situations produce the same value for Yi* regardless of settings of explanatory variables. Similarly, an arbitrary scale parameter s is equivalent to setting the scale parameter to 1 and then dividing all regression coefficients by s. In the latter case, the resulting value of Yi* will be smaller by a factor of s than in the former case, for all sets of explanatory variables — but critically, it will always remain on the same side of 0, and hence lead to the same Yi choice.

(Note that this predicts that the irrelevancy of the scale parameter may not carry over into more complex models where more than two choices are available.)

It turns out that this formulation is exactly equivalent to the preceding one, phrased in terms of the generalized linear model and without any latent variables. This can be shown as follows, using the fact that the cumulative distribution function (CDF) of the standard logistic distribution is the logistic function, which is the inverse of the logit function, i.e.

{\displaystyle \Pr(\varepsilon <x)=\operatorname {logit} ^{-1}(x)}

Then:

{\displaystyle {\begin{aligned}\Pr(Y_{i}=1\mid \mathbf {X} _{i})&=\Pr(Y_{i}^{\ast }>0\mid \mathbf {X} _{i})&\\&=\Pr({\boldsymbol {\beta }}\cdot \mathbf {X} _{i}+\varepsilon >0)&\\&=\Pr(\varepsilon >-{\boldsymbol {\beta }}\cdot \mathbf {X} _{i})&\\&=\Pr(\varepsilon <{\boldsymbol {\beta }}\cdot \mathbf {X} _{i})&&{\text{(because the logistic distribution is symmetric)}}\\&=\operatorname {logit} ^{-1}({\boldsymbol {\beta }}\cdot \mathbf {X} _{i})&\\&=p_{i}&&{\text{(see above)}}\end{aligned}}}

This formulation—which is standard in discrete choice models—makes clear the relationship between logistic regression (the "logit model") and the probit model, which uses an error variable distributed according to a standard normal distribution instead of a standard logistic distribution. Both the logistic and normal distributions are symmetric with a basic unimodal, "bell curve" shape. The only difference is that the logistic distribution has somewhat heavier tails, which means that it is less sensitive to outlying data (and hence somewhat more robust to model mis-specifications or erroneous data).
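
The equivalence is easy to check by simulation; a sketch using the exam-example coefficients (any coefficients would do):

```python
import numpy as np

rng = np.random.default_rng(0)
beta = np.array([-4.0777, 1.5046])  # exam-example coefficients
X_i = np.array([1.0, 4.0])          # intercept pseudo-variable plus 4 hours of study

# Latent variable Y* = beta . X + eps with standard logistic errors;
# the fraction of simulated draws with Y* > 0 matches logit^{-1}(beta . X).
eps = rng.logistic(loc=0.0, scale=1.0, size=1_000_000)
y_star = beta @ X_i + eps
print((y_star > 0).mean())                    # ~0.87 by simulation
print(1.0 / (1.0 + np.exp(-(beta @ X_i))))    # 0.87 analytically
```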

As a two-way latent-variable model

Yet another formulation uses two separate latent variables:

{\displaystyle {\begin{aligned}Y_{i}^{0\ast }&={\boldsymbol {\beta }}_{0}\cdot \mathbf {X} _{i}+\varepsilon _{0}\,\\Y_{i}^{1\ast }&={\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i}+\varepsilon _{1}\,\end{aligned}}}

where

{\displaystyle {\begin{aligned}\varepsilon _{0}&\sim \operatorname {EV} _{1}(0,1)\\\varepsilon _{1}&\sim \operatorname {EV} _{1}(0,1)\end{aligned}}}

where EV1(0,1) is a standard type-1 extreme value distribution: i.e.

{\displaystyle \Pr(\varepsilon _{0}=x)=\Pr(\varepsilon _{1}=x)=e^{-x}e^{-e^{-x}}}

Then

{\displaystyle Y_{i}={\begin{cases}1&{\text{if }}Y_{i}^{1\ast }>Y_{i}^{0\ast },\\0&{\text{otherwise.}}\end{cases}}}

This model has a separate latent variable and a separate set of regression coefficients for each possible outcome of the dependent variable. The reason for this separation is that it makes it easy to extend logistic regression to multi-outcome categorical variables, as in the multinomial logit model. In such a model, it is natural to model each possible outcome using a different set of regression coefficients. It is also possible to motivate each of the separate latent variables as the theoretical utility associated with making the associated choice, and thus motivate logistic regression in terms of utility theory. (In terms of utility theory, a rational actor always chooses the choice with the greatest associated utility.) This is the approach taken by economists when formulating discrete choice models, because it both provides a theoretically strong foundation and facilitates intuitions about the model, which in turn makes it easy to consider various sorts of extensions. (See the example below.)

The choice of the type-1 extreme value distribution seems fairly arbitrary, but it makes the mathematics work out, and it may be possible to justify its use through rational choice theory.

It turns out that this model is equivalent to the previous model, although this seems non-obvious, since there are now two sets of regression coefficients and error variables, and the error variables have a different distribution. In fact, this model reduces directly to the previous one with the following substitutions:

{\displaystyle {\boldsymbol {\beta }}={\boldsymbol {\beta }}_{1}-{\boldsymbol {\beta }}_{0}}
{\displaystyle \varepsilon =\varepsilon _{1}-\varepsilon _{0}}

An intuition for this comes from the fact that, since we choose based on the maximum of two values, only their difference matters, not the exact values — and this effectively removes one degree of freedom. Another critical fact is that the difference of two type-1 extreme-value-distributed variables is a logistic distribution, i.e. {\displaystyle \varepsilon =\varepsilon _{1}-\varepsilon _{0}\sim \operatorname {Logistic} (0,1).}

We can demonstrate the equivalence as follows:

{\displaystyle {\begin{aligned}&\Pr(Y_{i}=1\mid \mathbf {X} _{i})\\[4pt]={}&\Pr(Y_{i}^{1\ast }>Y_{i}^{0\ast }\mid \mathbf {X} _{i})&\\={}&\Pr(Y_{i}^{1\ast }-Y_{i}^{0\ast }>0\mid \mathbf {X} _{i})&\\={}&\Pr({\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i}+\varepsilon _{1}-({\boldsymbol {\beta }}_{0}\cdot \mathbf {X} _{i}+\varepsilon _{0})>0)&\\={}&\Pr(({\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i}-{\boldsymbol {\beta }}_{0}\cdot \mathbf {X} _{i})+(\varepsilon _{1}-\varepsilon _{0})>0)&\\={}&\Pr(({\boldsymbol {\beta }}_{1}-{\boldsymbol {\beta }}_{0})\cdot \mathbf {X} _{i}+(\varepsilon _{1}-\varepsilon _{0})>0)&\\={}&\Pr(({\boldsymbol {\beta }}_{1}-{\boldsymbol {\beta }}_{0})\cdot \mathbf {X} _{i}+\varepsilon >0)&&{\text{(substitute }}\varepsilon {\text{ as above)}}\\={}&\Pr({\boldsymbol {\beta }}\cdot \mathbf {X} _{i}+\varepsilon >0)&&{\text{(substitute }}{\boldsymbol {\beta }}{\text{ as above)}}\\={}&\Pr(\varepsilon >-{\boldsymbol {\beta }}\cdot \mathbf {X} _{i})&&{\text{(now, same as above model)}}\\={}&\Pr(\varepsilon <{\boldsymbol {\beta }}\cdot \mathbf {X} _{i})&\\={}&\operatorname {logit} ^{-1}({\boldsymbol {\beta }}\cdot \mathbf {X} _{i})&\\={}&p_{i}\end{aligned}}}
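
The key distributional fact, that the difference of two independent standard type-1 extreme value (Gumbel) variables is standard logistic, can also be checked by simulation (a sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

eps0 = rng.gumbel(loc=0.0, scale=1.0, size=n)  # standard type-1 extreme value draws
eps1 = rng.gumbel(loc=0.0, scale=1.0, size=n)
diff = eps1 - eps0                             # should be Logistic(0, 1)

# Compare the empirical CDF of the difference with the logistic CDF:
for x in (-2.0, 0.0, 2.0):
    print((diff < x).mean(), 1.0 / (1.0 + np.exp(-x)))
```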

Example

As an example, consider a province-level election where the choice is between a right-of-center party, a left-of-center party, and a secessionist party (e.g. the Parti Québécois, which wants Quebec to secede from Canada). We would then use three latent variables, one for each choice. Then, in accordance with utility theory, we can then interpret the latent variables as expressing the utility that results from making each of the choices. We can also interpret the regression coefficients as indicating the strength that the associated factor (i.e. explanatory variable) has in contributing to the utility — or more correctly, the amount by which a unit change in an explanatory variable changes the utility of a given choice. A voter might expect that the right-of-center party would lower taxes, especially on rich people. This would give low-income people no benefit, i.e. no change in utility (since they usually don't pay taxes); would cause moderate benefit (i.e. somewhat more money, or moderate utility increase) for middle-income people; and would cause significant benefits for high-income people. On the other hand, the left-of-center party might be expected to raise taxes and offset it with increased welfare and other assistance for the lower and middle classes. This would cause significant positive benefit to low-income people, perhaps weak benefit to middle-income people, and significant negative benefit to high-income people. Finally, the secessionist party would take no direct actions on the economy, but simply secede. A low-income or middle-income voter might expect basically no clear utility gain or loss from this, but a high-income voter might expect negative utility, since he/she is likely to own companies, which will have a harder time doing business in such an environment and probably lose money.

These intuitions can be expressed as follows:

Estimated strength of regression coefficient for different outcomes (party choices) and different values of explanatory variables:

                Center-right   Center-left   Secessionist
High-income     strong +       strong −      strong −
Middle-income   moderate +     weak +        none
Low-income      none           strong +      none

This clearly shows that

  • Separate sets of regression coefficients need to exist for each choice. When phrased in terms of utility, this can be seen very easily. Different choices have different effects on net utility; furthermore, the effects vary in complex ways that depend on the characteristics of each individual, so there need to be separate sets of coefficients for each characteristic, not simply a single extra per-choice characteristic.
  • Even though income is a continuous variable, its effect on utility is too complex for it to be treated as a single variable. Either it needs to be directly split up into ranges, or higher powers of income need to be added so that polynomial regression on income is effectively done.

As a "log-linear" model

Yet another formulation combines the two-way latent variable formulation above with the original formulation higher up without latent variables, and in the process provides a link to one of the standard formulations of the multinomial logit.

Here, instead of writing the logit of the probabilities pi as a linear predictor, we separate the linear predictor into two, one for each of the two outcomes:

{\displaystyle {\begin{aligned}\ln \Pr(Y_{i}=0)&={\boldsymbol {\beta }}_{0}\cdot \mathbf {X} _{i}-\ln Z\,\\\ln \Pr(Y_{i}=1)&={\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i}-\ln Z\,\\\end{aligned}}}

Note that two separate sets of regression coefficients have been introduced, just as in the two-way latent variable model, and the two equations take a form that writes the logarithm of the associated probability as a linear predictor, with an extra term {\displaystyle -\ln Z} at the end. This term, as it turns out, serves as the normalizing factor ensuring that the result is a distribution. This can be seen by exponentiating both sides:

{\displaystyle {\begin{aligned}\Pr(Y_{i}=0)&={\frac {1}{Z}}e^{{\boldsymbol {\beta }}_{0}\cdot \mathbf {X} _{i}}\,\\\Pr(Y_{i}=1)&={\frac {1}{Z}}e^{{\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i}}\,\\\end{aligned}}}

In this form it is clear that the purpose of Z is to ensure that the resulting distribution over Yi is in fact a probability distribution, i.e. it sums to 1. This means that Z is simply the sum of all un-normalized probabilities, and by dividing each probability by Z, the probabilities become "normalized". That is:

{\displaystyle Z=e^{{\boldsymbol {\beta }}_{0}\cdot \mathbf {X} _{i}}+e^{{\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i}}}

and the resulting equations are

{\displaystyle {\begin{aligned}\Pr(Y_{i}=0)&={\frac {e^{{\boldsymbol {\beta }}_{0}\cdot \mathbf {X} _{i}}}{e^{{\boldsymbol {\beta }}_{0}\cdot \mathbf {X} _{i}}+e^{{\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i}}}}\,\\\Pr(Y_{i}=1)&={\frac {e^{{\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i}}}{e^{{\boldsymbol {\beta }}_{0}\cdot \mathbf {X} _{i}}+e^{{\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i}}}}\,\end{aligned}}}

    Or generally:

    {\displaystyle \Pr(Y_{i}=c)={\frac {e^{{\boldsymbol {\beta }}_{c}\cdot \mathbf {X} _{i}}}{\sum _{h}e^{{\boldsymbol {\beta }}_{h}\cdot \mathbf {X} _{i}}}}}

    This shows clearly how to generalize this formulation to more than two outcomes, as in multinomial logit. Note that this general formulation is exactly the softmax function, as in

    {\displaystyle \Pr(Y_{i}=c)=\operatorname {softmax} (c,{\boldsymbol {\beta }}_{0}\cdot \mathbf {X} _{i},{\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i},\dots ).}
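
    As an illustration, the following is a minimal numpy sketch of this general formula: one coefficient vector per outcome, with the probabilities given by the softmax of the linear predictors. The coefficient values are hypothetical.

```python
# Softmax over per-outcome linear predictors: Pr(Y = c) is proportional
# to exp(beta_c . x), normalized over all outcomes.
import numpy as np

x = np.array([1.0, 2.5])            # explanatory variables (first entry = intercept)
betas = np.array([[0.0, 0.0],       # beta_0 (e.g. center-right; hypothetical)
                  [0.3, -0.2],      # beta_1 (e.g. center-left)
                  [-1.0, 0.1]])     # beta_2 (e.g. secessionist)

scores = betas @ x                  # one linear predictor per outcome
scores -= scores.max()              # shift for numerical stability
probs = np.exp(scores) / np.exp(scores).sum()

print(probs, probs.sum())           # a proper distribution: sums to 1
```

    Subtracting the maximum score before exponentiating is standard numerical practice, and it is harmless precisely because of the shift invariance proved below.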

    In order to prove that this is equivalent to the previous model, note that the above model is overspecified, in that {\displaystyle \Pr(Y_{i}=0)} and {\displaystyle \Pr(Y_{i}=1)} cannot be independently specified: rather {\displaystyle \Pr(Y_{i}=0)+\Pr(Y_{i}=1)=1}, so knowing one automatically determines the other. As a result, the model is nonidentifiable, in that multiple combinations of β0 and β1 will produce the same probabilities for all possible explanatory variables. In fact, it can be seen that adding any constant vector to both of them will produce the same probabilities:

    {\displaystyle {\begin{aligned}\Pr(Y_{i}=1)&={\frac {e^{({\boldsymbol {\beta }}_{1}+\mathbf {C} )\cdot \mathbf {X} _{i}}}{e^{({\boldsymbol {\beta }}_{0}+\mathbf {C} )\cdot \mathbf {X} _{i}}+e^{({\boldsymbol {\beta }}_{1}+\mathbf {C} )\cdot \mathbf {X} _{i}}}}\,\\&={\frac {e^{{\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i}}e^{\mathbf {C} \cdot \mathbf {X} _{i}}}{e^{{\boldsymbol {\beta }}_{0}\cdot \mathbf {X} _{i}}e^{\mathbf {C} \cdot \mathbf {X} _{i}}+e^{{\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i}}e^{\mathbf {C} \cdot \mathbf {X} _{i}}}}\,\\&={\frac {e^{\mathbf {C} \cdot \mathbf {X} _{i}}e^{{\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i}}}{e^{\mathbf {C} \cdot \mathbf {X} _{i}}(e^{{\boldsymbol {\beta }}_{0}\cdot \mathbf {X} _{i}}+e^{{\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i}})}}\,\\&={\frac {e^{{\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i}}}{e^{{\boldsymbol {\beta }}_{0}\cdot \mathbf {X} _{i}}+e^{{\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i}}}}\,\\\end{aligned}}}

    As a result, we can simplify matters, and restore identifiability, by picking an arbitrary value for one of the two vectors. We choose to set {\displaystyle {\boldsymbol {\beta }}_{0}=\mathbf {0} .} Then,

    {\displaystyle e^{{\boldsymbol {\beta }}_{0}\cdot \mathbf {X} _{i}}=e^{\mathbf {0} \cdot \mathbf {X} _{i}}=1}

    and so

    {\displaystyle \Pr(Y_{i}=1)={\frac {e^{{\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i}}}{1+e^{{\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i}}}}={\frac {1}{1+e^{-{\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i}}}}=p_{i}}

    which shows that this formulation is indeed equivalent to the previous formulation. (As in the two-way latent variable formulation, any settings where {\displaystyle {\boldsymbol {\beta }}={\boldsymbol {\beta }}_{1}-{\boldsymbol {\beta }}_{0}} will produce equivalent results.)
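
    A quick numerical check of the two claims above (shift invariance, and reduction to the logistic function when β0 = 0) might look as follows; the random vectors are arbitrary.

```python
# Verify: adding the same vector C to both coefficient vectors leaves the
# probabilities unchanged, and fixing beta_0 = 0 gives the logistic sigmoid.
import numpy as np

rng = np.random.default_rng(0)
x, b0, b1, c = (rng.normal(size=3) for _ in range(4))

def p1(b0, b1):
    s = np.exp([b0 @ x, b1 @ x])
    return s[1] / s.sum()

assert np.isclose(p1(b0, b1), p1(b0 + c, b1 + c))   # shift invariance

beta = b1 - b0                                       # after setting beta_0 = 0
sigmoid = 1.0 / (1.0 + np.exp(-beta @ x))
assert np.isclose(p1(b0, b1), sigmoid)               # same probability
print("both checks pass")
```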

    Note that most treatments of the multinomial logit model start out either by extending the "log-linear" formulation presented here or the two-way latent variable formulation presented above, since both clearly show the way that the model could be extended to multi-way outcomes. In general, the presentation with latent variables is more common in econometrics and political science, where discrete choice models and utility theory reign, while the "log-linear" formulation here is more common in computer science, e.g. machine learning and natural language processing.

    As a single-layer perceptron

    The model has an equivalent formulation

    {\displaystyle p_{i}={\frac {1}{1+e^{-(\beta _{0}+\beta _{1}x_{1,i}+\cdots +\beta _{k}x_{k,i})}}}.\,}

    This functional form is commonly called a single-layer perceptron or single-layer artificial neural network. A single-layer neural network computes a continuous output instead of a step function. The derivative of pi with respect to X = (x1, ..., xk) is computed from the general form:

    {\displaystyle y={\frac {1}{1+e^{-f(X)}}}}

    where f(X) is an analytic function in X. With this choice, the single-layer neural network is identical to the logistic regression model. This function has a continuous derivative, which allows it to be used in backpropagation. This function is also preferred because its derivative is easily calculated:

    {\displaystyle {\frac {\mathrm {d} y}{\mathrm {d} X}}=y(1-y){\frac {\mathrm {d} f}{\mathrm {d} X}}.}
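
    As a small illustration, the sketch below evaluates this derivative for the linear f(X) = β · X used by logistic regression and verifies it against a finite difference; the coefficient values are hypothetical.

```python
# dy/dx_j = y(1-y) * df/dx_j = y(1-y) * beta_j for y = sigmoid(beta . x).
import numpy as np

beta = np.array([0.5, -1.2, 2.0])   # hypothetical coefficients
x = np.array([1.0, 0.3, -0.7])      # one observation (first entry = intercept)

f = beta @ x
y = 1.0 / (1.0 + np.exp(-f))
grad = y * (1.0 - y) * beta         # the easily calculated derivative

# finite-difference check on the second coordinate
eps = 1e-6
x2 = x.copy(); x2[1] += eps
y2 = 1.0 / (1.0 + np.exp(-(beta @ x2)))
print(grad[1], (y2 - y) / eps)      # should agree to several decimal places
```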

    In terms of binomial data

    A closely related model assumes that each i is associated not with a single Bernoulli trial but with ni independent identically distributed trials, where the observation Yi is the number of successes observed (the sum of the individual Bernoulli-distributed random variables), and hence follows a binomial distribution:

    {\displaystyle Y_{i}\ \sim \operatorname {Bin} (n_{i},p_{i}),{\text{ for }}i=1,\dots ,n}

    An example of this distribution is the fraction of seeds (pi) that germinate after ni are planted.

    In terms of expected values, this model is expressed as follows:

    {\displaystyle p_{i}=\mathbb {E} \left[\left.{\frac {Y_{i}}{n_{i}}}\,\right|\,\mathbf {X} _{i}\right],}

    so that

    {\displaystyle \operatorname {logit} \left(\mathbb {E} \left[\left.{\frac {Y_{i}}{n_{i}}}\,\right|\,\mathbf {X} _{i}\right]\right)=\operatorname {logit} (p_{i})=\ln \left({\frac {p_{i}}{1-p_{i}}}\right)={\boldsymbol {\beta }}\cdot \mathbf {X} _{i},}

    Or equivalently:

    {\displaystyle \operatorname {Pr} (Y_{i}=y\mid \mathbf {X} _{i})={n_{i} \choose y}p_{i}^{y}(1-p_{i})^{n_{i}-y}={n_{i} \choose y}\left({\frac {1}{1+e^{-{\boldsymbol {\beta }}\cdot \mathbf {X} _{i}}}}\right)^{y}\left(1-{\frac {1}{1+e^{-{\boldsymbol {\beta }}\cdot \mathbf {X} _{i}}}}\right)^{n_{i}-y}}

    This model can be fit using the same sorts of methods as the above more basic model.
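
    For concreteness, here is a minimal sketch that evaluates the binomial log-likelihood above at a hypothetical coefficient vector; the counts are made up in the spirit of the seed-germination example.

```python
# Each observation i records n_i trials and y_i successes; p_i is the
# logistic function of beta . X_i, and the log-likelihood sums the
# binomial log-probabilities.
import numpy as np
from scipy.special import gammaln   # log-gamma, for the binomial coefficient

X = np.array([[1.0, 0.2], [1.0, 1.1], [1.0, -0.5]])  # rows X_i (with intercept)
n = np.array([10, 20, 15])          # trials per observation (e.g. seeds planted)
y = np.array([3, 15, 4])            # successes (e.g. seeds germinated)
beta = np.array([-0.4, 1.3])        # hypothetical coefficients

p = 1.0 / (1.0 + np.exp(-X @ beta))
log_lik = np.sum(
    gammaln(n + 1) - gammaln(y + 1) - gammaln(n - y + 1)  # ln C(n_i, y_i)
    + y * np.log(p) + (n - y) * np.log1p(-p)
)
print(log_lik)
```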

    Bayesian logistic regression

    (Figure: comparison of the logistic function with a scaled inverse probit function, i.e. the CDF of the normal distribution, comparing {\displaystyle \sigma (x)} vs. {\displaystyle \Phi ({\sqrt {\frac {\pi }{8}}}x)}, which makes the slopes the same at the origin. This shows the heavier tails of the logistic distribution.)

    In a Bayesian statistics context, prior distributions are normally placed on the regression coefficients, usually in the form of Gaussian distributions. Unfortunately, the Gaussian distribution is not the conjugate prior of the likelihood function in logistic regression. As a result, the posterior distribution is difficult to calculate, even using standard simulation algorithms (e.g. Gibbs sampling)[citation needed].

    There are various possibilities:

    • Don't do a proper Bayesian analysis, but simply compute a maximum a posteriori point estimate of the parameters. This is common, for example, in "maximum entropy" classifiers in machine learning. (A minimal sketch of this option follows this list.)
    • Use a more general approximation method such as the Metropolis–Hastings algorithm.
    • Draw a Markov chain Monte Carlo sample from the exact posterior using an independence Metropolis–Hastings algorithm with a heavy-tailed multivariate candidate distribution: match the mode and curvature of the normal approximation to the posterior at its mode, then use a Student's t shape with low degrees of freedom.[26] This is shown to have excellent convergence properties.
    • Use a latent variable model and approximate the logistic distribution using a more tractable distribution, e.g. a Student's t-distribution or a mixture of normal distributions.
    • Do probit regression instead of logistic regression. This is actually a special case of the previous situation, using a normal distribution in place of a Student's t, mixture of normals, etc. This will be less accurate but has the advantage that probit regression is extremely common, and a ready-made Bayesian implementation may already be available.
    • Use the Laplace approximation of the posterior distribution.[27] This approximates the posterior with a Gaussian distribution. This is not a terribly good approximation, but it suffices if all that is desired is an estimate of the posterior mean and variance. In such a case, an approximation scheme such as variational Bayes can be used.[28]
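
    As a minimal sketch of the first option in the list (a maximum a posteriori point estimate): a Gaussian prior β ~ N(0, τ²I) turns MAP estimation into L2-penalized maximum likelihood. The data, prior variance, step size, and iteration count below are illustrative assumptions.

```python
# MAP estimate for logistic regression under a Gaussian prior, found by
# plain gradient ascent on [log-likelihood + log-prior].
import numpy as np

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(200), rng.normal(size=(200, 2))])
true_beta = np.array([-0.5, 1.0, -2.0])
y = rng.random(200) < 1.0 / (1.0 + np.exp(-X @ true_beta))  # synthetic labels

tau2 = 10.0                              # prior variance (assumption)
beta = np.zeros(3)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    grad = X.T @ (y - p) - beta / tau2   # likelihood gradient + prior gradient
    beta += 0.01 * grad

print(beta)                              # should land near true_beta
```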

    Gibbs sampling with an approximating distribution

    As shown above, logistic regression is equivalent to a latent variable model with an error variable distributed according to a standard logistic distribution. The overall distribution of the latent variable {\displaystyle Y_{i}^{\ast }} is also a logistic distribution, with the mean equal to {\displaystyle {\boldsymbol {\beta }}\cdot \mathbf {X} _{i}} (i.e. the fixed quantity added to the error variable). This model considerably simplifies the application of techniques such as Gibbs sampling. However, sampling the regression coefficients is still difficult, because of the lack of conjugacy between the normal and logistic distributions. Changing the prior distribution over the regression coefficients is of no help, because the logistic distribution is not in the exponential family and thus has no conjugate prior.

    One possibility is to use a more general Markov chain Monte Carlo technique, such as the Metropolis–Hastings algorithm, which can sample arbitrary distributions. Another possibility, however, is to replace the logistic distribution with a similar-shaped distribution that is easier to work with using Gibbs sampling. In fact, the logistic and normal distributions have a similar shape, and thus one possibility is simply to have normally distributed errors. Because the normal distribution is conjugate to itself, sampling the regression coefficients becomes easy. In fact, this model is exactly the model used in probit regression.

    However, the normal and logistic distributions differ in that the logistic has heavier tails. As a result, it is more robust to inaccuracies in the underlying model (which are inevitable, in that the model is essentially always an approximation) or to errors in the data. Probit regression loses some of this robustness.

    Another alternative is to use errors distributed as a Student's t-distribution. The Student's t-distribution has heavy tails, and is easy to sample from because it is the compound distribution of a normal distribution with variance distributed as an inverse gamma distribution. In other words, if a normal distribution is used for the error variable, and another latent variable, following an inverse gamma distribution, is added corresponding to the variance of this error variable, the marginal distribution of the error variable will follow a Student's t distribution. Because of the various conjugacy relationships, all variables in this model are easy to sample from.
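
    This compound-distribution fact is easy to check by simulation. The sketch below uses the equivalent construction t_ν = Z / √(V/ν) with Z standard normal and V chi-squared (the chi-squared form of the inverse-gamma mixture); ν = 9 anticipates the moment-matching result below.

```python
# A normal with inverse-gamma-distributed variance has a Student's t
# marginal; equivalently, t_nu = Z / sqrt(V/nu), Z ~ N(0,1), V ~ chi^2_nu.
import numpy as np

rng = np.random.default_rng(2)
nu = 9
n = 100_000

z = rng.standard_normal(n)
v = rng.chisquare(nu, n)            # chi^2_nu draws for the mixing variable
t = z / np.sqrt(v / nu)             # marginally Student's t with nu dof

# variance of t_nu is nu/(nu-2); compare against the sample variance
print(t.var(), nu / (nu - 2))
```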

    The Student's t distribution that best approximates a standard logistic distribution can be determined by matching the moments of the two distributions. The Student's t distribution has three parameters, and since the skewness of both distributions is always 0, the first four moments can all be matched, using the following equations:

    {\displaystyle {\begin{aligned}\mu &=0\\{\frac {\nu }{\nu -2}}s^{2}&={\frac {\pi ^{2}}{3}}\\{\frac {6}{\nu -4}}&={\frac {6}{5}}\end{aligned}}}

    This yields the following values:

    {\displaystyle {\begin{aligned}\mu &=0\\s&={\sqrt {{\frac {7}{9}}{\frac {\pi ^{2}}{3}}}}\\\nu &=9\end{aligned}}}
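
    These values are easy to verify numerically from the moment formulas above:

```python
# Check the moment-matching solution: with nu = 9 and
# s = sqrt((7/9)(pi^2/3)), the scaled t matches the standard logistic
# distribution's variance (pi^2/3) and excess kurtosis (6/5).
import numpy as np

nu = 9
s = np.sqrt((7 / 9) * (np.pi**2 / 3))

var_t = s**2 * nu / (nu - 2)        # variance of the scaled t
kurt_t = 6 / (nu - 4)               # excess kurtosis (unchanged by scaling)

print(var_t, np.pi**2 / 3)          # both ~3.2899
print(kurt_t, 6 / 5)                # both 1.2
```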

    The following graphs compare the standard logistic distribution with the Student's t distribution that matches the first four moments using the above-determined values, as well as the normal distribution that matches the first two moments. Note how much more closely the Student's t distribution agrees, especially in the tails. Beyond about two standard deviations from the mean, the logistic and normal distributions diverge rapidly, but the logistic and Student's t distributions don't start diverging significantly until more than 5 standard deviations away.

    (Another possibility, also amenable to Gibbs sampling, is to approximate the logistic distribution using a mixture density of normal distributions.)

    (Figures: comparison of the logistic and approximating distributions (Student's t and normal), shown over the central region, the tails, the further tails, and the extreme tails.)

    Extensions

    There are many extensions:

    • Multinomial logistic regression (or multinomial logit) handles the case of a multi-way categorical dependent variable (with unordered values, also called "classification"). Note that the general case of having dependent variables with more than two values is termed polytomous regression.
    • Ordered logistic regression (or ordered logit) handles ordinal dependent variables (ordered values).
    • Mixed logit is an extension of multinomial logit that allows for correlations among the choices of the dependent variable.
    • An extension of the logistic model to sets of interdependent variables is the conditional random field.

    Software

    Most statistical software can do binary logistic regression.

    • SAS
      • PROC LOGISTIC for basic logistic regression.[29]
      • PROC CATMOD when all the variables are categorical.[30]
      • PROC GLIMMIX for multilevel model logistic regression.[31]
    • R
      • glm in the stats package (using family = binomial)[32]
      • GLMNET package for an efficient implementation of regularized logistic regression
      • glmer in the lme4 package for mixed-effects logistic regression
    • Python
      • Logistic regression with ARD prior (code, tutorial)
      • Bayesian logistic regression with Laplace approximation (code, tutorial)
      • Variational logistic regression (code, tutorial)
    • NCSS
      • Logistic Regression in NCSS
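
    In the Python ecosystem, scikit-learn (not in the list above) is also widely used; a minimal sketch on synthetic data:

```python
# Fit a binary logistic regression with scikit-learn on hypothetical data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 2))
y = (rng.random(300) < 1 / (1 + np.exp(-(1.5 * X[:, 0] - X[:, 1])))).astype(int)

model = LogisticRegression().fit(X, y)   # L2-regularized fit by default
print(model.intercept_, model.coef_)     # estimated coefficients
print(model.predict_proba(X[:3]))        # class probabilities for new rows
```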

    See also

    • Logistic function
    • Discrete choice
    • Jarrow–Turnbull model
    • Limited dependent variable
    • Multinomial logit model
    • Ordered logit
    • Hosmer–Lemeshow test
    • Brier score
    • MLPACK - contains a C++ implementation of logistic regression
    • Local case-control sampling
    • Logistic model tree

    References

  • David A. Freedman (2009). Statistical Models: Theory and Practice. Cambridge University Press. p. 128.
  • Walker, SH; Duncan, DB (1967). "Estimation of the probability of an event as a function of several independent variables". Biometrika 54: 167–178. doi:10.2307/2333860.
  • Cox, DR (1958). "The regression analysis of binary sequences (with discussion)". J Roy Stat Soc B 20: 215–242.
  • Gareth James; Daniela Witten; Trevor Hastie; Robert Tibshirani (2013). An Introduction to Statistical Learning. Springer. p. 6.
  • Boyd, C. R.; Tolson, M. A.; Copes, W. S. (1987). "Evaluating trauma care: The TRISS method. Trauma Score and the Injury Severity Score". The Journal of Trauma 27 (4): 370–378. doi:10.1097/00005373-198704000-00005. PMID 3106646.
  • Kologlu, M.; Elker, D.; Altun, H.; Sayek, I. (2001). "Validation of MPI and OIA II in two different groups of patients with secondary peritonitis". Hepato-Gastroenterology 48 (37): 147–151.
  • Biondo, S.; Ramos, E.; Deiros, M.; et al. (2000). "Prognostic factors for mortality in left colonic peritonitis: a new scoring system". J. Am. Coll. Surg. 191 (6): 635–642.
  • Marshall, J.C.; Cook, D.J.; Christou, N.V.; et al. (1995). "Multiple Organ Dysfunction Score: A reliable descriptor of a complex clinical outcome". Crit. Care Med. 23: 1638–1652.
  • Le Gall, J.-R.; Lemeshow, S.; Saulnier, F. (1993). "A new Simplified Acute Physiology Score (SAPS II) based on a European/North American multicenter study". JAMA 270: 2957–2963.
  • Truett, J; Cornfield, J; Kannel, W (1967). "A multivariate analysis of the risk of coronary heart disease in Framingham". Journal of Chronic Diseases 20 (7): 511–24. doi:10.1016/0021-9681(67)90082-3. PMID 6028270.
  • Harrell, Frank E. (2001). Regression Modeling Strategies. Springer-Verlag. ISBN 0-387-95232-2.
  • M. Strano; B.M. Colosimo (2006). "Logistic regression analysis for experimental determination of forming limit diagrams". International Journal of Machine Tools and Manufacture 46 (6): 673–682. doi:10.1016/j.ijmachtools.2005.07.005.
  • Palei, S. K.; Das, S. K. (2009). "Logistic regression model for prediction of roof fall risks in bord and pillar workings in coal mines: An approach". Safety Science 47: 88–96. doi:10.1016/j.ssci.2008.01.002.
  • Hosmer, David W.; Lemeshow, Stanley (2000). Applied Logistic Regression (2nd ed.). Wiley. ISBN 0-471-35632-8.
  • http://www.planta.cn/forum/files_planta/introduction_to_categorical_data_analysis_805.pdf
  • Everitt, Brian (1998). The Cambridge Dictionary of Statistics. Cambridge, UK; New York: Cambridge University Press. ISBN 0521593468.
  • Menard, Scott W. (2002). Applied Logistic Regression (2nd ed.). SAGE. ISBN 978-0-7619-2208-7.
  • Menard, ch. 1.3.
  • Peduzzi, P; Concato, J; Kemper, E; Holford, TR; Feinstein, AR (December 1996). "A simulation study of the number of events per variable in logistic regression analysis". Journal of Clinical Epidemiology 49 (12): 1373–9. doi:10.1016/s0895-4356(96)00236-3. PMID 8970487.
  • Greene, William N. (2003). Econometric Analysis (Fifth ed.). Prentice-Hall. ISBN 0-13-066189-9.
  • Cohen, Jacob; Cohen, Patricia; West, Steven G.; Aiken, Leona S. (2002). Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences (3rd ed.). Routledge. ISBN 978-0-8058-2223-6.
  • Measures of Fit for Logistic Regression.
  • Tjur, Tue (2009). "Coefficients of determination in logistic regression models". American Statistician: 366–372.
  • Hosmer, D.W. (1997). "A comparison of goodness-of-fit tests for the logistic regression model". Stat in Med 16: 965–980. doi:10.1002/(sici)1097-0258(19970515)16:9<965::aid-sim509>3.3.co;2-f.
  • https://class.stanford.edu/c4x/HumanitiesScience/StatLearning/asset/classification.pdf, slide 16.
  • Bolstad, William M. (2010). Understanding Computational Bayesian Statistics. Wiley. ISBN 978-0-470-04609-8.
  • Bishop, Christopher M. "Chapter 4. Linear Models for Classification". Pattern Recognition and Machine Learning. Springer Science+Business Media, LLC. pp. 217–218. ISBN 978-0387-31073-2.
  • Bishop, Christopher M. "Chapter 10. Approximate Inference". Pattern Recognition and Machine Learning. Springer Science+Business Media, LLC. pp. 498–505. ISBN 978-0387-31073-2.
  • https://support.sas.com/documentation/cdl/en/statug/63347/HTML/default/viewer.htm#logistic_toc.htm
  • https://support.sas.com/documentation/cdl/en/statug/63347/HTML/default/viewer.htm#statug_catmod_sect003.htm
  • https://support.sas.com/documentation/cdl/en/statug/63033/HTML/default/viewer.htm#glimmix_toc.htm
  • Gelman, Andrew; Hill, Jennifer (2007). Data Analysis Using Regression and Multilevel/Hierarchical Models. New York: Cambridge University Press. pp. 79–108. ISBN 978-0-521-68689-1.
    Further reading

    • Agresti, Alan (2002). Categorical Data Analysis. New York: Wiley-Interscience. ISBN 0-471-36093-7.
    • Amemiya, Takeshi (1985). "Qualitative Response Models". Advanced Econometrics. Oxford: Basil Blackwell. pp. 267–359. ISBN 0-631-13345-3.
    • Balakrishnan, N. (1991). Handbook of the Logistic Distribution. Marcel Dekker, Inc. ISBN 978-0-8247-8587-1.
    • Gouriéroux, Christian (2000). "The Simple Dichotomy". Econometrics of Qualitative Dependent Variables. New York: Cambridge University Press. pp. 6–37. ISBN 0-521-58985-1.
    • Greene, William H. (2003). Econometric Analysis (Fifth ed.). Prentice Hall. ISBN 0-13-066189-9.
    • Hilbe, Joseph M. (2009). Logistic Regression Models. Chapman & Hall/CRC Press. ISBN 978-1-4200-7575-5.
    • Hosmer, David (2013). Applied Logistic Regression. Hoboken, New Jersey: Wiley. ISBN 978-0470582473.
    • Howell, David C. (2010). Statistical Methods for Psychology (7th ed.). Belmont, CA: Thomson Wadsworth. ISBN 978-0-495-59786-5.
    • Peduzzi, P.; J. Concato; E. Kemper; T.R. Holford; A.R. Feinstein (1996). "A simulation study of the number of events per variable in logistic regression analysis". Journal of Clinical Epidemiology 49 (12): 1373–1379. doi:10.1016/s0895-4356(96)00236-3. PMID 8970487.

    External links


    • Econometrics Lecture (topic: Logit model) on YouTube by Mark Thoma
    • Logistic Regression Interpretation
    • Logistic Regression tutorial
    • Open source Excel add-in implementation of Logistic Regression

    Reposted from: https://www.cnblogs.com/davidwang456/articles/5592886.html
