
Logistic regression (reposted)


Original article: https://en.wikipedia.org/wiki/Logistic_regression

In statistics, logistic regression, or logit regression, or logit model[1] is a regression model where the dependent variable (DV) is categorical.

Logistic regression was developed by statistician David Cox in 1958.[2][3] The binary logistic model is used to estimate the probability of a binary response based on one or more predictor (or independent) variables (features). As such it is not a classification method. It could be called a qualitative response/discrete choice model in the terminology of economics.

Logistic regression measures the relationship between the categorical dependent variable and one or more independent variables by estimating probabilities using a logistic function, which is the cumulative logistic distribution. Thus, it treats the same set of problems as probit regression using similar techniques, with the latter using a cumulative normal distribution curve instead. Equivalently, in the latent variable interpretations of these two methods, the first assumes a standard logistic distribution of errors and the second a standard normal distribution of errors.[citation needed]

Logistic regression can be seen as a special case of the generalized linear model and thus analogous to linear regression. The model of logistic regression, however, is based on quite different assumptions (about the relationship between dependent and independent variables) from those of linear regression. In particular the key differences between these two models can be seen in the following two features of logistic regression. First, the conditional distribution {\displaystyle y\mid x} is a Bernoulli distribution rather than a Gaussian distribution, because the dependent variable is binary. Second, the predicted values are probabilities and are therefore restricted to (0,1) through the logistic distribution function, because logistic regression predicts the probability of particular outcomes.

Logistic regression is an alternative to Fisher's 1936 method, linear discriminant analysis.[4] If the assumptions of linear discriminant analysis hold, the conditioning can be reversed to produce logistic regression. The converse is not true, however, because logistic regression does not require the multivariate normal assumption of discriminant analysis.[citation needed]


Contents

  • 1 Fields and example applications
    • 1.1 Example: Probability of passing an exam versus hours of study
  • 2 Basics
  • 3 Latent variable interpretation
  • 4 Logistic function, odds, odds ratio, and logit
    • 4.1 Definition of the logistic function
    • 4.2 Definition of the inverse of the logistic function
    • 4.3 Interpretation of these terms
    • 4.4 Definition of the odds
    • 4.5 Definition of the odds ratio
    • 4.6 Multiple explanatory variables
  • 5 Model fitting
    • 5.1 Estimation
      • 5.1.1 Maximum likelihood estimation
    • 5.2 Evaluating goodness of fit
      • 5.2.1 Deviance and likelihood ratio tests
      • 5.2.2 Pseudo-R2s
      • 5.2.3 Hosmer–Lemeshow test
  • 6 Coefficients
    • 6.1 Likelihood ratio test
    • 6.2 Wald statistic
    • 6.3 Case-control sampling
  • 7 Formal mathematical specification
    • 7.1 Setup
    • 7.2 As a generalized linear model
    • 7.3 As a latent-variable model
    • 7.4 As a two-way latent-variable model
      • 7.4.1 Example
    • 7.5 As a "log-linear" model
    • 7.6 As a single-layer perceptron
    • 7.7 In terms of binomial data
  • 8 Bayesian logistic regression
    • 8.1 Gibbs sampling with an approximating distribution
  • 9 Extensions
  • 10 Software
  • 11 See also
  • 12 References
  • 13 Further reading
  • 14 External links


Fields and example applications

Logistic regression is used widely in many fields, including the medical and social sciences. For example, the Trauma and Injury Severity Score (TRISS), which is widely used to predict mortality in injured patients, was originally developed by Boyd et al. using logistic regression.[5]?Many other medical scales used to assess severity of a patient have been developed using logistic regression.[6][7][8][9]?Logistic regression may be used to predict whether a patient has a given disease (e.g.?diabetes;?coronary heart disease), based on observed characteristics of the patient (age, sex,?body mass index, results of various?blood tests, etc.).[1][10]?Another example might be to predict whether an American voter will vote Democratic or Republican, based on age, income, sex, race, state of residence, votes in previous elections, etc.[11]?The technique can also be used in?engineering, especially for predicting the probability of failure of a given process, system or product.[12][13]?It is also used in?marketing?applications such as prediction of a customer's propensity to purchase a product or halt a subscription, etc.[citation needed]?In?economics?it can be used to predict the likelihood of a person's choosing to be in the labor force, and a business application would be to predict the likelihood of a homeowner defaulting on a?mortgage.?Conditional random fields, an extension of logistic regression to sequential data, are used in?natural language processing.

Example: Probability of passing an exam versus hours of study

A group of 20 students spend between 0 and 6 hours studying for an exam. How does the number of hours spent studying affect the probability that the student will pass the exam?

The table shows the number of hours each student spent studying, and whether they passed (1) or failed (0).

Hours  0.50  0.75  1.00  1.25  1.50  1.75  1.75  2.00  2.25  2.50  2.75  3.00  3.25  3.50  4.00  4.25  4.50  4.75  5.00  5.50
Pass   0     0     0     0     0     0     1     0     1     0     1     0     1     0     1     1     1     1     1     1

The graph shows the probability of passing the exam versus the number of hours studying, with the logistic regression curve fitted to the data.

Graph of a logistic regression curve showing probability of passing an exam versus hours studying

The logistic regression analysis gives the following output.

             Coefficient   Std. Error   z-value   P-value (Wald)
Intercept    -4.0777       1.7610       -2.316    0.0206
Hours         1.5046       0.6287        2.393    0.0167

The output indicates that hours studying is significantly associated with the probability of passing the exam (p=0.0167,?Wald test). The output also provides the coefficients for Intercept = -4.0777 and Hours = 1.5046. These coefficients are entered in the logistic regression equation to estimate the probability of passing the exam:

  • Probability of passing exam =1/(1+exp(-(-4.0777+1.5046* Hours)))

For example, for a student who studies 2 hours, entering the value Hours =2 in the equation gives the estimated probability of passing the exam of p=0.26:

  • Probability of passing exam =1/(1+exp(-(-4.0777+1.5046*2))) = 0.26.

Similarly, for a student who studies 4 hours, the estimated probability of passing the exam is p=0.87:

  • Probability of passing exam =1/(1+exp(-(-4.0777+1.5046*4))) = 0.87.

This table shows the probability of passing the exam for several values of hours studying.

Hours of study   Probability of passing exam
1                0.07
2                0.26
3                0.61
4                0.87
5                0.97

The output from the logistic regression analysis gives a p-value of p=0.0167, which is based on the Wald z-score. Rather than the Wald method, the recommended method to calculate the p-value for logistic regression is the?Likelihood Ratio Test?(LRT), which for this data gives p=0.0006.
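These probabilities can be reproduced directly from the fitted coefficients. A minimal Python sketch (the function name is ours; only the coefficients reported in the output above are used):

```python
import math

# Coefficients reported by the logistic regression output above
intercept = -4.0777
slope = 1.5046  # per hour of study

def passing_probability(hours):
    """Estimated probability of passing = 1 / (1 + exp(-(intercept + slope*hours)))."""
    return 1.0 / (1.0 + math.exp(-(intercept + slope * hours)))

for hours in range(1, 6):
    print(hours, round(passing_probability(hours), 2))
# prints approximately 0.07, 0.26, 0.61, 0.87, 0.97, matching the table above
```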

Basics

Logistic regression can be binomial, ordinal or multinomial. Binomial or binary logistic regression deals with situations in which the observed outcome for a dependent variable can have only two possible types (for example, "dead" vs. "alive" or "win" vs. "loss"). Multinomial logistic regression deals with situations where the outcome can have three or more possible types (e.g., "disease A" vs. "disease B" vs. "disease C") that are not ordered. Ordinal logistic regression deals with dependent variables that are ordered. In binary logistic regression, the outcome is usually coded as "0" or "1", as this leads to the most straightforward interpretation.[14] If a particular observed outcome for the dependent variable is the noteworthy possible outcome (referred to as a "success" or a "case") it is usually coded as "1" and the contrary outcome (referred to as a "failure" or a "noncase") as "0". Logistic regression is used to predict the odds of being a case based on the values of the independent variables (predictors). The odds are defined as the probability that a particular outcome is a case divided by the probability that it is a noncase.

Like other forms of regression analysis, logistic regression makes use of one or more predictor variables that may be either continuous or categorical. Unlike ordinary linear regression, however, logistic regression is used for predicting binary dependent variables (treating the dependent variable as the outcome of a Bernoulli trial) rather than a continuous outcome. Given this difference, the assumptions of linear regression are violated. In particular, the residuals cannot be normally distributed. In addition, linear regression may make nonsensical predictions for a binary dependent variable. What is needed is a way to convert a binary variable into a continuous one that can take on any real value (negative or positive). To do that, logistic regression first takes the odds of the event happening for different levels of each independent variable, then takes the ratio of those odds (which is continuous but cannot be negative) and then takes the logarithm of that ratio. This is referred to as the logit, or log-odds, and creates a continuous criterion as a transformed version of the dependent variable.

Thus the logit transformation is referred to as the?link function?in logistic regression—although the dependent variable in logistic regression is binomial, the logit is the continuous criterion upon which linear regression is conducted.[14]

The logit of success is then fitted to the predictors using?linear regression?analysis. The predicted value of the logit is converted back into predicted odds via the inverse of the natural logarithm, namely the?exponential function. Thus, although the observed dependent variable in logistic regression is a zero-or-one variable, the logistic regression estimates the odds, as a continuous variable, that the dependent variable is a success (a case). In some applications the odds are all that is needed. In others, a specific yes-or-no prediction is needed for whether the dependent variable is or is not a case; this categorical prediction can be based on the computed odds of a success, with predicted odds above some chosen cutoff value being translated into a prediction of a success.

Latent variable interpretation

The logistic regression can be understood simply as finding the?{\displaystyle \beta }?parameters that best fit:

{\displaystyle y=1}?if?{\displaystyle \beta _{0}+\beta _{1}x+\epsilon >0}
{\displaystyle y=0}, otherwise

where?{\displaystyle \epsilon }?is an error distributed by the standard?logistic distribution. (If the standard normal distribution is used instead, it is a probit regression.)

The associated latent variable is?{\displaystyle y\prime =\beta _{0}+\beta _{1}x+\epsilon }. The error term?{\displaystyle \epsilon }?is not observed, and so the?{\displaystyle y\prime }?is also an unobservable, hence termed "latent". (The observed data are values of?{\displaystyle y}?and?{\displaystyle x}.) Unlike ordinary regression, however, the?{\displaystyle \beta }?parameters cannot be expressed by any direct formula of the?{\displaystyle y}?and?{\displaystyle x}?values in the observed data. Instead they are to be found by an iterative search process, usually implemented by a software program, that finds the maximum of a complicated "likelihood expression" that is a function of all of the observed?{\displaystyle y}?and?{\displaystyle x}?values. The estimation approach is explained below.

Logistic function, odds, odds ratio, and logit

Figure 1. The standard logistic function?{\displaystyle \sigma (t)}; note that?{\displaystyle \sigma (t)\in (0,1)}?for all?{\displaystyle t}.

Definition of the logistic function

An explanation of logistic regression can begin with an explanation of the standard?logistic function. The logistic function is useful because it can take an input with any value from negative to positive infinity, whereas the output always takes values between zero and one[14]?and hence is interpretable as a probability. The logistic function?{\displaystyle \sigma (t)}?is defined as follows:

{\displaystyle \sigma (t)={\frac {e^{t}}{e^{t}+1}}={\frac {1}{1+e^{-t}}}}

A graph of the logistic function on the?t-interval (-6,6) is shown in Figure 1.

Let us assume that?{\displaystyle t}?is a linear function of a single?explanatory variable?{\displaystyle x}?(the case where?{\displaystyle t}?is a?linear combination?of multiple explanatory variables is treated similarly). We can then express?{\displaystyle t}?as follows:

{\displaystyle t=\beta _{0}+\beta _{1}x}

And the logistic function can now be written as:

{\displaystyle F(x)={\frac {1}{1+e^{-(\beta _{0}+\beta _{1}x)}}}}

Note that {\displaystyle F(x)} is interpreted as the probability of the dependent variable equaling a "success" or "case" rather than a failure or non-case. It is clear that the response variables {\displaystyle Y_{i}} are not identically distributed: {\displaystyle P(Y_{i}=1\mid X)} differs from one data point {\displaystyle X_{i}} to another, though they are independent given the design matrix {\displaystyle X} and the shared parameters {\displaystyle \beta }.[1]

Definition of the inverse of the logistic function

We can now define the inverse of the logistic function,?{\displaystyle g}, the?logit?(log odds):

{\displaystyle g(F(x))=\ln \left({\frac {F(x)}{1-F(x)}}\right)=\beta _{0}+\beta _{1}x,}

and equivalently, after exponentiating both sides:

{\displaystyle {\frac {F(x)}{1-F(x)}}=e^{\beta _{0}+\beta _{1}x}.}
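As a quick numerical illustration (a minimal sketch; the function names are ours, not part of any standard library), the logistic function and its inverse, the logit, undo each other:

```python
import math

def logistic(t):
    """Standard logistic function sigma(t) = 1 / (1 + e^(-t)); output lies in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-t))

def logit(p):
    """Inverse of the logistic function: the log-odds ln(p / (1 - p))."""
    return math.log(p / (1.0 - p))

t = 0.75
p = logistic(t)
print(p)         # about 0.679
print(logit(p))  # recovers 0.75, since the logit is the inverse of the logistic function
```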

Interpretation of these terms

In the above equations, the terms are as follows:

  • {\displaystyle g(\cdot )}?refers to the logit function. The equation for?{\displaystyle g(F(x))}?illustrates that the?logit?(i.e., log-odds or natural logarithm of the odds) is equivalent to the linear regression expression.
  • {\displaystyle \ln }?denotes the?natural logarithm.
  • {\displaystyle F(x)}?is the probability that the dependent variable equals a case, given some linear combination of the predictors. The formula for?{\displaystyle F(x)}?illustrates that the probability of the dependent variable equaling a case is equal to the value of the logistic function of the linear regression expression. This is important in that it shows that the value of the linear regression expression can vary from negative to positive infinity and yet, after transformation, the resulting expression for the probability?{\displaystyle F(x)}?ranges between 0 and 1.
  • {\displaystyle \beta _{0}}?is the?intercept?from the linear regression equation (the value of the criterion when the predictor is equal to zero).
  • {\displaystyle \beta _{1}x}?is the regression coefficient multiplied by some value of the predictor.
  • base?{\displaystyle e}?denotes the exponential function.

Definition of the odds

The odds of the dependent variable equaling a case (given some linear combination?{\displaystyle x}?of the predictors) is equivalent to the exponential function of the linear regression expression. This illustrates how the?logit?serves as a link function between the probability and the linear regression expression. Given that the logit ranges between negative and positive infinity, it provides an adequate criterion upon which to conduct linear regression and the logit is easily converted back into the odds.[14]

So we define odds of the dependent variable equaling a case (given some linear combination?{\displaystyle x}?of the predictors) as follows:

{\displaystyle {\text{odds}}=e^{\beta _{0}+\beta _{1}x}.}

Definition of the odds ratio

For a continuous independent variable the odds ratio can be defined as:

{\displaystyle \mathrm {OR} ={\frac {\operatorname {odds} (x+1)}{\operatorname {odds} (x)}}={\frac {\frac {F(x+1)}{1-F(x+1)}}{\frac {F(x)}{1-F(x)}}}={\frac {e^{\beta _{0}+\beta _{1}(x+1)}}{e^{\beta _{0}+\beta _{1}x}}}=e^{\beta _{1}}}

This exponential relationship provides an interpretation for?{\displaystyle \beta _{1}}: The odds multiply by?{\displaystyle e^{\beta _{1}}}?for every 1-unit increase in x.[15]

For a binary independent variable the odds ratio is defined as?{\displaystyle {\frac {ad}{bc}}}?where a, b, c and d are cells in a 2x2?contingency table.[16]
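This multiplicative interpretation is easy to verify numerically. A minimal sketch, reusing the exam-example coefficients purely as illustrative values:

```python
import math

beta0, beta1 = -4.0777, 1.5046   # illustrative values (the exam example above)

def odds(x):
    """Odds of a 'case' at predictor value x: exp(beta0 + beta1 * x)."""
    return math.exp(beta0 + beta1 * x)

x = 2.0
odds_ratio = odds(x + 1) / odds(x)
print(odds_ratio)        # equals e^beta1 ...
print(math.exp(beta1))   # ... about 4.50, regardless of the chosen x
```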

Multiple explanatory variables

If there are multiple explanatory variables, the above expression?{\displaystyle \beta _{0}+\beta _{1}x}?can be revised to?{\displaystyle \beta _{0}+\beta _{1}x_{1}+\beta _{2}x_{2}+\cdots +\beta _{m}x_{m}.}?Then when this is used in the equation relating the logged odds of a success to the values of the predictors, the linear regression will be a?multiple regression?with?m?explanators; the parameters?{\displaystyle \beta _{j}}?for all?j?= 0, 1, 2, ...,?m?are all estimated.

Model fitting

Estimation

Because the model can be expressed as a generalized linear model (see below), for 0<p<1, ordinary least squares can suffice, with R-squared as the measure of goodness of fit in the fitting space. When p=0 or 1, more complex methods are required.[citation needed]

Maximum likelihood estimation

The regression coefficients are usually estimated using?maximum likelihood?estimation.[17]?Unlike linear regression with normally distributed residuals, it is not possible to find a closed-form expression for the coefficient values that maximize the likelihood function, so that an iterative process must be used instead; for example?Newton's method. This process begins with a tentative solution, revises it slightly to see if it can be improved, and repeats this revision until improvement is minute, at which point the process is said to have converged.[18]

In some instances the model may not reach convergence. Nonconvergence of a model indicates that the coefficients are not meaningful because the iterative process was unable to find appropriate solutions. A failure to converge may occur for a number of reasons: having a large ratio of predictors to cases, multicollinearity, sparseness, or complete separation.

  • Having a large ratio of variables to cases results in an overly conservative Wald statistic (discussed below) and can lead to nonconvergence.
  • Multicollinearity refers to unacceptably high correlations between predictors. As multicollinearity increases, coefficients remain unbiased but standard errors increase and the likelihood of model convergence decreases.[17]?To detect multicollinearity amongst the predictors, one can conduct a linear regression analysis with the predictors of interest for the sole purpose of examining the tolerance statistic?[17]?used to assess whether multicollinearity is unacceptably high.
  • Sparseness in the data refers to having a large proportion of empty cells (cells with zero counts). Zero cell counts are particularly problematic with categorical predictors. With continuous predictors, the model can infer values for the zero cell counts, but this is not the case with categorical predictors. The model will not converge with zero cell counts for categorical predictors because the natural logarithm of zero is an undefined value, so that final solutions to the model cannot be reached. To remedy this problem, researchers may collapse categories in a theoretically meaningful way or add a constant to all cells.[17]
  • Another numerical problem that may lead to a lack of convergence is complete separation, which refers to the instance in which the predictors perfectly predict the criterion?– all cases are accurately classified. In such instances, one should reexamine the data, as there is likely some kind of error.[14]

As a rule of thumb, logistic regression models require a minimum of about 10 events per explanatory variable (where event denotes the cases belonging to the less frequent category in the dependent variable).[19]
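For concreteness, the iterative estimation can be sketched as a Newton-Raphson loop (equivalent to iteratively reweighted least squares). This is a minimal illustration of the idea, not a production implementation; the data at the end are the exam example from earlier in the article:

```python
import numpy as np

def fit_logistic(X, y, n_iter=25, tol=1e-10):
    """Newton-Raphson estimation of binary logistic regression coefficients.

    X -- (n, m+1) design matrix whose first column is all ones (the intercept)
    y -- (n,) array of 0/1 outcomes
    """
    beta = np.zeros(X.shape[1])              # tentative starting solution
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))  # current fitted probabilities
        W = p * (1.0 - p)                    # diagonal of the weight matrix
        gradient = X.T @ (y - p)             # score (gradient of the log-likelihood)
        hessian = X.T @ (X * W[:, None])     # observed information, X^T W X
        step = np.linalg.solve(hessian, gradient)
        beta += step
        if np.max(np.abs(step)) < tol:       # improvement is minute: converged
            break
    return beta

# The exam data from the example above recovers roughly (-4.08, 1.50):
hours = np.array([0.50, 0.75, 1.00, 1.25, 1.50, 1.75, 1.75, 2.00, 2.25, 2.50,
                  2.75, 3.00, 3.25, 3.50, 4.00, 4.25, 4.50, 4.75, 5.00, 5.50])
passed = np.array([0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1])
X = np.column_stack([np.ones_like(hours), hours])
print(fit_logistic(X, passed))
```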

Evaluating goodness of fit

Goodness of fit in linear regression models is generally measured using R2. Since this has no direct analog in logistic regression, various methods,[20]:ch.21 including the following, can be used instead.

Deviance and likelihood ratio tests

In linear regression analysis, one is concerned with partitioning variance via the?sum of squares?calculations – variance in the criterion is essentially divided into variance accounted for by the predictors and residual variance. In logistic regression analysis,?deviance?is used in lieu of sum of squares calculations.[21]?Deviance is analogous to the sum of squares calculations in linear regression[14]?and is a measure of the lack of fit to the data in a logistic regression model.[21]?When a "saturated" model is available (a model with a theoretically perfect fit), deviance is calculated by comparing a given model with the saturated model.[14]?This computation gives the?likelihood-ratio test:[14]

{\displaystyle D=-2\ln {\frac {\text{likelihood of the fitted model}}{\text{likelihood of the saturated model}}}.}

In the above equation D represents the deviance and ln represents the natural logarithm. The log of this likelihood ratio (the ratio of the fitted model to the saturated model) will produce a negative value, hence the need for a negative sign. D can be shown to follow an approximate chi-squared distribution.[14] Smaller values indicate better fit as the fitted model deviates less from the saturated model. When assessed upon a chi-square distribution, nonsignificant chi-square values indicate very little unexplained variance and thus, good model fit. Conversely, a significant chi-square value indicates that a significant amount of the variance is unexplained.

When the saturated model is not available (a common case), deviance is calculated simply as -2·(log likelihood of the fitted model), and the reference to the saturated model's log likelihood can be removed from all that follows without harm.

Two measures of deviance are particularly important in logistic regression: null deviance and model deviance. The null deviance represents the difference between a model with only the intercept (which means "no predictors") and the saturated model. The model deviance represents the difference between a model with at least one predictor and the saturated model.[21]?In this respect, the null model provides a baseline upon which to compare predictor models. Given that deviance is a measure of the difference between a given model and the saturated model, smaller values indicate better fit. Thus, to assess the contribution of a predictor or set of predictors, one can subtract the model deviance from the null deviance and assess the difference on a?{\displaystyle \chi _{s-p}^{2},}?chi-square distribution withdegrees of freedom[14]?equal to the difference in the number of parameters estimated.

Let

{\displaystyle {\begin{aligned}D_{\text{null}}&=-2\ln {\frac {\text{likelihood of null model}}{\text{likelihood of the saturated model}}}\\\ D_{\text{fitted}}&=-2\ln {\frac {\text{likelihood of fitted model}}{\text{likelihood of the saturated model}}}.\end{aligned}}}

Then the difference of both is:

{\displaystyle {\begin{aligned}D_{\text{null}}-D_{\text{fitted}}&=-2\left(\ln {\frac {\text{likelihood of null model}}{\text{likelihood of the saturated model}}}-\ln {\frac {\text{likelihood of fitted model}}{\text{likelihood of the saturated model}}}\right)\\&=-2\ln {\frac {\frac {\text{likelihood of null model}}{\text{likelihood of the saturated model}}}{\frac {\text{likelihood of fitted model}}{\text{likelihood of the saturated model}}}}\\&=-2\ln {\frac {\text{likelihood of the null model}}{\text{likelihood of fitted model}}}.\end{aligned}}}

If the model deviance is significantly smaller than the null deviance then one can conclude that the predictor or set of predictors significantly improved model fit. This is analogous to the?F-test used in linear regression analysis to assess the significance of prediction.[21]
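This comparison can be sketched in a few lines of Python, continuing the exam-data example (the helper fit_logistic and the arrays X and passed are taken from the estimation sketch above; scipy is assumed to be available for the chi-square tail probability):

```python
import numpy as np
from scipy.stats import chi2

def log_likelihood(X, y, beta):
    """Binomial log-likelihood of a logistic model with coefficients beta."""
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# Null model: intercept only.  Fitted model: intercept + hours.
beta_null = fit_logistic(X[:, :1], passed)
beta_fitted = fit_logistic(X, passed)

# For binary data the saturated model has log-likelihood 0, so D = -2 * log-likelihood.
D_null = -2 * log_likelihood(X[:, :1], passed, beta_null)
D_fitted = -2 * log_likelihood(X, passed, beta_fitted)

# Likelihood ratio statistic on 1 degree of freedom (one extra parameter)
lr_statistic = D_null - D_fitted
p_value = chi2.sf(lr_statistic, df=1)
print(lr_statistic, p_value)   # p is about 0.0006, as quoted in the example above
```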

Pseudo-R2s

In linear regression the squared multiple correlation,?R2?is used to assess goodness of fit as it represents the proportion of variance in the criterion that is explained by the predictors.[21]?In logistic regression analysis, there is no agreed upon analogous measure, but there are several competing measures each with limitations.[21][22]

Four of the most commonly used indices and one less commonly used one are examined on this page:

  • Likelihood ratio?R2L
  • Cox and Snell?R2CS
  • Nagelkerke?R2N
  • McFadden?R2McF
  • Tjur?R2T

R2L?is given by?[21]

{\displaystyle R_{\text{L}}^{2}={\frac {D_{\text{null}}-D_{\text{fitted}}}{D_{\text{null}}}}.}

This is the most analogous index to the squared multiple correlation in linear regression.[17]?It represents the proportional reduction in the deviance wherein the deviance is treated as a measure of variation analogous but not identical to the?variance?in?linear regression?analysis.[17]?One limitation of the likelihood ratio?R2?is that it is not monotonically related to the odds ratio,[21]?meaning that it does not necessarily increase as the odds ratio increases and does not necessarily decrease as the odds ratio decreases.

R2CS?is an alternative index of goodness of fit related to the?R2?value from linear regression.[22]?It is given by:

{\displaystyle R_{\text{CS}}^{2}=1-\left({\frac {L_{0}}{L_{M}}}\right)^{2/n}}.

where?LM?and?L0?are the likelihoods for the model being fitted and the null model, respectively. The Cox and Snell index is problematic as its maximum value is?{\displaystyle 1-L_{0}^{2/n}}. The highest this upper bound can be is 0.75, but it can easily be as low as 0.48 when the marginal proportion of cases is small.[22]

R2N?provides a correction to the Cox and Snell?R2?so that the maximum value is equal to 1. Nevertheless, the Cox and Snell and likelihood ratio?R2s show greater agreement with each other than either does with the Nagelkerke?R2.[21]?Of course, this might not be the case for values exceeding .75 as the Cox and Snell index is capped at this value. The likelihood ratio?R2?is often preferred to the alternatives as it is most analogous to?R2?in?linear regression, is independent of the base rate (both Cox and Snell and Nagelkerke?R2s increase as the proportion of cases increase from 0 to .5) and varies between 0 and 1.

R2McF?is defined as

{\displaystyle R_{\text{McF}}^{2}=1-{\frac {\ln(L_{M})}{\ln(L_{0})}}},

and is preferred over?R2CS?by Allison.[22]?The two expressions?R2McF?and?R2CS?are then related respectively by,

{\displaystyle {\begin{matrix}R_{\text{CS}}^{2}=1-L_{0}^{\frac {2R_{\text{McF}}^{2}}{n}}\\[1.5em]R_{\text{McF}}^{2}={\dfrac {n}{2}}\cdot {\dfrac {\ln(1-R_{\text{CS}}^{2})}{\ln(L_{0})}}\end{matrix}}}

However, Allison now prefers?R2T?which is a relatively new measure developed by Tjur.[23]?It can be calculated in two steps:[22]

  • For each level of the dependent variable, find the mean of the predicted probabilities of an event.
  • Take the absolute value of the difference between these means.

A word of caution is in order when interpreting pseudo-R2 statistics. The reason these indices of fit are referred to as pseudo R2 is that they do not represent the proportionate reduction in error as the R2 in linear regression does.[21] Linear regression assumes homoscedasticity, that the error variance is the same for all values of the criterion. Logistic regression will always be heteroscedastic – the error variances differ for each value of the predicted score. For each value of the predicted score there would be a different value of the proportionate reduction in error. Therefore, it is inappropriate to think of R2 as a proportionate reduction in error in a universal sense in logistic regression.[21]
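To make these definitions concrete, a minimal sketch computing the indices above from the log-likelihoods of the exam-data example (it reuses log_likelihood, X, passed, beta_null, beta_fitted, D_null and D_fitted from the earlier sketches):

```python
import numpy as np

n = len(passed)
ll_fitted = log_likelihood(X, passed, beta_fitted)      # log L_M
ll_null = log_likelihood(X[:, :1], passed, beta_null)   # log L_0

r2_likelihood = (D_null - D_fitted) / D_null                  # likelihood ratio R2_L
r2_cox_snell = 1 - np.exp(2 * (ll_null - ll_fitted) / n)      # 1 - (L_0 / L_M)^(2/n)
r2_nagelkerke = r2_cox_snell / (1 - np.exp(2 * ll_null / n))  # rescaled so the maximum is 1
r2_mcfadden = 1 - ll_fitted / ll_null

# Tjur's R2: difference between the mean predicted probabilities of the two outcome groups
p_hat = 1.0 / (1.0 + np.exp(-X @ beta_fitted))
r2_tjur = abs(p_hat[passed == 1].mean() - p_hat[passed == 0].mean())
```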

Hosmer–Lemeshow test

The Hosmer–Lemeshow test uses a test statistic that asymptotically follows a {\displaystyle \chi ^{2}} distribution to assess whether or not the observed event rates match expected event rates in subgroups of the model population. This test is considered to be obsolete by some statisticians because of its dependence on arbitrary binning of predicted probabilities and relatively low power.[24]
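For illustration only, a bare-bones version of the statistic might look as follows; this is a simplified sketch that bins sorted predicted probabilities into equal-sized groups, whereas real implementations differ in how the bins are formed:

```python
import numpy as np
from scipy.stats import chi2

def hosmer_lemeshow(y, p_hat, groups=10):
    """Hosmer-Lemeshow chi-square statistic and p-value.

    y      -- observed 0/1 outcomes
    p_hat  -- predicted probabilities from the fitted model
    groups -- number of bins of sorted predicted probabilities
    """
    order = np.argsort(p_hat)
    bins = np.array_split(order, groups)        # roughly equal-sized groups
    statistic = 0.0
    for idx in bins:
        observed = y[idx].sum()                 # observed events in the bin
        expected = p_hat[idx].sum()             # expected events in the bin
        n_bin = len(idx)
        # contributions from both the event and the non-event cells
        statistic += (observed - expected) ** 2 / expected
        statistic += ((n_bin - observed) - (n_bin - expected)) ** 2 / (n_bin - expected)
    p_value = chi2.sf(statistic, df=groups - 2)  # asymptotic chi-square with g-2 df
    return statistic, p_value
```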

    Coefficients

    After fitting the model, it is likely that researchers will want to examine the contribution of individual predictors. To do so, they will want to examine the regression coefficients. In linear regression, the regression coefficients represent the change in the criterion for each unit change in the predictor.[21]?In logistic regression, however, the regression coefficients represent the change in the logit for each unit change in the predictor. Given that the logit is not intuitive, researchers are likely to focus on a predictor's effect on the exponential function of the regression coefficient – the odds ratio (see?definition). In linear regression, the significance of a regression coefficient is assessed by computing a?t?test. In logistic regression, there are several different tests designed to assess the significance of an individual predictor, most notably the likelihood ratio test and the Wald statistic.

    Likelihood ratio test

    The?likelihood-ratio test?discussed above to assess model fit is also the recommended procedure to assess the contribution of individual "predictors" to a given model.[14][17][21]?In the case of a single predictor model, one simply compares the deviance of the predictor model with that of the null model on a chi-square distribution with a single degree of freedom. If the predictor model has a significantly smaller deviance (c.f chi-square using the difference in degrees of freedom of the two models), then one can conclude that there is a significant association between the "predictor" and the outcome. Although some common statistical packages (e.g. SPSS) do provide likelihood ratio test statistics, without this computationally intensive test it would be more difficult to assess the contribution of individual predictors in the multiple logistic regression case. To assess the contribution of individual predictors one can enter the predictors hierarchically, comparing each new model with the previous to determine the contribution of each predictor.[21]?There is some debate among statisticians about the appropriateness of so-called "stepwise" procedures. The fear is that they may not preserve nominal statistical properties and may become misleading.[1]

    Wald statistic

    Alternatively, when assessing the contribution of individual predictors in a given model, one may examine the significance of the?Wald statistic. The Wald statistic, analogous to the?t-test in linear regression, is used to assess the significance of coefficients. The Wald statistic is the ratio of the square of the regression coefficient to the square of the standard error of the coefficient and is asymptotically distributed as a chi-square distribution.[17]

    {\displaystyle W_{j}={\frac {B_{j}^{2}}{SE_{B_{j}}^{2}}}}

Although several statistical packages (e.g., SPSS, SAS) report the Wald statistic to assess the contribution of individual predictors, the Wald statistic has limitations. When the regression coefficient is large, the standard error of the regression coefficient also tends to be large, increasing the probability of Type II error. The Wald statistic also tends to be biased when data are sparse.[21]
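Continuing the exam example, the Wald statistic for the Hours coefficient follows directly from the reported coefficient and standard error (a minimal sketch):

```python
from scipy.stats import chi2

coefficient = 1.5046       # Hours coefficient from the output above
standard_error = 0.6287    # its standard error

wald = coefficient ** 2 / standard_error ** 2   # W = B^2 / SE^2
p_value = chi2.sf(wald, df=1)                   # asymptotically chi-square with 1 df
print(wald, p_value)                            # about 5.73 and 0.0167
```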

    Case-control sampling

    Suppose cases are rare. Then we might wish to sample them more frequently than their prevalence in the population. For example, suppose there is a disease that affects 1 person in 10,000 and to collect our data we need to do a complete physical. It may be too expensive to do thousands of physicals of healthy people in order to obtain data for only a few diseased individuals. Thus, we may evaluate more diseased individuals. This is also called unbalanced data. As a rule of thumb, sampling controls at a rate of five times the number of cases will produce sufficient control data.[25]

If we form a logistic model from such data, then, provided the model is correct, the {\displaystyle \beta _{j}} parameters are all correct except for {\displaystyle \beta _{0}}. We can correct {\displaystyle \beta _{0}} if we know the true prevalence as follows:[25]

    {\displaystyle {\hat {\beta _{0}^{*}}}={\hat {\beta _{0}}}+\log {{\pi } \over {1-\pi }}-\log {{\tilde {\pi }} \over {1-{\tilde {\pi }}}}}

    where?{\displaystyle \pi }?is the true prevalence and?{\displaystyle {\tilde {\pi }}}?is the prevalence in the sample.
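A minimal sketch of this intercept correction (all numbers here are purely illustrative):

```python
import math

beta0_hat = -2.3               # intercept estimated from the case-control sample (illustrative)
true_prevalence = 1 / 10_000   # pi: prevalence of cases in the population
sample_prevalence = 1 / 6      # pi-tilde: proportion of cases in the sampled data

def log_odds(p):
    """log(p / (1 - p))"""
    return math.log(p / (1.0 - p))

# Corrected intercept: beta0 + log-odds(true prevalence) - log-odds(sample prevalence)
beta0_corrected = beta0_hat + log_odds(true_prevalence) - log_odds(sample_prevalence)
print(beta0_corrected)
```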

    Formal mathematical specification

    There are various equivalent specifications of logistic regression, which fit into different types of more general models. These different specifications allow for different sorts of useful generalizations.

    Setup

    The basic setup of logistic regression is the same as for standard?linear regression.

It is assumed that we have a series of N observed data points. Each data point i consists of a set of m explanatory variables x1,i ... xm,i (also called independent variables, predictor variables, input variables, features, or attributes), and an associated binary-valued outcome variable Yi (also known as a dependent variable, response variable, output variable, outcome variable or class variable), i.e. it can assume only the two possible values 0 (often meaning "no" or "failure") or 1 (often meaning "yes" or "success"). The goal of logistic regression is to explain the relationship between the explanatory variables and the outcome, so that an outcome can be predicted for a new set of explanatory variables.

    Some examples:

    • The observed outcomes are the presence or absence of a given disease (e.g. diabetes) in a set of patients, and the explanatory variables might be characteristics of the patients thought to be pertinent (sex, race, age,?blood pressure,?body-mass index, etc.).
    • The observed outcomes are the votes (e.g.?Democratic?or?Republican) of a set of people in an election, and the explanatory variables are the demographic characteristics of each person (e.g. sex, race, age, income, etc.). In such a case, one of the two outcomes is arbitrarily coded as 1, and the other as 0.

    As in linear regression, the outcome variables?Yi?are assumed to depend on the explanatory variables?x1,i?...?xm,i.

    Explanatory variables

As shown in the above examples, the explanatory variables may be of any type: real-valued, binary, categorical, etc. The main distinction is between continuous variables (such as income, age and blood pressure) and discrete variables (such as sex or race). Discrete variables referring to more than two possible choices are typically coded using dummy variables (or indicator variables), that is, separate explanatory variables taking the value 0 or 1 are created for each possible value of the discrete variable, with a 1 meaning "variable does have the given value" and a 0 meaning "variable does not have that value". For example, a four-way discrete variable of blood type with the possible values "A, B, AB, O" can be converted to four separate two-way dummy variables, "is-A, is-B, is-AB, is-O", where only one of them has the value 1 and all the rest have the value 0. This allows for separate regression coefficients to be matched for each possible value of the discrete variable. (In a case like this, only three of the four dummy variables are independent of each other, in the sense that once the values of three of the variables are known, the fourth is automatically determined. Thus, it is necessary to encode only three of the four possibilities as dummy variables. This also means that when all four possibilities are encoded, the overall model is not identifiable in the absence of additional constraints such as a regularization constraint. Theoretically, this could cause problems, but in reality almost all logistic regression models are fitted with regularization constraints.)
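A minimal sketch of this dummy coding for the blood-type example (the helper name and the dictionary output format are our own choices):

```python
CATEGORIES = ["A", "B", "AB", "O"]

def dummy_code(blood_type, reference="O"):
    """Return indicator variables for every category except the reference level.

    Dropping one category keeps the model identifiable: the reference level is
    represented by all indicators being 0.
    """
    return {f"is-{c}": int(blood_type == c) for c in CATEGORIES if c != reference}

print(dummy_code("AB"))   # {'is-A': 0, 'is-B': 0, 'is-AB': 1}
```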

    Outcome variables

    Formally, the outcomes?Yi?are described as being?Bernoulli-distributed?data, where each outcome is determined by an unobserved probability?pi?that is specific to the outcome at hand, but related to the explanatory variables. This can be expressed in any of the following equivalent forms:

    {\displaystyle {\begin{aligned}Y_{i}\mid x_{1,i},\ldots ,x_{m,i}\ &\sim \operatorname {Bernoulli} (p_{i})\\\mathbb {E} [Y_{i}\mid x_{1,i},\ldots ,x_{m,i}]&=p_{i}\\\Pr(Y_{i}=y\mid x_{1,i},\ldots ,x_{m,i})&={\begin{cases}p_{i}&{\text{if }}y=1\\1-p_{i}&{\text{if }}y=0\end{cases}}\\\Pr(Y_{i}=y\mid x_{1,i},\ldots ,x_{m,i})&=p_{i}^{y}(1-p_{i})^{(1-y)}\end{aligned}}}

    The meanings of these four lines are:

  • The first line expresses the?probability distribution?of each?Yi: Conditioned on the explanatory variables, it follows a?Bernoulli distribution?with parameters?pi, the probability of the outcome of 1 for trial?i. As noted above, each separate trial has its own probability of success, just as each trial has its own explanatory variables. The probability of success?pi?is not observed, only the outcome of an individual Bernoulli trial using that probability.
  • The second line expresses the fact that the?expected value?of each?Yi?is equal to the probability of success?pi, which is a general property of the Bernoulli distribution. In other words, if we run a large number of Bernoulli trials using the same probability of success?pi, then take the average of all the 1 and 0 outcomes, then the result would be close to?pi. This is because doing an average this way simply computes the proportion of successes seen, which we expect to converge to the underlying probability of success.
  • The third line writes out the?probability mass function?of the Bernoulli distribution, specifying the probability of seeing each of the two possible outcomes.
  • The fourth line is another way of writing the probability mass function, which avoids having to write separate cases and is more convenient for certain types of calculations. This relies on the fact that Yi can take only the value 0 or 1. In each case, one of the exponents will be 1, "choosing" the value under it, while the other is 0, "canceling out" the value under it. Hence, the outcome is either pi or 1 − pi, as in the previous line.
Linear predictor function

    The basic idea of logistic regression is to use the mechanism already developed for?linear regression?by modeling the probability?pi?using a?linear predictor function, i.e. a?linear combination?of the explanatory variables and a set of?regression coefficients?that are specific to the model at hand but the same for all trials. The linear predictor function?{\displaystyle f(i)}?for a particular data point?i?is written as:

    {\displaystyle f(i)=\beta _{0}+\beta _{1}x_{1,i}+\cdots +\beta _{m}x_{m,i},}

    where?{\displaystyle \beta _{0},\ldots ,\beta _{m}}?are?regression coefficients?indicating the relative effect of a particular explanatory variable on the outcome.

    The model is usually put into a more compact form as follows:

    • The regression coefficients?β0,?β1, ...,?βm?are grouped into a single vector?β?of size?m?+?1.
    • For each data point?i, an additional explanatory pseudo-variable?x0,i?is added, with a fixed value of 1, corresponding to the?intercept?coefficient?β0.
    • The resulting explanatory variables?x0,i,?x1,i, ...,?xm,i?are then grouped into a single vector?Xi?of size?m?+?1.

    This makes it possible to write the linear predictor function as follows:

    {\displaystyle f(i)={\boldsymbol {\beta }}\cdot \mathbf {X} _{i},}

    using the notation for a?dot product?between two vectors.

    As a generalized linear model

The particular model used by logistic regression, which distinguishes it from standard linear regression and from other types of regression analysis used for binary-valued outcomes, is the way the probability of a particular outcome is linked to the linear predictor function:

    {\displaystyle \operatorname {logit} (\mathbb {E} [Y_{i}\mid x_{1,i},\ldots ,x_{m,i}])=\operatorname {logit} (p_{i})=\ln \left({\frac {p_{i}}{1-p_{i}}}\right)=\beta _{0}+\beta _{1}x_{1,i}+\cdots +\beta _{m}x_{m,i}}

    Written using the more compact notation described above, this is:

    {\displaystyle \operatorname {logit} (\mathbb {E} [Y_{i}\mid \mathbf {X} _{i}])=\operatorname {logit} (p_{i})=\ln \left({\frac {p_{i}}{1-p_{i}}}\right)={\boldsymbol {\beta }}\cdot \mathbf {X} _{i}}

This formulation expresses logistic regression as a type of generalized linear model, which predicts variables with various types of probability distributions by fitting a linear predictor function of the above form to some sort of arbitrary transformation of the expected value of the variable.

    The intuition for transforming using the logit function (the natural log of the odds) was explained above. It also has the practical effect of converting the probability (which is bounded to be between 0 and 1) to a variable that ranges over?{\displaystyle (-\infty ,+\infty )}?— thereby matching the potential range of the linear prediction function on the right side of the equation.

    Note that both the probabilities?pi?and the regression coefficients are unobserved, and the means of determining them is not part of the model itself. They are typically determined by some sort of optimization procedure, e.g.?maximum likelihood estimation, that finds values that best fit the observed data (i.e. that give the most accurate predictions for the data already observed), usually subject to?regularization?conditions that seek to exclude unlikely values, e.g. extremely large values for any of the regression coefficients. The use of a regularization condition is equivalent to doing?maximum a posteriori?(MAP) estimation, an extension of maximum likelihood. (Regularization is most commonly done using?a squared regularizing function, which is equivalent to placing a zero-mean?Gaussian?prior distribution?on the coefficients, but other regularizers are also possible.) Whether or not regularization is used, it is usually not possible to find a closed-form solution; instead, an iterative numerical method must be used, such as?iteratively reweighted least squares?(IRLS) or, more commonly these days, a?quasi-Newton method?such as the?L-BFGS method.
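To illustrate this optimization view, the coefficients can be found by numerically minimizing a penalized negative log-likelihood, for example with scipy's L-BFGS-B routine. This is a minimal sketch under our own choices of penalty strength and data (the arrays X and passed from the exam-data sketch above), not the only way such fitting is done in practice:

```python
import numpy as np
from scipy.optimize import minimize

def penalized_nll(beta, X, y, alpha=0.1):
    """Negative log-likelihood plus a squared (Gaussian-prior / ridge) penalty.

    The intercept (first coefficient) is conventionally left unpenalized.
    """
    z = X @ beta
    # negative log-likelihood: sum of log(1 + e^z) - y*z, computed stably via logaddexp
    nll = np.sum(np.logaddexp(0.0, z) - y * z)
    return nll + 0.5 * alpha * np.sum(beta[1:] ** 2)

result = minimize(penalized_nll, x0=np.zeros(X.shape[1]), args=(X, passed),
                  method="L-BFGS-B")
print(result.x)   # regularized coefficient estimates (shrunk toward zero)
```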

    The interpretation of the?βj?parameter estimates is as the additive effect on the log of the?odds?for a unit change in the?jth explanatory variable. In the case of a dichotomous explanatory variable, for instance gender,?{\displaystyle e^{\beta }}?is the estimate of the odds of having the outcome for, say, males compared with females.

    An equivalent formula uses the inverse of the logit function, which is the?logistic function, i.e.:

    {\displaystyle \mathbb {E} [Y_{i}\mid \mathbf {X} _{i}]=p_{i}=\operatorname {logit} ^{-1}({\boldsymbol {\beta }}\cdot \mathbf {X} _{i})={\frac {1}{1+e^{-{\boldsymbol {\beta }}\cdot \mathbf {X} _{i}}}}}

    The formula can also be written as a?probability distribution?(specifically, using a?probability mass function):

    {\displaystyle \operatorname {Pr} (Y_{i}=y\mid \mathbf {X} _{i})={p_{i}}^{y}(1-p_{i})^{1-y}=\left({\frac {e^{{\boldsymbol {\beta }}\cdot \mathbf {X} _{i}}}{1+e^{{\boldsymbol {\beta }}\cdot \mathbf {X} _{i}}}}\right)^{y}\left(1-{\frac {e^{{\boldsymbol {\beta }}\cdot \mathbf {X} _{i}}}{1+e^{{\boldsymbol {\beta }}\cdot \mathbf {X} _{i}}}}\right)^{1-y}={\frac {e^{{\boldsymbol {\beta }}\cdot \mathbf {X} _{i}\cdot y}}{1+e^{{\boldsymbol {\beta }}\cdot \mathbf {X} _{i}}}}}

    As a latent-variable model

    The above model has an equivalent formulation as a?latent-variable model. This formulation is common in the theory of?discrete choice?models, and makes it easier to extend to certain more complicated models with multiple, correlated choices, as well as to compare logistic regression to the closely related?probit model.

    Imagine that, for each trial?i, there is a continuous?latent variable?Yi*?(i.e. an unobserved?random variable) that is distributed as follows:

    {\displaystyle Y_{i}^{\ast }={\boldsymbol {\beta }}\cdot \mathbf {X} _{i}+\varepsilon \,}

    where

    {\displaystyle \varepsilon \sim \operatorname {Logistic} (0,1)\,}

    i.e. the latent variable can be written directly in terms of the linear predictor function and an additive random?error variable?that is distributed according to a standard?logistic distribution.

    Then?Yi?can be viewed as an indicator for whether this latent variable is positive:

    {\displaystyle Y_{i}={\begin{cases}1&{\text{if }}Y_{i}^{\ast }>0\ {\text{ i.e. }}-\varepsilon <{\boldsymbol {\beta }}\cdot \mathbf {X} _{i},\\0&{\text{otherwise.}}\end{cases}}}

    The choice of modeling the error variable specifically with a standard logistic distribution, rather than a general logistic distribution with the location and scale set to arbitrary values, seems restrictive, but in fact it is not. It must be kept in mind that we can choose the regression coefficients ourselves, and very often can use them to offset changes in the parameters of the error variable's distribution. For example, a logistic error-variable distribution with a non-zero location parameter?μ?(which sets the mean) is equivalent to a distribution with a zero location parameter, where?μ?has been added to the intercept coefficient. Both situations produce the same value for?Yi*?regardless of settings of explanatory variables. Similarly, an arbitrary scale parameter?s?is equivalent to setting the scale parameter to 1 and then dividing all regression coefficients by?s. In the latter case, the resulting value of?Yi*?will be smaller by a factor of?s?than in the former case, for all sets of explanatory variables — but critically, it will always remain on the same side of 0, and hence lead to the same?Yi?choice.

    (Note that this predicts that the irrelevancy of the scale parameter may not carry over into more complex models where more than two choices are available.)

    It turns out that this formulation is exactly equivalent to the preceding one, phrased in terms of the?generalized linear model?and without any?latent variables. This can be shown as follows, using the fact that the?cumulative distribution function?(CDF) of the standard?logistic distribution?is the?logistic function, which is the inverse of the?logit function, i.e.

    {\displaystyle \Pr(\varepsilon <x)=\operatorname {logit} ^{-1}(x)}

    Then:

    {\displaystyle {\begin{aligned}\Pr(Y_{i}=1\mid \mathbf {X} _{i})&=\Pr(Y_{i}^{\ast }>0\mid \mathbf {X} _{i})&\\&=\Pr({\boldsymbol {\beta }}\cdot \mathbf {X} _{i}+\varepsilon >0)&\\&=\Pr(\varepsilon >-{\boldsymbol {\beta }}\cdot \mathbf {X} _{i})&\\&=\Pr(\varepsilon <{\boldsymbol {\beta }}\cdot \mathbf {X} _{i})&&{\text{(because the logistic distribution is symmetric)}}\\&=\operatorname {logit} ^{-1}({\boldsymbol {\beta }}\cdot \mathbf {X} _{i})&\\&=p_{i}&&{\text{(see above)}}\end{aligned}}}

    This formulation—which is standard in?discrete choice?models—makes clear the relationship between logistic regression (the "logit model") and the?probit model, which uses an error variable distributed according to a standard?normal distribution?instead of a standard logistic distribution. Both the logistic and normal distributions are symmetric with a basic unimodal, "bell curve" shape. The only difference is that the logistic distribution has somewhat?heavier tails, which means that it is less sensitive to outlying data (and hence somewhat more?robust?to model mis-specifications or erroneous data).
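The latent-variable formulation can be checked by simulation. A minimal sketch with arbitrary illustrative coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)
beta = np.array([-1.0, 2.0])   # illustrative intercept and slope
x = 0.8                        # a fixed value of the explanatory variable
X_i = np.array([1.0, x])       # with the constant pseudo-variable

n = 200_000
epsilon = rng.logistic(loc=0.0, scale=1.0, size=n)   # standard logistic errors
y_star = beta @ X_i + epsilon                        # latent variable
y = (y_star > 0).astype(int)                         # observed outcome

print(y.mean())                               # simulated P(Y = 1 | x)
print(1.0 / (1.0 + np.exp(-(beta @ X_i))))    # logistic-function prediction, about the same
```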

    As a two-way latent-variable model

    Yet another formulation uses two separate latent variables:

    {\displaystyle {\begin{aligned}Y_{i}^{0\ast }&={\boldsymbol {\beta }}_{0}\cdot \mathbf {X} _{i}+\varepsilon _{0}\,\\Y_{i}^{1\ast }&={\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i}+\varepsilon _{1}\,\end{aligned}}}

    where

    {\displaystyle {\begin{aligned}\varepsilon _{0}&\sim \operatorname {EV} _{1}(0,1)\\\varepsilon _{1}&\sim \operatorname {EV} _{1}(0,1)\end{aligned}}}

    where?EV1(0,1) is a standard type-1?extreme value distribution: i.e.

    {\displaystyle \Pr(\varepsilon _{0}=x)=\Pr(\varepsilon _{1}=x)=e^{-x}e^{-e^{-x}}}

    Then

    {\displaystyle Y_{i}={\begin{cases}1&{\text{if }}Y_{i}^{1\ast }>Y_{i}^{0\ast },\\0&{\text{otherwise.}}\end{cases}}}

    This model has a separate latent variable and a separate set of regression coefficients for each possible outcome of the dependent variable. The reason for this separation is that it makes it easy to extend logistic regression to multi-outcome categorical variables, as in the?multinomial logit?model. In such a model, it is natural to model each possible outcome using a different set of regression coefficients. It is also possible to motivate each of the separate latent variables as the theoretical?utility?associated with making the associated choice, and thus motivate logistic regression in terms of?utility theory. (In terms of utility theory, a rational actor always chooses the choice with the greatest associated utility.) This is the approach taken by economists when formulatingdiscrete choice?models, because it both provides a theoretically strong foundation and facilitates intuitions about the model, which in turn makes it easy to consider various sorts of extensions. (See the example below.)

    The choice of the type-1?extreme value distribution?seems fairly arbitrary, but it makes the mathematics work out, and it may be possible to justify its use through?rational choice theory.

    It turns out that this model is equivalent to the previous model, although this seems non-obvious, since there are now two sets of regression coefficients and error variables, and the error variables have a different distribution. In fact, this model reduces directly to the previous one with the following substitutions:

    {\displaystyle {\boldsymbol {\beta }}={\boldsymbol {\beta }}_{1}-{\boldsymbol {\beta }}_{0}}
    {\displaystyle \varepsilon =\varepsilon _{1}-\varepsilon _{0}}

An intuition for this comes from the fact that, since we choose based on the maximum of two values, only their difference matters, not the exact values — and this effectively removes one degree of freedom. Another critical fact is that the difference of two type-1 extreme-value-distributed variables is a logistic distribution, i.e. {\displaystyle \varepsilon =\varepsilon _{1}-\varepsilon _{0}\sim \operatorname {Logistic} (0,1).}

We can demonstrate the equivalence as follows:

    {\displaystyle {\begin{aligned}&\Pr(Y_{i}=1\mid \mathbf {X} _{i})\\[4pt]={}&\Pr(Y_{i}^{1\ast }>Y_{i}^{0\ast }\mid \mathbf {X} _{i})&\\={}&\Pr(Y_{i}^{1\ast }-Y_{i}^{0\ast }>0\mid \mathbf {X} _{i})&\\={}&\Pr({\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i}+\varepsilon _{1}-({\boldsymbol {\beta }}_{0}\cdot \mathbf {X} _{i}+\varepsilon _{0})>0)&\\={}&\Pr(({\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i}-{\boldsymbol {\beta }}_{0}\cdot \mathbf {X} _{i})+(\varepsilon _{1}-\varepsilon _{0})>0)&\\={}&\Pr(({\boldsymbol {\beta }}_{1}-{\boldsymbol {\beta }}_{0})\cdot \mathbf {X} _{i}+(\varepsilon _{1}-\varepsilon _{0})>0)&\\={}&\Pr(({\boldsymbol {\beta }}_{1}-{\boldsymbol {\beta }}_{0})\cdot \mathbf {X} _{i}+\varepsilon >0)&&{\text{(substitute }}\varepsilon {\text{ as above)}}\\={}&\Pr({\boldsymbol {\beta }}\cdot \mathbf {X} _{i}+\varepsilon >0)&&{\text{(substitute }}{\boldsymbol {\beta }}{\text{ as above)}}\\={}&\Pr(\varepsilon >-{\boldsymbol {\beta }}\cdot \mathbf {X} _{i})&&{\text{(now, same as above model)}}\\={}&\Pr(\varepsilon <{\boldsymbol {\beta }}\cdot \mathbf {X} _{i})&\\={}&\operatorname {logit} ^{-1}({\boldsymbol {\beta }}\cdot \mathbf {X} _{i})&\\={}&p_{i}\end{aligned}}}
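The key fact used above, that the difference of two independent standard type-1 extreme-value (Gumbel) variables follows a standard logistic distribution, can likewise be checked by simulation (a minimal sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

eps0 = rng.gumbel(loc=0.0, scale=1.0, size=n)   # standard type-1 extreme value errors
eps1 = rng.gumbel(loc=0.0, scale=1.0, size=n)
diff = eps1 - eps0                              # should follow Logistic(0, 1)

# Compare the empirical CDF of the difference with the logistic CDF at a few points
for t in (-2.0, -1.0, 0.0, 1.0, 2.0):
    empirical = (diff < t).mean()
    logistic_cdf = 1.0 / (1.0 + np.exp(-t))
    print(t, round(empirical, 3), round(logistic_cdf, 3))
```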

    Example

As an example, consider a province-level election where the choice is between a right-of-center party, a left-of-center party, and a secessionist party (e.g. the Parti Québécois, which wants Quebec to secede from Canada). We would then use three latent variables, one for each choice. Then, in accordance with utility theory, we can interpret the latent variables as expressing the utility that results from making each of the choices. We can also interpret the regression coefficients as indicating the strength that the associated factor (i.e. explanatory variable) has in contributing to the utility — or more correctly, the amount by which a unit change in an explanatory variable changes the utility of a given choice. A voter might expect that the right-of-center party would lower taxes, especially on rich people. This would give low-income people no benefit, i.e. no change in utility (since they usually don't pay taxes); would cause moderate benefit (i.e. somewhat more money, or moderate utility increase) for middle-income people; and would cause significant benefits for high-income people. On the other hand, the left-of-center party might be expected to raise taxes and offset it with increased welfare and other assistance for the lower and middle classes. This would cause significant positive benefit to low-income people, perhaps weak benefit to middle-income people, and significant negative benefit to high-income people. Finally, the secessionist party would take no direct actions on the economy, but simply secede. A low-income or middle-income voter might expect basically no clear utility gain or loss from this, but a high-income voter might expect negative utility, since he/she is likely to own companies, which will have a harder time doing business in such an environment and probably lose money.

    These intuitions can be expressed as follows:

Estimated strength of regression coefficient for different outcomes (party choices) and different values of explanatory variables:

                   Center-right    Center-left    Secessionist
High-income        strong +        strong −       strong −
Middle-income      moderate +      weak +         none
Low-income         none            strong +       none

    This clearly shows that

  • Separate sets of regression coefficients need to exist for each choice. When phrased in terms of utility, this can be seen very easily. Different choices have different effects on net utility; furthermore, the effects vary in complex ways that depend on the characteristics of each individual, so there need to be separate sets of coefficients for each characteristic, not simply a single extra per-choice characteristic.
  • Even though income is a continuous variable, its effect on utility is too complex for it to be treated as a single variable. Either it needs to be directly split up into ranges, or higher powers of income need to be added so that?polynomial regression?on income is effectively done.
  • As a "log-linear" model

    Yet another formulation combines the two-way latent variable formulation above with the original formulation higher up without latent variables, and in the process provides a link to one of the standard formulations of the?multinomial logit.

    Here, instead of writing the?logit?of the probabilities?pi?as a linear predictor, we separate the linear predictor into two, one for each of the two outcomes:

    {\displaystyle {\begin{aligned}\ln \Pr(Y_{i}=0)&={\boldsymbol {\beta }}_{0}\cdot \mathbf {X} _{i}-\ln Z\,\\\ln \Pr(Y_{i}=1)&={\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i}-\ln Z\,\\\end{aligned}}}

Note that two separate sets of regression coefficients have been introduced, just as in the two-way latent variable model, and the two equations appear in a form that writes the logarithm of the associated probability as a linear predictor, with an extra term {\displaystyle -\ln Z} at the end. This term, as it turns out, serves as the normalizing factor ensuring that the result is a distribution. This can be seen by exponentiating both sides:

    {\displaystyle {\begin{aligned}\Pr(Y_{i}=0)&={\frac {1}{Z}}e^{{\boldsymbol {\beta }}_{0}\cdot \mathbf {X} _{i}}\,\\\Pr(Y_{i}=1)&={\frac {1}{Z}}e^{{\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i}}\,\\\end{aligned}}}

    In this form it is clear that the purpose of?Z?is to ensure that the resulting distribution over?Yi?is in fact a?probability distribution, i.e. it sums to 1. This means that?Z?is simply the sum of all un-normalized probabilities, and by dividing each probability by?Z, the probabilities become "normalized". That is:

    {\displaystyle Z=e^{{\boldsymbol {\beta }}_{0}\cdot \mathbf {X} _{i}}+e^{{\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i}}}

    and the resulting equations are

    {\displaystyle {\begin{aligned}\Pr(Y_{i}=0)&={\frac {e^{{\boldsymbol {\beta }}_{0}\cdot \mathbf {X} _{i}}}{e^{{\boldsymbol {\beta }}_{0}\cdot \mathbf {X} _{i}}+e^{{\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i}}}}\,\\\Pr(Y_{i}=1)&={\frac {e^{{\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i}}}{e^{{\boldsymbol {\beta }}_{0}\cdot \mathbf {X} _{i}}+e^{{\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i}}}}\,\end{aligned}}}

    Or generally:

    {\displaystyle \Pr(Y_{i}=c)={\frac {e^{{\boldsymbol {\beta }}_{c}\cdot \mathbf {X} _{i}}}{\sum _{h}e^{{\boldsymbol {\beta }}_{h}\cdot \mathbf {X} _{i}}}}}

    This shows clearly how to generalize this formulation to more than two outcomes, as in multinomial logit. Note that this general formulation is exactly the softmax function, as in

    {\displaystyle \Pr(Y_{i}=c)=\operatorname {softmax} (c,{\boldsymbol {\beta }}_{0}\cdot \mathbf {X} _{i},{\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i},\dots ).}
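
    To make the log-linear formulation concrete, here is a minimal sketch in Python/NumPy (the coefficient vectors and data point below are made-up illustration values, not estimates from any real model) that exponentiates each linear predictor and divides by Z:

```python
import numpy as np

# Hypothetical coefficient vectors, one per outcome (class), including an intercept term.
betas = np.array([
    [0.0, 0.0, 0.0],    # beta_0 (e.g. outcome Y = 0)
    [0.5, -1.2, 0.8],   # beta_1 (e.g. outcome Y = 1)
])

def class_probabilities(x, betas):
    """Softmax over the linear predictors beta_c . x, one per class c."""
    scores = betas @ x                        # one linear predictor per class
    scores -= scores.max()                    # subtract a constant for numerical stability
    unnormalized = np.exp(scores)             # e^{beta_c . x}
    return unnormalized / unnormalized.sum()  # divide by Z = sum of unnormalized terms

x = np.array([1.0, 0.3, -0.7])                # explanatory variables (first entry = intercept)
print(class_probabilities(x, betas))          # probabilities sum to 1
```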

    In order to prove that this is equivalent to the previous model, note that the above model is overspecified, in that {\displaystyle \Pr(Y_{i}=0)} and {\displaystyle \Pr(Y_{i}=1)} cannot be independently specified: rather {\displaystyle \Pr(Y_{i}=0)+\Pr(Y_{i}=1)=1}, so knowing one automatically determines the other. As a result, the model is nonidentifiable, in that multiple combinations of β0 and β1 will produce the same probabilities for all possible explanatory variables. In fact, it can be seen that adding any constant vector to both of them will produce the same probabilities:

    {\displaystyle {\begin{aligned}\Pr(Y_{i}=1)&={\frac {e^{({\boldsymbol {\beta }}_{1}+\mathbf {C} )\cdot \mathbf {X} _{i}}}{e^{({\boldsymbol {\beta }}_{0}+\mathbf {C} )\cdot \mathbf {X} _{i}}+e^{({\boldsymbol {\beta }}_{1}+\mathbf {C} )\cdot \mathbf {X} _{i}}}}\,\\&={\frac {e^{{\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i}}e^{\mathbf {C} \cdot \mathbf {X} _{i}}}{e^{{\boldsymbol {\beta }}_{0}\cdot \mathbf {X} _{i}}e^{\mathbf {C} \cdot \mathbf {X} _{i}}+e^{{\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i}}e^{\mathbf {C} \cdot \mathbf {X} _{i}}}}\,\\&={\frac {e^{\mathbf {C} \cdot \mathbf {X} _{i}}e^{{\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i}}}{e^{\mathbf {C} \cdot \mathbf {X} _{i}}(e^{{\boldsymbol {\beta }}_{0}\cdot \mathbf {X} _{i}}+e^{{\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i}})}}\,\\&={\frac {e^{{\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i}}}{e^{{\boldsymbol {\beta }}_{0}\cdot \mathbf {X} _{i}}+e^{{\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i}}}}\,\\\end{aligned}}}

    As a result, we can simplify matters, and restore identifiability, by picking an arbitrary value for one of the two vectors. We choose to set {\displaystyle {\boldsymbol {\beta }}_{0}=\mathbf {0} .} Then,

    {\displaystyle e^{{\boldsymbol {\beta }}_{0}\cdot \mathbf {X} _{i}}=e^{\mathbf {0} \cdot \mathbf {X} _{i}}=1}

    and so

    {\displaystyle \Pr(Y_{i}=1)={\frac {e^{{\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i}}}{1+e^{{\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i}}}}={\frac {1}{1+e^{-{\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i}}}}=p_{i}}

    which shows that this formulation is indeed equivalent to the previous formulation. (As in the two-way latent variable formulation, any settings where {\displaystyle {\boldsymbol {\beta }}={\boldsymbol {\beta }}_{1}-{\boldsymbol {\beta }}_{0}} will produce equivalent results.)
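
    A quick numerical check of the two facts used above, namely that adding the same constant vector to both coefficient sets leaves the probabilities unchanged, and that fixing β0 = 0 recovers the logistic (sigmoid) form, again with made-up values:

```python
import numpy as np

x = np.array([1.0, 0.3, -0.7])
beta0 = np.array([0.2, -0.4, 1.0])
beta1 = np.array([0.7, -1.6, 1.8])
C = np.array([5.0, 5.0, 5.0])                 # arbitrary constant vector

def pr_y1(b0, b1, x):
    """Pr(Y = 1) from the two-predictor (log-linear) form."""
    z = np.exp(b0 @ x) + np.exp(b1 @ x)
    return np.exp(b1 @ x) / z

# Shifting both coefficient vectors by the same constant leaves the probability unchanged.
print(np.isclose(pr_y1(beta0, beta1, x), pr_y1(beta0 + C, beta1 + C, x)))   # True

# Setting beta_0 = 0 and using beta = beta_1 - beta_0 recovers the sigmoid form.
beta = beta1 - beta0
sigmoid = 1.0 / (1.0 + np.exp(-(beta @ x)))
print(np.isclose(pr_y1(beta0, beta1, x), sigmoid))                          # True
```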

    Note that most treatments of the multinomial logit model start out either by extending the "log-linear" formulation presented here or the two-way latent variable formulation presented above, since both clearly show the way that the model could be extended to multi-way outcomes. In general, the presentation with latent variables is more common in econometrics and political science, where discrete choice models and utility theory reign, while the "log-linear" formulation here is more common in computer science, e.g. machine learning and natural language processing.

    As a single-layer perceptron

    The model has an equivalent formulation

    {\displaystyle p_{i}={\frac {1}{1+e^{-(\beta _{0}+\beta _{1}x_{1,i}+\cdots +\beta _{k}x_{k,i})}}}.\,}

    This functional form is commonly called a single-layer perceptron or single-layer artificial neural network. A single-layer neural network computes a continuous output instead of a step function. The derivative of pi with respect to X = (x1, ..., xk) is computed from the general form:

    {\displaystyle y={\frac {1}{1+e^{-f(X)}}}}

    where f(X) is an analytic function in X. With this choice, the single-layer neural network is identical to the logistic regression model. This function has a continuous derivative, which allows it to be used in backpropagation. This function is also preferred because its derivative is easily calculated:

    {\displaystyle {\frac {\mathrm {d} y}{\mathrm {d} X}}=y(1-y){\frac {\mathrm {d} f}{\mathrm {d} X}}.\,}
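
    The simple derivative y(1 − y) is what makes gradient-based fitting of this single-layer form convenient. The following sketch (synthetic data, assumed learning rate and iteration count) performs plain gradient ascent on the log-likelihood; the factor p(1 − p) from the chain rule cancels against the Bernoulli term, leaving the familiar X^T(y − p) gradient:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic data: n observations, k explanatory variables (a column of ones gives the intercept).
rng = np.random.default_rng(0)
n, k = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
true_beta = np.array([-0.5, 2.0, -1.0])
y = rng.binomial(1, sigmoid(X @ true_beta))

beta = np.zeros(k)
learning_rate = 0.1
for _ in range(2000):
    p = sigmoid(X @ beta)
    # Gradient of the average log-likelihood; dp/dz = p * (1 - p) cancels against the
    # Bernoulli term, leaving X^T (y - p).
    gradient = X.T @ (y - p) / n
    beta += learning_rate * gradient

print(beta)   # should land close to true_beta for a sample of this size
```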

    In terms of binomial data

    A closely related model assumes that each i is associated not with a single Bernoulli trial but with ni independent identically distributed trials, where the observation Yi is the number of successes observed (the sum of the individual Bernoulli-distributed random variables), and hence follows a binomial distribution:

    {\displaystyle Y_{i}\ \sim \operatorname {Bin} (n_{i},p_{i}),{\text{ for }}i=1,\dots ,n}

    An example of this distribution is the fraction of seeds (pi) that germinate after ni are planted.

    In terms of expected values, this model is expressed as follows:

    {\displaystyle p_{i}=\mathbb {E} \left[\left.{\frac {Y_{i}}{n_{i}}}\,\right|\,\mathbf {X} _{i}\right],}

    so that

    {\displaystyle \operatorname {logit} \left(\mathbb {E} \left[\left.{\frac {Y_{i}}{n_{i}}}\,\right|\,\mathbf {X} _{i}\right]\right)=\operatorname {logit} (p_{i})=\ln \left({\frac {p_{i}}{1-p_{i}}}\right)={\boldsymbol {\beta }}\cdot \mathbf {X} _{i},}

    Or equivalently:

    {\displaystyle \operatorname {Pr} (Y_{i}=y\mid \mathbf {X} _{i})={n_{i} \choose y}p_{i}^{y}(1-p_{i})^{n_{i}-y}={n_{i} \choose y}\left({\frac {1}{1+e^{-{\boldsymbol {\beta }}\cdot \mathbf {X} _{i}}}}\right)^{y}\left(1-{\frac {1}{1+e^{-{\boldsymbol {\beta }}\cdot \mathbf {X} _{i}}}}\right)^{n_{i}-y}}

    This model can be fit using the same sorts of methods as the more basic model above.
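
    As a sketch of one such method, the grouped (binomial) log-likelihood can be maximized directly with a general-purpose optimizer; the data below are invented seed-germination-style counts, not real measurements:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, gammaln

# Grouped data: each row has explanatory variables X_i, number of trials n_i,
# and number of successes y_i (e.g. seeds germinating out of n_i planted).
X = np.array([[1.0, 0.1], [1.0, 0.5], [1.0, 1.2], [1.0, 2.0], [1.0, 3.1]])
n = np.array([20, 25, 30, 20, 25])
y = np.array([3, 8, 18, 16, 24])

def negative_log_likelihood(beta):
    p = expit(X @ beta)                      # p_i = 1 / (1 + e^{-beta . X_i})
    log_binom = gammaln(n + 1) - gammaln(y + 1) - gammaln(n - y + 1)
    return -np.sum(log_binom + y * np.log(p) + (n - y) * np.log1p(-p))

result = minimize(negative_log_likelihood, x0=np.zeros(X.shape[1]), method="BFGS")
print(result.x)   # maximum-likelihood estimates of the coefficients
```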

    Bayesian logistic regression

    (Figure: comparison of the logistic function with a scaled inverse probit function (i.e. the CDF of the normal distribution), comparing {\displaystyle \sigma (x)} vs. {\displaystyle \Phi ({\sqrt {\frac {\pi }{8}}}x)}, which makes the slopes the same at the origin. This shows the heavier tails of the logistic distribution.)

    In a Bayesian statistics context, prior distributions are normally placed on the regression coefficients, usually in the form of Gaussian distributions. Unfortunately, the Gaussian distribution is not the conjugate prior of the likelihood function in logistic regression. As a result, the posterior distribution is difficult to calculate, even using standard simulation algorithms (e.g. Gibbs sampling)[citation needed].

    There are various possibilities:

    • Don't do a proper Bayesian analysis, but simply compute a maximum a posteriori point estimate of the parameters. This is common, for example, in "maximum entropy" classifiers in machine learning (a sketch of this option appears after this list).
    • Use a more general approximation method such as the Metropolis–Hastings algorithm.
    • Draw a Markov chain Monte Carlo sample from the exact posterior by using the independent Metropolis–Hastings algorithm with a heavy-tailed multivariate candidate distribution, found by matching the mode and curvature at the mode of the normal approximation to the posterior and then using a Student's t shape with low degrees of freedom.[26] This is shown to have excellent convergence properties.
    • Use a latent variable model and approximate the logistic distribution using a more tractable distribution, e.g. a Student's t-distribution or a mixture of normal distributions.
    • Do probit regression instead of logistic regression. This is actually a special case of the previous situation, using a normal distribution in place of a Student's t, mixture of normals, etc. This will be less accurate but has the advantage that probit regression is extremely common, and a ready-made Bayesian implementation may already be available.
    • Use the Laplace approximation of the posterior distribution.[27] This approximates the posterior with a Gaussian distribution. This is not a terribly good approximation, but it suffices if all that is desired is an estimate of the posterior mean and variance. In such a case, an approximation scheme such as variational Bayes can be used.[28]
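
    As an illustration of the first option above, a maximum a posteriori fit under independent Gaussian priors on the coefficients amounts to maximum likelihood plus an L2 penalty; the prior variance and synthetic data here are assumptions made only for the sketch:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(1)
n, k = 300, 4
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
y = rng.binomial(1, expit(X @ np.array([0.3, 1.5, -2.0, 0.0])))

prior_variance = 4.0   # assumed Gaussian prior N(0, prior_variance) on each coefficient

def negative_log_posterior(beta):
    p = expit(X @ beta)
    log_likelihood = np.sum(y * np.log(p) + (1 - y) * np.log1p(-p))
    log_prior = -0.5 * np.sum(beta ** 2) / prior_variance   # up to an additive constant
    return -(log_likelihood + log_prior)

map_estimate = minimize(negative_log_posterior, x0=np.zeros(k), method="BFGS").x
print(map_estimate)   # MAP point estimate; equivalent to L2-regularized logistic regression
```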

    Gibbs sampling with an approximating distribution

    As shown above, logistic regression is equivalent to a latent variable model with an error variable distributed according to a standard logistic distribution. The overall distribution of the latent variable {\displaystyle Y_{i}^{\ast }} is also a logistic distribution, with the mean equal to {\displaystyle {\boldsymbol {\beta }}\cdot \mathbf {X} _{i}} (i.e. the fixed quantity added to the error variable). This model considerably simplifies the application of techniques such as Gibbs sampling. However, sampling the regression coefficients is still difficult, because of the lack of conjugacy between the normal and logistic distributions. Changing the prior distribution over the regression coefficients is of no help, because the logistic distribution is not in the exponential family and thus has no conjugate prior.

    One possibility is to use a more general Markov chain Monte Carlo technique, such as the Metropolis–Hastings algorithm, which can sample arbitrary distributions. Another possibility, however, is to replace the logistic distribution with a similar-shaped distribution that is easier to work with using Gibbs sampling. In fact, the logistic and normal distributions have a similar shape, and thus one possibility is simply to have normally distributed errors. Because the normal distribution is conjugate to itself, sampling the regression coefficients becomes easy. In fact, this model is exactly the model used in probit regression.

    However, the normal and logistic distributions differ in that the logistic has heavier tails. As a result, it is more robust to inaccuracies in the underlying model (which are inevitable, in that the model is essentially always an approximation) or to errors in the data. Probit regression loses some of this robustness.

    Another alternative is to use errors distributed as a Student's t-distribution. The Student's t-distribution has heavy tails, and is easy to sample from because it is the compound distribution of a normal distribution with variance distributed as an inverse gamma distribution. In other words, if a normal distribution is used for the error variable, and another latent variable, following an inverse gamma distribution, is added corresponding to the variance of this error variable, the marginal distribution of the error variable will follow a Student's t distribution. Because of the various conjugacy relationships, all variables in this model are easy to sample from.
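
    The compound-distribution claim can be checked numerically: drawing a variance from an inverse gamma distribution and then a normal error with that variance reproduces a Student's t distribution. The degrees of freedom and scale below are arbitrary illustration values:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
nu, s = 7.0, 1.5          # assumed degrees of freedom and scale for the illustration

# Draw a variance from an inverse gamma distribution, then a normal error with that variance.
variances = stats.invgamma.rvs(a=nu / 2, scale=nu * s**2 / 2, size=200_000, random_state=rng)
errors = rng.normal(loc=0.0, scale=np.sqrt(variances))

# The marginal distribution of the errors should match a Student's t with nu d.f. and scale s.
quantiles = [0.5, 0.9, 0.99]
print(np.quantile(errors, quantiles))
print(stats.t.ppf(quantiles, df=nu, scale=s))   # close agreement for a sample this large
```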

    The Student's t distribution that best approximates a standard logistic distribution can be determined by matching the moments of the two distributions. The Student's t distribution has three parameters, and since the skewness of both distributions is always 0, the first four moments can all be matched, using the following equations:

    {\displaystyle {\begin{aligned}\mu &=0\\{\frac {\nu }{\nu -2}}s^{2}&={\frac {\pi ^{2}}{3}}\\{\frac {6}{\nu -4}}&={\frac {6}{5}}\end{aligned}}}

    This yields the following values:

    {\displaystyle {\begin{aligned}\mu &=0\\s&={\sqrt {{\frac {7}{9}}{\frac {\pi ^{2}}{3}}}}\\\nu &=9\end{aligned}}}

    The following graphs compare the standard logistic distribution with the Student's t distribution that matches the first four moments using the above-determined values, as well as the normal distribution that matches the first two moments. Note how much more closely the Student's t distribution agrees, especially in the tails. Beyond about two standard deviations from the mean, the logistic and normal distributions diverge rapidly, but the logistic and Student's t distributions don't start diverging significantly until more than 5 standard deviations away.
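
    The matched values above, and the tail behaviour just described, can be verified numerically with SciPy's distribution objects (the comparison points are arbitrary):

```python
import numpy as np
from scipy import stats

s = np.sqrt(7 * np.pi**2 / 27)          # scale of the moment-matched Student's t (nu = 9)
sigma = np.pi / np.sqrt(3)              # std. dev. of the moment-matched normal

logistic = stats.logistic()             # standard logistic distribution
student_t = stats.t(df=9, scale=s)
normal = stats.norm(scale=sigma)

# Upper-tail probabilities P(X > x): the t approximation tracks the logistic much
# further into the tails than the normal does.
for x in [2.0, 4.0, 6.0, 8.0]:
    print(x, logistic.sf(x), student_t.sf(x), normal.sf(x))
```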

    (Another possibility, also amenable to Gibbs sampling, is to approximate the logistic distribution using a mixture density of normal distributions.)

    (Figures: comparison of the logistic and approximating distributions (Student's t, normal); panels show the overall distributions, the tails, further tails, and extreme tails.)

    Extensions

    There are many extensions:

    • Multinomial logistic regression (or multinomial logit) handles the case of a multi-way categorical dependent variable (with unordered values, also called "classification"). Note that the general case of having dependent variables with more than two values is termed polytomous regression.
    • Ordered logistic regression (or ordered logit) handles ordinal dependent variables (ordered values).
    • Mixed logit is an extension of multinomial logit that allows for correlations among the choices of the dependent variable.
    • An extension of the logistic model to sets of interdependent variables is the conditional random field.

    Software

    Most statistical software can do binary logistic regression.

    • SAS
      • PROC LOGISTIC for basic logistic regression.[29]
      • PROC CATMOD when all the variables are categorical.[30]
      • PROC GLIMMIX for multilevel model logistic regression.[31]
    • R
      • glm in the stats package (using family = binomial)[32]
      • glmnet package for an efficient implementation of regularized logistic regression
      • glmer in the lme4 package for mixed-effects logistic regression
    • Python
      • Logistic Regression with ARD prior (code, tutorial)
      • Bayesian Logistic Regression with Laplace Approximation (code, tutorial)
      • Variational Logistic Regression (code, tutorial)
    • NCSS
      • Logistic Regression in NCSS

    See also

    • Logistic function
    • Discrete choice
    • Jarrow–Turnbull model
    • Limited dependent variable
    • Multinomial logit model
    • Ordered logit
    • Hosmer–Lemeshow test
    • Brier score
    • MLPACK – contains a C++ implementation of logistic regression
    • Local case-control sampling
    • Logistic model tree

    References

  • David A. Freedman (2009). Statistical Models: Theory and Practice. Cambridge University Press. p. 128.
  • Walker, SH; Duncan, DB (1967). "Estimation of the probability of an event as a function of several independent variables". Biometrika 54: 167–178. doi:10.2307/2333860.
  • Cox, DR (1958). "The regression analysis of binary sequences (with discussion)". J Roy Stat Soc B 20: 215–242.
  • Gareth James; Daniela Witten; Trevor Hastie; Robert Tibshirani (2013). An Introduction to Statistical Learning. Springer. p. 6.
  • Boyd, C. R.; Tolson, M. A.; Copes, W. S. (1987). "Evaluating trauma care: The TRISS method. Trauma Score and the Injury Severity Score". The Journal of Trauma 27 (4): 370–378. doi:10.1097/00005373-198704000-00005. PMID 3106646.
  • Kologlu, M; Elker, D; Altun, H; Sayek, I (2001). "Validation of MPI and OIA II in two different groups of patients with secondary peritonitis". Hepato-Gastroenterology 48 (37): 147–151.
  • Biondo, S; Ramos, E; Deiros, M; et al. (2000). "Prognostic factors for mortality in left colonic peritonitis: a new scoring system". J. Am. Coll. Surg. 191 (6): 635–642.
  • Marshall, J. C.; Cook, D. J.; Christou, N. V.; et al. (1995). "Multiple Organ Dysfunction Score: A reliable descriptor of a complex clinical outcome". Crit. Care Med. 23: 1638–1652.
  • Le Gall, J.-R.; Lemeshow, S; Saulnier, F (1993). "A new Simplified Acute Physiology Score (SAPS II) based on a European/North American multicenter study". JAMA 270: 2957–2963.
  • Truett, J; Cornfield, J; Kannel, W (1967). "A multivariate analysis of the risk of coronary heart disease in Framingham". Journal of Chronic Diseases 20 (7): 511–24. doi:10.1016/0021-9681(67)90082-3. PMID 6028270.
  • Harrell, Frank E. (2001). Regression Modeling Strategies. Springer-Verlag. ISBN 0-387-95232-2.
  • M. Strano; B. M. Colosimo (2006). "Logistic regression analysis for experimental determination of forming limit diagrams". International Journal of Machine Tools and Manufacture 46 (6): 673–682. doi:10.1016/j.ijmachtools.2005.07.005.
  • Palei, S. K.; Das, S. K. (2009). "Logistic regression model for prediction of roof fall risks in bord and pillar workings in coal mines: An approach". Safety Science 47: 88–96. doi:10.1016/j.ssci.2008.01.002.
  • Hosmer, David W.; Lemeshow, Stanley (2000). Applied Logistic Regression (2nd ed.). Wiley. ISBN 0-471-35632-8.[page needed]
  • http://www.planta.cn/forum/files_planta/introduction_to_categorical_data_analysis_805.pdf
  • Everitt, Brian (1998). The Cambridge Dictionary of Statistics. Cambridge, UK; New York: Cambridge University Press. ISBN 0521593468.
  • Menard, Scott W. (2002). Applied Logistic Regression (2nd ed.). SAGE. ISBN 978-0-7619-2208-7.[page needed]
  • Menard ch 1.3
  • Peduzzi, P; Concato, J; Kemper, E; Holford, TR; Feinstein, AR (December 1996). "A simulation study of the number of events per variable in logistic regression analysis". Journal of Clinical Epidemiology 49 (12): 1373–9. doi:10.1016/s0895-4356(96)00236-3. PMID 8970487.
  • Greene, William H. (2003). Econometric Analysis (Fifth ed.). Prentice-Hall. ISBN 0-13-066189-9.
  • Cohen, Jacob; Cohen, Patricia; West, Steven G.; Aiken, Leona S. (2002). Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences (3rd ed.). Routledge. ISBN 978-0-8058-2223-6.[page needed]
  • Measures of Fit for Logistic Regression
  • Tjur, Tue (2009). "Coefficients of determination in logistic regression models". The American Statistician: 366–372.
  • Hosmer, D.W. (1997). "A comparison of goodness-of-fit tests for the logistic regression model". Statistics in Medicine 16: 965–980. doi:10.1002/(sici)1097-0258(19970515)16:9<965::aid-sim509>3.3.co;2-f.
  • https://class.stanford.edu/c4x/HumanitiesScience/StatLearning/asset/classification.pdf slide 16
  • Bolstad, William M. (2010). Understanding Computational Bayesian Statistics. Wiley. ISBN 978-0-470-04609-8.[page needed]
  • Bishop, Christopher M. "Chapter 4. Linear Models for Classification". Pattern Recognition and Machine Learning. Springer Science+Business Media, LLC. pp. 217–218. ISBN 978-0387-31073-2.
  • Bishop, Christopher M. "Chapter 10. Approximate Inference". Pattern Recognition and Machine Learning. Springer Science+Business Media, LLC. pp. 498–505. ISBN 978-0387-31073-2.
  • https://support.sas.com/documentation/cdl/en/statug/63347/HTML/default/viewer.htm#logistic_toc.htm
  • https://support.sas.com/documentation/cdl/en/statug/63347/HTML/default/viewer.htm#statug_catmod_sect003.htm
  • https://support.sas.com/documentation/cdl/en/statug/63033/HTML/default/viewer.htm#glimmix_toc.htm
  • Gelman, Andrew; Hill, Jennifer (2007). Data Analysis Using Regression and Multilevel/Hierarchical Models. New York: Cambridge University Press. pp. 79–108. ISBN 978-0-521-68689-1.
    Further reading

    • Agresti, Alan (2002). Categorical Data Analysis. New York: Wiley-Interscience. ISBN 0-471-36093-7.
    • Amemiya, Takeshi (1985). "Qualitative Response Models". Advanced Econometrics. Oxford: Basil Blackwell. pp. 267–359. ISBN 0-631-13345-3.
    • Balakrishnan, N. (1991). Handbook of the Logistic Distribution. Marcel Dekker, Inc. ISBN 978-0-8247-8587-1.
    • Gouriéroux, Christian (2000). "The Simple Dichotomy". Econometrics of Qualitative Dependent Variables. New York: Cambridge University Press. pp. 6–37. ISBN 0-521-58985-1.
    • Greene, William H. (2003). Econometric Analysis (fifth ed.). Prentice Hall. ISBN 0-13-066189-9.
    • Hilbe, Joseph M. (2009). Logistic Regression Models. Chapman & Hall/CRC Press. ISBN 978-1-4200-7575-5.
    • Hosmer, David (2013). Applied Logistic Regression. Hoboken, New Jersey: Wiley. ISBN 978-0470582473.
    • Howell, David C. (2010). Statistical Methods for Psychology (7th ed.). Belmont, CA: Thomson Wadsworth. ISBN 978-0-495-59786-5.
    • Peduzzi, P.; J. Concato; E. Kemper; T.R. Holford; A.R. Feinstein (1996). "A simulation study of the number of events per variable in logistic regression analysis". Journal of Clinical Epidemiology 49 (12): 1373–1379. doi:10.1016/s0895-4356(96)00236-3. PMID 8970487.

    External links


    • Econometrics Lecture (topic: Logit model) on YouTube by Mark Thoma
    • Logistic Regression Interpretation
    • Logistic Regression tutorial
    • Open source Excel add-in implementation of Logistic Regression

    Reposted from: https://www.cnblogs.com/davidwang456/articles/5592886.html
