
  • Slide 1 Standard Binary Logistic Regression
  • Slide 2 Logistic regression
    Logistic regression is used to analyze relationships between a dichotomous dependent variable and metric or non-metric independent variables. (SPSS now supports multinomial logistic regression, which can be used with more than two groups, but our focus here is on binary logistic regression for two groups.) Logistic regression combines the independent variables to estimate the probability that a particular event will occur, i.e. that a subject will be a member of one of the groups defined by the dichotomous dependent variable.
    In SPSS, the model is always constructed to predict the group with the higher numeric code. If responses are coded 1 for Yes and 2 for No, SPSS will predict membership in the No category. If responses are coded 1 for No and 2 for Yes, SPSS will predict membership in the Yes category. We will refer to the predicted event for a particular analysis as the modeled event. Predicting the No event creates some awkward wording in our problems; our only option for changing this is to recode the variable.
    If the probability of membership in the modeled category is above some cut point (the default is 0.50), the subject is predicted to be a member of the modeled group; if the probability is below the cut point, the subject is predicted to be a member of the other group. For any given case, logistic regression computes the probability that a case with a particular set of values for the independent variables is a member of the modeled category.
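To make these mechanics concrete outside of SPSS, here is a minimal sketch in Python's statsmodels on made-up data (the variables and coding are hypothetical, not from the slides): the model estimates each case's probability of the modeled event, and the 0.50 cut point converts probabilities into predicted group membership.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data: y = 1 is the modeled event, y = 0 the other group.
rng = np.random.default_rng(0)
X = sm.add_constant(rng.normal(size=(200, 2)))      # two metric predictors
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(size=200) > 0).astype(int)

result = sm.Logit(y, X).fit()           # maximum-likelihood fit
probs = result.predict(X)               # P(modeled event) for each case
predicted = (probs > 0.50).astype(int)  # default 0.50 cut point
```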
  • Slide 3 Level of measurement requirements
    Logistic regression analysis requires that the dependent variable be dichotomous. The independent variables may be metric or non-metric; the logistic regression procedure will dummy-code non-metric variables for us.
    For logistic regression, we will use indicator dummy-coding rather than deviation dummy-coding, since I think it makes more sense to compare the odds for two groups than to compare the odds for one group to the average odds for all groups. If an independent variable is ordinal, we can either treat it as non-metric and dummy-code it, or treat it as interval, in which case we will attach the usual caution. Dichotomous independent variables do not have to be dummy-coded, but in our problems we will have SPSS dummy-code them, because then we do not need to worry about the original codes for the variable and can always interpret the coefficients the same way.
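The same coding choices can be sketched in Python's statsmodels formula interface (hypothetical variable names; this is an illustration, not the SPSS procedure): a non-metric predictor is declared categorical so it is indicator-coded automatically, and an ordinal predictor can be entered either as interval or as dummy-coded.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: a non-metric predictor (region) and an ordinal one (educ).
rng = np.random.default_rng(1)
n = 120
df = pd.DataFrame({
    "region": rng.choice(["north", "south", "east"], size=n),
    "educ": rng.integers(1, 4, size=n),            # ordinal: 1 < 2 < 3
})
df["buy"] = (rng.random(n) < 0.3 + 0.1 * df["educ"]).astype(int)

# C(region) requests indicator (dummy) coding for the non-metric predictor;
# entering educ bare treats the ordinal predictor as interval.
m_interval = smf.logit("buy ~ C(region) + educ", data=df).fit()

# Alternatively, treat the ordinal predictor as non-metric and dummy-code it too.
m_dummy = smf.logit("buy ~ C(region) + C(educ)", data=df).fit()
```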
  • Slide 4 Dummy-coding in SPSS - 1
    When we want SPSS to dummy-code a variable, we enter the specifications in the Define Categorical Variables dialog box. Here we are dummy-coding sex, using the defaults of indicator coding with the last category as the reference category.
    In the table of coefficients, the dummy-coded variable is referred to by its original name plus the value for the Parameter coding in the Categorical Variables Codings table. SPSS shows you its coding scheme in the Categorical Variables Codings table in the output. Since we chose the last category as the reference, FEMALE is coded 0.
  • Slide 5 Dummy-coding in SPSS - 2
    Here we are dummy-coding sex, using indicator coding with the First category as the reference category. Note that you must click on the Change button after selecting the First option button.
    In the table of coefficients, the dummy-coded variable is still referred to by its original name plus the value for the Parameter coding in the Categorical Variables Codings table, but in this case it stands for females. SPSS shows you its coding scheme in the Categorical Variables Codings table in the output. Since we chose the FIRST category as the reference, MALE is coded 0.
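The last-versus-first reference choice has a direct analogue in statsmodels' Treatment (indicator) coding; this sketch, on hypothetical data, shows how changing the reference category changes which group the coefficient stands for.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data with a dichotomous predictor coded as strings.
rng = np.random.default_rng(2)
n = 100
df = pd.DataFrame({"sex": rng.choice(["male", "female"], size=n)})
df["buy"] = (rng.random(n) < np.where(df["sex"] == "male", 0.6, 0.4)).astype(int)

# Reference = "female": the sex coefficient describes males (the analogue of
# indicator coding with the last category as reference in these slides).
m_last = smf.logit('buy ~ C(sex, Treatment(reference="female"))', data=df).fit()

# Reference = "male": the coefficient now stands for females (the analogue of
# choosing First as the reference category).
m_first = smf.logit('buy ~ C(sex, Treatment(reference="male"))', data=df).fit()
```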
  • Slide 6 Assumptions
    Logistic regression does not make any assumptions of normality, linearity, or homogeneity of variance for the independent variables. When the variables satisfy those assumptions, discriminant analysis has historically been cited as the more effective statistical procedure for evaluating relationships with a non-metric dependent variable. However, logistic regression is being used more and more frequently because it can be interpreted similarly to other general linear model problems. When the variables do not satisfy the assumptions of normality, linearity, and homogeneity of variance, logistic regression is the statistic of choice, since it does not make these assumptions.
    Multicollinearity is a problem for logistic regression, with the same consequences as in multiple regression, i.e. we are likely to misinterpret the contribution of independent variables when they are collinear. SPSS does not compute tolerance values for logistic regression, so we will detect multicollinearity through the examination of standard errors; evidence of multicollinearity shows up as a numerical problem in the attempted solution. We will not interpret models when evidence of multicollinearity is found.
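Since SPSS does not report tolerance for logistic regression, one workaround (my suggestion, not from the slides) is to compute variance inflation factors on the predictors directly; tolerance is simply 1/VIF. A sketch, assuming the predictors sit in a pandas DataFrame:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def vif_table(X: pd.DataFrame) -> pd.DataFrame:
    """VIF and tolerance (1/VIF) for each predictor column of X."""
    exog = sm.add_constant(X)                      # include an intercept
    vifs = [variance_inflation_factor(exog.values, i)
            for i in range(1, exog.shape[1])]      # skip the constant
    return pd.DataFrame({"predictor": X.columns,
                         "VIF": vifs,
                         "tolerance": [1.0 / v for v in vifs]})
```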
  • Slide 7 Numerical problems
    The maximum likelihood method used to calculate logistic regression is an iterative fitting process that cycles through repetitions to find an answer. Sometimes the method will break down and not be able to converge on an answer. Sometimes it will produce wildly improbable results, reporting that a one-unit change in an independent variable increases the odds of the modeled event by hundreds of thousands or millions. These implausible results can be produced by multicollinearity, by categories of predictors having no cases (zero cells), and by complete separation, whereby the two groups are perfectly separated by the scores on one or more independent variables.
    The clue that we have numerical problems and should not interpret the results is standard errors for some independent variables that are larger than 2.0 (this does not apply to the constant).
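That screening rule is easy to automate. A minimal sketch, assuming a fitted statsmodels Logit result and the slides' threshold of 2.0:

```python
def flag_numerical_problems(result, threshold=2.0):
    """Return names of predictors whose standard errors exceed the threshold.

    `result` is a fitted statsmodels Logit result; the constant term is
    excluded, since the slides' 2.0 rule does not apply to it.
    """
    return [name for name, se in result.bse.items()
            if name not in ("const", "Intercept") and se > threshold]
```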
  • Slide 8 Sample size requirements
    The minimum number of cases per independent variable is 10, using a guideline provided by Hosmer and Lemeshow, authors of Applied Logistic Regression, one of the main resources on logistic regression. If we do not meet the sample size requirement, it is suggested that this be mentioned as a limitation of our analysis. If the relationships between the predictors and the dependent variable are strong, we may still attain statistical significance with smaller samples.
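The guideline itself is one line of arithmetic; a trivial sketch:

```python
def meets_sample_size_rule(n_cases: int, n_predictors: int, per_iv: int = 10) -> bool:
    """Hosmer and Lemeshow guideline: at least 10 cases per independent variable."""
    return n_cases >= per_iv * n_predictors

meets_sample_size_rule(n_cases=120, n_predictors=4)   # True: 120 >= 4 * 10
```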
  • Slide 9 Methods for including variables
    SPSS supports three methods for including variables in the regression equation:
    – the standard or simultaneous method, in which all independents are included at the same time;
    – the hierarchical method, in which control variables are entered into the analysis before the predictors whose effects we are primarily concerned with;
    – the stepwise method (Forward Conditional or Forward LR in SPSS), in which variables are selected in the order in which they maximize the statistically significant contribution to the model.
    For all methods, the contribution to the model is measured by the model chi-square, a statistical measure of the fit between the dependent and independent variables, similar to R² in multiple regression.
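Hierarchical entry can be imitated outside SPSS by fitting the control-only block and then the full model; the drop in -2 log likelihood is the block chi-square. A sketch with hypothetical variables (age as the control, income as the focal predictor):

```python
import numpy as np
import pandas as pd
import scipy.stats as stats
import statsmodels.formula.api as smf

# Hypothetical data: 'age' is the control variable, 'income' the focal predictor.
rng = np.random.default_rng(3)
n = 150
df = pd.DataFrame({"age": rng.normal(40, 10, n), "income": rng.normal(50, 15, n)})
df["buy"] = (rng.random(n) < 1 / (1 + np.exp(-0.05 * (df["income"] - 50)))).astype(int)

block1 = smf.logit("buy ~ age", data=df).fit()           # controls entered first
block2 = smf.logit("buy ~ age + income", data=df).fit()  # then the focal predictor

# Block chi-square: the reduction in -2 log likelihood from adding 'income'.
block_chi2 = -2 * block1.llf - (-2 * block2.llf)
p_value = stats.chi2.sf(block_chi2, block2.df_model - block1.df_model)
```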
  • Slide 10 Computational method
    Multiple regression uses the least-squares method to find the coefficients for the independent variables in the regression equation, i.e. it computes coefficients that minimize the residuals for all cases. Logistic regression instead uses maximum-likelihood estimation to compute the coefficients for the logistic regression equation. This method attempts to find coefficients that match the breakdown of cases on the dependent variable.
    The overall measure of how well the model fits is given by the likelihood value, which is similar to the residual or error sum of squares for multiple regression. A model that fits the data well will have a small likelihood value; a perfect model would have a likelihood value of zero. Maximum-likelihood estimation is an iterative procedure that successively works to get closer and closer to the correct answer. When SPSS reports the "iterations," it is telling us how many cycles it took to get the answer.
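To make the iterative fitting concrete, here is a minimal Newton-Raphson loop for the logistic maximum-likelihood estimate in NumPy (an illustrative sketch, not SPSS's exact algorithm); each pass through the loop is one "iteration" in the sense SPSS reports:

```python
import numpy as np

def logit_mle(X, y, tol=1e-8, max_iter=25):
    """Fit a logistic regression by Newton-Raphson maximum likelihood.

    X: (n, p) design matrix including a constant column; y: 0/1 outcomes.
    Returns the coefficients, the -2 log likelihood, and the iteration count.
    """
    beta = np.zeros(X.shape[1])
    for iteration in range(1, max_iter + 1):
        p = 1.0 / (1.0 + np.exp(-X @ beta))           # current probabilities
        step = np.linalg.solve(
            X.T @ (X * (p * (1 - p))[:, None]),       # Hessian (information)
            X.T @ (y - p))                            # gradient (score)
        beta += step                                  # one fitting cycle
        if np.max(np.abs(step)) < tol:                # converged: stop cycling
            break
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    minus_2ll = -2.0 * np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    return beta, minus_2ll, iteration
```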
  • Slide 11 Overall test of relationship
    Errors in a logistic regression model are measured in terms of -2 log likelihood (abbreviated as -2LL) values, which are analogous to the total sum of squares. When an independent variable has a relationship to the dependent variable, the measure of error decreases. Since the log likelihood itself is a negative number, -2LL is positive, and an improvement in the relationship is indicated by a smaller -2LL, e.g. if -2LL were 200, a -2LL of 100 would represent an improvement.
    The overall test of relationship among the independent variables and the groups defined by the dependent variable is based on the reduction in -2LL between a model that does not contain any independent variables and the model that contains the independent variables. This difference in likelihood follows a chi-square distribution and is referred to as the model chi-square. The significance test for the model chi-square is our statistical evidence of the presence of a relationship between the dependent variable and the combination of the independent variables. In a hierarchical logistic regression, the significance test for the addition of the predictor variables is based on the block chi-square in the Omnibus Tests of Model Coefficients table.
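In statsmodels terms, the model chi-square is the likelihood-ratio statistic comparing the constant-only model with the full model. A sketch, assuming a fitted Logit result from the earlier examples:

```python
import scipy.stats as stats

# 'result' is any fitted statsmodels Logit result (see earlier sketches).
null_m2ll = -2 * result.llnull      # -2LL of the constant-only model
full_m2ll = -2 * result.llf         # -2LL with the independent variables
model_chi2 = null_m2ll - full_m2ll  # reduction in error: the model chi-square

p_value = stats.chi2.sf(model_chi2, result.df_model)
# statsmodels reports the same test directly as result.llr and result.llr_pvalue.
```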
  • Slide 12 Overall test of relationship in SPSS output
    Though the iteration history is not usually an output of