
research test

2006-07-30 15:24:46 · 5 answers · asked by ericnsherylshome 1 in Education & Reference Higher Education (University +)

5 answers

Multivariate analysis of variance (MANOVA) is an extension of analysis of variance (ANOVA) methods to cover cases where there is more than one dependent variable and where the dependent variables cannot simply be combined. As well as identifying whether changes in the independent variables have a significant effect on the dependent variables, the technique also seeks to identify the interactions among the independent variables and the association between dependent variables.

I hope this is the answer you seek. Ciao amigo.

2006-07-30 15:28:32 · answer #1 · answered by The Chaotic Darkness 7 · 5 0

Manova

2016-11-14 05:48:55 · answer #2 · answered by plumley 4 · 0 0

If I could understand your question I wouldn't be on minimum wage!!

2016-03-16 09:01:29 · answer #3 · answered by ? 4 · 0 0

Just a multivariate analysis of the variances of the research test results.

2006-07-30 15:30:47 · answer #4 · answered by sunshine25 7 · 0 0


Overview
Multivariate GLM is the version of the general linear model now often used to implement two long-established statistical procedures - MANOVA and MANCOVA. Multivariate GLM, MANOVA, and MANCOVA all deal with the situation where there is more than one dependent variable and one or more independents. MANCOVA also supports use of continuous control variables as covariates.
Multivariate analysis of variance (MANOVA) is used to assess the main and interaction effects of categorical variables on multiple dependent interval variables. MANOVA uses one or more categorical independents as predictors, like ANOVA, but unlike ANOVA, there is more than one dependent variable. Where ANOVA tests the differences in means of the interval dependent for various categories of the independent(s), MANOVA tests the differences in the centroid (vector) of means of the multiple interval dependents, for various categories of the independent(s). One may also perform planned comparisons or post hoc comparisons to see which values of a factor contribute most to the explanation of the dependents.

There are multiple potential purposes for MANOVA:


To compare groups formed by categorical independent variables on group differences in a set of interval dependent variables.
To use lack of difference for a set of dependent variables as a criterion for reducing a set of independent variables to a smaller, more easily modeled number of variables.
To identify the independent variables which differentiate a set of dependent variables the most.
Multivariate analysis of covariance (MANCOVA) is similar to MANOVA, but interval independents may be added as "covariates." These covariates serve as control variables for the independent factors, serving to reduce the error term in the model. Like other control procedures, MANCOVA can be seen as a form of "what if" analysis, asking what would happen if all cases scored equally on the covariates, so that the effect of the factors over and beyond the covariates can be isolated. The discussion of concepts in the ANOVA section also applies, including the discussion of assumptions.
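Such a model might be specified in SPSS GLM syntax along the following lines (a sketch only; the variable names score1, score2, group, method, and pretest are hypothetical):

GLM score1 score2 BY group method WITH pretest
/METHOD = SSTYPE(3)
/PRINT = DESCRIPTIVE HOMOGENEITY
/DESIGN = group method group*method.

Here score1 and score2 are the dependents, group and method are categorical factors, pretest is a continuous covariate, and PRINT = HOMOGENEITY requests the homogeneity tests (Levene's test and Box's M) discussed under assumptions below.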


Key Concepts

General Linear Model (GLM). In more recent versions of SPSS, MANOVA and MANCOVA are found under "GLM" (General Linear Model). Output is still similar, but with GLM, parameters (coefficients) are created for every category of every factor and this "full parameterization" approach handles the problem of empty cells better than traditional MANOVA. GLM accepts categorical variables which, in SPSS regression, must be manipulated manually as dummy variables. That is, GLM automatically transforms declared categorical variables into sets of indicator variables. GLM calculates parameters using IWLS (iterative weighted least squares). The seminal article on GLM is Nelder and Wedderburn (1972) and an overview is Gill (2001).
Parameters in GLM are not interpreted as in ordinary least squares. That is, a unit change in independent variable k does not correspond to a change of b_k in the dependent variable. This is because GLM uses a nonlinear link function. If coefficients are examined, it is in terms of first differences: the researcher determines two levels of interest for a given independent variable, then calculates the value of the dependent variable under these two conditions, holding all other variables constant at their mean values.

Significance Tests


F-test. The omnibus or overall F test is the first step of the two-step MANOVA process of analysis. The F test appears in the "Tests of Between-Subjects Effects" table of GLM MANOVA output in SPSS and answers the question, "Is the model significant for each dependent?" There will be an F significance level for each dependent. That is, the F test tests the null hypothesis that there is no difference in the means of each dependent variable for the different groups formed by categories of the independent variables. The multivariate formula for F is based not only on the sum of squares between and within groups, as in ANOVA, but also on the sum of cross-products - that is, it takes covariance into account as well as group means.

Multivariate tests, in contrast, answer the question, "Is each effect significant?" That is, where the F test focuses on the dependents, the multivariate tests focus on the independents and their interactions. These tests appear in the "Multivariate Tests" table of SPSS output. There are four leading multivariate tests of group differences.

Hotelling's T-Square is the most common, traditional test where there are two groups formed by the independent variables. Note one may see the related statistic, Hotelling's Trace (a.k.a. Lawley-Hotelling or Hotelling-Lawley Trace). To convert from the Trace coefficient to the T-Square coefficient, multiply the Trace coefficient by (N-g), where N is the sample size across all groups and g is the number of groups. The T-Square result will still have the same F value, degrees of freedom, and significance level as the Trace coefficient.
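For example, with hypothetical numbers: if Hotelling's Trace were .25 for two groups with a total sample size of N = 60, then T-Square = .25 x (60 - 2) = 14.5.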

Wilks' lambda, U. This is the most common, traditional test where there are more than two groups formed by the independent variables. It is a measure of the difference between groups in the centroid (vector) of means on the dependent variables. The smaller the lambda, the greater the differences. Bartlett's V transformation of lambda is then used to compute the significance of lambda. Wilks' lambda is used, in conjunction with Bartlett's V, as a multivariate significance test of mean differences in MANOVA, for the case of multiple interval dependents and multiple (>2) groups formed by the independent(s). The t-test, Hotelling's T, and the F test are special cases of Wilks' lambda.

Pillai-Bartlett trace, V. Multiple discriminant analysis (MDA) is the part of MANOVA where canonical roots are calculated. Each significant root is a dimension on which the vector of group means is differentiated. The Pillai-Bartlett trace is the sum of explained variances on the discriminant variates, which are the variables computed from the canonical coefficients for a given root. Olson (1976) found V to be the most robust of the four tests, and it is sometimes preferred for this reason.

Roy's greatest characteristic root (GCR) is similar to the Pillai-Bartlett trace but is based only on the first (and hence most important) root. Specifically, if lambda is the largest eigenvalue, then GCR = lambda/(1 + lambda). Note that Roy's largest root is sometimes also equated with the largest eigenvalue, as in SPSS's GLM procedure (however, SPSS reports GCR for MANOVA). GCR is less robust than the other tests in the face of violations of the assumption of multivariate normality.
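For example, with a hypothetical largest eigenvalue of lambda = 1.5, GCR = 1.5/(1 + 1.5) = .60.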



Post-Hoc Tests. In the second step of MANOVA, if the overall F test shows that the centroid (vector) of means of the dependent variables is not the same for all the groups formed by the categories of the independent variables, post-hoc univariate F tests of group differences are used to determine just which group means differ significantly from others. This helps specify the exact nature of the overall effect determined by the F test. Pairwise multiple comparison tests compare each pair of groups to identify similarities and differences. Multiple comparison procedures and post hoc tests are discussed more extensively in the corresponding section under ANOVA.

Bonferroni adjustment. When there are many dependents, some univariate tests might be significant due to chance alone. That is, the nominal .05 level is not the actual alpha level, so researchers may adjust the nominal alpha level. Actual alpha = 1 - (1 - alpha1)(1 - alpha2)...(1 - alphan), where alpha1 to alphan are the nominal levels of alpha for a series of post hoc tests. For instance, for a series of 4 tests at the nominal alpha level of .01, the actual alpha would be estimated to be 1 - (.99)^4 = .039. One wants the actual adjusted alpha level to be no greater than .05.
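Working in the other direction, the usual Bonferroni rule of thumb divides the desired overall alpha by the number of tests: for a series of 4 post hoc tests and a desired actual alpha of .05, one would test each at the nominal .05/4 = .0125 level, since 1 - (1 - .0125)^4 = .049, just under .05.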

Bonferroni test: If the Bonferroni test is requested, SPSS will print out a table of "Multiple Comparisons" giving the mean difference in the dependent variable between any two groups (ex., differences in test scores for any two educational groups). The significance of this difference is also printed, and an asterisk is printed next to differences significant at the .05 level or better. The Bonferroni method is preferred when the number of groups is small.

Tukey test: If the Tukey test is requested, SPSS will produce a similar table which is interpreted in the same way. The Tukey method is preferred when the number of groups is large.

Other tests: When the assumption of homogeneity of variances is not met, SPSS provides these alternative methods (not shown here): Games-Howell, Tamhane's T2, Dunnett's T3, and Dunnett's C.
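In SPSS GLM syntax, such pairwise comparisons might be requested as in this sketch (variable names hypothetical; the factor must have at least three levels for post hoc tests, and GH requests Games-Howell):

GLM score1 score2 BY group
/POSTHOC = group (BONFERRONI TUKEY GH)
/DESIGN = group.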



Canonical Correlation

In general: When using MANOVA, social scientists are often concerned with the overall F-test of significance, which is a test of the null hypothesis of no group differences, as shown in the SPSS output section titled "Analysis of Variance." However, the second optional step in MANOVA is to examine the output block for each effect, titled "Eigenvalues and Canonical Correlations." This section can be used to better understand the nature of group differences established by the overall F test. The meaning of the elements of this output section is discussed below.

Canonical roots or linear discriminant functions, LDFs. In order to test the hypothesis that groups differ significantly on weighted combinations of the observed independent variables, MANOVA conducts a multiple discriminant analysis (MDA). MDA partitions the variance of dependent variables into components also called canonical roots or LDFs. The canonical roots are analogous to principal components in factor analysis, except they seek to maximize the between-groups variance. In canonical correlation, the canonical roots are also called canonical variables.
Each canonical root represents a dimension of meaning, but what? What meaningful label do we give to each canonical root (which SPSS labels merely 1, 2, etc.)? In factor analysis one ascribes a label to each factor based on the factor loadings of each measured variable on the factor. In MANOVA, this is done on multiple bases, using the raw weights, standardized weights, and structure correlations. The structure correlations, which are the correlations between the measured variables and the canonical roots, are often the most useful for this purpose when there is more than one significant canonical root. In MANOVA, there will be one set of MDA output for each main and interaction effect.


Eigenvalue: The eigenvalue reflects the proportion of the total variance of the group of variables included in the analysis that is accounted for by a specific canonical root. That is, the larger the eigenvalue, the larger the group differences on the variate (the variable computed by the linear combination of canonical coefficients) for that canonical root. SPSS will print the associated percent of variance explained and the cumulative percent next to each canonical root in the section "Eigenvalues and Canonical Correlations."

Condition indices. A "singular value" is the square root of an eigenvalue, and "condition indices" are the ratios of the largest singular value to each other singular value. Condition indices are used to flag excessive collinearity in the data. A condition index over 15 indicates possible collinearity problems and an index over 30 suggests serious collinearity problems. See further discussion in the section on regression diagnostics.
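For example, with hypothetical eigenvalues of 4.0, 1.0, and .04, the singular values are 2.0, 1.0, and .2, giving condition indices of 1, 2, and 10 - no indication of serious collinearity.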

Canonical correlation: Every main and interaction effect has a set of canonical roots, and every canonical root has both a corresponding eigenvalue and a canonical correlation coefficient. The canonical coefficients are the weights in the linear combination of variables being canonically correlated, chosen to maximize the correlation relating the independent and dependent sets of variables. In MANOVA there will be a set of canonical roots for each main and interaction effect of the independents on the set of dependent variables, and one canonical correlation coefficient for each canonical root. Since canonical correlation coefficients are standardized, they may be compared: the ratio of the canonical correlations for a set of canonical roots indicates their relative importance for the given effect.

F-to-remove index. Many computer programs output this F test, which gauges the effect of removing a given variable from the analysis. The larger the F-to-remove index, the more that variable contributes to group separation.



Plots

Spread-versus-level plots depict standard deviations vs. means, or variances vs. means, for each dependent variable. Each point represents one cell of the factor design, plotted by its mean and its standard deviation or variance. This is useful in testing the homogeneity of variances assumption, and in identifying cells which deviate substantially from the assumption.

Observed*Predicted*Standardized Residual Plots. For each dependent variable, a plot is produced which shows the 6 comparisons among observed, predicted, and standardized residuals. For observed by predicted, one would like to see a clear pattern, but for the plots involving standardized residuals, one would like not to see a pattern.

Profile plots are line plots of the predicted means of each dependent variable across levels of each factor. When two or three factors are involved, these are called interaction plots. Where a plot of observed means would show the effect being studied and the error, the profile plots of predicted means show the effect without the error. Each point in a profile plot indicates the estimated marginal mean of the dependent variable (adjusted for covariates in MANCOVA) at one level of a given factor. The profile plot shows if the estimated marginal means are increasing across levels. A second or third factor can be represented by a second or third line (not shown here), where parallel lines indicate no interaction and crossing lines indicate interaction among the factors.
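In SPSS GLM syntax, the plots described in this section might be requested with the PLOT subcommand, as in this sketch (variable names hypothetical):

GLM score1 score2 BY gender group
/PLOT = SPREADLEVEL RESIDUALS PROFILE(group*gender)
/DESIGN = gender group gender*group.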



Profile Analysis
Profile analysis is equivalent to repeated measures MANOVA. There is a within-subjects factor, which is either time (the same item administered at sequential time periods) or tests (repeated measures of the same underlying construct). Then there is a between-subjects grouping factor. For instance, tests t1, t2, t3, and t4 could be grouped by gender (m, f). For categories of the grouping factor (here, gender), one could plot the mean response on each of the multiple tests, connecting the means with lines, one line per category of the factor (thus two lines for gender). The lines are termed "profiles." One asks if the profiles are parallel, if the profiles are equal or separated, and if the means of each factor category are the same for each of the dependent variables. In SPSS, profile tests can be accomplished in MANOVA by clicking on the Paste button and adding LMATRIX and MMATRIX commands in syntax to specify the type of contrast wanted, as described below.
Note that profile analysis in MANOVA has been superseded to some extent by multidimensional scaling, mixed model ANOVA, and/or random effects regression models.


Parallelism tests test if profiles are parallel. This is equivalent to testing that there is no interaction of the within-subjects factor with the between-subjects factor (ex., between tests and gender). A finding of non-significance means the profiles are not significantly different in shape, and the researcher concludes the profiles are parallel. In SPSS, the syntax is of the form:
/LMATRIX = GENDER 1 -1
/MMATRIX
t1 1 t2 -1;
t2 1 t3 -1;
t3 1 t4 -1.

The LMATRIX command specifies a contrast between the two values of gender. The MMATRIX command asks for contrasts between t1 and t2, between t2 and t3, and between t3 and t4. Output will be in a section labeled "Custom Hypothesis Tests".
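In context, the full command for this parallelism test might look like the following sketch, assuming dependents t1 to t4 and a two-level gender factor as above:

GLM t1 t2 t3 t4 BY gender
/LMATRIX = gender 1 -1
/MMATRIX
t1 1 t2 -1;
t2 1 t3 -1;
t3 1 t4 -1
/DESIGN = gender.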

Equality of profiles tests, also called separation of group profiles tests or tests of the group hypothesis, test if parallel profiles are equal (coincident) or separated (dispersed). This is equivalent to testing that there is no main effect of the between-subjects factor (ex., gender). A finding of non-significance means the grouping variable (ex., gender) has no effect and the profiles of gender are equal. In SPSS, the syntax is of the form:
/LMATRIX = gender 1 -1
/MMATRIX = t1 .25 t2 .25 t3 .25 t4 .25.

The LMATRIX command again specifies a contrast between the two values of gender. The MMATRIX command asks for a single equality test comparing the four levels of "test" on an equally weighted basis. Output will be in a section labeled "Custom Hypothesis Tests."

Equality of means tests, also called tests of flatness, test if there are significant differences across levels of the within-subjects factor (ex., tests t1, t2, t3, and t4), ignoring the between-groups factor (ex., ignoring gender by looking at the whole sample). No differences would mean a flat profile, hence "tests of flatness." In these tests, the contrast is with the intercept (constant), which represents the equally weighted average of the within-groups factor (the dependent measures, ex., t1, t2, t3, and t4) when the grouping factor is unknown (treated as 0). A finding of non-significance means the two genders are not significantly different, on average, on any of the levels of the within-groups factor (ex., on t1, t2, t3, and t4). In SPSS, the syntax is of the form:
/LMATRIX = INTERCEPT 1 gender .5 .5
/MMATRIX
t1 1 t2 -1;
t2 1 t3 -1;
t3 1 t4 -1.

The LMATRIX command specifies a contrast between the two equally weighted values of gender and the intercept. The MMATRIX command asks for contrasts between t1 and t2, between t2 and t3, and between t3 and t4. SPSS output will be in a section labeled "Custom Hypothesis Tests."



Assumptions
Observations are independent of one another. MANOVA is not robust when the selection of one observation depends on selection of one or more earlier ones, as in the case of before-after and other repeated measures designs. However, there does exist a variant of MANOVA for repeated measures designs.

The independent variable or variables are categorical.

The dependent variables are continuous and interval level.

Low measurement error of the covariates. The covariate variables are continuous and interval level, and are assumed to be measured without error. Imperfect measurement reduces the statistical power of the F test for MANCOVA, and for experimental data there is a conservative bias (increased likelihood of Type II errors: thinking there is no relationship when in fact there is a relationship). As a rule of thumb, covariates should have a reliability coefficient of .80 or higher.

Equal group sizes. To the extent that group sizes are very unequal, statistical power diminishes. SPSS adjusts automatically for unequal group sizes. In SPSS, METHOD=UNIQUE is the usual method.

Appropriate sums of squares. Normally there are data for every cell in the design. For instance, 2-way ANOVA with a 3-level factor and a 4-level factor will have 12 cells (groups). But if there are no data for some of the cells, the ordinary computation of sums of squares ("Type III" is the ordinary, default type) will result in bias. When there are empty cells, one must ask for "Type IV" sums of squares, which compare a given cell with averages of other cells. In SPSS: Analyze, General Linear Model, Univariate; click Model, then set "Sum of Squares" to "Type IV" or another appropriate type depending on one's design (a syntax sketch follows this list):

Type I. Used in hierarchical balanced designs where main effects are specified before first-order interaction effects, and first-order interaction effects are specified before second-order interaction effects, etc. Also used for purely nested models where a first effect is nested within a second effect, the second within a third, etc. And used in polynomial regression models where simple terms are specified before higher-order terms (ex., squared terms).

Type II. Used with purely nested designs which have main factors and no interaction effects, or with any regression model, or for balanced models common in experimental research.

Type III. The default type and by far the most common, for any models mentioned above and any balanced or unbalanced model as long as there are no empty cells in the design.

Type IV. Required if any cells are empty in a balanced or unbalanced design. This would include all nested designs, such as Latin square design.
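In syntax, the sums-of-squares type is set with the METHOD subcommand, as in this sketch (the dependents y1 and y2 and factors a and b are hypothetical):

GLM y1 y2 BY a b
/METHOD = SSTYPE(4)
/DESIGN = a b a*b.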



Adequate sample size. At a minimum, every cell must have more cases than there are dependent variables. With multiple factors and multiple dependents, group sizes fall below minimum levels more easily than in ANOVA/ANCOVA.

Residuals are randomly distributed.

Homoscedasticity (homogeneity of variances and covariances): within each group formed by the categorical independents, the variance of each interval dependent should be similar, as tested by Levene's test, below. Also, for each of the k groups formed by the independent variables, the covariance between any two dependent variables must be the same. When sample sizes are unequal, tests of group differences (Wilks, Hotelling, Pillai-Bartlett, GCR) are not robust when this assumption is violated. Pillai-Bartlett trace was found to be more robust than the alternatives when this assumption was violated but sample sizes of the groups were equal (Olson, 1976).

Box's M: Box's M tests MANOVA's assumption of homoscedasticity using the F distribution. If p(M) < .05, then the covariances are significantly different. Thus we want M not to be significant, failing to reject the null hypothesis that the covariances are homogeneous. That is, the probability value of this F should be greater than .05 to demonstrate that the assumption of homoscedasticity is upheld. Note, however, that Box's M is extremely sensitive to violations of the assumption of normality, making the test less useful than might otherwise appear. For this reason, some researchers test at the p = .001 level, especially when sample sizes are unequal.

Levene's Test. SPSS also outputs Levene's test as part of MANOVA. This is discussed in the section on ANOVA. If Levene's test is significant, then the data fail the assumption of equal group variances.

Homogeneity of regression. The covariate coefficients (the slopes of the regression lines) are the same for each group formed by the categorical variables and measured on the dependents. The more this assumption is violated, the more conservative MANCOVA becomes (the more likely it is to make Type II errors: accepting a false null hypothesis). When running a MANCOVA model in SPSS, include in the model options the interactions between the covariate(s) and each independent variable -- any significant interaction effects indicate that the assumption of homogeneity of regression coefficients has been violated. See the discussion in the section on testing assumptions.

Sphericity. In a repeated measures design, the univariate ANOVA tables will not be interpreted properly unless the variance/covariance matrix of the dependent variables is circular in form (see Huynh and Mandeville, 1979). When there is a violation of this assumption, a common option then is to focus on the multivariate (simultaneous) approach to gauging effects.

Bartlett's and Mauchly's tests of sphericity. The researcher wants the test not to be significant, indicating insufficient evidence to conclude that the sphericity assumption is violated.

Multivariate normal distribution. For purposes of significance testing, variables follow multivariate normal distributions. In practice, it is common to assume multivariate normality if each variable considered separately follows a normal distribution. MANOVA is robust in the face of most violations of this assumption if sample size is not small (ex., <20).

No outliers. MANCOVA is highly sensitive to outliers in the covariates.

Covariates are linearly related, or in a known relationship, to the dependents. The form of the relationship between the covariate and the dependent must be known, and most computer programs assume this relationship is linear, adjusting the dependent mean based on linear regression. Scatterplots of the covariate and the dependent for each of the k groups formed by the independents are one way to assess violations of this assumption. Covariates may be transformed (ex., log transform) to establish a linear relationship.
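For instance, a log transform of a covariate might be computed in SPSS as follows (pretest is a hypothetical covariate name; LN requires positive values):

COMPUTE logpretest = LN(pretest).
EXECUTE.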

Note that MANOVA itself does not assume linear relationships among the independents and can handle interaction effects.

See also: assumptions sections for ANOVA and ANCOVA.



SPSS Output Examples, with Commentary
MANOVA
MANCOVA



Frequently Asked Questions
Why can't I just use multiple univariate ANOVA tests rather than MANOVA, one for each dependent variable in my set?
How many dependents can I have in MANOVA?
Explain the syntax for MANOVA in SPSS.
What is analysis of residuals for in MANOVA?
Is there a limit on the number of covariates which can be included in a multivariate analysis of variance?
What is step-down MANOVA?
What is the "protected F" or least significant difference (LSD) test in MANOVA?



Why can't I just use multiple univariate ANOVA tests rather than MANOVA, one for each dependent variable in my set?
If the dependent variables are uncorrelated with each other, a series of univariate ANOVA tests may be acceptable. Where the dependents are correlated (most of the time), MANOVA is superior. This is because ANOVA tests only differences in means, whereas MANOVA is sensitive not only to mean differences but also to the direction and size of correlations among the dependents. Put another way, MANOVA may find groups (ex., a treatment and a control group) to differ if they differ in the correlations among the dependents even though their means on the dependents are the same, whereas ANOVA would fail to reject the null hypothesis of no group differences.

How many dependents can I have in MANOVA?
There is no theoretical limit, but keep in mind that as one increases the number of dependents, interpretability declines, the likelihood of interactions arising by chance increases, and power is lost (that is, there is an increased likelihood of Type II errors - thinking you don't have something when you do).

Explain the syntax for MANOVA in SPSS.
Note: This has largely been replaced by GLM syntax.

MANOVA opinion1 opinion2 opinion3 BY EducLevel (1, 2) SESlevel (1, 3)
/ WSFACTORS = Ideology (3)
/ WSDESIGN = Ideology
/ PRINT = CELLINFO (MEANS)
/ DESIGN.
Line 1: The MANOVA command word is followed by the three variables opinion1, opinion2, and opinion3. These represent the three levels of the within-subjects factor Ideology. The BY keyword tells SPSS that what follows are the groups or between-subjects factors; in this case, EducLevel and SESlevel. Following each of the two between-subjects factors are two numbers between parentheses. SESlevel (1,3) simply means that the variable SESlevel has three levels coded as 1, 2, and 3. One may have no grouping variable and thus no BY clause.
Line 2: The slash mark indicates a subcommand. The WSFACTORS subcommand tells SPSS that there is one repeated factor called Ideology and that it has three levels (matching the three opinion measurements listed after the MANOVA keyword). This is needed by SPSS to interpret the list of dependent variables in line 1. The WSFACTORS subcommand follows the MANOVA command when there is a within-subjects factor, which is to say when there is a repeated measures design.
Line 3: The WSDESIGN subcommand tells SPSS to test the within-subjects hypotheses for repeated measures designs.
Line 4: The PRINT subcommand specifies the output. CELLINFO (MEANS) prints cell means and standard deviations used to evaluate patterns in the data. Many additional statistics could be requested.
Line 5: The DESIGN subcommand causes SPSS to test the between-subjects hypotheses.

The general MANOVA syntax, from the SPSS manual, is:

MANOVA
depvarlist [BY indvarlist (min,max) [indvarlist (min, max) ...]
[WITH covarlist]]
[/WSFACTORS = name (levels) name...]

[/{PRINT | NOPRINT} = [CELLINFO [(MEANS SSCP COV COR ALL)]]
[HOMOGENEITY [(BARTLETT COCHRAN BOXM ALL)]]
[SIGNIF (MULTIV UNIV AVERF AVONLY EFSIZE ALL)]]

[/OMEANS [VARIABLES(varlist)] [TABLES ([CONSTANT] [factor BY factor])]]

[/CONTRAST(factor) = {POLYNOMIAL[(#)] | SPECIAL(k1s + contrasts)}]
[/CONTRAST ...]

[/WSDESIGN = effect ...] [/DESIGN = effect ...]

Notes on Effects
Keywords: BY, W or WITHIN, MWITHIN
Varname(#): # = one of k-1 contrasts or one of k levels



What is analysis of residuals for in MANOVA?
A plot of standardized residual values against values expected by the MANOVA model tests the assumption of MANOVA that residuals are randomly distributed. If there are any observable systematic patterns, the model is questionable even if upheld by significance testing.

Is there a limit on the number of covariates which can be included in a multivariate analysis of variance?
Not really. Whereas SPSS limits you to 10 in ANOVA, the limit is 200 in MANOVA -- more than adequate for nearly all research situations. As one adds covariates, however, the likelihood of collinearity among the variables increases, with each additional covariate adding little to the percent of variance explained (R-squared) and making interpretation of the standard errors of the individual covariates difficult.

What is step-down MANOVA?
Step-down MANOVA, also called the Roy-Bargman Stepdown F test, is a conservative approach to significance testing of the main effects, designed to prevent the inflation of Type I errors (thinking you have something when you do not). Stepdown tests perform a univariate analysis of the significance of the first dependent, then test the second dependent controlling for the first as a covariate, and so on, sequentially rotating the dependents to the status of covariates. This process continues until one encounters a significant F for a variable, leading the researcher to conclude it is significantly related to the independent (classification) variables over and above the "covaried out" previously-tested dependent variables. Step-down MANOVA is recommended only when there is an a priori theoretical basis for ordering the dependent (criterion) variables. The order of the variables is critical to the results obtained by this method. In SPSS syntax mode, specify PRINT SIG(STEP) to obtain the stepdown F test. As of SPSS Version 10, it was not available in menu mode, but using GLM one could obtain identical results by conducting multiple runs, each time with a different dependent as covariate.
In SPSS, make sure that the dependents are entered in the desired order, then in the MANOVA syntax, enter PRINT SIGNIF(STEPDOWN) or simply PRINT SIG(STEP). For example:

manova
var1 var2 var3 BY gender(1,2)
/print signif(stepdown)
...

See James Stevens, Applied Multivariate Analysis for the Social Sciences, 2nd Edition.

What is the "protected F" or least significant difference (LSD) test in MANOVA? How does it relate to the use of discriminant analysis in MANCOVA?

In the second step of MANOVA, when one tests for specific group differences after having established the overall difference among groups using the F test, the most common method today relies on multiple discriminant analysis (MDA) and its associated significance tests (ex., Wilks, Hotelling, Pillai-Bartlett) as discussed above. However, the earlier method, following a significant MANOVA with a series of ANOVAs on each of the dependent variables, is still used. This traditional method is called the protected F test, the protected t test, or the least significant difference test. Using multiple univariate ANOVA tests at a nominal alpha significance level (ex., .05) is misleading -- the actual significance level will be much higher (that is, > .05), affording less protection against Type I errors (thinking you have something when you don't) than the researcher may be assuming. For this reason, the protected F or LSD method is no longer recommended.
Using discriminant analysis, the MANOVA dependents are used as predictor variables to classify the factor (treatment) variable, and the discriminant beta weights are used to assess the relative strength of relation of the dependents to the factor. The beta weights indicate the strength of relation of a given dependent controlling for all other dependents.
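Such a follow-up discriminant analysis might be run in SPSS with syntax along these lines (a sketch; treatment is a hypothetical three-level factor and y1 to y3 stand for the MANOVA dependents):

DISCRIMINANT
/GROUPS = treatment(1,3)
/VARIABLES = y1 y2 y3.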





Bibliography
Bray, James H. and Scott E. Maxwell (1985). Multivariate analysis of variance. Quantitative applications in the social sciences series #54. Thousand Oaks, CA: Sage Publications.

Gill, Jeff (2001). Generalized Linear Models: A Unified Approach. Thousand Oaks, CA: Sage Publications. Series: Quantitative Applications in the Social Sciences, No. 134. A mathematical overview of GLM.

Hand, D.J. and C. C. Taylor (1987). Multivariate analysis of variance and repeated measures. London: Chapman and Hall.

Huynh, H. and G. K. Mandeville (1979). Validity conditions in a repeated measures design. Psychological Bulletin, Vol. 86: 964-973.

Nelder, J. A. and R. W. M. Wedderburn (1972). Generalized linear models. Journal of the Royal Statistical Society, A, 135: 370-384. This is the seminal article on GLM.

Olson, C. L. (1976). On choosing a test statistic in multivariate analyses of variance. Psychological Bulletin, Vol. 83: 579-586. Olson's tests showed Pillai-Bartlett trace to be more robust than Wilks's U.

2006-07-30 15:34:01 · answer #6 · answered by Anonymous · 0 0
