Discussion:
Data Reduction/Factor Extraction on SPSS 12
Sonny
2006-07-14 21:03:51 UTC
Hi there,

This is probably one of the most common data analysis tasks for many of
you. I searched all over the Internet for a tutorial or a description of
the procedure, but couldn't find one. I hope someone here can help.

Anyway, let's say I have eight survey items, ATT1-ATT8 (all on 7-point
Likert scale), each measuring an aspect of the subject's attitude towards
something. I have >130 cases (responses). I want to collapse all the items
into one "Attitude" construct, so that it can be used in my regression model
as an independent variable.

Therefore,
Q1) in SPSS --> Analyze --> Data Reduction --> Factor, under Extraction,
should I use Maximum likelihood or Principal component method (or others)?
Q2) Should I extract factors with Eigenvalues over 1, or should I specify to
extract only 1 factor?
Q3) And do I need to use any rotation method under Rotation?
Q4) How do I instruct SPSS to output a factor loading table like this one?

Attitude Construct
----------------------------------
ATT1 loading score 1
ATT2 loading score 2
...
ATT8 loading score 3
----------------------------------

Thanks,
Sonny
Richard Ulrich
2006-07-14 22:29:33 UTC
Post by Sonny
Hi there,
This is probably one of the most common data analysis tasks for many of
you. I searched all over the Internet for a tutorial or a description of
the procedure, but couldn't find one. I hope someone here can help.
Anyway, let's say I have eight survey items, ATT1-ATT8 (all on 7-point
Likert scale), each measuring an aspect of the subject's attitude towards
something. I have >130 cases (responses). I want to collapse all the items
into one "Attitude" construct, so that it can be used in my regression model
as an independent variable.
If you know you want to collapse the variables into one
overall factor, you can do that by computing an average
(or total) score.

If you want to document the internal reliability of that factor,
you can use the Reliability procedure.
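In syntax that could look something like the following - a minimal
sketch only, assuming the items really are named ATT1 to ATT8, sit
next to each other in the file (so that TO works), and are all keyed
in the same direction:

* Internal consistency (Cronbach's alpha) for the eight items.
RELIABILITY
  /VARIABLES=ATT1 ATT2 ATT3 ATT4 ATT5 ATT6 ATT7 ATT8
  /SCALE('Attitude') ALL
  /MODEL=ALPHA
  /SUMMARY=TOTAL.

* Simple mean score, requiring at least 6 of the 8 items answered.
COMPUTE Attitude = MEAN.6(ATT1 TO ATT8).
EXECUTE.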

The main purpose of Factor Analysis is to find multiple
factors from an unknown scale, or to check that the structure
in this sample is not radically different from what is
expected. You can use most of the defaults, and look at
the output while matching up to what you read in texts.
Varimax is easy and popular for the rotation. If the data
that you have are sufficiently one-dimensional, the default
will give just one factor (with no rotation, then).

The factor structure matrix shows the correlations of the
items with the factors, and that is the usual one to read.
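Pasted from those menus, the default analysis - plus a Varimax
rotation and a sorted loading table - would look roughly like the
sketch below. MINEIGEN(1) is the eigenvalues-over-1 default you ask
about in Q2, and BLANK(.30) merely hides small loadings so that the
table comes out looking like the one in Q4 (the .30 cutoff is my own
arbitrary choice):

FACTOR
  /VARIABLES ATT1 TO ATT8
  /PRINT INITIAL EXTRACTION ROTATION
  /FORMAT SORT BLANK(.30)
  /CRITERIA MINEIGEN(1)
  /EXTRACTION PC
  /ROTATION VARIMAX.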
Post by Sonny
Therefore,
Q1) in SPSS --> Analyze --> Data Reduction --> Factor, under Extraction,
should I use Maximum likelihood or Principal component method (or others)?
Q2) Should I extract factors with Eigenvalues over 1, or should I specify to
extract only 1 factor?
Q3) And do I need to use any rotation method under Rotation?
Q4) How do I instruct SPSS to output a factor loading table like this one?
--
Rich Ulrich, ***@pitt.edu
http://www.pitt.edu/~wpilib/index.html
Sonny
2006-07-14 23:23:44 UTC
Thanks Richard. That was very helpful.

I must say I have not seen many studies use just the average to extract the
underlying factor, probably because most studies usually have other latent
factors at the same time too.

What if I have additional eight items on their attitude towards something
else then, and the two attitude constructs are somewhat related to each
other? In this case I know (or hope) the items will collapse into two
factors.

Should I use Principal component or Maximum likelihood? Should I specify how
many factors to extract (e.g. 2 in this case), to make sure the factors
extracted are indeed what I need? And the loading scores I should report are
still the correlation coefficients with the factors, right?

Sonny
Gottfried Helms
2006-07-15 06:31:15 UTC
Post by Sonny
Thanks Richard. That was very helpful.
I must say I have not seen many studies use just the average to extract the
underlying factor, probably because most studies usually have other latent
factors at the same time too.
Search for the keyword "parceling", for instance. It means making
a packet of all the indicators, simply by summing their raw values,
their mean deviates, their z-scores or something very similar.
There have been discussions about the pros and cons of "parceling"
compared with using the original items in a factor analysis
(I think that applies to regression as well).

In my view, if a factor-analysis program is readily available,
then using the original items instead of a parcel is the
superior approach: "parceling" essentially means using the
centroid factor, which is sensitive to items being keyed in
opposite directions and thus correlating positively and
negatively with the underlying dimension of interest.

This is a very basic issue, which can be overcome simply by using
the principal component/principal factor instead.

If two or more dimensions are expected in the scale, then a
rotation with the quartimax or varimax criterion should be
considered, so that the different dominant factors are best
attached to their main indicators - in terms of "parceling"
this would mean separating the set of items into two or more
lists and parceling these lists separately as different sum
scores, which indicate the different factors.
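A small sketch of the reverse-keying point in SPSS syntax - purely
for illustration, assuming 7-point items, that ATT3 happens to be a
negatively worded item, and that the second set of items is called
B1 to B8 (these are not names from the actual survey):

* Reverse a negatively worded 7-point item before parceling.
COMPUTE ATT3R = 8 - ATT3.
* Build one parcel (mean score) per presumed dimension.
COMPUTE att_parcel = MEAN(ATT1, ATT2, ATT3R, ATT4, ATT5, ATT6, ATT7, ATT8).
COMPUTE b_parcel = MEAN(B1, B2, B3, B4, B5, B6, B7, B8).
EXECUTE.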
Post by Sonny
What if I have additional eight items on their attitude towards something
else then, and the two attitude constructs are somewhat related to each
other? In this case I know (or hope) the items will collapse into two
factors.
This is addressed by what I said above...
Post by Sonny
Should I use Principal component or Maximum likelihood?
Well, the concepts are different, and the differences are subtle
and important to understand. It depends on what you are doing
and expecting - but those are too many aspects to discuss here in a
short reply (at least for me, today).
With "maximum likelihood" you need maximum likelihood, but in
principle "principal components" can be appropriate too - it
principally depends on your theory about the data.... <eg>
Post by Sonny
Should I specify how
many factors to extract (e.g. 2 in this case), to make sure the factors
extracted are indeed what I need? And the loading scores I should report are
still the correlation coefficients with the factors, right?
Sonny
Well, if a student comes to me, manuscript in hand, and asks me
these questions expecting a quick, decisive answer, I usually start
by asking him or her to take a deep breath first ...
...and then sit down,
... then to forget these questions for a while,
... and then to explain to me the whats and the whys,
until he or she figures out for himself or herself
what he or she really wants and thinks about the theory
and the data.

So no quick answer here; I think a basic discussion should
come first.

(It's Saturday, so I'll post just this short comment.)

Regards -

Gottfried Helms
Sonny
2006-07-18 09:28:48 UTC
Thanks a lot everyone.

Upon further reading and thinking, I think this is what I am going to do:

I'll do a Principal Component analysis on all the items, using their
correlation matrix. I'll instruct SPSS to extract only the first two
principal components, since that is what the items are intended to
measure. I'll use an oblique rotation to align the components, since
I expect these two attitude constructs to be somewhat related to each
other. Hopefully the results will show that 1) much of the variance
in the original items has been accounted for by these two factors,
and 2) the items will load high on their own factor and low on the
other. If not, I'll check which items cross-load, remove those items,
and repeat the process if necessary until I have the two intended
factors. Then I'll compute factor scores using the regression or
Bartlett method for my next-step regression analysis.
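In syntax, I suppose that plan would look roughly like the sketch
below (my assumptions: the first item set really is ATT1 to ATT8,
the second set is called B1 to B8 here only as a placeholder,
OBLIMIN is the oblique rotation, and /SAVE writes regression-based
factor scores to the data file as new variables, FAC1_1 and FAC2_1
by default; BART(ALL) instead would give Bartlett scores):

FACTOR
  /VARIABLES ATT1 TO ATT8 B1 TO B8
  /PRINT INITIAL EXTRACTION ROTATION
  /FORMAT SORT BLANK(.30)
  /CRITERIA FACTORS(2)
  /EXTRACTION PC
  /ROTATION OBLIMIN
  /SAVE REG(ALL).

With an oblique rotation SPSS prints both a pattern and a structure
matrix; the structure matrix holds the item-factor correlations
mentioned earlier.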

What do you think?

Sonny
Gottfried Helms
2006-07-18 17:35:21 UTC
Post by Sonny
Thanks a lot everyone.
I'll do a Principal Component analysis on all the items, using their
correlation matrix. I'll instruct SPSS to extract only the first two
principal components, since that is what the items are intended to
measure. I'll use an oblique rotation to align the components, since
I expect these two attitude constructs to be somewhat related to each
other. Hopefully the results will show that 1) much of the variance
in the original items has been accounted for by these two factors,
and 2) the items will load high on their own factor and low on the
other. If not, I'll check which items cross-load, remove those items,
and repeat the process if necessary until I have the two intended
factors. Then I'll compute factor scores using the regression or
Bartlett method for my next-step regression analysis.
What do you think?
Yepp, this sounds like a good start.


The next question could be:
do you assume your data are measured error-free (or nearly
error-free)?
That is, did you measure with an instrument of *very* high
reliability (as, for instance, in electro-physics), or do
you have to assume that your data have itemspecific
variance due to (uncorrelated) random noise (which I
always assume in the social sciences)?

If the data can be assumed to be (nearly) error-free,
a PC extraction of factors is meaningful; if each
variable can carry an uncorrelated error (like "noise"),
it seems more sensible to employ extraction procedures
which respect such itemspecific error, like the PAF or
ML extraction canned in SPSS.

Second: if you decide that your data have such error -
and thus lean towards PAF or ML -
do you plan to infer from your sample to the population?
If yes, ML is indicated, since for ML there exist
estimators with known statistical distributions,
and significance tests can be performed. But note:
they are only reliable if the data are multivariate
normally distributed - if not, significance results
are at least suspicious, if not useless, depending
on the degree of multivariate non-normality.

If you want to do even more, then check whether your
data are homogeneous, meaning there are no remarkable
clusters of types of answers. If there are such
clusters, a separate factor analysis within each
cluster could be more sensible.

If you want to infer to the population with significance
testing, then you should employ ML extraction, as
already said. But this means that you test a theory
about how many factors you expect and what their
composition in terms of variables is - a theory
which you already mention.
The significance test - as I understand things - is then
whether your model can be fitted to your data (but
I'm no expert on this question).
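In SPSS syntax the only change from a PC run is the extraction
keyword - a sketch, with the same placeholder variable names as
in the earlier sketches. As far as I know, with ML extraction
SPSS also prints a chi-square goodness-of-fit test of the
two-factor model, which is that significance test (and KMO adds
the sampling-adequacy measure and Bartlett's test of sphericity):

FACTOR
  /VARIABLES ATT1 TO ATT8 B1 TO B8
  /PRINT INITIAL EXTRACTION ROTATION KMO
  /CRITERIA FACTORS(2)
  /EXTRACTION ML
  /ROTATION OBLIMIN.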

Note that the "canned" procedures sometimes estimate the
factors such that Heywood cases occur: the itemspecific
variance of an item is driven to zero or below (a
communality of 1 or more), since there is no external
criterion for how high this variance should be estimated.
However, there are procedures to prevent this problem
(which I don't expect to occur if you only expect two
factors). But if it does happen (and it happens not seldom
in ML extraction), I could provide a procedure which is
stable in this sense. Also, ML's statistics may only be
computable if you can use a covariance matrix - check the
literature or ask for replies from more experienced
experts here in the newsgroup.

Possibly/surely/perhaps you want to relate your factors
to other items of your survey. The usual way is to
estimate factor scores and to perform the appropriate
test between such factor scores and the other items.
Here PC factors would be more convenient, since they
can be computed "exactly" from your data.
PAF and ML factor scores can only be estimated, for
instance by an implicit multiple-regression procedure,
and there are different philosophies for those
estimation procedures - estimated factor scores are not
unique and depend on the method of estimation.
Here a procedure would be useful which avoids the need
for factor scores and allows the intended items
(external to the factor analysis and its rotation
criteria) to be included in the same vector/factor
space without being used for the factor rotation -
to avoid the ambiguities with factor scores that stem
from the need for score estimation.
Depending on how sophisticated your analysis is, I
could supply an experimental program that includes such
items in this way and exports the loadings and
correlations into SPSS (email me then; its use is also
not very far developed yet).
I think in the literature such an approach is called
"extension analysis", but I have read about that only in
usenet posts.
...

Such...

...

.... are some more ideas; I hope they are helpful
and not frustrating.

Sorry for the long post -
couldn't resist -


Gottfried Helms
Gottfried Helms
2006-07-19 02:14:32 UTC
Post by Sonny
Thanks a lot everyone.
I'll do a Principal Component analysis on all the items, using their
correlation matrix. I'll instruct SPSS to extract only the first two
principal components, since that is what the items are intended to
measure. I'll use an oblique rotation to align the components, since
I expect these two attitude constructs to be somewhat related to each
other. Hopefully the results will show that 1) much of the variance
in the original items has been accounted for by these two factors,
and 2) the items will load high on their own factor and low on the
other. If not, I'll check which items cross-load, remove those items,
and repeat the process if necessary until I have the two intended
factors. Then I'll compute factor scores using the regression or
Bartlett method for my next-step regression analysis.
What do you think?
Well, the answer from Rich Ulrich reminds me that
my own (long) answer drifted a bit away from your
last question.

I understand you are in an exploratory situation,
looking for factors such that the two groups of
items are best separated, and also exploring which
items should be included. I think your idea to
explore it this way is OK so far, and certainly
interesting. But note that factors found by
arbitrarily removing and including items this
way may later be reported only as just that: a
pure heuristic of a - possibly - merely random
concert of items and sample features, yielding
only the idea for a hypothesis, which must be
investigated later with different datasets and
contrasted with competing models.

But you do seem to be in such an exploratory
situation, so I think your proposed approach is
meaningful and interesting.

Gottfried Helms

P.S.:

Concerning that exploration, I may add one comment:


For a flexible exploratory approach to such questions
I wrote a DOS-based program some years ago, which
allows one to find structures like yours by
interactively rotating and selecting the interesting
items, plus some enhancements.

The idea is to first include all the variables - the
items to be factored plus the (metric) items to which
the factors should later be related - in one common
vector space.

Say the loadings matrix looks like the following, with
unknown values indicated by *, unknown small loadings
by ., uninteresting loadings by ?, and zero loadings by - :

                     common       itemspecific
                     factors      variances
                     1 2 3 4 5    6 7 8 9
   ---------------------------------------------------
   co-variate items
   age               * * * * .    - - - - -
   scaleitems part a ---------------------------------
   it_a1             * . . . .    - * - - - - -
   it_a2             * * . . .    - - * - - - -
   it_a3             * . . . .    - - - * - - -
   scaleitems part b ---------------------------------
   it_b1             . * . . .    - - - - * - -
   it_b2             . * . . .    - - - - - * -
   it_b3             * * . . .    - - - - - - *

where the (metric) co-variate items are already
included and the computation of factor scores
is not needed.

Improving the varimax factors you are interested
in would then be done by "deactivating" the
cross-loading items, to find

                     common       itemspecific
                     factors      factors/variances
                     1 2 3 4 5    6 7 8 9 10 11
   ---------------------------------------------------
   co-variate items
   age               * * * ? *    - - - - - -
   scaleitems part a ---------------------------------
   it_a1             ** . . . .   - * - - - - -
   it_a2             ? ? . . .    - - * - - - -
   it_a3             ** . . . .   - - - * - - -
   scaleitems part b ---------------------------------
   it_b1             . ** . . .   - - - - * - -
   it_b2             . ** . . .   - - - - - * -
   it_b3             ? ? . . .    - - - - - - *

interactively, where the factors are rotated to
optimize for the "active" items only. This can be
improved interactively by simply activating/
deactivating the appropriate scale items and
re-rotating.
Unfortunately I couldn't implement oblique rotations
like promax and others when I wrote that program.
But see below for "oblique factors" as "latent variables".



The inclusion of the co-variate items, which do not
influence the rotation criteria, makes it possible to
see the correlations with the found factors in one
shot, without needing to estimate factor scores.

This concept of including the covariates in the
overall vector space also makes it possible to ensure
that the found factors are uncorrelated with an
itemspecific variance in the covariates, here "age",
as well - which could not be achieved with common
procedures, except by subsequent factor analyses of
the factor scores together with the covariates.

Since the program cannot do oblique rotations, it would
be possible instead to proceed from the above
configuration, find a principal component for the
interesting scale items "a" first, and include that as
a new "latent variable" in the set:

                     common       itemspecific
                     factors      factors/variances
                     1 2 3 4 5    6 7 8 9 10 11
   ---------------------------------------------------
   co-variate items
   age               * * * ? *    - - - - - -
   scaleitems part a ---------------------------------
   it_a1             ** . . . .   - * - - - - -
   it_a2             ? ? . . .    - - * - - - -
   it_a3             ** . . . .   - - - * - - -
   scaleitems part b ---------------------------------
   it_b1             ? ? . . .    - - - - * - -
   it_b2             ? ? . . .    - - - - - * -
   it_b3             ? ? . . .    - - - - - - *
   pc of common variance of "good" scaleitems a
   pc_a              1 - - - -    - - - - - - -

then do a new rotation for the principal component
of the interesting scale items "b" and add this
component as another new latent variable to the
set:

                     common       itemspecific
                     factors      factors/variances
                     1 2 3 4 5    6 7 8 9 10 11
   ---------------------------------------------------
   co-variate items
   age               * * * ? *    - - - - - -
   scaleitems part a ---------------------------------
   it_a1             ? ? . . .    - * - - - - -
   it_a2             ? ? . . .    - - * - - - -
   it_a3             ? ? . . .    - - - * - - -
   scaleitems part b ---------------------------------
   it_b1             ** - . . .   - - - - * - -
   it_b2             ** - . . .   - - - - - * -
   it_b3             ? ? . . .    - - - - - - *
   pc of common variance of "good" scaleitems a and b
   pc_a              ? ? - - -    - - - - - - -
   pc_b              1 - - - -    - - - - - - -

and the two new latent variables are representatives
of the oblique factors which would be found,
approximately, by an oblique rotation.
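For readers without my program, a rough approximation of these two
latent variables with standard SPSS commands could look like the
sketch below. It does rely on estimated factor scores - exactly the
ambiguity my approach tries to avoid - and it uses the "clean" item
subsets from the example (it_a1, it_a3 and it_b1, it_b2). I am also
assuming I remember the /SAVE rootname form correctly; otherwise the
default names FAC1_1 etc. serve the same purpose:

* First PC of the clean a-items, saved as factor score pca1.
FACTOR
  /VARIABLES it_a1 it_a3
  /CRITERIA FACTORS(1)
  /EXTRACTION PC
  /ROTATION NOROTATE
  /SAVE REG(1, pca).
* First PC of the clean b-items, saved as factor score pcb1.
FACTOR
  /VARIABLES it_b1 it_b2
  /CRITERIA FACTORS(1)
  /EXTRACTION PC
  /ROTATION NOROTATE
  /SAVE REG(1, pcb).
* Correlations of age with the two latent variables.
CORRELATIONS /VARIABLES=age pca1 pcb1.

With only two items per subset each component is essentially just
the standardized sum of its two items, but the same commands
generalize to longer item lists.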

Rotating to get the factor "age" into the first
column, for instance, to find the correlations of
the found factors with "age" (remember, an
itemspecific error in "age" was also excluded
from the beginning), gives something like

                     common       itemspecific
                     factors      factors/variances
                     1 2 3 4 5    6 7 8 9 10 11
   ---------------------------------------------------
   co-variate items
   age               1 - - - -    - - - - - -
   scaleitems part a ---------------------------------
   it_a1             ? ? . . .    ? * - - - - -
   it_a2             ? ? . . .    ? - * - - - -
   it_a3             ? ? . . .    ? - - * - - -
   scaleitems part b ---------------------------------
   it_b1             ** - . . .   ? - - - * - -
   it_b2             ** - . . .   ? - - - - * -
   it_b3             ? ? . . .    ? - - - - - *
   pc of common variance of "good" scaleitems a and b
   pc_a              x * - - -    ? - - - - - -
   pc_b              y * . - -    ? - - - - - -

The correlations of age with the latent variables
pc_a and pc_b (which represent oblique principal
factors of it_a1, it_a3 and of it_b1, it_b2
respectively) then appear as the values x and y in
the above table. This may be conceptually superior
to any factor solution from a canned procedure:

- the assumption of an itemspecific variance even in
the covariates can be respected when defining the
factors

- the oblique factors found this way have an intuitive
definition as principal components of the (interesting)
common variance of the separate subsets of items, and
are dealt with as "latent" variables, completely
analogously to any other item in this configuration.

- the correlations with the covariates need no
intermediate estimation of factor scores (in fact,
they behave as if the scores had been estimated by
regression, if that were done)


If -after this monster-post- you are still reading... :-)
and are interested in this type of factor-exploration,
you may email me and get the program from my server.

Gottfried Helms

Richard Ulrich
2006-07-18 22:49:51 UTC
Post by Sonny
Thanks a lot everyone.
I'll do a Principal Component analysis on all the items, using their
correlation matrix. I'll instruct SPSS to extract only the first two
principal components, since that is what the items are intended to
measure. I'll use an oblique rotation to align the components, since
I expect these two attitude constructs to be somewhat related to each
other. Hopefully the results will
I would use Varimax as first choice, because it usually works
well. I might try oblique if I was sure that my factors ought
to be rather *strongly* correlated. In that case, I would
experiment with the amount of correlation expressed by the
factors, and compare several solutions.
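If you do experiment along those lines: as far as I recall, the
degree of obliqueness is set with DELTA for OBLIMIN and KAPPA for
PROMAX on the /CRITERIA subcommand (check the syntax reference), so
comparing solutions is just a matter of re-running the rotation and
looking at the factor correlation matrices - a sketch, with the same
placeholder names as earlier in the thread:

FACTOR
  /VARIABLES ATT1 TO ATT8 B1 TO B8
  /PRINT ROTATION
  /CRITERIA FACTORS(2)
  /EXTRACTION PC
  /CRITERIA DELTA(0)
  /ROTATION OBLIMIN.

FACTOR
  /VARIABLES ATT1 TO ATT8 B1 TO B8
  /PRINT ROTATION
  /CRITERIA FACTORS(2)
  /EXTRACTION PC
  /CRITERIA KAPPA(4)
  /ROTATION PROMAX.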
Post by Sonny
show that 1) much of the variance in the original items has been
accounted for by these two factors, and 2) the items will load high
on their own factor and low on the other. If not, I'll check which
items cross-load, remove those items, and repeat the process if
necessary until I have the two intended factors. Then I'll compute
factor scores using the regression or Bartlett method for my
next-step regression analysis.
What do you think?
[snip, previous]

Likert items are conceived of as making up additive sums.
The simple way to score factors for scale items is still
to add or average the raw item scores; the complex way,
these days, is to use "scaling methods".
--
Rich Ulrich, ***@pitt.edu
http://www.pitt.edu/~wpilib/index.html