Factor analysis is an interdependence technique whose primary purpose is to define the underlying structure among the variables in the analysis. Obviously, variables play a key role in any multivariate analysis. Whether we are making a sales forecast with regression, predicting success or failure of a new firm with discriminant analysis, or using any other multivariate technique, we must have a set of variables upon which to form relationships (e.g., What variables best predict sales or success/failure?). As such, variables are the building blocks of relationships.

As we employ multivariate techniques, by their very nature the number of variables increases. Univariate techniques are limited to a single variable, but multivariate techniques can have tens, hundreds, or even thousands of variables. But how do we describe and represent all of these variables? Certainly, if we have only a few variables, they may all be distinct and different. As we add more and more variables, more and more overlap (i.e., correlation) is likely among the variables. In some instances, such as when we are using multiple measures to overcome measurement error by multivariable measurement, the researcher even strives for correlation among the variables. As the variables become correlated, the researcher needs ways in which to manage them: grouping highly correlated variables together, labeling or naming the groups, and perhaps even creating a new composite measure that can represent each group of variables.

We introduce factor analysis as our first multivariate technique because it can play a unique role in the application of other multivariate techniques. Broadly speaking, factor analysis provides the tools for analyzing the structure of the interrelationships (correlations) among a large number of variables (e.g., test scores, test items, questionnaire responses) by defining sets of variables that are highly interrelated, known as factors. These groups of variables (factors), which are by definition highly intercorrelated, are assumed to represent dimensions within the data. If we are concerned only with reducing the number of variables, then the dimensions can guide the creation of new composite measures. However, if we have a conceptual basis for understanding the relationships between variables, then the dimensions may actually have meaning for what they collectively represent. In the latter case, these dimensions may correspond to concepts that cannot be adequately described by a single measure (e.g., store atmosphere is defined by many sensory components that must be measured separately but are all interrelated). We will see that factor analysis presents several ways of representing these groups of variables for use in other multivariate techniques.

We should note at this point that factor analytic techniques can achieve their purposes from either an exploratory or confirmatory perspective. A continuing debate concerns the appropriate role for factor analysis. Many researchers consider it only exploratory, useful in searching for structure among a set of variables or as a data reduction method. In this perspective, factor analytic techniques "take what the data give you" and do not set any a priori constraints on the estimation of components or the number of components to be extracted. For many, if not most, applications, this use of factor analysis is appropriate.
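Before turning to the confirmatory case, a minimal sketch of this exploratory, "take what the data give you" use is given below, under stated assumptions: the six observed variables, the two hypothetical latent dimensions ("service" and "atmosphere"), and the choice of scikit-learn's FactorAnalysis with a varimax rotation are all illustrative and not part of the original discussion.

import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 500
service = rng.normal(size=n)      # hypothetical latent dimension 1
atmosphere = rng.normal(size=n)   # hypothetical latent dimension 2

# Each observed variable loads mainly on one latent dimension plus noise.
X = np.column_stack([
    service + 0.3 * rng.normal(size=n),
    service + 0.3 * rng.normal(size=n),
    service + 0.3 * rng.normal(size=n),
    atmosphere + 0.3 * rng.normal(size=n),
    atmosphere + 0.3 * rng.normal(size=n),
    atmosphere + 0.3 * rng.normal(size=n),
])

# The correlation matrix shows two blocks of highly intercorrelated variables.
print(np.round(np.corrcoef(X, rowvar=False), 2))

# A two-factor solution recovers the grouping: each variable loads strongly
# on one factor and near zero on the other.
fa = FactorAnalysis(n_components=2, rotation="varimax", random_state=0)
fa.fit(X)
print(np.round(fa.components_.T, 2))  # rows = variables, columns = factors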
However, in other situations, the researcher has preconceived thoughts on the actual structure of the data, based on theoretical support or prior research. For example, the researcher may wish to test hypotheses involving issues such as which variables should be grouped together on a factor or the precise number of factors. In these instances, the researcher requires that factor analysis take a confirmatory approach, that is, assess the degree to which the data meet the expected structure. The methods we discuss in this chapter do not directly provide the necessary structure for formalized hypothesis testing. In this chapter, we view factor analytic techniques principally from an exploratory or nonconfirmatory viewpoint.

Factor analysis is a very broad term and does not really represent a unitary concept. It generally refers to a set of statistical procedures, all of which function to locate a small number of dimensions, clusters, or factors in a larger set of independent variables or items. The primary distinctive element of factor analysis is data reduction. In factor analysis, we start with a large set of variables; variables that correlate highly with each other are identified as representing a single factor, and variables that do not correlate with each other are identified as representing orthogonal (or independent) factors. The ideal factor analysis would identify a small number of factors which are orthogonal to each other; that is, in spatial terms, they would lie at right angles to each other when graphed (Reber and Reber 2001). All procedures for factor analysis require the same basic kind of input data, namely a correlation matrix, although a few procedures can also work from a covariance matrix. The principal component (Hotelling) and principal axes (Kelley) methods are mathematically more rigorous and can be applied with greater objectivity. These methods can account for the obtained scores and intercorrelations but are difficult to interpret psychologically. The summation method has arbitrary restrictions, such as the requirement of a g factor.

The main objective of principal components analysis is to form new variables that are linear combinations of the original variables. The new variables are referred to as the principal components and are uncorrelated with each other. Furthermore, the first principal component accounts for the maximum variance in the data, the second principal component accounts for the maximum of the variance that has not been accounted for by the first principal component, and so on. It is hoped that only a few principal components will be needed to account for most of the variance in the data. Consequently, the researcher needs to use only a few principal components rather than all of the variables. Therefore, principal components analysis is commonly classified as a data-reduction technique.

The results of principal components analysis can be affected by the type of data used (i.e., mean-corrected or standardized). If mean-corrected data are used, then the relative variances of the variables have an effect on the weights used to form the principal components. Variables that have a high variance relative to other variables will receive a higher weight, and vice versa. To avoid the effect of the relative variances on the weights, one can use standardized data. A number of statistical packages are available for performing principal components analysis. Hypothetical and actual data sets were used to demonstrate interpretation of the resulting output from SAS and to discuss various issues that arise when using principal components analysis. The next chapter discusses factor analysis. As was pointed out earlier, principal components analysis is often confused with factor analysis.
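The scaling point above can be illustrated with a short sketch. The three variables and their variances are hypothetical, and scikit-learn's PCA and StandardScaler stand in for the SAS output discussed in the original text; running PCA on standardized data is equivalent to extracting components from the correlation matrix rather than the covariance matrix.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 300
age = rng.normal(40, 5, n)              # modest variance
income = rng.normal(50_000, 15_000, n)  # very large variance
rating = rng.normal(3.5, 0.8, n)        # small variance
X = np.column_stack([age, income, rating])

# Mean-corrected data: PCA centers the variables but keeps their raw scales,
# so the high-variance variable dominates the first component's weights.
pca_raw = PCA(n_components=2).fit(X)
print(np.round(pca_raw.components_[0], 3))
print(np.round(pca_raw.explained_variance_ratio_, 3))

# Standardized data: every variable is rescaled to unit variance first, so
# the weights reflect correlation structure rather than measurement units.
pca_std = PCA(n_components=2).fit(StandardScaler().fit_transform(X))
print(np.round(pca_std.components_[0], 3))
print(np.round(pca_std.explained_variance_ratio_, 3))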

Exploratory factor analysis (EFA) can be a highly useful and powerful multivariate statistical technique for extracting information from large bodies of interrelated data. When variables are correlated, the researcher needs ways to manage them: grouping highly correlated variables together, labeling or naming the groups, and perhaps even creating a new composite measure that can represent each group of variables. The primary purpose of exploratory factor analysis is to define the underlying structure among the variables in the analysis. As an interdependence technique, factor analysis attempts to identify groupings among variables (or cases) based on relationships represented in a correlation matrix. It is a powerful tool for understanding the structure of the data, and it can also simplify the analysis of a large set of variables by replacing them with a smaller number of composite variables, as sketched below. When it works well, it points to interesting relationships that might not have been obvious from examination of the raw data alone, or even of the correlation matrix.
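As a brief sketch of the composite-variable idea, assume the three hypothetical items below have already been grouped on a single factor; their mean forms a summated scale that can replace them in regression, discriminant analysis, or other downstream techniques. The items and the simple averaging rule are illustrative assumptions, not part of the original text.

import numpy as np

rng = np.random.default_rng(2)
n = 200
latent = rng.normal(size=n)
# Three hypothetical items that a factor solution has placed together.
items = np.column_stack([latent + 0.4 * rng.normal(size=n) for _ in range(3)])

# Simplest composite: the mean of the items loading on the same factor.
composite = items.mean(axis=1)

# The composite correlates strongly with each original item, confirming that
# one variable now summarizes the group.
for j in range(items.shape[1]):
    print(round(float(np.corrcoef(composite, items[:, j])[0, 1]), 2))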
