Does advertising stimulate sales or mainly deliver signals? A multivariate analysis

Maxwell K. Hsu, University of Wisconsin, USA
Ali F. Darrat, Louisiana Tech University, USA
Maosen Zhong, Kansas State University, USA
Salah S. Abosedra, University of Qatar, Doha

The main purpose of this article is to empirically examine the Galbraithian hypothesis that advertising adjusts aggregate demand to the changing industrial development and consequently stimulates sales. The causal relations between sales and advertising are tested in the context of a vector autoregressive system using the US aggregate data over the post-Second World War period. Our empirical results fail to support the Galbraithian thesis, and suggest instead the presence of a potent reverse causality running from aggregate sales to advertising. We use the signalling theory to interpret our results.

International Journal of Advertising, 21, pp. 175–195. © 2002 Advertising Association. Published by the World Advertising Research Center, Farm Road, Henley-on-Thames, Oxon RG9 1EJ, UK.

INTRODUCTION

Advertising expenditures in the USA have dramatically increased from about $31 billion in 1950 (or less than 2% of the nation’s Gross Domestic Product) to more than $168 billion in 1999 (approximately 2.5% of GDP; real figures are expressed in 1990 prices).


While several factors may account for such a phenomenal growth in advertising, a major culprit is likely to be the common belief, espoused by Galbraith (1967) and others, that advertising stimulates sales by altering consumers’ behaviour (see Balasubramanian & Kumar 1990). Galbraith (1967) contends that advertising is primarily a societal response to the needs of highly specialised technologies that require heavy investment which cannot be converted easily to other uses. Once such technology is in place, its maintenance depends on consumers’ demand. Advertising helps manage aggregate demand to fit the needs of industrialised economies. Since the development of the industrial system is rather slow and takes an extended period of time, the Galbraithian hypothesis predicts that the relationship between advertising and aggregate sales is of a long-term nature.

Another related issue is that disposable income might intervene between advertising and personal consumption expenditures (Jacobson & Nicosia 1981). When disposable income rises, firms/advertisers will attempt to attract more household consumption and hence intensify their advertising campaigns. Thus, the role of disposable income should be accounted for when studying the dynamic relationship between advertising and aggregate sales (consumption). Hence, Galbraith’s (1967) hypothesis implies that higher personal income from industrial development yields more advertising, which in turn leads to increased aggregate consumption. However, empirical research does not provide unequivocal support for this proposition (e.g. Solow 1967; Schmalensee 1972; Ashley et al. 1980; Sturgess 1982; Duffy 1991; Chowdhury 1994).

Besides the advertising-to-sales nexus, another plausible presumption contends that sales drive advertising rather than vice versa. Researchers who subscribe to this sales-to-advertising linkage include Nelson (1975), Kihlstrom & Riordan (1984), Milgrom & Roberts (1986), Tellis & Fornell (1988), and Abe (1995). Indeed, such a chain of causation is consistent with an advertising–sales ratio rule whereby the advertising budget is decided as a percentage of sales (see Shimp 1997).

In this article, we use recent econometric techniques to investigate the aggregate relationship between advertising and sales in the USA over the period 1948 to 1995. We investigate the long-term nature of this relationship using the Johansen (1991) cointegration test. Contrary to the Galbraithian hypothesis, we do not find any long-term relationship binding advertising, aggregate sales and disposable income.


Results from the Granger-causality tests identify a unidirectional causal relationship running from aggregate sales to advertising, but no feedback is found. This finding appears inconsistent with the Galbraithian argument, and we provide a plausible explanation – signalling theory – to interpret the results.

The rest of the article is structured as follows. The second section reviews the literature on the relationship between advertising and sales; the third section introduces the data and illustrates the methodology; the fourth section presents the empirical results; the fifth section attempts to provide a logical interpretation linking the findings to the signalling theory; and the sixth section concludes the study.

LITERATURE REVIEW

The relationship between advertising and sales (consumption) has received considerable attention in the marketing literature. Zanias (1994) observes that a large number of studies on the advertising–sales nexus may be categorised into two groups. The first group uses the regression approach to model advertising expenditure as an explanatory variable with appropriate dynamics, while the second group of studies employs a Box–Jenkins time-series model. In terms of the levels of aggregation, researchers use different data ranging from the firm level (e.g. Leone 1983) to the national level (e.g. Jacobson & Nicosia 1981; Chowdhury 1994). Although disaggregated data at the firm or industry level may be better suited for studying the advertising–sales relationship for specific brands and/or product categories, we believe that the aggregate data used in this article have the advantage of providing more stable results.2

2 For example, a McGraw-Hill (1969) study shows that, in terms of the advertising-to-sales ratios, the variation among firms in the same industry is almost as much as between different industries. Note, however, that aggregate data do not provide information on advertising and sales at different stages of the product life cycle.

Using Australia’s data, Metwally & Tamaschke (1981) find evidence to support the Galbraithian hypothesis, and conclude that advertising intensity has a positive and significant effect on the propensity to consume. Peel (1975) also reports similar evidence for the UK, but Sturgess (1982) derives contradictory results from similar data.


As for the US data, Schmalensee (1972) reports evidence in support of the notion that sales influence advertising expenditures. Ashley et al. (1980) employ the causality approach and confirm Schmalensee’s earlier evidence for the US data. Reference should also be made to Chowdhury (1994), who applies cointegration and causality tests to British data to investigate the relationship between advertising and six other macroeconomic variables, including sales. He fails to support any reliable relationship between advertising and sales. Unlike Chowdhury, we examine in this article the relationship between advertising and sales in the context of trivariate (as opposed to bivariate) models against the US (not the UK) data.

DATA AND METHODOLOGY

Our data are annual for three US aggregate variables covering the period 1948 to 1995. Specifically, the variables are aggregate advertising expenditures (A) obtained from the Direct Marketing Association’s Statistical Fact Book; aggregate sales (S) measured by personal consumption expenditures and obtained from the S&P/DRI Database; and personal disposable income (I) culled from various issues of the Statistical Abstracts of the United States. All three variables are measured by per capita figures since the hypothesised relationships have a clear microeconomic foundation. The variables are also expressed in natural logarithms to mitigate any heteroscedasticity problems, and expressed in real terms (deflated by the consumer price index) to eliminate possible inflation noise. Necessary data on population and prices are also obtained from the S&P/DRI Database.

Given the long-term nature of the advertising–sales relationship as predicted by Galbraith’s hypothesis, cointegration analysis is well suited for this type of research. Recently, Grewal et al. (2001) urged the application of the cointegration approach to marketing issues and suggested that this relatively new technique is ‘an intriguing development for analyzing marketing interactions in dynamic environments’ (p. 127). Details of the cointegration approach and all other empirical procedures used in this article are given in the appendix.

Galbraith postulates that increased advertising leads to increased sales, and also that personal disposable income stimulates advertising. The opposing hypothesis states that sales propagate advertising expenditures.


We test these two competing hypotheses simultaneously using a three-variable vector autoregressive (VAR) modelling procedure. Thus, we examine the following VAR model consisting of real advertising per capita (At), real sales per capita (St), and per-capita real disposable income (It):3

$$\begin{bmatrix} A_t \\ S_t \\ I_t \end{bmatrix} = \begin{bmatrix} \phi_1 \\ \phi_2 \\ \phi_3 \end{bmatrix} + \begin{bmatrix} \gamma_{11}(L) & \gamma_{12}(L) & \gamma_{13}(L) \\ \gamma_{21}(L) & \gamma_{22}(L) & \gamma_{23}(L) \\ \gamma_{31}(L) & \gamma_{32}(L) & \gamma_{33}(L) \end{bmatrix} \begin{bmatrix} A_t \\ S_t \\ I_t \end{bmatrix} + \begin{bmatrix} \varepsilon_{1t} \\ \varepsilon_{2t} \\ \varepsilon_{3t} \end{bmatrix} \qquad (1)$$

where the φi and γij(L) are coefficients to be estimated (i, j = 1, 2, 3); the γij(L) are lag polynomials, γ(L) = γ1L + γ2L² + … + γnLⁿ; and the εit denote white-noise residuals. The two competing hypotheses can be tested using the well-known Granger-causality concept. The null hypothesis that sales unidirectionally Granger-cause advertising is consistent with the parameter restrictions γ12(L) ≠ 0 and γ21(L) = 0. On the other hand, the Galbraithian argument postulates instead that causality runs from advertising to sales, and also from personal disposable income to advertising. The implied restrictions are γ21(L) ≠ 0 and γ13(L) ≠ 0.

EMPIRICAL RESULTS

A necessary prelude to any careful time-series estimation is to check the stationarity of the variables in the model. To that end, we use the Augmented Dickey–Fuller (ADF), the Phillips–Perron (PP) and the Weighted Symmetric (WS) procedures to test for non-stationarity, and determine the lag structure in the tests by the Akaike Information Criterion (AIC). We employ several testing procedures to ensure the robustness of our stationarity inferences. The test results, given in Table 1, suggest the absence of a unit root for all three variables only when they are expressed in first-differences. Since all three variables prove first-difference stationary, it is possible that they are also cointegrated.

3 If there exists cointegration among the three variables in the VAR, one should construct a vector error-correction model by adding the lagged residuals from the cointegration vector(s) to the VAR model, equation (1). However, as discussed below, we find no cointegration among the three variables of the model.
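To illustrate, a minimal sketch of how a system like equation (1) could be estimated and the zero restrictions tested is given below. It is not the authors’ original code: it assumes the Python statsmodels library, a hypothetical data file, and column names A, S and I for the log real per-capita series, and it imposes a common lag order chosen by AIC rather than the equation-specific FPE lags used later in the article.

```python
# Minimal sketch (not the authors' code): estimate a trivariate VAR in first
# differences and test the Granger-causality restrictions of equation (1).
# The file name and column names are illustrative assumptions.
import pandas as pd
from statsmodels.tsa.api import VAR

df = pd.read_csv("us_annual_1948_1995.csv", index_col="year")   # hypothetical file
data = df[["A", "S", "I"]].diff().dropna()    # differenced log real per-capita series

results = VAR(data).fit(maxlags=3, ic="aic")  # lag order chosen by AIC, up to 3 lags

# H0: sales do not Granger-cause advertising (gamma_12(L) = 0)
print(results.test_causality(caused="A", causing=["S"], kind="wald").summary())
# H0: advertising does not Granger-cause sales (gamma_21(L) = 0)
print(results.test_causality(caused="S", causing=["A"], kind="wald").summary())
# H0: disposable income does not Granger-cause advertising (gamma_13(L) = 0)
print(results.test_causality(caused="A", causing=["I"], kind="wald").summary())
```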


TABLE 1  STATIONARITY TEST RESULTS

                              ADF (L)        PP (L)          WS (L)
A. Variables in levels
  At                          –1.61 (3)      –1.93 (3)        0.81 (3)
  St                          –1.33 (3)      –0.47 (3)        0.40 (3)
  It                          –0.92 (2)      –0.73 (2)        0.31 (2)

B. Variables in first-differences
  ∆At                         –4.25 (2)*    –44.45 (2)*      –4.34 (2)*
  ∆St                         –5.23 (3)*    –44.66 (3)*      –5.40 (3)*
  ∆It                         –3.43 (2)*    –29.61 (3)*      –4.03 (2)*

Notes: At is log advertising expenditure per capita, St is log sales per capita, and It is log personal disposable income; ∆ denotes the first-difference operator. ADF is the Augmented Dickey–Fuller test, PP is the Phillips–Perron test, WS is the Weighted Symmetric test, and L denotes the proper lag structure based on the AIC criterion. An asterisk (*) indicates rejection of the null hypothesis of non-stationarity at the 5% level of significance. A time trend was included in the testing equations whenever it proved statistically significant.
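The ADF column of Table 1 can be reproduced in spirit with the sketch below, which assumes the same DataFrame `df` of log level series introduced above. The PP and WS tests are not part of statsmodels (the separate arch package offers a Phillips–Perron test), so only the ADF statistics are illustrated here.

```python
# Sketch of the ADF portion of Table 1: each series in levels and in first
# differences, with the augmentation lag chosen by AIC (as in the article).
from statsmodels.tsa.stattools import adfuller

for name in ["A", "S", "I"]:
    level = df[name].dropna()
    for label, series in [("level", level), ("first difference", level.diff().dropna())]:
        stat, pvalue, usedlag, nobs, crit, _ = adfuller(series, autolag="AIC")
        print(f"{name} ({label}): ADF = {stat:.2f}, lags = {usedlag}, "
              f"5% critical value = {crit['5%']:.2f}")
```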

We use the Johansen (1991) multivariate procedure, along with Reimers’ (1992) correction for possible finite-sample bias, to test for cointegration among the three variables in the model. Results displayed in Table 2 indicate that there exists no significant cointegration relationship among advertising, sales and disposable income, even at the weak 90% level. This evidence against cointegration persists when using different lags. Since cointegration reflects long-term relations, the Johansen test results against cointegration imply that the three variables can only be related over the short term.

TABLE 2  THE JOHANSEN COINTEGRATION TESTS

Maximal eigenvalue: $\lambda\text{-max} = -(T - pk)\,\ln(1 - \hat{\lambda}_{r+1})$
Trace: $\text{Trace} = -(T - pk)\sum_{i=r+1}^{p} \ln(1 - \hat{\lambda}_{i})$   (r = 0, 1, …, p)

Null            λ-max     CV 90%          Null            Trace     CV 90%
H0: r = 0       11.54     19.77           H0: r = 0       25.87     32.00
H0: r = 1        8.48     13.75           H0: r ≤ 1       14.34     17.85
H0: r = 2        5.84      7.52           H0: r ≤ 2        5.84      7.52

Notes: Both the maximal-eigenvalue and trace statistics are adjusted for small-sample bias using the Reimers (1992) method. T is the sample size, p is the number of variables (= 3 in our case), k is the number of lags (= 3) determined by AIC at which there is no autocorrelation in the VAR model, and r is the hypothesised number of cointegrating vectors.
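A rough sketch of how the statistics in Table 2 could be computed with statsmodels follows. The Reimers small-sample scaling is applied by hand, approximately, since the library reports unadjusted statistics; the variable `levels` (a T × 3 array of the log level series, ordered A, S, I) is an assumption.

```python
# Sketch of the Johansen trace and maximal-eigenvalue tests of Table 2.
import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen

k = 3                                    # VAR lag order (AIC choice in the article)
res = coint_johansen(levels, det_order=0, k_ar_diff=k - 1)

T, p = levels.shape
scale = (T - p * k) / T                  # crude Reimers-style (T - pk)/T adjustment
print("adjusted trace:  ", np.round(scale * res.lr1, 2), " 90% cv:", res.cvt[:, 0])
print("adjusted max-eig:", np.round(scale * res.lr2, 2), " 90% cv:", res.cvm[:, 0])
```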


Given the importance of lag structures when estimating VARs, lags should not be imposed arbitrarily, nor should they be common across all variables, to avoid possible biases (Ahking & Miller 1985). In this article, we follow recent literature and select the lag structure for every variable in each of the three equations by means of Akaike’s final prediction error (FPE), in conjunction with the specific-gravity criterion of Caines et al. (1981). Specifically, we search for the ‘optimal’ lag length for each explanatory variable in our VAR system, allowing up to three annual lags. Higher initial lags could quickly consume available degrees of freedom. We subject the final model to various diagnostic tests to ensure appropriate model specification (e.g. the Durbin-m and Breusch–Godfrey tests for serial correlation; the Lagrange-multiplier test for heteroscedasticity; the Ramsey RESET test for omission-of-variables bias; and the Chow test for structural instability). Results from all these diagnostic tests (available upon request) evince no serious model misspecification.

Observe also that the VAR model can be estimated ‘equation by equation’ using ordinary least squares (OLS). However, if the error terms across equations are significantly correlated, we can enhance the statistical efficiency of the estimates by using the Zellner Seemingly Unrelated Regression (SUR) technique. Applying the preceding steps, the final model takes the following form:

$$\begin{bmatrix} \Delta A_t \\ \Delta S_t \\ \Delta I_t \end{bmatrix} = \begin{bmatrix} \phi_1 \\ \phi_2 \\ \phi_3 \end{bmatrix} + \begin{bmatrix} \gamma_{11}(1) & \gamma_{12}(3) & \gamma_{13}(1) \\ \gamma_{21}(2) & \gamma_{22}(3) & \gamma_{23}(3) \\ \gamma_{31}(3) & \gamma_{32}(1) & \gamma_{33}(1) \end{bmatrix} \begin{bmatrix} \Delta A_t \\ \Delta S_t \\ \Delta I_t \end{bmatrix} + \begin{bmatrix} \varepsilon_{1t} \\ \varepsilon_{2t} \\ \varepsilon_{3t} \end{bmatrix} \qquad (2)$$

where the number in parentheses attached to each γij is the lag length selected for that variable in that equation.
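The lag search itself is easy to sketch. The fragment below runs a simplified grid search over candidate lag lengths for one equation using Akaike’s FPE; it is an illustration under assumed variable names (dA, dS, dI for the differenced series), not the authors’ exact sequential specific-gravity procedure.

```python
# Sketch: pick FPE-minimising lag lengths for each explanatory variable in the
# advertising equation, searching over 1-3 lags per variable.
import numpy as np

def lagmat(x, maxlag, use):
    """Columns x[t-1], ..., x[t-use], with rows aligned to start at t = maxlag."""
    return np.column_stack([x[maxlag - j: len(x) - j] for j in range(1, use + 1)])

def fpe(dA, dS, dI, own, s_lag, i_lag):
    """Akaike's final prediction error for the advertising equation."""
    L = max(own, s_lag, i_lag)
    y = dA[L:]
    X = np.column_stack([np.ones(len(y)), lagmat(dA, L, own),
                         lagmat(dS, L, s_lag), lagmat(dI, L, i_lag)])
    T, m = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    ssr = float(np.sum((y - X @ beta) ** 2))
    return (T + m) / (T - m) * ssr / T

grid = [(a, s, i) for a in (1, 2, 3) for s in (1, 2, 3) for i in (1, 2, 3)]
best = min(grid, key=lambda g: fpe(dA, dS, dI, *g))
print("FPE-minimising lags (own, sales, income):", best)
```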

The Granger-causality test results from the SUR estimation are shown in Table 3. The empirical results given in Table 3 provide support for the hypothesis that ‘sales cause advertising’ over the Galbraithian alternative of ‘advertising causes sales’. Specifically, the null hypothesis that advertising does not Granger-cause sales is not rejected at the 5% level of significance (χ² = 5.28 with 3 df). However, the reverse hypothesis that sales do not Granger-cause advertising is soundly rejected at the same level of significance (χ² = 12.60 with 3 df). Also inconsistent with the Galbraithian hypothesis is the finding that personal income does not Granger-cause advertising (χ² = 2.05 with 1 df). To check the robustness of these inferences, we impose different lag structures and re-estimate the VAR system, but the results persist qualitatively.


TABLE 3  LIKELIHOOD RATIO (LR) TESTS OF HYPOTHESIS RESTRICTIONS (SUR SYSTEM ESTIMATION)

Null hypothesis                                                         LR statistic   df   p-value
1 Sales do not Granger-cause advertising: γ12(L) = 0                      12.60*        3    0.006
2 Advertising does not Granger-cause sales: γ21(L) = 0                     5.28         3    0.15
3 Disposable income does not Granger-cause advertising: γ13(L) = 0         2.05         1    0.15
4 Advertising does not Granger-cause disposable income: γ31(L) = 0         0.59         1    0.44

Notes: The lag profile of the VAR is determined by Akaike’s method in conjunction with the specific-gravity criterion. Each equation of the VAR has passed various model diagnostic tests (for autocorrelation, heteroscedasticity, omission-of-variables bias and structural instability). The VAR system is estimated by Zellner’s seemingly unrelated regression method. Degrees of freedom (df) correspond to the number of FPE-minimising lags. An asterisk (*) denotes rejection of the null hypothesis of no-causality at the 5% level of significance.
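The LR statistics in Table 3 compare the residual covariance of the unrestricted system with that of a system re-estimated under the exclusion restriction. A generic sketch of that computation is shown below; `resid_u` and `resid_r` (the T × 3 residual matrices from the unrestricted and restricted system estimations) are assumed inputs rather than quantities produced here.

```python
# Sketch of a likelihood-ratio test of zero restrictions in a system of equations:
# LR = T * (ln|Sigma_restricted| - ln|Sigma_unrestricted|), chi-square distributed
# with df equal to the number of excluded lag coefficients.
import numpy as np
from scipy import stats

def lr_test(resid_u, resid_r, df):
    T = resid_u.shape[0]
    sigma_u = resid_u.T @ resid_u / T
    sigma_r = resid_r.T @ resid_r / T
    lr = T * (np.log(np.linalg.det(sigma_r)) - np.log(np.linalg.det(sigma_u)))
    return lr, stats.chi2.sf(lr, df)

lr, pval = lr_test(resid_u, resid_r, df=3)   # e.g. excluding the three sales lags
print(f"LR = {lr:.2f}, p = {pval:.3f}")
```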

TABLE 4  FORECAST ERROR VARIANCE DECOMPOSITION

                                            By innovations in (% of FEVD explained)
Variable            Horizon      Disposable income       Sales                Advertising
explained           (years)      FEVD       SE           FEVD       SE        FEVD       SE
Disposable income      1         100.00*    7.07          0.00      3.83       0.00      4.30
                       3          97.22*    7.24          1.80      4.01       0.98      4.35
                       6          97.04*    7.33          1.93      4.19       1.03      4.36
                      12          97.03*    5.49          1.92      3.66       1.05      4.25
Sales                  1          34.71*   13.97         65.30*    13.88       0.00      3.88
                       3          35.71*   13.93         63.22*    13.84       1.07      3.92
                       6          35.72*   13.92         62.85*    13.83       1.43      3.93
                      12          35.71*   13.97         62.77*    13.73       1.52      3.76
Advertising            1          15.56    12.37         55.56*    12.23      28.88*     7.89
                       3          20.48    12.35         50.10*    12.20      29.42*     7.88
                       6          21.36    13.35         49.05*    12.20      29.59*     7.88
                      12          21.39    12.39         48.85*    12.08      29.76*     7.80

Notes: The SE is the standard error for the point estimate of the forecast error variance decomposition (FEVD), computed by a Monte-Carlo simulation procedure with 500 random draws. An asterisk (*) indicates the case where the estimated FEVD is at least twice as much as its standard error, the rule of thumb by which an estimated FEVD is deemed significant.


Of course, Granger-causality tests are not the only metric for judging the impact of advertising on sales (or vice versa). Another useful approach is to check the ability of one variable to account for the forecast error variance of the other (see the appendix for details). Using forecasting horizons of one, three, six and twelve years, Table 4 reports the estimated forecast error variance decomposition (FEVD) of each variable, decomposed into fractions that are accounted for by innovations in each of the three variables. We also perform a Monte-Carlo simulation with 500 random draws to compute the standard errors for the estimated FEVDs. Generally, an estimated FEVD is deemed statistically significant when it is at least twice its standard error.

Using a second-order VAR, sales explain a significant portion (about 50%) of the forecast error variance of advertising across all alternative forecast horizons. In contrast, advertising fails to account for any substantial portion of the error variance of sales (ranging only from 0 to 1.5%).4 These findings from the FEVD analysis offer additional evidence in support of the notion that sales drive advertising.

An additional insight into the dynamic interrelations among advertising, sales and disposable income may be obtained by examining the impulse responses to innovations of each variable in the VAR system. To conserve space, we focus on the responses of sales (advertising) to a one-standard-deviation shock in advertising (sales). Figures 1 and 2 plot the time paths of the impulse response functions of sales (advertising) reacting to a one-standard-deviation shock in advertising (sales), along with the 95% confidence intervals computed by Monte-Carlo simulations with 500 random draws. As is clear from the two figures, advertising reacts strongly to innovations in sales. The significant effect of a shock in sales on advertising lasts for about one year before it converges to zero. In contrast, shocks in advertising do not have any significant effect on sales. These impulse response results provide further support for our earlier finding that sales have a considerable impact on advertising expenditures, but not vice versa.

4 The orthogonalisation order for results in Table 4 is personal disposable income, sales, and then advertising. Such an order allows for the exogenous impacts of personal disposable income and sales on advertising.
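A sketch of how the decompositions in Table 4 could be generated with statsmodels follows; it assumes the differenced data frame used earlier, with columns ordered as in footnote 4 (income, sales, advertising), since statsmodels applies a Cholesky ordering that follows the column order.

```python
# Sketch of the FEVD reported in Table 4: a second-order VAR with the Cholesky
# ordering income -> sales -> advertising, decomposed out to a 12-year horizon.
from statsmodels.tsa.api import VAR

res = VAR(data[["I", "S", "A"]]).fit(2)   # second-order VAR, as used for Table 4
fevd = res.fevd(12)                       # forecast error variance decompositions
fevd.summary()                            # share of each variable's forecast error
                                          # variance attributable to each shock
```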


FIGURE 1  IMPULSE RESPONSES OF SALES TO AN INNOVATION IN ADVERTISING

[Figure: time path of the response of sales to a one-standard-deviation shock in advertising, plotted over horizons of 1 to 12 years, with 95% confidence bands.]

Notes: The dark lines represent the time path of impulse response functions to a one-standard-deviation shock. The dotted lines characterise the 95% confidence intervals of the impulse response functions, computed by Monte-Carlo simulations with 500 random draws.
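The impulse responses in Figures 1 and 2 can be sketched from the same fitted VAR used in the FEVD sketch above; the snippet below uses statsmodels’ orthogonalised impulse responses, and the exact plotting options (in particular how error bands are drawn) are assumptions about the installed version rather than a guaranteed interface.

```python
# Sketch of Figures 1 and 2: orthogonalised impulse responses out to 12 years.
irf = res.irf(12)
irf.plot(orth=True, impulse="A", response="S")  # Figure 1: sales response to an advertising shock
irf.plot(orth=True, impulse="S", response="A")  # Figure 2: advertising response to a sales shock
```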

AN INTERPRETATION

Recent marketing research does not support the Galbraithian hypothesis that advertising shapes consumers’ behaviour and promotes sales. Some researchers explain the alternative ‘sales drive advertising’ hypothesis on the basis of an advertising budget method (i.e. the advertising-to-sales ratio). The apparent advertising–sales puzzle may be explained by the signalling theory of advertising (see Prabhu & Stewart (2001) for a recent account).

Akerlof (1970) and Spence (1973) pioneered the signalling theory, whose central message is that consumers cannot perfectly determine product quality and attributes.


FIGURE 2  IMPULSE RESPONSES OF ADVERTISING TO AN INNOVATION IN SALES

[Figure: time path of the response of advertising to a one-standard-deviation shock in sales, plotted over horizons of 1 to 12 years, with 95% confidence bands.]

Notes: The dark lines represent the time path of impulse response functions to a one-standard-deviation shock. The dotted lines characterise the 95% confidence intervals of the impulse response functions, computed by Monte-Carlo simulations with 500 random draws.

The theory contends that manufacturers of high-quality products have an incentive to advertise more heavily to attract new customers and/or induce repeated purchases (Nelson 1975). The manufacturers of high-quality (popular) brands may be considered ‘strong’ manufacturers, and the manufacturers of low-quality (less-favoured) products ‘weak’ manufacturers. Typically, strong manufacturers are able to generate more revenues and profits because of their superior products, while weak manufacturers do not realise as much revenue due to their lower-quality and less popular products. For industrial and consumer products, it has been found that an index of ‘quality’ and ‘uniqueness’ is positively related to higher advertising-to-sales ratios (Lilien 1978; Farris & Buzzell 1979). This implies that marketers increase advertising expenditures to promote products of higher quality.


In addition, it is commonly assumed that manufacturers compete with each other and attempt to maximise market share. Under these circumstances, manufacturers tend to ‘boast’ about their products, and small manufacturers mimic, and perhaps even challenge, major competitors through comparative advertising. Consumers typically experience some difficulty in differentiating strong manufacturers from weak manufacturers. This problem could cause consumers to partially discount the information contained in public advertisements. Accordingly, all manufacturers might simply be pooled in consumers’ psyche into one general ‘average quality’ class. This situation is referred to as a ‘pooling equilibrium’ in the signalling literature.5

In order to overcome this market deficiency and more effectively convey information about product characteristics, a strong manufacturer tends to employ a series of costly signals that would be prohibitively expensive for weaker manufacturers to mimic. Advertising is costly and must generate sufficient sales to cover its cost. As Kirmani & Rao (2000) suggest, from the perspective of weak manufacturers, costly advertisements induce consumer trial, which probably exposes the low quality of the weak manufacturer’s products. The heavy advertising of strong manufacturers will then deter weak manufacturers, since it jeopardises their future sales and the recovery of their advertising costs. Moreover, significant advertising expenditures also deplete funds that would otherwise be available for product improvements and/or innovations. Therefore, weak manufacturers typically find the opportunity cost under these circumstances prohibitively high.

The net outcome is a ‘separating equilibrium’ in which stronger manufacturers employ signals to deliver asymmetric information about their product quality and popularity to consumers, hoping for the market’s rewards in the form of higher sales. This equilibrium is not only stable – no party can generate excessive profit by inappropriate signalling behaviour – but also efficient, in the sense that the amount of advertising expenditure correctly reflects the true prospects of the various products. Consumers are therefore able to differentiate strong from weak manufacturers, and consequently have the opportunity to purchase truly high-quality products.

5 A pooling equilibrium could explain the phenomenon that interpersonal communication such as ‘word of mouth’ tends to be more effective than mass media, as the diffusion-of-innovations research seems to suggest (Sultan et al. 1990).


It is in this sense that manufacturers tend to increase their advertising budgets to convey asymmetric information that is otherwise not perceived by consumers. Whenever manufacturers realise more revenues from sales, they tend to spend more on advertising. Of course, a considerable portion of advertising expenditure will ultimately be reflected in higher consumer prices. Indeed, as some studies suggest, advertising expenditure may be quite wasteful and akin to ‘burning money to signal’ (see, for example, Abraham & Lodish (1990) and Lodish et al. (1995)). Yet the underlying justification for more advertising remains. Strong manufacturers look forward to recouping advertising costs in the future by creating barriers to entry and driving weaker manufacturers out of the market. Consumers too might benefit from advertising signals, since such signals could improve information regarding the characteristics of the products and therefore reduce their costs of improper purchases (Ehrlich & Fisher 1982).

Under this scenario, firms spend on advertising in direct reaction to sales increases, and they are motivated to continue advertising by their conviction of the high quality of their products and by their eventual success in the market. In the case of products whose crucial quality attributes consumers cannot verify except through the adoption experience, advertisements can credibly convey little direct information beyond the simple fact that the product is available in the market. Yet such ‘reminder ads’ may still be useful, both for consumers to identify high-quality products and for manufacturers to make themselves known. Therefore, even when consumers discount advertising messages, advertising can still help consumers gauge product quality and ascertain the company’s conviction. As a result, marketers generally spend a considerable portion of sales revenue on advertising.

Before concluding, we should caution that our analysis in this article does not directly test the signalling theory, but only uses this increasingly popular framework to interpret the results we obtained for the advertising–sales nexus at the macro level.

CONCLUSION

The Galbraithian hypothesis contends that advertising increases with disposable personal income and promotes sales.


We re-examine this hypothesis using several empirical procedures (multivariate Granger-causality tests, forecast error variance decompositions, and impulse-response functions) against US annual data over the period 1948 to 1995. Consistent with Chowdhury’s (1994) evidence for the UK, our results for the USA also reject the Galbraithian hypothesis. However, unlike Chowdhury’s bivariate results, our results from a broader model consistently suggest that there is a reliable relationship between advertising and sales, but one in which sales lead advertising rather than vice versa. We use the signalling theory to interpret the results. Advertising is clearly costly, and may even be wasteful. Yet advertising helps consumers differentiate high-quality from low-quality manufacturers, thus avoiding the regret of purchasing low-quality products. Consumers view advertising as the company’s way of conveying confidence in its product.

APPENDIX

When considering long-term marketing interactions, it is necessary to consider the possibility of cointegration among the variables under investigation. One main advantage of cointegration analysis is that it enables researchers to analyse non-stationary variables, and thus avoids the loss of information that results from transforming non-stationary variables into stationary ones through differencing. We summarise below the empirical procedures used.

Non-stationarity (unit root) tests

A variable is said to be stationary (to have no unit root) if its stochastic properties (e.g. mean, variance) do not vary over time. In other words, ‘a stationary series tends to return to its mean value and fluctuate around it within a more-or-less constant range while a non-stationary series has a different mean at different points in time and its variance increases with the sample size’ (Harris 1995, p. 15). Non-stationarity could result in the spurious-regression phenomenon, involving invalid test statistics (Granger & Newbold 1974; Phillips 1986). Stock & Watson (1989) also argue that the usual test statistics (such as t, F, DW and R²) will not exhibit standard distributions if some of the variables in the model are non-stationary. It is not surprising, then, that testing for unit roots has gained popularity in the recent literature.
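To make the distinction concrete, the short simulation below (an illustration, not part of the article’s analysis) generates one stationary AR(1) series and one random walk and applies the ADF test to each; the random walk should fail to reject the unit-root null while the AR(1) series rejects it.

```python
# Illustration of the stationarity concept discussed above: the ADF test applied
# to a simulated stationary AR(1) process and to a simulated random walk.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
T = 500
eps = rng.standard_normal(T)

ar1 = np.zeros(T)                     # stationary: x_t = 0.5 * x_{t-1} + e_t
for t in range(1, T):
    ar1[t] = 0.5 * ar1[t - 1] + eps[t]
walk = np.cumsum(eps)                 # non-stationary: random walk

for name, series in [("AR(1)", ar1), ("random walk", walk)]:
    stat, pvalue, *_ = adfuller(series, autolag="AIC")
    print(f"{name}: ADF = {stat:.2f}, p = {pvalue:.3f}")
```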


There are several ways to test for unit roots. The procedure proposed by Dickey & Fuller (1979), dubbed DF, is perhaps the most familiar unit root test. However, the DF test is biased under serially correlated errors. The augmented Dickey–Fuller (ADF) test is superior in this case since it adds lagged values of the dependent variable to minimise the serial correlation problem. The Weighted Symmetric (WS) and the Phillips–Perron (PP) tests have also gained acceptance in the literature. Pantula et al. (1994) argue that the WS test is more powerful than several alternative unit root tests, including the ADF test. On the other hand, the PP test is robust to a wide range of serial correlations and time-dependent heteroscedasticity (Enders 1995). Typically, a non-stationary time-series variable can achieve stationarity by being differenced appropriately. A non-stationary time-series variable is said to be integrated of order d if it becomes stationary after differencing it d times (Engle & Granger 1987).

The cointegration test and error-correction models

Cointegration refers to the presence of a long-term (equilibrium) relationship among non-stationary variables. Under cointegration, standard regressions become seriously misspecified due to the omission of important variables (the error-correction term). The Johansen (1988) method is widely considered to be more efficient than the two-step Engle & Granger (1987) procedure. If the number of variables in the system is n, there can be up to n – 1 cointegrating vectors. Both the trace and the maximal-eigenvalue statistics are computed in order to determine the rank (or number) of cointegrating vectors, denoted r. The null hypothesis r = 0 is tested against the alternative r = 1; r ≤ 1 against the alternative r = 2; and so on. The cointegration test is sensitive to the choice of lag length, and hence we use the Schwarz criterion and the likelihood ratio test to select the lag order.

Grewal et al. (2001) suggest the use of Vector Autoregressive (VAR) models and Vector Error Correction Models (VECMs) to assess the relationships among time-series variables. If cointegration among the variables under study is rejected, then the VAR model is constructed without incorporating the error-correction term. Under cointegration, however, the VECM becomes appropriate, and it includes the error-correction term(s) representing the linear combination(s) of the cointegrating variables. VECMs incorporate the variables in their stationary forms to avoid spurious-regression problems.


In this regard, one main feature of the VAR and the VECM is that all the terms in the model are stationary, so the usual statistical inference is valid. Since lag structures are unlikely to be similar for different variables, a reasonable approach is to determine the appropriate lag lengths by Akaike’s final prediction error (FPE) criterion, in conjunction with the specific-gravity criterion of Caines et al. (1981). Thornton & Batten (1985) report that the FPE criterion is superior to many alternative lag-selection procedures for determining VAR lags. The FPE procedure minimises a function of the one-step-ahead forecast error. The specific-gravity criterion aids in ranking the various explanatory variables for inclusion in the equations; see Darrat & Brocato (1994) for details.

Having specified the equations and determined their proper lag structures, we then pool them together as a system. Next, we test various restrictions in the maintained VAR (or VECM) using over- and under-fitting tests conducted by system estimation (e.g. using Zellner’s seemingly unrelated regressions, SUR). The purpose of such system tests is to ensure that the specifications of the various equations reached on the basis of single-equation estimation are robust. In addition, we also apply a battery of diagnostic tests to check the adequacy of the final model. In particular, we test for autocorrelation by the Durbin-m and Breusch–Godfrey procedures; for heteroscedasticity by the Lagrange-multiplier test; for a possible omission-of-variables bias by the Ramsey RESET test; and for structural instability by the Chow test.

The Granger-causality test

Testing for Granger-causality is a recurrent theme in applied business research. A covariance-stationary time series (Y) is said to Granger-cause another stationary time series (X) if the prediction error from regressing X on its own lagged values significantly declines when lagged values of Y are also added to the model. Since this Granger definition of causality remains controversial and may not correspond exactly to the concept of causality in the philosophical sense, we attach ‘Granger’ to ‘cause’. As Schwert (1981) points out, much of this controversy can be resolved by viewing the Granger-causality approach as tests of ‘incremental predictive content’. Granger-causality is typically examined in a vector system where each variable under study has an opportunity to be considered as an endogenous variable in turn.
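In the bivariate case, this ‘incremental predictive content’ comparison is exactly what statsmodels implements; a minimal sketch (with the assumed column names used earlier) is shown below.

```python
# Sketch of the two-regression Granger test: does adding lagged sales improve
# the prediction of advertising beyond advertising's own lags?
from statsmodels.tsa.stattools import grangercausalitytests

# Column order matters: the test asks whether the second column Granger-causes
# the first. Here, whether S (sales) Granger-causes A (advertising).
gc = grangercausalitytests(data[["A", "S"]], maxlag=3)
# gc[lag][0] holds the F and chi-square statistics for each tested lag order.
```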


In our case, for example, Granger-causality can shed light on whether advertising is determining sales, or sales are determining advertising, or both are determining each other in a trivariate system.

Forecast error variance decompositions

The forecast error variance decomposition (FEVD) assesses how much of the forecast error variance of a given variable is explained by other variables. Variance decompositions are useful since they provide additional insights into the direction of Granger-causality; for example, our empirical results suggest that approximately 50% of the forecast error variance of advertising is explained by sales. In contrast, less than 2% of the forecast error variance of sales is explained by advertising. These results are consistent with the notion that sales Granger-cause advertising rather than vice versa.

Let the vector Zt contain the advertising, sales and personal income variables, and rewrite system (2) in a moving-average representation. The moving-average representation of ∆Zt is given by:

$$\Delta Z_t = \sum_{s=0}^{\infty} B_s e_{t-s} \qquad (3)$$

where the (i, j)th element of Bs measures the impulse response of the ith variable after s periods to a one-standard-deviation random shock in the jth variable. Although the et are serially uncorrelated, they may still be contemporaneously correlated. If these correlations are high, the interpretation of the impulse response function, as capturing the effect of a shock in the jth variable while all other variables are held constant, could be misleading. Thus, we use an orthogonalising transformation of et and rewrite system (3) in a recursive form as follows:

$$\Delta Z_t = \sum_{s=0}^{\infty} C_s u_{t-s} \qquad (4)$$

where Cs = V⁻¹Bs, and V is a lower triangular matrix such that the orthogonalised innovations ut are obtained from ut = V⁻¹et. The orthogonality of the ut allows the error variance of the f-step-ahead forecast of the ith variable to be decomposed into components accounted for by these shocks.


In particular, the components of this forecast error variance of the ith variable accounted for by shocks in the jth variable are given by

$$\sum_{h=0}^{f} C_{i,j,h}^{2}$$

This variance decomposition permits the isolation of the relative contributions to the forecast error variance of the ith variable.

Impulse response functions

Impulse response functions (IRFs) gauge the response of one variable to a change in another variable in the system. For instance, if a 10% shock is assigned to advertising (i.e. increasing or decreasing advertising expenditure by 10%), the IRFs can be used to enquire whether such a shock has a delayed effect on sales, and for how many periods. The elements of the matrices Cs in equation (4) are called impact multipliers. These multipliers and their time paths resulting from a one-standard-deviation shock in a given variable may be plotted along with the 95% confidence intervals derived from Monte-Carlo simulations.

ACKNOWLEDGEMENTS

The authors are indebted to Shahid Bhuian, Sean Dwyer, the Editor and two reviewers for their many helpful comments and suggestions.

REFERENCES

Abe, M. (1995) Price and advertising strategy of a national brand against its private-label clone: a signaling game approach, Journal of Business Research, 33(3), 241–250.

Abraham, M.M. & Lodish, L.M. (1990) Getting the most out of advertising and promotion, Harvard Business Review, 68(3), 50–55.

Ahking, F.W. & Miller, S.M. (1985) The relationship between government deficits, money growth, and inflation, Journal of Macroeconomics, 7(4), 447–467.

Akerlof, G. (1970) The market for ‘lemons’, qualitative uncertainty and the market mechanism, Quarterly Journal of Economics, 84, August, 488–500.

Ashley, R., Granger, C.W.J. & Schmalensee, R. (1980) Advertising and aggregate consumption: an analysis of causality, Econometrica, 48, 1149–1167.

Balasubramanian, S.K. & Kumar, V. (1990) Analyzing variations in advertising and promotional expenditures: key correlates in consumer, industrial, and service markets, Journal of Marketing, 54(2), 57–68.


Caines, P.E., Sethi, S.P. & Keng, C.W. (1981) Causality analysis and multivariate autoregressive modelling with an application to supermarket sales analysis, Journal of Economic Dynamics and Control, 3(3), 267–298.

Chowdhury, A.R. (1994) Advertising expenditures and the macro-economy: some new evidence, International Journal of Advertising, 13(1), 1–14.

Darrat, A.F. & Brocato, J. (1994) Stock market efficiency and the federal budget deficit: another anomaly?, The Financial Review, 29(1), 49–75.

Dickey, D.A. & Fuller, W. (1979) Distribution of the estimators for autoregressive time series with a unit root, Journal of the American Statistical Association, 74, 427–431.

Duffy, M. (1991) Advertising in demand systems: testing a Galbraithian hypothesis, Applied Economics, 23(3), 485–496.

Ehrlich, I. & Fisher, L. (1982) The derived demand for advertising: a theoretical and empirical investigation, The American Economic Review, 72(3), 366–388.

Enders, W. (1995) Applied Econometric Time Series. New York: John Wiley & Sons.

Engle, R.F. & Granger, C.W.J. (1987) Cointegration and error correction: representation, estimation, and testing, Econometrica, 55, 251–276.

Farris, P.W. & Buzzell, R.D. (1979) Why advertising and promotional costs vary: some cross-sectional analyses, Journal of Marketing, 43, 112–122.

Galbraith, J.K. (1967) The New Industrial State. Boston, MA: Houghton Mifflin.

Granger, C.W.J. & Newbold, P. (1974) Spurious regressions in econometrics, Journal of Econometrics, 2, 111–120.

Grewal, R., Mills, J.A., Mehta, R. & Mujumdar, S. (2001) Using cointegration analysis for modeling marketing interactions in dynamic environments: methodological issues and an empirical illustration, Journal of Business Research, 51(2), 127–144.

Harris, R.I.D. (1995) Using Cointegration Analysis in Econometric Modeling. New York: Prentice Hall.

Jacobson, R. & Nicosia, F.M. (1981) Advertising and public policy: the macroeconomic effects of advertising, Journal of Marketing Research, 18, February, 29–38.

Johansen, S. (1988) Statistical analysis of cointegration vectors, Journal of Economic Dynamics and Control, 12, 231–254.

Johansen, S. (1991) Estimation and hypothesis testing of cointegration vectors in Gaussian vector autoregressive models, Econometrica, 59(6), 1551–1580.

Kihlstrom, R.E. & Riordan, M.H. (1984) Advertising as a signal, Journal of Political Economy, 92(3), 427–450.

Kirmani, A. & Rao, A.R. (2000) No pain, no gain: a critical review of the literature on signaling unobservable product quality, Journal of Marketing, 64(2), 66–79.

Leone, R.P. (1983) Modeling sales–advertising relationship: an integrated time series–econometric approach, Journal of Marketing Research, 20, August, 291–295.

Lilien, G.L. (1978) Advisor 2: A Study of Industrial Marketing Budgeting. Cambridge, MA: The Advisor Project.

Lodish, L.M., Abraham, M., Kalmenson, S., Livelsberger, J., Lubetkin, B., Richardson, B. & Stevens, M.E. (1995) How advertising works: a meta-analysis of 389 real world split cable TV advertising experiments, Journal of Marketing Research, 32(2), 125–139.


McGraw-Hill (1969) Per cent of industrial sales invested in industrial advertising 1969, McGraw-Hill Research Laboratory of Advertising Performance. New York: McGraw-Hill.

Metwally, M.M. & Tamaschke, H.U. (1981) Advertising and the propensity to consume, Oxford Bulletin of Economics and Statistics, 43(3), 273–285.

Milgrom, P. & Roberts, J. (1986) Price and advertising signals of product quality, Journal of Political Economy, 94(4), 796–821.

Nelson, P. (1975) The economic consequences of advertising, Journal of Business, 48(2), 213–241.

Pantula, S.G., Gonzalez-Farias, G. & Fuller, W.A. (1994) A comparison of unit-root test criteria, Journal of Business and Economic Statistics, 12(4), 449–459.

Peel, D. (1975) Advertising and aggregate consumption. In K. Cowling et al. (eds), Advertising and Economic Behavior. London: Macmillan.

Phillips, P.C.B. (1986) Understanding spurious regressions in econometrics, Journal of Econometrics, 33(3), 311–340.

Prabhu, J. & Stewart, D.W. (2001) Signaling strategies in competitive interaction: building reputations and hiding the truth, Journal of Marketing Research, 38(1), 62–72.

Reimers, H.E. (1992) Comparisons of tests for multivariate cointegration, Statistical Papers, 33, 335–359.

Schmalensee, R. (1972) The Economics of Advertising. Amsterdam: North-Holland.

Schwert, G.W. (1981) The adjustment of stock prices to information about inflation, Journal of Finance, 36(1), 15–29.

Shimp, T.A. (1997) Advertising, Promotion, and Supplemental Aspects of Integrated Marketing Communications (4th edn). Fort Worth, TX: The Dryden Press.

Solow, R.M. (1967) The new industrial state or son of affluence, Public Interest, 9, 100–108.

Spence, M. (1973) Job market signaling, Quarterly Journal of Economics, 87(3), 355–374.

Stock, J.H. & Watson, M.W. (1989) Interpreting the evidence on money–income causality, Journal of Econometrics, 40(1), 161–181.

Sturgess, B.T. (1982) Dispelling the myth: the effects of total advertising expenditure on aggregate consumption, International Journal of Advertising, 1(3), 201–212.

Sultan, F., Farley, J.U. & Lehmann, D.R. (1990) A meta-analysis of applications of diffusion models, Journal of Marketing Research, 27(1), 70–78.

Tellis, G.J. & Fornell, C. (1988) The relationship between advertising and product quality over the product life cycle: a contingency theory, Journal of Marketing Research, 25(1), 64–71.

Thornton, D.L. & Batten, D.S. (1985) Lag-length selection and tests of Granger causality between money and income, Journal of Money, Credit, and Banking, 17(2), 164–178.

Zanias, G.P. (1994) The long run, causality, and forecasting in the advertising–sales relationship, Journal of Forecasting, 13(7), 601–610.


ABOUT THE AUTHORS

Maxwell K. Hsu is an Assistant Professor of Marketing at the University of Wisconsin, Whitewater. He received his DBA in Marketing from Louisiana Tech University in 1999. His articles have appeared in Applied Economics Letters, International Journal of Business and Economics, Studies in Economics & Finance, Human System Management, and other journals. He is currently working on research projects related to advertising, international diffusion of innovations, service quality and information technology management.

Ali F. Darrat is The Premier Bank Endowed Professor of Finance and Professor of Economics at Louisiana Tech University. He has published more than 130 articles in refereed national and international journals in the fields of economics and finance, including Review of Economics & Statistics, Journal of Money, Credit and Banking, Journal of Financial and Quantitative Analysis, Journal of Banking and Finance, and Journal of Financial Research. His work has been widely cited (almost 250 citations by other authors in the literature), and several of his articles have also been abstracted in The Journal of Economic Literature, Monetary Economics Abstract, CFA Digest and ISFA Digest. His research interests include applied macroeconomics, monetary theory and policy, money and banking, capital markets, international money and finance, economic development, and business cycles and forecasting.

Maosen Zhong is an Assistant Professor of Finance at The University of Texas, Brownsville. Since joining UTB in 1999, he has published more than ten refereed articles in Decision Sciences Journal, The Financial Review, Journal of Banking and Finance, Journal of Financial Research and Applied Economics Letters, among others. His research interests include asset pricing, international finance, futures markets and applied econometrics.

Salah S. Abosedra is Professor of Economics at the University of Qatar, with a PhD from the University of Colorado in 1984. His articles have appeared in The Journal of Energy and Development and OPEC Review, among others. His current research interests include natural gas pricing and financing, and the stochastic behaviour of oil prices.
