Introduction

The forecasting process is not simply a matter of identifying and using a method to compute a numerical estimate of future demand; it is a continuing process that requires constant monitoring and adjustment.

Why are forecasts wrong?

There are a number of reasons for forecast error. These include:

• Customers often make secret plans, plans to which the manufacturing firm is not a party, or no plans at all.
• Often an imperfect source is used to make forecasts when better sources are available; sometimes the best source is used but the best information available from that source is not received.
• Information lags can lead to old information being used where newer information is available, so that a new trend is spotted late.
• Decision making, or the communication of decisions, being omitted or delayed (e.g. a promotion campaign) is an important factor causing forecast error.
• Over-reliance on one source without confirmation from other sources can give rise to local distortions of the information. Conversely, if too many inaccurate sources are used they may incorrectly weight the resulting forecast.
• Communication links may not have been established, or may not be reliable, so that the resulting forecast is based on incomplete or partially out-of-date information.
• Changes of plan or environmental changes can of course happen. If good communication links are established these will be picked up.
• Changes in purchasers' behavior can occur for a number of reasons, but the ones which should not be a surprise are technology changes, dissatisfaction with the product or service, or changes in competitors' behavior (which the company should know about, or at least anticipate).
• There are a number of sales malpractices that cause demand distortion, such as 3-for-2 offers, January sales, etc.
• There are a number of inventory planning malpractices which can distort demand, such as excessive batch sizes or safety stocks. MRP1 systems in particular give rise to a phenomenon called "nervousness", caused by changes in planning parameters such as bills of material, safety stocks, batch sizes, lead times, planning horizons and scrap allowances; engineering changes, stock errors and PI stock corrections add to this problem. With MRP2 and APS systems the problem is made worse by any change to routings, capacity, work centers or priority rules, or by correction of inaccurate information.
• Previous plan failures (or operational malpractices) can cause a surge in output as the problem finally gets fixed. Viewed later, in the next forecast, this can look like increased sales.
• One of the great difficulties in this computer world is our over-reliance on computer models for forecasting. This spawns the following types of problem:
  – Inaccurate models (not recognizing demand patterns such as seasonality or trend, or, most importantly, sparse or unusual demand for which mathematical forecasting models are unsound).
  – Over-complex models (because you can).
  – Calculating safety stocks for all items in the same way. (A fast-moving, inexpensive item with many customers is quite a different risk from a slow-moving, expensive item with few customers, although they may exhibit equal standard deviation of demand.)
  – Unsubstantiated belief in an invalid model.
  – Unsubstantiated belief in the data source(s), perhaps simply because it is (they are) the most readily available.
  – Precision versus accuracy: computer systems are capable of doing calculations to many decimal places based on entirely inaccurate information. Be aware that you may be precisely wrong.
  – Not recognizing the business cycle or product life cycle shown above.
  – Incomplete data, due to the difficulty of collecting the data on a timely or reliable basis.
  – Inconsistent data, due to not collecting the same data consistently in consecutive forecasts because of some systems failure. (An apparent trend in this case could simply be due to some additional or omitted data.)
• The point at which demand is measured. If demand is measured at a point in the supply chain where there is distortion due to local factors, the resulting forecast will be wrong. For example, if demand is measured by dispatches and dispatches are batched to provide full loads, the demand will appear lumpy. It will also be distorted by failure to deliver orders on time (back orders). If you measure customer orders received, you are ignoring the underlying demand for orders you did not win for other reasons. If you measure enquiries, you will be including artificial demand based on price-checking exercises from customers. If you measure output, you will be measuring production batches, not demand. If you measure demand on your customers, you will include their distortions and any current or previous sourcing decisions they have made, not their future ones.
• And last but not least, natural variation, which you can do nothing about, but which is by far the smallest of these problems.

The implications of forecast error

The obvious implication of forecast error is an over- or under-reaction to the latest trend. This gives rise to risks of:

• Missing the market
• Lost sales
• Dissatisfied customers
• Obsolescence, out-of-shelf-life stock
• Wasted expenditure, or spending too early
• Stock
• Large product recalls caused by stock building

Figure 4: The implications of Forecast Error

Features of Forecasts

• Assumes a causal system: past = future.
• Forecasts are rarely perfect, because of randomness.
• Forecasts are more accurate for groups than for individuals.
• Forecast accuracy decreases as the time horizon increases.

Elements of a Good Forecast

• Timely
• Reliable
• Accurate
• Meaningful
• Written
• Easy to use

Fig – 1: Elements of a good forecast

Types of Forecasts

The basic types of forecasting are:

• Judgmental – uses subjective inputs
• Time series – uses historical data, assuming the future will be like the past
• Associative models – use explanatory variables to predict the future

Judgmental Forecasts

• Executive opinions
• Sales force opinions
• Consumer surveys
• Outside opinion
• Delphi method – opinions of managers and staff; achieves a consensus forecast

Time Series Forecasts

• Trend – long-term movement in data
• Seasonality – short-term regular variations in data
• Cycle – wavelike variations of more than one year's duration
• Irregular variations – caused by unusual circumstances
• Random variations – caused by chance

Time series methods are statistical techniques that make use of historical data accumulated over a period of time. These methods assume that identifiable historical patterns or trends in demand over time will repeat themselves. They include the moving average, exponential smoothing, and the linear trend line, and they are among the most popular methods for short-range forecasting in service and manufacturing companies.

Regression methods are used for forecasting by establishing a mathematical relationship between two or more variables. We are interested in identifying relationships between variables and demand: if we know that something caused demand to behave in a certain way in the past, we would like to identify that relationship so that, if the same thing happens again in the future, we can predict what demand will be.
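As a concrete illustration, here is a minimal Python sketch of two of the methods named above: a simple moving average and a least-squares linear trend line. The demand values and function names are placeholders for illustration, not data or code from the text.

    # Two simple time series methods: moving average and linear trend line.

    def moving_average(demand, periods=3):
        """Forecast for the next period = mean of the last `periods` observations."""
        return sum(demand[-periods:]) / periods

    def linear_trend(demand):
        """Fit y = a + b*t by least squares and project one period ahead."""
        n = len(demand)
        t = range(1, n + 1)
        t_mean = sum(t) / n
        y_mean = sum(demand) / n
        b = sum((ti - t_mean) * (yi - y_mean) for ti, yi in zip(t, demand)) / \
            sum((ti - t_mean) ** 2 for ti in t)
        a = y_mean - b * t_mean
        return a + b * (n + 1)

    demand = [42, 40, 43, 40, 41, 39, 46, 44]   # placeholder demand history
    print(moving_average(demand))                # mean of the last three periods
    print(linear_trend(demand))                  # trend projection for the next period

Exponential smoothing, the third method mentioned, is sketched later alongside the MAD calculation.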

Forecasting accuracy

A forecast is never completely accurate; forecasts will always deviate from actual demand. This difference between the forecast and the actual is the forecast error. In simple cases, a forecast is compared with an outcome at a single time-point and a summary of forecast errors is constructed over a collection of such time-points. Here the forecast may be assessed using the difference or using a proportional error. By convention, the error is defined as the value of the outcome minus the value of the forecast.

In other cases, a forecast may consist of predicted values over a number of lead-times; an assessment of forecast error then needs to consider more general ways of assessing the match between the time-profiles of the forecast and the outcome. If a main application of the forecast is to predict when certain thresholds will be crossed, one possible way of assessing the forecast is the timing error: the difference in time between when the outcome crosses the threshold and when the forecast does so. When there is interest in the maximum value being reached, a forecast can be assessed using any of:

• the difference in the times of the peaks;
• the difference between the peak values of the forecast and the outcome;
• the difference between the peak value of the outcome and the value forecast for that time point.

Forecast error can be a calendar forecast error or, when we want to summarize the forecast error over a group of units, a cross-sectional forecast error. If we observe the average forecast error for a time series of forecasts for the same product or phenomenon, we call this a calendar (or time-series) forecast error. If we observe it for multiple products over the same period, it is a cross-sectional forecast error. Although forecast error is inevitable, the objective of forecasting is to keep it as small as possible. Because we are basically dealing with methods that project from past data, our forecasts will be less reliable the further into the future we predict. Causal or explanatory models will generally give the greatest accuracy, especially in predicting turning points, but they do so at a great cost in computing time and data storage.

• The importance of appropriate evaluation criteria –
  – Different criteria may lead to totally different conclusions.
  – Evaluation criteria must be designed to reveal the problems in the model.
• Error – difference between actual value and predicted value
• Mean Absolute Deviation (MAD) – average absolute error
• Mean Squared Error (MSE) – average of squared errors
• Mean Absolute Percent Error (MAPE)

Monitoring the Forecasting System

Any forecasting system needs to be regularly monitored for error magnitude and bias. Reasonable errors are to be expected, but any forecaster dreads bias. The following techniques are used to monitor forecasts:

Mean Absolute Deviation

Mean absolute deviation, or MAD, is one of the most popular and simplest measures of forecast error. MAD is an average of the differences between the forecast and actual demand, computed by the following formula:

MAD = Σ|Actual − Forecast| / n

where n is the total number of periods.

• MAD –
  – Easy to compute
  – Weights errors linearly

Computational value for MAD:

Table 1

Period   Demand Dt   Forecast Ft (α = 0.30)   Error (Dt − Ft)   |Dt − Ft|
1        37          37.00                    –                 –
2        40          37.00                    3.00              3.00
3        41          37.90                    3.10              3.10
4        37          38.83                    -1.83             1.83
5        45          38.28                    6.72              6.72
6        50          40.29                    9.69              9.69
7        43          43.20                    -0.20             0.20
8        47          43.14                    3.86              3.86
9        56          44.30                    11.70             11.70
10       52          47.81                    4.19              4.19
11       55          49.06                    5.94              5.94
12       54          50.84                    3.15              3.15
Total    557                                  49.32             53.39

MAD = Σ|Dt − Ft| / n = 53.39/11 = 4.85

Table 2

Period   Actual   Forecast (α = 0.1)   Error    Forecast (α = 0.4)   Error
1        42       –                    –        –                    –
2        40       42.00                -2.00    42.00                -2.00
3        43       41.80                1.20     41.20                1.80
4        40       41.92                -1.92    41.92                -1.92
5        41       41.73                -0.73    41.15                -0.15
6        39       41.66                -2.66    41.09                -2.09
7        46       41.39                4.61     40.25                5.75
8        44       41.85                2.15     42.55                1.45
9        45       42.07                2.93     43.13                1.87
10       38       42.36                -4.36    43.88                -5.88
11       40       41.92                -1.92    41.53                -1.53
12       –        41.73                –        40.92                –

The smaller the value of MAD, the more accurate the forecast.
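The arithmetic behind Table 1 can be reproduced in a few lines of Python. This is only an illustrative sketch (the names are not from the text); running it with different values of α on the Table 2 demand series gives the comparison of MADs that the table illustrates.

    # Exponential smoothing forecast and MAD, reproducing Table 1 (alpha = 0.30).
    demand = [37, 40, 41, 37, 45, 50, 43, 47, 56, 52, 55, 54]

    def exp_smoothing(demand, alpha):
        """Return forecasts F1..Fn, with F1 assumed equal to the first demand value."""
        forecasts = [demand[0]]
        for d in demand[:-1]:
            forecasts.append(alpha * d + (1 - alpha) * forecasts[-1])
        return forecasts

    forecasts = exp_smoothing(demand, alpha=0.30)
    errors = [d - f for d, f in zip(demand[1:], forecasts[1:])]   # periods 2..12
    mad = sum(abs(e) for e in errors) / len(errors)
    print(round(mad, 2))   # about 4.85, matching MAD = 53.39 / 11 in the text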

Mean Absolute Percent Error (MAPE)

The mean absolute percent error measures the absolute error as a percentage of demand rather than per period. It eliminates the problem of interpreting the measure of accuracy relative to the magnitude of the demand and forecast values, which MAD has.

MAPE = Σ(|Actual − Forecast| / Actual × 100) / n

• Puts errors in perspective

From the problem above: MAPE = 53.39/557 × 100 = 9.6%.

Mean Squared Error (MSE)

• Squares errors
• More weight to large errors

MSE = Σ(Actual − Forecast)² / (n − 1)

Table 3

Period   Actual   Forecast   (A − F)   |A − F|   (A − F)²   (|A − F|/Actual) × 100
1        217      215        2         2         4          0.92
2        213      216        -3        3         9          1.41
3        216      215        1         1         1          0.46
4        210      214        -4        4         16         1.90
5        213      211        2         2         4          0.94
6        219      214        5         5         25         2.28
7        216      217        -1        1         1          0.46
8        212      216        -4        4         16         1.89
Total                        -2        22        76         10.26

MAD = 22/8 = 2.75
MSE = 76/(8 − 1) = 10.86
MAPE = 10.26/8 = 1.28%
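A short Python sketch of the three error measures, checked against the Table 3 figures (the variable names are illustrative, not from the text):

    # MAD, MSE and MAPE for the Table 3 data.
    actual   = [217, 213, 216, 210, 213, 219, 216, 212]
    forecast = [215, 216, 215, 214, 211, 214, 217, 216]

    errors = [a - f for a, f in zip(actual, forecast)]
    n = len(errors)

    mad  = sum(abs(e) for e in errors) / n                             # 22 / 8 = 2.75
    mse  = sum(e ** 2 for e in errors) / (n - 1)                       # 76 / 7 = 10.86
    mape = sum(abs(e) / a * 100 for e, a in zip(errors, actual)) / n   # 10.26 / 8 = 1.28

    print(round(mad, 2), round(mse, 2), round(mape, 2))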

Controlling the Forecast

Random Variation: If the out-of-control condition was caused by random variation, obviously no action is required. Such occurrences can be reduced by selecting a sufficiently large value for the MAPE or tracking-signal limit.

Inaccurate Parameter Values: Although the forecasts may be satisfactory initially, larger errors may develop as time passes, which indicates that the true values of the smoothing constants (such as α and β) used in the model have changed with time. The model may still be correct, but revised estimates of the model parameters may be required. In these cases, higher parameter values or a smaller number of periods in the model can correct the out-of-control situation. Once the forecast appears to be back in control, the smoothing constant can be returned to its normal value.

Change in the Time Series Process: The demand pattern for the product may have changed with time. Terms such as trend, seasonal, and cyclical components may need to be added or deleted. Corrective actions should be taken to obtain a better representation of the process. If the demand process appears to be highly autocorrelated, the use of different models should be explored.

Change in the Variance of the Demand: Larger smoothing constants can be used to reflect the changed condition of the demand pattern. In any event, when corrective actions are taken to bring the forecasting system under control, MAD, MAPE and the tracking signal should be recalculated with the time periods reset, to prevent the past from affecting the new control measures.



• Control chart –
  – A visual tool for monitoring forecast errors
  – Used to detect non-randomness in errors
• Forecasting errors are in control if –
  – All errors are within the control limits
  – No patterns, such as trends or cycles, are present
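The text does not give a formula for the control limits. A common convention is to center the chart at zero and set the limits at ±z standard deviations of the error, estimating the standard deviation as √MSE, with z typically 2 or 3; the sketch below applies that assumed convention to the Table 3 errors.

    # Control-limit check for forecast errors, assuming limits of 0 +/- z * sqrt(MSE).
    import math

    def control_limits(errors, z=2):
        """Return (lower, upper) control limits based on the MSE of the errors."""
        mse = sum(e ** 2 for e in errors) / (len(errors) - 1)
        spread = z * math.sqrt(mse)
        return -spread, spread

    errors = [2, -3, 1, -4, 2, 5, -1, -4]             # error column from Table 3
    lower, upper = control_limits(errors, z=2)
    print(round(lower, 2), round(upper, 2))           # about -6.59 and 6.59
    print(all(lower <= e <= upper for e in errors))   # True: all errors within limits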

Sources of Forecast Errors

• Model may be inadequate
• Irregular variations
• Incorrect use of forecasting technique

Tracking Signal

The tracking signal indicates whether the forecast is consistently biased high or low. Forecasts can go out of control and start becoming inaccurate for several reasons, including a change in trend, the unanticipated appearance of a cycle, or an irregular variation such as unseasonable weather, a promotional campaign, new competition, or a political event that distracts consumers.

The tracking signal is the ratio of cumulative error to MAD:

Tracking signal = Σ(Actual − Forecast) / MAD

Bias – a persistent tendency for forecasts to be greater or less than the actual values.

Choosing a Forecasting Technique

• No single technique works in every situation
• The two most important factors are:
  – Cost
  – Accuracy
• Other factors include:
  – Availability of historical data
  – Availability of computers
  – Time needed to gather and analyze the data
  – Forecast horizon

Operations Strategy

Forecasts are the basis for many decisions.

• Work to improve short-term forecasts
• Accurate short-term forecasts improve …

Example of tracking signal:

Table 4

Period   Demand   Forecast   Error    Σ Error   MAD    Tracking Signal
1        37       37.00      –        –         –      –
2        40       37.00      3.00     3.00      3.00   1.00
3        41       37.90      3.10     6.10      3.05   2.00
4        37       38.83      -1.83    4.27      2.64   1.62
5        45       38.28      6.72     10.99     3.66   3.00
6        50       40.29      9.69     20.68     4.87   4.25
7        43       43.20      -0.20    20.48     4.09   5.01
8        47       43.14      3.86     24.34     4.06   6.00
9        56       44.30      11.70    36.04     5.01   7.19
10       52       47.81      4.19     40.23     4.92   8.18
11       55       49.06      5.94     46.17     5.02   9.20
12       54       50.84      3.15     49.32     4.85   10.17
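The tracking-signal column of Table 4 can be reproduced with a short Python loop; the sketch below is illustrative (the names are not from the text). A common rule of thumb, not stated in the text, is to investigate the forecast when the tracking signal drifts outside roughly ±3 to ±4 MADs.

    # Tracking signal per period, reproducing Table 4 (alpha = 0.30).
    demand = [37, 40, 41, 37, 45, 50, 43, 47, 56, 52, 55, 54]

    # Exponentially smoothed forecasts, with F1 assumed equal to D1.
    alpha, forecasts = 0.30, [demand[0]]
    for d in demand[:-1]:
        forecasts.append(alpha * d + (1 - alpha) * forecasts[-1])

    cum_error = cum_abs_error = 0.0
    for period in range(2, len(demand) + 1):          # errors start in period 2
        error = demand[period - 1] - forecasts[period - 1]
        cum_error += error
        cum_abs_error += abs(error)
        mad = cum_abs_error / (period - 1)
        tracking_signal = cum_error / mad
        print(period, round(tracking_signal, 2))      # rises to about 10.2 by period 12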
