I think it's worth remembering that ARIMA models are atheoretic models, so the usual approach to interpreting estimated regression coefficients doesn't really carry over to ARIMA modelling.
In order to interpret (or understand) estimated ARIMA models, it helps to be aware of the different characteristics displayed by a number of common ARIMA models.
We can explore some of these characteristics by investigating the types of forecasts produced by different ARIMA models. This is the main approach I take below, but a good alternative would be to look at the impulse response functions or dynamic time paths associated with different ARIMA models (or stochastic difference equations). I'll talk about these at the end.
AR(1) Models
Let's consider an AR(1) model for a moment. In this model, we can say that the lower the value of α1, the faster the rate of convergence (to the mean). We can try to understand this aspect of AR(1) models by investigating the nature of the forecasts for a small set of simulated AR(1) models with different values for α1.
The set of four AR(1) models that we'll discuss can be written in algebraic notation as:

$$Y_t = C + 0.95\,Y_{t-1} + \nu_t \tag{1}$$
$$Y_t = C + 0.8\,Y_{t-1} + \nu_t \tag{2}$$
$$Y_t = C + 0.5\,Y_{t-1} + \nu_t \tag{3}$$
$$Y_t = C + 0.4\,Y_{t-1} + \nu_t \tag{4}$$

where C is a constant and the rest of the notation follows from the OP. As can be seen, each model differs only with respect to the value of α1.
In the graph below, I have plotted out-of-sample forecasts for these four AR(1) models. It can be seen that the forecasts for the AR(1) model with α1 = 0.95 converge at a slower rate relative to the other models, while the forecasts for the AR(1) model with α1 = 0.4 converge at a faster rate than the others.
Note: when the red line is horizontal, it has reached the mean of the simulated series.
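(For anyone who wants to reproduce plots along these lines, here is a minimal sketch in R. The seed, sample size, constant (an added mean of 5), and forecast horizon are arbitrary illustrative choices, not the exact settings behind the graphs shown here.)

```r
# Simulate the four AR(1) processes and plot out-of-sample forecasts.
set.seed(123)
alphas <- c(0.95, 0.8, 0.5, 0.4)

par(mfrow = c(2, 2))
for (a in alphas) {
  y   <- 5 + arima.sim(model = list(ar = a), n = 200)  # simulated AR(1) series
  fit <- arima(y, order = c(1, 0, 0))                   # estimate an AR(1)
  fc  <- predict(fit, n.ahead = 40)$pred                # out-of-sample forecasts
  plot(c(y, fc), type = "l", ylab = "y",
       main = paste("AR(1) with alpha1 =", a))
  lines(201:240, fc, col = "red")                       # forecasts in red
  abline(h = mean(y), lty = 2)                          # sample mean of the series
}
```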
MA(1) Models
Now let's consider four MA(1) models with different values for θ1. The four models we'll discuss can be written as:
$$Y_t = C + 0.95\,\nu_{t-1} + \nu_t \tag{5}$$
$$Y_t = C + 0.8\,\nu_{t-1} + \nu_t \tag{6}$$
$$Y_t = C + 0.5\,\nu_{t-1} + \nu_t \tag{7}$$
$$Y_t = C + 0.4\,\nu_{t-1} + \nu_t \tag{8}$$
In the graph below, I have plotted out-of-sample forecasts for these four different MA(1) models. As the graph shows, the behaviour of the forecasts is markedly similar in all four cases: quick (linear) convergence to the mean. Notice that there is less variety in the dynamics of these forecasts compared to those of the AR(1) models.
Note: when the red line is horizontal, it has reached the mean of the simulated series.
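(A minimal sketch of the same exercise for the MA(1) case, with the same illustrative choices of seed, sample size, and constant. It makes plain why the forecasts look so similar: beyond one step ahead, an MA(1) forecast is just the mean.)

```r
# Forecasts from an MA(1) revert to the mean after one step, whatever theta1 is.
set.seed(123)
thetas <- c(0.95, 0.8, 0.5, 0.4)
for (b in thetas) {
  y   <- 5 + arima.sim(model = list(ma = b), n = 200)  # simulated MA(1) series
  fit <- arima(y, order = c(0, 0, 1))                   # estimate an MA(1)
  print(predict(fit, n.ahead = 5)$pred)  # only the 1-step forecast differs from the mean
}
```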
AR(2) Models
Things get a lot more interesting when we start to consider more complex ARIMA models. Take for example AR(2) models. These are just a small step up from the AR(1) model, right? Well, one might like to think that, but the dynamics of AR(2) models are quite rich in variety as we'll see in a moment.
Let's explore four different AR(2) models:
$$Y_t = C + 1.7\,Y_{t-1} - 0.8\,Y_{t-2} + \nu_t \tag{9}$$
$$Y_t = C + 0.9\,Y_{t-1} - 0.2\,Y_{t-2} + \nu_t \tag{10}$$
$$Y_t = C + 0.5\,Y_{t-1} - 0.2\,Y_{t-2} + \nu_t \tag{11}$$
$$Y_t = C + 0.1\,Y_{t-1} - 0.7\,Y_{t-2} + \nu_t \tag{12}$$
The out-of-sample forecasts associated with each of these models are shown in the graph below. It is quite clear that they differ significantly from one another, and they are also quite a varied bunch in comparison to the forecasts that we've seen above - except for model 2's forecasts (top right plot), which behave similarly to those of an AR(1) model.
Note: when the red line is horizontal, it has reached the mean of the simulated series.
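(Again, a minimal sketch of how such plots could be produced in R, with the same illustrative settings as before. The first coefficient pair, for example, should produce clearly visible stochastic cycles in the forecasts.)

```r
# Simulate the four AR(2) processes and plot out-of-sample forecasts.
set.seed(123)
ar2_coefs <- list(c(1.7, -0.8), c(0.9, -0.2), c(0.5, -0.2), c(0.1, -0.7))

par(mfrow = c(2, 2))
for (cf in ar2_coefs) {
  y   <- 5 + arima.sim(model = list(ar = cf), n = 200)  # simulated AR(2) series
  fit <- arima(y, order = c(2, 0, 0))                    # estimate an AR(2)
  fc  <- predict(fit, n.ahead = 40)$pred                 # out-of-sample forecasts
  plot(c(y, fc), type = "l", ylab = "y",
       main = paste("AR(2) with", cf[1], "and", cf[2]))
  lines(201:240, fc, col = "red")                        # forecasts in red
  abline(h = mean(y), lty = 2)
}
```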
The key point here is that not all AR(2) models have the same dynamics! For example, if the condition,
$$\alpha_1^2 + 4\alpha_2 < 0,$$
is satisfied, then the AR(2) model displays pseudo-periodic behaviour and, as a result, its forecasts will appear as stochastic cycles. On the other hand, if this condition is not satisfied, stochastic cycles will not be present in the forecasts; instead, the forecasts will be more similar to those for an AR(1) model.
It's worth noting that the above condition comes from the general solution to the homogeneous form of the linear, autonomous, second-order difference equation (with complex roots). If this is foreign to you, I recommend both Chapter 1 of Hamilton (1994) and Chapter 20 of Hoy et al. (2001).
Testing the above condition for the four AR(2) models results in the following:
$$(1.7)^2 + 4(-0.8) = -0.31 < 0 \tag{13}$$
$$(0.9)^2 + 4(-0.2) = 0.01 > 0 \tag{14}$$
$$(0.5)^2 + 4(-0.2) = -0.55 < 0 \tag{15}$$
$$(0.1)^2 + 4(-0.7) = -2.79 < 0 \tag{16}$$
As expected from the appearance of the plotted forecasts, the condition is satisfied for each of the four models except for model 2. Recall from the graph that model 2's forecasts behave "normally", similar to an AR(1) model's forecasts. The forecasts associated with the other models contain cycles.
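(The check itself is a one-liner in R, should you want it:)

```r
# Check the pseudo-periodic condition alpha1^2 + 4*alpha2 < 0 for the four AR(2) models.
ar2_coefs <- list(c(1.7, -0.8), c(0.9, -0.2), c(0.5, -0.2), c(0.1, -0.7))
cond <- sapply(ar2_coefs, function(a) a[1]^2 + 4 * a[2])
cond       # -0.31  0.01 -0.55 -2.79
cond < 0   #  TRUE FALSE  TRUE  TRUE  -> cycles in all but model 2
```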
Application - Modelling Inflation
Now that we have some background under our belt, let's try to interpret an AR(2) model in an application. Consider the following model for the inflation rate (πt):
$$\pi_t = C + \alpha_1 \pi_{t-1} + \alpha_2 \pi_{t-2} + \nu_t.$$
A natural expression to associate with such a model would be something like:
"inflation today depends on the level of inflation yesterday and on the level of inflation on the day before yesterday". Now, I wouldn't argue against such an interpretation, but I'd suggest that some caution be drawn and that we ought to dig a bit deeper to devise a proper interpretation. In this case we could ask, in which way is inflation related to previous levels of inflation? Are there cycles? If so, how many cycles are there? Can we say something about the peak and trough? How quickly do the forecasts converge to the mean? And so on.
These are the sorts of questions we can ask when trying to interpret an AR(2) model and as you can see, it's not as straightforward as taking an estimated coefficient and saying "a 1 unit increase in this variable is associated with a so-many unit increase in the dependent variable" - making sure to attach the ceteris paribus condition to that statement, of course.
Bear in mind that in our discussion so far, we have only explored a selection of AR(1), MA(1), and AR(2) models. We haven't even looked at the dynamics of mixed ARMA models and ARIMA models involving higher lags.
To show how difficult it would be to interpret models that fall into that category, imagine another inflation model - an ARMA(3,1) with α2 constrained to zero:
$$\pi_t = C + \alpha_1 \pi_{t-1} + \alpha_3 \pi_{t-3} + \theta_1 \nu_{t-1} + \nu_t.$$
Say what you'd like, but here it's better to try to understand the dynamics of the system itself. As before, we can look and see what sort of forecasts the model produces, but the alternative approach that I mentioned at the beginning of this answer was to look at the impulse response function or time path associated with the system.
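(As an aside, a constrained model like this can be estimated in R via the `fixed` argument of `arima()`. A minimal sketch, where `infl` stands in for some hypothetical inflation series:)

```r
# ARMA(3,1) with the second AR coefficient pinned to zero (NA = freely estimated).
fit <- arima(infl, order = c(3, 0, 1),
             fixed = c(NA, 0, NA, NA, NA),  # ar1, ar2 = 0, ar3, ma1, intercept
             transform.pars = FALSE)        # needed when fixing AR coefficients

# Implied impulse responses (psi weights) from the fitted coefficients.
ARMAtoMA(ar = coef(fit)[1:3], ma = coef(fit)[4], lag.max = 20)
```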
This brings me to the next part of my answer, where we'll discuss impulse response functions.
Impulse Response Functions
Those who are familiar with vector autoregressions (VARs) will be aware that one usually tries to understand an estimated VAR model by interpreting the impulse response functions, rather than by trying to interpret the estimated coefficients, which are often too difficult to interpret anyway.
The same approach can be taken when trying to understand ARIMA models. That is, rather than try to make sense of (complicated) statements like "today's inflation depends on yesterday's inflation and on inflation from two months ago, but not on last week's inflation!", we instead plot the impulse response function and try to make sense of that.
Application - Four Macro Variables
For this example (based on Leamer (2010)), let's consider four ARIMA models based on four macroeconomic variables: GDP growth, inflation, the unemployment rate, and the short-term interest rate. The four models have been estimated and can be written as:
$$\begin{aligned}
Y_t &= 3.20 + 0.22\,Y_{t-1} + 0.15\,Y_{t-2} + \nu_t \\
\pi_t &= 4.10 + 0.46\,\pi_{t-1} + 0.31\,\pi_{t-2} + 0.16\,\pi_{t-3} + 0.01\,\pi_{t-4} + \nu_t \\
u_t &= 6.2 + 1.58\,u_{t-1} - 0.64\,u_{t-2} + \nu_t \\
r_t &= 6.0 + 1.18\,r_{t-1} - 0.23\,r_{t-2} + \nu_t
\end{aligned}$$
where Yt denotes GDP growth at time t, πt denotes inflation, ut denotes the unemployment rate, and rt denotes the short-term interest rate (3-month Treasury).
The equations show that GDP growth, the unemployment rate, and the short-term interest rate are modeled as AR(2) processes while inflation is modeled as an AR(4) process.
Rather than try to interpret the coefficients in each equation, let's plot the impulse response functions (IRFs) and interpret them instead. The graph below shows the impulse response functions associated with each of these models.
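(If you'd like to compute IRFs like these yourself, here is a minimal sketch in R using the psi-weights from `ARMAtoMA()`, i.e. the MA(∞) representation of each estimated model. This is the standard univariate IRF calculation; the plots discussed here may have been produced with different settings or scaling.)

```r
# Impulse responses: the path of each variable after a one-unit shock to its
# own innovation, computed from the estimated AR coefficients above.
irf <- list(
  gdp_growth = ARMAtoMA(ar = c(0.22, 0.15), lag.max = 20),
  inflation  = ARMAtoMA(ar = c(0.46, 0.31, 0.16, 0.01), lag.max = 20),
  unemp_rate = ARMAtoMA(ar = c(1.58, -0.64), lag.max = 20),
  int_rate   = ARMAtoMA(ar = c(1.18, -0.23), lag.max = 20)
)

par(mfrow = c(2, 2))
for (v in names(irf)) {
  plot(0:20, c(1, irf[[v]]), type = "h", main = v,
       xlab = "periods after shock", ylab = "response")  # impact response is 1
  abline(h = 0, lty = 2)
}
```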
Don't take this as a masterclass in interpreting IRFs - think of it more as a basic introduction - but anyway, to help us interpret the IRFs we'll need to acquaint ourselves with two concepts: momentum and persistence.
These two concepts are defined in Leamer (2010) as follows:
Momentum: Momentum is the tendency to continue moving in the same
direction. The momentum effect can offset the force of regression
(convergence) toward the mean and can allow a variable to move away
from its historical mean, for some time, but not indefinitely.
Persistence: A persistent variable will hang around where it is and
converge only slowly to the historical mean.
Equipped with this knowledge, we now ask the question: suppose a variable is at its historical mean and it receives a temporary one unit shock in a single period, how will the variable respond in future periods? This is akin to asking those questions we asked before, such as: do the forecasts contain cycles? how quickly do the forecasts converge to the mean? etc.
At last, we can now attempt to interpret the IRFs.
Following a one unit shock, the unemployment rate and short-term interest rate (3-month treasury) are carried further from their historical mean. This is the momentum effect. The IRFs also show that the unemployment rate overshoots to a greater extent than does the short-term interest rate.
We also see that all of the variables return to their historical means (none of them "blow up"), although they each do this at different rates. For example, GDP growth returns to its historical mean after about 6 periods following a shock, the unemployment rate returns to its historical mean after about 18 periods, but inflation and the short-term interest rate take longer than 20 periods to return to their historical means. In this sense, GDP growth is the least persistent of the four variables while inflation can be said to be highly persistent.
I think it's a fair conclusion to say that we've managed (at least partially) to make sense of what the four ARIMA models are telling us about each of the four macro variables.
Conclusion
Rather than try to interpret the estimated coefficients in ARIMA models (difficult for many models), try instead to understand the dynamics of the system. We can attempt this by exploring the forecasts produced by our model and by plotting the impulse response function.
[I'm happy enough to share my R code if anyone wants it.]
References
- Hamilton, J. D. (1994). Time Series Analysis. Princeton, NJ: Princeton University Press.
- Leamer, E. (2010). Macroeconomic Patterns and Stories: A Guide for MBAs. Springer.
- Hoy, M., J. Livernois, C. McKenna, R. Rees and T. Stengos (2001). Mathematics for Economics, 2nd edition. Cambridge, MA: MIT Press.