Interpretation of ARIMA models


19

I have a question about ARIMA models. Say I have a time series Y_t that I would like to forecast, and an ARIMA(2,2) model seems like a good way to carry out the forecasting exercise. The lagged ΔY terms imply that my series today is affected by past events, which makes sense. But what is the interpretation of the errors? Is my previous residual (how far off I was in my computation) affecting the value of my series today? And how are the lagged residuals computed in this regression, given that the residuals are a product/leftover of the regression itself?

ΔY_t = α_1 ΔY_{t-1} + α_2 ΔY_{t-2} + ν_t + θ_1 ν_{t-1} + θ_2 ν_{t-2}

4
I think you need to remember that ARIMA models are atheoretic models, so the usual rules for interpreting estimated regression coefficients don't really carry over to them in quite the same way. ARIMA models have a number of properties worth knowing about. For example, in an AR(1), the lower the value of α_1, the faster the rate of convergence. But take, say, an AR(2) model: not all AR(2) models are the same! For instance, if the condition α_1^2 + 4α_2 < 0 is satisfied, the AR(2) displays pseudo-periodic behaviour and, as a result, its forecasts are stochastic cycles.
Graeme Walsh

3
(continued ...) In a somewhat similar fashion, when dealing with vector autoregressions there is a tendency to interpret the impulse response functions (IRFs) rather than the estimated coefficients; the coefficients are usually very hard to interpret, but one can generally say something about the IRFs. Out of curiosity, have you come across many papers in which the authors take great care to interpret the coefficients of an ARIMA model?
Graeme Walsh

2
There seems to be a notational issue. "ARIMA(2,2)" can't be right, because ARIMA models have three terms (p,d,q), one for each of the AR/I/MA components, while ARMA models have two (e.g. ARMA(2,2)) - but you appear to have a first difference, which suggests ARIMA(2,1,2). Please edit to reflect your intent.
Glen_b -Reinstate Monica

2
@Glen_b I remember asking the same thing in another question. It looks like we have something of a duplicate; this question and the linked one are very similar.
Graeme Walsh

Answers:


36

I think you need to remember that ARIMA models are atheoretic models, so the usual approach to interpreting estimated regression coefficients doesn't really carry over to ARIMA modelling.

To interpret (or understand) estimated ARIMA models, it is good to be aware of the different properties exhibited by a number of common ARIMA models.

We can explore some of these properties by investigating the types of forecasts produced by different ARIMA models. This is the main approach I take below, but a good alternative would be to look at the impulse response functions or dynamic time paths associated with different ARIMA models (or stochastic difference equations). I'll talk about these towards the end.

AR(1) Models

Let's consider an AR(1) model for a moment. In this model, we can say that the lower the value of α_1, the faster the rate of convergence (to the mean). We can try to understand this aspect of AR(1) models by investigating the nature of the forecasts from a small set of simulated AR(1) models with different values of α_1.

The set of four AR(1) models that we'll discuss can be written in algebraic notation as:

Y_t = C + 0.95 Y_{t-1} + ν_t    (1)
Y_t = C + 0.8 Y_{t-1} + ν_t    (2)
Y_t = C + 0.5 Y_{t-1} + ν_t    (3)
Y_t = C + 0.4 Y_{t-1} + ν_t    (4)

where C is a constant and the rest of the notation follows from the OP. As can be seen, each model differs only with respect to the value of α_1.

In the graph below, I have plotted out-of-sample forecasts for these four AR(1) models. It can be seen that the forecasts for the AR(1) model with α_1 = 0.95 converge to the mean at a slower rate than those of the other models, while the forecasts for the AR(1) model with α_1 = 0.4 converge at a faster rate than the rest.

[Figure: out-of-sample forecasts for the four simulated AR(1) models, one panel per model.]

Note: when the red line is horizontal, it has reached the mean of the simulated series.
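As a rough illustration of this simulation exercise, here is a minimal R sketch (not necessarily the code used to produce the plots above): simulate an AR(1) with a chosen α_1, fit an AR(1), and plot the out-of-sample forecasts against the estimated mean. The sample size and forecast horizon are arbitrary choices.

```r
# Minimal sketch of the AR(1) simulation/forecast exercise (illustrative only).
set.seed(123)
alpha1 <- 0.95                                  # also try 0.8, 0.5 and 0.4
y      <- arima.sim(model = list(ar = alpha1), n = 200)
fit    <- arima(y, order = c(1, 0, 0))          # AR(1) with an estimated mean
fc     <- predict(fit, n.ahead = 20)            # out-of-sample forecasts
plot(c(y, fc$pred), type = "l",
     main = paste("AR(1) forecasts, alpha1 =", alpha1))
abline(h = fit$coef["intercept"], col = "red")  # "intercept" is the estimated mean
```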

MA(1) Models

Now let's consider four MA(1) models with different values for θ_1. The four models we'll discuss can be written as:

Y_t = C + 0.95 ν_{t-1} + ν_t    (5)
Y_t = C + 0.8 ν_{t-1} + ν_t    (6)
Y_t = C + 0.5 ν_{t-1} + ν_t    (7)
Y_t = C + 0.4 ν_{t-1} + ν_t    (8)

In the graph below, I have plotted out-of-sample forecasts for these four different MA(1) models. As the graph shows, the behaviour of the forecasts is markedly similar in all four cases: quick (linear) convergence to the mean. Notice that there is less variety in the dynamics of these forecasts compared to those of the AR(1) models.

[Figure: out-of-sample forecasts for the four simulated MA(1) models, one panel per model.]

Note: when the red line is horizontal, it has reached the mean of the simulated series.
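A short sketch (illustrative, not the original code) makes the reason for this similarity concrete: for an MA(1), only the one-step-ahead forecast uses the last residual; every forecast beyond that is simply the estimated mean.

```r
# MA(1) forecasts collapse to the mean after one step (illustrative sketch).
set.seed(123)
y   <- arima.sim(model = list(ma = 0.95), n = 200)   # theta1 = 0.95, as in (5)
fit <- arima(y, order = c(0, 0, 1))
predict(fit, n.ahead = 5)$pred   # forecasts 2-5 all equal the estimated mean
```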

AR(2) Models

Things get a lot more interesting when we start to consider more complex ARIMA models. Take for example AR(2) models. These are just a small step up from the AR(1) model, right? Well, one might like to think that, but the dynamics of AR(2) models are quite rich in variety as we'll see in a moment.

Let's explore four different AR(2) models:

Y_t = C + 1.7 Y_{t-1} - 0.8 Y_{t-2} + ν_t    (9)
Y_t = C + 0.9 Y_{t-1} - 0.2 Y_{t-2} + ν_t    (10)
Y_t = C + 0.5 Y_{t-1} - 0.2 Y_{t-2} + ν_t    (11)
Y_t = C + 0.1 Y_{t-1} - 0.7 Y_{t-2} + ν_t    (12)

The out-of-sample forecasts associated with each of these models are shown in the graph below. It is quite clear that they each differ markedly, and they are also quite a varied bunch in comparison to the forecasts that we've seen above - except for model 2's forecasts (top-right plot), which behave similarly to those for an AR(1) model.

[Figure: out-of-sample forecasts for the four simulated AR(2) models, one panel per model; model 2's forecasts appear in the top-right panel.]

Note: when the red line is horizontal, it has reached the mean of the simulated series.

The key point here is that not all AR(2) models have the same dynamics! For example, if the condition,

α_1^2 + 4α_2 < 0,
is satisfied then the AR(2) model displays pseudo periodic behaviour and as a result its forecasts will appear as stochastic cycles. On the other hand, if this condition is not satisfied, stochastic cycles will not be present in the forecasts; instead, the forecasts will be more similar to those for an AR(1) model.

It's worth noting that the above condition comes from the general solution to the homogeneous form of the linear, autonomous, second-order difference equation (with complex roots). If this is foreign to you, I recommend both Chapter 1 of Hamilton (1994) and Chapter 20 of Hoy et al. (2001).

Testing the above condition for the four AR(2) models results in the following:

(1.7)^2 + 4(-0.8) = -0.31 < 0    (13)
(0.9)^2 + 4(-0.2) = 0.01 > 0    (14)
(0.5)^2 + 4(-0.2) = -0.55 < 0    (15)
(0.1)^2 + 4(-0.7) = -2.79 < 0    (16)

As expected from the appearance of the plotted forecasts, the condition is satisfied for each of the four models except model 2. Recall from the graph that model 2's forecasts behave ("normally") similarly to an AR(1) model's forecasts. The forecasts associated with the other models contain cycles.
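If you want to verify this yourself, here is a small R sketch (my own, purely illustrative) that checks the condition α_1^2 + 4α_2 < 0 for the four AR(2) models and, equivalently, tests whether the roots of the AR polynomial are complex:

```r
# Check the pseudo-periodicity condition for the four AR(2) models above.
ar_coefs <- list(c(1.7, -0.8), c(0.9, -0.2), c(0.5, -0.2), c(0.1, -0.7))
for (a in ar_coefs) {
  disc  <- a[1]^2 + 4 * a[2]              # condition value
  roots <- polyroot(c(1, -a[1], -a[2]))   # roots of 1 - a1*z - a2*z^2
  cat(sprintf("a1 = %4.1f, a2 = %4.1f : a1^2 + 4*a2 = %5.2f, complex roots: %s\n",
              a[1], a[2], disc, any(abs(Im(roots)) > 1e-8)))
}
```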

Application - Modelling Inflation

Now that we have some background under our feet, let's try to interpret an AR(2) model in an application. Consider the following model for the inflation rate (π_t):

π_t = C + α_1 π_{t-1} + α_2 π_{t-2} + ν_t.
A natural expression to associate with such a model would be something like: "inflation today depends on the level of inflation yesterday and on the level of inflation the day before yesterday". Now, I wouldn't argue against such an interpretation, but I'd suggest some caution be exercised and that we dig a bit deeper to devise a proper interpretation. In this case we could ask: in what way is inflation related to its previous levels? Are there cycles? If so, how many? Can we say something about the peak and the trough? How quickly do the forecasts converge to the mean? And so on.

These are the sorts of questions we can ask when trying to interpret an AR(2) model and as you can see, it's not as straightforward as taking an estimated coefficient and saying "a 1 unit increase in this variable is associated with a so-many unit increase in the dependent variable" - making sure to attach the ceteris paribus condition to that statement, of course.
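To make that concrete, here is a hedged sketch of how one might start answering those questions for an estimated AR(2) inflation model; `infl` is a placeholder for whatever inflation series you have at hand.

```r
# Illustrative sketch only; `infl` is a placeholder inflation series.
fit <- arima(infl, order = c(2, 0, 0))   # AR(2) with an estimated mean
a   <- coef(fit)[c("ar1", "ar2")]
a[1]^2 + 4 * a[2] < 0                    # TRUE -> forecasts will contain stochastic cycles
plot(predict(fit, n.ahead = 24)$pred,    # how quickly do forecasts revert to the mean?
     ylab = "forecast inflation")
```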

Bear in mind that in our discussion so far, we have only explored a selection of AR(1), MA(1), and AR(2) models. We haven't even looked at the dynamics of mixed ARMA models and ARIMA models involving higher lags.

To show how difficult it would be to interpret models that fall into that category, imagine another inflation model - an ARMA(3,1) with α_2 constrained to zero:

π_t = C + α_1 π_{t-1} + α_3 π_{t-3} + θ_1 ν_{t-1} + ν_t.

Say what you'd like, but here it's better to try to understand the dynamics of the system itself. As before, we can look and see what sort of forecasts the model produces, but the alternative approach that I mentioned at the beginning of this answer was to look at the impulse response function or time path associated with the system.

This brings me to the next part of my answer, where we'll discuss impulse response functions.

Impulse Response Functions

Those who are familiar with vector autoregressions (VARs) will be aware that one usually tries to understand the estimated VAR model by interpreting the impulse response functions; rather than trying to interpret the estimated coefficients which are often too difficult to interpret anyway.

The same approach can be taken when trying to understand ARIMA models. That is, rather than try to make sense of (complicated) statements like "today's inflation depends on yesterday's inflation and on inflation from two months ago, but not on last week's inflation!", we instead plot the impulse response function and try to make sense of that.
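For a univariate ARMA model, the impulse response function is simply the sequence of psi-weights of its MA(∞) representation, which base R computes with ARMAtoMA(). As a sketch, here it is for the constrained ARMA(3,1) above, with made-up coefficient values chosen purely for illustration:

```r
# IRF of the constrained ARMA(3,1): ar = (alpha1, 0, alpha3), ma = theta1.
# Coefficient values are invented for illustration.
psi <- ARMAtoMA(ar = c(0.5, 0, 0.2), ma = 0.3, lag.max = 20)
plot(0:20, c(1, psi), type = "h",
     xlab = "periods after a one-unit shock", ylab = "response")
```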

Application - Four Macro Variables

For this example (based on Leamer (2010)), let's consider four ARIMA models based on four macroeconomic variables: GDP growth, inflation, the unemployment rate, and the short-term interest rate. The four models have been estimated and can be written as:

Y_t = 3.20 + 0.22 Y_{t-1} + 0.15 Y_{t-2} + ν_t
π_t = 4.10 + 0.46 π_{t-1} + 0.31 π_{t-2} + 0.16 π_{t-3} + 0.01 π_{t-4} + ν_t
u_t = 6.2 + 1.58 u_{t-1} - 0.64 u_{t-2} + ν_t
r_t = 6.0 + 1.18 r_{t-1} - 0.23 r_{t-2} + ν_t

where Y_t denotes GDP growth at time t, π_t denotes inflation, u_t denotes the unemployment rate, and r_t denotes the short-term interest rate (3-month treasury).

The equations show that GDP growth, the unemployment rate, and the short-term interest rate are modeled as AR(2) processes while inflation is modeled as an AR(4) process.

Rather than try to interpret the coefficients in each equation, let's plot the impulse response functions (IRFs) and interpret them instead. The graph below shows the impulse response functions associated with each of these models.
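As a rough sketch of how such IRFs can be obtained (my own illustration, not necessarily how the graph below was produced), the psi-weights implied by the estimated AR coefficients can again be computed with ARMAtoMA(); the constants play no role in the IRFs.

```r
# IRFs implied by the four estimated equations above (illustrative sketch).
models <- list(gdp_growth = c(0.22, 0.15),
               inflation  = c(0.46, 0.31, 0.16, 0.01),
               unemp      = c(1.58, -0.64),
               rate       = c(1.18, -0.23))
irfs <- lapply(models, function(a) c(1, ARMAtoMA(ar = a, lag.max = 20)))
op <- par(mfrow = c(2, 2))
for (nm in names(irfs))
  plot(0:20, irfs[[nm]], type = "l", main = nm,
       xlab = "periods after a one-unit shock", ylab = "response")
par(op)
```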

[Figure: impulse response functions for GDP growth, inflation, the unemployment rate, and the short-term interest rate, one panel per variable.]

Don't take this as a masterclass in interpreting IRFs - think of it more as a basic introduction - but to help us interpret the IRFs we'll need to acquaint ourselves with two concepts: momentum and persistence.

These two concepts are defined in Leamer (2010) as follows:

Momentum: Momentum is the tendency to continue moving in the same direction. The momentum effect can offset the force of regression (convergence) toward the mean and can allow a variable to move away from its historical mean, for some time, but not indefinitely.

Persistence: A persistent variable will hang around where it is and only slowly converge to the historical mean.

Equipped with this knowledge, we now ask the question: suppose a variable is at its historical mean and it receives a temporary one-unit shock in a single period; how will the variable respond in future periods? This is akin to asking the questions we asked before, such as: do the forecasts contain cycles? How quickly do the forecasts converge to the mean? And so on.

At last, we can now attempt to interpret the IRFs.

Following a one unit shock, the unemployment rate and short-term interest rate (3-month treasury) are carried further from their historical mean. This is the momentum effect. The IRFs also show that the unemployment rate overshoots to a greater extent than does the short-term interest rate.

We also see that all of the variables return to their historical means (none of them "blow up"), although they each do so at different rates. For example, GDP growth returns to its historical mean after about 6 periods following a shock, the unemployment rate returns to its historical mean after about 18 periods, but inflation and the short-term interest rate take longer than 20 periods to return to their historical means. In this sense, GDP growth is the least persistent of the four variables, while inflation can be said to be highly persistent.
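One rough way to quantify this (a sketch of my own, using the `models` list from the previous snippet and an arbitrary tolerance of 0.05) is to count the periods until the impulse response settles inside a small band around zero:

```r
# Periods until the IRF stays within +/- tol of zero (rough persistence measure).
time_to_mean <- function(ar, tol = 0.05, max_lag = 100) {
  irf <- abs(ARMAtoMA(ar = ar, lag.max = max_lag))
  which(rev(cummax(rev(irf))) < tol)[1]   # first lag after which |IRF| stays below tol
}
sapply(models, time_to_mean)
```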

I think it's a fair conclusion to say that we've managed (at least partially) to make sense of what the four ARIMA models are telling us about each of the four macro variables.

Conclusion

Rather than try to interpret the estimated coefficients in ARIMA models (difficult for many models), try instead to understand the dynamics of the system. We can attempt this by exploring the forecasts produced by our model and by plotting the impulse response function.

[I'm happy enough to share my R code if anyone wants it.]

References

  • Hamilton, J. D. (1994). Time Series Analysis. Princeton, NJ: Princeton University Press.
  • Leamer, E. (2010). Macroeconomic Patterns and Stories: A Guide for MBAs. Springer.
  • Hoy, M., Livernois, J., McKenna, C., Rees, R. and Stengos, T. (2001). Mathematics for Economics, 2nd edition. Cambridge, MA: MIT Press.

3
Love the application of IRF to non-VARs. They always seem to be associated and I'd never thought of using IRFs on mere ARIMAs. (That plus, who can really understand what MA terms do?)
Wayne

2
What a great answer!
Richard Hardy

9

Note that due to Wold's decomposition theorem you can rewrite any stationary ARMA model as an MA(∞) model, i.e.:

ΔY_t = ∑_{j=0}^{∞} ψ_j ν_{t-j}

In this form there are no lagged variables, so any interpretation involving the notion of a lagged variable is not very convincing. However, looking at the MA(1) and the AR(1) models separately:

Y_t = ν_t + θ_1 ν_{t-1}

Y_t = ρ Y_{t-1} + ν_t = ν_t + ρ ν_{t-1} + ρ^2 ν_{t-2} + ...

you can say that the error terms in ARMA models explain the "short-term" influence of the past, while the lagged terms explain the "long-term" influence. Having said that, I do not think that this helps a lot, and usually nobody bothers with a precise interpretation of ARMA coefficients. The goal usually is to get an adequate model and use it for forecasting.
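For what it's worth, the psi-weights of the MA(∞) form mentioned above can be obtained directly in base R with ARMAtoMA(); the coefficient values below are made up purely for illustration.

```r
# psi-weights of the MA(infinity) representation (illustrative values).
ARMAtoMA(ar = 0.7, ma = 0.4, lag.max = 10)   # ARMA(1,1) with rho = 0.7, theta1 = 0.4
ARMAtoMA(ar = 0.7, lag.max = 10)             # pure AR(1): equals 0.7^(1:10), as in the expansion above
```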


+1 This is more or less what I was trying to get at in my comments above.
Graeme Walsh

Ha, I did not see your comments when I was writing the answer. I suggest converting them into an answer.
mpiktas

8

I totally agree with the sentiment of the previous commentators. I would like to add that any ARIMA model can also be represented as a pure AR model, i.e. as a (possibly infinite) weighted sum of its own past values. These weights are referred to as the pi weights, as opposed to those of the pure MA form (the psi weights). In this way you can view (interpret) an ARIMA model as an optimized weighted average of past values. In other words, rather than assuming a pre-specified length and pre-specified values for a weighted average, an ARIMA model delivers both the length (n) of the weights and the actual weights (c_1, c_2, ..., c_n).

Y(t) = c_1 Y(t-1) + c_2 Y(t-2) + c_3 Y(t-3) + ... + c_n Y(t-n) + a(t)

In this way an ARIMA model can be explained as the answer to the questions:

  1. How many historical values should I use to compute a weighted sum of the past?
  2. Precisely what are those weights?
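As a sketch of how these pi-weights can be computed in practice: base R has no ARMAtoAR(), but because π(B) = φ(B)/θ(B) they can be recovered by swapping and negating the arguments of ARMAtoMA() (the MA part must be invertible). This is my own illustration, with made-up coefficient values.

```r
# Pi-weights (the c1, c2, ... above) of the pure-AR form, via ARMAtoMA().
arma_to_ar <- function(phi = numeric(0), theta = numeric(0), lag.max = 20) {
  -ARMAtoMA(ar = -theta, ma = -phi, lag.max = lag.max)
}
# Example: MA(1) with theta = 0.5 has AR(infinity) form
# Y(t) = 0.5*Y(t-1) - 0.25*Y(t-2) + 0.125*Y(t-3) - ... + a(t)
arma_to_ar(theta = 0.5, lag.max = 5)
```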