What is the distribution of $R^2$ in linear regression under the null hypothesis?


26

What is the distribution of the coefficient of determination, or R squared, $R^2$, in linear univariate multiple regression under the null hypothesis $H_0:\beta=0$?

How does it depend on the number of predictors $k$ and the number of samples $n>k$? Is there a closed-form expression for the mode of this distribution?

In particular, I have a feeling that for simple regression (with one predictor $x$) this distribution has its mode at zero, but for multiple regression the mode is at a nonzero positive value. If this is indeed true, is there an intuitive explanation of this "phase transition"?
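Here is a minimal simulation sketch of what I mean (added for illustration; the particular values $n=20$, $k=2$ and $k=5$ are arbitrary):

# Null distribution of R^2 by simulation: y and the k-1 non-intercept regressors are pure noise
set.seed(1)
r2_sample <- function(n, k, reps = 10000) {
  replicate(reps, {
    X <- matrix(rnorm(n * (k - 1)), n, k - 1)   # k-1 random regressors; lm() adds the intercept
    y <- rnorm(n)
    summary(lm(y ~ X))$r.squared
  })
}
# For k = 2 (simple regression) the histogram piles up at 0,
# while for k = 5 the mode sits at a strictly positive value:
hist(r2_sample(n = 20, k = 2), breaks = 50, main = "k = 2, n = 20")
hist(r2_sample(n = 20, k = 5), breaks = 50, main = "k = 5, n = 20")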


Update

As @Alecos demonstrated below, the distribution indeed peaks at zero for $k=2$ and $k=3$, and not at zero for $k>3$. I feel there should be a geometric view of this phase transition. Consider the geometric view of OLS: $\mathbf{y}$ is a vector in $\mathbb{R}^n$, and $\mathbf{X}$ defines a $k$-dimensional subspace there. Least squares amounts to projecting $\mathbf{y}$ onto this subspace, and $R^2$ is the squared cosine of the angle between $\mathbf{y}$ and its projection $\hat{\mathbf{y}}$.

Now, from @Alecos's answer it follows that if all the vectors are random, the probability distribution of this angle will peak at $90^\circ$ for $k=2$ and $k=3$, but will have a mode at some value $<90^\circ$ for $k>3$. Why?!


Update 2: I am accepting @Alecos's answer, but still have a feeling that I am missing some important insight here. If anybody ever suggests another (geometric or not) view of this phenomenon that would make it "obvious", I will be happy to offer a bounty.


1
Are you willing to assume normality of the errors?
Dimitriy V. Masterov

1
Yes, I guess some assumption has to be made to make this question answerable (?).
amoeba says Reinstate Monica


1
@Khashaa: Actually, I must admit that I had found that blogspot page before posting my question here. To be honest, I still wanted to have a discussion of this phenomenon on our forum, so I did not mention it.
amoeba says Reinstate Monica

Answers:


33

For the specific hypothesis (that all regressor coefficients are zero, not including the constant term, which is not examined in this test), and under normality, we know (see, e.g., Maddala 2001, p. 155, but note that there $k$ counts the regressors without the constant term, so the expression looks a little different) that

$$F = \frac{n-k}{k-1}\,\frac{R^2}{1-R^2}$$

is distributed as a central $F(k-1,\,n-k)$ random variable.

Note that although we do not test the constant term, $k$ counts it too.
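As a quick added sanity check (not part of the original derivation; the values of $n$ and $k$ are arbitrary), this identity can be compared with the overall $F$ statistic reported by R's `lm()`:

# Check F = (n-k)/(k-1) * R^2/(1-R^2) against the overall F statistic from lm()
set.seed(42)
n <- 50; k <- 4                               # k counts the constant term
X <- matrix(rnorm(n * (k - 1)), n, k - 1)     # three pure-noise regressors
y <- rnorm(n)                                 # the null is true: y is unrelated to X
fit <- summary(lm(y ~ X))
R2 <- fit$r.squared
c(F_from_R2 = (n - k) / (k - 1) * R2 / (1 - R2),
  F_from_lm = unname(fit$fstatistic["value"]))   # the two values coincide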

Moving things around,

$$(k-1)F - (k-1)F\,R^2 = (n-k)\,R^2 \;\;\Rightarrow\;\; (k-1)F = R^2\big[(n-k)+(k-1)F\big]$$

$$R^2 = \frac{(k-1)F}{(n-k)+(k-1)F}$$

But the right hand side is distributed as a Beta distribution, specifically

$$R^2 \sim \mathrm{Beta}\left(\frac{k-1}{2},\,\frac{n-k}{2}\right)$$
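As a small added Monte Carlo sketch (the choices $n=20$, $k=5$ are arbitrary), one can overlay this Beta density on simulated null values of $R^2$:

# Monte Carlo check: null R^2 versus the Beta((k-1)/2, (n-k)/2) density
set.seed(123)
n <- 20; k <- 5                                 # k includes the constant
r2 <- replicate(20000, {
  X <- matrix(rnorm(n * (k - 1)), n, k - 1)
  y <- rnorm(n)
  summary(lm(y ~ X))$r.squared
})
hist(r2, breaks = 60, freq = FALSE, xlab = expression(R^2), main = "")
curve(dbeta(x, (k - 1) / 2, (n - k) / 2), add = TRUE, lwd = 2)   # theoretical Beta(2, 7.5)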

The mode of this distribution is

$$\mathrm{mode}\,R^2 = \frac{\frac{k-1}{2}-1}{\frac{k-1}{2}+\frac{n-k}{2}-2} = \frac{k-3}{n-5}$$

FINITE & UNIQUE MODE
From the above relation we can infer that for the distribution to have a unique and finite mode we must have

$$k \geq 3, \qquad n > 5$$

This is consistent with the general requirement for a Beta distribution, which is

$$\{\alpha > 1,\ \beta \geq 1\}\;\;\text{OR}\;\;\{\alpha \geq 1,\ \beta > 1\}$$

as one can infer from this CV thread or read here.
Note that if $\{\alpha=1,\beta=1\}$ we obtain the Uniform distribution, so all the density points are modes (finite but not unique). Which creates the question: why, if $k=3,\,n=5$, is $R^2$ distributed as a $U(0,1)$?

IMPLICATIONS
Assume that you have $k=5$ regressors (including the constant) and $n=99$ observations. Pretty nice regression, no overfitting. Then

$$R^2\Big|_{\beta=0} \sim \mathrm{Beta}(2,\,47), \qquad \mathrm{mode}\,R^2 = \frac{1}{47} \approx 0.021$$

and density plot

[Figure: density of the Beta(2, 47) distribution]
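The figure can be reproduced with a couple of lines of base R (an added sketch, not the original graphic):

# Density of Beta(2, 47), the null distribution of R^2 for k = 5, n = 99
curve(dbeta(x, 2, 47), from = 0, to = 0.3, lwd = 2,
      xlab = expression(R^2), ylab = "density")
abline(v = 1/47, lty = 2)   # mode at (k-3)/(n-5) = 2/94 = 1/47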

Intuition please: this is the distribution of $R^2$ under the hypothesis that no regressor actually belongs to the regression. So a) the distribution is independent of the regressors, b) as the sample size increases its distribution is concentrated towards zero as the increased information swamps small-sample variability that may produce some "fit", but also c) as the number of irrelevant regressors increases for given sample size, the distribution concentrates towards $1$, and we have the "spurious fit" phenomenon.

But also, note how "easy" it is to reject the null hypothesis: in the particular example, for $R^2=0.13$ the cumulative probability has already reached $0.99$, so an obtained $R^2>0.13$ will reject the null of "insignificant regression" at significance level $1\%$.
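These numbers are easy to verify from the Beta(2, 47) distribution function (an added one-line check):

pbeta(0.13, 2, 47)   # ~ 0.99: P(R^2 <= 0.13) under the null
qbeta(0.99, 2, 47)   # the R^2 cutoff for a test at the 1% level, just above 0.13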

ADDENDUM
Responding to the new issue regarding the mode of the $R^2$ distribution, I can offer the following line of thought (not geometric), which links it to the "spurious fit" phenomenon: when we run least squares on a data set, we essentially solve a system of $n$ linear equations in $k$ unknowns (the only difference from high-school math is that back then we called "known coefficients" what in linear regression we call "variables/regressors", we called "unknown $x$" what we now call "unknown coefficients", and we called "constant terms" what we call the "dependent variable"). As long as $k<n$ the system is over-identified and there is no exact solution, only an approximate one, and the difference emerges as "unexplained variance of the dependent variable", which is captured by $1-R^2$. If $k=n$ the system has one exact solution (assuming linear independence). In between, as we increase $k$, we reduce the "degree of overidentification" of the system and we "move towards" the single exact solution. Under this view, it makes sense why $R^2$ increases spuriously with the addition of irrelevant regressors, and consequently why its mode moves gradually towards $1$ as $k$ increases for given $n$.
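To see this "movement towards the exact solution" numerically, here is a small added sketch (the choice $n=20$ and the simulation sizes are arbitrary) tracking the average null $R^2$ as irrelevant regressors are piled on:

# Average null R^2 as the number of irrelevant regressors grows, for fixed n
set.seed(7)
n <- 20
ks <- 2:(n - 1)
avg_r2 <- sapply(ks, function(k) {
  mean(replicate(2000, {
    X <- matrix(rnorm(n * (k - 1)), n, k - 1)
    y <- rnorm(n)
    summary(lm(y ~ X))$r.squared
  }))
})
round(setNames(avg_r2, paste0("k=", ks)), 2)   # climbs towards 1 as k approaches n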


1
It's mathematical. For $k=2$ the first parameter of the Beta distribution (the "$\alpha$" in standard notation) becomes smaller than unity. In that case the Beta distribution has no finite mode; play around with keisan.casio.com/exec/system/1180573226 to see how the shapes change.
Alecos Papadopoulos

1
@Alecos Excellent answer! (+1) Can I strongly suggest that you add to your answer the requirement for the mode to exist? This is usually stated as $\alpha>1$ and $\beta>1$, but more subtly it's OK if equality holds in one of the two... I think for our purposes this becomes $k\geq3$ and $n\geq k+2$, with at least one of these inequalities strict.
Silverfish

2
@Khashaa Except if theory demands it, I never exclude the intercept from the regression -it is the average level of the dependent variable, regressors or no regressors (and this level is usually positive, so it would be a foolishly self-created misspecification to omit it). But I always exclude it from the F-test of the regression, since what I care about is not whether the dependent variable has a non-zero unconditional mean, but whether the regressors have any explanatory power as regards deviations from this mean.
Alecos Papadopoulos

1
+1! Are there results for the distribution of $R^2$ for nonzero $\beta_j$?
Christoph Hanck


18

I won't rederive the $\mathrm{Beta}\left(\frac{k-1}{2},\frac{n-k}{2}\right)$ distribution in @Alecos's excellent answer (it's a standard result, see here for another nice discussion) but I want to fill in more details about the consequences! Firstly, what does the null distribution of $R^2$ look like for a range of values of $n$ and $k$? The graph in @Alecos's answer is quite representative of what occurs in practical multiple regressions, but sometimes insight is gleaned more easily from smaller cases. I've included the mean, mode (where it exists) and standard deviation. The graph/table deserves a good eyeball: best viewed at full size. I could have included fewer facets but the pattern would have been less clear; I have appended R code so that readers can experiment with different subsets of $n$ and $k$.

[Figure: distribution of $R^2$ for small sample sizes, one panel per combination of $k$ and $n$]

Values of shape parameters

The graph's colour scheme indicates whether each shape parameter is less than one (red), equal to one (blue), or more than one (green). The left-hand side shows the value of $\alpha$ while $\beta$ is on the right. Since $\alpha=\frac{k-1}{2}$, its value increases in arithmetic progression by a common difference of $\frac{1}{2}$ as we move right from column to column (add a regressor to our model) whereas, for fixed $n$, $\beta=\frac{n-k}{2}$ decreases by $\frac{1}{2}$. The total $\alpha+\beta=\frac{n-1}{2}$ is fixed for each row (for a given sample size). If instead we fix $k$ and move down the column (increase sample size by 1), then $\alpha$ stays constant and $\beta$ increases by $\frac{1}{2}$. In regression terms, $\alpha$ is half the number of regressors included in the model, and $\beta$ is half the residual degrees of freedom. To determine the shape of the distribution we are particularly interested in where $\alpha$ or $\beta$ equal one.

The algebra is straightforward for $\alpha$: we have $\frac{k-1}{2}=1$ so $k=3$. This is indeed the only column of the facet plot that's filled blue on the left. Similarly $\alpha<1$ for $k<3$ (the $k=2$ column is red on the left) and $\alpha>1$ for $k>3$ (from the $k=4$ column onwards, the left side is green).

For $\beta=1$ we have $\frac{n-k}{2}=1$, hence $k=n-2$. Note how these cases (marked with a blue right-hand side) cut a diagonal line across the facet plot. For $\beta>1$ we obtain $k<n-2$ (the graphs with a green right side lie to the left of the diagonal line). For $\beta<1$ we need $k>n-2$, which involves only the right-most cases on my graph: at $n=k$ we have $\beta=0$ and the distribution is degenerate, but $n=k+1$, where $\beta=\frac{1}{2}$, is plotted (right side in red).

Since the PDF is $f(x;\alpha,\beta) \propto x^{\alpha-1}(1-x)^{\beta-1}$, it is clear that if (and only if) $\alpha<1$ then $f(x)\to\infty$ as $x\to0$. We can see this in the graph: when the left side is shaded red, observe the behaviour at 0. Similarly when $\beta<1$ then $f(x)\to\infty$ as $x\to1$. Look where the right side is red!

Symmetries

One of the most eye-catching features of the graph is the level of symmetry, but when the Beta distribution is involved, this shouldn't be surprising!

The Beta distribution itself is symmetric if $\alpha=\beta$. For us this occurs if $n=2k-1$, which correctly identifies the panels $(k=2,\,n=3)$, $(k=3,\,n=5)$, $(k=4,\,n=7)$ and $(k=5,\,n=9)$. The extent to which the distribution is symmetric across $R^2=0.5$ depends on how many regressor variables we include in the model for that sample size. If $k=\frac{n+1}{2}$ the distribution of $R^2$ is perfectly symmetric about 0.5; if we include fewer variables than that it becomes increasingly asymmetric and the bulk of the probability mass shifts closer to $R^2=0$; if we include more variables then it shifts closer to $R^2=1$. Remember that $k$ includes the intercept in its count, and that we are working under the null, so the regressor variables should have coefficient zero in the correctly specified model.

There is also an obvious symmetry between the distributions for any given $n$, i.e. any row in the facet grid. For example, compare $(k=3,\,n=9)$ with $(k=7,\,n=9)$. What's causing this? Recall that the distribution of $\mathrm{Beta}(\alpha,\beta)$ is the mirror image of $\mathrm{Beta}(\beta,\alpha)$ across $x=0.5$. Now we had $\alpha_{k,n}=\frac{k-1}{2}$ and $\beta_{k,n}=\frac{n-k}{2}$. Consider $k'=n-k+1$ and we find:

$$\alpha_{k',n} = \frac{(n-k+1)-1}{2} = \frac{n-k}{2} = \beta_{k,n}$$

$$\beta_{k',n} = \frac{n-(n-k+1)}{2} = \frac{k-1}{2} = \alpha_{k,n}$$

So this explains the symmetry as we vary the number of regressors in the model for a fixed sample size. It also explains the distributions that are themselves symmetric as a special case: for them, $k'=k$ so they are obliged to be symmetric with themselves!

This tells us something we might not have guessed about multiple regression: for a given sample size $n$, and assuming no regressors have a genuine relationship with $Y$, the $R^2$ for a model using $k-1$ regressors plus an intercept has the same distribution as $1-R^2$ does for a model with $k-1$ residual degrees of freedom remaining.
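A quick numerical confirmation of this mirror-image property (an added sketch using `dbeta`):

# Beta((k-1)/2, (n-k)/2) for k is the mirror image of the Beta for k' = n - k + 1
n <- 9; k <- 3; k2 <- n - k + 1                       # k2 = 7
x <- seq(0.05, 0.95, by = 0.05)
all.equal(dbeta(x,     (k  - 1) / 2, (n - k ) / 2),
          dbeta(1 - x, (k2 - 1) / 2, (n - k2) / 2))   # TRUE: densities are mirror images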

Special distributions

When $k=n$ we have $\beta=0$, which isn't a valid parameter. However, as $\beta\to0$ the distribution becomes degenerate with a spike such that $P(R^2=1)=1$. This is consistent with what we know about a model with as many parameters as data points - it achieves perfect fit. I haven't drawn the degenerate distribution on my graph but did include the mean, mode and standard deviation.

When $k=2$ and $n=3$ we obtain $\mathrm{Beta}(\frac{1}{2},\frac{1}{2})$, which is the arcsine distribution. This is symmetric (since $\alpha=\beta$) and bimodal (0 and 1). Since this is the only case where both $\alpha<1$ and $\beta<1$ (marked red on both sides), it is our only distribution which goes to infinity at both ends of the support.

The $\mathrm{Beta}(1,1)$ distribution is the only Beta distribution to be rectangular (uniform). All values of $R^2$ from 0 to 1 are equally likely. The only combination of $k$ and $n$ for which $\alpha=\beta=1$ occurs is $k=3$ and $n=5$ (marked blue on both sides).

The previous special cases are of limited applicability but the case $\alpha>1$ and $\beta=1$ (green on left, blue on right) is important. Now $f(x;\alpha,\beta) \propto x^{\alpha-1}(1-x)^{\beta-1}=x^{\alpha-1}$ so we have a power-law distribution on [0, 1]. Of course it's unlikely we'd perform a regression with $k=n-2$ and $k>3$, which is when this situation occurs. But by the previous symmetry argument, or some trivial algebra on the PDF, when $k=3$ and $n>5$, which is the frequent procedure of multiple regression with two regressors and an intercept on a non-trivial sample size, $R^2$ will follow a reflected power-law distribution on [0, 1] under $H_0$. This corresponds to $\alpha=1$ and $\beta>1$ so is marked blue on left, green on right.
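As an added aside, for $k=3$ the reflected power law gives a closed-form tail probability, $P(R^2>x \mid H_0)=(1-x)^{(n-3)/2}$, which is easy to check against `pbeta` (the choice of $n$ below is arbitrary):

# For k = 3: alpha = 1, beta = (n-3)/2, so P(R^2 > x | H0) = (1 - x)^((n-3)/2)
n <- 30; x <- 0.2
c(closed_form = (1 - x)^((n - 3) / 2),
  from_pbeta  = pbeta(x, 1, (n - 3) / 2, lower.tail = FALSE))   # both ~ 0.049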

You may also have noticed the triangular distributions at $(k=5,\,n=7)$ and its reflection $(k=3,\,n=7)$. We can recognise from their $\alpha$ and $\beta$ that these are just special cases of the power-law and reflected power-law distributions where the power is $2-1=1$.

Mode

If $\alpha>1$ and $\beta>1$, all green in the plot, $f(x;\alpha,\beta)$ is unimodal with $f(0)=f(1)=0$, and the Beta distribution has a unique mode $\frac{\alpha-1}{\alpha+\beta-2}$. Putting these in terms of $k$ and $n$, the condition becomes $k>3$ and $n>k+2$, while the mode is $\frac{k-3}{n-5}$.
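As an added numerical check, the mode formula can be compared with a direct maximisation of the Beta density (the values of $k$ and $n$ are arbitrary):

# Locate the mode of Beta((k-1)/2, (n-k)/2) numerically and compare with (k-3)/(n-5)
k <- 6; n <- 20
opt <- optimize(function(x) dbeta(x, (k - 1) / 2, (n - k) / 2),
                interval = c(0, 1), maximum = TRUE)
c(numerical = opt$maximum, formula = (k - 3) / (n - 5))   # both ~ 0.2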

All other cases have been dealt with above. If we relax the inequality to allow $\beta=1$, then we include the (green-blue) power-law distributions with $k=n-2$ and $k>3$ (equivalently, $n>5$). These cases clearly have mode 1, which actually agrees with the previous formula since $\frac{(n-2)-3}{n-5}=1$. If instead we allowed $\alpha=1$ but still demanded $\beta>1$, we'd find the (blue-green) reflected power-law distributions with $k=3$ and $n>5$. Their mode is 0, which agrees with $\frac{3-3}{n-5}=0$. However, if we relaxed both inequalities simultaneously to allow $\alpha=\beta=1$, we'd find the (all blue) uniform distribution with $k=3$ and $n=5$, which does not have a unique mode. Moreover the previous formula can't be applied in this case, since it would return the indeterminate form $\frac{3-3}{5-5}=\frac{0}{0}$.

When n=kn=k we get a degenerate distribution with mode 1. When β<1β<1 (in regression terms, n=k1n=k1 so there is only one residual degree of freedom) then f(x)f(x) as x1x1, and when α<1α<1 (in regression terms, k=2k=2 so a simple linear model with intercept and one regressor) then f(x)f(x) as x0x0. These would be unique modes except in the unusual case where k=2k=2 and n=3n=3 (fitting a simple linear model to three points) which is bimodal at 0 and 1.

Mean

The question asked about the mode, but the mean of $R^2$ under the null is also interesting - it has the remarkably simple form $\frac{k-1}{n-1}$. For a fixed sample size it increases in arithmetic progression as more regressors are added to the model, until the mean value is 1 when $k=n$. The mean of a Beta distribution is $\frac{\alpha}{\alpha+\beta}$, so such an arithmetic progression was inevitable from our earlier observation that, for fixed $n$, the sum $\alpha+\beta$ is constant but $\alpha$ increases by 0.5 for each regressor added to the model.

$$\frac{\alpha}{\alpha+\beta} = \frac{(k-1)/2}{(k-1)/2+(n-k)/2} = \frac{k-1}{n-1}$$

Code for plots

require(grid)
require(dplyr)
require(ggplot2)   # needed for ggplot(), facet_grid(), etc.

nlist <- 3:9 #change here which n to plot
klist <- 2:8 #change here which k to plot

totaln <- length(nlist)
totalk <- length(klist)

df <- data.frame(
    x = rep(seq(0, 1, length.out = 100), times = totaln * totalk),
    k = rep(klist, times = totaln, each = 100),
    n = rep(nlist, each = totalk * 100)
)

df <- mutate(df,
    kname = paste("k =", k),
    nname = paste("n =", n),
    a = (k-1)/2,
    b = (n-k)/2,
    density = dbeta(x, (k-1)/2, (n-k)/2),
    groupcol = ifelse(x < 0.5, 
        ifelse(a < 1, "below 1", ifelse(a ==1, "equals 1", "more than 1")),
        ifelse(b < 1, "below 1", ifelse(b ==1, "equals 1", "more than 1")))
)

g <- ggplot(df, aes(x, density)) +
    geom_line(size=0.8) + geom_area(aes(group=groupcol, fill=groupcol)) +
    scale_fill_brewer(palette="Set1") +
    facet_grid(nname ~ kname)  + 
    ylab("probability density") + theme_bw() + 
    labs(x = expression(R^{2}), fill = expression(alpha~(left)~beta~(right))) +
    theme(panel.margin = unit(0.6, "lines"), 
        legend.title=element_text(size=20),
        legend.text=element_text(size=20), 
        legend.background = element_rect(colour = "black"),
        legend.position = c(1, 1), legend.justification = c(1, 1))


df2 <- data.frame(
    k = rep(klist, times = totaln),
    n = rep(nlist, each = totalk),
    x = 0.5,
    ymean = 7.5,
    ymode = 5,
    ysd = 2.5
)

df2 <- mutate(df2,
    kname = paste("k =", k),
    nname = paste("n =", n),
    a = (k-1)/2,
    b = (n-k)/2,
    meanR2 = ifelse(k > n, NaN, a/(a+b)),
    modeR2 = ifelse((a>1 & b>=1) | (a>=1 & b>1), (a-1)/(a+b-2), 
        ifelse(a<1 & b>=1 & n>=k, 0, ifelse(a>=1 & b<1 & n>=k, 1, NaN))),
    sdR2 = ifelse(k > n, NaN, sqrt(a*b/((a+b)^2 * (a+b+1)))),
    meantext = ifelse(is.nan(meanR2), "", paste("Mean =", round(meanR2,3))),
    modetext = ifelse(is.nan(modeR2), "", paste("Mode =", round(modeR2,3))),
    sdtext = ifelse(is.nan(sdR2), "", paste("SD =", round(sdR2,3)))
)

g <- g + geom_text(data=df2, aes(x, ymean, label=meantext)) +
    geom_text(data=df2, aes(x, ymode, label=modetext)) +
    geom_text(data=df2, aes(x, ysd, label=sdtext))
print(g)

1
Really illuminating visualization. +1
Khashaa

Great addition, +1, thanks. I noticed that you call 0 a mode when the distribution goes to $+\infty$ as $x\to0$ (and nowhere else) -- something @Alecos above (in the comments) did not want to do. I agree with you: it is convenient.
amoeba says Reinstate Monica

1
@amoeba from the graphs we'd like to say "values around 0 are most likely" (or 1). But the answer of Alecos is also both self-consistent and consistent with many authorities (people differ on what to do about the 0 and 1 full stop, let alone whether they can count as a mode!). My approach to the mode differs from Alecos mostly because I use conditions on alpha and beta to determine where the formula is applicable, rather than taking my starting point as the formula and seeing which k and n give sensible answers.
Silverfish

1
(+1), this is a very meaty answer. By keeping k too close to n and both small, the question studies in detail, and so decisively, the case of really small samples with relatively too many and irrelevant regressors.
Alecos Papadopoulos

@amoeba You probably noticed that this answer furnishes an algebraic answer for why, for sufficiently large $n$, the mode of the distribution is 0 for $k=3$ but positive for $k>3$. Since $f(x) \propto x^{(k-3)/2}(1-x)^{(n-k-2)/2}$, then for $k=3$ we have $f(x) \propto (1-x)^{(n-5)/2}$, which will clearly have mode at 0 for $n>5$, whereas for $k=4$ we have $f(x) \propto x^{1/2}(1-x)^{(n-6)/2}$, whose maximum can be found by calculus to be the quoted mode formula. As $k$ increases, the power of $x$ rises by 0.5 each time. It's this $x^{\alpha-1}$ factor which makes $f(0)=0$ and so kills the mode at 0.
Silverfish
Licensed under cc by-sa 3.0 with attribution required.