Why is using Newton's method for logistic regression optimization called iteratively reweighted least squares?




It's not clear to me, because the logistic loss and the least squares loss are completely different things.


I don't think they are the same. IRLS is Newton-Raphson with the expected Hessian instead of the observed Hessian.
Dimitriy V. Masterov

@DimitriyV.Masterov thanks, could you tell me more about the expected vs. observed Hessian? Also, this explanation ...
Haitao Du

Answers:



Summary: GLMs are fit via Fisher scoring which, as Dimitriy V. Masterov notes, is Newton-Raphson with the expected Hessian instead (i.e. we use an estimate of the Fisher information instead of the observed information). If we are using the canonical link function, it turns out that the observed Hessian equals the expected Hessian, so NR and Fisher scoring are the same in that case. Either way, we'll see that Fisher scoring actually fits a weighted least squares linear model, and the coefficient estimates from this converge on a maximum of the logistic regression likelihood. Aside from reducing the fitting of logistic regression to an already solved problem, we also get the benefit of being able to use linear regression diagnostics on the final WLS fit to learn about our logistic regression.

I'm going to keep this focused on logistic regression, but for a more general perspective on maximum likelihood in GLMs I recommend section 15.3 of this chapter, which goes through this and derives IRLS in a more general setting (I believe it's from John Fox's Applied Regression Analysis and Generalized Linear Models).

See the comments at the end.


Likelihood and score function

We will fit our GLM by iterating something of the form

$$b^{(m+1)} = b^{(m)} - J^{(m)\,-1} \nabla \ell\left(b^{(m)}\right)$$
where $\nabla \ell$ is the gradient of the log likelihood and $J^{(m)}$ is either the observed or the expected Hessian of the log likelihood.
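As a minimal sketch of that abstract update (the function newton_step and its arguments are illustrative names of mine, not from the original; concrete gradients and Hessians for logistic regression are derived below), one iteration in R might look like:

# Minimal sketch of a single Newton-Raphson / Fisher scoring step (illustrative).
# grad_ll and hess_ll stand for the gradient and the observed or expected Hessian
# of the log likelihood, whichever we decide to plug in.
newton_step <- function(b, grad_ll, hess_ll) {
  b - solve(hess_ll(b), grad_ll(b))  # b - J^{-1} grad
}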

Our link function is the function $g$ that maps the conditional mean $\mu_i = E(y_i \mid x_i)$ to our linear predictor, so our model for the mean is $g(\mu_i) = x_i^T \beta$. Let $h$ be the inverse link function, which maps the linear predictor to the mean.

For logistic regression we have a Bernoulli likelihood with independent observations, so

$$\ell(b; y) = \sum_{i=1}^n y_i \log h(x_i^T b) + (1 - y_i)\log\left(1 - h(x_i^T b)\right).$$
Taking derivatives,
$$\frac{\partial \ell}{\partial b_j} = \sum_{i=1}^n \frac{y_i}{h(x_i^T b)} h'(x_i^T b)\, x_{ij} - \frac{1 - y_i}{1 - h(x_i^T b)} h'(x_i^T b)\, x_{ij}$$
$$= \sum_{i=1}^n x_{ij}\, h'(x_i^T b)\left(\frac{y_i}{h(x_i^T b)} - \frac{1 - y_i}{1 - h(x_i^T b)}\right)$$
$$= \sum_i \frac{x_{ij}\, h'(x_i^T b)}{h(x_i^T b)\left(1 - h(x_i^T b)\right)}\left(y_i - h(x_i^T b)\right).$$
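As a quick numerical check of that score formula (this block is an addition of mine, with its own simulated data and the logistic inverse link purely as an example; all the *_chk names are hypothetical):

# Check the derived score against a finite-difference gradient of the log likelihood.
# All names here (x_chk, y_chk, ...) are illustrative, not part of the original answer.
set.seed(1)
n_chk <- 200; p_chk <- 3
x_chk <- matrix(rnorm(n_chk * p_chk), n_chk, p_chk)
b_chk <- rnorm(p_chk)
y_chk <- rbinom(n_chk, 1, plogis(x_chk %*% rnorm(p_chk)))

h_chk <- plogis                                        # an example inverse link
h_chk.prime <- function(u) plogis(u) * (1 - plogis(u))

loglik_chk <- function(b) {
  eta <- x_chk %*% b
  sum(y_chk * log(h_chk(eta)) + (1 - y_chk) * log(1 - h_chk(eta)))
}
score_chk <- function(b) {
  eta <- x_chk %*% b
  colSums(x_chk * as.vector(h_chk.prime(eta) / (h_chk(eta) * (1 - h_chk(eta))) * (y_chk - h_chk(eta))))
}

# central finite differences of the log likelihood
fd <- sapply(seq_len(p_chk), function(j) {
  e <- rep(0, p_chk); e[j] <- 1e-6
  (loglik_chk(b_chk + e) - loglik_chk(b_chk - e)) / 2e-6
})
max(abs(score_chk(b_chk) - fd))  # should be near zero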

Using the canonical link

Now assume we're using the canonical link function $g_c = \operatorname{logit}$. Then $g_c^{-1}(x) := h_c(x) = \frac{1}{1 + e^{-x}}$, so $h_c' = h_c(1 - h_c)$, which means $\frac{\partial \ell}{\partial b_j}$ simplifies to

$$\frac{\partial \ell}{\partial b_j} = \sum_i x_{ij}\left(y_i - h_c(x_i^T b)\right)$$
so
$$\nabla \ell(b; y) = X^T(y - \hat y).$$
Furthermore, still using $h_c$,
$$\frac{\partial^2 \ell}{\partial b_k \partial b_j} = -\sum_i x_{ij}\frac{\partial}{\partial b_k} h_c(x_i^T b) = -\sum_i x_{ij} x_{ik}\left[h_c(x_i^T b)\left(1 - h_c(x_i^T b)\right)\right].$$

Let

$$W = \operatorname{diag}\left(h_c(x_1^T b)\left(1 - h_c(x_1^T b)\right), \dots, h_c(x_n^T b)\left(1 - h_c(x_n^T b)\right)\right) = \operatorname{diag}\left(\hat y_1\left(1 - \hat y_1\right), \dots, \hat y_n\left(1 - \hat y_n\right)\right).$$
Then we have
$$H = -X^T W X$$
and note how this doesn't have any $y_i$ in it anymore, so $E(H) = H$ (we're viewing this as a function of $b$, so the only random thing is $y$ itself). Thus we've shown that Fisher scoring is equivalent to Newton-Raphson when we use the canonical link in logistic regression. Also, by virtue of $\hat y_i \in (0,1)$, $-X^T W X$ is negative definite, although as some $\hat y_i$ approach $0$ or $1$ the corresponding weights $\hat y_i(1 - \hat y_i)$ approach $0$, which can make $H$ negative semidefinite and therefore computationally singular.
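Here's a small illustrative check of that Hessian (again an addition of mine, with hypothetical *_chk names): under the canonical link the analytic $-X^T W X$ should match a finite-difference Hessian of the log likelihood.

# Compare the analytic Hessian -X'WX (canonical link) with a finite-difference Hessian.
# Illustrative names and simulated data only.
set.seed(2)
n_chk <- 100; p_chk <- 2
x_chk <- matrix(rnorm(n_chk * p_chk), n_chk, p_chk)
y_chk <- rbinom(n_chk, 1, 0.5)
b_chk <- rnorm(p_chk)

ll_chk <- function(b) {
  eta <- x_chk %*% b
  sum(y_chk * log(plogis(eta)) + (1 - y_chk) * log(1 - plogis(eta)))
}

y.hat_chk <- as.vector(plogis(x_chk %*% b_chk))
H_analytic <- -t(x_chk) %*% (x_chk * (y.hat_chk * (1 - y.hat_chk)))  # -X' W X

eps <- 1e-4
H_fd <- matrix(0, p_chk, p_chk)
for (j in 1:p_chk) for (k in 1:p_chk) {
  ej <- rep(0, p_chk); ej[j] <- eps
  ek <- rep(0, p_chk); ek[k] <- eps
  H_fd[j, k] <- (ll_chk(b_chk + ej + ek) - ll_chk(b_chk + ej - ek) -
                 ll_chk(b_chk - ej + ek) + ll_chk(b_chk - ej - ek)) / (4 * eps^2)
}
max(abs(H_analytic - H_fd))  # should be near zero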

Now create the working response $z = W^{-1}(y - \hat y)$ and note that

$$\nabla \ell = X^T(y - \hat y) = X^T W z.$$

All together this means that we can optimize the log likelihood by iterating

$$b^{(m+1)} = b^{(m)} + \left(X^T W^{(m)} X\right)^{-1} X^T W^{(m)} z^{(m)}$$
and $\left(X^T W^{(m)} X\right)^{-1} X^T W^{(m)} z^{(m)}$ is exactly $\hat\beta$ for a weighted least squares regression of $z^{(m)}$ on $X$.

Checking this in R:

set.seed(123)
p <- 5
n <- 500
x <- matrix(rnorm(n * p), n, p)
betas <- runif(p, -2, 2)
hc <- function(x) 1 /(1 + exp(-x)) # inverse canonical link
p.true <- hc(x %*% betas)
y <- rbinom(n, 1, p.true)

# fitting with our procedure
my_IRLS_canonical <- function(x, y, b.init, hc, tol=1e-8) {
  change <- Inf
  b.old <- b.init
  while(change > tol) {
    eta <- x %*% b.old  # linear predictor
    y.hat <- hc(eta)
    h.prime_eta <- y.hat * (1 - y.hat)
    z <- (y - y.hat) / h.prime_eta

    b.new <- b.old + lm(z ~ x - 1, weights = h.prime_eta)$coef  # WLS regression
    change <- sqrt(sum((b.new - b.old)^2))
    b.old <- b.new
  }
  b.new
}

my_IRLS_canonical(x, y, rep(1,p), hc)
# x1         x2         x3         x4         x5 
# -1.1149687  2.1897992  1.0271298  0.8702975 -1.2074851

glm(y ~ x - 1, family=binomial())$coef
# x1         x2         x3         x4         x5 
# -1.1149687  2.1897992  1.0271298  0.8702975 -1.2074851 

and they agree.


Non-canonical link functions

Now if we're not using the canonical link we don't get the simplification $\frac{h'}{h(1-h)} = 1$ in $\nabla \ell$, so $H$ becomes much more complicated, and we therefore see a noticeable difference by using $E(H)$ in our Fisher scoring.

Here's how this will go: we already worked out the general $\nabla \ell$, so the Hessian will be the main difficulty. We need

$$\frac{\partial^2 \ell}{\partial b_k \partial b_j} = \sum_i x_{ij}\frac{\partial}{\partial b_k}\, h'(x_i^T b)\left(\frac{y_i}{h(x_i^T b)} - \frac{1 - y_i}{1 - h(x_i^T b)}\right)$$
$$= \sum_i x_{ij} x_{ik}\left[h''(x_i^T b)\left(\frac{y_i}{h(x_i^T b)} - \frac{1 - y_i}{1 - h(x_i^T b)}\right) - h'(x_i^T b)^2\left(\frac{y_i}{h(x_i^T b)^2} + \frac{1 - y_i}{\left(1 - h(x_i^T b)\right)^2}\right)\right]$$

Via the linearity of expectation, all we need to do to get $E(H)$ is replace each occurrence of $y_i$ with its mean under our model, which is $\mu_i = h(x_i^T \beta)$. Each term in the summand will therefore contain a factor of the form

$$h''(x_i^T b)\left(\frac{h(x_i^T \beta)}{h(x_i^T b)} - \frac{1 - h(x_i^T \beta)}{1 - h(x_i^T b)}\right) - h'(x_i^T b)^2\left(\frac{h(x_i^T \beta)}{h(x_i^T b)^2} + \frac{1 - h(x_i^T \beta)}{\left(1 - h(x_i^T b)\right)^2}\right).$$
But to actually do our optimization we'll need to estimate each $\beta$, and at step $m$ our best guess is $b^{(m)}$. Plugging that in, the factor reduces to
$$h''(x_i^T b)\left(\frac{h(x_i^T b)}{h(x_i^T b)} - \frac{1 - h(x_i^T b)}{1 - h(x_i^T b)}\right) - h'(x_i^T b)^2\left(\frac{h(x_i^T b)}{h(x_i^T b)^2} + \frac{1 - h(x_i^T b)}{\left(1 - h(x_i^T b)\right)^2}\right)$$
$$= -h'(x_i^T b)^2\left(\frac{1}{h(x_i^T b)} + \frac{1}{1 - h(x_i^T b)}\right)$$
$$= -\frac{h'(x_i^T b)^2}{h(x_i^T b)\left(1 - h(x_i^T b)\right)}.$$
This means we will use $J$ with
$$J_{jk} = -\sum_i x_{ij} x_{ik}\frac{h'(x_i^T b)^2}{h(x_i^T b)\left(1 - h(x_i^T b)\right)}.$$

Now let

$$W^* = \operatorname{diag}\left(\frac{h'(x_1^T b)^2}{h(x_1^T b)\left(1 - h(x_1^T b)\right)}, \dots, \frac{h'(x_n^T b)^2}{h(x_n^T b)\left(1 - h(x_n^T b)\right)}\right)$$
and note how under the canonical link $h_c' = h_c(1 - h_c)$ reduces $W^*$ to $W$ from the previous section. This lets us write
$$J = -X^T W^* X$$
except this is now $\hat E(H)$ rather than necessarily being $H$ itself, so it can differ from Newton-Raphson. Since $W^*_{ii} > 0$ for all $i$, aside from numerical issues $J$ will be negative definite.
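As a quick sanity check on that reduction (an illustrative addition of mine using base R's plogis, pnorm and dnorm), the general weight $h'(\eta)^2 / \big(h(\eta)(1 - h(\eta))\big)$ collapses to $h(\eta)(1 - h(\eta))$ under the logit link but not under probit:

# The general IRLS weight reduces to h(1 - h) only for the canonical (logit) link.
eta_grid <- seq(-3, 3, length.out = 7)
w_general <- function(h, h.prime) h.prime(eta_grid)^2 / (h(eta_grid) * (1 - h(eta_grid)))

# logit: h' = h(1 - h), so the general weight is exactly h(1 - h)
w_logit <- w_general(plogis, function(u) plogis(u) * (1 - plogis(u)))
all.equal(w_logit, plogis(eta_grid) * (1 - plogis(eta_grid)))  # TRUE

# probit: the two differ
w_probit <- w_general(pnorm, dnorm)
cbind(w_probit, h_times_one_minus_h = pnorm(eta_grid) * (1 - pnorm(eta_grid)))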

We have

$$\frac{\partial \ell}{\partial b_j} = \sum_i \frac{x_{ij}\, h'(x_i^T b)}{h(x_i^T b)\left(1 - h(x_i^T b)\right)}\left(y_i - h(x_i^T b)\right)$$
so letting our new working response be $z^* = D^{-1}(y - \hat y)$ with $D = \operatorname{diag}\left(h'(x_1^T b), \dots, h'(x_n^T b)\right)$, we have $\nabla \ell = X^T W^* z^*$.

All together we are iterating

$$b^{(m+1)} = b^{(m)} + \left(X^T W^{*(m)} X\right)^{-1} X^T W^{*(m)} z^{*(m)}$$
so this is still a sequence of WLS regressions except now it's not necessarily Newton-Raphson.

I've written it out this way to emphasize the connection to Newton-Raphson, but frequently people will factor the updates so that each new point $b^{(m+1)}$ is itself the WLS solution, rather than a WLS solution added to the current point $b^{(m)}$. If we wanted to do this, we can do the following:

$$b^{(m+1)} = b^{(m)} + \left(X^T W^{*(m)} X\right)^{-1} X^T W^{*(m)} z^{*(m)}$$
$$= \left(X^T W^{*(m)} X\right)^{-1}\left(X^T W^{*(m)} X b^{(m)} + X^T W^{*(m)} z^{*(m)}\right)$$
$$= \left(X^T W^{*(m)} X\right)^{-1} X^T W^{*(m)}\left(X b^{(m)} + z^{*(m)}\right)$$
so if we're going this way you'll see the working response take the form $\eta^{(m)} + D^{(m)\,-1}\left(y - \hat y^{(m)}\right)$, but it's the same thing.
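To make that concrete, here's a sketch of the fitting loop written in that factored form, where each new point is itself the WLS solution for the working response $\eta^{(m)} + D^{(m)\,-1}(y - \hat y^{(m)})$ (the name my_IRLS_factored is mine, not from the original):

# Factored IRLS update: b.new is directly the WLS solution for the working
# response eta + (y - y.hat)/h'(eta), instead of an increment added to b.old.
my_IRLS_factored <- function(x, y, b.init, h, h.prime, tol=1e-8) {
  change <- Inf
  b.old <- b.init
  while(change > tol) {
    eta <- x %*% b.old  # linear predictor
    y.hat <- h(eta)
    h.prime_eta <- h.prime(eta)
    w_star <- h.prime_eta^2 / (y.hat * (1 - y.hat))
    z_work <- eta + (y - y.hat) / h.prime_eta  # working response

    b.new <- lm(z_work ~ x - 1, weights = w_star)$coef  # WLS solution directly

    change <- sqrt(sum((b.new - b.old)^2))
    b.old <- b.new
  }
  b.new
}

With the probit helpers defined below, my_IRLS_factored(x, y, rep(0,p), h_probit, h.prime_probit) should reproduce the same coefficients as the additive version; the confirmation that follows sticks with the additive form.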

Let's confirm that this works by using it to perform a probit regression on the same simulated data as before (and this is not the canonical link, so we need this more general form of IRLS).

my_IRLS_general <- function(x, y, b.init, h, h.prime, tol=1e-8) {
  change <- Inf
  b.old <- b.init
  while(change > tol) {
    eta <- x %*% b.old  # linear predictor
    y.hat <- h(eta)
    h.prime_eta <- h.prime(eta)
    w_star <- h.prime_eta^2 / (y.hat * (1 - y.hat))
    z_star <- (y - y.hat) / h.prime_eta

    b.new <- b.old + lm(z_star ~ x - 1, weights = w_star)$coef  # WLS

    change <- sqrt(sum((b.new - b.old)^2))
    b.old <- b.new
  }
  b.new
}

# probit inverse link and derivative
h_probit <- function(x) pnorm(x, 0, 1)
h.prime_probit <- function(x) dnorm(x, 0, 1)

my_IRLS_general(x, y, rep(0,p), h_probit, h.prime_probit)
# x1         x2         x3         x4         x5 
# -0.6456508  1.2520266  0.5820856  0.4982678 -0.6768585 

glm(y~x-1, family=binomial(link="probit"))$coef
# x1         x2         x3         x4         x5 
# -0.6456490  1.2520241  0.5820835  0.4982663 -0.6768581 

and again the two agree.


Comments on convergence

Finally, a few quick comments on convergence (I'll keep this brief as this is getting really long and I'm no expert at optimization). Even though theoretically each $J^{(m)}$ is negative definite, bad initial conditions can still prevent this algorithm from converging. In the probit example above, changing the initial conditions to b.init = rep(1, p) makes the algorithm fail, and that doesn't even look like a suspicious initial condition. If you step through the IRLS procedure with that initialization and these simulated data, by the second time through the loop there are some $\hat y_i$ that round to exactly $1$, so the weights become undefined. If we're using the canonical link in the algorithm I gave, we won't ever be dividing by $\hat y_i(1 - \hat y_i)$ to get undefined weights, but if we've got a situation where some $\hat y_i$ are approaching $0$ or $1$, such as in the case of perfect separation, then we'll still get non-convergence as the gradient dies without us reaching anything.
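If you want to see that failure mode for yourself, here's a small illustrative probe (reusing the simulated x, y and the probit helpers from above; not part of the original answer) that steps through a few iterations and reports when the fitted probabilities hit the boundary and the weights stop being finite:

# Step through a few IRLS iterations with the bad initialization b.init = rep(1, p)
# and report when some y.hat rounds to exactly 0 or 1, making the weights undefined.
b.cur <- rep(1, p)
for (m in 1:3) {
  eta <- x %*% b.cur
  y.hat <- h_probit(eta)
  h.prime_eta <- h.prime_probit(eta)
  w_star <- h.prime_eta^2 / (y.hat * (1 - y.hat))
  cat("iteration", m,
      "- any y.hat exactly 0 or 1:", any(y.hat == 0 | y.hat == 1),
      "- any non-finite weight:", any(!is.finite(w_star)), "\n")
  if (any(!is.finite(w_star))) break  # lm() would fail here
  z_star <- (y - y.hat) / h.prime_eta
  b.cur <- b.cur + lm(z_star ~ x - 1, weights = w_star)$coef
}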


+1. I love how detailed your answers often are.
amoeba says Reinstate Monica

You stated "the coefficient estimates from this converge on a maximum of the logistic regression likelihood." Is that necessarily so, from any initial values?
Mark L. Stone

@MarkL.Stone ah I was being too casual there, didn't mean to offend the optimization people :) I'll add some more details (and would appreciate your thoughts on them when I do)
jld

Any chance you watched the link I posted? It seems that video is talking from a machine learning perspective, just optimizing the logistic loss, without talking about the expected Hessian?
Haitao Du

@hxd1011 in that pdf I linked to (link again: sagepub.com/sites/default/files/upm-binaries/…) on page 24 of it the author goes into the theory and explains what exactly makes a link function canonical. I found that pdf extremely helpful when I first came across this (although it took me a while to get through).
jld
Licensed under cc by-sa 3.0 with attribution required.