For two random variables $A$ and $B$, is
$$E(A \mid B) = \frac{E(B \mid A)\,E(A)}{E(B)}\,?$$
Answers:
$$E[A \mid B] \stackrel{?}{=} E[B \mid A]\,\frac{E[A]}{E[B]} \tag{1}$$

If $E[B] = 0$, the right-hand side of $(1)$ is not even defined. In general, writing $E[A \mid B] = g(B)$ and $E[B \mid A] = h(A)$, $(1)$ asks whether

$$g(B) \stackrel{?}{=} h(A)\,\frac{E[A]}{E[B]},$$

that is, whether a particular function of $B$ equals a particular function of $A$. As far as I know, $(1)$ can hold only in special cases. As noted above, for independent random variables $A$ and $B$ (with $E[B] \neq 0$) both sides reduce to $E[A]$. At the other end of the spectrum from independence, suppose that $A = g(B)$, so that $E[A \mid B] = g(B)$; for instance, when $g(B) = cB$ with $c \neq 0$ and $E[B] \neq 0$, both sides of $(1)$ equal $cB$.
In a comment on this answer, Huber has suggested considering the symmetric conjectured equality
$$E[A \mid B]\,E[B] \stackrel{?}{=} E[B \mid A]\,E[A].$$
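For concreteness, here is a small numpy sketch of the two special cases mentioned above; the distributions and the constant `c` are arbitrary illustrative choices, not part of the original answer.

```python
# Sketch: check the two special cases of (1) numerically.
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000

# Case 1: A and B independent, E[B] != 0. Then E[A | B] = E[A] and
# E[B | A] * E[A] / E[B] = E[B] * E[A] / E[B] = E[A], so (1) holds.
a = rng.exponential(2.0, size=N)          # E[A] = 2
b = rng.normal(3.0, 1.0, size=N)          # E[B] = 3, independent of A
# crude estimate of E[A | B near 3]: it stays close to E[A] = 2
print(a[np.abs(b - 3.0) < 0.05].mean())

# Case 2: A = c * B with c != 0 and E[B] != 0. Then E[A | B] = cB,
# E[B | A] = B and E[A] = c E[B], so both sides equal cB.
c = -1.5
lhs = c * b                                # E[A | B] = cB
rhs = b * (c * b.mean()) / b.mean()        # E[B | A] * E[A] / E[B]
print(np.allclose(lhs, rhs))               # True
```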
The result is untrue in general; let us see that in a simple example. Let $X \mid P = p$ have a binomial distribution with parameters $n, p$ and let $P$ have the beta distribution with parameters $(\alpha, \beta)$, that is, a Bayesian model with conjugate prior. Now just calculate the two sides of your formula: the left-hand side is $E(X \mid P) = nP$, while the right-hand side is
$$E(P \mid X)\,\frac{E(X)}{E(P)} = \frac{\alpha + X}{n + \alpha + \beta}\cdot\frac{n\alpha/(\alpha+\beta)}{\alpha/(\alpha+\beta)} = \frac{n(\alpha + X)}{n + \alpha + \beta},$$
a function of $X$ rather than of $P$, so the two sides cannot coincide in general.
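A quick Monte Carlo check of this counterexample (with arbitrary illustrative values of $n$, $\alpha$, $\beta$): both sides have the same mean $E(X) = n\alpha/(\alpha+\beta)$, yet they are different random variables and disagree draw by draw.

```python
# Monte Carlo sketch of the Beta-Binomial counterexample above.
# alpha, beta, n are arbitrary illustrative values.
import numpy as np

rng = np.random.default_rng(0)
alpha, beta, n = 2.0, 3.0, 10
N = 1_000_000

p = rng.beta(alpha, beta, size=N)              # P ~ Beta(alpha, beta)
x = rng.binomial(n, p)                         # X | P = p ~ Binomial(n, p)

lhs = n * p                                    # E(X | P) = nP, a function of P
rhs = n * (alpha + x) / (n + alpha + beta)     # E(P | X) E(X) / E(P), a function of X

print(lhs.mean(), rhs.mean())                  # both approx n*alpha/(alpha+beta) = 4
print(np.mean(np.abs(lhs - rhs)))              # clearly bounded away from 0
```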
The conditional expected value of a random variable A given the event that B=b is a number that depends on what number b is. So call it h(b). Then the conditional expected value E(A∣B) is h(B), a random variable whose value is completely determined by the value of the random variable B. Thus E(A∣B) is a function of B and E(B∣A) is a function of A.
The quotient E(A)/E(B) is just a number.
So one side of your proposed equality is determined by A and the other by B, so they cannot generally be equal.
(Perhaps I should add that they can be equal in the trivial case when the values of A and B determine each other, as when, for example, $A = \alpha B$ with $\alpha \neq 0$ and $E[B] \neq 0$; then
$$E[A \mid B] = \alpha B = E[B \mid A]\cdot\alpha = E[B \mid A]\,\frac{\alpha E[B]}{E[B]} = E[B \mid A]\,\frac{E[A]}{E[B]}.)$$
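To make the "one side is determined by B, the other by A" point concrete, here is a tiny discrete sketch; the 2×2 joint pmf below is an arbitrary illustrative choice.

```python
# E(A | B) is a table indexed by b, E(B | A) is a table indexed by a,
# and E(A)/E(B) is a single number, so the two sides of the conjecture
# are functions of different variables.
import numpy as np

# joint pmf p[a, b] for A, B each taking the values 0 and 1
p = np.array([[0.1, 0.3],
              [0.4, 0.2]])
vals = np.array([0.0, 1.0])

p_A = p.sum(axis=1)                                    # marginal of A
p_B = p.sum(axis=0)                                    # marginal of B

E_A_given_B = (vals[:, None] * p).sum(axis=0) / p_B    # h(b) for b = 0, 1
E_B_given_A = (vals[None, :] * p).sum(axis=1) / p_A    # E(B | A = a) for a = 0, 1
ratio = (vals @ p_A) / (vals @ p_B)                    # E(A) / E(B)

print(E_A_given_B)          # [0.8, 0.4]  -- varies with the value of B
print(E_B_given_A * ratio)  # [0.9, 0.4]  -- varies with the value of A
```

The first table changes with $b$ and the second with $a$, so outside degenerate cases such as $A = \alpha B$ they cannot be equal as random variables.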
The expression certainly does not hold in general. For the fun of it, I show below that if A and B follow jointly a bivariate normal distribution, and have non-zero means, the result will hold if the two variables are linear functions of each other and have the same coefficient of variation (the ratio of standard deviation over mean) in absolute terms.
For jointly normal variables we have
$$E(A \mid B) = \mu_A + \rho\,\frac{\sigma_A}{\sigma_B}\,(B - \mu_B)$$

and we want to impose

$$\mu_A + \rho\,\frac{\sigma_A}{\sigma_B}\,(B - \mu_B) = \Big[\mu_B + \rho\,\frac{\sigma_B}{\sigma_A}\,(A - \mu_A)\Big]\frac{\mu_A}{\mu_B}$$

$$\implies \mu_A + \rho\,\frac{\sigma_A}{\sigma_B}\,(B - \mu_B) = \mu_A + \rho\,\frac{\sigma_B}{\sigma_A}\,\frac{\mu_A}{\mu_B}\,(A - \mu_A)$$

Simplify $\mu_A$ and then $\rho$, and re-arrange to get

$$B = \mu_B + \frac{\sigma_B^2}{\sigma_A^2}\,\frac{\mu_A}{\mu_B}\,(A - \mu_A)$$
So this is the linear relationship that must hold between the two variables (so they are certainly dependent, with correlation coefficient equal to unity in absolute terms) in order to get the desired equality. What does it imply?
First, it must also be satisfied that
$$E(B) \equiv \mu_B = \mu_B + \frac{\sigma_B^2}{\sigma_A^2}\,\frac{\mu_A}{\mu_B}\,\big(E(A) - \mu_A\big) \implies \mu_B = \mu_B$$
so no other restriction is imposed on the mean of B (or of A) except that they be non-zero. Also, a relation for the variances must be satisfied,
$$\operatorname{Var}(B) \equiv \sigma_B^2 = \left(\frac{\sigma_B^2}{\sigma_A^2}\,\frac{\mu_A}{\mu_B}\right)^{2}\operatorname{Var}(A)$$

$$\implies (\sigma_A^2)^2\,\sigma_B^2 = (\sigma_B^2)^2\,\sigma_A^2\left(\frac{\mu_A}{\mu_B}\right)^{2}$$

$$\implies \left(\frac{\sigma_A}{\mu_A}\right)^{2} = \left(\frac{\sigma_B}{\mu_B}\right)^{2} \implies (\mathrm{cv}_A)^2 = (\mathrm{cv}_B)^2$$

$$\implies |\mathrm{cv}_A| = |\mathrm{cv}_B|$$
which was to be shown.
Note that equality of the coefficients of variation in absolute terms allows the variables to have different variances, and also one to have a positive mean and the other a negative one.
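As a numerical illustration of this condition, here is a sketch with arbitrary parameter values ($\mu_A = 2$, $\sigma_A = 1$, $\mu_B = -4$, $\sigma_B = 2$, so $|\mathrm{cv}_A| = |\mathrm{cv}_B| = 1/2$ and the linear relationship above reduces to $B = -2A$, the degenerate case $|\rho| = 1$).

```python
# Sketch: with |cv_A| = |cv_B| and the linear relationship derived above
# (which forces |rho| = 1), the conjectured equality holds draw by draw.
import numpy as np

rng = np.random.default_rng(1)
mu_A, sd_A = 2.0, 1.0            # cv_A = 1/2
mu_B, sd_B = -4.0, 2.0           # cv_B = -1/2, same absolute value
rho = -1.0                       # sign implied by B = -2A

a = rng.normal(mu_A, sd_A, size=10)
b = mu_B + (sd_B**2 / sd_A**2) * (mu_A / mu_B) * (a - mu_A)    # i.e. B = -2A

E_A_given_B = mu_A + rho * (sd_A / sd_B) * (b - mu_B)          # normal regression formula
E_B_given_A = mu_B + rho * (sd_B / sd_A) * (a - mu_A)

print(np.allclose(E_A_given_B, E_B_given_A * mu_A / mu_B))     # True
```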