I tried finding a proof without considering characteristic functions. Excess kurtosis does the trick. Here's the two-line answer: $\operatorname{Kurt}(U) = \operatorname{Kurt}(X+Y) = \operatorname{Kurt}(X)/2$ since $X$ and $Y$ are iid. Then $\operatorname{Kurt}(U) = -1.2$ implies $\operatorname{Kurt}(X) = -2.4$, which is a contradiction as $\operatorname{Kurt}(X) \geq -2$ for any random variable.
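As a quick sanity check on that headline number (not part of the proof), here is a Monte Carlo sketch using SciPy; `kurtosis` returns the Fisher (excess) kurtosis by default:

```python
import numpy as np
from scipy.stats import kurtosis, uniform

rng = np.random.default_rng(0)

# Empirical excess kurtosis of U ~ Uniform(0, 1); theory says -1.2.
u = rng.uniform(0, 1, size=1_000_000)
print(kurtosis(u))                  # ~ -1.2 (Fisher/excess by default)

# The exact value, straight from the distribution object:
print(uniform.stats(moments='k'))   # -1.2
```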
Rather more interesting is the line of reasoning that got me to that point. $X$ (and $Y$) must be bounded between 0 and 0.5; that much is obvious, but it helpfully means that their moments and central moments exist. Let's start by considering the mean and variance: $E(U) = 0.5$ and $\operatorname{Var}(U) = \tfrac{1}{12}$. If $X$ and $Y$ are identically distributed then we have:
$$E(X+Y) = E(X) + E(Y) = 2E(X) = 0.5$$
So $E(X) = 0.25$. For the variance we additionally need to use independence to apply:
$$\operatorname{Var}(X+Y) = \operatorname{Var}(X) + \operatorname{Var}(Y) = 2\operatorname{Var}(X) = \tfrac{1}{12}$$
Hence $\operatorname{Var}(X) = \tfrac{1}{24}$ and $\sigma_X = \tfrac{1}{2\sqrt{6}} \approx 0.204$. Wow! That is a lot of variation for a random variable whose support ranges from 0 to 0.5. But we should have expected that, since the standard deviation isn't going to scale in the same way that the mean did.
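For a sense of scale (purely illustrative, not a candidate solution): if $X$ were itself uniform on $[0, 0.5]$, its variance would be only $\tfrac{1}{48}$, half of what is required. A short numerical comparison:

```python
import numpy as np

# Required by the argument above: Var(X) = 1/24, sigma_X = 1/(2*sqrt(6)).
required_var = 1 / 24
print(required_var, np.sqrt(required_var))   # 0.0417, 0.2041

# Illustrative comparison only: X ~ Uniform(0, 0.5) has variance
# (0.5)**2 / 12 = 1/48 -- just half the required amount.
print(0.5**2 / 12)                           # 0.0208
```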
Now, what's the largest standard deviation that a random variable can have if the smallest value it can take is 0, the largest value it can take is 0.5, and the mean is 0.25? Collecting all the probability at two point masses on the extremes, 0.25 away from the mean, would clearly give a standard deviation of 0.25. So our $\sigma_X$ is large but not impossible. (I hoped to show that this implied too much probability lay in the tails for $X+Y$ to be uniform, but I couldn't get anywhere with that on the back of an envelope.)
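That extremal two-point distribution is easy to verify directly, as a sketch:

```python
import numpy as np

# Extremal case: P(X = 0) = P(X = 0.5) = 1/2, mean 0.25.
values = np.array([0.0, 0.5])
probs = np.array([0.5, 0.5])
mean = np.sum(probs * values)                        # 0.25
sd = np.sqrt(np.sum(probs * (values - mean) ** 2))   # 0.25, the maximum
print(mean, sd)   # compare the required sigma_X of about 0.204
```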
Second-moment considerations almost put an impossible constraint on $X$, so let's consider higher moments. What about Pearson's moment coefficient of skewness, $\gamma_1 = \frac{E(X-\mu_X)^3}{\sigma_X^3} = \frac{\kappa_3}{\kappa_2^{3/2}}$? This exists since the central moments exist and $\sigma_X \neq 0$. It is helpful to know some properties of the cumulants; in particular, applying independence and then identical distribution gives:
$$\kappa_i(U) = \kappa_i(X+Y) = \kappa_i(X) + \kappa_i(Y) = 2\kappa_i(X)$$
This additivity property is precisely the generalisation of how we dealt with the mean and variance above; indeed, the first and second cumulants are just $\kappa_1 = \mu$ and $\kappa_2 = \sigma^2$.
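The additivity is easy to check numerically with SciPy's k-statistics (unbiased estimators of the cumulants, available up to order four). A small sketch, using an exponential distribution purely as an example:

```python
import numpy as np
from scipy.stats import kstat

rng = np.random.default_rng(0)
n = 1_000_000

# Any distribution with finite moments works here; Exp(1) is just an example.
x = rng.exponential(1.0, size=n)
y = rng.exponential(1.0, size=n)

for i in (1, 2, 3, 4):
    # kstat(data, i) estimates the i-th cumulant kappa_i; the two
    # columns should agree, up to Monte Carlo error.
    print(i, kstat(x + y, i), 2 * kstat(x, i))
```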
Then $\kappa_3(U) = 2\kappa_3(X)$ and $(\kappa_2(U))^{3/2} = (2\kappa_2(X))^{3/2} = 2^{3/2}(\kappa_2(X))^{3/2}$. The fraction for $\gamma_1$ cancels to yield $\operatorname{Skew}(U) = \operatorname{Skew}(X+Y) = \operatorname{Skew}(X)/\sqrt{2}$. Since the uniform distribution has zero skewness, so does $X$, but I can't see how a contradiction arises from this restriction.
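The $1/\sqrt{2}$ scaling can be seen exactly with a skewed example (again just an illustration): the sum of two iid $\text{Exp}(1)$ variables is $\text{Gamma}(2)$, and SciPy reports the expected skewness values:

```python
import numpy as np
from scipy.stats import expon, gamma

# Skew(Exp(1)) = 2; the sum of two iid Exp(1)s is Gamma(2), so its
# skewness should be 2 / sqrt(2) = sqrt(2) by the scaling rule above.
print(expon.stats(moments='s'))     # 2.0
print(gamma.stats(2, moments='s'))  # 1.4142... = 2 / sqrt(2)
print(2 / np.sqrt(2))               # 1.4142...
```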
So instead, let's try the excess kurtosis, $\gamma_2 = \frac{\kappa_4}{\kappa_2^2} = \frac{E(X-\mu_X)^4}{\sigma_X^4} - 3$. By a similar argument (this question is self-study, so try it!), we can show this exists and obeys:
$$\operatorname{Kurt}(U) = \operatorname{Kurt}(X+Y) = \operatorname{Kurt}(X)/2$$
The uniform distribution has excess kurtosis $-1.2$, so we require $X$ to have excess kurtosis $-2.4$. But the smallest possible excess kurtosis is $-2$, which is achieved by the $\text{Bernoulli}(\tfrac{1}{2})$ distribution, i.e. $\text{Binomial}(1, \tfrac{1}{2})$.
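The halving rule and the $-2$ bound can both be confirmed with SciPy, reusing the exponential example from above:

```python
from scipy.stats import bernoulli, expon, gamma

# Halving rule on an example: Kurt(Exp(1)) = 6, and the sum of two
# iid Exp(1)s is Gamma(2), whose excess kurtosis is 6 / 2 = 3.
print(expon.stats(moments='k'))           # 6.0
print(gamma.stats(2, moments='k'))        # 3.0

# The universal lower bound of -2 is attained by Bernoulli(1/2),
# well above the -2.4 that X would need.
print(bernoulli.stats(0.5, moments='k'))  # -2.0
```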