Computing the sum of squared sparse polynomials in O(n log n) time?


18

Suppose we have polynomials $p_1,\dots,p_m$, each of degree at most $n$, with $n > m$, such that the total number of nonzero coefficients is at most $n$ (i.e., the polynomials are sparse). I am interested in an efficient algorithm for computing the polynomial:

$\sum_i p_i(x)^2$

Since this polynomial has degree at most $2n$, both the input and output sizes are $O(n)$. In the case $m=1$ we can compute the result using FFT in time $O(n \log n)$. Can this be done for any $m<n$? If it makes any difference, I'm interested in the special case where the coefficients are 0 and 1, and the computation should be done over the integers.
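For the $m=1$ case, the FFT squaring step referred to above can be sketched as follows (a minimal Python/NumPy illustration; the function name and the integer-rounding step are my own, not from the post):

```python
import numpy as np

def square_poly_fft(coeffs):
    """Square a polynomial (coefficient list, lowest degree first) via the FFT.
    Padding to length >= 2n-1 turns cyclic convolution into linear convolution,
    so the whole squaring runs in O(n log n)."""
    n = len(coeffs)
    size = 2 * n
    fa = np.fft.rfft(coeffs, size)
    sq = np.fft.irfft(fa * fa, size)
    # Coefficients are integers in this setting, so round away FFT noise.
    return np.rint(sq[: 2 * n - 1]).astype(int)

# (1 + x)^2 = 1 + 2x + x^2
print(square_poly_fft([1, 1]))  # -> [1 2 1]
```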

Update. I realized that a fast solution for the above would imply advances in fast matrix multiplication. In particular, if $p_k(x)=\sum_{i=1}^n a_{ik}x^i + \sum_{j=1}^n b_{kj}x^{nj}$ then we can read off $a_{ik}b_{kj}$ as the coefficient of $x^{i+nj}$ in $p_k(x)^2$. Thus, computing $p_k(x)^2$ corresponds to computing an outer product of two vectors, and computing the sum $\sum_k p_k(x)^2$ corresponds to computing a matrix product. If there is a solution using time $f(n,m)$ for computing $\sum_k p_k(x)^2$, then we can multiply two $n$-by-$n$ matrices in time $f(n^2,n)$, which means that $f(n,m)=O(n\log n)$ for $m \approx \sqrt{n}$ would require a major breakthrough. But $f(n,m)=n^{\omega/2}$, where $\omega$ is the current exponent of matrix multiplication, might be possible. Ideas, anyone?
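The reduction in the update can be checked on small matrices. The sketch below is my own toy version: it uses spacing $D = 2n+1$ instead of the post's $n$ so that no two encoded exponents collide in this small example, which keeps the coefficient read-off exact:

```python
import numpy as np

def matmul_via_poly_squares(A, B):
    """Toy demonstration of the reduction: encode column k of A and row k of B
    into one polynomial p_k, and read A @ B off the coefficients of
    sum_k p_k(x)^2.  Spacing D = 2n+1 (my variant) keeps exponents collision-free."""
    n = A.shape[0]
    D = 2 * n + 1
    deg = D * n + n                      # largest exponent used in any p_k
    total = np.zeros(2 * deg + 1, dtype=int)
    for k in range(n):
        p = np.zeros(deg + 1, dtype=int)
        for i in range(1, n + 1):
            p[i] = A[i - 1, k]           # term a_{ik} x^i
        for j in range(1, n + 1):
            p[D * j] = B[k, j - 1]       # term b_{kj} x^{Dj}
        sq = np.convolve(p, p)           # p_k(x)^2 (naive; FFT would also work)
        total[: len(sq)] += sq
    # Cross terms: coefficient of x^{i + Dj} in the sum is 2 * sum_k a_{ik} b_{kj}.
    C = np.empty((n, n), dtype=int)
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            C[i - 1, j - 1] = total[i + D * j] // 2
    return C
```

Each $p_k(x)^2$ contributes the outer product of column $k$ of $A$ with row $k$ of $B$, and summing over $k$ yields the matrix product, exactly as the update argues.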


1
Hi Rasmus. I think you intended for this to go on the main site. This is the meta site, for questions about the site.
Suresh Venkat

Answers:


3

Squaring a polynomial with $x_i$ nonzero coefficients takes time $O(x_i^2)$ using ordinary term-by-term multiplication, so this should be preferred to the FFT for those polynomials where $x_i < \sqrt{n\log n}$. If $\sum_i x_i = n$, then the number of polynomials with $x_i$ greater than $\sqrt{n\log n}$ is $O(\sqrt{n/\log n})$, and these will take time $O(n^{3/2}(\log n)^{1/2})$ to square and combine (as will the remaining polynomials). This is an improvement over the obvious $O(mn\log n)$ bound when $m$ is $\Theta(n/\log n)$.
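A sketch of this hybrid strategy (my own illustration; polynomials are represented as `{exponent: coefficient}` dicts, and the threshold $\sqrt{n \log n}$ is the one from the answer):

```python
import numpy as np

def sum_of_squares_hybrid(polys, n):
    """Hybrid squaring: term-by-term for sparse p_i (O(x_i^2)), FFT for the
    dense ones (O(n log n)).  Each poly is a dict {exponent: coefficient}."""
    threshold = (n * max(np.log(n), 1)) ** 0.5   # ~ sqrt(n log n)
    total = np.zeros(2 * n + 1, dtype=int)
    for p in polys:
        if len(p) < threshold:
            # Sparse: explicit term-by-term square.
            for e1, c1 in p.items():
                for e2, c2 in p.items():
                    total[e1 + e2] += c1 * c2
        else:
            # Dense: pad and square via FFT.
            coeffs = np.zeros(n + 1)
            for e, c in p.items():
                coeffs[e] = c
            size = 2 * n + 2
            sq = np.fft.irfft(np.fft.rfft(coeffs, size) ** 2, size)
            total += np.rint(sq[: 2 * n + 1]).astype(int)
    return total
```

For example, $(1+x)^2 + (x^2)^2 = 1 + 2x + x^2 + x^4$, and both branches produce the same coefficients.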


1
What I'm interested in is a method that computes the sum without computing each term. Doing FFT or term-by-term multiplication for each product will be too slow for the application I have in mind.
Rasmus Pagh

2

Not a full answer but maybe helpful.

Caveat: It only works well if the supports of the $p_i^2$ are small.

For a polynomial $q=a_0+a_1x+\dots+a_nx^n$, let $S_q=\{i \mid a_i\neq 0\}$ be its support and $s_q=|S_q|$ be the size of the support. Most of the $p_i$ will be sparse, i.e., will have a small support.

There are algorithms to multiply sparse polynomials a and b in quasi-linear time in the size of the support of the product ab, see e.g. http://arxiv.org/abs/0901.4323

The support of $ab$ is (contained in) $S_a+S_b$, where the sum of two sets $S$ and $T$ is defined as $S+T:=\{s+t \mid s\in S,\, t\in T\}$. If the supports of all products are small, say, linear in $n$ in total, then one can just compute the products and add up all monomials.

It is however very easy to find polynomials $a$ and $b$ such that the size of the support of $ab$ is quadratic in the sizes of the supports of $a$ and $b$. In this particular application, we are squaring polynomials. So the question is how much larger $S+S$ is compared to $S$. The usual measure for this is the doubling number $|S+S|/|S|$. There are sets with unbounded doubling number. But if you can exclude sets with large doubling number as supports of the $p_i$, then you can get a fast algorithm for your problem.
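To make the doubling number concrete, here is a small illustration (the example sets are mine): an arithmetic progression has doubling number just under 2, while a set of powers of two, where almost all pairwise sums are distinct, has doubling number about $|S|/2$:

```python
def sumset(S, T):
    """S + T = {s + t : s in S, t in T} -- a superset of the support of a product."""
    return {s + t for s in S for t in T}

def doubling(S):
    """Doubling number |S+S| / |S|."""
    return len(sumset(S, S)) / len(S)

ap = set(range(0, 51, 3))         # arithmetic progression: |S+S| = 2|S| - 1
pows = {2**i for i in range(10)}  # powers of two: nearly quadratic sumset

print(doubling(ap))    # just under 2
print(doubling(pows))  # 5.5, i.e. about |S|/2
```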


1
Although I am not familiar with additive combinatorics, I think that generalized arithmetic progressions and the Freiman-Ruzsa theorem are about sets with small doubling.
Tsuyoshi Ito

@Tsuyoshi: You are right, I will edit my answer. Nevertheless, there are GAPs with large doubling constant.
5501

Personally I do not think that this approach is promising. A (pretty inaccurate) implication of the Freiman-Ruzsa theorem is that |S+S|/|S| is small only in special cases, and therefore the part “If you can exclude sets with larger doubling number as supports of the p_i” is a very big if. However, as I said, I am not familiar with additive combinatorics, and you should take my words on it with a grain of salt.
Tsuyoshi Ito

Of course it only works if the application in mind (which I do not know) gives nice supports.
5501

Then it would be easier to understand if you make that assumption more explicit in your answer. The current way of writing the assumption in the answer suggests that you consider that the assumption of small doubling number is not a big deal.
Tsuyoshi Ito

2

Just wanted to note the natural approximation algorithm. This doesn't take advantage of sparsity though.

You could use a random sign sequence $(\sigma_i)_{i\in[m]}$ with each $\sigma_i$ uniform in $\{-1,+1\}$. Taking $X=\sum_i \sigma_i p_i(x)$ we can compute $X^2$ in $O(n\log n)$ time using FFT. Then $\mathbb{E}[X^2]=\sum_i p_i(x)^2=S$ and $\mathbb{V}[X^2]=O(S)$. So you can get a $1+\varepsilon$ approximation in time $O(\varepsilon^{-2}n\log n)$.
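A sketch of this estimator (my own illustration; dense coefficient rows and plain `np.convolve` stand in for the FFT squaring step):

```python
import numpy as np

def sketch_sum_of_squares(polys, reps=1000, seed=0):
    """Estimate sum_i p_i(x)^2 as the average of independent samples of X^2,
    where X = sum_i sigma_i p_i and the sigma_i are uniform random signs.
    Each sample needs only ONE squaring (np.convolve here; FFT in the real
    algorithm), so a sample costs O(n log n) rather than m separate squarings."""
    rng = np.random.default_rng(seed)
    P = np.asarray(polys, dtype=float)   # m x (n+1) coefficient matrix
    m, n1 = P.shape
    acc = np.zeros(2 * n1 - 1)
    for _ in range(reps):
        sigma = rng.choice([-1.0, 1.0], size=m)
        X = sigma @ P                    # combined polynomial sum_i sigma_i p_i
        acc += np.convolve(X, X)         # X^2; cross terms cancel in expectation
    return acc / reps
```

The diagonal terms $\sigma_i^2 p_i^2 = p_i^2$ survive every sample, while the cross terms $\sigma_i\sigma_j p_i p_j$ average out to zero, which is exactly the $\mathbb{E}[X^2]=\sum_i p_i^2$ claim above.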


Nice approach! But don't you need more repetitions to get all coefficients right with high probability?
Rasmus Pagh

@RasmusPagh Right, you'll probably get a $\log(n/\delta)$ term if you want all coefficients to be preserved with probability $1-\delta$.
Thomas Ahle
Licensed under cc by-sa 3.0 with attribution required.