Finding polynomial approximations of a sine wave


16

I want to approximate the sine wave given by sin(πx) by applying a polynomial waveshaper to a simple triangle wave, generated by the function

T(x) = 1 − 4|1/2 − mod(x/2 + 1/4, 1)|

where mod(x, 1) is the fractional part of x:

mod(x, y) = y·(x/y − ⌊x/y⌋)

A Taylor series could be used as the waveshaper:

S1(x) = (πx/2) − (πx/2)³/3! + (πx/2)⁵/5! − (πx/2)⁷/7!

Given the functions above, S1(T(x)) will give us a decent approximation of the sine wave. But we have to take the series out to its 7th power to get a reasonable result, and even then the peaks come out a little low and don't have exactly zero slope.

Instead of the Taylor series, we can use a polynomial waveshaper designed according to a few rules:

  • It must pass through −1,−1 and +1,+1.
  • The slope at −1,−1 and +1,+1 must be zero.
  • It must be symmetric.

A simple function meeting our requirements:

S2(x) = (3x − x³)/2

and the graphs of S2(T(x)) and sin(πx) are quite close, though not as close as the Taylor series; they deviate noticeably between the peaks and the zero crossings. A heavier, more accurate function that meets our requirements is:

S3(x) = x(x² − 5)²/16

This is probably close enough for my purposes, but I'm wondering whether there is another function that matches the sine wave even more closely while being computationally cheaper. I have a good handle on how to find functions that satisfy the three requirements above, but not on how to find functions that both satisfy those requirements and come closest to a sine wave.

What methods are there for finding polynomials that mimic a sine wave (when applied to a triangle wave)?
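As a numeric sanity check of the setup above, the triangle wave and the two simple waveshapers can be compared against sin(πx) directly. A sketch using numpy; the names follow the question:

```python
import numpy as np

def T(x):
    """Triangle wave: T(x) = 1 - 4*|1/2 - mod(x/2 + 1/4, 1)|."""
    return 1.0 - 4.0 * np.abs(0.5 - np.mod(0.5 * x + 0.25, 1.0))

def S2(x):
    """S2(x) = (3x - x^3)/2."""
    return (3.0 * x - x**3) / 2.0

def S3(x):
    """S3(x) = x*(x^2 - 5)^2/16."""
    return x * (x**2 - 5.0)**2 / 16.0

x = np.linspace(0.0, 2.0, 20001)      # one full period of sin(pi*x)
err2 = np.max(np.abs(S2(T(x)) - np.sin(np.pi * x)))
err3 = np.max(np.abs(S3(T(x)) - np.sin(np.pi * x)))
print(err2)   # ~ 0.02
print(err3)   # ~ 0.002
```

S3 comes out roughly an order of magnitude closer than S2, consistent with the description above.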


To clarify, I'm not looking only for odd symmetric polynomials, even though those are the simplest choice.

Something like the following function could also meet my needs:

S4(x) = 3x/2 + x²/4 + x⁴/4

This meets the requirements on the negative range, and a piecewise solution could be used to apply it to the positive range as well; for example

3x/2 − P(x, 2)/4 − P(x, 4)/4

where P is the signed power function.

I'm also interested in solutions that use the signed power function to support fractional exponents, since that gives us another "tweak knob" without adding another coefficient:

a0·x + a1·P(x, p1)

Given the right constants, this could potentially achieve very good accuracy without the weight of fifth- or seventh-order polynomials. Below is an example that meets the requirements described here, using hand-picked constants a0 = 1.666…, a1 = −0.666…, p1 = 2.5:

(5x − 2P(x, 5/2))/3

As it turns out, these constants are very close to π/2, 1 − π/2, and e. Plugging those in gives something that looks very close to a sine wave:

(π/2)x + (1 − π/2)P(x, e)

Put another way, x − x^e/6 looks very close to sin(x) between 0,0 and π/2,1. Any thoughts on the significance of that? Perhaps a tool like Octave could help discover the "best" constants for this approximation.
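A quick numeric check of that observation (a numpy sketch):

```python
import numpy as np

# Maximum deviation of x - x**e/6 from sin(x) on [0, pi/2]
x = np.linspace(0.0, np.pi / 2.0, 10001)
err = np.max(np.abs((x - x**np.e / 6.0) - np.sin(x)))
print(err)   # ~ 0.0085
```

The deviation stays below 0.01 over the whole interval, which matches the "very close" impression, though it is not as tight as the fitted polynomials discussed in the answers.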


1
So what is your definition of the error term for "closer"? As far as I can tell, the Taylor series you quote is the minimum-L² error approximation for a limited number of coefficients. (I think.)
Marcus Müller

2
By the way, what's the goal here? It would really help to tell us why you're looking for a polynomial waveshaper, on what technological basis, and what your main objectives for the approximation are.
Marcus Müller

@MarcusMüller I'm willing to sacrifice the accuracy of the Taylor series for something much cheaper, as long as it's indistinguishable from a sine wave to the human ear. The peaks of the Taylor series approximation also bother me. I'd like to find something "closer" than the other two functions I listed. I suspect it won't get any cheaper than S2.
Guest

1
"To the human ear" is critical here :) Why do the peaks bother you? Again: let us know why / for what purpose and under which constraints you're doing this. Without sufficient background, your question is simply too broad to be answered properly!
Marcus Müller

1
Why start with a triangle wave? Sine generators are simple and common; square waves can be filtered down to the fundamental, etc.
Ced

Answers:


10

i did this about a decade ago for a nameless music synthesizer company with R&D in Waltham MA. (can't imagine who they might be.) i don't have the coefficients.

but try this:

f(x) = (π/2)·x·(a0 + a1x² + a2x⁴) ≈ sin((π/2)x)   for −1 ≤ x ≤ +1

this guarantees that f(−x) = −f(x).

next, to guarantee that f′(x)|x=±1 = 0:

f′(x) = (π/2)(a0 + 3a1x² + 5a2x⁴)

(1)  a0 + 3a1 + 5a2 = 0

that's the first constraint. then, to guarantee that |f(±1)| = 1:

(2)  a0 + a1 + a2 = 2/π

that's the second constraint. solving Eqs. (1) and (2) for a0 and a2 in terms of a1 (which is left free to adjust):

a0 = 5/(2π) − (1/2)a1

a2 = −1/(2π) − (1/2)a1

now there is only one coefficient, a1, left to twiddle for best performance:

f(x) = (π/2)·x·((5/(2π) − (1/2)a1) + a1x² − (1/(2π) + (1/2)a1)x⁴)

This is the way I would twiddle a1 for best performance for a sine wave oscillator. I would use the above and the symmetry of the sine wave about x=1, place exactly one entire cycle in a buffer with a power-of-two number of points (say 128, i don't care), and run the FFT on that perfect cycle.

Bin 1 of the FFT result will be the strength of the sine and should be approximately N/2. Now you can adjust a1 to bring your 3rd harmonic distortion up and down. i would start with a1 ≈ 5/π − 2 so that a0 ≈ 1. The 3rd harmonic is in bin 3 of the FFT results. But the 5th harmonic distortion (the value in bin 5) will be consequential (it will go up as the 3rd harmonic goes down). I would adjust a1 so that the strength of the 5th harmonic level is equal to the 3rd harmonic level. It will be around -70 dB from the 1st harmonic (as I recall). That will be the nicest-sounding sine wave from a cheap, 3-coefficient, 5th-order, odd-symmetrical polynomial.

Someone else can write the MATLAB code. How does that sound to you?
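As a sketch of that tuning loop (details assumed here: a 4096-sample buffer and a crude grid search standing in for the hand-twiddling), the point where the 3rd and 5th harmonics balance can be found numerically:

```python
import numpy as np

N = 4096                               # power-of-two buffer (an assumption)
t = np.arange(N) / N
tri = 1.0 - 4.0 * np.abs(0.5 - np.mod(t + 0.25, 1.0))  # one triangle cycle

def harmonic_levels(a1):
    """Amplitudes of harmonics 1, 3 and 5 for the constrained polynomial."""
    a0 = 5.0 / (2.0 * np.pi) - 0.5 * a1
    a2 = -1.0 / (2.0 * np.pi) - 0.5 * a1
    y = (np.pi / 2.0) * tri * (a0 + a1 * tri**2 + a2 * tri**4)
    Y = np.abs(np.fft.rfft(y)) / (N / 2.0)
    return Y[1], Y[3], Y[5]

# crude grid search: pick the a1 that minimizes the larger of the two
# distortion harmonics (equivalently, the a1 where they come out equal)
best_a1, best_level = None, np.inf
for a1 in np.linspace(-0.43, -0.38, 1001):
    h1, h3, h5 = harmonic_levels(a1)
    level = max(h3, h5) / h1
    if level < best_level:
        best_a1, best_level = a1, level

print(best_a1)                      # ~ -0.407
print(20 * np.log10(best_level))    # ~ -79 dB
```

The search lands near a1 ≈ −0.407, with the 3rd and 5th harmonics at roughly −79 dB relative to the fundamental.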


i will definitely not have time to do the MATLABing to hunt for the optimal a1 so that the 3rd harmonic is equal to the 5th harmonic, about 70 dB below the fundamental (1st harmonic). someone else needs to do that. sorry.
robert bristow-johnson

Great answer, still digesting it. Actually starting to wonder if it needs to be a 3-coefficient, 5th-order, odd-symmetrical polynomial ... Could your f'(x) actually be f(x) and be a piecewise deal around 0? Rough sketch here. Maybe this is what Ced has in mind? Still catching up to you guys.
Guest

This is a beautiful approach. I wonder if instead of taking the FFT and solving iteratively you could form the third- and fifth-order Chebyshev polynomials from your f(x), then equate the two and solve for a1?
Speedy

Must have been half asleep when I posted that "sketch," I meant to do something like this, but corrected to run through ±1 and have zero slope (can just take the derivative, fiddle around with it, integrate it again). Not sure if there's any advantage over fifth-order, just something I hadn't considered yet.
Guest

1
This really is a brilliant solution, just took a while to sink in. I hope marking it correct won't stop someone else from coming along and writing the code.
Guest

9

What is usually done is an approximation minimizing some norm of the error, often the L∞-norm (where the maximum error is minimized), or the L2-norm (where the mean squared error is minimized). L∞ approximation is done by using the Remez exchange algorithm. I'm sure you can find some open source code implementing that algorithm. However, in this case I think a very simple (discrete) l2-optimization is sufficient. Let's look at some Matlab/Octave code and the results:

x = linspace(0,pi/2,300);    % grid on [0,pi/2]
x = x(:);
% overdetermined system of linear equations
% (using odd powers only)
A3 = [x,x.^3];
A5 = [x,x.^3,x.^5];
b = sin(x);
% solve in l2 sense
c3 = A3\b;
c5 = A5\b;
f3 = A3*c3;    % 3rd order approximation
f5 = A5*c5;    % 5th order approximation

The figure below shows the approximation errors for the 3rd-order and for the 5th-order approximations. The maximum approximation errors are 8.8869e-03 and 1.5519e-04, respectively.

(figure: approximation errors of the 3rd-order and 5th-order fits)

The optimum coefficients are

c3 =
   0.988720369237930
  -0.144993929056091

and

c5 =
   0.99976918199047515
  -0.16582163562776930
   0.00757183954143367

So the third-order approximation is

(1)  sin(x) ≈ 0.988720369237930·x − 0.144993929056091·x³,  x ∈ [−π/2, π/2]

and the fifth-order approximation is

(2)  sin(x) ≈ 0.99976918199047515·x − 0.16582163562776930·x³ + 0.00757183954143367·x⁵,  x ∈ [−π/2, π/2]
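The Octave fit above can be reproduced with numpy's least-squares solver (a sketch using the same 300-point grid):

```python
import numpy as np

x = np.linspace(0.0, np.pi / 2.0, 300)
b = np.sin(x)
A3 = np.column_stack([x, x**3])          # odd powers only
A5 = np.column_stack([x, x**3, x**5])
c3, *_ = np.linalg.lstsq(A3, b, rcond=None)   # solve in l2 sense
c5, *_ = np.linalg.lstsq(A5, b, rcond=None)
print(c3)   # ~ [ 0.98872037 -0.14499393]
print(c5)   # ~ [ 0.99976918 -0.16582164  0.00757184]
print(np.max(np.abs(A3 @ c3 - b)), np.max(np.abs(A5 @ c5 - b)))
```

The coefficients and maximum errors agree with the values quoted above.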

EDIT:

I had a look into approximations with the signed power function, as suggested in the question, but the best approximation is hardly better than the third-order approximation shown above. The approximating function is

(3)  f(x) = x − (1/p)(π/2)^(1−p)·x^p,  x ∈ [0, π/2]

where the constants were chosen such that f′(0) = 1 and f′(π/2) = 0. The power p was optimized to achieve the smallest maximum error in the range [0, π/2]. The optimal value for p was found to be p = 2.774. The figure below shows the approximation errors for the third-order approximation (1) and for the new approximation (3):

(figure: approximation errors of the third-order fit (1) and the signed-power fit (3))

The maximum error of (3) is 4.5e-3, but note that the third-order approximation only exceeds that error close to π/2, and that for the most part its approximation error is actually smaller than the one of the signed power function.

EDIT 2:

If you don't mind division you could also use Bhaskara I's sine approximation formula, which has a maximum approximation error of 1.6e-3:

(4)  sin(x) ≈ 16x(π − x)/(5π² − 4x(π − x)),  x ∈ [0, π]
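A numeric check of the last two approximations, the signed-power fit (3) with p = 2.774 and Bhaskara I's formula (4) (a numpy sketch):

```python
import numpy as np

x = np.linspace(0.0, np.pi / 2.0, 10001)
p = 2.774
f3 = x - (np.pi / 2.0)**(1.0 - p) * x**p / p                            # Eq. (3)
f4 = 16.0 * x * (np.pi - x) / (5.0 * np.pi**2 - 4.0 * x * (np.pi - x))  # Eq. (4)
err_power = np.max(np.abs(f3 - np.sin(x)))
err_bhaskara = np.max(np.abs(f4 - np.sin(x)))
print(err_power)     # ~ 4.5e-3
print(err_bhaskara)  # ~ 1.6e-3
```

Both maximum errors come out as stated in the answer.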

That's very helpful, thanks. This is the first time I've used Octave. I followed most of it, but how did you get the approximation error plots and maximum values?
Guest

1
@Guest: The errors are just b-f3 and b-f5, respectively. Use the plot command to plot them.
Matt L.

1
@Guest: And the maxima you get from max(abs(b-f3)) and max(abs(b-f5)).
Matt L.

@Guest: I played around with the signed power function, but the result is not significantly better than the third-order approximation I had before. Check out my edited answer. As for complexity, would it make such a big difference?
Matt L.

Thanks for looking into it. Complexity isn't a huge deal, just curious how accurate the approximation can get with relatively low complexity. I'm not quite sure how you came up with (3), but it works nicely. I'd need to use 2.752 instead for p, since anything above that will send the peaks over 1 (clipping).
Guest

7

Start with an otherwise general, odd-symmetry 5th-order parameterized polynomial:

f(x) = a0x¹ + a1x³ + a2x⁵ = x(a0 + a1x² + a2x⁴) = x(a0 + x²(a1 + a2x²))

Now we place some constraints on this function. Amplitude should be 1 at the peaks, in other words f(1)=1. Substituting 1 for x gives:

(1)  a0 + a1 + a2 = 1

That's one constraint. The slope at the peaks should be zero, in other words f′(1) = 0. The derivative of f(x) is

a0 + 3a1x² + 5a2x⁴

and substituting 1 for x gives our second constraint:

(2)  a0 + 3a1 + 5a2 = 0

Now we can use our two constraints to solve for a1 and a2 in terms of a0.

(3)  a1 = 5/2 − 2a0,  a2 = a0 − 3/2

All that's left is to tweak a0 to get a nice fit. Incidentally, a0 (and the slope at the origin) ends up being approximately π/2, as we can see from a plot of the function.
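The two constraints can also be solved symbolically; a small sympy sketch reproducing Eq. (3):

```python
from sympy import symbols, Eq, solve, Rational

a0, a1, a2 = symbols("a0 a1 a2")
sol = solve([Eq(a0 + a1 + a2, 1),        # unit amplitude at the peaks
             Eq(a0 + 3*a1 + 5*a2, 0)],   # zero slope at the peaks
            [a1, a2])
print(sol[a1], sol[a2])   # a1 = 5/2 - 2*a0 and a2 = a0 - 3/2
```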

Parameter optimization

Below are a number of optimizations of the coefficients, which result in these relative amplitudes of the harmonics compared to the fundamental frequency (1st harmonic):

Comparison of approximations

In the complex Fourier series:

Σ_{k=−∞..+∞} c_k·e^(i2πkx/P),

of a real P-periodic waveform with P = 4 and time symmetry about x = 1 and with half a period defined by odd function f(x) over −1 ≤ x ≤ 1, the coefficient of the kth complex exponential harmonic is:

c_k = (1/P) ∫_{−1}^{−1+P} ({ f(x) if x < 1; f(2−x) if x ≥ 1 }) e^(−i2πkx/P) dx.

Because of the relationship 2cos(x) = e^(ix) + e^(−ix) (see: Euler's formula), the amplitude of a real sinusoidal harmonic with k > 0 is 2|c_k|, which is twice that of the magnitude of the complex exponential of the same frequency. This can be massaged to a form which makes it easier for some symbolic mathematics software to simplify the integral:

2|c_k| = (2/4)|∫_{−1}^{3} ({ f(x) if x < 1; f(2−x) if x ≥ 1 }) e^(−i2πkx/4) dx|
= (1/2)|∫_{−1}^{1} f(x) e^(−iπkx/2) dx + ∫_{1}^{3} f(2−x) e^(−iπkx/2) dx|
= (1/2)|∫_{−1}^{1} f(x) e^(−iπkx/2) dx + ∫_{−1}^{1} f(−x) e^(−iπk(x+2)/2) dx|
= (1/2)|∫_{−1}^{1} f(x) e^(−iπkx/2) dx − ∫_{−1}^{1} f(x) e^(−iπk(x+2)/2) dx|
= (1/2)|∫_{−1}^{1} f(x) (e^(−iπkx/2) − e^(−iπk(x+2)/2)) dx|
= (1/2)|e^(iπk/2) ∫_{−1}^{1} f(x) (e^(−iπkx/2) − e^(−iπk(x+2)/2)) dx|
= (1/2)|∫_{−1}^{1} f(x) (e^(−iπk(x−1)/2) − e^(−iπk(x+1)/2)) dx|

The above takes advantage of the fact that |e^(ix)| = 1 for real x. It is easier for some computer algebra systems to simplify the integral by assuming k is real, and to simplify to integer k at the end. Wolfram Alpha can integrate individual terms of the final integral corresponding to the terms of the polynomial f(x). For the coefficients given in Eq. 3 we get the amplitude:

2|c_k| = |48((−1)^k − 1)(16a0(π²k² − 10) − 5(5π²k² − 48))/(π⁶k⁶)|

5th order, continuous derivative

We can solve for the value of a0 that gives equal amplitude 2|c_k| of the 3rd and the 5th harmonic. There will be two solutions corresponding to the 3rd and the 5th harmonic having equal or opposite phases. The best solution is the one that minimizes the maximum amplitude of the 3rd and above harmonics and equivalently the maximum relative amplitude of the 3rd and above harmonics compared to the fundamental frequency (1st harmonic):

a0 = 3(132375π² − 130832)/(16(15885π² − 16354)) ≈ 1.569778813,
a1 = 5/2 − 2a0 = −(79425π² − 65416)/(8(15885π² − 16354)) ≈ −0.6395576276,
a2 = a0 − 3/2 = 15885π²/(16(15885π² − 16354)) ≈ 0.06977881382.

This gives the fundamental frequency at amplitude 13679616/(15885π⁶ − 16354π⁴) ≈ 1.000071420 and both the 3rd and the 5th harmonic at relative amplitude 1/8906, or about −78.99 dB, compared to the fundamental frequency. A kth harmonic has relative amplitude (1 − (−1)^k)|8177k² − 79425|/(142496k⁶).
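A numeric spot-check of this solution (a sketch, assuming a 4096-sample cycle of the waveshaped triangle): the FFT of one cycle should show the 3rd and 5th harmonics at equal level near −79 dB.

```python
import numpy as np

a0 = 1.569778813
a1 = 5.0 / 2.0 - 2.0 * a0
a2 = a0 - 3.0 / 2.0

N = 4096
t = np.arange(N) / N
tri = 1.0 - 4.0 * np.abs(0.5 - np.mod(t + 0.25, 1.0))   # one triangle cycle
y = tri * (a0 + a1 * tri**2 + a2 * tri**4)              # waveshaped cycle
Y = np.abs(np.fft.rfft(y)) / (N / 2.0)                  # harmonic amplitudes

print(Y[1])                        # ~ 1.000071 (fundamental)
print(20 * np.log10(Y[3] / Y[1]))  # ~ -79.0 dB (3rd harmonic)
print(20 * np.log10(Y[5] / Y[1]))  # ~ -79.0 dB (5th harmonic)
```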

7th order, continuous derivative

Likewise, the optimal 7th order polynomial approximation with the same initial constraints and the 3rd, 5th, and 7th harmonic at the lowest possible equal level is:

f(x) = a0x¹ + a1x³ + a2x⁵ + a3x⁷ = x(a0 + a1x² + a2x⁴ + a3x⁶) = x(a0 + x²(a1 + x²(a2 + a3x²)))

a0 = (2a2 + 4a3 + 3)/2 ≈ 1.570781972,
a1 = −(4a2 + 6a3 + 1)/2 ≈ −0.6458482979,
a2 = (347960025π⁴ − 405395408π²)/(16(281681925π⁴ − 405395408π² + 108019280)) ≈ 0.07935067784,
a3 = −16569525π⁴/(16(281681925π⁴ − 405395408π² + 108019280)) ≈ −0.004284352588.

This is the best of four possible solutions corresponding to equal/opposite phase combinations of the 3rd, 5th, and 7th harmonic. The fundamental frequency has amplitude 2293523251200/(281681925π⁸ − 405395408π⁶ + 108019280π⁴) ≈ 0.9999983752, and the 3rd, 5th, and 7th harmonics have relative amplitude 1/1555395 ≈ −123.8368 dB compared to the fundamental. A kth harmonic has relative amplitude (1 − (−1)^k)|1350241k⁴ − 50674426k² + 347960025|/(597271680k⁸) compared to the fundamental.

5th order

If the requirement of a continuous derivative is dropped, the 5th order approximation will be more difficult to solve symbolically, because the amplitude of the 9th harmonic will rise above the amplitude of the 3rd, 5th, and the 7th harmonic if those are constrained to be equal and minimized. Testing 16 different solutions corresponding to different subsets of three harmonics from {3,5,7,9} being of equal amplitude and of equal or opposite phases, the best solution is:

f(x) = a0x¹ + a1x³ + a2x⁵
a0 = 1 − a1 − a2 ≈ 1.570034357
a1 = 3(2436304π² − 2172825π⁴)/(8(1303695π⁴ − 1827228π² + 537160)) ≈ −0.6425216143
a2 = 1303695π⁴/(16(1303695π⁴ − 1827228π² + 537160)) ≈ 0.07248725712

The fundamental frequency has amplitude 1080430592/(1303695π⁶ − 1827228π⁴ + 537160π²) ≈ 0.9997773320. The 3rd, 5th, and 9th harmonics have relative amplitude 7/263777 ≈ −91.52 dB, and the 7th harmonic has relative amplitude 726083/31033100273 ≈ −92.6 dB compared to the fundamental. A kth harmonic has relative amplitude (1 − (−1)^k)|67145k⁴ − 2740842k² + 19555425|/(33763456k⁶).

This approximation has a slight corner at the half-cycle boundaries, because the polynomial has zero derivative not at x = ±1 but at x ≈ ±1.002039940. At x = 1 the value of the derivative is about 0.004905799828. This results in slower asymptotic decay of the amplitudes of the harmonics at large k, compared to the 5th order approximation that has a continuous derivative.

7th order

A 7th order approximation without continuous derivative can be found similarly. The approach requires testing 120 different solutions and was automated by the Python script at the end of this answer. The best solution is:

f(x) = a0x¹ + a1x³ + a2x⁵ + a3x⁷
a0 = 1 − a1 − a2 − a3 ≈ 1.5707953785726114835
a1 = −5(4374085272375π⁶ − 6856418226992π⁴ + 2139059216768π²)/(16(2124555703725π⁶ − 3428209113496π⁴ + 1336912010480π² − 155807094720)) ≈ −0.64590724797262922190
a2 = (2624451163425π⁶ − 3428209113496π⁴)/(16(2124555703725π⁶ − 3428209113496π⁴ + 1336912010480π² − 155807094720)) ≈ 0.079473610232926783079
a3 = −124973864925π⁶/(16(2124555703725π⁶ − 3428209113496π⁴ + 1336912010480π² − 155807094720)) ≈ −0.0043617408329090447344

The fundamental frequency has amplitude 16991801282396160/(2124555703725π⁸ − 3428209113496π⁶ + 1336912010480π⁴ − 155807094720π²) ≈ 1.0000024810802368487. The largest relative amplitude of the harmonics above the fundamental is about −133.627 dB compared to the fundamental. A kth harmonic has relative amplitude (1 − (−1)^k)|162299057k⁶ − 16711400131k⁴ + 428526139187k² − 2624451163425|/(4424948250624k⁸).

Python source

from sympy import symbols, pi, solve, factor, binomial

numEq = 3 # Number of equations
numHarmonics = 6 # Number of harmonics to evaluate

a1, a2, a3, k = symbols("a1, a2, a3, k")
coefficients = [a1, a2, a3]
harmonicRelativeAmplitude = (2*pi**4*a1*k**4*(pi**2*k**2-12)+4*pi**2*a2*k**2*(pi**4*k**4-60*pi**2*k**2+480)+6*a3*(pi**6*k**6-140*pi**4*k**4+6720*pi**2*k**2-53760)+pi**6*k**6)*(1-(-1)**k)/(2*k**8*(2*pi**4*a1*(pi**2-12)+4*pi**2*a2*(pi**4-60*pi**2+480)+6*a3*(pi**6-140*pi**4+6720*pi**2-53760)+pi**6))

harmonicRelativeAmplitudes = []
for i in range(0, numHarmonics) :
    harmonicRelativeAmplitudes.append(harmonicRelativeAmplitude.subs(k, 3 + 2*i))

numCandidateEqs = 2**numHarmonics
numSignCombinations = 2**numEq
useHarmonics = list(range(numEq + 1))

bestSolution = []
bestRelativeAmplitude = 1
bestUnevaluatedRelativeAmplitude = 1
numSolutions = binomial(numHarmonics, numEq + 1)*2**numEq
solutionIndex = 0

for i in range(0, numCandidateEqs) :
    temp = i
    candidateNumHarmonics = 0
    j = 0
    while (temp) :
        if (temp & 1) :
            if candidateNumHarmonics < numEq + 1 :
                useHarmonics[candidateNumHarmonics] = j
            candidateNumHarmonics += 1
        temp >>= 1
        j += 1
    if (candidateNumHarmonics == numEq + 1) :
        for j in range(0,  numSignCombinations) :
            eqs = []
            temp = j
            for n in range(0, numEq) :
                if temp & 1 :
                    eqs.append(harmonicRelativeAmplitudes[useHarmonics[0]] - harmonicRelativeAmplitudes[useHarmonics[1+n]])
                else :
                    eqs.append(harmonicRelativeAmplitudes[useHarmonics[0]] + harmonicRelativeAmplitudes[useHarmonics[1+n]])
                temp >>= 1
            solution = solve(eqs, coefficients, manual=True)
            solutionIndex += 1
            print("Candidate solution %d of %d" % (solutionIndex, numSolutions))
            print(solution)
            solutionRelativeAmplitude = harmonicRelativeAmplitude
            for n in range(0, numEq) :                
                solutionRelativeAmplitude = solutionRelativeAmplitude.subs(coefficients[n], solution[0][n])
            solutionRelativeAmplitude = factor(solutionRelativeAmplitude)
            print(solutionRelativeAmplitude)
            solutionWorstRelativeAmplitude = 0
            for n in range(0, numHarmonics) :
                solutionEvaluatedRelativeAmplitude = abs(factor(solutionRelativeAmplitude.subs(k, 3 + 2*n)))
                if (solutionEvaluatedRelativeAmplitude > solutionWorstRelativeAmplitude) :
                    solutionWorstRelativeAmplitude = solutionEvaluatedRelativeAmplitude
            print(solutionWorstRelativeAmplitude)
            if (solutionWorstRelativeAmplitude < bestRelativeAmplitude) :
                bestRelativeAmplitude = solutionWorstRelativeAmplitude
                bestUnevaluatedRelativeAmplitude = solutionRelativeAmplitude                
                bestSolution = solution
                print("That is a new best solution!")
            print()

print("Best Solution is:")
print(bestSolution)
print(bestUnevaluatedRelativeAmplitude)
print(bestRelativeAmplitude)

This is a variation on Robert's answer, and is the route I eventually took. I'm leaving it here in case it helps anyone else.
Guest

wow, solving it analytically. i woulda just used MATLAB and an FFT and sorta hunt around for the answer.
you did very well.
robert bristow-johnson

2
actually @OlliNiemitalo, i think -79 dB is good enough for the implementation of a digital synth sine wave oscillator. it can be driven by a triangle wave, which is generated easily from the abs value of a sawtooth, which is most easily generated with a fixed-point phase accumulator.
no one will hear a difference between that 5th-order polynomial sine wave and a pure sine.
robert bristow-johnson

1
Polynomials in general as f have the advantage that by increasing the order, the error can be made arbitrarily small. Rational functions have the same advantage, but a division is typically more costly to compute than multiplication. For example in Intel i7, a single thread can do 7-27 times as many multiplications and additions than divisions in the same time. Approximating some alternative f means decomposing it to elementary ops, typically multiplications and additions which always amount to polynomials. Those could be optimized to approximate sine directly versus via f.
Olli Niemitalo

1
@OlliNiemitalo, I see what you mean... if division is that much slower than multiplication (and I guess things like roots / fractional exponents will be even worse), then an approach like the above with a "good, fast f0" is going to wind up factoring out to a Taylor-series-like-polynomial anyway. I guess since it's an approximation anyway, some kind of cheap root approximation could potentially overtake the polynomial approach at some level of accuracy, but that's kinda off in the weeds for what was essentially supposed to be a math question.
Guest

5

Are you asking this for theoretical reasons or a practical application?

Usually, when you have an expensive to compute function over a finite range the best answer is a set of lookup tables.

One approach is to use best fit parabolas:

n = floor( x * N + .5 );

d = x * N - n;

i = n + N/2;

y = L_0[i] + L_1[i] * d + L_2[i] * d * d;

By finding the parabola at each point that meets the values for d being -1/2, 0, and 1/2, rather than using the derivatives at 0, you ensure a continuous approximation. You could also shift the x value, rather than the array index to deal with your negative x values.

Ced
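A small check of the parabola fit described above: with c = y, b = yp − ym and a = 2(yp + ym − 2y), the parabola a·d² + b·d + c passes exactly through the sampled values at d = −1/2, 0, +1/2 (a sketch; the three sample points are arbitrary):

```python
import numpy as np

ym, y, yp = np.sin(0.4), np.sin(0.5), np.sin(0.6)   # any three samples

c = y
b = yp - ym
a = (yp + ym - 2.0 * y) * 2.0

parab = lambda d: a * d * d + b * d + c
print(abs(parab(-0.5) - ym))   # ~ 0 (hits the left sample)
print(abs(parab(0.0) - y))     # ~ 0 (hits the center sample)
print(abs(parab(0.5) - yp))    # ~ 0 (hits the right sample)
```

Because adjacent pieces share their boundary samples, this choice of coefficients is what makes the piecewise approximation continuous.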

=================================================

Followup:

The amount of effort, and the results, that have gone into finding good approximations is very impressive. I was curious as to how my boring and bland piecewise parabolic solution would compare. Not surprisingly, it does much better. Here are the results:

   Method    Minimum    Maximum     Mean       RMS
  --------   --------   --------   --------   --------
     Power   -8.48842    1.99861   -4.19436    5.27002
    OP S_3   -2.14675    0.00000   -1.20299    1.40854
     Bhask   -1.34370    1.63176   -0.14367    0.97353
     Ratio   -0.24337    0.22770   -0.00085    0.16244
     rbj 5   -0.06724    0.15519   -0.00672    0.04195
    Olli5C   -0.16367    0.20212    0.01003    0.12668
     Olli5   -0.26698    0.00000   -0.15177    0.16402
    Olli7C   -0.00213    0.00000   -0.00129    0.00143
     Olli7   -0.00005    0.00328    0.00149    0.00181
    Para16   -0.00921    0.00916   -0.00017    0.00467
    Para32   -0.00104    0.00104   -0.00001    0.00053
    Para64   -0.00012    0.00012   -0.00000    0.00006

The values represent 1000x the error between the approximation and the actual evaluated every .0001 from a scale of 0 to 1 (inclusive), so 10001 points in all. The scale is converted to evaluate the functions from 0 to π/2, except for Olli Niemitalo's equations which use the 0 to 1 scale. The columns values should be clear from the headers. The results don't change with a .001 spacing.

The "Power" line is the equation x − x^e/6.

The rbj 5 line is the same as Matt L's c5 solution.

The 16, 32, and 64 are the number of intervals that have parabolic fits. Of course there are insignificant discontinuities in the first derivative at each interval boundary. The values of the function are continuous though. Increasing the number of intervals only increases the memory requirements (and initialization time), it does not increase the amount of calculation needed for the approximation, which is less than any of the other equations. I chose powers of two because a fixed point implementation could save a division by using an AND in such cases. Also, I didn't want the count to be commensurate with the test sampling.

I did run Olli Niemitalo's python program and got this as part of the printout: "Candidate solution 176 of 120" I thought that was odd, so I am mentioning it.

If anybody wants me to include any of the other equations, please let me know in the comments.

Here is the code for the piecewise parabolic approximations. The entire test program is too long to post.

#=============================================================================
from numpy import zeros
from math import pi, sin

def FillParab( argArray, argPieceCount ):

#  y = a d^2 + b d + c

#  ym = a .25 - b .5 + c
#  y  =                c
#  yp = a .25 + b .5 + c

#  c = y
#  b = yp - ym
#  a = ( yp + ym - 2y ) * 2

#---- Calculate Lookup Arrays

        theStep = pi * .5 / float( argPieceCount - 1 )
        theHalf = theStep * .5

        theL0 = zeros( argPieceCount )
        theL1 = zeros( argPieceCount )
        theL2 = zeros( argPieceCount )

        for k in range( 0, argPieceCount ):
         x  = float( k ) * theStep

         ym = sin( x - theHalf )
         y  = sin( x )
         yp = sin( x + theHalf )

         theL0[k] = y
         theL1[k] = yp - ym
         theL2[k] = ( yp + ym - 2.0 * y ) * 2

#---- Do the Fill

        theN = len( argArray )

        theFactor = pi * .5 / float( theN - 1 )

        for i in range( 0, theN ):
         x  = float( i ) * theFactor

         kx = x / theStep
         k  = int( kx + .5 )
         d  = kx - k

         argArray[i] = theL0[k] + ( theL1[k] + theL2[k] * d ) * d

#=============================================================================
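For comparison with the table above, here is a condensed, self-contained numpy version of the same piecewise-parabolic fit (a sketch; 32 pieces should reproduce the Para32 row, i.e. a maximum error around 1e-6, which is 0.001 after the 1000x scaling used in the table):

```python
import numpy as np

def fill_parab(n_points, piece_count):
    """Piecewise-parabolic sine approximation on [0, pi/2], vectorized."""
    step = np.pi * 0.5 / (piece_count - 1)
    half = step * 0.5
    xc = np.arange(piece_count) * step                  # piece centers
    ym, y, yp = np.sin(xc - half), np.sin(xc), np.sin(xc + half)
    L0, L1, L2 = y, yp - ym, (yp + ym - 2.0 * y) * 2.0  # c, b, a per piece

    x = np.arange(n_points) * (np.pi * 0.5 / (n_points - 1))
    kx = x / step
    k = np.round(kx).astype(int)                        # nearest piece
    d = kx - k                                          # offset in [-1/2, 1/2]
    return x, L0[k] + (L1[k] + L2[k] * d) * d

x, approx = fill_parab(10001, 32)
err = np.max(np.abs(approx - np.sin(x)))
print(err)   # ~ 1e-6
```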

=======================================

Addendum

I have included Guest's S3 function from the original post as "OP S_3" and Guest's two parameter formula from the comments as "Ratio". Both are on the 0 to 1 scale. I don't think the Ratio one is suitable for either calculation at runtime or for building a lookup table. After all, it is significantly more computation for the CPU than just a plain sin() call. It is interesting mathematically though.


Good work! I fixed that bug ("176 of 120").
Olli Niemitalo

Nice update, this makes more sense to me now. The x − x^e/6 probably doesn't need to be tested, I just threw it out there because I was trying to figure out the significance of e, which seemed to keep popping up while I was playing with this. A better rational expression to test might be something like this: f0(x) = |x|^a·sign(x); b = f0′(1); f1(x) = f0(x) − bx; c = 1/f1(1); f2(x) = f1(x)·c ... now a should be set to about 2 2/3...
Guest

...or f0(x) can be pretty much any other odd-symmetrical function; sigmoids seem to work well, like (a^x − 1)/(a^x + 1) (but then the right value for a needs to be found, of course). Here's a plot... as Olli mentions, this probably isn't practical for on-the-fly computation, but I guess it could be useful for building a lookup table.
Guest

Or a more accurate 2-param version of that, (a0^x − a1^x)/(a0^x + a1^x), looks pretty good with a0 ≈ 13 and a1 ≈ 1.09
Guest
Licensed under cc by-sa 3.0 with attribution required.