Beautiful results in TCS


29

Recently, a friend of mine (working in TCS) mentioned in a conversation that he wanted to see all (or as many as possible) of the beautiful results in TCS in his lifetime. This made me curious about the beautiful results in this area, and hence motivated the following question:

Which results (or ideas) in theoretical computer science do you find beautiful? It would be great if you also said why. [It is fine even if the ideas are rooted in mathematics, as long as they have attracted interest and found use in TCS.]

I will start with Cantor's diagonal argument as an answer, because it is a simple, elegant, and powerful result.


2
Almost a duplicate of this question (but only nearly, since algorithms are a proper subset of TCS).
Jeffε

3
If this is a good question in its current form, please see Good Subjective, Bad Subjective.
Kaveh

5
This should at least be CW.
Suresh Venkat

1
Perhaps we can change the question to focus on non-algorithmic results; the other thread is about algorithms.
Vijay D

4
Lance Fortnow's blog lists "favorite theorems" for each decade. There are quite a few beautiful results in those lists.
MCH

Answers:


21

The undecidability of the halting problem.

Beautiful for many reasons. It is an impossibility result. The proof uses diagonalization. The statement applies to a wide variety of models of computation. It can be formulated in various ways, in particular using standard programming languages. It was a watershed result in the history of computing. Extending this statement leads to Rice's theorem, Turing degrees, and many other great results. Etc., etc., etc.


17

In my opinion, the Curry-Howard correspondence is one of the most beautiful theoretical results, and the one that pushed me into research.

The idea that two kinds of systems, programs on the one hand and proofs on the other, have exactly the same structure is almost philosophical in nature: might there be some general "patterns of reasoning" at work?


Personally, I see the Curry-Howard correspondence as the canonical example of theories that were duplicated because of different contexts even though they have the same mathematical content. It should be seen as an embarrassment for people who fail to recognize existing structures and reinvent the wheel.
Ludovic Patey

11
I disagree entirely. If Curry-Howard is about feeble people duplicating work, then so is much of modern mathematics, especially results relating structures in combinatorics, algebra, and topology.
Vijay D

You are right in that mathematics essentially consists of finding correlations between structures, and that a correlation, by definition, reveals some duplication in at least parts of the theories. To be consistent, I would have to conclude that mathematics is inherently an embarrassment, because if we could see the duplications, the theorems would be obvious and mathematics useless. ^^
Ludovic Patey

Turingoid: I agree. I have come to similar conclusions (about reinventing the wheel) when working with the symmetry concept. It is really a shame that we are unable to work at the level of primary symmetry/asymmetry relations. IMO there will be a collapse of some current sciences into wider ones when we finally break through.
Mooncer

1
If only there were some way to automate the process.
Jeffε

17

The possibility of public-key cryptography, for example, Diffie-Hellman key exchange scheme.

It breaks the very strong preconception that people have to meet before exchanging secrets over an insecure channel.
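A minimal sketch of the idea (toy parameters only; real deployments use large, standardized safe primes and authenticated exchanges): each party combines its own secret exponent with the other's public value, and both arrive at the same key without it ever crossing the channel.

```python
import random

# Toy Diffie-Hellman key exchange. The prime below is a Mersenne prime,
# fine for a demo but far too small/structured for real cryptography.
p = 2**127 - 1
g = 3

a = random.randrange(2, p - 1)   # Alice's secret exponent
b = random.randrange(2, p - 1)   # Bob's secret exponent

A = pow(g, a, p)                 # Alice sends A over the insecure channel
B = pow(g, b, p)                 # Bob sends B over the insecure channel

# Each side combines its own secret with the other's public value:
alice_key = pow(B, a, p)         # (g^b)^a mod p
bob_key = pow(A, b, p)           # (g^a)^b mod p
assert alice_key == bob_key      # same shared secret, never transmitted
```

An eavesdropper sees only g, p, A, and B; recovering the key from these is the discrete-log/Diffie-Hellman problem.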


16

I was, and still am, surprised by Euclid's algorithm. To me, it is a testament to the power of human thought: that people could conceive of such an algorithm so early (around 300 BC, if I trust my memory).
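The algorithm fits in a few lines of modern notation; a sketch:

```python
def gcd(a, b):
    """Euclid's algorithm: the gcd is unchanged when the larger number
    is replaced by its remainder modulo the smaller one."""
    while b:
        a, b = b, a % b
    return a

# gcd(252, 105): 252 -> 105 -> 42 -> 21 -> 0, so the answer is 21
```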

Fast-forwarding, there is a mind-numbing amount of literature on the subject. I think Scott Aaronson's list should be helpful in this regard, though, as Aaronson himself says, it is not complete (and not entirely theoretical).


15

Yao's technique of using von Neumann's minimax theorem to prove lower bounds for randomized algorithms. I find it something out of this world.

The probabilistic method, for proving the existence of objects that we find difficult to construct, including the Lovász Local Lemma. These techniques are so simple, yet so powerful.

Madhu Sudan's coding-theory constructions using polynomials.

Expanders (which started off as Ramanujan graphs) and extractors, and their applications in pseudorandomness.

Cooley and Tukey's Fast Fourier Transform algorithm for computing the DFT. (Though, as Tukey surmised, this was a rediscovery of a well-known technique, known at least to Gauss!)

Barrington's theorem (a very surprising result in its time).

The parallel repetition theorem (though the result is nice, the proof is not easy).

The Lovász theta function for estimating the Shannon capacity of a graph.

The ellipsoid algorithm, which showed that LP is in P, surprising many at a time when some still suspected LP could be NP-complete.
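As one concrete sample from the list, a minimal sketch of the radix-2 Cooley-Tukey FFT: split the input into even- and odd-indexed halves, recurse, and combine with twiddle factors (input length assumed to be a power of two).

```python
import cmath

def fft(x):
    """Radix-2 Cooley-Tukey FFT of a sequence whose length is a power
    of two. Runs in O(n log n) versus O(n^2) for the naive DFT."""
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[::2]), fft(x[1::2])   # recurse on the two halves
    out = [0] * n
    for k in range(n // 2):
        # twiddle factor e^{-2*pi*i*k/n} rotates the odd half
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out
```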


The probabilistic method is not really a result; it's just an immediate feature of the definition of probability. For similar reasons it's hard to argue it is special to TCS (despite there being a book of the same name).
Lembik

14

Surprisingly, one of the most obvious answers has not been added yet. Sometimes one works too much with something to see it impartially. The theory of NP-completeness was launched by Cook/Levin and immediately amplified by Karp, who gave an early indication of its ubiquity, even more prescient in retrospect. In many ways this is the birth of modern TCS and complexity theory, and its core question, P =? NP, is still open after four decades of intense study. P =? NP carries a $1M Clay Mathematics Institute prize for its solution.

Cook's proof introduced the NDTM, which is apparently not a mere theoretical curiosity but a fundamental part of TCS; it launched a thousand ships, so to speak. Moreover, the question continually resists efforts via one of the other key TCS techniques mentioned in this list, diagonalization, as seen e.g. in the Baker-Gill-Solovay (1975) oracle/relativization results, suggesting that there must be something exotic and different about any possible solution; this was further suggested and expanded by the Razborov-Rudich natural proofs paper (2007 Gödel Prize).

There are many, many references on the subject, but a more recent one with some firsthand account of the history is "The P=?NP Question and Gödel's Lost Letter" by R. J. Lipton.


Actually, NDTM's already appear in Turing's 1936 paper as "choice machines"; see Wikipedia.
Jeffε

1
Oops, ok, thanks for the correction. Anyway, the Cook paper is maybe the first to show the NDTM is much different from a DTM in a complexity-theory sense.
vzn

Oops! Was just about to post this. I was also surprised it wasn't posted immediately.
Andrew D. King

14

Kolmogorov Complexity and the incompressibility method.

The incompressibility method, based on Kolmogorov complexity, provides a new and intuitive way of formulating proofs. In a typical proof using the incompressibility method, one first chooses an incompressible object from the class under discussion. The argument invariably says that if the desired property does not hold, then, contrary to the assumption, the object can be compressed, and this yields the required contradiction.

See, for example, the proof that there are infinitely many primes, the alternative proof of Gödel's incompleteness theorem, or the connections between Kolmogorov complexity and computational complexity.


11

I was (and still am) astonished by Kleene's Second Recursion Theorem. On the surface, it seems simple and not very useful but I later found out it's deep both mathematically and philosophically.

When I also read about the variant proven for Turing machines (very, very informally: machines can obtain their own descriptions, or equivalently, there are machines that output their own description, like a program that prints itself), I felt my brain twist hard, yet intrigued like never before. Then you see how the theorem gives one-line proofs of the undecidability of the halting problem, the unrecognizability of minimal machines, etc.
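One concrete instance is a quine, a program that prints its own source code. A minimal Python sketch (comments omitted inside the block so that the output matches the source exactly):

```python
s = 's = {!r}\nprint(s.format(s))'
print(s.format(s))
```

Running it prints exactly the two lines above: the string formats a representation of itself into itself, which is the fixed-point behavior the recursion theorem guarantees.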


11

Shannon's source and channel coding theorems.

A mathematical definition that distinguished between the transmitter, the receiver, and the medium, and that ignored the semantics of the message, was a big step. Entropy, in the context of data, is a fantastically useful notion. And information theory deserves to be better known than it is.
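A sketch of the entropy of a discrete distribution, in bits; a fair coin carries exactly one bit, a biased one strictly less:

```python
import math

def entropy(dist):
    """Shannon entropy H = -sum(p * log2(p)) of a discrete
    distribution, in bits; terms with p == 0 contribute nothing."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

fair = entropy([0.5, 0.5])      # exactly 1.0 bit
biased = entropy([0.9, 0.1])    # strictly less than 1 bit
```

By the source coding theorem, this quantity is the limit of lossless compression: no code can use fewer bits per symbol on average.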


Also note that Shannon almost single-handedly invented information theory in his seminal paper.
Alejandro Piad

11

A beautiful result building on the PCP theorem states that it is computationally hard (NP-hard) to satisfy more than 7/8 of the clauses of a 3SAT formula, even for satisfiable ones.
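For intuition on the 7/8 threshold: a clause over three distinct variables is falsified by a uniformly random assignment with probability (1/2)^3 = 1/8, so in expectation 7/8 of all clauses are satisfied. A quick simulation on hypothetical random instances (the instance generator here is just for illustration):

```python
import random

def random_3sat(num_vars, num_clauses):
    """Random clauses over 3 distinct variables; a literal is a
    (variable, sign) pair."""
    clauses = []
    for _ in range(num_clauses):
        chosen = random.sample(range(num_vars), 3)
        clauses.append([(v, random.choice([True, False])) for v in chosen])
    return clauses

def satisfied_fraction(clauses, assignment):
    # a clause is satisfied if any of its literals matches the assignment
    ok = sum(any(assignment[v] == sign for v, sign in c) for c in clauses)
    return ok / len(clauses)

random.seed(0)
clauses = random_3sat(50, 100000)
assignment = [random.choice([True, False]) for _ in range(50)]
frac = satisfied_fraction(clauses, assignment)
# frac concentrates near 7/8 = 0.875; Hastad's theorem says beating
# this bound, even slightly, is NP-hard in the worst case
```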


4
Even more amazing since 7/8 of the clauses can be satisfied quite trivially (by a random assignment or a greedy algorithm).
Jan Johannsen

1
This result isn't exactly the PCP theorem. It builds on the PCP theorem but needs much more work than that.
MCH

10

Shor's algorithm for factoring in BQP. In my opinion/memory, quantum computation was more of a theoretical curiosity until this result in 1994, at which point the literature and research interest in quantum computing exploded. It's still arguably one of the most important quantum algorithms known; it was awarded the 1999 Gödel Prize. It also reveals that factoring is, in a sense, somewhat better understood in the quantum setting than in classical computing, where e.g. the question of whether factoring is NP-complete is still open.


1
note that factoring being NP-complete would be a big shock, as it would imply coNP = NP
Sasho Nikolov

2
I would put Simon's algorithm together with Shor's.
Juan Bermejo Vega

10

The AKS polynomial-time primality test seems to me quite beautiful in various senses: a breakthrough at the time, one of the great but rather rare breakthroughs in complexity theory in our lifetimes. It solves a problem dating back to Greek antiquity and relates to some of the earliest algorithms invented (the Sieve of Eratosthenes), i.e., identifying primes efficiently. It is a constructive proof that primality detection is in P, as opposed to many great proofs that are unfortunately nonconstructive.

It is connected to the RSA cryptosystem mentioned in another answer, because that algorithm needs to find large primes quickly; prior to AKS this was only possible probabilistically. It is also fundamentally connected to number theory and other deep problems, e.g. the Riemann hypothesis, which in many ways is the original realm of algorithmics.

It was awarded the 2006 Gödel Prize and the 2006 Fulkerson Prize.


3
This is definitely an important result, but beautiful? Really?
Jeffε

I agree with the above comment by JeffE. The result is vastly significant, and that is what the answer points out, rather than how (or which ideas in) AKS primality testing are beautiful.
Nikhil

to me a "vastly significant" result is beautiful. "your mileage may vary".
vzn

7
Miller-Rabin is quite beautiful, on the other hand
Sasho Nikolov
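A minimal sketch of the Miller-Rabin test mentioned above (randomized: each round catches a composite with probability at least 3/4, so the error decays geometrically with the number of rounds):

```python
import random

def miller_rabin(n, rounds=20):
    """Probabilistic primality test. Returns False if n is certainly
    composite, True if n is probably prime (error <= 4**-rounds)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7):
        if n % p == 0:
            return n == p
    # write n - 1 = d * 2^s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness that n is composite
    return True
```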

1
Don't know why people would consider the probabilistic algorithm superior in beauty to the exact one. Yes, AKS builds on earlier randomized primality tests, but it is a major advance, removing the randomization that was missed (or perhaps not seen as possible) for decades and finally found. To me that's beautiful. Moreover, number theory is just a beautiful area of mathematics/algorithmics (with the theory of primes starring in it); this perspective can be seen e.g. in the famous book "A Mathematician's Apology" by G. H. Hardy.
vzn

10

I think the graph minor theorem of Robertson and Seymour is among the most wonderful theories I have ever seen (and partially read). First of all, it is quite complicated, but the basic conjectures are not hard, and perhaps anyone working in TCS could guess them. Their extreme effort to prove them was wonderful. In fact, after reading some of the papers in that series, I came to appreciate the power of the human mind.

The graph minor theorem has also had a great impact on different fields of TCS: graph theory, approximation algorithms, parameterized algorithms, logic, ...


9

One of my favourite family of results is that various problems of a seemingly infinite nature are decidable.

  1. The first-order theory of real closed fields is decidable (by Tarski). Euclidean geometry is also a model of the axioms of real closed fields; hence, by Tarski, first-order statements in this model are decidable.
  2. Presburger arithmetic is decidable.
  3. The first-order theory of algebraically closed fields (this includes the complex numbers) is decidable.
  4. Monadic second-order logic over infinite (and finite) words is decidable. The proof is elegant and can be taught to undergrads.

8

There are lots of lovely results about probabilistic algorithms, which are deceptively simple and a great step forward in the way we think about computation.

von Neumann's trick for implementing a fair coin with a biased one. We are so used to probabilistic algorithms now, but from an outside perspective this is unbelievably cool. Both the algorithm and the proof are accessible to anyone who knows high-school probability.
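A sketch of the trick: flip the biased coin twice; output heads on HT, tails on TH, and retry on HH or TT. The two accepted outcomes both occur with probability p(1 - p), so the output is exactly fair regardless of the (unknown) bias p.

```python
import random

def biased_coin(p):
    """A coin that comes up heads (True) with probability p."""
    return random.random() < p

def fair_coin(p):
    """von Neumann's trick: the pairs HT and TH are equally likely
    (each has probability p*(1-p)), so conditioning on a != b
    yields an unbiased bit. Expected flips: 1 / (p*(1-p))."""
    while True:
        a, b = biased_coin(p), biased_coin(p)
        if a != b:
            return a
```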


I would have expected you to mention Yao's minimax principle for proving lower bounds on the expected running time of Las Vegas algorithms. It connects ideas from game theory with probability and algorithms.
karthik

Sure. But I'm spamming this question with enough answers already. Please add your favourite result as an answer.
Vijay D

8

The result by Tim Griffin that control operators such as call/cc are related to classical logic, extending the Curry-Howard correspondence.

Basically, the typing of call/cc is such that if E has type ¬¬τ, then call/cc(E) has type τ. This works out when interpreting the type ¬τ as τ → ⊥, as is standard in logic: it is the type of a function that takes a τ and never returns. That is exactly what a continuation does, which makes the correspondence work.

His paper, "A Formulae-as-types notion of control", appears in POPL 1990.


7

My favorite is Rabin's linear-time algorithm for computing the closest pair of points in the plane (or, more precisely, its simplification). It gets across the importance of the model of computation, the power of randomized algorithms, and an elegant way of thinking about randomized algorithms.

That said, CS is still far from achieving the level of elegance one encounters in mathematics (well, they had a 5000-year head start): from basic definitions/results in calculus, to topology (fixed-point theorems), combinatorics, and geometry (the Pythagorean theorem, http://en.wikipedia.org/wiki/File:Pythag_anim.gif), etc.

If you look for beauty, look for it everywhere...


5

This result is probably a bit recent to qualify as fundamental, but I believe the types-as-homotopy-types interpretation qualifies. This view allows interpreting types from constructive type theory as spaces with certain geometric structure, in this case homotopy types.

I find this point of view particularly beautiful, as it makes certain previously complex observations about type theory simple, for instance the fact that "axiom K" is not derivable.

An overview of this budding field by Steve Awodey can be found here.


2

Zero-knowledge proofs are a very interesting concept. They allow one entity, the prover, to prove (with high probability) to another entity, the verifier, that it knows "a secret" (a solution to some NP problem, a modular square root of some number, a discrete log of some number, etc.) without giving away any information at all about the secret. This is surprising at first glance: the obvious way to prove that you know a secret is to reveal it, and any communication that could convince the verifier that you know the secret can, a priori, only increase the verifier's knowledge about it.
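As a toy illustration (tiny, insecure demo parameters, chosen here just for the example), a Schnorr-style interactive proof that the prover knows the discrete log x of y = g^x mod p. The verifier ends up convinced, yet the transcript (t, c, s) reveals nothing about x, since such transcripts can be simulated without knowing x.

```python
import random

# Demo group: the subgroup of order q = 11 generated by g = 2 in Z_23*.
# Real protocols use groups of ~256-bit prime order.
p, q, g = 23, 11, 2
x = 7                 # prover's secret
y = pow(g, x, p)      # public key, known to the verifier

def prove_once():
    r = random.randrange(q)     # prover picks a random exponent
    t = pow(g, r, p)            # commitment sent to the verifier
    c = random.randrange(q)     # verifier's random challenge
    s = (r + c * x) % q         # prover's response
    # verifier checks g^s == t * y^c (mod p):
    # g^(r + c*x) = g^r * (g^x)^c, so a prover who knows x always passes
    return pow(g, s, p) == (t * pow(y, c, p)) % p
```

A cheating prover who does not know x can answer only one challenge per commitment, so repeating the rounds drives its success probability down exponentially.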

Licensed under cc by-sa 3.0 with attribution required.