A prime solution

Published: Aug 17, 2002 00:00 IST

A team of young researchers at the Indian Institute of Technology, Kanpur, solves a problem in number theory and computer science that had eluded the best of minds for decades.

PRIME numbers are simply and precisely defined. An integer greater than one is prime if it is divisible only by itself and by 1. If an integer is not prime, it is composite. Primes - 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41 and so on - are the building blocks of positive integers. The Fundamental Theorem of Arithmetic states that every integer greater than one is a product of prime numbers in one and only one way, except for the order of the factors. For example, 60 = 2² x 3¹ x 5¹ and 504 = 2³ x 3² x 7, and uniquely so. The prime factors of a number determine its properties. Thus, primes occupy a central place in number theory (the study of properties of numbers). Yet, they are terribly elusive.
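As an illustration (a sketch added here, not part of the IIT-K work), the factorisations above can be recovered by simple repeated trial division; the function name prime_factors is an arbitrary choice:

```python
# A minimal illustrative sketch (not from the AKS paper): factorise a positive
# integer into primes by repeated trial division.
def prime_factors(n):
    """Return the prime factorisation of n as (prime, exponent) pairs."""
    factors = []
    d = 2
    while d * d <= n:
        if n % d == 0:
            count = 0
            while n % d == 0:
                n //= d
                count += 1
            factors.append((d, count))
        d += 1
    if n > 1:                 # whatever remains is itself a prime factor
        factors.append((n, 1))
    return factors

print(prime_factors(60))      # [(2, 2), (3, 1), (5, 1)]  -> 60 = 2^2 x 3 x 5
print(prime_factors(504))     # [(2, 3), (3, 2), (7, 1)]  -> 504 = 2^3 x 3^2 x 7
```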

A fundamental problem in mathematics is: given a number n, how do we know whether it is prime or composite? This naturally leads to further questions: how do we go about finding primes, and once we believe that we have found a prime, how do we prove that it is truly a prime? Carl Friedrich Gauss (1777-1855), one of the greatest mathematicians, wrote in his Disquisitiones Arithmeticae (1801): "The dignity of the science itself seems to require that every possible means be explored for the solution of a problem so elegant and so celebrated."

Since the late 1960s, with the advent of computers, there has been a shift of focus from finding a mathematical formula for primes to devising an algorithm (a recipe of steps) that can test the primality of a given number, however large, in an efficient manner. The problem has received immense attention in recent years owing to the widespread use of prime numbers in encryption schemes such as the RSA algorithm. (The problem of factorising a composite number into its prime factors is a much harder one.) Primality testing thus became an important problem in theoretical computer science. If an algorithmic solution is found, an executable computer programme can be written. From the perspective of theoretical computer science, therefore, two things are required. One, a certificate or proof that the algorithm does give the correct answer. Two, a measure of the efficiency of the algorithm - namely, how well the algorithm uses computer resources, such as time (the number of steps) and space (the memory utilised), as a function of the "input size" of the problem.

Consider the algorithm for ordinary multiplication. The input size for a given number n is the length of its encoding, that is, the number of bits or digits needed to write it down; up to a constant factor the two counts are the same. An N-digit number n is roughly of the size 10^N; equivalently, for any given number n, its number of digits N is roughly log n. As is easily seen, the number of steps involved in the simple school algorithm for multiplying two N-digit numbers is proportional to N². The number of steps is thus at most some power of N; that is, a polynomial in the input size. Therefore, the time taken by a computer programme written to execute the multiplication algorithm will also grow with the number of digits at most as some power of N.
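The following short Python sketch (added here for illustration; the names are arbitrary) makes the notion of "input size" concrete: the size of n is its digit count N, about log n, and schoolbook multiplication of two N-digit numbers uses about N² single-digit products.

```python
# Illustrative sketch: the input size of n is its digit count N (about log10 n),
# and schoolbook multiplication of two N-digit numbers takes about N*N
# single-digit multiplications -- polynomial in the input size.
import math

def digit_count(n):
    return len(str(n))

def schoolbook_steps(a, b):
    # every digit of a multiplies every digit of b
    return digit_count(a) * digit_count(b)

n = 123456789
print(digit_count(n), round(math.log10(n), 2))   # 9 digits, log10(n) ~ 8.09

for N in (2, 4, 8):
    x = 10 ** N - 1                               # an N-digit number
    print(N, schoolbook_steps(x, x))              # steps grow as N squared
```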

Contrast this with the school method for primality testing. To test the primality of small numbers, say all those less than 10^10, one of the most efficient algorithms is the ancient one attributed to the Greek mathematician Eratosthenes (circa 240 B.C.). To test n for primality, just divide by all the primes less than the square root of n. For example, to determine whether 211 is a prime, we just divide by 2, 3, 5, 7, 11 and 13. The number of steps involved grows as the number of divisions that one has to do, namely √n, or n^(1/2). In terms of the input size - the number of digits N, which is approximately log n - the number of steps n^(1/2) is nothing but [10^N]^(1/2), that is, 10^(N/2). In other words, the number of steps grows exponentially with the input size. So if the number n has, say, 100 digits, the algorithm could take longer than one's lifetime to return an answer. One can easily imagine input sizes for which programmes would be running for ever!
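The exponential behaviour is easy to see in code. Below is a sketch (added here; for simplicity it divides by every integer up to √n rather than only by primes) of the trial-division test described above; the number of divisions grows like √n, that is, like 10^(N/2) for an N-digit number.

```python
# Illustrative sketch of trial division: about sqrt(n) divisions, i.e. roughly
# 10**(N/2) steps for an N-digit n -- exponential in the input size N.
def is_prime_trial_division(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:         # try every candidate divisor up to sqrt(n)
        if n % d == 0:
            return False
        d += 1
    return True

print(is_prime_trial_division(211))   # True: no divisor up to 14 works
print(is_prime_trial_division(221))   # False: 221 = 13 x 17
```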

Scientists at the Computer Science and Engineering (CSE) department of the Indian Institute of Technology, Kanpur (IIT-K), have solved an outstanding problem of number theory and computer science - to obtain a deterministic and unconditional algorithm for primality testing in which the number of steps, equivalently the running time of the programme, grows at most as a power of the input size; that is, as a "polynomial" in the input size. Manindra Agrawal, a CSE professor, and his two B.Tech students, Neeraj Kayal and Nitin Saxena, have discovered this "polynomial time" algorithm that had eluded the best of minds in the game for decades. Posted on the IIT-K web site on August 6, the paper is yet to be published but has already been scrutinised by top scientists in the field, including Shafi Goldwasser of the Massachusetts Institute of Technology (MIT), who called it the best result in theoretical computer science (TCS) in the last 10 years, and Carl Pomerance of Bell Labs, who is credited with one of the algorithms currently in use and who has called the algorithm "beautiful".

There are several algorithms in vogue for testing primality, but they are either probabilistic (with small probabilities for returning a composite number as a prime or failing on a prime number) or conditional (like using some unproven hypothesis), or deterministic and unconditional but working in non-polynomial time. More pertinently, Agrawal and company also give a certificate or proof that the algorithm works in polynomial time and always returns correct answers. The work has been hailed as the most outstanding work in theoretical computer science to emerge from India. "In some sense, Manindra is competing with himself as the last greatest result in TCS from India is also his," points out Jaikumar Radhakrishnan of the Tata Institute of Fundamental Research (TIFR), Mumbai.

"The proof is simple, elegant and beautiful," points out R. Balasubramanian, director of the Institute of Mathematical Sciences (IMSc), Chennai. "It does not use deep mathematics except for a 15-year-old result. If Manindra and company had not done it, I wonder if anybody else would have done it. It is not a simple extension of somebody else's result, which is what everyone else in the field has been doing. Even though the ideas are simple they are very ingenious," Balasubramanian adds.

All the earlier approaches have started with the basic equation satisfied by all primes, known as Fermat's Little Theorem: if n is prime, then for every number a not divisible by n, a^(n-1) leaves a remainder of 1 when divided by n. In 1976, G.L. Miller used Fermat's Little Theorem, assumed the validity of the as yet unproven Extended Riemann Hypothesis (ERH), and obtained a deterministic polynomial-time algorithm. This was modified in 1980 by M.O. Rabin, who introduced a probability function (random coin tosses) and obtained a much more efficient, unconditional (that is, not using any unproven hypothesis like the ERH) but randomised polynomial-time algorithm. If the number was a prime, the Rabin algorithm always gave the correct answer, but if it was composite, it sometimes declared it to be prime. However, the Miller-Rabin algorithm is a very efficient test for practical real-life implementation in areas such as cryptography because the level of error is extremely small and the probability of getting a correct answer far outweighs the error probability. Since then a number of randomised polynomial-time algorithms have emerged. Indeed, it seemed that if randomisation had to be given up, one had to invoke some condition such as the ERH.
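For the record, here is a sketch of the Miller-Rabin test in its standard textbook form (the details differ slightly from the original papers; the function name and the choice of 20 rounds here are arbitrary). For a prime input it always answers correctly; for a composite input each random round fails to detect compositeness with probability at most 1/4.

```python
# Sketch of the randomised Miller-Rabin primality test (textbook form).
import random

def miller_rabin(n, rounds=20):
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    # write n - 1 = 2**s * d with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False              # definitely composite
    return True                       # prime with overwhelming probability

print(miller_rabin(2**89 - 1))   # True: 2^89 - 1 is a (Mersenne) prime
print(miller_rabin(561))         # False: 561 = 3 x 11 x 17, a Carmichael number
```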

A breakthrough came in 1983 when L.M. Adleman, C. Pomerance and R.S. Rumely (APR) gave a deterministic and unconditional algorithm, but in almost polynomial time - it gave answers in about N^(log log N) steps - while all the earlier deterministic and unconditional algorithms required exponential time. This was, however, much less efficient than the Miller-Rabin algorithm. It was improved in 1984 by H. Cohen and A.K. Lenstra to yield the primality of a 100-digit number in a matter of seconds. The next big leap in primality testing came in 1986 with the work of S. Goldwasser and J. Kilian, who gave a randomised algorithm using some properties of a class of curves known as elliptic curves. Adleman and M.-D. Huang improved upon the Goldwasser-Kilian algorithm to derive a randomised polynomial-time algorithm that also always produced a proof of the correctness of the primality test.

Given the trend in the field during the 1980s, the general belief in the community was that there should exist a polynomial-time algorithm for proving primality, but that finding it was going to be involved and difficult. The greatness of the Agrawal-Kayal-Saxena (AKS) algorithm is that it is relatively simple, not requiring fancy mathematics of elliptic curves and the like. It is deterministic and relies on no unproven assumptions. The reason for the AKS algorithm's success is simple. Others believed that the elliptic curve method was the only viable approach to the problem, and, as in any other field, the newer algorithms were basically incremental advances over the Goldwasser-Kilian approach. AKS chose to abandon the well-trodden approach altogether and explored new lines of attack on the problem using simple concepts of algebra.

The key to AKS' result is a new, generalised version of Fermat's Little Theorem - a polynomial equation rather than an equation between numbers. The equation is so constructed that it is satisfied (the difference between its two sides vanishes modulo n) exactly when the number n, whose primality is being tested, is a prime. The number n occurs as the exponent in the polynomial constructed by AKS. Agrawal had the conviction that the solution to the problem lay in algorithmically computing the equation for the given number n and showing that it vanishes if n is a prime. He had been carrying this conviction for four years, since he used this property in 1999 to give a randomised algorithm together with his thesis adviser Somenath Biswas of CSE, IIT-K.
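The identity in question is that, for a number a having no common factor with n, the polynomials (x + a)^n and x^n + a leave the same remainder on division by n, coefficient by coefficient, exactly when n is prime. The sketch below (added here for illustration) checks this directly for small numbers; expanding all n + 1 coefficients in this way is exactly the brute-force computation that becomes hopeless for large n.

```python
# Sketch of the polynomial form of Fermat's Little Theorem:
# n is prime  <=>  (x + a)^n == x^n + a  (mod n), for gcd(a, n) = 1.
# Expanding all n + 1 coefficients is far too slow for large n -- this is the
# difficulty that the AKS reduction overcomes.
from math import comb, gcd

def fermat_polynomial_holds(n, a=1):
    assert gcd(a, n) == 1
    for k in range(n + 1):
        # coefficient of x^k in (x + a)^n is C(n, k) * a^(n - k)
        coeff = comb(n, k) * pow(a, n - k, n) % n
        expected = 1 if k == n else (a % n if k == 0 else 0)
        if coeff != expected:
            return False
    return True

print(fermat_polynomial_holds(13))   # True: 13 is prime
print(fermat_polynomial_holds(15))   # False: 15 is composite
```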

Indeed, Agrawal had been talking about his idea at conferences and meetings, but no one, including mathematicians - who apparently use the trick that AKS finally came up with in the context of some analytical proofs in number theory - thought that the approach was feasible for a computational algorithm. The crux of the difficulty was that as the size of n increases, the number of quantities that need to be computed to demonstrate that the polynomial equation vanishes for prime n also increases enormously. The task was to reduce the problem to a manageable one. This is what AKS achieved by some clever tricks using concepts of undergraduate algebra and a 15-year-old theorem.

"I had a conjecture and I felt that if I could prove the conjecture, one should be able to reduce the problem and find a deterministic solution," says Agrawal. During 1999-2001, Agrawal and Kayal and Saxena did several experimental runs using the computer to see if the conjecture was true. "Initial results encouraged us, and this formed their B. Tech project report in August 2001," says Agrawal. Kayal and Saxena finished their B. Tech and joined Agrawal early this year to do their Ph.D in the same topic. In April 2002, some general intermediate results were obtained, including a deterministic algorithm invoking ETH. "They did not go home for summer," says Agrawal, who is all praise for the two youngsters. "We had not realised it then that we were so close, but in July we finally managed to prove it except for one thing which required a certain distribution of primes over a certain interval of integers to be valid. We are not so familiar with theorems in mathematics, but an Internet search revealed this 15-year-old theorem by Fouvry," says Agrawal. "But now I feel even that probably is not required."

The trick lay not in proving exactly the conjecture they had started out with but in constructing yet another modified polynomial and studying its algebraic properties. Basically, two clever strategies did the trick. First, reduce the problem by projecting the polynomial down to a smaller domain of numbers. Then, using some clever and simple algebraic manipulations, the computation with the polynomial over the small domain could be shown to be equivalent to computation over the large domain, a property for which Agrawal has coined the phrase 'lifting'. In short, the problem of computing a large number of quantities for a large number n₁ was reduced to computing quantities for another, smaller number n₂ related to n₁ through a simple algebraic equation.
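To give a flavour of the reduction, the sketch below (added here; it is not the paper's own pseudo-code, and the all-important choice of the small number r and of the range of values a that must be tested follows conditions in the AKS paper and is omitted) checks the congruence not with full polynomials but with polynomials of degree below r, reduced modulo x^r - 1 and modulo n.

```python
# Sketch of the reduced congruence (x + a)^n == x^n + a  (mod x^r - 1, n):
# all polynomials are kept at degree below r, so only r coefficients are ever
# stored, however large n is.  (Choosing r and the range of a is the hard part
# of the real algorithm and is not shown here.)
def polymul_mod(p, q, r, n):
    """Multiply two degree-below-r polynomials modulo (x^r - 1, n)."""
    out = [0] * r
    for i, pi in enumerate(p):
        if pi:
            for j, qj in enumerate(q):
                out[(i + j) % r] = (out[(i + j) % r] + pi * qj) % n
    return out

def polypow_mod(p, e, r, n):
    """Raise a polynomial to the power e modulo (x^r - 1, n) by repeated squaring."""
    result = [1] + [0] * (r - 1)
    while e:
        if e & 1:
            result = polymul_mod(result, p, r, n)
        p = polymul_mod(p, p, r, n)
        e >>= 1
    return result

def congruence_holds(n, a, r):
    lhs = polypow_mod([a % n, 1] + [0] * (r - 2), n, r, n)   # (x + a)^n
    rhs = [0] * r
    rhs[n % r] += 1                                          # x^n reduces to x^(n mod r)
    rhs[0] = (rhs[0] + a) % n
    return lhs == rhs

print(congruence_holds(13, 1, 5))   # True: 13 is prime
print(congruence_holds(15, 1, 7))   # False: 15 is composite, caught already at a = 1
```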

The algorithm has been shown to run in polynomial time, the number of steps growing at most as N¹², where N is the number of digits. For a special class of primes known as Sophie Germain primes, using a widely believed conjecture on their density, the time complexity comes down to about N⁶. Indeed, Agrawal believes that one should be able to do better than this and that the time taken should go as N⁴. In fact, by a simple argument Agrawal is able to reduce the dozen-line algorithm in the paper posted on the web to just four lines. But he still thinks that the proof is ugly even as it is being called "elegant and beautiful". "The most beautiful thing in the proof perhaps is the 'lifting' property of the polynomial functions. I had this idea for nearly six months," says Agrawal.

Understandably, AKS' work has caused considerable excitement around the world. Attempts are on to improve the result, and some improvements have already been posted on the net, according to Balasubramanian. "Already I know a few people are trying to simplify and generalise the result to see what best could be proved. I would be surprised if this elegant but powerful method does not lead to stronger results," says Balasubramanian. A much simplified proof has apparently been obtained by Balasubramanian himself. As for the certificate that the algorithm works and indeed produces answers correctly, the proof follows immediately from the very construction of the algorithm, says Agrawal.

One would recall the discovery of an exciting algorithm in the area of linear programming by another Indian scientist, Narendra Karmarkar, then working in the U.S. (and currently in Pune). In some sense, the present work is superior: the prime number problem is of greater complexity, and in terms of the polynomial running time achieved, the Karmarkar algorithm is much less efficient, according to computer science experts. Karmarkar's algorithm had been touted to generate a great deal of revenue, and the patent on it was expected to bring huge royalties for Bell Labs. In reality, however, the algorithm never found widespread application and the patent generated hardly any revenue.

What are the implications of the AKS algorithm? At present there is no application for it, says Agrawal, who took up the problem purely as an "intellectual challenge". He, however, believes that in the next few years its efficiency will be improved greatly by many people and it should outperform the currently popular algorithms for verifying primality in the context of cryptography, particularly in situations where large primes are needed and no error can be tolerated. How about a patent then? "Patenting an algorithm is messy. I was against it though the institute wanted it. Finally, we decided against it," says Agrawal.

What about the possible use of the technique to crack the unsolved and the much harder problem of factorisation of composite numbers in polynomial time? "I do not have any strong intuitive opinion on that. It is a new algebraic approach and has revealed rich structure to work with. (It is) certainly worth exploring and I shall put the boys on the job," says Agrawal. "At a very fundamental level, the problem is one of searching and verification, which is the domain of computational complexity in theoretical computer science," he says.

Algorithmic complexity theory classifies problems according to how difficult they are to solve. A problem is assigned to the class P (where P stands for 'polynomial time') if the number of steps needed to solve it is at most some power of the problem size. That is, the problem is easy, feasible and tractable. A problem is assigned to the class NP (where NP stands for 'non-deterministic polynomial time') if a proposed solution can be verified in a number of steps bounded by some power of the problem size. That is, verification of a solution is easy, feasible and tractable, even if finding one may not be. Primality testing, as mentioned before, is much easier than the problem of factorisation and has long been believed to be in P. Only now, with AKS' work, has it actually been shown to belong to P. Indeed, the AKS paper is titled "PRIMES is in P". The big (philosophical) question in theoretical computer science is whether every problem whose solutions are easy to verify is also easy to solve; that is, whether P=NP. Researchers, however, would like to believe that some problems will always remain intractable, and that P is not equal to NP. This has been posed as one of the millennium problems, with a big award attached to it.

Agrawal is a home-grown scientist who began his work on complexity theory under Biswas at IIT-K itself. A B.Tech product of IIT-K, he spent a few years at the Chennai Mathematical Institute (of the SPIC Foundation) after his Ph.D in 1991 and returned to IIT-K as a professor in 1996. Guwahati-born Kayal and Allahabad-born Saxena seem unfazed by all the excitement and are keen to continue working in complexity theory. "It was just simple algebra that we worked with. But there is so much to learn in mathematics and computer science," says Saxena.

Interestingly, both were part of the Indian Mathematics Olympiad Squad (INMOS) of 1997. "I had cracked a problem when I was given a problem to solve, which I was told no one had solved in the International Olympiad competition," says Saxena. Why did they not take up mathematics for their undergraduate studies? Peer pressure, they say. Computer science is perceived as a professional course that would fetch jobs. But their hearts are in mathematics, and they have figured out a way of doing mathematics - high-quality mathematics at that - while staying in computer science. While Kayal had decided to remain in India, Saxena had wanted to go abroad. It is pleasantly ironical that he did not get a scholarship to do his Ph.D abroad at the university of his choice. Now, barely one semester into their Ph.D, this work should qualify for their theses.
