First, I assume you've already heard of the Church-Turing thesis, which states that everything we call “computation” is something that can be done with a Turing machine (or any of the many other equivalent models). So a Turing-complete language is one in which any computation can be expressed. Conversely, a Turing-incomplete language is one in which there are some computations that cannot be expressed.
Well, that was not very informative. Let me give an example. There is one thing you cannot do in any Turing-incomplete language: write a simulator for a Turing machine (otherwise you could encode any computation on the simulated Turing machine).
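To make this concrete, here is a minimal sketch of such a simulator in OCaml (the representation and the names are my own choices for illustration, not anything standard). The key ingredient is the last function: an unbounded recursion that keeps stepping the machine until it halts, which may be never. That open-ended loop is exactly what a Turing-incomplete language denies you.

    (* A minimal Turing machine simulator sketch. The tape is the list of
       symbols to the left of the head (nearest first), the scanned symbol,
       and the list of symbols to the right. 0 is the blank symbol. *)
    type symbol = int
    type state = Accept | Reject | Q of int
    type direction = Left | Right

    (* The transition table: from a state number and the scanned symbol,
       produce the next state, the symbol to write, and the move. *)
    type machine = { delta : int -> symbol -> state * symbol * direction }

    let step m q (left, scanned, right) =
      match q with
      | Q i ->
          let q', written, dir = m.delta i scanned in
          let tape' =
            match dir with
            | Left ->
                (match left with
                 | [] -> ([], 0, written :: right)
                 | s :: ls -> (ls, s, written :: right))
            | Right ->
                (match right with
                 | [] -> (written :: left, 0, [])
                 | s :: rs -> (written :: left, s, rs))
          in
          (q', tape')
      | _ -> (q, (left, scanned, right))

    (* The unbounded recursion below is the crucial part: nothing
       guarantees that [run] ever stops. *)
    let rec run m q tape =
      match q with
      | Accept -> true
      | Reject -> false
      | Q _ -> let q', tape' = step m q tape in run m q' tape'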
Well, this is still not very informative. The real question is: what useful program cannot be written in a Turing-incomplete language? Well, no one has come up with a definition of a “useful program” that includes all the programs anyone has ever written for a useful purpose, and that does not include all Turing machine computations. So designing a Turing-incomplete language in which you can write all useful programs is still a very long-term research goal.
Now there are several very different kinds of Turing-incomplete languages out there, and they differ in what they cannot do. However, there is a common theme. If you are designing a language, there are two main ingredients that make it Turing-complete:

- programs must be able to run for an unbounded amount of time (arbitrary loops or recursion);
- programs must be able to use an unbounded amount of memory (for example, dynamic allocation or data structures that can grow without limit).

Take away either one and the language is no longer Turing-complete; each of the examples below restricts at least one of them.
Let us look at a few examples of Turing-incomplete languages, which some people might nevertheless call programming languages.
Early versions of FORTRAN did not have dynamic memory allocation. You had to figure out in advance how much memory your calculation would need and allocate it. Despite this, FORTRAN was once the most widely used programming language.
An obvious practical limitation is that you must predict your program's memory requirements before it starts running. This can be difficult, and it can be impossible if the size of the input is not bounded in advance. At the time, the person feeding in the input was often the person who had written the program, so this did not matter so much. But that is not true of most programs written today.
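To show the contrast (in OCaml for consistency with the later snippets, rather than in actual early FORTRAN; the names are illustrative):

    (* FORTRAN-style static allocation: the buffer size is fixed when the
       program is written. If the input ever has more than 1000 entries,
       the program simply cannot handle it. *)
    let static_buffer = Array.make 1000 0.0

    (* Dynamic allocation: the size is only known at run time, after
       reading the input. Early FORTRAN had no way to express this. *)
    let read_input () =
      let n = int_of_string (input_line stdin) in
      Array.make n 0.0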
Coq is a language designed for proving theorems. Now, proving theorems and running programs are very closely related, so you can write programs in Coq in the same way that you prove a theorem. Intuitively, a proof of the theorem “A implies B” is a function that takes a proof of theorem A as an argument and returns a proof of theorem B.
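This reading of proofs as programs is the Curry-Howard correspondence. A minimal sketch in OCaml, where a type stands for a proposition and a well-typed value for a proof of it (OCaml's types are too weak to state interesting theorems, but the shape is the same):

    (* Modus ponens: from a proof of "A implies B" and a proof of A,
       obtain a proof of B. *)
    let modus_ponens (f : 'a -> 'b) (x : 'a) : 'b = f x

    (* "A and B" implies A: project out the first component. *)
    let and_elim_left ((a, _b) : 'a * 'b) : 'a = a

    (* Implication is transitive: if A implies B and B implies C,
       then A implies C. *)
    let implies_trans (f : 'a -> 'b) (g : 'b -> 'c) (x : 'a) : 'c = g (f x)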
Since the purpose of the system is to prove theorems, you cannot let the programmer write arbitrary functions. Imagine that the language allowed you to write a silly recursive function that just calls itself (pick the line in your favorite language):
    theorem_B boom (theorem_A a) { return boom(a); }
    let rec boom (a : theorem_A) : theorem_B = boom (a)
    def boom(a): boom(a)
    (define (boom a) (boom a))
You cannot let the mere existence of such a function convince you that A implies B, or else you could prove anything, not just true theorems! So Coq (and similar theorem provers) forbid arbitrary recursion. When you write a recursive function, you must prove that it always terminates, so that whenever you run it on a proof of theorem A, you know it will build a proof of theorem B.
The immediate practical limitation of Coq is that you cannot write arbitrary recursive functions. Since the system must be able to reject all non-terminating functions, the undecidability of the halting problem (or, more generally, Rice's theorem) ensures that some terminating functions are rejected as well. An added practical difficulty is that you have to help the system prove that your function does terminate.
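To get a feel for the termination requirement, compare these two recursive functions (sketched in OCaml, which checks neither; in a Coq-style system the first shape is accepted automatically, while the second needs a hand-supplied termination argument):

    (* Structural recursion: each call peels one element off the list, so
       a checker can see mechanically that the recursion terminates. *)
    let rec length = function
      | [] -> 0
      | _ :: tl -> 1 + length tl

    (* Division by repeated subtraction terminates whenever y > 0, but the
       recursive argument (x - y) is not a structurally smaller piece of x,
       so a purely structural check rejects it. The programmer must supply
       the missing argument: x decreases strictly and stays non-negative. *)
    let rec div x y =
      if x < y then 0 else 1 + div (x - y) y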
A lot of research is currently going into making such proof systems more usable as programming languages, without compromising their guarantee that if you have a function from A to B, it is as good as a mathematical proof that A implies B. Extending the system to accept more terminating functions is one research topic. Other directions include coping with real-world concerns such as input/output and concurrency. Another challenge is making these systems accessible to mere mortals (and perhaps convincing mere mortals that they are in fact accessible).
Synchronous programming languages are languages designed for programming real-time systems, that is, systems in which the program must respond in less than n clock cycles. They are mainly used for critical systems such as vehicle control or alarms. These languages provide strong guarantees on how long a program will take to run and how much memory it may allocate.
Of course, the counterpart of such strong guarantees is that you cannot write programs whose memory consumption and running time you are unable to predict in advance. In particular, you cannot write a program whose memory consumption or running time depends on the input data.
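To see how a language can enforce such bounds: the usual shape of a synchronous program (in languages such as Lustre or Esterel) is a step function that runs once per clock tick over a fixed amount of state. A rough sketch of the idea in OCaml, with made-up names:

    (* One node of a synchronous program: fixed-size state, updated once
       per tick. There is no loop whose trip count depends on the input
       and no allocation inside the step, so the worst-case time and
       memory per tick can be bounded before the program ever runs. *)
    type alarm_state = { mutable last : float; mutable triggered : bool }

    let threshold = 100.0

    (* Called exactly once per clock cycle with the new sensor reading;
       latches the alarm after two consecutive readings over the threshold. *)
    let step (st : alarm_state) (reading : float) : bool =
      st.triggered <- st.triggered
                      || (reading > threshold && st.last > threshold);
      st.last <- reading;
      st.triggered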
There are many specialized languages that do not even try to be programming languages and can therefore stay far away from Turing-completeness: regular expressions, data languages, most markup languages, and so on.
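Classical regular expressions are a nice illustration: they can be matched using an amount of memory that does not depend on the input, which is precisely why they cannot recognize, say, balanced parentheses. Even the simplest check for that needs a counter that can grow with the input (a quick sketch):

    (* Recognizing balanced parentheses requires a counter that can grow
       without bound - memory that a finite automaton, and hence a
       classical regular expression, does not have. *)
    let balanced (s : string) : bool =
      let depth = ref 0 in
      let ok = ref true in
      String.iter
        (fun c ->
           if c = '(' then incr depth
           else if c = ')' then begin
             decr depth;
             if !depth < 0 then ok := false
           end)
        s;
      !ok && !depth = 0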
By the way, Douglas Hofstadter wrote several very interesting non-fiction books about computation, in particular Gödel, Escher, Bach: An Eternal Golden Braid. I don't remember whether he explicitly discusses the limits of Turing-incompleteness, but reading his books will definitely help you understand the more technical material.
Gilles Aug 16 '10 at 12:08