Imperative languages are inherently coupled to the machine

After revisiting Prolog (following full-time work in C#, part-time work in Python, and continued experiments in Haskell), I have come to see that there is a sweet spot in language design: languages that are not tied to any individual computer, but are instead general statements of logic.

Von Neumann Languages

These are C/C++-style languages, assembly, and any imperative language that gives you direct access to memory. Such languages are efficient for an obvious reason: their abstraction is the Von Neumann architecture of the computer you program on today. A side effect is that programs become tied to an abstraction of a machine rather than to their true logical semantics. This yields an indirect statement of logic: the program's representation differs from a direct statement of math or logic (though you can isolate parts that do directly describe their logic, for example by using only rvalues). These languages require a very specific abstraction of the computers we know today.
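To make the contrast concrete, here is a small sketch in Python (one of the languages mentioned above, used for all the examples in this post). The first version is stated in terms of a mutable memory cell, an abstraction of the machine; the second is closer to a direct statement of what the sum *is*. The function names are mine, for illustration only.

```python
# Imperative style: the meaning is entangled with a mutable memory cell.
def sum_imperative(xs):
    total = 0            # a named cell in memory
    for x in xs:
        total += x       # destructive update, one machine step at a time
    return total

# Declarative style: a direct statement of what the sum of a list is.
def sum_declarative(xs):
    return xs[0] + sum_declarative(xs[1:]) if xs else 0
```

Both compute the same value; only the first drags the machine's mutable state into the description.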

So what? Why should we care that Von Neumann languages are coupled to a machine? One example: from my brief reading on quantum computers, I've learned that without a classical-computer emulation layer, these languages would not make sense on a quantum computer, because they rely so heavily on the abstraction of a transistor-based computer whose states can be freely modified. Quantum computers have no such modification abilities (disclaimer: I know only as much about quantum computers as the first few Google search results). By contrast, there are languages so universal to logic that they could run on any computer that could exist, since they follow math rather than a Von Neumann machine.

All of these languages model a Turing machine more closely than anything found in formal logic. They are not based directly in mathematics, but on a machine you feed instructions to, which mutates a tape. Programs written for such machines are just as powerful as anything else, and may even be more efficient in the real world, but they would require a complete rewrite to separate the machine from the logic the programmer was trying to express.

Functional Languages

Functional languages attempt to remove the machine from the picture and model directly after logic (traditionally the lambda calculus). Since the lambda calculus represents deductive logic without any coupling to a specific machine, it is just as universal as pure logic. The choice between functional and logical languages, then, seems to be a matter of preference: which abstraction of logic do you like more, the lambda calculus or formal logic?
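As a small illustration of how machine-free the lambda calculus is (sketched here in Python rather than a functional language, to keep this post's examples in one notation), Church numerals state arithmetic using nothing but functions; no memory cells or machine appear anywhere. A number n is encoded as "apply f to x, n times":

```python
# Church numerals: numbers encoded as pure functions (lambda calculus style).
zero = lambda f: lambda x: x                       # apply f zero times
succ = lambda n: lambda f: lambda x: f(n(f)(x))    # one more application of f
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """Read a Church numeral back as an ordinary integer."""
    return n(lambda k: k + 1)(0)
```

Everything here is a direct statement in logic; the only concession to the machine is `to_int`, which translates the result back into Python's native numbers.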

Logical Languages

I've nearly explained this already: logical languages, like functional languages, are universal because they abstract the machine away from the logic. Choosing between them is simply a matter of which abstraction of logic you enjoy more.
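A sketch of that style (again in Python for consistency, though Prolog states this far more directly): facts are plain data, and a relation is defined by its logical rules, shown below as Prolog comments. The names and facts are hypothetical, for illustration.

```python
# Facts: parent(tom, bob). parent(bob, ann).
parents = {("tom", "bob"), ("bob", "ann")}

def ancestor(a, d):
    # ancestor(A, D) :- parent(A, D).
    # ancestor(A, D) :- parent(A, X), ancestor(X, D).
    return (a, d) in parents or any(
        ancestor(x, d) for (p, x) in parents if p == a
    )
```

Nothing in the definition says how to search; it only restates the two logical rules, and the answer to a query follows from them.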


When it comes down to it, every language has hints of a Von Neumann language sprinkled through it. Any language that seeks to be useful on today's computers must, since that is the only way it could ever do IO or interact with another program (even in Haskell you can use unsafePerformIO to perform arbitrary modifications to memory).

However, the sweet spot I see is languages that offer a small set of optional Von Neumann features but are otherwise based entirely on logic. I don't mean that logic programming is the only way to go: there are functional languages, such as Haskell, that offer the same comfort of thinking only in logic.

So the question is: why would I program for a machine, when I could decouple the machine and write down my ideas in logic?