Curious About E

Just recently discovered the E Programming Language. Very interesting concept. It’s nice to see something growing out of the Algol/C/C++/Java tree that improves the syntax. And a mainstream language that can handle concurrency well without blocks and locks is definitely needed. Making programmers deal with threads and mutexes is too hard, just as assigning registers in assembly language is too hard. I wonder, however, if E doesn’t merely shift the focus of the problem without truly solving it. If I understand E in a Walnut correctly, programmers using E still have to keep mental track of the difference between local, immediate-return method calls and possibly-remote, eventual “promises.” The when (xVow) -> done(x) syntax also looks a bit odd.

My question is this: is it necessary to distinguish, at the language level, between immediate values and promised values? As long as the compiler knows which is which, who cares? The when-catch construct seems to exist primarily to help programmers used to conventional threaded multiprocessing and try/catch blocks catch their breath.
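To make the distinction concrete, here is a rough sketch in Python (not E) of the two kinds of call. The names are made up for illustration; a Future standing in for E’s promises, and a callback standing in for when-catch:

```python
# A minimal sketch (in Python, not E) of the local/eventual distinction.
# fetch_answer and the variable names are illustrative, not E's API.
from concurrent.futures import ThreadPoolExecutor, Future

def fetch_answer() -> int:
    # Stands in for a possibly-remote computation.
    return 42

pool = ThreadPoolExecutor()

immediate = fetch_answer()                    # ordinary call: value is here now
promised: Future = pool.submit(fetch_answer)  # eventual: a promise for a value

# Rough analogue of E's `when (xVow) -> done(x)`: register a callback that
# runs once the promise resolves, instead of blocking on it.
promised.add_done_callback(lambda f: print("resolved:", f.result()))

print(immediate)           # available immediately
print(promised.result())   # blocks until the eventual value arrives
pool.shutdown()
```

The programmer-visible difference is exactly the one at issue: `immediate` can be used directly, while `promised` must be resolved first, whether by a callback or a blocking wait.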

I have always assumed that in purely functional languages — Haskell, ML, maybe Scheme? — expressions could theoretically be evaluated in any order, on any processor. Note I say theoretically, as I am not aware of any implementations that do this. Could one write an expression, using values that may come from any computer, anywhere, and trust that the compiler will do the necessary things to ensure that all the values end up in the right place at the right time to produce the final result?
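The idea can be sketched in Python: because the two subexpressions below are pure, a scheduler is free to evaluate them in either order, or on different workers, without changing the result. This is a toy illustration with made-up functions, not a real distributed evaluator:

```python
# Because f and g are pure (no side effects, no shared state), f(3) and
# g(4) may be evaluated in any order -- or on different worker threads --
# and the final expression still comes out the same.
from concurrent.futures import ThreadPoolExecutor

def f(x: int) -> int:
    return x * x

def g(x: int) -> int:
    return x + 10

with ThreadPoolExecutor() as pool:
    fa = pool.submit(f, 3)   # these two run in whatever order
    gb = pool.submit(g, 4)   # the scheduler pleases
    result = fa.result() + gb.result()

print(result)  # 23, regardless of evaluation order
```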

Update, 4 May 2006: Maybe not. A Note on Distributed Computing says that, in the real world of latency, unreliable networks, and component failures, local and remote objects are inherently different and have to be treated differently at the application level. A reasonable argument, but I still like to believe it would be possible to have the differences handled by a compiler.

Programming by Linguists II

Given the task of designing a programming language, which must be exactly defined for engineering purposes, what would a linguist — as opposed to a mathematician — do?


The first step would probably be to define parts of speech. Most programming languages really have only two parts of speech: nouns (variables) and verbs (functions). Named or keyword arguments might count as adverbs, and object properties are often treated like adjectives, but they are not used the same way as in real languages, where they can completely change the meaning of the words they modify.

Next comes grammar: sentence structure, connecting words. Again, despite the complexity of their punctuation rules (which a linguist would classify as mechanics, not grammar), most programming languages have a very simple grammar. We have statements, which correspond to sentences; and expressions, which are usually mathematical. (Lisp does away with the former, and as a result is much less like natural languages, a barrier to understanding that contributes to its obscurity.)

The third and most complicated step would probably be defining vocabulary, i.e. coming up with names for everything. Programmers are notoriously bad at this. One can hardly blame them for not being linguists themselves, but I think computer science ought to include at least some basic theory of naming. One difficulty is the usual requirement that every name used in a single program be unique. Scoped namespaces alleviate some of the trouble, but it is still difficult to come up with names that are both distinct enough to have an obvious meaning and concise enough to be convenient. Lisp’s panoply of car/cdr functions (cadr, cdar, caadr, caddr, …) might win the prize for the worst name choices. One also encounters library functions with exceedingly long names such as libfooRemoveAllFoosFromBaz. (Even with nested scopes and object-oriented programming, this example would probably be written as Foo.Baz.foos.removeAll, which is not much better.)
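For readers who have not met them, the car/cdr compositions work by spelling out a chain of operations letter by letter: car takes the first element, cdr takes the rest, and cadr means “car of cdr.” A Python rendering makes the opacity of the names plain:

```python
# Python rendering of Lisp's car/cdr and their composed names.
# The name encodes the recipe: c-a-d-r = car of cdr = second element.
def car(lst):
    return lst[0]

def cdr(lst):
    return lst[1:]

def cadr(lst):
    return car(cdr(lst))        # second element

def caddr(lst):
    return car(cdr(cdr(lst)))   # third element

xs = [1, 2, 3, 4]
print(cadr(xs), caddr(xs))  # 2 3
```

The recipe is perfectly systematic, but nothing in “caddr” tells the reader *why* you wanted the third element — which is exactly the naming failure described above.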

Vocabulary for a programming language typically includes both the core language and a set of standard libraries. One flaw with nearly all programming languages, Lisp being the notable exception, is the demarcation between the “core” language and its libraries. In all but a few highly specialized languages, libraries will comprise the meat of almost every program. In the linguistic analogy, one could almost say that the core of a language is its grammar, while the libraries are its vocabulary.

It is tempting to assume that human languages have infinite precision. That is, they can express any concept that the mind can conceive of. This is not entirely true. There are many documented examples of concepts that exist in one language but for which there is no adequate word, sometimes not even a combination of words, in another language. Much of what we consider “cultural differences” is caused by different vocabularies that define the set of concepts that can be expressed, and therefore understood, by different cultures.

The fact that nearly every programming language draws its words from English makes things harder still. English is terribly dependent on context for meaning, using the same simple words to express many different concepts. For example, consider the verb to put, whose English definition fills a full dictionary page. Add to this the habit of almost all languages of using compound phrases to name single concepts. German habitually mashes these phrases together to make new words, but names like “ListOfPrimeNumbers” feel awkward in English.

So vocabulary is difficult every way around. Of course, linguists are rarely consulted when new English words are being invented, although they will decide whether those words appear in a dictionary. In naming functions, invented words would be a barrier to comprehension. And yet, it is difficult to find existing words to identify the many operations which have no direct relevance to the problem at hand but which are nonetheless necessary to help the computer do its job. Think of prototypes, declarations, typecasts, and conversions.

The Double Disconnect

In looking around the offices where I have worked, I see innumerable places where existing technology could be leveraged to expedite, simplify, or otherwise enhance day-to-day workflow. Things could happen faster, cheaper, with fewer staff. So why isn’t this done? I think there are two fundamental problems:

The first is the disconnect between the potential of the technology and the knowledge of those who use it. There’s an awful lot of software out there. I expect almost no one understands all the capabilities of even standard office suites. I have been called a “Word specialist” by temporary employers, but I certainly won’t claim to know every feature of Microsoft Word. I list Excel on my resume, but Excel has hundreds of features I know nothing about. It is difficult, even in this day and age, to hire quality technical personnel with sufficiently broad knowledge to combine the capabilities of different software packages in useful ways.

The second disconnect is between those who know how to use the technology and those who direct how it is to be used, i.e. management. Dilbert’s pointy-haired boss notwithstanding, I don’t really believe managers deserve the blame for misuse of technology. They lack training and experience, but they also lack technical personnel who are sufficiently skilled at realizing and applying the potential of technology. This problem is only made worse when managers learn something of technology’s potential without learning the complexity of the steps required to implement it.

So we have not one but two vast gulfs of understanding. No wonder everyone keeps muddling along with half-baked solutions. I don’t know how to fix this. Yet another piece of software is obviously not the solution. I think nothing short of a paradigm shift in the way software is designed and used would suffice.

Oh, Those Hyphens!

Why does Lisp use hyphens instead of underscores or CamelCase?

Lisp differs from almost every other programming language on Earth by using hyphens in its function/variable names. C/Perl/Java/etc. programmers, who can only use hyphens for subtraction, like to argue about the relative merits of NamesWithCamelCaseInThem, which are hard to read, versus names_with_lots_of_underscores_in_them, which are hard to type. What the debaters rarely mention, it seems, is that neither style is particularly logical.
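The subtraction problem is easy to demonstrate. In Python (with made-up variable names), a hyphenated “name” is just arithmetic as far as the parser is concerned:

```python
# Why hyphenated identifiers are off-limits in C-family languages and
# Python: the parser can only read a hyphen as subtraction. The names
# here are hypothetical; `count-offset` is an expression, not a variable.
count = 10
offset = 3
print(count - offset)  # 7 -- the only thing a parser can make of `count-offset`
```

Lisp avoids the ambiguity because its syntax separates every token with whitespace or parentheses, so `count-offset` can be a single symbol while `(- count offset)` is subtraction.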

When you define a function or variable name, you are effectively defining a new word in the vocabulary of your program. Most Western languages allow the definition of “compound words” composed of multiple words. In normal language, these compound words are occasionally mashed together (e.g. “weekend,” “weblog”) but more often left separate (e.g. “dining room,” “miles per hour”).

No programming language parser has yet been written that can understand compound words with spaces in them, so we have to compromise. Western languages, English in particular, already have a compromise in place: they use hyphens to form compound words which would be awkward to write as single words or as separate ones (e.g. “all-inclusive,” “president-elect”).

So Lisp’s hyphenated names are not only easier to read than CamelCase and easier to type than underscores, they are actually the most logical choice for creating new compound words. Yet another example of Lisp’s innate superiority. ;)

Programming by Linguists I

What would a programming language designed by a linguist look like?

Why do we speak of programming languages instead of programming notation? The latter seems a more accurate term for the mixture of words, punctuation, symbols, and spacing that make up most programming, which owes more to mathematics than to language. We even call it code, an admission that it is not a real human language. Real languages allow infinitely more variety of expression, but are correspondingly harder to define precisely. Programming languages presumably grew out of a need to express concepts particular to computers in a way that could be engineered in electronic circuits. Engineering does not allow for ambiguity, which is a good thing. This all makes sense given that the pioneers of computer science were engineers and mathematicians. But I like to speculate: what would a programming language designed by a linguist look like?

It would almost certainly use far less punctuation. Natural languages tend to have a much higher ratio of words to punctuation and layout. Chinese, at one extreme, traditionally used almost no punctuation; English, at the other, has a fair amount. In either case, however, the punctuation and layout serve only as indicators for what would normally be conveyed by tone, inflection, and rate of speech.

Most programming languages are difficult to impossible to understand when spoken, unless one “reads” the punctuation characters aloud, which disrupts the flow of speech. Who ever reads this:

for (int i = 1; i <= 10; i++) {
    printf("%d\n", i);
}

as “for open parenthesis int i equal one semicolon i less than equal ten semicolon i plus plus close parenthesis open bracket print f open parenthesis double quotation mark percent d backslash n double quotation mark comma i close parenthesis semicolon close bracket”? No one, except maybe a beginning C programmer. On the other hand, how does one read the above program aloud? One can describe what it does in a sentence, “Print the integers from one to ten, one per line,” but that version does not even scratch the surface of what the computer does.

To be continued…