Idiomatic Macros for Common Lisp

Every programming language has idioms–patterns used so frequently that they are almost considered part of the language itself. (See these C idioms for examples.) Common Lisp, with almost unlimited possibilities for syntax, depends more heavily on idioms to remain comprehensible. Some examples are defsomething or define-something for definitions, something-p for predicate tests, and make-something for constructors.

Marco Antoniotti created a clever macro package called DEFINER that codifies the most common of these, definitions, into a single macro with consistent syntax for any object that is being defined. With DEFINER, all definitions can be made with a single macro: def, which is followed by a symbol specifying what is being defined. So defun becomes def function, defvar becomes def var, and define-condition becomes def condition.

Of course, this is Lisp, so def can be turned on itself to produce new definition forms with def definer. New def forms can then be created that follow the same syntactic pattern of:

(def type-of-thing name properties...)

As Marco writes, DEFINER “makes it possible to homogeneize a natural programming practice followed by several CL programmers.”
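A toy sketch of the idea (my own illustration, not Antoniotti's actual DEFINER implementation) might simply dispatch on the first symbol:

```lisp
;; Toy DEF dispatcher -- illustration only, not the real DEFINER package.
(defmacro def (kind name &rest body)
  (ecase kind
    (function `(defun ,name ,@body))
    (var      `(defvar ,name ,@body))
    (class    `(defclass ,name ,@body))))

;; (def function double (x) (* 2 x))  expands to  (defun double (x) (* 2 x))
;; (def var *counter* 0)              expands to  (defvar *counter* 0)
```

The real DEFINER is extensible through def definer rather than a closed ecase, but the expansion principle is the same.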

I say, why not extend this practice to other idioms? Here’s a list of other Common Lisp idioms I think could be well-served by the same sort of homogenization:

Use with thing to delimit the scope of certain things–local variables, special variables, functions, open files, etc. Then let becomes with lexicals, labels becomes with functions, and with-open-file becomes with file.
A generic constructor for any anonymous (unnamed) object. This could ensure that all object constructors follow a predictable pattern after make instance, and turns lambda into the infinitely more readable make function. It also raises the question of why one can’t create anonymous classes.
A generic predicate, borrowed from the FiveAM testing framework. It should always return a boolean true/false value. Gives us the more readable is even instead of evenp.
Replace Lisp’s scattering of equality predicates–eq, eql, equal, equalp, =, string=, etc.–with forms like equal object, equal contents, equal number, and equal string. One could alternately use is equal, although that looks odd in English grammar.
A generic type converter. Makes it more obvious, for example, when string is being used as a type specifier and when as a coercion function. Also makes it very easy to define new converters between types, assuming these are implemented as generic functions.
Generic accessor. Replaces both primitives like car as well as object-related functions like slot-value.
Replacement for setf. Every get form should have a corresponding set. Primitives like rplacd become set cdr.
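The same dispatching trick would cover these idioms too; for instance, a toy with macro (the names and clauses here are my own invention, not an existing library) could expand to the corresponding standard forms:

```lisp
;; Toy umbrella macro -- a sketch, not a real library.
(defmacro with (kind &rest args)
  (ecase kind
    (lexicals  `(let ,@args))            ; (with lexicals ((x 1)) ...)
    (functions `(labels ,@args))         ; (with functions ((f (n) ...)) ...)
    (file      `(with-open-file ,@args))))
```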

And of course, to top it off we would want def with, def maker, def is, def equality, and def as. And while we’re at it, def idiom!

The point of all this is to make code more readable. One can reduce the 978 Common Lisp symbols to pairs of symbols, the first naming the general category of operation and the second specifying the exact operation within that category.

When examining unfamiliar code that uses these idiomatic macros, a programmer can look at the categorical names to get a rough idea of what is happening even without knowing about the specific functions being used. This can make Common Lisp more manageable without sacrificing any of its expressiveness.

Better Abstractions

A common complaint about Object-Oriented Programming (OOP) is that classes can make simple data hard to deal with: “I don’t want a DirectoryList object, I just want a list of strings.” I would say this is not a flaw inherent in OOP but rather in the way it is commonly used. “Encapsulation” and “data hiding” encourage creation of abstract interfaces completely divorced from their underlying implementation, even when that underlying implementation is in fact very simple. The problem is that abstract interfaces usually only provide a tiny subset of the operations one might potentially want to perform on the data hiding behind them.

What we have, essentially, are interfaces that abstract “down” to increasing levels of specificity, farther and farther away from the data themselves. Wouldn’t it be better if we could abstract “up,” choosing the representation of the data based on what we want to do with it? For example, take a very common, very general case: a collection of data elements with similar structure. There are only a few basic operations we are likely to perform on such a collection:

  1. Add an item
  2. Remove an item
  3. Modify an item
  4. Sort the items based on some criteria
  5. Find one or more items that satisfy certain criteria
  6. Perform aggregate calculations on the items (max, min, etc.)

This is the genius of SQL: it provides the first three with the simple commands INSERT, DELETE, and UPDATE; and the last three with the multipurpose SELECT. An SQL query looks the same regardless of how big the data set is or what algorithm is used to index it. Unfortunately, this idea has never quite made it into general-purpose programming languages.
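The SELECT half of that split is easy to mimic over plain Lisp lists; here is a toy sketch (the select function and its keyword arguments are my own invention):

```lisp
(defun select (rows &key (where (constantly t)) order-by)
  "Toy SELECT: filter ROWS by WHERE, optionally sort by a numeric ORDER-BY key."
  (let ((hits (remove-if-not where rows)))
    (if order-by
        (sort (copy-list hits) #'< :key order-by)
        hits)))

;; (select '(5 2 8 3) :where #'oddp)         => (5 3)
;; (select '(5 2 8 3) :order-by #'identity)  => (2 3 5 8)
```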

Ultimately, it shouldn’t matter whether data are stored in an array in local memory or on a cluster of relational database servers located across the internet. The things we want to do with the data remain the same. A compiler should be able to translate abstract operations on collections into the actual work needed to perform them in a specific implementation. Ideally, we could just start feeding data into the system and let the compiler choose the most efficient implementation, switching over automatically when the collection becomes too big for a particular method. This is similar to the way Common Lisp (and other languages) automatically switch to arbitrary-precision integers when a calculation grows too large for fixed-width integers.
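In CLOS terms, one could picture the abstract operations as a generic-function protocol, with different storage backends swapped in behind it. A minimal sketch, with invented names:

```lisp
;; Hypothetical collection protocol: the same verbs for any backing store.
(defgeneric add-item (collection item))
(defgeneric remove-item (collection item))
(defgeneric find-items (collection predicate))

;; One trivial implementation over an in-memory list; a database-backed
;; class could implement the same generics without changing client code.
(defclass list-collection ()
  ((items :initform '() :accessor items)))

(defmethod add-item ((c list-collection) item)
  (push item (items c)))

(defmethod remove-item ((c list-collection) item)
  (setf (items c) (remove item (items c))))

(defmethod find-items ((c list-collection) predicate)
  (remove-if-not predicate (items c)))
```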

For an example of this, I turn, strangely enough, to one of the hairier aspects of Perl: tied variables. Basically, a tied variable looks and behaves exactly like one of Perl’s built-in data types (scalar, array, or hash table) when you use it, but it is actually an object that calls methods in response to the operations performed on it. For example, the Tie::File module represents a file as an array of strings, one per line. Delete an item from the array, and you remove a line from the file. Same with insertion, modification, and sorting. Presto! A complete random-access file library that doesn’t require the user to learn a new interface. The implementation does not load the whole file into memory, so it is efficient even for very large files. Iterating over lines in a file used to require a special syntax unique to files; with Tie::File it can be done with standard Perl control structures that iterate over lists.

The even more ambitious Data::All module attempts the same thing in a more general way: tying any external data format (CSV, RDBMS, Excel spreadsheets…) to standard Perl data structures. It’s still alpha code, but the idea is brilliant.

In short, I would like object-oriented interfaces that abstract data towards its simplest possible representation rather than the most specific representation. I do not want objects that limit what I can do with the data they contain.

Types Are Not Classes, But Why?

In Common Lisp, you can’t do this:

(defgeneric g (x))
(defmethod g ((x (and integer (satisfies evenp)))) ...)

Because (and integer ...) is not a class. It is a valid type specifier and thus could be used in declarations, but it cannot be used in a method argument list.

It’s fairly easy to see why this is the case: allowing compound type specifiers in method argument lists effectively means creating anonymous classes on-the-fly, which would play havoc with the type hierarchies on which method dispatch depends.

But why can’t we unify types and classes? Every CLOS class has a corresponding type of the same name, and the built-in types are also classes. Furthermore, it is fairly obvious that (and integer (satisfies evenp)) is a sub-type of integer, so why not go ahead and treat it like a class that inherits from integer?
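Today one can name such a type with deftype and use it in declarations and typep, but inside a method the refinement must be done by hand; a sketch of the usual workaround:

```lisp
(deftype even-integer ()
  '(and integer (satisfies evenp)))

(defgeneric g (x))

;; Dispatch on the class, then refine with a runtime type check --
;; the manual version of what compound-specializer dispatch would do.
(defmethod g ((x integer))
  (if (typep x 'even-integer)
      :even
      :odd))
```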

A Textual CAPTCHA in Lisp

Playing with the first in a (hopefully ongoing) series of Common Lisp Quizzes, I wrote a simple text-only CAPTCHA (completely automated public Turing test to tell computers and humans apart). My solution and others are posted at CL Quiz #1.

CAPTCHA> (generate-captcha)
"You started out with forty-nine Lisp Machines. Not through any fault of your own, you lost three Lisp Machines. Years later, when you were reflecting on this whole sordid process, you counted up how many you had. What was the result?"

As per the suggestion in the quiz problem, I used CL’s format "~r" to make up simple arithmetic questions. To make it harder for a computer to parse, I embedded the questions in complex sentences selected from stored lists. The results are at least hard enough that the Google Calculator can’t solve them, but even a modest AI with language capabilities would likely have little trouble.
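A minimal sketch of the approach (the template text and helper names here are invented for illustration, not the posted solution):

```lisp
(defparameter *templates*
  '("You started out with ~r Lisp Machines. You lost ~r of them. How many did you have left?"))

(defun generate-captcha ()
  "Return (VALUES question-string numeric-answer), spelling the numbers with ~R."
  (let* ((total (+ 10 (random 40)))
         (lost  (random total))
         (template (elt *templates* (random (length *templates*)))))
    (values (format nil template total lost)
            (- total lost))))
```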

This and discussions on the CL-Quiz mailing list got me thinking about CAPTCHAs in general. They’re really an unwinnable proposition. In order to check that a given answer is correct, it must be possible to solve the test algorithmically. Given that, it’s only a matter of time before someone writes an algorithm to solve it. Most CAPTCHAs in common use rely on adding enough “noise” to the image, sound, or text to confuse a computer, but as computer pattern-recognition abilities get better, the necessary noise level reaches the point that humans get confused as well.

Visual Regular Expressions

Steve Oualline wrote a nifty little Perl program to graph regular expressions. And Oliver Steele wrote an even niftier OpenLaszlo app to show how regular expressions work.

Together, they make the best (unintended) argument I’ve seen for visual programming languages. As Oualline writes in his article, “Humans can process images far faster and better than any computer. We don’t do so well when it comes to text.” His demonstration with a 359-character regular expression for validating email addresses proves the point.

Programmers need better tools to help them visualize code. Architects and engineers have been using highly sophisticated CAD tools for years. Why don’t programmers have the same tools?

Curious About E

Just recently discovered the E Programming Language. Very interesting concept. It’s nice to see something growing out of the Algol/C/C++/Java tree that improves the syntax. And a mainstream language that can handle concurrency well without blocks and locks is definitely needed. Making programmers deal with threads and mutexes is too hard, just like assigning registers in assembly language is too hard. I wonder, however, whether E merely shifts the focus of the problem without truly solving it. If I understand E in a Walnut correctly, programmers using E still have to keep mental track of the difference between local, immediate-return method calls and possibly-remote, eventual “promises.” The when (xVow) -> done(x) syntax also looks a bit odd.

My question is this: is it necessary to distinguish, at the language level, between immediate values and promised values? As long as the compiler knows which is which, who cares? The when-catch construct seems to exist primarily to help programmers used to conventional threaded multiprocessing and try/catch blocks catch their breath.

I have always assumed that in purely functional languages — Haskell, ML, maybe Scheme? — expressions could theoretically be evaluated in any order, on any processor. Note I say theoretically, as I am not aware of any implementations that do this. Could one write an expression, using values that may come from any computer, anywhere, and trust that the compiler will do the necessary things to ensure that all the values end up in the right place at the right time to produce the final result?

Update, 4 May 2006: Maybe not. A Note on Distributed Computing says that, in the real world of latency, unreliable networks, and component failures, local and remote objects are inherently different and have to be treated differently at the application level. A reasonable argument, but I still like to believe it would be possible to have the differences handled by a compiler.

Programming by Linguists II

Given the task of designing a programming language, which must be exactly defined for engineering purposes, what would a linguist — as opposed to a mathematician — do?

The first step would probably be to define parts of speech. Most programming languages really have only two parts of speech: nouns (variables) and verbs (functions). Named or keyword arguments might count as adverbs, and object properties are often treated like adjectives, but they are not used the same way as in real languages, where they can completely change the meaning of the words they modify.

Next comes grammar: sentence structure, connecting words. Again, despite the complexity of their punctuation rules (which a linguist would classify as mechanics, not grammar), most programming languages have a very simple grammar. We have statements, which correspond to sentences; and expressions, which are usually mathematical. (Lisp does away with the former, and as a result is much less like natural languages, a barrier to understanding that contributes to its obscurity.)

The third and most complicated step would probably be defining vocabulary, i.e. coming up with names for everything. Programmers are notoriously bad at this. One can hardly blame them for not being linguists themselves, but I think computer science ought to include at least some basic theory of naming. One difficulty is the usual requirement that every name used in a single program be unique. Scoped namespaces alleviate some of the trouble, but it is still difficult to come up with names that are both distinct enough to have an obvious meaning and concise enough to be convenient. Lisp’s panoply of car/cdr functions (cadr, cdar, caadr, caddr, …) might win the prize for the worst name choices. One also encounters library functions with exceedingly long names such as libfooRemoveAllFoosFromBaz. (Even with nested scopes and object-oriented programming, this example would probably be written as Foo.Baz.foos.removeAll, which is not much better.)

Vocabulary for a programming language typically includes both the core language and a set of standard libraries. One flaw with nearly all programming languages, Lisp being the notable exception, is the demarcation between the “core” language and its libraries. In all but a few highly specialized languages, libraries will comprise the meat of almost every program. In the linguistic analogy, one could almost say that the core of a language is its grammar, while the libraries are its vocabulary.

It is tempting to assume that human languages have infinite precision. That is, that they can express any concept the mind can conceive of. This is not entirely true. There are many documented examples of concepts that exist in one language but for which there is no adequate word, sometimes not even a combination of words, in another language. Much of what we consider “cultural differences” is caused by different vocabularies that define the set of concepts that can be expressed, and therefore understood, by different cultures.

The fact that nearly every programming language draws its words from English makes things harder still. English is terribly dependent on context for meaning, using the same simple words to express many different concepts. For example, consider the verb to put, whose English definition fills a full dictionary page. Add to this the habit of almost all languages of using compound phrases to name single concepts. German habitually mashes these phrases together to make new words, but names like “ListOfPrimeNumbers” feel awkward in English.

So vocabulary is difficult every way around. Of course, linguists are rarely consulted when new English words are being invented, although they will decide whether those words appear in a dictionary. In naming functions, invented words would be a barrier to comprehension. And yet, it is difficult to find existing words to identify the many operations which have no direct relevance to the problem at hand but which are nonetheless necessary to help the computer do its job. Think of prototypes, declarations, typecasts, and conversions.

The Double Disconnect

In looking around the offices where I have worked, I see innumerable places where existing technology could be leveraged to expedite, simplify, or otherwise enhance day-to-day workflow. Things could happen faster, cheaper, with fewer staff. So why isn’t this done? I think there are two fundamental problems:

The first is the disconnect between the potential of the technology and the knowledge of those who use it. There’s an awful lot of software out there. I expect almost no one understands all the capabilities of even standard office suites. I have been called a “Word specialist” by temporary employers, but I certainly won’t claim to know every feature of Microsoft Word. I list Excel on my resume, but Excel has hundreds of features I know nothing about. It is difficult, even in this day and age, to hire quality technical personnel with sufficiently broad knowledge to combine the capabilities of different software packages in useful ways.

The second disconnect is between those who know how to use the technology and those who direct how it is to be used, i.e. management. Dilbert’s pointy-haired boss notwithstanding, I don’t really believe managers deserve the blame for misuse of technology. They lack training and experience, but they also lack technical personnel who are sufficiently skilled at realizing and applying the potential of technology. This problem is only made worse when managers learn something of technology’s potential without learning the complexity of the steps required to implement it.

So we have not one but two vast gulfs of understanding. No wonder everyone keeps muddling along with half-baked solutions. I don’t know how to fix this. Yet another piece of software is obviously not the solution. I think nothing short of a paradigm shift in the way software is designed and used would suffice.

Oh, Those Hyphens!

Lisp differs from almost every other programming language on Earth by using hyphens in its function/variable names. C/Perl/Java/etc. programmers, who can only use hyphens for subtraction, like to argue about the relative merits of NamesWithCamelCaseInThem, which are hard to read, or names_with_lots_of_underscores_in_them, which are hard to type. What the debaters rarely mention, it seems, is that neither style is particularly logical.

When you define a function or variable name, you are effectively defining a new word in the vocabulary of your program. Most Western languages allow the definition of “compound words” composed of multiple words. In normal language, these compound words are occasionally mashed together (e.g. “weekend,” “weblog”) but more often left separate (e.g. “dining room,” “miles per hour”).

No programming language parser has yet been written that can understand compound words with spaces in them, so we have to compromise. Western languages, English in particular, already have a compromise in place: they use hyphens to form compound words which would be awkward to write as single words or as separate ones (e.g. “all-inclusive,” “president-elect”).

So Lisp’s hyphenated names are not only easier to read than CamelCase and easier to type than underscores, they are actually the most logical choice for creating new compound words. Yet another example of Lisp’s innate superiority. ;)

Programming by Linguists I

Why do we speak of programming languages instead of programming notation? The latter seems a more accurate term for the mixture of words, punctuation, symbols, and spacing that make up most programming, which owes more to mathematics than to language. We even call it code, an admission that it is not a real human language. Real languages allow infinitely more variety of expression, but are correspondingly harder to define precisely. Programming languages presumably grew out of a need to express concepts particular to computers in a way that could be engineered in electronic circuits. Engineering does not allow for ambiguity, which is a good thing. This all makes sense given that the pioneers of computer science were engineers and mathematicians. But I like to speculate: what would a programming language designed by a linguist look like?

It would almost certainly use far less punctuation. Natural languages tend to have a much higher ratio of words to punctuation and layout. Chinese, at one extreme, has almost no punctuation; English, at the other, has a fair amount. In either case, however, the punctuation and layout serve only as indicators for what would normally be conveyed by tone, inflection, and rate of speech.

Most programming languages are difficult to impossible to understand when spoken, unless one “reads” the punctuation characters aloud, which disrupts the flow of speech. Who ever reads this:

for (int i = 1; i <= 10; i++) {
        printf("%d\n", i);
}

as “for open parenthesis int i equal one semicolon i less than equal ten semicolon i plus plus close parenthesis open bracket print f open parenthesis double quotation mark percent d backslash n double quotation mark comma i close parenthesis semicolon close bracket”? No one, except maybe a beginning C programmer. On the other hand, how does one read the above program aloud? One can describe what it does in a sentence, “Print the integers from one to ten, one per line,” but that version does not even scratch the surface of what the computer does.

To be continued…