An Opinionated Review of Clojure Applied

Why write a book about open-source software? (Not for the money. Trust me.) I’ve seen far too many “technical books” that merely regurgitate the documentation of a bunch of open source libraries. I’m happy to say that Clojure Applied, by my friends and colleagues Alex Miller and Ben Vandgrift, is not in this category. They sent me a free copy in return for review, so here it is.

As my other colleague Luke VanderHart pointed out in his recent talk at ClojureTRE, the biggest gap in written material about Clojure — and a good candidate for the source of most “documentation” complaints — has been the lack of narrative. Clojure has great reference documentation, and lots of Clojure libraries come with great tutorials, but there aren’t many comprehensive stories about building complete applications.

Clojure Applied tells a story about how to write Clojure applications. This is not just a book about Clojure, it’s a book about how to write software which assumes you’re going to use Clojure to do it. Clojure Applied would probably not be a good choice as your first Clojure book, although it would be an excellent book to read while you are learning Clojure.

The narrative develops like most successful Clojure programs: from the bottom up, starting with data. Instead of primitives or abstract concepts, the first chapter begins with modeling domain data using maps and records. This is the right place to start, and I could wish this chapter went into even more detail. For example, I would have liked to see a comparison of nested versus “flat” map structures (“flatter” is easier to work with).

Domain modeling is a difficult concept to describe, so I can’t fairly criticize Ben & Alex’s efforts here, but I do think they lose their way slightly by introducing Prismatic’s Schema library very early on. To be sure, Schema is a powerful library that a lot of Clojure developers find useful. But placed here, along with discussions of type-based dispatch, it leaves the reader with the idea that types are a central feature of domain modeling in Clojure. I disagree. Thinking in terms of types early in the design process often leads to unnecessarily restrictive choices, producing inflexible designs.

Further compounding the error of focusing on types, this chapter wanders into more advanced topics such as protocols and multimethods. There’s even a technique for dynamically extending protocols at runtime, an advanced tactic I would not recommend under most circumstances.

Embracing the possibilities of design in a dynamically-typed language requires a willingness to work with values whose types may be unknown at certain points. For example, the type of the “accumulator” value in a transducer is hard to define, because for most transducers its type is irrelevant. The ability to ignore types when they are not needed is what makes dynamic languages so expressive.

On the other hand, I have seen many large Clojure programs drift into incomprehensibility by failing to constrain their types in any way, passing complex nested structures everywhere. In this case, the lack of validation leads to a kind of inside-out spaghetti code in which it’s impossible to deduce the type of something by reading the code which uses it. Given the choice between these two extremes, “over-typed” code will be easier to untangle than “under-typed,” so perhaps introducing validation early is a good idea.

Moving along, Chapter Two covers the other Clojure collection types. This is beginner material, but presented in terms of how and why you might want to use each type of collection. This chapter also covers some common beginner questions, such as how to search a sequential collection. Another advanced topic which I would not recommend — defining a new collection type implementing Clojure’s interfaces — sneaks in here, but I’ll give it a pass because it helps you understand the collection interfaces.

Chapter Three zeroes in on the sequential collections, in particular the sequence library. Here the narrative is about combining sequence functions, especially the filter-map-reduce pattern. This pattern is so fundamental that an experienced programmer in Clojure (or Lisp, or any functional language) might not even think about it, but it’s a critical step to becoming an effective user of Clojure. This chapter also introduces Transducers. Even though Transducers might be considered an “advanced” topic, I think they belong here alongside sequences. The concepts are the same, and Transducers are really quite straightforward if you’re only looking, as this chapter does, at how to use them and not how they are implemented.

Part II, “Applications,” is probably the best section in the book. This is the critical piece missing from the first-round books about Clojure (including my own). How do you start combining all these little functional pieces into a working program?

The first chapter in this section describes mutable references with the most comprehensible real-world example I have seen, and also includes an excellent explanation of identity versus state. Another chapter describes all the various techniques for using multiple cores, including an important discussion of pipelines and core.async go blocks as processes.

Then there’s an entire chapter devoted to components as they are expressed in namespaces. Ben & Alex introduce all the concepts necessary to design and implement components without using my Component library, which I think is a smart choice. The Component library arrives in the following chapter, along with the idea of composing an application out of many components.

Part III, “Practices,” covers testing, output formats, and deployment.

The chapter on testing has a good description of the trade-offs between example-based and property-based testing, but doesn’t delve into the more difficult areas of integration or whole-system testing. Advanced testing techniques really deserve an entire book of their own.

“Formatting data” covers the usual suspects: JSON, EDN, and Transit. In my experience, the choice of data format is usually dictated by external constraints, but this chapter at least makes the trade-offs clear.

Finally, the chapter on deployment is a high-level overview of everything from GitHub to Elastic Beanstalk. There’s even a discussion of open-source licensing and contributor agreements. Heroku gets the most attention, which makes sense for a book targeted at mostly-beginners, but at least this chapter introduces some of the concerns one might want to think about when choosing a deployment platform.

After the last chapter, there’s a bonus pair of appendices. The first briefly covers the “roots” of Clojure, with links to source material. The second summarizes some principal motivations behind Clojure’s design as a guide to “Thinking in Clojure.” This latter section might have been more usefully incorporated into the text of the book, but that’s harder to write and can tend toward the preachy, so I can’t complain.

Circling back to where I started, Clojure Applied is a great book to read while learning Clojure. It’s not a language tutorial. It’s not stuffed with revolutionary ideas. Most importantly, it doesn’t try to do too much. It’s just solid, practical advice. Even the recommendations I disagree with are not bad ideas, just different preferences. Follow Ben & Alex’s advice while building your first Clojure program, and you’ll have a solid foundation to explore your own ideas and preferences.

Clojure Don’ts: Lazy Effects

This is probably my number one Clojure Don’t.

Laziness is often useful. It allows you to express “infinite” computations, and only pay for as much of the computation as you need.
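
For example, this expression describes the infinite sequence of all squares, yet only the handful of elements you actually ask for ever get realized:

(take 5 (map #(* % %) (range)))
;;=> (0 1 4 9 16)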

Laziness also allows you to express computations without specifying when they should happen. And that’s a problem when you add side-effects.

By definition, a side-effect is something that changes the world outside your program. You almost certainly want it to happen at a specific time. Laziness takes away your control of when things happen.

So the rule is simple: Never mix side effects with lazy operations.

For example, if you need to do something to every element in a collection, you might reach for map. If the thing you’re doing is a pure function, that’s fine. But if the thing you’re doing has side effects, map can lead to very unexpected results.

For example, this is a common new-to-Clojure mistake:

(take 5 (map prn (range 10)))

which prints all ten numbers, even though we only asked for five:

0
1
2
3
4
5
6
7
8
9

This is the old “chunked sequence” conundrum. Like many other lazy sequence functions, map has an optimization which allows it to evaluate batches of 32 elements at a time.
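
You can check whether a particular sequence is chunked at the REPL (ranges have been chunked for many Clojure versions):

(chunked-seq? (seq (range 10)))
;;=> true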

Then there’s the issue of lazy sequences not being evaluated at all. For example:

(do (map prn [0 1 2 3 4 5 6 7 8 9 10])
    (println "Hello, world!"))

which prints only:

Hello, world!

You might get the advice that you can “force” a lazy sequence to be evaluated with doall or dorun. There are also snippets floating around that purport to “unchunk” a sequence.

In my opinion, the presence of doall, dorun, or even “unchunk” is almost always a sign that something never should have been a lazy sequence in the first place.

Only use pure functions with the lazy sequence operations like map, filter, take-while, etc. When you need side effects, use one of these alternatives:

  • doseq: good default choice, clearly indicates side effects
  • run!: new in Clojure 1.7, can take the place of (dorun (map ...))
  • reduce, transduce, or something built on them
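
For the first two, a minimal sketch using the same printing example:

;; doseq is explicit iteration for side effects; it is eager and returns nil
(doseq [x (range 5)]
  (prn x))

;; run! eagerly applies a side-effecting function to every element
(run! prn (range 5))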

The last requires some more explanation. reduce and transduce are both non-lazy ways of consuming sequences or collections. As such, they are technically safe to use with side-effecting operations.

For example, this composition of take and map:

(transduce (comp (take 5)
                 (map prn))
           conj
           (range 10))

only prints the first five elements of the sequence, as requested:

0
1
2
3
4

The single-argument version of map returns a transducer which calls its function once for each element. The map transducer can’t control when the function gets evaluated — that’s in the hands of transduce, which is eager (non-lazy). The single-argument take limits the reduction to the first five elements.

As a general rule, I would not recommend using side-effecting operations in transducers. But if you know that the transducer will be used only in non-lazy operations — such as transduce, run!, or into — then it may be convenient.

(defn operation [input]
  ;; do something with input, return result
  (str "Result for " input))

(prn (into #{}
           (comp (take 3)
                 (map operation))
           (range 100)))

reduce, transduce, and into are useful when you need to collect the return value of the side-effecting operation.

Clojure Don’ts: Redundant map

Today’s Clojure Don’t is the opposite side of the coin to the heisenparameter.

If you have an operation on a single object, you don’t need to define another version just to operate on a collection of those objects.

That is, if you have a function like this:

(defn process-thing [thing]
  ;; ... process one thing ...
  )

There is no reason to also write this:

(defn process-many-things [things]
  (map process-thing things))

The idiom “map a function over a collection” is so universal that any Clojure programmer should be able to write it without thinking twice.

Having a separate definition for processing a group of things implies that there is something special about processing a group instead of a single item. (For example, a more efficient batch implementation.) If that’s the case, then by all means write the batch version as well. But if not, then a function like process-many-things just clutters up your code while providing no benefit.

Clojure Don’ts: Single-branch if

A short Clojure don’t for today. This one is my style preference.

You have a single expression which should run if a condition is true, otherwise return nil.

Most Clojure programmers would probably write this:

(when (condition? ...)
  (then-expression ...))

But you could also write this:

(if (condition? ...)
  (then-expression ...)
  nil)

Or even this, because the “else” branch of if defaults to nil:

(if (condition? ...)
  (then-expression ...))

There’s an argument to be made for any one of these.

The second variant, if ... nil, makes it very explicit that you want to return nil. The nil might be semantically meaningful in this context instead of just a “default” value.

Some people like the third variant, if with no “else” branch, because they think when is only for side-effects, leaving the single-branch if for “pure” code.

But for me it comes down, as usual, to readability.

The vast majority of the time, if contains both “then” and “else” expressions.

Sometimes a long “then” branch leaves the “else” branch dangling below it. I’m expecting this, so when I read an if my eyes automatically scan down to find the “else” branch.

If I see an if but don’t find an “else” branch, I get momentarily confused. Maybe a line is missing or the code is mis-indented.

Likewise, if I see an if explicitly returning nil, it looks like a mistake because I know it could be written as when. This is a universal pattern in Clojure: lots of expressions (cond, get, some) return nil as their default case, so it’s jarring to see a literal nil as a return value.

So my preferred style is the first version. In general terms:

An if should always have both “then” and “else” branches.
Use when for a condition which should return nil in the negative case.

Clojure Don’ts: The Heisenparameter

A pattern I particularly dislike: Function parameters which may or may not be collections.

Say you have a function that does some operation on a batch of inputs:

(defn process-batch [items]
  ;; ... do some work with items ...
  )

Say further that, for this process, the fundamental unit of work is always a batch. Processing one thing is just a batch size of one.

Lots of processes are like this: I/O (arrays of bytes), database APIs (transactions of rows), and so on.

But maybe you have lots of code that mostly deals with one thing at a time, and only occasionally makes a larger batch. In the name of “convenience,” people write things like this:

(defn wrap-coll
  "Wraps argument in a vector if it is not already a collection."
  [arg]
  (if (coll? arg)
    arg
    [arg]))

(defn process
  "Processes a single input or a collection of inputs."
  [input]
  (process-batch (wrap-coll input)))

This is prevalent in dynamically-typed languages of all stripes. I think it’s a case of mistakenly choosing convenience over clarity.

This leads easily to mistakes like iterating over a collection, calling process on each element, when the same work could be done more efficiently in a batch.

Now imagine reading some code when you encounter a call to this function:

(process stuff)

Is stuff a collection or a single object? Who knows?

When you read code, there’s a kind of ad-hoc, mental type-inference going on. This is true regardless of what typing scheme your language uses. Narrowing the range of possible types something can be makes it easier to reason about what type it actually is.

The more general principle:
Be explicit about your types even when they’re dynamic.

If the operation requires a collection, then pass it a collection every time.

A “helper” like wrap-coll saves you a whopping two characters over just wrapping the argument in a literal vector, at the cost of lost clarity and specificity.
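
At the call sites, the “convenience” amounts to this (thing standing in for any single item):

(process-batch [thing])   ; explicit: the caller wraps, obviously a batch of one
(process thing)           ; via wrap-coll: shorter, but hides whether thing is a collection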

If you often forget to wrap the argument correctly, consider adding a type check:

(defn process-batch [items]
  {:pre [(coll? items)]}
  ;; ... do some work with items ...
  )

If there actually are two distinct operations, one for a single object and one for a batch, then they should be separate functions:

(defn process-one [item]
  ;; ... process a single item ...
  )

(defn process-batch [items]
  ;; ... process a batch of items ...
  )

Clojure Don’ts: Optional Arguments with Varargs

Another Clojure don’t today. This one is a personal style preference, but I’ll try to back it up.

Say you want to define a function with a mix of required and optional arguments. I’ve often seen this:

(defn foo [a & [b]]
  (println "Required argument a is" a)
  (println "Optional argument b is" b))

This is a clever trick. It works because & [b] destructures the sequence of arguments passed to the function after a. Sequential destructuring doesn’t require that the number of symbols match the number of elements in the sequence being bound. If there are more symbols than values, they are bound to nil.

(foo 3 4)
;; Required argument a is 3
;; Optional argument b is 4
;;=> nil

(foo 9)
;; Required argument a is 9
;; Optional argument b is nil
;;=> nil
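
The same binding rule is easy to see with plain let destructuring:

(let [[b] []]
  b)
;;=> nil

(let [[b] [4 5 6]]
  b)
;;=> 4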

I don’t like this pattern for two reasons.

One. Because it’s variable-arity, the function foo accepts any number of arguments. You won’t get an error if you call it with extra arguments; they will just be silently ignored.

(foo 5 6 7 8)
;; Required argument a is 5
;; Optional argument b is 6
;;=> nil

Two. It muddles the intent. The presence of & in the parameter vector suggests that this function is meant to be variable-arity. Reading this code, I might start to wonder why. Or I might miss the & and think this function is meant to be called with a sequence as its second argument.

A couple more lines make it clearer:

(defn foo
  ([a]
   (foo a nil))
  ([a b]
   (println "Required argument a is" a)
   (println "Optional argument b is" b)))

The intent here is unambiguous: The function takes either one or two arguments, with b defaulting to nil. Trying to call it with more than two arguments will throw an exception, telling you that you did something wrong.

And one more thing: it’s faster. Variable-arity function calls have to allocate a sequence to hold the arguments, then go through apply. Timothy Baldridge did a quick performance comparison showing that calls to a function with multiple, fixed arities can be much faster than variable-arity (varargs) function calls.
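
If you want to measure this yourself, here is a rough sketch using the criterium benchmarking library (a third-party library, not part of Clojure; the numbers will depend on your hardware and JVM):

(require '[criterium.core :as criterium])

;; varargs version, as in the example above
(defn foo-varargs [a & [b]]
  [a b])

;; multiple fixed arities
(defn foo-fixed
  ([a] (foo-fixed a nil))
  ([a b] [a b]))

(criterium/quick-bench (foo-varargs 1 2))
(criterium/quick-bench (foo-fixed 1 2))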

Clojure Do’s: Uncaught Exceptions

Some more do’s and don’ts for you. This time it’s a ‘do.’

In the JVM, when an exception is thrown on a thread other than the main thread, and nothing is there to catch it, nothing happens. The thread dies silently.

This is bad news if you needed that thread to do some work. If all the worker threads die, the application could appear to be “up” but cease to do any useful work. And you’ll never know why.

In Clojure, this could happen on any thread you created with core.async/thread, a worker thread used by core.async/go, or a thread that was created for you by a Java framework such as a Servlet container.

One solution is to just wrap the body of every thread or go in a try/catch block. There are good reasons for doing this: you can get fine-grained control over how exceptions are handled. But it’s easy to forget, and it’s tedious to repeat if you can’t do anything useful with the exception besides log it.

So at a minimum, I recommend always including this snippet of code somewhere in the start-up procedure of your application:

;; Assuming a :require of [clojure.tools.logging :as log]
(Thread/setDefaultUncaughtExceptionHandler
 (reify Thread$UncaughtExceptionHandler
   (uncaughtException [_ thread ex]
     (log/error ex "Uncaught exception on" (.getName thread)))))

This bit of code has saved my bacon more times than I can count.
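
To see the handler in action, you can throw from a throwaway thread at the REPL (a contrived example; in real life the exception would come from your worker code):

(.start (Thread. (fn [] (throw (ex-info "boom" {})))))
;; Instead of vanishing, the exception is logged by the handler,
;; tagged with the thread's name.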

This is a global, JVM-wide setting. There can be only one default uncaught exception handler. Individual Threads and ThreadGroups can have their own handlers, which get called in preference to the default handler. See Thread.setDefaultUncaughtExceptionHandler.

I’ve tried more aggressive measures, such as terminating the whole JVM process on any uncaught exception. While I think this is technically the correct thing to do, it turns out to be annoying in development.

Also annoying is the fact that some Java frameworks are designed to let threads fail silently. They just allocate a new thread in a pool and keep going. If your application is logging lots of uncaught exceptions but appears to be working normally, look to your container framework to see if that’s expected behavior.

The Hidden Future

Another wrinkle: exceptions inside a future are always caught by the Future. The exception will not be thrown until something calls Future.get (deref in Clojure).
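
For example, at the REPL:

(def f (future (throw (ex-info "boom" {}))))
;; nothing is printed or logged here; the future has caught the exception

@f
;; now it surfaces, wrapped in a java.util.concurrent.ExecutionException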

Be aware that ExecutorService.submit returns a Future, so if you’re using an ExecutorService you need to make sure something is eventually going to consume that Future to surface any exceptions it might have caught.

The parent interface method Executor.execute does not return a Future, so exceptions will reach the default exception handler.

Using ExecutorService.submit instead of Executor.execute was a bug in very early versions of core.async.

Record Constructors

Some more Clojure Do’s and Don’ts for you. This week: record constructors.

Don’t use interop syntax to construct records

defrecord and deftype compile into Java classes, so it is possible to construct them using Java interop syntax like this:

(defrecord Foo [a b])

(Foo. 1 2)
;;=> #user.Foo{:a 1, :b 2}

But don’t do that. Interop syntax is for interop with Java libraries.

Since Clojure version 1.3, defrecord and deftype automatically create constructor functions. Use those instead of interop syntax.

For records, you get two constructor functions: one taking the values of fields in the same order they appear in the defrecord:

(defrecord Foo [a b])

(->Foo 1 2)
;;=> #user.Foo{:a 1, :b 2}

And another taking a map whose keys are keywords with the same names as the fields:

(map->Foo {:b 4 :a 3})
;;=> #user.Foo{:a 3, :b 4}

deftype only creates the first kind of constructor, taking the field values in order.

(deftype Bar [c d])

(->Bar 5 6)
;;=> #<Bar user.Bar@2168aeae>

Constructor functions are ordinary Clojure Vars. You can pass them to higher-order functions and :require :as or :refer them into other namespaces just like any other function.

Do add your own constructor functions

You cannot modify or customize the constructor functions that defrecord and deftype create.

It’s common to want additional functionality around constructing an object, such as validation and default values. To get this, just define your own constructor function that wraps the default constructor.

(defrecord Customer [id name phone email])

(defn customer
  "Creates a new customer record."
  [{:keys [name phone email]}]
  {:pre [(string? name)
         (valid-phone? phone)
         (valid-email? email)]}
  (->Customer (next-id) name phone email))

You don’t necessarily have to use :pre conditions for validation; that’s just how I wrote this example.

It’s up to you to maintain a convention to always use your custom constructor function instead of the automatically-generated one.1

I frequently define a custom constructor function for every new record type, even if I don’t need it right away. That gives me a place to add validation later, without searching for and replacing every instance of the default constructor.

Even custom constructor functions should follow the rules for safe constructors. In general, that means no side effects and no “publishing” the object to another place before the constructor is finished. Keep the “creation” of an object (the constructor) separate from “starting” or “using” it, whatever that means for your code.



1. Theoretically you could make the default constructors private with alter-meta!, but I’ve never found it necessary.

Clojure Do’s: Namespace Aliases

Third in a series, this time with some style recommendations based on my personal experience.

In a small project with only a few developers, things like naming and style conventions don’t matter all that much, because almost everyone has worked with almost all of the code.

With bigger teams and bigger code bases — think tens of developers, tens of thousands of lines of Clojure — there’s a good chance that anyone reading the code has never seen it before. For that reader, a few conventions can be a big help.

Optimizing for readability usually means being more verbose. Don’t abbreviate unless you have to.

It also means optimizing for a reader who is not necessarily familiar with the entire code base, or even an entire file. They’ve just jumped to a function definition in their editor, or maybe pulled a line number from a stack trace. They don’t want to take the time to understand how all the different namespaces relate. They especially don’t want to have to scroll to the top of the file just to see where a symbol comes from.

So these conventions are about maximizing readability at the level of single function definitions. Yes, it means more typing. But it makes it much easier to navigate a large codebase maintained by multiple people.

As a general first rule, make the alias the same as the namespace name with the leading parts removed.

(ns com.example.application
  (:require
   [clojure.java.io :as io]
   [clojure.string :as string]))

Keep enough trailing parts to make each alias unique. Did you know that namespace aliases can have dots in them?

[clojure.data.xml :as data.xml]
[clojure.xml :as xml]

Eliminate redundant words such as “core” and “clj” in aliases.

[clj-http.client :as http]
[clj-time.core :as time]
[clj-time.format :as time.format]

Use :refer sparingly. It’s good for symbols that have no alphabetic characters, such as >! <! >!! <!! in core.async, or heavily-used macros such as those in clojure.test.

You can combine :refer and :as in the same :require clause.

[clojure.core.async :as async :refer [<! >! <!! >!!]]
[clojure.test :refer [deftest is]]

There are always exceptions. For example, some namespaces have established conventions for aliases:

[datomic.api :as d]

Whatever convention you adopt, use consistent aliases everywhere. This makes it easier for everyone to read the code, and makes it possible to search for code with text-based tools like grep.

Clojure Don’ts: isa?

Dynamic typing is cool, but sometimes you just want to know the type of something.

I’ve seen people write this:

(isa? (type x) SomeJavaClass)

As its docstring describes, isa? checks inheritance relationships, which may come from either Java class inheritance or Clojure hierarchies.

isa? manually walks the class inheritance tree, and has special cases for vectors of types to support multiple-argument dispatch in multimethods.

;; isa? with a vector of types.
;; Both sequences and vectors are java.util.List.
(isa? [(type (range)) (type [1 2 3])]
      [java.util.List java.util.List])
;;=> true

Hierarchies are an interesting but rarely-used feature of Clojure.

(derive java.lang.String ::immutable)

(isa? (type "Hello") ::immutable)
;;=> true

If all you want to know is “Is x a Foo?” where Foo is a Java type, then (instance? Foo x) is simpler and faster than isa?.

Some examples:

(instance? String "hello")
;;=> true

(instance? Double 3.14)
;;=> true

(instance? Number 3.14)
;;=> true

(instance? java.util.Date #inst "2015-01-01")
;;=> true

Note that instance? takes the type first, opposite to the argument order of isa?. This works nicely with condp:

(defn make-bigger [x]
  (condp instance? x
    String (clojure.string/upper-case x)
    Number (* x 1000)))

(make-bigger 42)
;;=> 42000

(make-bigger "Hi there")
;;=> "HI THERE"

instance? maps directly to Java’s Class.isInstance(Object). It works for both classes and interfaces, but does not accept nil as a type.

(isa? String nil)      ;;=> false

(instance? String nil) ;;=> false

(isa? nil nil)         ;;=> true

(instance? nil nil)    ;; NullPointerException

Remember that defrecord and deftype produce Java classes as well:

(defrecord FooBar [a])

(instance? FooBar (->FooBar 42))
;;=> true

Remember also that records and types are classes, not Vars, so to reference them from another namespace you must :import instead of :require them.

instance? won’t work correctly with Clojure protocols. To check if something supports a protocol, use satisfies?.
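
For example, with a hypothetical Greeter protocol extended to an existing class:

(defprotocol Greeter
  (greet [this]))

(extend-type String
  Greeter
  (greet [s] (str "Hello, " s)))

(satisfies? Greeter "world")
;;=> true

(greet "world")
;;=> "Hello, world"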