Clojure Don’ts: isa?

Dynamic typing is cool, but sometimes you just want to know the type of something.

I’ve seen people write this:

(isa? (type x) SomeJavaClass)

As its docstring describes, isa? checks inheritance relationships, which may come from either Java class inheritance or Clojure hierarchies.
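For plain Java class inheritance it behaves as you'd expect (a quick sketch):

```clojure
(isa? Long Number)    ;;=> true
(isa? String Number)  ;;=> false
;; isa? is also reflexive: every type isa? itself.
(isa? String String)  ;;=> true
```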

isa? manually walks the class inheritance tree, and has special cases for vectors of types to support multiple-argument dispatch in multimethods.

;; isa? with a vector of types.
;; Both sequences and vectors are java.util.List.
(isa? [(type (range)) (type [1 2 3])]
      [java.util.List java.util.List])
;;=> true

Hierarchies are an interesting but rarely-used feature of Clojure.

(derive java.lang.String ::immutable)

(isa? (type "Hello") ::immutable)
;;=> true

If all you want to know is “Is x a Foo?” where Foo is a Java type, then (instance? Foo x) is simpler and faster than isa?.

Some examples:

(instance? String "hello")
;;=> true

(instance? Double 3.14)
;;=> true

(instance? Number 3.14)
;;=> true

(instance? java.util.Date #inst "2015-01-01")
;;=> true

Note that instance? takes the type first, opposite to the argument order of isa?. This works nicely with condp:

(defn make-bigger [x]
  (condp instance? x
    String (clojure.string/upper-case x)
    Number (* x 1000)))

(make-bigger 42)
;;=> 42000

(make-bigger "Hi there")
;;=> "HI THERE"

instance? maps directly to Java’s Class.isInstance(Object). It works for both classes and interfaces, but does not accept nil as a type.
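The mapping is literal: you can call the Java method yourself via interop and get the same answers.

```clojure
;; instance? delegates to Class.isInstance:
(.isInstance String "hello")  ;;=> true
(.isInstance Number "hello")  ;;=> false
```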

(isa? String nil)      ;;=> false

(instance? String nil) ;;=> false

(isa? nil nil)         ;;=> true

(instance? nil nil)    ;; NullPointerException

Remember that defrecord and deftype produce Java classes as well:

(defrecord FooBar [a])

(instance? FooBar (->FooBar 42))
;;=> true

Remember also that records and types are classes, not Vars, so to reference them from another namespace you must :import instead of :require them.

instance? won’t work correctly with Clojure protocols. To check if something supports a protocol, use satisfies?.
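A sketch of the difference (the Sized protocol here is invented for illustration):

```clojure
(defprotocol Sized
  (size-of [this]))

;; Extend the protocol to an existing Java class:
(extend-type String
  Sized
  (size-of [s] (count s)))

(satisfies? Sized "hello")  ;;=> true
(size-of "hello")           ;;=> 5
;; String does not implement the protocol's generated interface,
;; so an instance? check against that interface would report false.
```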

Clojure Don’ts: Concat

Welcome to what I hope will be an ongoing series of Clojure do’s and don’ts. I want to demonstrate not just good patterns to use, but also anti-patterns to avoid.

Some of these will be personal preferences, others will be warnings from hard-won experience. I’ll try to indicate which is which.

First up: concat.

Concat, the lazily-ticking time bomb

concat is a tricky little function. The name suggests a way to combine two collections. And it is, if you have only two collections. But it’s not as general as you might think. It’s not really a collection function at all. It’s a lazy sequence function. The difference can be important.

Here’s an example that I see a lot in the wild. Say you have a loop that builds up some result collection as the concatenation of several intermediate results:[1]

(defn next-results
  "Placeholder for function which computes some intermediate
  collection of results."
  [n]
  (range 1 n))

(defn build-result [n]
  (loop [counter 1
         results []]
    (if (< counter n)
      (recur (inc counter)
             (concat results (next-results counter)))
      results)))

The devilish thing about this function is that it works just fine when n is small.

(take 21 (build-result 100))
;;=> (1 1 2 1 2 3 1 2 3 4 1 2 3 4 5 1 2 3 4 5 6)

But when n gets sufficiently large,[2] suddenly this happens:

(first (build-result 4000))
;; StackOverflowError   clojure.core/seq (core.clj:133)

In the stack trace, we see concat and seq repeated over and over:

(.printStackTrace *e *out*)
;; java.lang.StackOverflowError
;;      at clojure.core$seq.invoke(core.clj:133)
;;      at clojure.core$concat$fn__3955.invoke(core.clj:685)
;;      at clojure.lang.LazySeq.sval(
;;      at clojure.lang.LazySeq.seq(
;;      at clojure.lang.RT.seq(
;;      at clojure.core$seq.invoke(core.clj:133)
;;      at clojure.core$concat$fn__3955.invoke(core.clj:685)
;;      at clojure.lang.LazySeq.sval(
;;      at clojure.lang.LazySeq.seq(
;;      at clojure.lang.RT.seq(
;;      at clojure.core$seq.invoke(core.clj:133)
;;      at clojure.core$concat$fn__3955.invoke(core.clj:685)
;;      at clojure.lang.LazySeq.sval(
;;      at clojure.lang.LazySeq.seq(
;;      ... hundreds more ...

So we have a stack overflow. But why? We used recur. Our code has no stack-consuming recursion. Or does it? (cue ominous music)

Call the bomb squad

Let’s look at the definition of concat more closely. Leaving out the extra arities and chunked sequence optimizations, it looks like this:

(defn concat [x y]
  (lazy-seq
    (if-let [s (seq x)]
      (cons (first s) (concat (rest s) y))
      y)))

lazy-seq is a macro that wraps its body in a function and then wraps the function in a LazySeq object.

The loop in build-result calls concat on the LazySeq returned by the previous concat, creating a chain of LazySeqs nested roughly like this:

(concat (concat (concat ...) (next-results 3998)) (next-results 3999))
Calling seq forces the LazySeq to invoke its function to realize its value. Most Clojure sequence functions, such as first, call seq for you automatically. Printing a LazySeq also forces it to be realized.
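You can watch realization happen by putting a side effect in a lazy-seq body (example invented for illustration):

```clojure
(def s (lazy-seq (println "realizing!") (list 1 2 3)))
;; Nothing printed yet; the body has not run.
(first s)
;; realizing!
;;=> 1
;; The realized value is cached, so forcing s again prints nothing:
(rest s)
;;=> (2 3)
```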

In the case of our concat chain, each LazySeq’s fn returns another LazySeq. seq has to recurse through them until it finds an actual value. If this recursion goes too deep, it overflows the stack.

Just constructing the sequence doesn’t trigger the error:

(let [r (build-result 4000)]
  nil)
;;=> nil

It only overflows when we try to realize it:

(let [r (build-result 4000)]
  (seq r))
;; StackOverflowError   clojure.lang.RT.seq

This is a nasty bug in production code, because it could occur far away from its source, and the accumulated stack frames of seq prevent us from seeing where the error originated.

Don’t concat

The fix is to avoid concat in the first place. Our loop is building up a result collection immediately, not lazily, so we can use a vector and call into to accumulate the results:

(defn build-result-2 [n]
  (loop [counter 1
         results []]
    (if (< counter n)
      (recur (inc counter)
             (into results (next-results counter)))
      results)))

This works, at the cost of realizing the entire collection up front:

(time (doall (take 21 (build-result-2 4000))))
;; "Elapsed time: 830.66655 msecs"
;;=> (1 1 2 1 2 3 1 2 3 4 1 2 3 4 5 1 2 3 4 5 6)

This specific example could also be written as a proper lazy sequence like this:

(defn build-result-3 [n]
  (mapcat #(range 1 %) (range 1 n)))

Which avoids building the whole sequence in advance:

(time (doall (take 21 (build-result-3 4000))))
;; "Elapsed time: 0.075421 msecs"
;;=> (1 1 2 1 2 3 1 2 3 4 1 2 3 4 5 1 2 3 4 5 6)

Don’t mix lazy and strict

There’s a more general principle here:
Don’t use lazy sequence operations in a non-lazy loop.

If you’re using lazy sequences, make sure everything is truly lazy (or small). If you’re in a non-lazy loop, don’t build up a lazy result.

There are many variations of this bug, such as:

(first (reduce concat (map next-results (range 1 4000))))
;; StackOverflowError   clojure.core/seq (core.clj:133)
(nth (iterate #(concat % [1 2 3]) [1 2 3]) 4000)
;; StackOverflowError   clojure.core/seq (core.clj:133)
(first (:a (apply merge-with concat
                  (map (fn [n] {:a (range 1 n)})
                       (range 1 4000)))))
;; StackOverflowError   clojure.core/seq (core.clj:133)

It’s not just concat either — any lazy sequence function could potentially cause this. concat is just the most common culprit.
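For instance (an invented variation), repeatedly wrapping a sequence in filter builds the same kind of nested LazySeq chain:

```clojure
;; Each pass wraps the previous lazy seq in another filter:
(first
 (reduce (fn [s x] (filter #(not= % x) s))
         (range 100000)
         (range 1 4000)))
;; likely StackOverflowError -- realizing the first element must
;; recurse through ~4000 nested lazy seqs (depends on JVM stack size)
```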



[1] All these examples use Clojure version 1.6.0.

[2] Depending on your JVM settings, it may take more or fewer iterations to trigger a StackOverflowError.

Clojure 2014 Year in Review

My unscientific, incomplete, thoroughly biased view of interesting things that happened with Clojure in 2014.

Who’s Using Clojure?

No doubt about it: Clojure is making inroads in big business.

Cisco acquired ThreatGRID, a malware/threat analysis company using Clojure.

There hasn’t been what I’d call an official announcement from Amazon, but it’s clear from tweets and job listings that they’re using Clojure in production.

Also on the sort-of-announced front, WalmartLabs showed their love for Clojure in tweets and job listings.

Puppet Labs announced a big move towards Clojure and released their own framework, Trapperkeeper.

The U.K. Daily Mail reported on how they use Clojure at a Newspaper.

Greenius wrote about their Tech Roots: Clojure and Datomic.

Beanstalk told us that Beanstalk + Clojure = Love (and 20x better performance)

Cognitect published case studies from companies succeeding with Clojure and/or Datomic:

On the education front, Elena Machkasova has started gathering references for Clojure in undergraduate CS curriculum.

Radars & Rankings

Thoughtworks Radar January 2014 (PDF) placed Clojure firmly in the “adopt” category, as did element 84’s Technology Radar 2014.

Also in January, Clojure entered the top 20 in The RedMonk Programming Language Rankings.

By the time of Thoughtworks Radar July 2014 (PDF), the editors didn’t even consider Clojure a question, having moved on to “trial” for core.async and “assess” for Om.

Conferences & Events

We started the year off right with Clojure/West in San Francisco (videos on YouTube). I introduced Component, my not-quite-a-framework. Aaron Bedra threw down the gauntlet for securing Clojure web applications, leading to a flurry of activity making Clojure web frameworks more secure by default.

EuroClojure 2014 came to Krakow, Poland (videos on Vimeo).

At Lambda Jam 2014 in Chicago (videos on YouTube), Rich Hickey introduced Transit (Transit on GitHub).

At Strange Loop 2014 in St. Louis (videos on YouTube), Rich Hickey introduced Transducers. Ramsey Nasser and Tims Gardner introduced Clojure + Unity 3D, now named Arcadia. Ambrose Bonnaire Sergeant talked about Typed Clojure in Practice. Michael Nygard talked about Simulation Testing.

Rich Hickey made an appearance at JavaOne 2014 in San Francisco with Clojure Made Simple (no slides or video online).

In November, Clojure/conj 2014 in Washington, D.C. (videos on YouTube) was the biggest Clojure/conj yet. Over 500 attendees filled the beautiful Warner Theater.

Meanwhile, ClojureBridge held workshops throughout the year in Sydney, San Francisco, Edinburgh, and Minneapolis, just to name a few.

Language Ecosystem

The Clojure language itself continues to feature new and innovative ideas, this time Transducers.

ClojureDocs got a huge update: it now covers Clojure 1.6 and some important libraries like core.async and core.logic. There are also two new additions to the Clojure documentation sphere: Grimoire and CrossClj.

I for one am loving the surge in diversity of Clojure tooling. Cursive for IntelliJ garnered some serious attention, while CIDER and Counterclockwise both got major new releases. Boot is a new build tool with a radically different approach from the still-solid Leiningen.

Generative testing really started to catch on. Quick-check creator John Hughes gave a great keynote (video) at Clojure/west. Ashton Kemerling talked about Generative Integration Tests at Clojure/conj (also blogged). And of course the Clojure library simple-check became test.check, and has grown steadily in both capability and adoption.

Most of the Clojure contrib projects have gotten improvements and new releases.

ClojureScript growth accelerated, with three (and counting) frameworks built on top of Facebook’s React: Om, Quiescent, and Reagent.

Speaking of frameworks, there was a fair amount of activity around Component. No, I haven’t ported it to ClojureScript yet :) but there’s another ClojureScript port. People have started building things on top of Component, including juxt/modular and danielsz/system. uSwitch published some Example Component-based Apps.

Like a Phoenix rising from the ashes, new Pedestal releases appeared with support for fully non-blocking I/O, Transit, and Immutant.

What else is going on? The State of Clojure Survey 2014 analysis gave some insight into what people are thinking about Clojure.

Onward to 2015!

Thanks to Michael Fogus, Lake Denman, Alex Miller, and Paul deGrandis for their help in assembling this post.

Clojure 2013 Year in Review

This is my third Clojure year-in-review post. In 2011 I was excited about Clojure 1.3, ClojureScript, and the second Clojure/conj. By 2012 I was blown away by four Clojure conferences, two O’Reilly books, reducers, Datomic, Immutant, and a partridge in a pear tree.

For 2013, where do I even start? So much has happened this year I can’t even begin to keep track of it all. What follows is my incomplete, highly-biased summary of the significant news for Clojure in 2013.

Growth and the Industry

Maybe I should start right here at my home base, Relevance, which, after years of close collaboration, finally tied the knot with Rich Hickey and Datomic to become Cognitect.

This merger opens up new possibilites with the introduction of enterprise-grade 24/7 support for Clojure, ClojureScript, Datomic, and the rest of the Clojure “stack.” Plenty of big businesses have been waiting for just this kind of safety guarantee before they jump into the Clojure open-source ecosystem, so this means we should be seeing Clojure in more, and bigger, places in 2014. Hear more on the transition episode of the Relevance Podcast, renamed the Cognicast.

In other industry / mindshare news:

Language & Contributed Libraries

Software & Tools

  • The Datomic team released Simulant for simulation testing of large distributed systems. See Stuart Halloway’s Simulant presentation on InfoQ.

  • Relevance/Cognitect released Pedestal, a client-server web toolkit to showcase the possibilities of Clojure on the server and ClojureScript in the browser.

  • nrepl.el became CIDER, the Clojure IDE and REPL for Emacs.

  • Chas Emerick’s Austin made ClojureScript REPLs easier to use.

  • New IDEs dedicated to Clojure appeared: Nightcode and Cursive for IntelliJ.

  • Prismatic released their Plumbing / Graph library as well as Schema for run-time type validation.

  • Immutant, a Clojure application server based on JBoss, made its 1.0 release.

  • Mark Engleberg released Instaparse, a parser generator that understands standard EBNF/ABNF notation.

  • I blogged about My Clojure Workflow, Reloaded, spawning dozens of experimental frameworks for doing dependency injection and modular programming in Clojure, including my own Component.

Blogs and ‘Casts

Tons more interesting stuff happened in 2013. I couldn’t even begin to capture it all in one place. Here are some other good places to look for interesting Clojure news:

Here’s to a great 2014!

Parallel Processing with core.async

Update August 13, 2041: This approach may now be obsolete with the introduction of pipeline in core.async.

Update April 27, 2015: the date 2041 in the previous update is a typo, but updates from the future ties in nicely with the async theme so I decided to leave it in.

✻ ✻ ✻

Say you have a bunch of items to process, and you want to parallelize the work across N threads. Using core.async, one obvious way to do this is to create N go blocks, all reading from the same input channel.

(defn parallel
  "Processes values from input channel in parallel on n 'go' blocks.

  Invokes f on values taken from input channel. Values returned from f
  are written on output channel.

  Returns a channel which will be closed when the input channel is
  closed and all operations have completed.

  Note: the order of outputs may not match the order of inputs."
  [n f input output]
  (let [tasks (doall
               (repeatedly n
                #(go-loop []
                   (let [in (<! input)]
                     (when-not (nil? in)
                       (let [out (f in)]
                         (when-not (nil? out)
                           (>! output out))
                         (recur)))))))]
    (go (doseq [task tasks]
          (<! task)))))

This might create more go blocks than you need, but inactive go blocks don’t cost much except a little memory.

But this isn’t always ideal: if f is going to block or do I/O, then you might want to create a thread instead of a go. Threads are more expensive. Suppose you don’t know how quickly the inputs will arrive: you might end up creating more threads than you need.

What I typically want is to process things in parallel with as many threads as necessary, but at most N. If the processing with two threads is fast enough to keep up with the input, then we should only create two threads. This applies to other kinds of resources besides threads, network calls for example.

After many attempts, here is what I came up with:

(defn pmax
  "Process messages from input in parallel with at most max concurrent
  operations.

  Invokes f on values taken from input channel. f must return a
  channel, whose first value (if not closed) will be put on the output
  channel.

  Returns a channel which will be closed when the input channel is
  closed and all operations have completed.

  Creates new operations lazily: if processing can keep up with input,
  the number of parallel operations may be less than max.

  Note: the order of outputs may not match the order of inputs."
  [max f input output]
  (go-loop [tasks #{input}]
    (when (seq tasks)
      (let [[value task] (alts! (vec tasks))]
        (if (= task input)
          (if (nil? value)
            (recur (disj tasks task))  ; input is closed
            (recur (conj (if (= max (count tasks))  ; max - 1 tasks running
                           (disj tasks input)  ; temporarily stop reading input
                           tasks)
                         (f value))))
          ;; one processing task finished: continue reading input
          (do (when-not (nil? value) (>! output value))
              (recur (-> tasks (disj task) (conj input)))))))))

The function f is responsible for both processing the input and creating the response channel. So f could be a go, a thread, or something else that returns a channel, such as an asynchronous I/O operation. There’s a little bit of extra overhead to shuffle around data structures in this go-loop, but I’m assuming that the cost of processing inputs will dominate.
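For example, blocking work can be wrapped in a thread block (slow-double is a hypothetical stand-in for real work, used with the pmax above):

```clojure
(require '[clojure.core.async :refer [thread to-chan chan <!! close!]])

(defn slow-double
  "Hypothetical blocking work; returns a channel, as pmax requires."
  [x]
  (thread
    (Thread/sleep 10)   ; simulate blocking I/O
    (* 2 x)))

(let [output (chan 100)]
  ;; Block until the input is drained and all operations finish:
  (<!! (pmax 4 slow-double (to-chan (range 10)) output))
  (close! output))
```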

So how to test it? First a few helpers.

We want to make sure that the output channel doesn’t hold up anything else, so we’ll make a helper function to consume everything from it:

(defn sink
  "Returns an atom containing a vector. Consumes values from channel
  ch and conj's them into the atom."
  [ch]
  (let [a (atom [])]
    (go-loop []
      (let [val (<! ch)]
        (when-not (nil? val)
          (swap! a conj val)
          (recur))))
    a))

What we want to keep track of is how many parallel operations are running at any given time. We can have our “processing” function increment a counter when it starts, wait a random interval of time, then decrement the counter before returning.

My colleague @timbaldridge suggested a watch function to keep track of how high the counter gets. This will produce a record of how many tasks were active at any time during the test.

(defn watch-counter [counter thread-counts]
  (add-watch counter
             (fn [_ _ _ thread-count]
               (swap! thread-counts conj thread-count))))

Here’s a test of pmax using a go block:

(deftest t-pmax-go
  (let [input (to-chan (range 50))
        output (chan)
        result (sink output)
        max-threads 5
        counter (atom 0)
        f (fn [x]
            (go
              (swap! counter inc)
              (<! (timeout (rand-int 100)))
              (swap! counter dec)
              x))
        thread-counts (atom [])]
    (watch-counter counter thread-counts)
    (<!! (pmax max-threads f input output))
    (is (= (set (range 50)) (set @result)))
    (is (every? #(<= % max-threads) @thread-counts))))

And a test of pmax using a thread:

(deftest t-pmax-thread
  (let [input (to-chan (range 50))
        output (chan)
        result (sink output)
        max-threads 5
        counter (atom 0)
        f (fn [x]
            (thread
              (swap! counter inc)
              (<!! (timeout (rand-int 100)))
              (swap! counter dec)
              x))
        thread-counts (atom [])]
    (watch-counter counter thread-counts)
    (<!! (pmax max-threads f input output))
    (is (= (set (range 50)) (set @result)))
    (is (every? #(<= % max-threads) @thread-counts))))

But what we really wanted to know is that pmax won’t create more threads than necessary when the input source is slower than the processing. Here’s that test, with a deliberately slow input channel:

(deftest t-pmax-slow-input
  (let [input (chan)
        output (chan)
        result (sink output)
        max-threads 5
        actual-needed-threads 3
        counter (atom 0)
        f (fn [x]
            (go
              (swap! counter inc)
              (<! (timeout (rand-int 100)))
              (swap! counter dec)
              x))
        thread-counts (atom [])]
    (watch-counter counter thread-counts)
    ;; Slow input:
    (go-loop [i 0]
      (if (< i 50)
        (do (<! (timeout 50))
            (>! input i)
            (recur (inc i)))
        (close! input)))
    (<!! (pmax max-threads f input output))
    (is (= (set (range 50)) (set @result)))
    (is (every? #(<= % actual-needed-threads) @thread-counts))))

This still isn’t suitable for every scenario: maybe each thread needs some expensive set-up before it can process inputs. Then you would need some more elaborate mechanism to keep track of how many threads you have and whether or not they are keeping up with the input.

I have released the code in this blog post under the MIT License. View the source code on GitHub.

Update January 1, 2014: As my colleague @craigandera pointed out, this code doesn’t do any error handling. It’s easy enough to add once you make a decision about how to handle errors: ignore, log, or abort the whole process.
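One way to implement the "ignore" policy is a wrapper around f; this is a hypothetical sketch, not part of the original code:

```clojure
(require '[clojure.core.async :refer [go <!]])

(defn ignoring-errors
  "Hypothetical wrapper: returns a version of f whose failures are
  swallowed, so one bad input doesn't kill the processing loop."
  [f]
  (fn [x]
    (go
      (try
        ;; If f's own go/thread throws internally, its channel simply
        ;; closes and this take yields nil, which pmax already skips.
        (<! (f x))
        (catch Throwable t
          ;; log-and-drop policy for exceptions thrown by calling f
          (println "failed on" x ":" (.getMessage t))
          nil)))))

;; Usage: (pmax 4 (ignoring-errors f) input output)
```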

Command-Line Intransigence

In the early days of Clojure, I was skeptical of Clojure-specific build tools like Lancet, Leiningen, and Cake. Why would Clojure, a part of the Java ecosystem, need its own build tool when there were already so many Java-based tools?

At the time, I thought Maven was the last word in build tooling. Early Leiningen felt like a thin wrapper around Maven for people with an inconsolable allergy to XML. Maven was the serious build tool, with a rich declarative model for describing dependency relationships among software artifacts. That model was imperfect, but it worked well enough to power one of the largest repositories of open-source software on the planet.

But things change. Leiningen has evolved rapidly. Maven has also evolved, but more slowly, and the promised non-XML POM syntax (“polyglot Maven”) has not materialized.

Meanwhile, I learned why everyone eventually hates Maven, through the experience of crafting custom Maven builds for two large-ish projects: the Clojure language and its contributed libraries. It was a challenge to satisfy the (often conflicting) requirements of developers, continuous integration, source repositories, and the public Maven repository network. Even with the help of Maven books from Sonatype, it took months of trial and error and nearly all my “open-source” time to get everything working.

At the end of this process I discovered, to my dismay, that I was the only one who understood it. As my colleague Stuart Halloway put it, “Maven breeds heroes.” For end-users and developers, there’s a nice interface: Clojure-contrib library authors can literally click a button to make a release. But behind that button are so many steps and moving parts (Git, Hudson, Maven, Nexus, GPG, and all the Maven plugins) that even I can barely remember how it all works. I never wanted to be the XML hero.

So I have come around to Leiningen, and even incorporate it into my Clojure development workflow. It’s had some bumps, as one might expect from a fast-moving open-source project with lots of contributors, but most of the time it does what I need and doesn’t get in the way.

What puzzles me, however, is the stubbornness of developers who want to do everything via Leiningen. Some days it seems like every new tool or development utility for Clojure comes wrapped up in a Leiningen plugin so it can be invoked at the command line. I don’t get it. When you have a Clojure REPL, why would you limit yourself to the UNIX shell?

I think this habit comes partly from scripting languages, which were born at the command line, and still live there to a great extent. But it puzzled me a bit even in Ruby: if it takes 3 seconds for rake to load your 5000-line Rails app, do you really want to use rake for critical administrative tasks like database migrations? IRB is not a REPL in the Lisp sense, but it’s a pretty good interactive shell. I’d rather work with a large Ruby app in IRB than via rake.

Start-up time remains a major concern for Leiningen, and its contributors have gone to great lengths (sometimes too far) to ameliorate it. Why not just avoid the problem altogether? Start Leiningen once and then work at the REPL. Admittedly, this takes some discipline and careful application design, but on my own projects I’ve gotten to the point where I only need to launch Leiningen once a day. Occasionally I make a mistake and get my application into such a borked state that the only remedy is restarting the JVM, but those occasions are rare.

I pretty much use Leiningen for just three things: 1) getting dependencies, 2) building JARs, and 3) launching REPLs. Once I have a REPL I can do my real work: running my application, testing, and profiling. The feedback cycles are faster and the debugging options much richer than what I can get on the command-line.

“Build plugins,” for Leiningen or Maven or any other tool, always suffer from running in a different environment from the code they are building. But isn’t one of the central tenets of Lisp that the compiler is part of your application? There isn’t really a sharp boundary between “build” code and “application” code. It’s all just code.

I used to write little “command-line interfaces” for running tests, builds, deployments, and so on. Now I’m more likely to just put those functions in a Clojure namespace and call them from the REPL. Sometimes I wonder: why not go further? Use Leiningen (or Maven, or Gradle, or whatever) just to download dependencies and bootstrap a REPL, then execute builds and releases from the REPL.
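That style can be sketched as an ordinary namespace (the namespace, regex, and function names here are hypothetical):

```clojure
(ns dev.tasks
  (:require [clojure.test :as test]))

(defn run-tests
  "Run the project's test namespaces from the REPL."
  []
  (test/run-all-tests #"myapp\..*-test"))

(defn release
  "Build and deploy; just another function, no shell required."
  []
  ;; call into your build/deploy code here
  )
```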

Lifecycle Composition

I’ve been thinking about how to build up software systems out of
stateful components. In past presentations and blog posts I’ve alluded
to a standard interface I use for starting and stopping stateful components:

(defprotocol Lifecycle
  (start [this] "Begins operation of this component.")
  (stop [this] "Ceases operation of this component."))

Most of my Clojure programs follow the same basic structure: Each
subsystem or service is represented by a record which implements this
protocol. At the top, there is a “system” record which contains all
the other components.

I’ve gone through several versions of this protocol with the same
function names but slightly different semantics.

Side Effects, Mutable State

In the first version, start and stop were side-effecting
procedures which returned a future or promise:

(defprotocol Lifecycle
  (start [this]
    "Begins operation of this component. Asynchronous, returns a
  promise on which the caller can block to wait until the component is
  started.")
  (stop [this]
    "Ceases operation of this component. Asynchronous, returns a
  promise on which the caller can block to wait until the component is
  stopped."))

The calling code could dereference the future to block until the
service had successfully started. For example, a database-access
component might look like this:

(defrecord Database [uri connection-atom]
  Lifecycle
  (start [_]
    (future (reset! connection-atom (connect uri))))
  (stop [_]
    (.close @connection-atom)
    (future (reset! connection-atom nil))))

(defn database [uri]
  (->Database uri (atom nil)))

My idea was that multiple services could be started in parallel if
they didn’t depend on one another, but in practice I always ended up
blocking on every call to start:

(defrecord System [database scheduler web]
  Lifecycle
  (start [_]
    @(start database)
    @(start scheduler)
    @(start web))
  (stop [_]
    @(stop web)
    @(stop scheduler)
    @(stop database)))

(defn system [database-uri]
  (let [database (database database-uri)
        scheduler (scheduler)
        web (web-server database)]
    (->System database scheduler web)))

Second Attempt

I decided to drop the requirement to return a promise from start/stop,
which meant that those functions became synchronous and had no return value:

(defprotocol Lifecycle
  (start [this]
    "Begins operation of this component. Synchronous, does not return
  until the component is started.")
  (stop [this]
    "Ceases operation of this component. Synchronous, does not return
  until the component is stopped."))

This simplified the code calling start/stop, because I didn’t have to
worry about dereferencing any futures.

(defrecord System [database scheduler web]
  Lifecycle
  (start [_]
    (start database)
    (start scheduler)
    (start web))
  (stop [_]
    (stop web)
    (stop scheduler)
    (stop database)))

This also made it very clear that I was using start/stop only for
side-effects, forcing all of my components to contain mutable state.

Also, I had to manually place the calls to start/stop in the correct
order, to ensure that components were not started before other
components which depended on them.

Immutable Values

I decided to try to make the component objects more like immutable
values by redefining start/stop to return updated versions of the component:

(defprotocol Lifecycle
  (start [this]
    "Begins operation of this component. Synchronous, does not return
  until the component is started. Returns an updated version of this
  component.")
  (stop [this]
    "Ceases operation of this component. Synchronous, does not return
  until the component is stopped. Returns an updated version of this
  component."))

In this version, start and stop feel more like functions. They are
still not pure functions, because they will have to execute
side-effects such as connecting to a database or opening a web server
port, but at least they return something meaningful.

This version has the added benefit of removing some mutable state, at
the cost of making the start/stop implementations slightly more
complicated. Now these functions have to return a new instance of the
component record:

(defrecord Database [uri connection]
  Lifecycle
  (start [this]
    (assoc this :connection (connect uri)))
  (stop [this]
    (.close connection)
    (assoc this :connection nil)))

(defn database [uri]
  (->Database uri nil))

One interesting feature of this pattern is that the system record can
reduce over its own keys to start/stop all the components:

(defrecord System [database scheduler web]
  Lifecycle
  (start [this]
    (reduce (fn [system key]
              (update-in system [key] start))
            this
            ;; Keys are returned in the order they were declared.
            (keys this)))
  (stop [this]
    (reduce (fn [system key]
              (update-in system [key] stop))
            this
            ;; Reverse the order to stop.
            (reverse (keys this)))))

However, this relies on implementation behavior of Clojure records:
the keys function will return the keys in the order they are
declared in the record. I’m reluctant to rely on that undocumented
behavior, so instead I’ll declare the ordering explicitly:

(def component-order
  [:database :scheduler :web])

(defrecord System [database scheduler web]
  Lifecycle
  (start [this]
    (reduce (fn [system key]
              (update-in system [key] start))
            this
            component-order))
  (stop [this]
    (reduce (fn [system key]
              (update-in system [key] stop))
            this
            ;; Reverse the order to stop.
            (reverse component-order))))

Dependency Order

I still don’t have a good solution for specifying the order in which
components must be started/stopped. I’ve tried building a graph of
dependencies and computing the correct order. This would be similar to
what tools.namespace does with namespaces. But I haven’t been able to
come up with a good syntax for representing these relationships that
isn’t more cumbersome than just declaring them in order.
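For the curious, here is one sketch of what the graph-based approach might look like: declare each component’s dependencies in a map, then derive the start order with a simple topological sort. This is an illustration of the idea, not part of the pattern above, and the dependency map is hypothetical:

(def dependencies
  ;; component -> set of components it depends on (hypothetical)
  {:web       #{:database :scheduler}
   :scheduler #{:database}
   :database  #{}})

(defn start-order
  "Topologically sorts the dependency map so that a component appears
  after everything it depends on. Assumes the graph is acyclic."
  [deps]
  (loop [order [] remaining deps]
    (if (empty? remaining)
      order
      (let [ready (for [[k ds] remaining
                        :when (every? (set order) ds)]
                    k)]
        (recur (into order ready)
               (apply dissoc remaining ready))))))

(start-order dependencies)
;;=> [:database :scheduler :web]

Even this small example shows the problem: the dependency map repeats information that is already implicit in the constructors, which is exactly the cumbersome syntax I want to avoid.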

I’ve also tried using Prismatic’s Graph library or my own Flow library
to define the graph of dependency relationships, but neither of those
libraries can produce a structure that remembers the graph after it
has computed its output, so I have no way to recover the dependency
relationships after constructing the system object.

Dependency Injection and State

This technique is a form of dependency injection through constructors.
The choice of whether to make the individual components mutable,
stateful objects has an impact on how I can use them later on. In the
original version of this pattern using mutable objects, each component
gets stable references to other components it depends on. In the later
version using immutable data structures, each component gets
references to the constructed versions of other components it
depends on, but not the started versions of those components,
i.e. the values returned from start.
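A minimal sketch of that distinction, with hypothetical scheduler and web-server constructors (the latter taking the database as a dependency):

(def db (database "datomic:mem://example"))

;; web-server captures a reference to db *before* anything is started.
(def sys (map->System {:database  db
                       :scheduler (scheduler)
                       :web       (web-server 8080 db)}))

(def started (start sys))
;; (:database started) is the started value, with a live :connection,
;; but the copy of db captured inside the web component is still the
;; original, unstarted record whose :connection is nil.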

So far, this has not been a problem in the programs I write. For
example, a Datomic database connection is always recoverable from the
URI, so I don’t need to store it explicitly. But other components,
particularly components which rely on external state to function
properly, might need to be mutable so that their dependents can still
use them via the references they received in their constructors. I
could still have start and stop return new values, but they would
also have to modify some mutable state (such as a Ref or Atom) along
the way. As always, mutable objects muddy the distinction between
values and identities.

I’ve also experimented with variations of start and stop that
pass in other “started” components, but this was cumbersome and hard
to generalize through a single interface.

So I don’t have a perfect system. It works well enough for the
applications I’ve developed with it so far, and it facilitates my
REPL-driven development workflow, but I always have to adapt to
circumstance. Eliminating mutable state is generally a good thing,
but it can also be limiting, especially when you have to deal with
external state.

The Amateur Problem

We have a problem. We are professional software developers who work with open-source software. The problem is that we are in the minority. Most open-source software is written by amateurs.

Every time a hot new technology comes on the scene, developers flock to it like ants to a picnic. Those early adopters are, by definition, people for whom choosing a new technology is less risky. Which means, mostly, that their work doesn’t really matter. Students, hobbyists, “personal” projects: nobody’s life or career is on the line. It doesn’t matter if the program is entirely correct, efficient, or scalable. It doesn’t matter if it ignores lots of edge cases.

I’ve been one of those amateurs. It’s fun. New technologies need amateurs. But as a technology matures, it attracts professionals with real jobs who do care about those details. And those professionals are immediately confronted with a world of open-source software written by amateurs.

I used to write code for myself. Since I started getting paid to write code for other people, I’ve become wary of code written by people writing for themselves. Every time I see a README that begins “X is a dead simple way to do Y,” I shudder. Nothing in software is simple. “Dead simple” tells me the author has “simplified” by “deadening” vast swaths of the problem space, either by making unfounded assumptions or by ignoring them completely.

We like to carp about “bloated” APIs in “mainstream” languages like Java. Truly, lots of APIs are more complicated than they need to be. But just because an API is big doesn’t mean it’s bloated. I like big APIs: they show me that someone has thought about, and probably encountered, all of the edges and corners in the problem space.

Simplifying assumptions do not belong in libraries; they belong in applications, where you know the boundaries of the problem space. On rare occasions, the ground of one problem is trod often enough to warrant a framework. Emphasis on rare. A framework is almost always unnecessary, and, in these days of rapidly-changing technological capabilities, likely to be obsolete before it’s finished.

Frameworks written by amateurs are the worst of the worst: brittle constructs that assume everything in service of one or two “dead simple” demos but collapse under the weight of a real-world application.

I don’t want to be a code snob. Let’s be amateurs. Let’s have fun. Explore. Learn. Publish code as we go. Show a solution to a problem without assuming it’s the solution. Be cognizant of and vocal about what assumptions we’re making. Don’t call something a library unless it really attempts to reach every nook and cranny of the problem space.

And don’t write frameworks. Ever. ;)

Update August 8, 2013: Based on the comments, I feel like too many people have gotten hung up on the words amateur and professional. Those were just convenient labels which I found amusing. The important difference is between “easy” single-purpose code and thorough, general-purpose code.