How ’bout that start-up time?

How long does Clojure start-up really take? Let’s find out.

Get yourself a Clojure project. Download the dependencies and pre-generate the classpath:

lein deps
lein classpath > cp.txt

This lets us run “raw” Clojure, without any tooling. Assuming a Bash-like shell:

time java -cp "$(cat cp.txt)" clojure.main -e '(System/exit 0)'

Now add Leiningen:

time lein run -m clojure.main -e '(System/exit 0)'

Next, add the Leiningen REPL:

time lein repl <<< "(exit)"

If you’re a fan of Emacs and CIDER, start Emacs and paste this into a scratch buffer:

(require 'cider)

(defvar cider-jack-in-start-time nil)

(defun start-timing-cider-jack-in (&rest args)
  (setq cider-jack-in-start-time (current-time)))

(defun elapsed-time-cider-jack-in (&rest args)
  (when cider-jack-in-start-time
    (prog1 (format "%.3f seconds"
                   (float-time
                    (time-since cider-jack-in-start-time)))
      (setq cider-jack-in-start-time nil))))

(add-function :before
              (symbol-function 'cider-jack-in)
              #'start-timing-cider-jack-in)

(setq cider-connection-message-fn
      #'elapsed-time-cider-jack-in)

Evaluate that Elisp code with M-x eval-buffer, then open up your project.clj and run cider-jack-in.

Run each of these examples a few times to warm up the circuits.
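If you want to automate those repeated runs, a small Bash helper like this works (a sketch; `time_runs` is a hypothetical name, and it assumes GNU `date` with nanosecond `%N` support, so on macOS you would substitute `gdate` from coreutils):

```shell
#!/usr/bin/env bash
# Hypothetical helper: run a command N times, printing each wall-clock time.
# Assumes GNU date supporting %N (nanoseconds); macOS needs coreutils' gdate.
time_runs() {
  local n=$1; shift
  for i in $(seq 1 "$n"); do
    local start end
    start=$(date +%s%N)
    "$@" >/dev/null 2>&1
    end=$(date +%s%N)
    echo "run $i: $(( (end - start) / 1000000 )) ms"
  done
}

# For example:
#   time_runs 3 java -cp "$(cat cp.txt)" clojure.main -e '(System/exit 0)'
time_runs 3 true
```

The first run or two are typically slower while caches warm up, so look at the later runs.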

How long does it take? On an empty project with just Clojure 1.8, I get:

java -cp … clojure.main     0.8 seconds
lein run -m clojure.main    2.2 seconds
lein repl                   4.2 seconds
cider-jack-in              11.5 seconds

Yes, Clojure start-up could be faster, but make sure you know where the time is really going.

My environment: Leiningen 2.7.1, Oracle JDK 1.8.0_92, OS X

Clojure Don’ts: Non-Polymorphism

Polymorphism is a powerful feature. The purpose of polymorphism is to provide a single, consistent interface to a caller. There may be multiple ways to carry out that behavior, but the caller doesn’t need to know that. When you call a polymorphic function, you remain blissfully ignorant of (and therefore decoupled from) which method will actually run.

Don’t use polymorphism where it doesn’t exist.

All too often, I see protocols or multimethods used in cases where the caller does know which method is going to be called; where it is completely, 100% unambiguous, at every call site, which method will run.

As a contrived example, say we have this protocol with two record implementations:

(defprotocol Blerg
  (blerg [this]))

(defrecord Foo []
  Blerg
  (blerg [this]
    ;; ... do Foo stuff ...
    ))

(defrecord Bar []
  Blerg
  (blerg [this]
    ;; ... do Bar stuff ...
    ))
Then, elsewhere in the code, we have some uses of that protocol:

(defn process-foo [x]
  ;; ...
  (blerg x)  ; I know x is always a Foo
  ;; ...
  )

(defn process-bar [x]
  ;; ...
  (blerg x)  ; I know x is always a Bar
  ;; ...
  )

If you know which method will be called, it’s easy to fall into the trap of depending on that specific behavior. Now you’ve broken the abstraction barrier the protocol was meant to provide.

(defn process-bar [x]
  ;; ...
  (blerg x)  ; I know x is always a Bar

  ;; ... do something that relies on
  ;;     Bar's blerg having been called ...
  )

Code like this is already tightly coupled, which isn’t necessarily a problem. The problem is that the coupling is hidden behind the implied decoupling of a protocol. That’s going to lead to bugs sooner or later.

Instead, write ordinary functions with distinct names and let the caller use the appropriate one.

(defn blerg-foo [foo]
  ;; ... do foo stuff ...
  )

(defn blerg-bar [bar]
  ;; ... do bar stuff ...
  )

(defn process-foo [x]
  ;; ...
  (blerg-foo x)
  ;; ...
  )

(defn process-bar [x]
  ;; ...
  (blerg-bar x)
  ;; ...
  )

Remember the Liskov Substitution Principle: If you cannot substitute one implementation of a protocol for another, it’s not a good abstraction.

This post is part of my Clojure Do’s and Don’ts series.

How to ns

Quick link: Stuart’s ns Style Guide

Everyone has their own personal quirks when it comes to syntax. No matter how hard you try to lock it down with code review, IDEs, scripts, or check-in hooks, individual differences will emerge.

In Clojure the situation is generally pretty stable: most people follow the same general patterns, which are implemented fairly consistently across editors and IDEs.

With one exception: the ns macro at the top of every file.

The original implementation of the ns macro in Clojure was short, simple, and effective. It was also spectacularly over-generalized. ns will take almost any combination of symbols, keywords, vectors, and lists and find something to evaluate.

There’s a spec of sorts in the docstring, but of course nobody reads that.

The laxness of the ns implementation was a constant thorn in my side as I worked on tools.namespace. Now it’s causing more headaches as macro specs introduced in Clojure 1.9.0-alpha11 uncover a bevy of bad syntax in libraries.

I’ll admit to having my own syntactic quirks when it comes to ns, but I make an effort to be consistent. After years of collecting preferences, I finally decided to write it all down.

So now you can read Stuart’s Opinionated Style Guide for Clojure Namespace Declarations and link to it during your next syntactic flamewar.

Apathy of the Commons

Eight years ago, I filed a bug on an open-source project.

HADOOP-3733 appeared to be a minor problem with special characters in URLs. I hadn’t bothered to examine the source code, but I assumed it would be an easy fix. Who knows, maybe it would even give some eager young programmer the opportunity to make their first contribution to open-source.

I moved on; I wasn’t using Hadoop day-to-day anymore. About once a year, though, I got a reminder email from JIRA when someone else stumbled across the bug and chimed in. Three patches were submitted, with a brief discussion around each, but the bug remained unresolved. A clumsy workaround was suggested.

Linus’s Law decrees that Given enough eyeballs, all bugs are shallow. But there’s a corollary: Given enough hands, all bugs are trivial. Which is not the same as easy.

The bug I reported clearly affected other people: It accumulated nine votes, making it the fourth-most-voted-on Hadoop ticket. And it seems like something easy to fix: just a simple character-escaping problem, a missed edge case. A beginning Java programmer should be able to fix it, right?

Perhaps that’s why no one wanted to fix it. HADOOP-3733 is not going to give anyone the opportunity to flex their algorithmic muscles or show off to their peers. It’s exactly the kind of tedious, persistent bug that programmers hate. It’s boring. And hey, there’s an easy workaround. Somebody else will fix it, right?

Eventually it was fixed. The final patch touched 12 files and added 724 lines: clearly non-trivial work requiring knowledge of Hadoop internals, a “deep” bug rather than a shallow one.

One day later, someone reported a second bug for the same issue with a different special character.

If there’s a lesson to draw from this, it’s that programming is not just hard, it’s often slow, tedious, and boring. It’s work. When programmers express a desire to contribute to open-source software, we think of grand designs, flashy new tools, and cheering crowds at conferences.

A reward system based on ego satisfaction and reputation optimizes for interesting, novel work. Everyone wants to be the master architect of the groundbreaking new framework in the hip new language. No one wants to dig through dozens of Java files for a years-old parsing bug.

But sometimes that’s the work that needs to be done.

* * *

Edit 2016-07-19: The author of the final patch, Steve Loughran, wrote up his analysis of the problem and its solution: Gardening the Commons. He deserves a lot of credit for being willing to take the (considerable) time needed to dig into the details of such an old bug and then work out a solution that addresses the root cause.

Fixtures as Caches

I am responsible — for better or for worse — for the library which eventually became clojure.test. It has remained largely the same since it was first added to the language distribution back in the pre-1.0 days. While there are many things about clojure.test which I would do differently now — dynamic binding, var metadata, side effects — it has held up remarkably well.

I consider fixtures to be one of the less-well-thought-out features of clojure.test. A clojure.test fixture is a function which wraps a test function, typically for the purpose of setting up and tearing down the environment in which the test should run. Because test functions do not take arguments, the only way for a fixture to pass state to the test function is through dynamic binding. A typical fixture looks like this:

(ns fixtures-example
  (:require [clojure.test :as test :refer [deftest is]]))

(def ^:dynamic *fix* nil)

(defn my-fixture [test-fn]
  (println "Set up *fix*")
  (binding [*fix* 42]
    (test-fn))
  (println "Tear down *fix*"))

(test/use-fixtures :each my-fixture)

(deftest t1
  (println "Do test t1")
  (is (= *fix* 42)))

(deftest t2
  (println "Do test t2")
  (is (= *fix* (* 7 6))))

There are two kinds of fixtures in clojure.test:

:each fixtures run once per test, for every test in the namespace.

:once fixtures run once per namespace, wrapped around all tests in that namespace.

I think the design of fixtures has a lot of problems. Firstly, attaching them to namespaces was a bad idea, since namespaces typically contain many different tests, only some of which actually need the fixture. This increases the likelihood of unintended coupling between fixtures and test code.

Secondly, :each fixtures are redundant. If you need to wrap every test in some piece of shared code, all you need to do is put the shared code in a function or macro and call it in the body of each test function. There’s a small amount of duplication, but you gain flexibility to add tests which do not use the same shared code.
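As a sketch of that alternative (the `setup-env!` name is hypothetical), the shared code becomes an ordinary function called from each test body:

```clojure
(ns each-example
  (:require [clojure.test :refer [deftest is]]))

;; Hypothetical shared setup, replacing an :each fixture:
(defn setup-env! []
  (println "Shared setup")
  {:answer 42})

(deftest t1
  (let [env (setup-env!)]
    (is (= 42 (:answer env)))))

;; A test that doesn't need the shared setup simply omits the call:
(deftest t2
  (is (string? "no shared setup")))
```

The call to `setup-env!` is a line of duplication per test, but each test now states its own requirements.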

(Another common complaint about fixtures is that they make it difficult to execute single tests in isolation, although the addition of test-vars in Clojure 1.6 ameliorated that problem.)

So :once fixtures are the only ones that matter. But if you want true isolation between your tests then they should not share any state at all. The only reason for sharing fixtures across tests is when the fixture does something expensive or time-consuming. Here again, namespaces are often the wrong level of granularity. If some resource is expensive to prepare, you may only want to pay the cost of preparing it once for all tests in your project, not once per namespace.

So the purpose of :once fixtures is to cache their initialized state in between tests. What if we were to use fixtures only for caching? It might look something like this:

(ns caching-example
  (:require [clojure.test :refer [deftest is]]))

(def ^:dynamic ^:private *fix* nil)

(defn new-fix
  "Computes a new 'fix' value for tests."
  []
  (println "Computing fixed value")
  42)

(defn fix
  "Returns the current 'fix' value for
  tests, creating one if needed."
  []
  (or *fix* (new-fix)))

(defn fix-fixture
  "A fixture function to provide a reusable
  'fix' value for all tests in a namespace."
  [test-fn]
  (binding [*fix* (new-fix)]
    (test-fn)))

(clojure.test/use-fixtures :once fix-fixture)

(deftest t1
  (is (= (fix) 42)))

(deftest t2
  (is (= (fix) (* 7 6))))

This still avoids repeated computation of the fix value, but clearly shows exactly which tests use it. The :once fixture is just an optimization: You could remove it and the tests would still work, perhaps more slowly. Best of all, you can run the individual test functions in the REPL without any additional setup.

The same idea works even if the fixture requires tear-down after tests are finished:

(ns resource-example
  (:require [clojure.test :refer [deftest is]]))

(defn acquire-resource []
  (println "Acquiring resource")
  :the-resource)

(defn release-resource [resource]
  (println "Releasing resource"))

(def ^:dynamic ^:private *resource* nil)

(defmacro with-resource
  "Acquires resource and binds it locally to
  symbol while executing body. Ensures resource
  is released after body completes. If called in
  a dynamic context in which *resource* is
  already bound, reuses the existing resource and
  does not release it."
  [symbol & body]
  `(let [~symbol (or *resource* (acquire-resource))]
     (try ~@body
          (finally
            (when-not *resource*
              (release-resource ~symbol))))))

(defn resource-fixture
  "Fixture function to acquire a resource for all
  tests in a namespace."
  [test-fn]
  (with-resource r
    (binding [*resource* r]
      (test-fn))))

(clojure.test/use-fixtures :once resource-fixture)

(deftest t1
  (with-resource r
    (is (keyword? r))))

(deftest t2
  (with-resource r
    (is (= "the-resource" (name r)))))

(deftest t3
  (with-resource r
    (is (nil? (namespace r)))))

Again, each of these tests can be run individually at the REPL with no extra ceremony. If you don’t want to keep paying the resource-setup cost in the REPL, you could temporarily redefine the *resource* var in its initialized state.
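That REPL redefinition might look like this (a sketch; stand-in definitions are included so the snippet is self-contained, mirroring the vars in the example above):

```clojure
;; Stand-ins mirroring the definitions in resource-example:
(def ^:dynamic *resource* nil)
(defn acquire-resource [] :the-resource)

;; At the REPL, give *resource* a root value so with-resource
;; finds it already set and skips the acquire/release cycle:
(alter-var-root #'*resource* (constantly (acquire-resource)))
```

Because `with-resource` only acquires when `*resource*` is nil, every test evaluated afterward reuses the cached value.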

The key in both cases is that the “fixtures” are designed to nest without duplicating effort. Each test function specifies exactly what state or resources it needs, but only creates them if they do not already exist. Some of those resources may be shared among multiple tests, but that fact is hidden from the individual tests.

With this in mind, it becomes possible to share a resource across all tests in a project, not just within a namespace. All you need is an “entry point” which kicks off all the tests. clojure.test provides run-tests for specifying individual namespaces and run-all-tests to search for namespaces by regex. All you have to do is make sure your test namespaces are loaded, either via direct require or a utility such as tools.namespace. Then you can run a full test suite that only executes the expensive setup/teardown code once:

 (ns main-test
   (:require [clojure.test :as test]
             [caching-example :refer [fix-fixture]]))
 (defn -main [& _]
   (fix-fixture
    (fn []
      ;;; ... more fixture wrappers ...
      (test/run-all-tests #"^my\.app\..+-test$"))))

Open-source Bundling

Cast your mind back to the halcyon days of the late ’90s. Windows 95/98. Internet Explorer 4. Before you laugh, consider that IE4 included some pretty cutting-edge technology for the time: Dynamic HTML, TLS 1.0, single sign-on, streaming media, and “Channels” before RSS. IE4 even pioneered — unsuccessfully — the idea of “web browser as operating system” a decade before Google Apps.

But if you remember anything about IE in the ’90s, it’s probably the word bundling. United States v. Microsoft centered on the tight integration of IE with Windows. If you had Windows, you had to have IE. By the time the lawsuit reached a settlement, IE was entrenched as the dominant browser.

Fast forward to the present. What an enlightened age we live in. Open-source has won and the browser market has fragmented. Firefox broke the IE hegemony, and Chrome killed it. The web browser really is an operating system.

But if you look around at software today, “bundling” is still with us, even in open-source software, that champion of choice and touchstone of tinkering.

To take an example (and to get the taste of IE out of your brain) let’s look at Hystrix, a Java fault-tolerance framework written at Netflix. First let me say that Hystrix is a fantastic piece of engineering. Netflix has given a great gift to the open-source community by releasing, for free, an essential part of their software infrastructure. I’ve learned a lot by studying the Hystrix documentation and source code.

But if you want to use Hystrix in your application, you have to use RxJava and Netflix’s Archaius configuration management framework. Via transitive dependencies, you also have to use Google’s Guava, the Jackson JSON processor, SLF4J, and Apache’s Commons Configuration, Commons Lang, and Commons Logging. For those of you keeping score at home, that’s two different logging APIs, two configuration APIs, and two grab-bag “utility” libraries.

There’s nothing wrong with these library choices. They may be suitable for your application or they may not. But either way, you don’t get a choice. If you want Hystrix, you have to have RxJava and all the rest. Even if you choose to ignore, say, Archaius, it’s still there, linked into your application code, with whatever bugs and security holes it might carry.

I don’t mean to pick on Netflix here either. As I said, Hystrix is a fantastic piece of engineering, and I’m very happy that Netflix released it. But it points to a mismatch between the goals of “internal-use” software and “open-source” software.

If you’re developing a tool or library for internal use within an organization, it makes sense to integrate closely with other software internal to that organization. It saves time, reduces development effort, and makes the software organization more efficient. When software is tightly integrated, each new tool or library multiplies the value of all the other software which came before it. That’s how technology companies like Netflix or Google can deliver consistently high-quality products and rapid innovation at scale.

The downside to this approach, from the open-source point of view, is that each new tool or library released by a software organization tends to be tightly coupled to the software which preceded it. More dependencies mean more opportunities for bugs, security holes, and misconfiguration. For the application developer using open-source libraries, each new dependency multiplies the cost of development and maintenance.

It’s not just corporate-sponsored open source that suffers from this problem — just look at the dependency tree of any Apache project.

The root problem is that great, hairy Minotaur which stalks the labyrinthine passages of any large code base: cross-cutting concerns. Almost any piece of code in an application will need, at some point, to deal with at least some of:

  • Logging
  • Configuration
  • Error handling & recovery
  • Process/thread management
  • Resource management
  • Startup/shutdown
  • Network communication
  • Filesystems
  • Data persistence
  • (De)serialization
  • Caching
  • Internationalization/translation
  • Build/provisioning/deployment

It’s much easier to write code if you know how each of these cross-cutting concerns will be handled. So when you’re developing something in-house, obviously you use the tools and libraries your organization has standardized on. Even if you’re writing something which you plan to make open-source, it’s easier to rely on the tools and patterns you already know.

It’s difficult to avoid coupling library code to one or more of these concerns. Take logging, for example. Java has had a built-in logging framework since 1.4. But many developers preferred Log4j or one of a handful of others. To avoid coupling libraries to a single logging framework, there is Apache Commons Logging, which tries to abstract over different logging frameworks with clever class-loading tricks. That turned out to be a brittle solution, so we got SLF4J, which puts responsibility for linking the correct logging APIs back in the hands of the application developer. But no one wants to take an entire day to slog through the SLF4J manual in the middle of building an application. Throw in the mysterious interactions of transitive dependencies in Maven-style build tools, and it’s no wonder every Java app starts up with an error message about logging. And logging is the easy case — most programmers could probably agree on what, broadly speaking, a logging framework needs to do. But still we have half a dozen widely-used, slightly-different logging APIs.

Developing a library which avoids making decisions about cross-cutting concerns is possible, but it takes painstaking attention to detail, with lots of extra extension points. (See Chris Houser’s talk on Exception Handling for an example.) Unfortunately, the resulting library is often less-than-satisfying to potential users because it has so many “holes” that need to be filled in. Who wants to spend half a day writing “glue” code and callbacks before you can even try out a new library? Busy application developers have an incentive to choose libraries that work “out of the box,” so library creators have an incentive to make arbitrary decisions about cross-cutting concerns. We justify this with the oxymoron “sensible defaults.”

The conclusion I draw from all this is that modern programming languages have succeeded at making software out of reusable parts, but have largely failed at making software out of interchangeable parts. You cannot just “swap in,” say, a different thread-management library. Hystrix itself exists to solve a problem with libraries and cross-cutting concerns in a services architecture. Quoting from the Hystrix docs:

Applications in complex distributed architectures have dozens of dependencies, each of which will inevitably fail at some point. If the host application is not isolated from these external failures, it risks being taken down with them.

These issues are exacerbated when network access is performed through a third-party client — a “black box” where implementation details are hidden and can change at any time, and network or resource configurations are different for each client library and often difficult to monitor and change.

Even worse are transitive dependencies that perform potentially expensive or fault-prone network calls without being explicitly invoked by the application.

Netflix has so many “API client” libraries, each making their own network calls with unpredictable behavior, that to make their systems robust they have to isolate each library in its own thread pool. Again, this is amazing engineering, but it was necessary precisely because too many libraries came bundled with their own networking, error handling, and resource management decisions.

A robust solution would seem to require everyone to agree on standards for every possible cross-cutting concern. That will obviously never happen. Even a so-called batteries-included language cannot keep the same batteries forever. This is a hard problem, and like all truly hard problems in software, it’s more about people than code.

I wish I had a perfect solution, but the best I can offer is some guidance. If you’re writing an open-source library, do everything in your power to avoid dependencies. Use only the features of the core language, and use those conservatively. Don’t pull in a library that deals with some cross-cutting concern just because it might be more convenient for your users. Build your API around plain functions and standard data structures.

Some examples, specific to Clojure:

  • Don’t depend on a logging framework unless it’s SLF4J.

  • Don’t use an error-handling framework: Throw ex-info with enough data for a handler to decide what to do.

  • If you need to do something asynchronous, use callbacks instead of core.async. Callbacks are easily integrated with core.async if that’s what the user wants to do. Likewise, if you need some kind of inversion of control, use function callbacks or protocols.

  • Don’t depend on any state-management framework or “ambient” state. Pass everything needed by an API function in its arguments. Provide operations for resource initialization and termination as part of your API. Same for configuration: pass a Clojure map as an argument.

  • Network communication and serialization: these are, admittedly, almost impossible to avoid if you’re writing a library for some network API. But you can at least give users the option of controlling their own networking by providing APIs to prepare requests and parse responses independently of making the actual network calls.
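That last point can be sketched as follows (all names here are hypothetical, for illustration only): pure functions build the request description and parse the response, while the actual network call stays separate.

```clojure
(ns api-sketch)

(defn widget-request
  "Returns a request map describing how to fetch a widget.
  Pure: performs no I/O."
  [base-url id]
  {:method :get
   :url    (str base-url "/widgets/" id)})

(defn parse-widget
  "Extracts the widget fields from an already-decoded
  response body. Pure: performs no I/O."
  [body]
  (select-keys body [:id :name]))

;; The user composes these with whatever HTTP client they choose:
;; (-> (widget-request "https://example.com" 7)
;;     their-http-get
;;     :body
;;     parse-widget)
```

A convenience function can still bundle in a default client, but exposing the pure halves lets users control connection pooling, retries, and instrumentation themselves.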

On the other hand, some “libraries” really are more like “embeddable services,” with their own internal state. Large frameworks like Hystrix fall into this category, as do a few sophisticated “client” libraries. These libraries might be expected to manage their own resources and state “under the hood.” That’s a reasonable design choice, but at least be clear about which goal you’re pursuing and what trade-offs you’re making. In most language runtimes, the behavior and dependencies of these libraries cannot be fully isolated from the rest of the code. As an application developer, I might be willing to invest time and effort arranging my code to accommodate one or two embedded services that offer significant power in exchange for the added complexity. For everything else, when I need a library, just give me some ordinary functions.

How to Name Clojure Functions

This is a guide on naming Clojure functions. There are exceptions to every rule. When you’re defining something based on natural language, there are more exceptions than rules. I break these rules more often than I follow them. This guide is just a starting point for thinking about how to name things.

Pure functions

Pure functions which return values are named with nouns describing the value they return.

If I have a function to compute a user’s age based on their birthdate, it is called age, not calculate-age or get-age.

Think of the definition: a pure function is one which can be replaced with its value without affecting the result. So why not make that evident in the name?

This is particularly good for constructors and accessors. No need to clutter up your function names with meaningless prefixes like get- and make-.
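A sketch of that naming style (the function is illustrative; passing the reference date as an argument keeps it pure rather than depending on the current time):

```clojure
(ns naming-sketch
  (:import (java.time LocalDate Period)))

;; Named for the value it returns -- age, not get-age or
;; calculate-age. Taking the as-of date as an argument keeps
;; this a pure function.
(defn age
  "Whole years between birthdate and as-of (both LocalDates)."
  [birthdate as-of]
  (.getYears (Period/between birthdate as-of)))

;; (age (LocalDate/of 1990 1 1) (LocalDate/of 2016 1 1)) ;=> 26
```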

Don’t repeat the name of the namespace

Function names should not repeat the name of the namespace.

(ns products)

;; Bad, redundant:
(defn product-price [product]
  ;; ...

;; Good:
(defn price [product]
  ;; ...

Assume that consumers of a function will use it with a namespace alias.

Conversions and coercions

I don’t much like -> arrows in function names, and I try to avoid them.

If the function is a coercion, that is, it is meant to convert any of several input types into the desired output type, then name it for the output type. For example, in clojure.java.io, the functions file, reader, and writer are all coercions.

If there are different functions for different input types, then each one is a conversion. In that case, use input-type->output-type names.
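A sketch of the conversion style (these temperature functions are hypothetical examples, not from any library):

```clojure
(ns conversion-sketch)

;; Conversions: one function per input type, named input->output.
(defn celsius->fahrenheit [c]
  (+ 32.0 (* 1.8 c)))

(defn fahrenheit->celsius [f]
  (/ (- f 32.0) 1.8))
```

The arrow makes the direction of the conversion unambiguous at the call site, which is the one place I find `->` in a name earns its keep.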

Functions with side effects

Functions which have side-effects are named with verbs describing what they do.

Constructor functions with side-effects, such as adding a record to a database, have names starting with create-. (I borrowed this idea from Stuart Halloway’s Datomic tutorials.)

Functions which perform side-effects to retrieve some information (e.g. query a web service) have names starting with get-.

For words which could be either nouns or verbs, assume noun by default then add words to make verb phrases. E.g. message constructs a new object representing a message, send-message transmits it.

I don’t use the exclamation-mark convention (e.g. swap!) much. Different people use it to mean different things (side effect, state change, transaction-unsafe) so the meaning is vague at best. If I do use an exclamation mark, it’s to signal a change to a mutable reference, not other side-effects such as I/O.

Local name clashes

One problem I find is in let blocks, when the obvious name for a local is the same as the function which computes it. If you’re not careful, this can lead to clashes:

(defn shipping-label
  "Returns a new label to ship product to customer."
  [customer product]
  (let [address (address customer)
        weight (weight product)
        supplier (supplier product)]
    {:from (address supplier)  ; oops, 'address' clashes!
     :to address
     :weight weight}))

This is less of a problem when the functions are defined in a different namespace and referenced via an alias:

(defn shipping-label
  "Returns a new label to ship product to customer."
  [customer product]
  (let [address (mailing/address customer)
        weight (product/weight product)
        supplier (product/supplier product)]
    {:from (mailing/address supplier)  ; OK
     :to address
     :weight weight}))

If name-clashes become a problem, add prefixes to the function names, new- for constructors and get- for accessors. If you are bothered that this contradicts the previous section, re-read the first paragraph of this article.

Function returning functions

In general, I try to avoid defining top-level functions which return functions if I can make the intent clearer using anonymous functions instead.

For example, writing something like this makes me feel clever:

(defn foo
  "Returns a function to compute foo of value."
  [option]
  (fn [value]
    ;; ... do stuff with value ...
    ))

(defn computation
  "Does stuff with values."
  [option values]
  (->> values
       (map (foo option))  ; look at me!
       ;; ...
       ))
But it’s easier for someone else to read when the closure is created close to where it’s used:

(defn foo
  "Returns the foo of value"
  [value option]
  ;; ...
  )

(defn computation [option values]
  (->> values
       (map #(foo % option))  ; I see what this does
       ;; ...
       ))

I allow an exception to this rule when returning functions is part of a repeated pattern. For example, the transducer versions of map, filter, and other sequence functions all return functions, but that’s a standard part of the language since Clojure 1.7 so users can be expected to know about it. Occasionally I discover a similar pattern in my own code.

When functions returning functions are not part of a repeated pattern but for some reason I want them anyway, I call them out with a suffix -fn, like:

(defn foo-fn
  "Returns a function to compute the foo of a value."
  [option]
  (fn [value]
    ;; ...
    ))

Clojure 2015 Year in Review

Another year, another year-in-review post. To be honest, I feel like any attempt I make to summarize what happened in the Clojure world this year is largely moot. Clojure has gotten so big, so — dare I say it? — mainstream that I can’t even begin to keep up with all the interesting things that are happening. But it’s a tradition, so I’ll stick to it. Once again, here is my incomplete, thoroughly-biased list of notable Clojurey things this year.

As I said of JVM Clojure in 2012, I think I can safely say that 2015 was the year ClojureScript grew up. It got a real release number, improved REPL support, and the ability to compile itself. But you don’t have to take my word for it: David Nolen has written his own ClojureScript Year in Review.

Clojure in the World

We already knew Clojure was being used at big companies like Walmart and Amazon. Based on public job postings, we’ve also seen places like Reuters, Capital One, and Oracle interested in Clojure developers.

Big corporations tend to be cagey about their technology choices, but Walmart’s Anthony Marcar came to Clojure/West to talk about how they do Clojure at Scale.

In other big-tech news, Facebook acquired Wit.ai, a Clojure startup that released an open-source library to parse structured data from text. Clojure early-adopter Prismatic pivoted away from its popular news-recommendation app to focus full-time on the A.I. business as well.

Language, Tools, and Libraries

Clojure 1.7 was released, bringing Transducers and the much-anticipated Reader Conditionals to support mixed Clojure-ClojureScript projects. Writing cross-platform libraries suddenly got easier. A bunch of popular Clojure libraries were ported to ClojureScript, including test.check, tools.reader, and my Component.

core.async got a major new release, with the added features promise-chan, offer!, and poll!.

The big news on the tooling front was the 1.0 release of Cursive, the first commercial IDE for Clojure. On the open-source side, both Light Table and CIDER got major new releases.

In the ClojureScript tooling world, Figwheel and Devcards really took off this year.

Clojars started getting financial support from the community, and CLJSJS started offering JavaScript libraries conveniently packaged for ClojureScript and the Google Closure Compiler.

Books and Docs

went open-source for contributions from the community.

New books: Clojure Applied (my review), Clojure for the Brave and True in print, Living Clojure, Clojure Recipes, and many more.

Events and Community

The Clojurians Slack community rocketed from just an idea to over four thousand members. If you don’t care for Slack, the #clojure IRC channel on Freenode is still going.

The Clojure mailing list hit ten thousand members.

At Clojure/conj this year, we had the first-ever Datomic conference. You can binge-watch Clojure conference videos (Clojure/conj, EuroClojure, and Clojure/West) on the ClojureTV YouTube channel. Also check out Clojure eXchange and :clojureD.

Clojure is attracting some interest from academic computer science, including a new paper on optimizing immutable hash maps.


There’s not much more to say. Or rather, there is very much more to say than what I can capture in a single post. Clojure is here to stay. Let’s enjoy it.

Thanks to David Nolen, Alex Miller, Timothy Baldridge, Carin Meier, and Daemian Mack for their help preparing this post.