The folks at Philly Lambda, a Philadelphia functional-programming group, were kind enough to invite me down to talk about Clojure last night. No video/audio recording, but here are my slides [PDF].
Reading Predictably Irrational by Dan Ariely, I came across this passage:
“One of my friends spent three months selecting a digital camera from two nearly identical models. When he finally made his selection, I asked him how many photo opportunities he had missed, how much of his valuable time he had spent making the selection, and how much he would have paid to have digital pictures of his family and friends documenting the last three months. More than the price of the camera, he said. Has something like this ever happened to you?”
Yes, it has. Not so much in the context of buying a digital camera, but very often in the process of choosing an implementation strategy for a particular problem. I get caught up in examining all the possible ways of doing something, and spend hours making notes and lists and diagrams weighing the pros and cons of each. Meanwhile, the problem remains unsolved, and I don’t get any closer to solving it. I am not paying attention to what Professor Ariely labels the consequences of not deciding. In point of fact, almost any approach that I consider could be made to work, and none of them is going to be a silver bullet that makes everything else easy. This also leads to a lot of anxiety. What if I choose wrong? How much time will I have wasted?
This is where having a strong interest in new and developing technologies is a liability more than an asset. I start thinking about how I could use the latest cool thing I just read about on some blog. On the one hand, I’m not one of those programmers who limits himself to a single tool, e.g. Java, and tries to fit everything into that model. On the other hand, my obsession with finding the right tool for the job often leads me in some very unproductive directions. Limitations, in other words, can be useful.
I expect this was less of a problem 5 or 10 years ago. There just weren’t as many options. If you were storing data, you were almost certainly going to use a relational database. If you were building a web app, you were very likely using an embedded scripting language: PHP, JSP, or ASP. But today we have half a dozen languages, a dozen or more frameworks, several non-relational database models, and hundreds of different deployment configurations. The truth is, none of these will make or break a project. It’s kind of an arbitrary choice: you just pick one and then figure out how to make it work. Almost any project ends up being cobbled together from different sources. It’s all very postmodern.
I have, to my chagrin, recently discovered Twitter. I was at a conference at which the attendees twittered (tweeted?) every presentation as it happened. One speaker accidentally (or deliberately?) left his Twitter client running during his presentation, resulting in a stream of jokes and off-color comments in the corner of his PowerPoint slides. Maybe every presenter should do this. That way you’d know if you were boring your audience.
I haven’t posted in a while — look for more later this summer.
But in the meantime, I have a question: How do you structure data such that you can efficiently manipulate it on both a large scale and a small scale at the same time?
By large scale, I mean applying a transformation or analysis efficiently over every record in a multi-gigabyte collection. Hadoop is very good at this, but it achieves its efficiency by working with collections in large chunks, typically 64 MB and up.
What you can’t do with Hadoop — at least, not efficiently — is retrieve a single record. Systems layered on top of Hadoop, like HBase, attempt to mitigate the problem, but they are still slower than, say, a relational database.
In fact, considering the history of storage and data-access technologies, most of them have been geared toward efficient random access to individual records — RAM, filesystems, hard disks, RDBMSs, etc. But as Hadoop demonstrates, random-access systems tend to be inefficient for handling very large data collections in the aggregate.
This is not merely theoretical musing — it’s a problem I’m trying to solve with AltLaw. I can use Hadoop to process millions of small records. The results come out in large Hadoop SequenceFiles. But then I want to provide random access to those records via the web site. So I have to somehow “expand” the contents of those SequenceFiles into individual records, and store those records in some format that provides efficient random access.
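To make that “expansion” step concrete, here is a minimal sketch (not AltLaw’s actual code) of reading a SequenceFile and writing each record out as its own file. It assumes Text keys holding record IDs and Text values holding rendered HTML, using the Hadoop API that was current when this was written:

```clojure
;; Minimal sketch, assuming Text keys (record IDs) and Text values
;; (rendered HTML); not AltLaw's actual code.
(import '(org.apache.hadoop.conf Configuration)
        '(org.apache.hadoop.fs FileSystem Path)
        '(org.apache.hadoop.io SequenceFile$Reader Text))

(defn expand-seqfile
  "Reads every key/value pair from the SequenceFile at seqfile-path
  and writes each value to its own file, output-dir/<key>.html."
  [seqfile-path output-dir]
  (let [conf (Configuration.)
        fs   (FileSystem/get conf)
        key  (Text.)
        val  (Text.)]
    (with-open [reader (SequenceFile$Reader. fs (Path. seqfile-path) conf)]
      (while (.next reader key val)
        (with-open [w (java.io.FileWriter. (str output-dir "/" key ".html"))]
          (.write w (str val)))))))
```

Simple enough, but look at the access pattern: millions of tiny, independent writes.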
Right now, I use two very blunt instruments — Lucene indexes and plain old files. In the final stage of my processing chain, metadata and searchable text get written to a Lucene index, and the pre-rendered HTML content of each page gets written to a file on an XFS filesystem. This works, but it ends up being one of the slower parts of the process. Building multiple Lucene indexes and merging them into one big (~6 GB) index takes an hour; writing all the files to the XFS filesystem takes about 20 minutes. There is no interesting data manipulation going on here; I’m just moving data around.
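For the Lucene half, the indexing step looks roughly like the following hypothetical sketch. The field names are invented, not AltLaw’s real schema, and the API shown is the Lucene 2.x API of the time:

```clojure
;; Hypothetical sketch; field names are invented, API is Lucene 2.x.
(import '(org.apache.lucene.document Document Field
                                     Field$Store Field$Index))

(defn add-record
  "Adds one record to an open org.apache.lucene.index.IndexWriter.
  The id is stored verbatim for retrieval; the text is tokenized
  for search but not stored."
  [writer id text]
  (let [doc (Document.)]
    (.add doc (Field. "id" id Field$Store/YES Field$Index/NOT_ANALYZED))
    (.add doc (Field. "text" text Field$Store/NO Field$Index/ANALYZED))
    (.addDocument writer doc)))
```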
Update: Slides and video available at LispNYC.
Ok, it’s really happening this time:
Stuart Sierra presents: Implementing AltLaw.org in Clojure
This talk demonstrates the power of combining Clojure with large Java frameworks, such as:
- Hadoop – distributed map/reduce processing
- Solr – text indexing/searching
- Restlet – REST-oriented web framework
- Jets3t – Amazon S3
Join us from 7:00 – 9:00 at Trinity Church in the heart of the East Village. Afterward the discussion will continue at the Sunburnt Cow on 9th and C.
We had to cancel my talk for tomorrow night, due to problems with the venue. LispNYC will still meet at the Sunburnt Cow, 137 Avenue C, for drinks and discussion. My presentation has been postponed to the June meeting.
My favorite programming language has made a 1.0 release! [announcement]
CANCELED: My presentation is canceled. LispNYC will still meet at the Sunburnt Cow, 137 Avenue C, and I’ll be there to talk about Clojure. But no slides, no video, etc. My presentation is postponed to the June meeting.
I’ll be talking about my work with Clojure at LispNYC on the evening of Tuesday, May 12. Time and location to be announced. Slides and (hopefully) video available after the fact.
Possible topics:
Official announcement with time & location:
from LispNYC.org
Stuart Sierra presents: Implementing AltLaw.org in Clojure
This talk demonstrates the power of combining Clojure with large Java frameworks, such as:
- Hadoop – distributed map/reduce processing
- Solr – text indexing/searching
- Restlet – REST-oriented web framework
- Jets3t – Amazon S3
Join us from 7:00 – 9:00 at Trinity Church in the heart of the East Village. Afterward the discussion will continue at the Sunburnt Cow on 9th and C.
Directions to Trinity:
Trinity Lutheran
602 E. 9th St. & Ave B., on Tompkins Square Park
http://trinitylowereastside.org/
From N, R, Q, W (8th Street NYU Stop) and the 4, 5 (Astor Place Stop):
Walk East 4 blocks on St. Marks, cross Tompkins Square Park.
From F&V (2nd Ave Stop):
Walk east one or two blocks, then turn north for 8 short blocks.
From L (1st Ave Stop):
Walk east one block, then turn south for 5 short blocks.
The M9 bus line drops you off at the doorstep, and the M15 is nearby (get off at St. Marks & 1st).
To get there by car, take the FDR (East River Drive) to Houston then go NW till you’re at 9th & B. Week-night parking isn’t bad at all, but if you’re paranoid about your Caddy or in a hurry, there is a parking garage on 9th between 1st and 3rd Ave.
There’s a big ol’ thread going on down at comp.lang.lisp about Clojure vs. Common Lisp. I’m biased, of course, but I have to say that Clojure and Rich Hickey are holding their own against some of the top c.l.l. flamers.
But all the arguments about functional programming, software transactional memory, and reader macros miss what was, for me, the biggest reason to switch to Clojure. It’s about the libraries, stupid. Building on the JVM and providing direct access to Java classes/methods was the best decision in Clojure’s design. ‘Cause if it’s ever been done, anywhere, by anyone, someone’s done it in Java. Twice.
A few years ago, I tried to solve the Common Lisp library problem by writing a bridge from CL to Perl 5, and was laughed out of town. Rich Hickey, I’m told, spent years trying to bridge CL to Java, and never got very far. But Clojure works, and it works great. It’s a Lisp with a squintillion libraries. Who else can claim that?
So, if I wanted Lisp with Java libraries, why not use Kawa or ABCL … or heck, JRuby? Those are all fine projects, but they all suffer from mismatches between the “source” language (Scheme, CL, Ruby) and the “host” language (Java). There is never a one-to-one mapping between types in the source language and types in the host language. So you end up needing conversions like jclass/jmethod/jcall (ABCL) or primitive-static-method (Kawa). (JRuby is slightly better, but only because Ruby is closer to Java than CL/Scheme.)
Clojure doesn’t have this problem because it was designed from the ground up for the JVM. Clojure strings are java.lang.String, Clojure maps are java.util.Map, even Clojure functions are java.lang.Runnable (and java.util.concurrent.Callable). This makes it supremely easy to mix and match Clojure code with Java libraries and vice versa. I know, because every day I use Clojure with complex Java libraries like Hadoop, Restlet, Lucene, and Solr. Everything just works. I don’t have to write any foreign-function interfaces or bridge code. In fact, using Java libraries in Clojure is often easier than using them in Java!
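A few one-liners at the REPL show what this means in practice. Clojure values respond directly to Java methods, and a Clojure function can be handed to any Java API that expects a Runnable:

```clojure
(.toUpperCase "hello")              ; a java.lang.String method => "HELLO"
(.get {:a 1} :a)                    ; a java.util.Map method    => 1
(.start (Thread. #(println "hi")))  ; a Clojure fn is a Runnable
```

No wrappers, no conversions, no bridge code in either direction.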
Clojure may not be a programming language for the next hundred years, as Arc aspires to be. But it’s a great language if you want to get stuff done right now.
Just a little self-promotion: I’ll be presenting at the New York Hadoop User Group on Tuesday, February 10 at 6:30. I’ll talk about how I use Hadoop for AltLaw.org, including citation linking, distributed indexing, and using Clojure with Hadoop.
Update 2/28: My slides from this presentation are available from the Meetup group files.