Let's Write a Transducer!

For me, Rich Hickey's original post on transducers raised more questions than it answered. Stian Eikeland wrote a good guide on how to use them, but it didn't really answer the questions I had. However, there's an early release of Clojure 1.7, so I thought I'd take a look.

Let's start with a simple example using an existing transducer:

(def z [1 2 3 4 5 6])
(sequence (filter odd?) z)
;;; (1 3 5)

Okay, so far so good, we understand how to use an existing transducer to create a sequence.

Now, is identity a transducer?

(sequence identity z)
;;; (1 2 3 4 5 6) 

Perfect. Now let's try doing it ourselves. We'll write a transducer that preserves all its input.

Arity Island

Rich says the type of a transducer is (x->b->x)->(x->a->x). In practice, arity matters in Clojure, so it's really ((x,b)->x)->((x,a)->x). So let's write my-identity:

(defn my-identity [yield] (fn [x b] (yield x b)))
(sequence my-identity z)
;;; ArityException Wrong number of args (1) passed to: 
;;; user/my-identity/fn--1347  clojure.lang.AFn.throwArity (AFn.java:429)

Wait, it's only expecting one argument? Let's try one:

(defn my-identity [yield] (fn [x] (yield x)))
(sequence my-identity z)
;;; ArityException Wrong number of args (2) passed to: 
;;; user/my-identity/fn--1342  clojure.lang.AFn.throwArity (AFn.java:429)

Unsurprising. Let's combine the two.

(defn my-identity [yield] (fn ([x b] (yield x b)) ([x] (yield x))))
(sequence my-identity z)
;;; (1 2 3 4 5 6) 

OK. So, a transducer is actually two functions. What the heck are these functions being passed?

(defn my-identity [yield] 
  (fn ([x b] (println "Arity2:  " x) (yield x b)) 
      ([x] (println "Arity1:  " x) (yield x))))
(sequence my-identity z)
;;; StackOverflowError   clojure.lang.RT.boundedLength (RT.java:1697)

Oh dear. Maybe we can see the class instead:

(defn my-identity [yield] 
  (fn ([x b] (println "2A " (class x)) (yield x b)) 
      ([x] (println "1A " (class x)) (yield x))))
(sequence my-identity [5 7 9])
(2A  clojure.lang.LazyTransformer
2A  clojure.lang.LazyTransformer
5 2A  clojure.lang.LazyTransformer
7 1A  clojure.lang.LazyTransformer
9)

Well, that's a bit of a mess, but we can see the 5, 7 and 9 streaming out. Weirdly, they seem to be coming out slightly too late. And the arity-1 function is called at the end. It's not clear what you can usefully do with its parameter other than pass it through, since it's not a fixed type, has no guaranteed protocols and, in the case of LazyTransformer, blows up if you try to evaluate it.

If you take a look at actual transducers, you'll see there's a third, zero-arity function declared as well. I haven't discovered what that's for yet.

State of Play

So what's that arity-1 function for, then? Well, the doc string for drop gives us a whopper of a clue:

Returns a stateful transducer when no collection is provided.

Transducers can have state. They start when they're handed the yield function, they finish when the arity-1 function is called, and that's your chance to clean up any resources. This start/reduce/finish lifecycle is actually vital to making drop and the other stateful transducers work.
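
To make that lifecycle concrete, here's a minimal sketch of a stateful transducer: a hand-rolled, drop-like one. (my-drop is just a name I've made up; the real drop in clojure.core is more careful and also declares the zero-arity version.) The state lives in an atom created when we're handed yield, and the arity-1 call is the finish:

(defn my-drop [n]
  (fn [yield]
    (let [remaining (atom n)]
      (fn ([x b] (if (pos? @remaining)
                   (do (swap! remaining dec) x)   ; skip this item, leave the accumulator alone
                   (yield x b)))                  ; past n items: pass everything through
          ([x] (yield x))))))
(sequence (my-drop 2) z)
;;; (3 4 5 6)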

OK, this is starting to look an awful lot like the IObserver interface in C#. (The Subscribe method corresponds to the initial start step.) That suggests the arity-zero function is for some form of error handling, but I haven't managed to trigger it.

Bad Reputation

Okay, now let's try something a bit harder. Let's repeat our input.

(defn duplicate [yield] 
  (fn ([x b] (yield x b) (yield x b)) ([x] (yield x))))
(sequence duplicate [1 2 3 4 5 6])
;;; (1)

What the heck happened there? We ignored the result of the first call to yield. Let's fix that.

(defn duplicate [yield] 
  (fn ([x b] (yield (yield x b) b)) ([x] (yield x))))
(sequence duplicate [1 2 3 4 5 6])
;;; (1 1 2 2 3 3 4 4 5 5 6 6)

Perfect! It's a mystery to me how exactly it failed, but we've gained a bit more insight: you can only chain calls to yield by passing the result of one in as the first parameter of the next.

So, here's what we've learned:

  • A transducer is a function that takes one parameter and returns a "function"
  • Said function is actually 2/3 other functions, using arity overloading
  • It has a start/reduce/finish lifecycle. The finish step can't transform the result further.
  • It can have state.
  • Calls to yield in the reduce step have to be well-behaved.

I'd like to write some more, but this is easily enough for this post.

Not a Haskell Monad Tutorial: Applicatives

Functors are all very well, but they only allow you to map with a function that takes a single parameter. There are plenty of functions that take more than one parameter, including useful ones like add and multiply. So how do we want multiplication to work on nullable integers?

  • 2 times 3 should be 6
  • 2 times null should be null
  • null times 3 should be null
  • null times null should be null
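
In Clojure terms, with nil standing in for null, a sketch of that behaviour might look like this (mul-nullable is an invented name, purely for illustration):

(defn mul-nullable [a b] (when (and a b) (* a b)))
(mul-nullable 2 3)     ;;; 6
(mul-nullable 2 nil)   ;;; nil
(mul-nullable nil 3)   ;;; nil
(mul-nullable nil nil) ;;; nil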

There's something else we need to do. What if 2 is just an integer, not a nullable integer? Really, we need to be able to promote an integer to a nullable integer. The more parameters a function has, the more likely one of them isn't in exactly the right format. Haskell calls this function pure. (+)

Now let's get a bit more complicated. What about multiplying two lists together? Multiplying [2] and [3] should obviously give [6]. But what happens if you're multiplying [2,3] and [5,7]? Turns out there are at least three sensible answers:

  • Multiply the pairs in sequence: [10,21]
  • Multiply the pairs like a cross join: [10,14,15,21]
  • Actually, you could also do the cross join the other way around, iterating the first sequence fastest: [10,15,14,21]
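
Clojure happens to demonstrate the first two behaviours directly:

(map * [2 3] [5 7])              ;;; (10 21), the pairs in sequence
(for [x [2 3] y [5 7]] (* x y))  ;;; (10 14 15 21), the cross join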

More than one way to skin a list

Let's just concentrate on the first two. How are they going to deal with lists of different length?

  • [2] * [1,3] should be [2] OR
  • [2] * [1,3] should be [2,6]

But what if the first parameter isn't a list? What should that look like? Well, 2 * [1,3] should definitely be [2,6]. But that means that, depending on how we generalise multiplication, we also need to generalise turning a number into a list.

  • To multiply like a cross join, 2 can just become [2]
  • To multiply the pairs in sequence 2 needs to be [2,2,2,2,2,...], an infinite sequence of 2s.
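
Again in Clojure, those two promotions fall out quite naturally:

(for [x [2] y [1 3]] (* x y))  ;;; (2 6), cross join: 2 becomes [2]
(map * (repeat 2) [1 3])       ;;; (2 6), pairwise: 2 becomes an infinite sequence of 2s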

So, generalizing multiple-arity functions to functor contexts isn't as obvious as it is for single-arity functions. What on earth do we do about this? Well, the approach Haskell goes with is "pick an answer and stick with it". In particular, for most purposes, it picks the cross join. But if you want the other behaviour, you just wrap the list in a type called ZipList and then ZipLists do the pairwise behaviour.

Back to the Functor

So, how should we handle the various examples of functors that we covered in the first part? We've already dealt with nullables and lists, and sets are a dead loss because of language limitations.

Multiplying two 1d6 distributions just gives you the distribution given by rolling two dice and multiplying the result. Promoting a value e.g. 3 to a random number is just a distribution that has a 100% chance of being 3.

You can multiply two functions returning integer values by creating a function that plugs its input into both functions and then returns the product of the results. You can promote the value 3 to a function that ignores its input and returns 3.
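
A rough Clojure sketch of both of those (the names are invented for illustration):

(defn mul-fns [f g] (fn [x] (* (f x) (g x))))
((mul-fns inc dec) 5)            ;;; 24, i.e. (5+1) * (5-1)
((mul-fns (constantly 3) inc) 5) ;;; 18, i.e. the promoted 3 times (5+1)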

How about records in general? Well, here's the thing: you can't promote a record without having a default value for every field. And that isn't possible in general. So, while you can undoubtedly make some specific data structures into applicatives, you can't even turn the abstract pair (a,b) (where you're mapping over a) into an applicative without knowing something about b.

We could make the mapping work for pairs if we were actually supplied with a value for b. But that doesn't make sense, does it? How about, instead of (a,b), we work with functions b -> (a,b)? Now we can map over a, with both single and multiple-arity functions, and leave the b input and output values well alone. It turns out this concept is rather useful: it's usually called the State Monad.

Would you like Curry with your Applicative?

Up until now, I've mostly talked about pairwise functions on integers. It's pretty obvious how you'd generalize the argument to arbitrary tuples of arbitrary input types. However, it turns out that the formulation I've used isn't really that useful for actual coding, partly because constructing the tuples is a real mess. So let's look at it a different way.

Let's go back to multiplying integers. You can use the normal fmap mapping on the first parameter to get partially applied functions. So our [2,3] * [5,7] example gives us [2*,3*] and [5,7]. Now we just need a way of "applying" the functions in the list. We'll call that <*>. It needs to do the same thing as before, and the promotion function, pure, is unchanged.

It turns out that once you've got that, further applications just need you to do <*> again, so if you've got a function f and you'd normally write f a b c to call it, you can instead write

f <$> a <*> pure b <*> c

That's assuming a and c are already of the correct type and b isn't. This is equivalent to

pure f <*> a <*> pure b <*> c

but in practice people tend to write the dollar-star-star form. Finally, you can also write

(liftA3 f) a (pure b) c

which is much more useful when you're going pointfree.

And finally...

So, here's the quick version:

  • a functor that can "lift" functions with multiple parameters is termed an "applicative functor", "idiom" or just "applicative"
  • a functor is uniquely defined by the data type you're mapping to(*)
  • some data structures like list, however, give rise to multiple possible implementations of Applicative

Functors have been well understood for a long time, and monads provided the big conceptual breakthrough that made Haskell a "useful" language. The appreciation of applicative functors as an abstraction that sits between the two in power is a more recent development. When poking around the Haskell libraries you'll often discover two versions of a function, one designed for applicatives and one for monads, even though they're the same function. It's just that the monad version was implemented first. With time, the monad versions will be phased out, but it's going to take a long time. You can read more about the progress of this on the Haskell wiki.

If you want a much more rigorous approach to what I've been talking about here, read Brent Yorgey's excellent lecture notes.

(+) It's also called return, for historical reasons.

(*) Indeed, and this is awesome, Haskell will just automatically generate the Functor fmap function for you.

Functors: Programming Language Limitations

What does a functor actually look like in various programming languages? We already said it's something you can use to map, so let's take a look at some languages' mapping functions:

  • Clojure has map, mapv and fnil
  • Haskell has map and fmap. (It also has <$>, which is the same as fmap)
  • C# has Select
  • Java's streams library has map and mapToInt

So, are any of these functor mapping operations? Well, it won't take a genius to guess that fmap does the right thing. map in Haskell does the same thing, but only works for Lists. The definition of fmap for list is map, so it's pretty much a wash. (*)

Land of Compromises

The others? Well, kind of, but you tend to need to squint a bit. The problem is that if you map with an identity function (e.g. x => x in C#), you should get the same type back as you put in. And actually, that's very rarely true. map in Clojure can be called on a vector and will return a lazy sequence. mapv can be called on a lazy sequence and will return a vector. map in Java and Select in C# send an interface type to the same interface type, but rarely hand back the exact concrete type you started with.
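
A quick look at the REPL shows the problem (the class names are whatever my current Clojure version reports):

(class [1 2 3])                  ;;; clojure.lang.PersistentVector
(class (map identity [1 2 3]))   ;;; clojure.lang.LazySeq
(class (mapv identity '(1 2 3))) ;;; clojure.lang.PersistentVector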

Moreover, there isn't a general mapping interface that lots of functor-like things implement, and there isn't any way to make one. This isn't a problem for list comprehension, but it horribly breaks functors as a general model of computation. (#) You can still use the concepts, but you'll end up with a lot of code duplication. Indeed, you'll probably already have this code duplication and be thinking of it as a pattern. As is all too often the case, low-level programming patterns reveal deficiencies in your programming language.

There are good reasons for these mapping functions not behaving exactly like a functor, though, and they come down to performance. The Haskell compiler treats everything, not just lists, as lazy and can optimize the intermediate types away. Clojure, C# and Java can't, and treat concrete collection types as hard optimization boundaries.

Haskell ain't perfect either

We already established in the previous article that there are plenty of functors that have nothing to do with types. Haskell's Functor type class is therefore only a functor on the category of Haskell types (usually referred to as Hask). This seems good enough, but actually it isn't.

Consider a set of values. You can easily define a mapping function that satisfies the functor rules. Sadly, Set in Haskell isn't a Haskell Functor. This is because Set imposes a condition on its values: they must be sortable. Whilst this isn't a problem for real functors, it's a problem for Haskell functors because type classes don't admit restrictions on their type parameters. To put it another way, Functor in Haskell is a functor over the whole of Hask, never a subcategory. For that matter, you can't do the (*2) functor that I described last time in any sensible way, because you can't restrict its action to integers.

It turns out this problem is fixable with Rank-2 type classes, but don't hold your breath for that to land in the Prelude any time soon. In the meantime, you can't use Functor to represent functors with domain type restrictions.

(*) Many smart Haskellers believe (and I agree with them(+)) map should work on all functors and fmap should be retired. There's a general theme here: the standard libraries are showing their age and need work to simplify them.

(+) If you've never seen Alan Rickman play Obadiah Slope, you're missing out.

(#) If you're prepared to lose information, all you really need is reduce/Foldable anyway.

Functors: Category Theory Stuff

If you ever want to talk the same language as smart Haskellers, you need to know a bit of category theory. Here are some notes on how I understand category theory right now.

The first thing to appreciate is that a list isn't a functor: "list" is a functor. In particular, it's a mapping from one type to another, e.g. int to list of int. Furthermore, it's a mapping that preserves the structure of int, in the sense that performing "map" works.

Considered this way, there's no such thing as a "higher order type": there are just functions from one type to another. Types with more than one type parameter in Java/C# are just multiple-arity functions on types.

Some other things that are worth considering: you can make a list of any type, even a list. Not only that, but if a and b are different types, list of a and list of b are different types as well. So, in maths terms, it's an injection from the type space into a subsection of the same type space.

What the heck is a category?

Now, let's go back to the start and talk terminology. A category is a bunch of "objects" and "arrows" between them. They behave basically like values and functions. Indeed, values and functions form a category. The only real requirement is that arrows compose like functions and that there's an identity map that does nothing.

In the context of type theory, the objects are the types themselves and the arrows are the functions between them. Just like normal functions, arrows don't have to be reversible. Now let's make it a bit weirder: just the lists, and the functions from lists to other lists, form a category too.

The next bit may or may not make sense if you don't have a maths background. Mathematically, a functor isn't anything to do with types at all, it's just a mapping between one category and another that preserves some structure.

Wait what?

Let's think of a really simple category. Let's have the objects be integers and the arrows be shifts of integers, e.g. "add three", "subtract two". And "add zero" is an identity arrow.

Now let's have another one which is the same, only all of the numbers and shifts are even. Then "times two" maps objects and arrows between the two categories. So 3 becomes 6 and "add 3" maps to "add 6". And finally, "add zero" becomes... "add zero". So "times two" is a perfectly valid functor that has absolutely nothing to do with type theory at all.
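
Since everything here is just a function on integers, you can sanity-check in plain Clojure that "times two" really does preserve the structure: mapping the arrow and then applying it agrees with applying the original arrow and then mapping the object.

(def times-two #(* 2 %)) ;; the functor's action on objects
(def add-3 #(+ 3 %))     ;; an arrow in the first category
(def add-6 #(+ 6 %))     ;; its image in the second category
(= (times-two (add-3 5)) (add-6 (times-two 5))) ;;; true, both are 16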

Finally, a small note: if you're just looking at category theory for the purposes of understanding Haskell, you'll come across the phrase "locally small" a lot. Every last category you are ever going to worry about is locally small, so don't sweat it.

Not a Haskell Monad Tutorial: Functors

One of the things that people new to Haskell may not appreciate is that academia's love affair with monads has been waning for some time. In its place is a more nuanced hierarchy of Functor, Applicative, Monad. (*)

So what the heck is a functor? Well, really it's just something you can map over and it makes sense. "Makes sense" has a specific mathematical meaning, but I'm going to gloss over it and keep going.

Let's talk about some things you can map over:

  • A list, [a]
  • A set, Set a
  • A nullable value, Maybe a
  • A random value, Rand StdGen a
  • A function returning a value, (->) a
  • A program returning a value, IO a (We'll ignore this one from now on, I'm just mentioning it because IO is kind of important.)
  • A record, where we only map over one of the fields. e.g. imagine a pair (x,y) where x and y are different types

So, if you were mapping "add one" you'd get

  • A list where all of the values were one larger e.g. [2,4] becomes [3,5]
  • A set where all of the values were one larger, e.g. {2,4} becomes {3,5}.
  • A value one larger, or null. So null becomes null, and 3 becomes 4.
  • A random integer value between 1 and 6 becomes a random integer value between 2 and 7.
  • A function f(x) becomes a function g(x) where g(x) = f(x) + 1
  • The pair (x,y) becomes (x+1,y)
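
Here's roughly what a few of those look like in Clojure, treating nil as the null case and a two-element vector as the pair:

(map inc [2 4])             ;;; (3 5)
(map inc #{2 4})            ;;; (3 5), though note it comes back as a seq, not a set
(some-> 3 inc)              ;;; 4
(some-> nil inc)            ;;; nil
((comp inc #(* 2 %)) 5)     ;;; 11, mapping over a function's result
(update-in [3 "y"] [0] inc) ;;; [4 "y"], mapping over just the first field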

I've found that while getting your head around the concept, it's best to just concentrate on nullable values and lists. In particular, if you're familiar with Clojure or LINQ, it's about understanding that things like nil-punning and fnil are exactly the same concept as map and behave the same way.

Just to complicate matters, the set example doesn't actually work in Haskell, but I'll get to that in a later post.

My Head is Hurting

If you want to learn the stuff I'm saying properly, go do Brent Yorgey's Introductory Haskell course and do the exercises. It's a significant time investment, but well worth it.

(*) This is a gross over-simplification, so sue me.

Evaluating Clojure Libraries

My last post on Clojure template libraries needs updating already, but before I do that, I'd like to jot down some notes on how I try to evaluate libraries. Ultimately, it's actually hard to accurately evaluate a library without using it in anger. Sadly, by that point you tend to be committed. Try to find someone who uses Rails heavily to tell you that it's not fit for purpose.

There are already ten or so HTML template libraries. You're not going to use them all before committing, so you need a way to choose which you're going to try first.

  1. The first question, of course, is: does it do what you need? It's usually worth browsing the documentation beyond the first couple of paragraphs of the readme. What you mean by "rest middleware" could be very different from the author's understanding of the term.
  2. Is it correct? Clojure is still in its amateur phase. When I looked at node postgres libraries, only one was actually capable of querying the first system table I tried. Unit tests are a good signal. (This is a problem with REPL driven development: there's no evidence after the fact you did it.)
  3. Is it simple/composable? Can you vary what it does? Macros, sadly, are a bad sign. If no part of the API takes a function as a parameter, it's probably not that flexible. Calling yourself a framework is a red flag.
  4. Is it fast? Given the choice between two otherwise identical libraries, you'll always prefer the faster. Usually, all you can really tell from the documentation is whether or not the author cares about speed.
  5. Does it have a community? Large numbers of committers, even large numbers of issues on github are usually a good sign. These make it more likely that the library will continue to evolve as your requirements change.
  6. Can I change it? This one is horribly underrated. At this point, there's still a good chance that you'll find something you want to submit as a pull request. Take a browse of the code and see if it's a code base you can work with. Watch out for libraries that are really Java libraries, like mustache.clj.

Clojure Web Stack: Server Side HTML Generation

I'm going to try to outline your current choices when generating HTML in Clojure.  To enable you to skip the entire article: there are no bad libraries, but they're very different and suitable for different things.  If you need to, there's nothing stopping you using multiple technologies in different parts of the program.

I'd be lying if I said I was an expert on any of these technologies, so please feel free to correct me. 

Hiccup

If there's a default stack for Clojure, it's the work done by James Reeves.  Hiccup is an HTML DSL for Clojure, like HAML but everything is valid Clojure.  The principal advantage of doing things this way is the ability to build things on top of it: if you want to create an API for standardized HTML components, e.g. a form library, Hiccup's your friend.

With Hiccup, you're going to be ready to roll in about ten seconds after you've read the documentation, and it’s extremely fast.  (I’ll let someone else run the micro-benchmarks.)  However, it's a bit fiddly, precisely because it's very low level and macro-friendly.
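
For a flavour, here's a tiny sketch (the html macro lives in hiccup.core in the versions I've used):

(require '[hiccup.core :refer [html]])
(html [:ul (for [n [1 2 3]] [:li "Item " n])])
;;; "<ul><li>Item 1</li><li>Item 2</li><li>Item 3</li></ul>"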

Enlive

If James Reeves is the Beatles, Christophe Grand is the Rolling Stones of Clojure Web Development.*  Enlive is the most insanely fully featured project here.  For instance, the latest version contains a helper with the entire functionality of Hiccup.

Enlive is properly an HTML parsing and transformation engine based on JSoup, but it contains a capable templating solution.  Here you provide external files that are valid HTML with no additional markup.  It then performs transformations on nodes you identify using a CSS-like syntax.  The transformation you're going to be using most often, of course, is to insert some text or HTML, but you can do arbitrarily smart things here.

Enlive does have an acknowledged weakness: the documentation.  There are some good introductions, but Enlive is ridiculously deep.  However, if you put the time in, you'll find it's incredibly powerful and elegant.
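
A minimal sketch of the templating side, assuming an ordinary page.html sitting on the classpath:

(require '[net.cgrand.enlive-html :as html])
(html/deftemplate page "page.html" [title]
  [:h1] (html/content title))
(apply str (page "Hello from Enlive"))
;;; the original page.html, with its h1 text replaced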

Laser

Laser is a new project by the talented Anthony Grimes.  It aims to take the best parts of Enlive and put them into a simpler package.  The syntax is more verbose, but fully composable (everything's a function).

It also has the ability to use unparsed HTML in its output, which Enlive discourages.

Fleet

Neither Hiccup nor Enlive is very close to a traditional templating engine as you'd expect to find in other languages.  It seems like node has as many templating solutions as it has developers.  Clojure, at the time of writing, basically has two.

Fleet is a classic "mix code with your markup" templating language.  You can insert arbitrary functions into your HTML, like classic ASP and ERB.  It also has a host of support functions, including the ability to create namespaces on the basis of a directory of templates.

Clostache

Clostache, on the other hand, is a mustache implementation.  It is programmable, but only in limited and well-defined manners.  Whilst Fleet has an extensive API, Clostache declares only two methods: render and render-resource.
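
Which means a minimal use of Clostache is about two lines (render lives in clostache.parser):

(require '[clostache.parser :refer [render]])
(render "Hello, {{name}}! You have {{count}} new messages." {:name "Rich" :count 3})
;;; "Hello, Rich! You have 3 new messages."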

Choosing an Engine

When choosing between node libraries, the principal question was "does it work?".  And the answer was, typically, "not after you've been using it for half an hour".  Clojure libraries just aren't like that.  All of them are good at what they do.

The biggest question here is: what's your approach to templating?  If you believe in pure HTML templates, Enlive or Laser is for you.  If you want lots of code in your HTML, you're going to want Hiccup or Fleet.  If you're looking for somewhere in between, Clostache is worth a look.  And, as I said before, you can always use one solution for one part of your system and another elsewhere.  That said, the different systems aren't composable with one another, so be very clear as to what you're using each for.  (I'm pretty sure this is why Enlive has added hiccup-style generation.)

For what it's worth, my current project uses Clostache for straightforward page serving and Enlive for HTML reprocessing, although I'm thinking about Laser.  To date, I've found it pretty easy to change my mind about which libraries to use.  I've never found that with any other platform.  I'm ascribing that to the design of Clojure and the aesthetic of the community, and it's a huge win.

(NB For some reason my blog won't take comments from Chrome. I promise I'm working on it, but it's taking a while.  A long while.)

Clojure has a Problem with Async

Clojure, like node.js, is a very opinionated platform.  The funny thing is that almost every opinion is different. 

Clojure embraces Java as a platform. 

  • Originally, every declared identifier was overrideable on a per-thread basis. 
  • There are many features (e.g. Futures and Reducers) that allow you to embrace multi-threading at a high level.
  • Data is immutable.
  • Data is globally shared between threads.
  • It adds STM to Java’s already extensive thread-synchronization primitives.
  • Everything’s a function. 

Node, conversely, embraces JavaScript.

  • It’s aggressively single-threaded and asynchronous.
  • If you want another thread, you’ll have to start another process.
  • Everything’s mutable, even class definitions.
  • Share data between processes?  I hope you like memory mapping.
  • Synchronization barriers?  You don’t need them. 
  • Everything’s an event with a callback.

Clojure and Node have completely different sweet spots: Clojure is truly excellent at computation, Node at IO.  Like it or not, multiple threads aren’t really a good solution to blocking IO.  Which is a pity, because all the main Clojure libraries feature blocking IO (e.g. clojure.java.jdbc, ring).  That’s not to say there isn’t some amazing stuff being done in Clojure, just that it could be even better.

JDBC is an interesting case because it’s a Java problem that works its way through to Clojure.  Node.js made a virtue of being the only API on a new platform.  The Clojure jdbc wrapper, however, introduces a couple of oddities of its own.  For instance, it can only have one open database connection at a time.  That’s usually fine, but sometimes undesirable (try performing a reconciliation of a million records between two databases).  To some extent, this is a hangover of Clojure being envisaged as an application language that uses libraries written in Java.

There’s nothing stopping you from writing Clojure code in a node-like style, as long as you’re prepared to write your own web-server (Webbit, Aleph) and DB libraries (er… no-one).  Equally, implementing a feature like co-routines wouldn’t actually be that hard, but you’d lose bindings, which is a problem for any library that assumes that they work.  And you’d still need all of your libraries to be async.

For all these reasons, I don’t think we’re going to be seeing a proper Clojure async solution any time soon.  Ironically, I think it’s the complete absence of async DB libraries that is really holding it back.  Without that, solving most of the other things isn’t really that useful.

WebForms In Retrospect

I’m not much for technology recommendations.  Most technology choices should probably be driven by familiarity and cost.  For instance, Ruby on Rails is easily the most mature web stack out there, but if you’re a Python programmer with no familiarity with Ruby, Flask or Django is likely to be a better choice.  It’s not helped by the fact that a number of “technology change” stories turn out to be stories of replacing bad code in one stack with good code in another.  The rest tend to be stories about reducing running costs by moving off Windows or Heroku.  Most technology choices, especially in the web space, tend to be “good enough”.

There’s one huge exception to this rule: ASP.NET WebForms.  It’s poison for productivity and it’s poison for production.  It took me a long time to accept this.  I had been developing uSwitch.com for nearly four years.  We had developed the majority of the site in WebForms (this was before MVC was even an option).  Except, weirdly, for the most profitable parts.  They were in an unholy mix of ASP, XSLT and C# for the business logic.  Although a pain to work with, I’m no longer convinced it was actually worse than WebForms where it counts.

The Choice That Destroyed a Company

Now, rather than the implementation success stories that glut the internet, let me tell you about a failure.  uSwitch is now a successful project run by ForwardTek.  However, it’s shocking what happened to uSwitch in the last years I was there.  The firm was bought for $366 million in March 2006 with big plans for expansion.  In July 2007, these plans were dust and there were huge layoffs (I was already gone).  Within two years, with no expansion in sight, February 2008 saw the purchaser effectively write down the firm to zero.  Now, the reasons for this are always many and varied (buy me a beer sometime), but a fair proportion of this horrible crash can be ascribed to one project, and a fair proportion of that to its technology choice.  I’m not going to claim that there weren’t bad project management decisions, or that all of the code we delivered was perfect, but choosing WebForms was a mistake, and a big one.

The truth is, I thought the writing was on the wall by the start of 2007.  The project known as the redesign, which began at about the same time as the buyout, had been live for a couple of months.  It was a disaster:

  • It had frozen the entire website for over six months, allowing competitors to eat our lunch.
  • Our flagship energy product was actually slower at the end of it than at the beginning.
  • The new structure wasn’t actually flexible enough to handle the changes we were then asked to implement.
  • It took over 50 developers, including quite a few contractors, about seven months to deliver.  We had to pay contractors to work weekends.  When the project was over, so little leave had been taken the firm had to institute buy-back.  The cost of the project was just plain more than the firm could take.
  • And, worst of all, the uptick in conversion the project promised just plain didn’t happen.

(There is an irony associated with that last point.  The original project proposal promised a 10% improvement (don’t quote me on the exact figures after all this time).  Some quick-win patches we instituted in February garnered about half of that.  This left us with a large project to gather the other 5%.  With that in mind, it’s not clear the project was worth doing even at the start.  That and the decision that we couldn’t have an inconsistent look on the site really, really hurt us.  I did say there were other factors.)

This post is getting extremely long, but hopefully I’ve got your attention.  I distinctly remember one day in July, the fourth month of this project.  I had just finished a two hour debugging session with another developer on some issue to do with dynamically generated content, one of the many issues that plagues WebForms development, when it occurred to me that after three years of working with the technology, I was far from convinced the productivity benefits of learning it were there at all.  It’s pretty tough accepting that you’ve spent the last three years driving in the wrong direction, but the longer I thought about it, the more solid my opinion became.

Wait, What Has This Got To Do With WebForms?

You’ll note that I haven’t said the code was bad.  It wasn’t.  It had a fair number of automated tests against it, some nice automated health checks in Watir.  A little while after release, it even gained an automated deployment system.

WebForms has a number of things that just make it a horribly inappropriate technology for a dotcom website.

  • It wants to generate all of your HTML.  This is a serious problem if you’re trying to produce a skinnable application.  Yes, I’m fully aware of the skinning technology in WebForms; it’s completely inappropriate for the kind of work dotcoms actually want to do.
  • It generates huge hidden state fields.  This means your programmers end up fighting WebForms every time they need a page to be fast.  Sadly, this is all of the time.
  • It generates large numbers of complex munged IDs.  In addition to making your page slow, it obstructs debugging.
  • The standard model is to post back to the same page and then redirect to the next.  This involves reconstructing the state of the previous page in order to process the events.  This is way too slow at a computational level and involves too many round trips.
  • Specifying your own URLs was generally regarded as deep magic until 2010.  Seriously.

Seriously, there’s no other web stack I can think of that makes it so hard to just deliver some HTML in a form and get a post back.  Now, if you’re a WebForms expert, you’ll be taking a look at the previous list and thinking “but there’s ways around all of these issues”.  You’d be right, but that’s missing the point.  There’s no other web stack that requires expertise to get these things right.  But even if you can crack these issues, your problems are just beginning.

For instance, have you heard of the “off by one” issue?  Let’s say you have a form that changes the number of controls on the page under certain circumstances.  It’s quite easy to get into a situation in which it’s showing the wrong stuff on the page.  If you add a dummy button, pressing it will correct the page.  Debugging issues like this is a nightmare.  Things get worse if you have data driven controls.  While we’re on the subject, I remember watching one developer trying to replicate the functionality of repeater.  Even after decompiling the original code, he couldn’t get it to work with the declarative syntax, leading us to believe that the repeater is actually magic.  The kind of magic that comes out of Neville Longbottom’s wand.  Oh yes, and you’ve got to understand the ASP.NET event model, a construct more complex than Cloud Atlas and significantly less fun to read.

Oh yes, and I haven’t even mentioned these issues yet:

  • there’s no support for any kind of CSS or JS asset pipeline.  If you want some, you’ll have to roll your own build system.  Rails comes with this stuff baked in.
  • testing?  You have two options: Selenium or clicking on a web browser.

WebForms: Just Say No

WebForms is productivity poison; in fact, it’s so bad it’s probably not just hurt little firms like mine, it’s a technology (again, part of a larger picture) that killed Microsoft as a cloud platform.  Someone reading this undoubtedly believes WebForms is better in 4.5.  It probably is, but it remains the wrong idea in the first place.  It’s quite hard to fix that.  I don’t write that many websites these days, but even for internal stuff I never, ever, touch WebForms. 

Of course, it’s worth pointing out there are other sites on the internet that used WebForms at the same time.  Not many, but it’s worth considering that the two I can think of are Orkut and MySpace.  They were both technical disasters, unable to match the pace of a firm like Facebook that had delivered its entire functionality in PHP, a technology whose closest relative in the Microsoft world is classic ASP.


New Coding Standard: Don’t Be A Jerk

Unlike many, I do actually believe that code aesthetics matter.  On the other hand, I’ll also admit to being as much of an arrogant elitist about code as I am about music.  This is why I like CoffeeScript and Anton Webern.  2 spaces vs 4 spaces?  Spaces vs Tabs?  Indent case statements within a switch?  I’ve got an opinion.  The fact remains that it’s much more important that everyone uses the same conventions than exactly what that convention is.  So I keep my stylistic tics to myself and listen to Polly Harvey on headphones where the noise doesn’t frighten my wife.

One funny thing is that people don’t seem to extend this logic out of their team or project.  There’s usually a dominant answer to these stylistic questions for whatever programming language you’re using.  Don’t like it?  Write your own programming language while listening to Laurie Anderson (or whatever you’re into).

For the most part, just doing what everyone else does answers most aesthetic coding standards questions that may arise.  It reduces stupid fatiguing stylistic discontinuities and lets you get on with actually reading the code.

Standards Creep

So, do you need a coding standards document?  Well, that’s really going to depend on how many violations you’re seeing.  Bear in mind that beginners aren’t going to perceive the patterns as easily as the competent.  However, here’s where the spectre of best practices rears its head again.  Once you start to document what people’s code looks like, it’s very tempting to move into what their code actually does.  Again, you end up creating a document that, at best, annoys your best programmers.  At worst, it creates a culture in which your best programmers are regarded as loose cannons and mediocrity is regarded as the highest goal.

So here’s my own contribution to “best practice” coding for readability: don’t be a jerk.  At the risk of being prescriptive and contradicting everything I’ve already said:

  • Don’t write bad confusing code and then a long comment explaining it.  Write better code.
  • It’s OK to have one-character identifiers with tight scope, especially if you’re going to use them repeatedly.
  • When an identifier has wide scope, write words out in full rather than using a contraction, unless it’s a very well understood one.  Searching for stuff is hard when you have to guess.
  • And needless to say, spell things correctly.  If you’re not sure, type the word into Google.  I do.
  • If jargon doesn’t make things more concise, don’t use it.  That said, use commonly accepted terminology.
  • Don’t reuse identifiers to mean different things at different times.
  • Don’t be vague.  isActive and shouldBeActive are different concepts.  It’s amazing how many people just call the identifier “active” in both circumstances.*
  • Identify things by what you’re doing with them, not what they are.

Unless, of course, following these rules to the letter would make you act like a jerk.

*To get to a nitty-gritty style point, the standard in C# is that you would identify “is active” with “active”.  Still, don’t ever identify “should be active” with “active”.  Equally, if you’re in lisp, “should be active” shouldn’t be named “active?”.