Micro-fiddling with erlhive

Every once in a while, my thoughts touch upon erlhive. While it may seem that the project isn’t going anywhere, I’m still pretty comfortable with it. It’s not as if I started it for the money, anyway, and the problems it addresses are real and interesting.

Someone mentioned to me that erlhive had been dismissed in a discussion, with the argument that the benefits are not worth a throughput of 50 pages per second.

I’m not sure where the 50 pages per second comes from, and in all honesty, I’ve cared so little about performance that I haven’t benchmarked it myself. So, just to satisfy my curiosity, I ran a little speed test…

I chose to benchmark only the blog back-end, partly because erlhive (oddly enough for a “web framework”) doesn’t really have a web front-end yet. The back-end does have a functional API, however, and the front-end can in any case be made arbitrarily fast or slow depending on context.

Here is my little benchmark program:

fetch(N) when N > 0 ->
    erlhive:with_user(
      <<"user1">>,
      fun(M) ->
             Id = M:apply(erlhive.blog.flexiblog,
                          string_to_id, ["ba/user1/1/1"]),
             [_|_] = M:apply(erlhive.blog.flexiblog,
                             read_article, [Id])
      end),
    fetch(N-1);
fetch(_) ->
    ok.
Clocking this with timer:tc() in coLinux/Vista on my 2.17GHz dual-core Pentium laptop (not using SMP erlang, though), I achieve 1450 article fetches/second. That’s a lot better than I had expected. Erlhive, apparently, is not so slow after all.
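For reference, the timing itself is just a timer:tc/3 call around the fetch loop. A sketch (assuming fetch/1 above is compiled into the same module; the bench/1 helper name is mine):

```erlang
%% Sketch: time N fetches and compute the rate.
%% Assumes fetch/1 as defined above exists in this module.
bench(N) ->
    {Micros, _} = timer:tc(?MODULE, fetch, [N]),
    N * 1000000 / Micros.  %% fetches per second
```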

To explain what the above does, I assume that the surrounding web framework has authenticated the user (<<"user1">> in this case), and that there exists an article 1 in blog 1 of the user. All objects in erlhive can be given user-defined properties, so coming up with more creative names is perfectly possible. However, the blog example uses the following naming convention:

  • [blogs,User,N] to identify a blog
  • [ba,User,BlogN,ArticleN] to identify an article
  • [bc,User,BlogN,ArticleN,CommentN] to identify a comment
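The web-friendly string form maps one-to-one onto this internal list form. For illustration, a plausible sketch of such a translation (this is my own guess at the logic, not erlhive’s actual erlhive.blog.flexiblog:string_to_id/1 implementation):

```erlang
%% Hypothetical sketch: translate "ba/user1/1/1" into [ba,<<"user1">>,1,1].
%% Not erlhive's actual implementation.
string_to_id(Str) ->
    [Tag, User | Nums] = string:tokens(Str, "/"),
    [list_to_atom(Tag), list_to_binary(User) |
     [list_to_integer(N) || N <- Nums]].
```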

The call erlhive:with_user(User, Fun) starts a database transaction, verifies that User is a known erlhive user, and creates an “access record” in the form of a parameterized module M. The user-provided fun is then called as Fun(M), and through the module M, the whole erlhive API is available. Any function called through M (e.g. using M:apply(Mod, Func, Args)) is subject to erlhive’s access control. Of course, anything else in Fun/1 is still ordinary Erlang, and can be as “dirty” as you like. In short, erlhive:with_user/2 creates the transaction context, and M is the entry point to erlhive.

The first call, to erlhive.blog.flexiblog:string_to_id("ba/user1/1/1"), simply translates a web-friendly identifier to the internal form, [ba,<<"user1">>,1,1]. The second call, to erlhive.blog.flexiblog:read_article(Id), performs the following steps:

  • Checks that the blog, [blogs,<<"user1">>,1], exists and that the current user (<<"user1">> in this case) has read access to it.
  • Reads the properties of the article, and verifies that the current user is authorized to perform the requested operation.
  • Returns the property list associated with the article. The actual article content is also a property, tagged ‘content’.
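Putting these pieces together, extracting just the article body from the returned property list is ordinary Erlang. A sketch (the ‘content’ tag is as described above; the get_content helper name is mine):

```erlang
%% Sketch: fetch an article and return only its content property.
%% read_article/1 returns a property list; 'content' holds the body.
get_content(User, Id) ->
    erlhive:with_user(
      User,
      fun(M) ->
             Props = M:apply(erlhive.blog.flexiblog,
                             read_article, [Id]),
             proplists:get_value(content, Props)
      end).
```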

The blog example is reasonably clever for a blog module. It allows users to be categorized as ‘admin’, ‘author’, ‘reviewer’ and ‘reader’. A reviewer can read and comment on an article before it has been published. You can list blogs and articles by property (all properties are automatically indexed), and you can clone an article as a form of revision handling. It is also possible to step to the previous/next article in a revision succession.

All this in less than 1000 lines of code. I do feel pretty comfortable with that.

There is also a short text file that illustrates the blog interface through the interactive erlhive shell. Not exactly stellar documentation, but if I get some indication that there are potential users out there, I promise that it will improve.

BTW, in the course of this experiment, I noticed that the interactive shell was broken, and that the read_article/1 function actually didn’t exist. Bug fixes have been committed to the repository. I never did promise that it was production-quality, did I?