Django and asynchronous jobs

I’m going to focus on RabbitMQ & Celery, what it buys you and why and when to use it. Eric’s blog always seems to be a couple months ahead of mine, so instead of re-hashing what he wrote, I’ll let you read his blog and then come back and finish my complementary post. Go there now. I’ll wait.

Here’s the mini-rehash for those who didn’t actually go read it. Basically, RabbitMQ is a general-purpose message queuing system based on the AMQP protocol. When you need something to get executed, you simply queue up a message with RabbitMQ. Messages won’t get lost if Rabbit restarts, and you can be pretty confident that your tasks will eventually get executed or that you’ll be able to find out why they didn’t. Celery (and django-celery) is a Python library for queuing up tasks and for actually executing them asynchronously. It gets its tasks from RabbitMQ, although it works with other queues as well.

Why RabbitMQ & Celery?

Not everyone needs to execute asynchronous tasks. If you’re reading this and asking yourself “why would I ever need that,” you probably don’t need it. My project involves data analysis that takes minutes for each item in the database. In addition, I periodically re-analyze items in the database. At first, I built a cronjob (using Django management commands) which would wake up every minute and check for new items queued in the database. Then, I built a second cronjob for re-analysis. As the system grew, this became hokey and had issues anytime the database crashed during the analysis.

Anytime your Django view connects to some external service that may or may not be up (source control, web services) or may take a while (shell commands, massive database queries, etc.), the work is probably best done asynchronously. Asynchronous tasks are also great for anything that needs to be executed periodically at a particular interval (caching, expensive calculations). Celery also provides built-in ways to retry failed tasks a specified number of times at a specified interval, which is particularly useful for ensuring that things actually get done.

Setup & integration with Django

My Django views didn’t change much after I switched to Celery. Instead of adding an item to the database when I want to queue up an item for analysis, I simply queue up the execution of a @task with django-celery. The particular tasks that your application can handle are simply placed in a tasks.py file in each of your Django applications. To process the tasks, you simply run python manage.py celeryd. It can also be set up to run at boot time using init.d (Ubuntu/Debian, Redhat/Fedora). Celeryd workers can connect to Rabbit from the same server as RabbitMQ or from one or more separate servers in order to distribute the load.
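To make that concrete, here’s a minimal sketch of what the pieces look like. The names (analyze_item, Item) are hypothetical and the decorator import has moved around between Celery versions, but the shape is right:

```python
# myapp/tasks.py -- a hedged sketch, not production code
from celery.decorators import task

@task(max_retries=3, default_retry_delay=60)
def analyze_item(item_id):
    """Run the minutes-long analysis outside the request/response cycle."""
    from myapp.models import Item   # hypothetical model
    item = Item.objects.get(pk=item_id)
    try:
        item.analyze()              # hypothetical long-running method
    except Exception as exc:
        # Celery re-queues the task, up to max_retries times
        analyze_item.retry(exc=exc)
```

Queuing the task from a view is then a one-liner; the view returns immediately and a celeryd worker picks the work up:

```python
# myapp/views.py
from django.http import HttpResponse
from myapp.tasks import analyze_item

def queue_analysis(request, item_id):
    analyze_item.delay(item_id)
    return HttpResponse('queued')
```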

The nitty gritty

One useful bit that I had to dig to find in the Celery documentation was on locking a particular task so that two different workers don’t work on the same thing at the same time. This could happen in my application if two users queued up analysis on the same item, for example. The trick is to use Django’s cache framework to lock a particular item in the cache.
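Boiled down, the pattern looks something like this (a sketch: the key name and timeout are arbitrary, and the atomicity of cache.add depends on your cache backend, with memcached being the usual choice):

```python
# myapp/tasks.py -- locking sketch based on the pattern in the Celery docs
from django.core.cache import cache
from celery.decorators import task

LOCK_EXPIRE = 60 * 5  # expire the lock after 5 minutes in case a worker dies

@task
def analyze_item(item_id):
    lock_id = 'analyze-lock-%s' % item_id
    # cache.add only sets the key if it doesn't already exist, so it
    # doubles as a test-and-set (atomic on memcached)
    if not cache.add(lock_id, 'true', LOCK_EXPIRE):
        return  # another worker already holds the lock for this item
    try:
        pass  # ... the actual analysis ...
    finally:
        cache.delete(lock_id)
```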

A word on Djangocon

Unlike last year, I convinced work to send me to Djangocon in Portland next week. I’ll probably do some live blogging on some of the interesting topics. If you’ll also be there, say hi!

Piston Looks Good, But I’m Not Using It

Edit (November 2, 2012): This is horribly outdated. Use class-based views or tastypie.

Firstly, I’ve been missing in action for a few months and I apologize to you, my loyal reader, for that. Without making excuses (here come the excuses), work has been picking up, my girlfriend moved from about 15 miles away to only about 8 blocks away, and Starcraft II is in beta. Regardless, I’m back in the Python action. WoooHooo!

REST interfaces & Django

This post is somewhat of a follow-up to my post on RESTful Django web services, because I didn’t really talk in that post about Piston. Piston (sometimes django-piston) is a library for creating RESTful services in Django, and it supports some of the features I spoke about previously, such as good caching support via Django’s cache framework, different output formats (e.g. XML & JSON) via what Piston calls emitters, and the ability, but not the requirement, to use Django models as REST resources. I don’t know how I missed Piston before, but people blog (*) about it and it has made the rounds on the Django users list. However, even after looking closely at it, I decided not to go with it. In this post I’m going to talk about what I did and did not like and why I rolled my own REST micro-framework. That almost sounds like I’m giving myself too much credit, given that my micro-framework is only ~30 lines.

(*) BTW, despite the fact that Eric updates his blog somewhat infrequently (sounds familiar), it is well worth a read.

Piston: the good

Piston ships with quite a bit of good documentation and is allegedly used to power some of BitBucket’s services, which lends it credibility. Specifically, I liked the fact that it plugs directly into Django models. You simply write a short Handler for your model explaining which fields to expose and you’re mostly done.
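A handler ends up looking something like this (adapted from Piston’s documentation; Blogpost is a stand-in for your own model):

```python
from piston.handler import BaseHandler
from myapp.models import Blogpost   # hypothetical model

class BlogpostHandler(BaseHandler):
    allowed_methods = ('GET',)
    model = Blogpost
    fields = ('title', 'content', 'created')  # the model fields to expose

    def read(self, request, post_id=None):
        # BaseHandler provides a default read(); overriding is optional
        if post_id is not None:
            return Blogpost.objects.get(pk=post_id)
        return Blogpost.objects.all()
```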

It effectively wraps up your handler and does all the JSON/XML/YAML serialization for you while still giving you the ability to customize it. On top of this, it plugs in nicely with Django’s form validation and provides some other nice features, like throttling requests on a per-user basis.

Piston: the bad & the ugly

I started to look at Piston, but because I wasn’t using throttling or OAuth, wasn’t outputting anything other than JSON, and wasn’t tying my resources to models, I didn’t think Piston bought me anything. In reality, it wasn’t doing anything for me other than properly returning HttpResponseNotAllowed. My other issue is that this project required different outputs based on HTTP headers. For example, a GET on a certain URL would return JSON-formatted data (a read in the CRUD world) if a particular HTTP header was present and an HTML page presenting that data if it wasn’t. Piston instead selects emitters based on a request parameter (e.g. /path/resource/?format=JSON). Piston gets you up and running quickly, but it didn’t fit my use case.

Also, this is a little nitpicky, but when I see something like:
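```python
# Abridged from my memory of Piston's utils.py at the time; the exact
# dict may differ, but the point is that FORBIDDEN mapped to 401:
CODES = dict(ALL_OK=('OK', 200),
             CREATED=('Created', 201),
             DELETED=('', 204),
             FORBIDDEN=('Forbidden', 401),  # 401 is Unauthorized, not Forbidden
             NOT_FOUND=('Not Found', 404))
```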

I cringe a little bit considering that 403 is the correct status code for Forbidden. There’s a ticket for this already. Why did Piston define constants for returning various status codes anyway, when that functionality is already built into Django? Is rc.DELETED so much easier than HttpResponse(status=204)? Perhaps it’s a little clearer, and Django really should have HttpResponse subclasses for even the less common responses, but I think this definitely involves repeating yourself (and Django’s mantra is don’t repeat yourself).

The solution

I always wondered why Django doesn’t allow routing URLs based on the HTTP method; it seems like such a common use case. The developers discussed it back in 2006, but in the end it was decided that building only the simple case was best, as it yielded a relatively clean urls.py. Building off of that thread, the example in the Django book (search for “method_splitter”), and another blog post, I rolled a little framework to meet my needs instead of using something like Piston.

I found this to be much simpler and more easily extensible. The argument against this approach is that urls.py becomes bigger, but in a lot of ways I found it clearer. From reading the urlpatterns, I can quickly tell exactly what gets called in each case. In addition, routing differently based on HTTP headers, cookies, the request source or anything else becomes as simple as adding a parameter and a little code to service_dispatcher, sketched below.
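Here’s the gist of it (a reconstruction of the idea rather than the exact snippet; list_items and create_item are hypothetical views):

```python
# views.py -- dispatch to a different view based on request.method
from django.http import HttpResponseNotAllowed

def service_dispatcher(request, *args, **kwargs):
    # urls.py passes the per-method views in as extra keyword arguments
    handlers = dict((method, kwargs.pop(method))
                    for method in ('GET', 'POST', 'PUT', 'DELETE')
                    if method in kwargs)
    handler = handlers.get(request.method)
    if handler is None:
        return HttpResponseNotAllowed(handlers.keys())
    return handler(request, *args, **kwargs)

# urls.py (Django 1.x-era syntax)
from django.conf.urls.defaults import patterns, url
from myapp.views import service_dispatcher, list_items, create_item

urlpatterns = patterns('',
    url(r'^items/$', service_dispatcher,
        {'GET': list_items, 'POST': create_item}),
)
```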

In the end, it wasn’t that I didn’t like Piston; it’s just that I didn’t need it.

Updates March 2010 Edition

This post is mainly going to be an update on what I am thinking and what I’ve been working on the past few weeks.

Work

At the beginning of this year, I took a new position (same company) in a security group. Our primary focus is to ensure that the company is shipping secure, OSS-compliant, legally compliant code. My specific role in that is to develop tools (with Django) to help make sure that happens. This is an exceptionally interesting project and involves pulling in vast amounts of data (terabytes) from many sources (multiple VCSes, multiple databases) and presenting it in a comprehensive manner. This project and my work have led to some good problems:

Some of our databases are MSSQL databases. This is a problem since we’re a Linux shop. pyodbc works great for connecting to MSSQL from Linux, but unfortunately, there are some incompatibilities with django-pyodbc. In addition, the project doesn’t seem to be that widely used, so it isn’t supported or documented as well as it could be. We are considering SQLAlchemy/Elixir as well, but I’ve been able to patch up django-pyodbc to get it (mostly) working with Django trunk. I also have some concerns about the django-pyodbc project as a whole. I’m considering working on this project pretty heavily.

Also, as part of my work, a coworker and I detailed a security flaw we found in urllib2. It resulted in basic authentication credentials being sent to sites that did not request them (and weren’t running SSL).

Future of RPC4Django

I have been considering moving RPC4Django from my personal Subversion repository to Google Code or GitHub. I feel that there are a few advantages to this:

  • It would be easier for others to contribute and get involved.
  • A public bug tracker would let people raise issues instead of emailing me directly. That way, there are public archives and the information can be found by anyone interested in RPC4Django.
  • If I were hit by a bus, someone could easily take the project over.

I might make a mailing list as well. Are there any strong opinions on this?

Django Scripting and the Crontab

Sometimes, you need part of your Django application to run from the command line. These scripts can be caching jobs that run periodically to speed up performance, or data collection jobs that pull information from various sources into your application. Although James Bennett has a great article on writing standalone Django scripts, I just wanted to update it with the changes that have happened since 2007. I ran into this problem at about the same time on a work project and on a personal project.

The problem with standalone scripts

The basic problem is that you want your script to run in the context of your application. You want to access your databases in the normal way and you want to import modules by the same paths. In general, you can do all of this work yourself by making sure your PYTHONPATH is correct and DJANGO_SETTINGS_MODULE is set properly, but it is so much easier to just create a custom management command. This is especially convenient since it makes your script portable (Windows, Mac & Linux — crontab & scheduled tasks) as well as easily distributable with your application.

Custom management commands

A custom management command is a command that can be run from manage.py. Essentially, Django requires that you create the following type of structure under your application:
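```
myapp/
    __init__.py
    management/
        __init__.py
        commands/
            __init__.py
            mycommand.py
```

Both __init__.py files under management/ are required so that Python treats the directories as packages.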

Then, in mycommand.py you must subclass django.core.management.base.BaseCommand like so:
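```python
# myapp/management/commands/mycommand.py
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    help = 'Shown by "python manage.py help mycommand"'

    def handle(self, *args, **options):
        # Runs with your full Django environment: settings are loaded
        # and your models import normally.
        pass  # ... your script logic here ...
```

(The class must be named Command for Django to find it.)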

This will allow you to run the following (the command gets its name from mycommand.py):
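```
python manage.py mycommand
```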

or create a crontab entry as follows:
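```
# every 10 minutes; /path/to/project is a placeholder for the directory
# containing manage.py (see the conclusion below)
*/10 * * * * cd /path/to/project && python manage.py mycommand
```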

Conclusion

This provides the cleanest, most flexible way to build and support standalone scripts that can be used outside of the web server. It will run in exactly the same environment as your application and the same modules used in your application can be used here. The only problem I ran into was that custom management commands must be run from the same directory as manage.py.

Edit: I added thorough documentation on management commands to ticket #9170

Update: The 1.2 documentation contains the changes from my patch.

Good Documentation Makes a Good Project

Perhaps it is a platitude, but it’s true that good documentation makes a project successful. Recently, Jacob Kaplan-Moss of Django fame started a series of articles on documentation — perhaps one of the most underrated aspects of software. Although I may not agree with everything (technical documentation following MLA?! Just couldn’t get away from that lit degree, huh Jacob?), the series as a whole is truly readable and he puts words to some of my own vague thoughts on documentation. With all that in mind, I want to focus on what led to Django’s documentation and some concrete examples of successful and not-so-successful documentation.

The evolution of software documentation

Back in the dark ages, all documentation was done by hand. Considering that developers are pretty bad about documentation now, I don’t even want to imagine what it was like back then. When I first got on the programming scene, Javadoc was a pretty new development. Javadoc was great in that it allowed you to write documentation only once, at the same time you wrote the code. Then, you used your code files to generate pretty HTML. This was amazing for API reference documentation, and I still think that Java has the most well-documented standard library API of any programming language. After seeing that you could auto-generate documentation, a host of clones like POD and Doxygen followed. Unfortunately, too many people decided that having Javadoc meant that their project was documented (I’m looking at you, SNMP4J!).

The next stage in the evolution came, of all places, from PHP’s excellent documentation. The PHP folks started including formatted examples, suggestions and caveats with their API reference. They also started putting together guides about particular aspects of the language, such as security or working with Oracle. On top of this, they had the seemingly ingenious idea of allowing users to post comments on the documentation. At first, this worked well. Years ago, I posted a couple of steps I took to get PHP working with Oracle (which is still a pain to this day). However, over time, the comments got clogged with snippets of useless code, cries for help and other drivel. It’s no surprise that Django removed the ability to comment on its documentation.

Django embodies the current incarnation of documentation, and although I think it is the best-documented project I’ve worked with, I’m not convinced it is ideal. Django combines the PHP-style guides with tutorials. This works exceptionally well for getting new users off the ground quickly and for bringing users familiar with the project up to speed in a new area. However, I feel that Django lacks somewhat in the API reference area, and part of this has to do with the fact that they write all of the documentation by hand instead of generating it from docstrings. The method references are usually an afterthought (because they don’t happen at the same time the code is written) and don’t contain the level of detail that the PHP or Java documentation does. Without looking at the actual code, how else would you figure out that the markdown filter can accept extra parameters?
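For instance, something like the following works, but you’d only learn it from reading django/contrib/markup/templatetags/markup.py (the extension names here are illustrative):

```django
{% load markup %}
{{ post.body|markdown:"toc,safe" }}
```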

A success story

Despite the trash I just talked about Django (blasphemy!), it is a success and its documentation is superb. It gets you up and running very quickly and has detailed documentation on virtually every aspect of the project. I attempt to model both my open source projects’ and my work projects’ documentation after Django’s — imitation is definitely a form of flattery. Django has maintained its documentation by enforcing some rules and good practices. The Django project maintains all the reST-formatted documentation files in the code line and requires that patches include updates to the relevant documentation. This ensures that the docs stay up-to-date with the code — a problem lots of projects suffer from. Django uses Sphinx to generate its documentation periodically from the code line — I don’t know how often, but fairly often — and makes it available as the official Django documentation. Inaccuracies and problems are caught quickly.

What not to do

Early Linux suffered from some serious documentation fail. I was familiar with installing Windows and Mac System 8 and 9 from scratch and figured that I could install Linux. Perhaps I made the mistake of trying Slackware, but I can remember, even after a mostly successful install, having to compile all sorts of packages from scratch. It was a great hobby for a CS student back in 2001, but hardly a well-documented and easy-to-use project. The fact that the code is great and the system is stable doesn’t help if you can’t get the thing running (insert car analogy about a supercharged engine with no diagrams for putting it into an actual car).

In the office, we suffer from documentation issues because nobody wants to write documentation themselves and nobody can agree on a uniform way of doing it. We ended up with multiple wikis all over the place, none of which is complete. Some small teams have design documents sitting in source control or in eRoom. We have requirements sitting in various Word documents and designs in PowerPoint presentations. There’s not even a tutorial telling new employees where we store our bugs, requirements, designs, or wikis, or how new bugs should be filed, new requirements introduced, or wikis updated. In general, we have fragmented, tribal knowledge where nobody knows the whole story.

Making it better

At some point, somebody needs to lay down the law and start creating tutorials, walkthroughs and other documentation for a project. I am only one person in a huge division of an even larger corporation, but I already have a reputation for writing documentation. Django has its benevolent dictators declaring that all patches shall come with documentation. The Ubuntu community (and the Red Hat and Mandrake folks before them) has taken Linux from having an arcane install process to being as easy to install as Windows — and it really shows.