A couple of months ago, I posted about using asynchronous jobs with RabbitMQ and Celery. This is a follow-up with some lessons I learned the hard way.
Celery settings for good performance
Do not run millions of jobs with DEBUG = True. You will run out of memory, even if you have 48GB of it. (With DEBUG = True, Django keeps a record of every database query in memory, so long-running workers grow without bound.) On top of that, you might want to consider the celeryd option --maxtasksperchild.
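For Celery 2.x, a minimal settings sketch covering both points might look like this (the worker-recycling value is illustrative, not a recommendation):

```python
# settings.py -- sketch of the settings discussed above
DEBUG = False  # never True for long-running workers; Django logs every query in memory

# Recycle each worker process after it has run this many tasks, so any
# memory it leaked is reclaimed. Equivalent to celeryd --maxtasksperchild.
CELERYD_MAX_TASKS_PER_CHILD = 100  # illustrative value
```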
Be extra careful with CELERY_SEND_TASK_ERROR_EMAILS = True. I sent 9,000 emails to myself in a couple of minutes, and my phone, which syncs my email, really didn't like it. I'm running with CELERY_STORE_ERRORS_EVEN_IF_IGNORED = True instead, and I'm looking to get a dashboard view of the errors with django-sentry. I think I'm almost there.
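For reference, the two settings together look something like this (the ADMINS address is a placeholder):

```python
# settings.py -- error-handling sketch; the address is a placeholder
CELERY_SEND_TASK_ERROR_EMAILS = False  # True emails ADMINS on every task failure
CELERY_STORE_ERRORS_EVEN_IF_IGNORED = True  # keep tracebacks in the result backend instead
ADMINS = (("Ops", "ops@example.com"),)  # recipients if you do enable error emails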
Persistence & disk space
RabbitMQ stores messages intelligently so you don't have to keep track of them, and it's very good at this. However, problems arise when you're queuing tasks faster than you're processing them. Check queue depths with rabbitmqctl, which ships with RabbitMQ. If you see something like this:
```
% /usr/sbin/rabbitmqctl list_queues
Listing queues ...
celery      9958124
celeryevent 6841
...done.
```
there are probably going to be issues. Ten million messages have to be stored somewhere. By default on CentOS, they're stored in /var. RabbitMQ really doesn't like running out of disk space to write persistent messages to, so be careful.
The new persistence engine in RabbitMQ 2.x handles this much better than before. In 1.x, the persistence log has to copy itself over every so often, and copying multi-gigabyte files all the time grinds the queue nearly to a halt, which compounds the problem of not processing tasks fast enough. On top of this, RabbitMQ writes a lot of logs. That's a good thing, but it can backfire when disk space runs out.
Task sets
Celery’s task sets work like magic. Instead of this:
```python
from tasks import process_item

for item in items:
    process_item.delay(item)
```
Use this:
```python
from celery.task.sets import TaskSet
from tasks import process_item

job = TaskSet(tasks=[process_item.subtask((item,)) for item in items])
job.apply_async()
```
Note: the first parameter to subtask is a tuple of arguments to process_item.
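If a result backend is configured, apply_async() on a TaskSet hands back a TaskSetResult you can poll or join on. A minimal sketch:

```python
# Sketch: collecting the results of the task set above
# (assumes a result backend is configured).
result = job.apply_async()

print(result.completed_count())  # how many subtasks have finished so far
print(result.join())             # block until all finish; returns their return values
```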
General tips
- If you can make your tasks idempotent, meaning they can be run with the same parameters multiple times without duplicating side effects, your life will be a lot easier. Django's get_or_create works wonders here (see the sketch after this list).
- Try to break tasks into smaller subtasks. Instead of one 45-minute task, break it into 2,000 tasks that take a second or two each.
- If you are clever with your logging, debugging will be a lot easier. This is always true, but it becomes much more apparent with Celery's concurrency. I have Celery reporting to django-sentry via logging.
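As a rough sketch of the get_or_create pattern (the Item model, its fields, and its process() method are all hypothetical):

```python
from celery.task import task
from myapp.models import Item  # hypothetical app and model

@task
def process_item(item_id):
    # A retried or re-delivered run finds the existing row instead of
    # creating a duplicate, so the expensive work only happens once.
    item, created = Item.objects.get_or_create(external_id=item_id)
    if created:
        item.process()  # hypothetical method doing the real work
```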