Jens Krämer

Running Rufus Scheduler in a Unicorn Rails app

devops, unicorn, ruby, rails

The rufus-scheduler is a neat Ruby gem for scheduling recurring or one-time jobs. To quote the docs,

It understands running a job AT a certain time, IN a certain time, EVERY x time or simply via a CRON statement.

It also provides fine-grained control over which jobs are allowed to run in parallel and which are not.
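A quick sketch of those four scheduling styles, using the rufus-scheduler 3.x API (times and job bodies are made up; the :overlap option shown is one way to keep runs of the same job from piling up):

```ruby
require 'rufus-scheduler'

scheduler = Rufus::Scheduler.new

# AT - run once, at a given point in time
scheduler.at '2030-12-24 12:00:00' do
  puts 'merry christmas'
end

# IN - run once, after a given delay
scheduler.in '20m' do
  puts 'twenty minutes later'
end

# EVERY - run repeatedly; :overlap => false prevents parallel
# runs of the same job when one run takes longer than the interval
scheduler.every '3h', :overlap => false do
  puts 'three hours have passed'
end

# CRON - run according to a cron statement
scheduler.cron '5 0 * * *' do
  puts 'five past midnight, every day'
end

scheduler.join # only needed when running as a standalone script
```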

I’ve been running it inside Passenger Rails apps for a while now, using an initialization script based on one I found in the rufus-scheduler GitHub issues.

The tricky part is that in a multi-process server application you only want one instance of the scheduler to be running, so you have to take care that only one of your processes actually launches the scheduling thread. Passenger conveniently provides :starting_worker_process and :stopping_worker_process hooks which the script uses to write / remove a pid file that serves as a lock, ensuring only one of the Apache/Rails processes runs the scheduler. Unicorn provides at least an :after_fork hook, so let’s see what we can do with that.

Here’s the scheduler initialization script that I came up with for my recent Unicorn setup:
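It boils down to something like the following sketch - the pid file path, the Scheduler class and method names, and the example job are placeholders; the real script defines your actual jobs:

```ruby
# lib/scheduler.rb
#
# Ensures only one process in a multi-process setup runs the
# rufus-scheduler thread, using a pid file as lock.
require 'rufus-scheduler'

class Scheduler
  PID_FILE = Rails.root.join('tmp', 'pids', 'scheduler.pid').to_s

  class << self
    # Try to grab the lock; the winner starts the scheduler thread
    # and sets up the jobs. Returns true if we won.
    def start_unless_running
      return false unless acquire_lock
      at_exit { release_lock }

      scheduler = Rufus::Scheduler.new

      # example job - put your real jobs here
      scheduler.every '1h' do
        Rails.logger.info "scheduler alive in process #{Process.pid}"
      end

      true
    end

    private

    # The pid file acts as the lock. A stale file left behind by a
    # crashed process is detected and removed.
    def acquire_lock
      if File.exist?(PID_FILE)
        other = File.read(PID_FILE).to_i
        return false if alive?(other)
        File.delete(PID_FILE) # stale lock, previous owner is gone
      end
      File.write(PID_FILE, Process.pid.to_s)
      true
    end

    # remove the pid file, but only if we are the one holding the lock
    def release_lock
      File.delete(PID_FILE) if File.exist?(PID_FILE) &&
                               File.read(PID_FILE).to_i == Process.pid
    end

    # signal 0 doesn't touch the process, it just checks its existence
    def alive?(pid)
      Process.kill(0, pid)
      true
    rescue Errno::ESRCH
      false
    end
  end
end
```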

This script handles the lock file / pid file bookkeeping and initializes the actual jobs. It is not really Unicorn-specific and might also be used in an Apache/mod_rails setup. Place it in lib/scheduler.rb or wherever you see fit. The conditional scheduler initialization is run from config/unicorn.rb in an :after_fork hook like this:
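Assuming preload_app true (so the Rails environment is available in the hook) and the Scheduler class sketched above, something along these lines does the trick:

```ruby
# config/unicorn.rb (excerpt)

after_fork do |server, worker|
  # the usual after_fork housekeeping (reconnecting ActiveRecord
  # etc.) goes here, then each worker races for the scheduler
  # lock - exactly one of them wins and starts the scheduler thread
  require Rails.root.join('lib', 'scheduler').to_s
  Scheduler.start_unless_running
end
```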

Fire up Unicorn and you should see it working.

Handling zero-downtime deployments

Now comes the tricky part. A major feature of Unicorn is its support for zero-downtime deployments. It does this by first spawning a bunch of new workers that start handling new requests, and then telling the old workers to finish their current requests and exit. So the old workers (one of which is running the scheduler) will still be around while the new workers come up - which effectively prevents any of the new workers from spawning a new scheduler thread. Once the old workers are gone, we are left without a running scheduler. It would be started up again if we did the whole ‘respawn new workers, let the old ones die’ dance a second time, but that would be quite ugly.

Turns out there’s a better way. Unicorn allows you to change the number of running worker processes using signals: the TTIN signal causes the Unicorn master process to spawn another worker, TTOU does the opposite. We will make use of TTIN to spawn another worker process after the old processes are all gone, making it the one that will finally launch the scheduler thread.
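Sent by hand, that looks like this (the pid file location is whatever you configured in config/unicorn.rb):

```sh
# spawn one additional worker ...
kill -TTIN `cat /var/run/unicorn.pid`

# ... or shut one down
kill -TTOU `cat /var/run/unicorn.pid`
```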

So all you have to do is decrement the number of worker processes in config/unicorn.rb by one, and spawn that additional worker after all the old workers are gone by sending TTIN to Unicorn’s master process - together that adds up to the same total number of workers as before. Sounds complicated but is actually very easy: just add sig TTIN to the end of the upgrade command in Unicorn’s init script:
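The stock init.sh example that ships with Unicorn already defines a sig helper that sends a signal to the master pid (and oldsig for the old master). With that, the upgrade action ends up looking roughly like this - sketched from Unicorn’s example script, your init script may differ in the details:

```sh
upgrade)
        if sig USR2 && sleep 2 && sig 0 && oldsig QUIT
        then
                n=$TIMEOUT
                while test -s $old_pid && test $n -ge 0
                do
                        printf '.' && sleep 1 && n=$(( $n - 1 ))
                done
                echo

                if test $n -lt 0 && test -s $old_pid
                then
                        echo >&2 "$old_pid still exists after $TIMEOUT seconds"
                        exit 1
                fi
                # the old workers are gone now - spawn one more worker,
                # which will pick up the scheduler lock
                sig TTIN
                exit 0
        fi
        echo >&2 "Couldn't upgrade, starting '$CMD' instead"
        $CMD
        ;;
```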