Saturday, June 24, 2017

scheduling - Prevent duplicate cron jobs running

I have scheduled a cron job to run every minute, but sometimes the script takes more than a minute to finish, and I don't want the runs to start "stacking up" on top of each other. This is a concurrency problem: the script executions need to be mutually exclusive.

To work around this, I made the script check for the existence of a particular file ("lockfile.txt"): it exits if the file exists, and touches (creates) it if it doesn't. But this is a pretty lousy semaphore! Is there a best practice that I should know about? Should I have written a daemon instead?
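For illustration, here is roughly what that check-then-touch scheme looks like as a shell sketch (the lockfile name is the one from the question; the path is assumed), with its two weaknesses noted inline:

#!/bin/sh
LOCK=/tmp/lockfile.txt

# Weakness 1: the check and the touch are two separate steps, so two runs
# starting at the same instant can both pass the check (a race condition).
if [ -e "$LOCK" ]; then
    exit 0              # another run appears to be in progress
fi
touch "$LOCK"

# ... do the real work here ...

# Weakness 2: if the script dies before reaching this line (crash, kill -9,
# reboot), the lockfile is left behind and every later run exits immediately.
rm -f "$LOCK"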


Answer

There are a couple of programs that automate this for you, taking away the annoyance and potential bugs of rolling it yourself, and avoiding the stale-lock problem by using flock behind the scenes (a stale lock is a real risk if you're just using touch: a crashed script never removes its lockfile, so every later run exits). I've used lockrun and lckdo in the past, but now there's flock(1) (in newer versions of util-linux), which is great. It's really easy to use:

* * * * * /usr/bin/flock -n /tmp/fcj.lockfile /usr/local/bin/frequent_cron_job
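Here -n tells flock to give up immediately if the lock is already held, so an overlapping run simply exits instead of queueing up behind the previous one. You can also take the lock inside the script itself, so it is protected no matter how it is invoked. A minimal sketch, assuming the same lockfile path and an arbitrary file descriptor number:

#!/bin/sh
# Open (and create if needed) the lockfile on file descriptor 9.
exec 9>/tmp/fcj.lockfile || exit 1

# -n: fail immediately if another instance already holds the lock.
if ! flock -n 9; then
    echo "another instance is still running; exiting" >&2
    exit 1
fi

# ... do the real work here ...
# The kernel releases the lock automatically when this process exits,
# even after a crash, so the lock can never go stale.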
