I have a bash script that is kicked off via cron. It checks whether any new file has arrived; if so, it decompresses the file with gunzip.

The problem is that if the file is very large (e.g. 5 GB or more), the script will not finish before cron launches a new instance. Now I have two instances trying to unzip the same file.

So, my solution is to touch .file.ext.lock and then try to unzip. When the unzip is done, I rm -f .file.ext.lock. That way, if another instance wakes up and finds the lock file, it will just echo that a lock file exists and exit.
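For reference, the scheme described above can be sketched as a small shell function. The file and lock names here are placeholders, not taken from the actual script:

```shell
# Sketch of the lock-file scheme from the question: create .file.lock
# before unzipping, remove it afterwards, and bail out if a lock
# already exists (meaning another instance is still working).
unzip_with_lock() {
    local file="$1"
    local lock=".${file}.lock"

    if [ -e "$lock" ]; then
        echo "lock file $lock exists; another instance is running" >&2
        return 1
    fi

    touch "$lock"
    gunzip "$file"
    rm -f "$lock"
}
```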

I was wondering, though, if this is the best way to do it. Do you see any flaws in this approach, and if so, how would you resolve them?

2006-10-11 17:56:58 · 2 answers · asked by thepinky 3 in Computers & Internet > Programming & Design

2 answers

It sounds like a good solution. The window for a race is small, since you create the lock file before starting the long-running unzip rather than after. Strictly speaking, the check-then-touch sequence is two steps, so two instances could in principle both see no lock; and Unix does offer advisory file locking (flock/fcntl), so "no file locking" isn't quite right. But for a cron-driven job with minutes between runs, what you've got is about as good as it needs to be. Nice work.

2006-10-11 18:03:22 · answer #1 · answered by arbeit 4 · 0 0

A very late answer for posterity: when starting from cron, the non-atomic locking is not an issue, so file-based locking is probably OK. The problem you will occasionally hit, though, is that if your script is killed for some reason, it leaves your lock file behind, requiring manual intervention. An improvement would be to echo your PID into the lock file; if you find a lock, check that the PID corresponds to a running process with the expected command line for the cron job. If it doesn't match, clear the lock and proceed.
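A hedged sketch of the PID idea in this answer, reusing the lock-file name from the question. kill -0 merely probes whether a process with that PID exists, without sending a signal; the command-line check the answer suggests (e.g. against /proc/PID/cmdline on Linux) is noted in a comment rather than implemented:

```shell
LOCK=".file.ext.lock"   # placeholder name from the question

# Write our own PID into the lock file; treat an existing lock as
# stale when no process with the recorded PID is still alive.
# (A stricter version would also compare the process's command line
# against the expected cron job before clearing the lock.)
take_lock_with_pid() {
    if [ -e "$LOCK" ]; then
        oldpid=$(cat "$LOCK")
        if kill -0 "$oldpid" 2>/dev/null; then
            echo "locked by running PID $oldpid" >&2
            return 1
        fi
        echo "clearing stale lock left by PID $oldpid" >&2
        rm -f "$LOCK"
    fi
    echo $$ > "$LOCK"
}
```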

2015-09-07 21:37:01 · answer #2 · answered by Andrew M 2 · 0 0

fedest.com, questions and answers