Saturday, September 24, 2016

linux - bash script returns "out of memory" in cron, but not in shell

I'm running a nightly bash script to sync a remote folder (source) with a local folder (target). I've tested this rsync-based script and it works fine in a root shell. It takes a while, since there are hundreds of gigabytes to copy, but it works.



Once I run it from crontab, though, my server runs out of memory.



My server has 8 GB of RAM and 4 GB of swap, and as I said, the script never goes OOM when run manually from a shell. It's a default CentOS 5.5 installation. I could split the load and sync each second-level directory in a find/for loop, but I'd like to keep it simple and sync only the top-level directories.
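For reference, that per-directory split would look roughly like the sketch below. It is only an illustration: the loop over /net/hostname/source/*/ and the per-directory log appending are assumptions, not the script I actually run.

#!/bin/bash

BACKUP_PATH="/root/scripts/backup"

# one rsync per top-level source directory instead of a single big run
for dir in /net/hostname/source/*/; do
    name=$(basename "$dir")
    rsync -av --delete "$dir" "/export/target/$name/" \
        >> "$BACKUP_PATH/backup_results_ok" \
        2>> "$BACKUP_PATH/backup_results_error"
done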



I cannot run many tests, since this server hosts websites and other services and I cannot afford to hang it just for testing purposes. Is there a setting that would allow cron to finish this job normally?



#!/bin/bash

BACKUP_PATH="/root/scripts/backup"

rsync -av --delete /net/hostname/source/ /export/target/ > "$BACKUP_PATH/backup_results_ok" 2> "$BACKUP_PATH/backup_results_error"
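The crontab entry itself is not shown above; a typical nightly invocation from root's crontab would look something like this (the script path and the time are illustrative, not the exact entry):

# root's crontab: run the backup script every night (example schedule/path)
0 2 * * * /root/scripts/backup/backup.sh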


Edit: the cron configuration is the default, as is /etc/security/limits.conf, which is entirely commented out.
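One low-risk thing I can do is capture the limits and environment that cron actually gives the job, by temporarily adding a couple of lines near the top of the script; the output file name below is just an example:

# temporary diagnostics: record the limits and environment the cron job sees
ulimit -a > "$BACKUP_PATH/cron_limits.txt"
env >> "$BACKUP_PATH/cron_limits.txt"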
