The backup script currently contains redundant steps that consume a large amount of disk space, and it is also missing some data:
- The danger database is not being backed up
- The click database is not being backed up
- A copy of wiki uploads is being tarballed and added to each auto-generated backup directory
- After backups are generated, the entire backup directory is tarballed again, adding roughly 40 GB of disk usage each time the backup script runs (every 6 hours)
Since the backup script runs every 6 hours, baseline disk usage is 20 GB, and the disk is 100 GB in total, each run adds roughly 40 GB; the remaining 80 GB is consumed in two runs, so the disk fills up completely after about 12 hours.
The backup script should simply send files to S3 without tarballing them. This would make the script run much faster and leave far more room to store database backups.
The local database backups should be automatically wiped at some interval, perhaps weekly or monthly. The exact interval matters little, since the backups are also being sent to S3.
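A minimal sketch of the two changes proposed above, assuming the AWS CLI is available; the bucket name, directory path, and retention window are placeholders, not the real values:

```shell
# Sketch only: bucket, paths, and retention are assumptions.
backup_sync() {
    # Upload backup files as-is; `aws s3 sync` only transfers new or
    # changed files, so no tarball step is needed.
    aws s3 sync "$1" "s3://example-wiki-backups"
}

prune_old_backups() {
    # $1 = backup dir, $2 = retention in days. S3 holds the long-term
    # copies, so local files older than the window can simply be removed.
    find "$1" -mindepth 1 -mtime +"$2" -delete
}
```

With this shape, `backup_sync /var/backups` replaces both tarball steps, and a cron entry can call `prune_old_backups /var/backups 7` (or 30) to enforce whichever retention interval is chosen.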