Automatic File Deletion in Amazon S3 Revisited

Back in October I wrote about automating my Apache/MySQL backups to Amazon S3. I then spent considerable time working out automatic file deletion in S3, a slightly complicated process, but one that worked well once I had it configured.

This week Amazon rolled out Object Expiration for S3, which lets you automatically expire files in a bucket based on name and time criteria. Amazon has an excellent guide to configuring Object Expiration via the AWS Console.

To test this I set up one of my buckets with a 5-day expiration rule for any files named ‘backup’ (my existing scripts all maintain 7 days of backups).

So much easier than Python and Boto.
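For comparison, here is a rough sketch of what the same rule looks like when created from code instead of the console. This uses boto3 (the current AWS SDK for Python) and a placeholder bucket name, not my real bucket:

# Sketch: a 5-day expiration rule on objects whose keys start with 'backup'.
# 'my-backup-bucket' is a placeholder bucket name.
import boto3

s3 = boto3.client('s3')
s3.put_bucket_lifecycle_configuration(
    Bucket='my-backup-bucket',
    LifecycleConfiguration={
        'Rules': [{
            'ID': 'expire-backups',
            'Filter': {'Prefix': 'backup'},   # matches keys beginning with 'backup'
            'Status': 'Enabled',
            'Expiration': {'Days': 5},        # delete matching objects 5 days after creation
        }]
    },
)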


When I checked this morning, there were 5 backups remaining.

Still easier than Boto and Python.
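If you prefer to double-check from code rather than the console, a quick listing of the bucket shows what is left. Again a sketch with boto3 and the same placeholder bucket name:

# Sketch: count the remaining 'backup' objects in the bucket.
import boto3

s3 = boto3.client('s3')
resp = s3.list_objects_v2(Bucket='my-backup-bucket', Prefix='backup')
for obj in resp.get('Contents', []):
    print(obj['Key'], obj['LastModified'])
print('Backups remaining:', resp.get('KeyCount', 0))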


You can verify which rule applies to an object by selecting the object and examining its properties. One word of caution: Object Expiration rules apply to the whole bucket, so even if you have folders within a bucket the rule is global. Make sure you understand which objects you are expiring.
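Because the rules are bucket-wide, it is worth reviewing exactly which prefixes each rule covers. Here is a sketch of reading the rules back with boto3 (same placeholder bucket name):

# Sketch: list every lifecycle rule on the bucket and the prefix it covers.
import boto3

s3 = boto3.client('s3')
config = s3.get_bucket_lifecycle_configuration(Bucket='my-backup-bucket')
for rule in config['Rules']:
    prefix = rule.get('Filter', {}).get('Prefix', '')
    print(rule['ID'], repr(prefix), rule['Status'], rule.get('Expiration'))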

Why expire objects in S3? In S3 (and all of the AWS services) you pay for what you use; by managing the number of files (in this case backups) stored at any given time, my costs are kept to a minimum. For December so far my S3 charges are $0.17 (yes, that is 17 cents to store my backups for 3 websites and a number of MySQL databases).

Links

Amazon S3 announces Object Expiration

Amazon S3 Developer Guide: Object Expiration

Managing Lifecycle Configuration
