Created Friday 30 December 2022 last updated Wednesday 8 February 2023


Amazon Web Services (AWS) has great tools. I use its S3 buckets (online folders) to back up data as my "off-site" backup solution.

I use S3Browser as a visual tool to look through my folders (set to read-only access), and rclone (open-source command-line sync software) to send encrypted documents / pictures to the buckets.
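As a rough sketch, the sync step looks something like this. The remote name "s3crypt" and the paths are illustrative, and it assumes an rclone "crypt" remote layered over an S3 remote so files are encrypted before they leave the machine:

```shell
# Illustrative: "s3crypt" is an assumed rclone crypt remote wrapping S3.
# --dry-run shows what would transfer without uploading anything;
# drop it once the output looks right.
rclone sync ~/Pictures s3crypt:pictures \
    --s3-storage-class DEEP_ARCHIVE \
    --dry-run
```

The crypt layer means file names and contents are unreadable from the AWS side; only rclone with the configured password can decrypt them.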

How does it work?

Data are uploaded as objects through an API or the AWS web interface. There are different storage classes to choose from:

The default storage class is "Standard" and it costs the most. It's designed for hot storage: files you need instantly and are working with. My use case is backup, so I opt for "Glacier Deep Archive", which is the cheapest to store but slower and more expensive to retrieve. Think of "Deep Archive" for long-lived data (stored more than 180 days) that may be accessed once or twice a year, if ever. If you do not need redundancy across AWS data centres, "One Zone-IA" (Infrequent Access) is useful: data are stored in one data centre only. More on storage classes.
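The class is chosen at upload time. With the AWS CLI, for example (the bucket name and file here are hypothetical):

```shell
# Hypothetical bucket; assumes the AWS CLI is installed and configured.
# --storage-class puts the object straight into Glacier Deep Archive
# instead of the default Standard class:
aws s3 cp archive-2023.tar.gz s3://my-backup-bucket/archives/ \
    --storage-class DEEP_ARCHIVE
```

Objects can also be moved between classes later with lifecycle rules, but uploading directly into the right class avoids paying Standard rates in the meantime.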


Pricing can be complex and varies between storage classes, but in short: Glacier classes are cheaper per GB per month but cost more to restore, while Standard costs more to store and less to read/download. There are also charges for listing buckets and putting data into them, so costs vary. I store around 310 GB, most of it static (I do not add much per month), costing me between 0.87 and 1.10 GBP per month. This increases when I add a lot of data, such as an annual archive. "Glacier Deep Archive" costs $0.00099 per GB per month to store and $0.02 per GB to retrieve, whereas "Standard" costs $0.023 per GB per month to store and $0.00 to retrieve.

Backlinks: Computing:Backup Journal:2023:01:03 Journal:2023:09:23