Related to my offsite backup with restic project, I recently wanted to choose a cloud provider as the backup target. That turned out to be no easy decision: besides the costs for storage and traffic, things get complicated when it comes to restores. Pricing ranges from about $10 to $1,300 for a single restore of 500 GB!
Although I mainly use AWS for my business projects, I am considering Google Cloud Storage for my private backup. The main reason is that the restore costs for AWS S3 Glacier are hard to predict and can vary enormously.
Some notes on the calculation
General
- To compare the different tiers it is important to know the average archive size. Restic, for example, uses chunks of about 5 MB; therefore AWS Glacier with expedited access could be very, very expensive, while bulk access might be cheap but very slow on restore. Glacier would make sense for larger archive sizes, but not for backup tools like restic (see the sketch after this list).
- There may be some upload fees (per number of requests), but they are not considered here. Upload traffic is practically free at all providers.
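To make the archive-size issue concrete, here is a minimal Python sketch of the request-fee math. The Glacier fees used are rough 2018-era assumptions for illustration only; check the current price lists:

# Estimate retrieval fees when a backup is stored as many small archives.
# All prices are illustrative assumptions, not guaranteed list prices.

def retrieval_cost(total_gb, chunk_mb, per_gb_fee, per_1000_requests_fee):
    requests = total_gb * 1024 / chunk_mb  # one retrieval request per archive
    return total_gb * per_gb_fee + requests / 1000 * per_1000_requests_fee

# 500 GB in ~5 MB restic chunks = ~102,400 retrieval requests.
# Assumed fees: expedited $0.03/GB + $10 per 1,000 requests,
#               bulk $0.0025/GB + $0.025 per 1,000 requests.
print(retrieval_cost(500, 5, 0.03, 10.0))     # expedited: ~$1,039
print(retrieval_cost(500, 5, 0.0025, 0.025))  # bulk:      ~$3.81

With 5 MB chunks the per-request fee alone dominates an expedited restore; with archives of, say, 1 GB the same restore would cost only a few dollars in request fees.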
AWS S3
- In scope is AWS S3 with the tiers Standard, IA (Infrequent Access), and Glacier
- Glacier offers retrieval options with different access times: Bulk (5-12 hours), Standard (3-5 hours), Expedited (1-5 minutes)
- IA has a minimum storage time of 30 days, Glacier has 90 days (normally no problem with backups; the sketch below shows the early-deletion math)
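As a quick illustration of the minimum storage time, here is a small Python sketch. The Glacier storage price is an assumption for illustration:

# Data deleted before the minimum storage time is billed as if it had been
# stored for the full period (price assumed, for illustration only).

def storage_cost(size_gb, stored_days, minimum_days, price_per_gb_month):
    days = max(stored_days, minimum_days)  # billed for at least the minimum
    return size_gb * price_per_gb_month * days / 30

# A 1 GB pack pruned by restic after 10 days on Glacier (90-day minimum,
# ~$0.004/GB/month assumed) is billed for the full 90 days:
print(storage_cost(1, 10, 90, 0.004))  # -> 0.012 instead of ~0.0013

For long-lived backup data this hardly matters, but frequent pruning of recent snapshots would pay this early-deletion penalty again and again.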
Backblaze B2
- Almost the “cheapest” cloud storage provider, but as they are based in the US, the latency from/to Germany might be high (not tested).
Google Cloud Storage
- In scope are the tiers Regional, Nearline, and Coldline
- Nearline has a minimum storage time of 30 days, Coldline has 90 days (normally no problem with backups)
- As with AWS S3 Glacier, there are extra fees for retrieving Nearline and Coldline data, but there are no “waiting times” on access! (See the sketch below for the per-operation fees.)
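Unlike Glacier’s expedited requests, the per-operation fees on GCS stay small even with restic’s many small archives. A rough Python sketch, with the class B (read) fee assumed at 2018-era Coldline pricing:

# GCS bills reads as "class B" operations per 10,000 calls.
# Fee assumed: ~$0.05 per 10,000 class B operations (illustrative only).

reads = 500 * 1024 / 5        # ~102,400 reads for a 500 GB restore in 5 MB chunks
print(reads / 10_000 * 0.05)  # -> ~$0.51 in operation fees

So on GCS the per-GB retrieval and egress fees dominate the bill, not the number of requests.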
Calculation
I’ve made a spreadsheet for the calculation. You can find an online version here (or embedded below), or download it as Excel.
All information is supplied without guarantee and for my own private purposes. There may be additional fees that are not included here. Use at your own risk! Let me know if something looks wrong.
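For readers who prefer code over spreadsheets, the core of the calculation boils down to a few lines of Python. All prices are rough 2018-era assumptions in $/GB; the spreadsheet remains the reference:

# Monthly storage cost plus the cost of one full restore, per provider/tier.
# Prices are assumptions for illustration: (storage $/GB/month,
# retrieval $/GB, egress $/GB); check the current price lists.

BACKUP_GB = 500

tiers = {
    "S3 Standard":  (0.023,  0.0,    0.09),
    "S3 IA":        (0.0125, 0.01,   0.09),
    "S3 Glacier":   (0.004,  0.0025, 0.09),  # bulk retrieval, request fees ignored
    "GCS Coldline": (0.007,  0.05,   0.12),
    "Backblaze B2": (0.005,  0.0,    0.01),
}

for name, (store, retrieve, egress) in tiers.items():
    monthly = BACKUP_GB * store
    restore = BACKUP_GB * (retrieve + egress)
    print(f"{name:12} storage ${monthly:6.2f}/month, one restore ${restore:6.2f}")

This reproduces the spread mentioned at the top: a B2 restore lands near the $10 low end, while Glacier with expedited access (see the request-fee sketch above) explodes into four digits.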
Update: some months of running a weekly restic backup…
Monthly Google Invoice:
Restic cronjob output:
Files:         318 new,     3 changed, 242632 unmodified
Dirs:            0 new,     2 changed,      0 unmodified
Added to the repo: 1.333 GiB
processed 242953 files, 411.892 GiB in 21:29
snapshot abfef780 saved

Applying Policy: keep the last 24 hourly, 7 daily, 4 weekly, 12 monthly, 3 yearly snapshots
snapshots for (host [enterprise], paths [/data/data0/data]):
keep 29 snapshots:
ID        Time                 Host        Tags  Reasons           Paths
-----------------------------------------------------------------------------------------
1f95712e  2018-01-29 02:01:15  enterprise        monthly snapshot  /data/data0/data
8e12732d  2018-02-26 02:01:15  enterprise        monthly snapshot  /data/data0/data
fbd49a81  2018-03-26 02:01:14  enterprise        monthly snapshot  /data/data0/data
cfb98595  2018-04-23 02:01:14  enterprise        monthly snapshot  /data/data0/data
e54a1ef7  2018-05-28 02:01:12  enterprise        monthly snapshot  /data/data0/data
e01d934b  2018-06-11 02:01:14  enterprise        hourly snapshot   /data/data0/data
bbf7e884  2018-06-16 17:32:40  enterprise        hourly snapshot   /data/data0/data
38430fb6  2018-06-18 02:01:01  enterprise        hourly snapshot   /data/data0/data
a7923811  2018-06-25 02:01:01  enterprise        hourly snapshot   /data/data0/data
                                                 monthly snapshot
47266d58  2018-07-02 02:01:01  enterprise        hourly snapshot   /data/data0/data
f7dee0c1  2018-07-09 02:01:02  enterprise        hourly snapshot   /data/data0/data
50eb4d73  2018-07-16 02:01:01  enterprise        hourly snapshot   /data/data0/data
b56838a2  2018-07-23 02:01:01  enterprise        hourly snapshot   /data/data0/data
69e6e24b  2018-07-30 02:01:02  enterprise        hourly snapshot   /data/data0/data
                                                 monthly snapshot
eb319f8e  2018-08-06 02:01:01  enterprise        hourly snapshot   /data/data0/data
4c617720  2018-08-13 02:01:01  enterprise        hourly snapshot   /data/data0/data
f2700244  2018-08-20 02:01:01  enterprise        hourly snapshot   /data/data0/data
c1831d23  2018-08-27 02:01:01  enterprise        hourly snapshot   /data/data0/data
                                                 monthly snapshot
8dd873c6  2018-09-03 02:01:01  enterprise        hourly snapshot   /data/data0/data
e0590b89  2018-09-10 02:01:01  enterprise        hourly snapshot   /data/data0/data
c2c780d1  2018-09-17 02:01:01  enterprise        hourly snapshot   /data/data0/data
a3c2d6f4  2018-09-24 02:01:01  enterprise        hourly snapshot   /data/data0/data
                                                 monthly snapshot
be9c14ad  2018-10-01 02:01:01  enterprise        hourly snapshot   /data/data0/data
                                                 daily snapshot
a9af7981  2018-10-08 02:01:01  enterprise        hourly snapshot   /data/data0/data
                                                 daily snapshot
074e76a4  2018-10-15 02:01:01  enterprise        hourly snapshot   /data/data0/data
                                                 daily snapshot
b56e1785  2018-10-22 02:01:01  enterprise        hourly snapshot   /data/data0/data
                                                 daily snapshot
                                                 weekly snapshot
de62c698  2018-10-29 02:01:01  enterprise        hourly snapshot   /data/data0/data
                                                 daily snapshot
                                                 weekly snapshot
                                                 monthly snapshot
8f6d26d5  2018-11-05 02:01:01  enterprise        hourly snapshot   /data/data0/data
                                                 daily snapshot
                                                 weekly snapshot
abfef780  2018-11-12 02:01:01  enterprise        hourly snapshot   /data/data0/data
                                                 daily snapshot
                                                 weekly snapshot
                                                 monthly snapshot
                                                 yearly snapshot
-----------------------------------------------------------------------------------------
29 snapshots

remove 1 snapshots:
ID        Time                 Host        Tags  Paths
-----------------------------------------------------------------------
6151d891  2018-06-04 02:01:32  enterprise        /data/data0/data
-----------------------------------------------------------------------
1 snapshots

1 snapshots have been removed, running prune
counting files in repo
building new index for repo
[43:15] 100.00%  86295 / 86295 packs
repository contains 86295 packs (512873 blobs) with 410.911 GiB
processed 512873 blobs: 0 duplicate blobs, 0B duplicate
load all snapshots
find data that is still in use for 29 snapshots
[0:12] 100.00%  29 / 29 snapshots
found 512860 of 512873 data blobs still in use, removing 13 blobs
will remove 0 invalid files
will delete 1 packs and rewrite 2 packs, this frees 2.688 MiB
[0:04] 100.00%  2 / 2 packs rewritten
counting files in repo
[33:14] 100.00%  86294 / 86294 packs
finding old index files
saved new indexes as [d36b4c0b bb72972f f6e803e2 e257d59d f2ae6af7 cbcf8f7a cd97a693 355bb778 5bf44fc7 87a01dc8 1b332542 e4dfb758 a592039d 1264aebb 169914b8 25b30161 8dc9eb92 51e7084e 22ce0c6a 7f5f498a c4169ede e98278a5 a7955efd 212c1e73 51f77666 b2682981 ada18575 244bb4ae 8ce840fd]
remove 33 old index files
[0:00] 100.00%  3 / 3 packs deleted
done
2 replies on “Cloud storage cost calculation (backup/restore)”
Thanks for the overview.
What is your experience with Class A/B operations caused by rsync / restic? Do you have data on how many files per GB would destroy the Coldline pricing advantage, assuming a daily run of rsync / restic?
With my regular (weekly) restic backup I can’t see the invoice really increasing. But to be honest, there are currently not many purge operations. Will try to share details here later…