Try dmraid; it was designed to take over the various on-disk formats of hardware RAID cards.
Yep, it’s EU. File transfer shouldn’t be bad if your files are large, though it’s best to test it first; it may depend on your ISP’s peering and your preferred transfer protocols/tooling. As for whether it’s reputable for your purpose, you’ll have to do your own research. Also, remember that the offer I mentioned would only be equivalent in durability to a single-box RAID5 for your purposes, so not exactly equivalent to Google’s.
There’s Jottacloud with unlimited storage for 10 EUR/month, but they gradually slow down after the first 5 TB, so 30 TB might be a bit too much. There’s Hetzner with their dedicated 4×10 TB machines for ~52 EUR/month; you could do RAID5 and have a somewhat redundant 30 TB, at the cost of self-managing a dedicated machine. There are several providers doing regular S3 (which you can take advantage of with tools like rclone) with decent redundancy for 4-5 USD/TB + egress; rough math below. For high-value data you should probably be spending more than 100 USD/month for 30 TB in the cloud, or invest in actual hardware. Do you need hot access to this dataset, or is a cold-storage archive enough?
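Back-of-the-envelope monthly cost for ~30 TB, using the prices above (a quick Python sketch; egress, VAT and currency differences ignored, and the S3 rate is assumed at the 5 USD/TB upper end):

    # Rough monthly cost comparison for ~30 TB, using the figures quoted above.
    DATASET_TB = 30

    options = {
        # flat fee, but throttled after the first 5 TB
        "Jottacloud (unlimited, throttled)": 10.0,
        # 4 x 10 TB dedicated box; RAID5 leaves ~30 TB usable
        "Hetzner dedicated + RAID5": 52.0,
        # generic S3-compatible object storage, assumed at ~5 USD/TB
        "S3-compatible object storage": 5.0 * DATASET_TB,
    }

    for name, monthly in sorted(options.items(), key=lambda kv: kv[1]):
        print(f"{name:35} ~{monthly:.0f}/month")

That works out to roughly 10, 52 and 150 per month respectively, before egress, which is consistent with the 100+ USD/month figure for properly redundant cloud storage at that size.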
We found that flakiness of e2e tests is usually caused by using libraries, frameworks and devops tools that weren’t designed to be integrated into e2e tests. So we try to get rid of them, or at least wrap them with devops magic. This requires a skilled devops team and buy-in from management.
At some point we were also addressing the issue with dedicated human review of e2e failures; it’s easy to train a junior QA engineer to quickly retry most false positives.
But we would never give up on e2e.
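To make the “quickly retried” part concrete, here’s a minimal sketch of an automated retry wrapper for flaky e2e tests (plain Python, illustrative only; the names, retry count and delay are made up and this isn’t our actual tooling):

    import functools
    import logging
    import time

    log = logging.getLogger("e2e.retry")

    def retry_flaky(attempts=3, delay_s=5):
        """Re-run a flaky e2e test a few times before calling it a real failure.

        Every retry is logged so a human can still review which tests are
        flaky and why, instead of the noise silently disappearing.
        """
        def decorator(test_fn):
            @functools.wraps(test_fn)
            def wrapper(*args, **kwargs):
                last_exc = None
                for attempt in range(1, attempts + 1):
                    try:
                        return test_fn(*args, **kwargs)
                    except Exception as exc:
                        last_exc = exc
                        log.warning("%s failed on attempt %d/%d: %s",
                                    test_fn.__name__, attempt, attempts, exc)
                        if attempt < attempts:
                            time.sleep(delay_s)
                raise last_exc  # still failing after retries: treat as real
            return wrapper
        return decorator

    @retry_flaky(attempts=3)
    def test_checkout_flow():
        ...  # the actual e2e steps (browser automation, API calls) go here

The logging is the important part: automated retries shouldn’t make the flakiness invisible to the humans reviewing it.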
So far I’ve been following recommendations from this person: https://old.reddit.com/r/NewMaxx/comments/16xhbi5/ssd_guides_resources_ssd_help_post_your_questions/