Do NOT trust cloud storage. Doing so is extremely foolish; I have never found a cloud company that will take legally-enforceable financial responsibility for their failure to protect the data you "entrust" to them -- whether the failure is one of improper disclosure or loss.
As a guy who used to run an ISP and has been intimately involved in computer hardware since the early 1980s (think 5MB Winchester drives -- MB, not GB -- along with the old DEC RL-series cartridge disks!), I am somewhat of an anal SOB when it comes to this subject.
First, all disks eventually fail. Every last one. In addition, there are occasionally other problems beyond your control, such as a fire at your location. You need to think about this when it comes to your data and figure out how to protect against it. That means more than one copy, always, with at least one stored off-site.
In addition, whatever you use for backup, you need to TEST those backups and make sure they're readable. I have been called in more than once by a company to help recover ("cold", never having been called before) and discovered that their "backups" cannot be restored AND ARE WORTHLESS. This usually happens two or three or five years after the program is put in place and EVERYTHING is, as a consequence, GONE. In a non-trivial percentage of these cases the result is literal business failure.
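If you want to automate that check, the idea is simple: restore into a scratch directory and compare checksums against the live tree. Here's a minimal sketch -- the paths and the assumption that you've already restored into a second directory are mine, so plug in whatever your backup tool actually does:

```python
#!/usr/bin/env python3
# Sketch: verify a restored backup by comparing file checksums against the
# originals. Restoring to a scratch directory first is assumed -- run your
# backup tool's restore step, then point this at both trees.
import hashlib
import os
import sys

def sha256(path, bufsize=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

def verify(source_root, restored_root):
    """Walk the live tree and confirm every file came back identical."""
    bad = 0
    for dirpath, _, filenames in os.walk(source_root):
        for name in filenames:
            src = os.path.join(dirpath, name)
            rel = os.path.relpath(src, source_root)
            dst = os.path.join(restored_root, rel)
            if not os.path.exists(dst):
                print(f"MISSING from restore: {rel}")
                bad += 1
            elif sha256(src) != sha256(dst):
                print(f"CORRUPT in restore:  {rel}")
                bad += 1
    return bad

if __name__ == "__main__":
    failures = verify(sys.argv[1], sys.argv[2])
    sys.exit(1 if failures else 0)
```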
With that said, my RECENT (last few years) experience is that Seagate, Hitachi and WD drives are all about equally likely to fail. They fail most often right out of the box, within the first month or two, and then they start to degrade around the third to fifth year of use.
In addition, the hotter they run the more likely they are to puke. Therefore, heat management is important. If a drive is uncomfortably warm to the touch when running, it's too damn hot!
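If you'd rather have a number than the touch test, smartmontools will tell you. A rough sketch (assumes smartctl is installed and the drive exposes a temperature attribute; the column layout is the usual ATA SMART table and the attribute name varies by vendor):

```python
#!/usr/bin/env python3
# Sketch: pull drive temperature out of the SMART attribute table via smartctl.
# Assumes smartmontools is installed and this runs with enough privilege to
# read the drive; attribute naming varies by vendor, so the match is loose.
import re
import subprocess
import sys

def drive_temp(device):
    """Return the raw temperature attribute, if the drive exposes one."""
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        fields = line.split()
        if len(fields) >= 10 and "Temperature" in fields[1]:
            # Column 10 is RAW_VALUE in smartctl's ATA attribute table;
            # keep only the leading number (some drives append Min/Max info).
            m = re.match(r"\d+", fields[9])
            if m:
                return int(m.group(0))
    return None

if __name__ == "__main__":
    dev = sys.argv[1] if len(sys.argv) > 1 else "/dev/sda"
    t = drive_temp(dev)
    if t is None:
        print(f"{dev}: no temperature attribute found")
    elif t > 45:
        print(f"{dev}: {t}C -- too damn hot, fix your airflow")
    else:
        print(f"{dev}: {t}C -- OK")
```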
2TB drives are cheap these days. On Black Friday I managed to grab a pair of 3TB drives for $100 EACH. That's ridiculously cheap. I have an automated backup system here at my location that runs nightly, weekly and monthly backups automatically off the network and allows me to recover any failure within the last year with that granularity (daily within the week, weekly within the month, monthly within the year). I rotate the secondary drive out to a safe deposit box at the bank every couple of weeks. Thus if the building burns I can lose up to 2 weeks of data, but no more. My "home" network consists of nearly 2TB of "volatile" (subject to change) and ~3TB of "archival" (read-mostly) stored data that MUST NOT be lost.
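The rotation logic behind that granularity is nothing fancy; here's a sketch of the keep/expire rule (the file naming and flat directory layout are made up for illustration):

```python
#!/usr/bin/env python3
# Sketch of the daily/weekly/monthly retention rule described above:
# keep dailies for a week, weeklies for a month, monthlies for a year.
# The backup file names and the flat directory layout are assumptions.
import datetime
import os
import re

KEEP = {
    "daily":   datetime.timedelta(days=7),
    "weekly":  datetime.timedelta(days=31),
    "monthly": datetime.timedelta(days=366),
}

def expired(filename, today):
    """Backups are assumed to be named like 'daily-2012-11-23.tar.gz'."""
    m = re.match(r"(daily|weekly|monthly)-(\d{4})-(\d{2})-(\d{2})", filename)
    if not m:
        return False                  # don't touch anything we don't recognize
    kind = m.group(1)
    made = datetime.date(int(m.group(2)), int(m.group(3)), int(m.group(4)))
    return (today - made) > KEEP[kind]

def prune(backup_dir):
    today = datetime.date.today()
    for name in sorted(os.listdir(backup_dir)):
        if expired(name, today):
            print("expiring", name)
            os.remove(os.path.join(backup_dir, name))

if __name__ == "__main__":
    prune("/backups/nightly")         # hypothetical path
```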
It all comes down to how you define acceptable risk. When I ran my ISP the definition was "no more than one week", and therefore the tapes (which were what we used at the time) went to the safe deposit box once weekly. If you're really ridiculous about "no failures" you run a real-time journaled filesystem across a network to a server in a different physical location and then back THAT up, which now makes you disaster-proof to a large degree as well.
Here's another example -- I run a pretty popular blog and trading forum focusing on the markets and politics. The "primary" machine is at a colocation site in North Carolina. The entire data repository for the forum and blog is stored in a Postgres database. That database is "hot" mirrored to my home network in Niceville over an SSL-encrypted link, where a second copy is synchronously maintained on a completely separate machine in a rack in my utility room. That database server can be brought online as the "main" in literal seconds, losing at worst the last comment posted to the forum at the time. If the colo burns to the ground or is hit by a tornado I am offline for mere seconds, and the DNS can be swapped in under an hour.
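If you run that kind of hot-standby setup, keep an eye on the mirror so you know it's actually replaying. A quick sketch of a health check (the host, database and user are placeholders; the pg_* functions are the names PostgreSQL 10 and later use):

```python
#!/usr/bin/env python3
# Sketch: confirm the hot-standby copy is in recovery and not falling behind.
# Connection details are placeholders; pg_is_in_recovery() and
# pg_last_xact_replay_timestamp() are built-in PostgreSQL functions.
import psycopg2

STANDBY_DSN = "host=standby.example.local dbname=forum user=monitor"  # hypothetical

def check_standby():
    conn = psycopg2.connect(STANDBY_DSN)
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT pg_is_in_recovery()")
            in_recovery, = cur.fetchone()
            if not in_recovery:
                return "NOT a standby -- did somebody promote it?"
            cur.execute(
                "SELECT COALESCE(EXTRACT(EPOCH FROM now() - "
                "pg_last_xact_replay_timestamp()), 0)")
            lag, = cur.fetchone()
            if lag > 30:
                return f"standby is {lag:.0f}s behind -- check the link"
            return f"standby healthy, {lag:.1f}s behind"
    finally:
        conn.close()

if __name__ == "__main__":
    print(check_standby())
```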
But that's not good enough, because a software failure or a malicious intruder could issue a command to wipe the database, and that wipe would be dutifully echoed down to Niceville, destroying the copy there too! So to cover that possibility I can timeline-restore (point-in-time restore) the database, and I also physically back up the disks on both ends; those backups go into the vault rotation. That's "safe enough" for my uses.
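The periodic physical copies that feed the vault rotation can be as simple as a scheduled base backup; a sketch using pg_basebackup (destination path and credentials are placeholders -- the timeline/point-in-time restore then amounts to replaying archived WAL against one of these):

```python
#!/usr/bin/env python3
# Sketch: take a dated physical base backup suitable for vault rotation.
# Connection details and destination path are placeholders; pg_basebackup
# with -F t -z -X stream produces compressed tarballs plus the WAL needed
# to make the copy consistent on its own.
import datetime
import subprocess

def take_base_backup(dest_root="/backups/base"):             # hypothetical path
    stamp = datetime.date.today().isoformat()
    dest = f"{dest_root}/{stamp}"
    subprocess.run(
        ["pg_basebackup",
         "-h", "localhost", "-U", "backup",                   # placeholder credentials
         "-D", dest, "-F", "t", "-z", "-X", "stream"],
        check=True)
    return dest

if __name__ == "__main__":
    print("base backup written to", take_base_backup())
```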
Incidentally, about a year and a half ago I got to test this "for real"; the RAID controller in the system at the colo went insane and scribbled on all the disks, crashing the machine and destroying all the data that was there. So much for RAID's redundancy!
No data was lost.
Remember that backups are not just about hardware failures but also about mistakes (e.g. accidentally deleting a directory full of shots you didn't really mean to kill!)