Thread started 04 Jan 2014 (Saturday) 14:12

1,2,3,4 TB drives?

 
CBClicks
Hatchling
5 posts
Joined Feb 2013
Location: Alaska, USA
     
Jan 11, 2014 10:14 |  #76

If you want to go with a NAS setup, I'd recommend looking at the QNAP NAS units. They can be a little spendy, but they are easily configured. I'm a server administrator for work and use these at many of my client sites for their data backups.

You can get one of the 2-drive units and set it up as RAID 1, which writes the same data to both drives at the same time. If one drive fails, you replace it and your data is replicated back onto the new drive. I do not recommend going with a software RAID setup (RAID done by your OS instead of a dedicated RAID controller card).

As far as brand/model of drive goes: I'd never recommend a WD Blue or Green; they are designed more for home users, cheap desktops and low power consumption. I use only WD Black (enterprise level) or Hitachi HDDs in my NAS units.


5DMarkIII | 7D | 17-40mm F4L | 24-105mm F4L | 100mm F2.8L | 50mm 1.4 |

  
notgoodatusernames
Hatchling
3 posts
Joined Jan 2014
     
Jan 11, 2014 10:58 |  #77

I'm not really sure what job description has to do with posts. Either your argument is weak and pointless, or it is factual. If it means anything, I work for a global aerospace, defense, security, and advanced technology company.

mike_d wrote in post #16588097 (external link)
Hard drives tend to have good and bad batches. If you have two identical drives -- purchased at the same time -- and one fails, the probability of the other failing quickly thereafter is pretty high. Getting drives from two different makers minimizes the risk of near-simultaneous failures.

If the user needs as much space as possible, then 3TB drives are not the best choice unless they go to a RAID system.

This is complete nonsense... Hard drives do not have bad batches like state-of-the-art motherboards do; however, they can be treated poorly during shipping. The chance of two drives dying at the exact same time, after you have stress-tested them for a reasonable period, is extremely slim. You can buy drives from separate shops and they can still be grouped in the same shipment. You can also run into bother when buying different brands of HDDs. You should always stress-test hardware, but it is foolish to advise someone to buy different brands of HDDs because of "good and bad batches."

You should buy HDDs that are the highest quality, the cheapest, the fastest, the most reliable and obviously the most suitable for you. Everything is a compromise, however, finding one HDD you really like and then finding an inferior HDD and buying that purely because it's not from the "same batch" seems ridiculous.

mike_d wrote in post #16588109 (external link)
All SATA drives are hot-swappable as long as the controller is configured for it. I've hot-swapped non-enterprise SATA hard drives in my Synology. I've hot-swapped SATA drives in Windows boxes by taking the drive offline in Disk Manager first and refreshing after.

Again, nonsense.

P51Mstg wrote in post #16590624 (external link)
For those who have raised the "RAID IS TOO MUCH FOR HOME USE"....

A quick thought... I have a lot of data, well over 20TB of photos and about 15TB of video; maybe another 10TB of stuff that, if I lost it (like junk in the basement and garage), I'd be happier in the long run....

I've got three 8-drive Thecus 8800 PROs (and more drives for backup), which have 8 x 3TB or 8 x 2TB (the one with junk on it) drives in them. I only turn them on when I need them, to save power and drive wear (with 1 million hours MTBF per drive, failures will be rare)...

I set them up as RAIDs with 7 usable drives each, set up as RAID 5 (you can lose one drive and it still works, and you can use it while it rebuilds).

I did that for ONE reason. If the system goes down as a RAID 0 (all drives put together as a single BIG drive... lose one, lose all the data), the time to recopy the data back onto the server is about 2 days. It's a PAIN to do, even hooking up another NAS similar to the first to reload it.

One other thing you'll see in my past posts is that there are speed issues on RAID drives.

My NAS are relatively expensive ($1600 new without hard drives; cheaper used on EBAY), BUT they are FAST, easily running 90MB/sec copying. CONSUMER NAS (4TB for $300 with drives, from Seagate etc.) maybe manage 25 MB/sec IF you are lucky... THAT ALSO IS USING THE SAME HARD DRIVES INSIDE; it's the NAS box that doesn't have the power to move the data.

Bandwidth... PRO drives: 3 users (or copying from 3 sources), no real change in speed. Consumer: try that and the drive stops. 1 user OK, 2 slow, 3 about stopped, 4 forget it... SPEED is related to NAS horsepower (processor) more than it is to RAM in the NAS. My NAS boxes with INTEL CORE 2 DUO processors (and I have a later 8900 which has an i3 processor) absolutely kick the butts of the processors that come out of a CASIO wristwatch that they put in consumer NAS boxes. I've tried changing RAM (which is tough to do, since it's very specific) on a lower-level NAS I had and it didn't make a performance difference at all. Watch tests on the web for the NAS you are going to buy; their version of "it's blazing fast" is RELATIVE. In a field of SMARTCARS, it may be a BIT faster, but it's hardly a FORMULA ONE RACE CAR.... The slowest Formula One on the block is WAY faster than the fastest SMARTCAR or pickup truck....

Making your own RAID on your PC... My computer has tons of power and is RAID capable. I wouldn't trust a RAID built on it, since it has a Chinese (or whatever) built motherboard, with directions written in pidgin English, and if something fails, it's pretty much gone....

Get a RAID from a RAID company.... I guess DROBO is OK. Simple drives for simple people. The offsite backup function looks awesome. But too many people complain. Really, to ME, a lot of the problem has to come from the drives. Drobo USED to (and may still) advertise that you can use any combination of drives you have in their box. So you use an old 250GB drive, a 1TB and a 2TB, from 3 different manufacturers....

THAT'S BEGGING for trouble as far as I'm concerned... (well past ASKING for trouble)... When a RAID writes, it wants to lay down its stripes of data in the same time frame. So a fast drive mixed with a slow drive is going to cause problems...

I'm sure results would be better if all the drives MATCHED.... Trust me all MY DRIVES in each NAS box are the same down to the model number and version of the firmware. So they should all perform the same. Never had a problem I could trace to that issue...

Last issue: as a RAID gets full (at 90% or more), they get what the manufacturers term "UNSTABLE", which means they do strange things without reason (like teenagers)... They drop offline, RAIDs don't get recognized, etc.... It sucks, but it happens...

Still, bottom line: I keep the real data on the NAS. If the computer gets a virus or Windows stops working or whatever, at least I know the data is good and it's safe. Never had a virus issue on the RAID yet... I don't leave anything on the desktop I can't make it without....

Got carried away there, but I hope someone learned something from this...

Mark H

It sounds more like the speed issues you're getting are from the Ethernet port and not the drives themselves. Your write rates sound EXTREMELY slow, and you can copy 40TB of data quite quickly without RAID.

Most drives now write faster than the speeds you quoted. Ten 4TB drives copying data to ten 4TB drives in parallel would obviously shift data quite fast. Do remember that if you need to back up all the drives, as in this rare hypothetical instance, they would take less than a day to copy 40TB of data. Regardless, this is a rare situation and you should be using a UPS. Drives you don't access should always be taken out of the computer.
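If anyone wants to sanity-check that copy-time claim, here is a rough back-of-the-envelope sketch in Python. The ~100 MB/sec per-drive and ~25 MB/sec consumer-NAS figures are assumptions for illustration only, not benchmarks of any particular drive or box:

# Rough estimate of how long large drive-to-drive copies take.
# Throughput figures are assumptions for illustration only.
MB_PER_TB = 1_000_000  # decimal TB, the way drive vendors count

def copy_hours(tb, mb_per_sec):
    """Hours needed to move `tb` terabytes at a sustained `mb_per_sec`."""
    return tb * MB_PER_TB / mb_per_sec / 3600

# Ten 4TB drives each copying to their own 4TB target, in parallel:
print(f"per drive, 4TB at 100 MB/s: {copy_hours(4, 100):.1f} h")    # ~11 h
# The same 40TB pushed through a single ~25 MB/s consumer NAS link:
print(f"40TB at 25 MB/s: {copy_hours(40, 25) / 24:.1f} days")       # ~18.5 days

So drive-to-drive in parallel, the "less than a day" figure holds; funnel everything through one slow NAS link and it does not.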

mike1812 wrote in post #16591294 (external link)
I do something completely different, though it's because it's a multi-purpose system (media server, PC backup, other). I run a WHS2011 server with approximately 75TB (20 x 4TB, which works out to about 75TB usable after formatting). Certain folders are replicated, meaning the files are copied to other drives in the pool. Those files also get backed up to external drives on an irregular basis (depends on how much I've shot lately and added to the folder). All of those files are secondary to the original data residing on my PC drive.

As for drives, after 2 home servers, I gave up WD as a brand altogether - just way too many failures. I strictly buy HGST (formerly Hitachi) now. And yes, I know WD bought the company, but I still think the HGST drives are made at the Hitachi plant, and they have been (knock wood) rock solid reliable for me.

Change your operating system. With that much data and utility, you should be using virtualisation anyway.

CBClicks wrote in post #16596580 (external link)
If you want to go with a NAS setup, I'd recommend looking at the QNAP NAS units. They can be a little spendy, but they are easily configured. I'm a server administrator for work and use these at many of my client sites for their data backups.

You can get one of the 2-drive units and set it up as RAID 1, which writes the same data to both drives at the same time. If one drive fails, you replace it and your data is replicated back onto the new drive. I do not recommend going with a software RAID setup (RAID done by your OS instead of a dedicated RAID controller card).

As far as brand/model of drive goes: I'd never recommend a WD Blue or Green; they are designed more for home users, cheap desktops and low power consumption. I use only WD Black (enterprise level) or Hitachi HDDs in my NAS units.

Enterprise level blacks?

http://www.brightsideofnews.com …-black-vs-enterprise.aspx (external link)

For what you guys need, tools like these can be useful:

http://www.amazon.co.uk …TF8&qid=1389459421&sr=1-8 (external link)

http://www.amazon.co.uk …-26&keywords=5%22+hdd+bay (external link)




  
mike_d
Cream of the Crop
5,219 posts
Gallery: 1 photo
Likes: 450
Joined Aug 2009
     
Jan 11, 2014 11:38 |  #78

notgoodatusernames wrote in post #16596682 (external link)
This is complete nonsense... Hard drives do not have bad batches like state-of-the-art motherboards do; however, they can be treated poorly during shipping. The chance of two drives dying at the exact same time, after you have stress-tested them for a reasonable period, is extremely slim. You can buy drives from separate shops and they can still be grouped in the same shipment. You can also run into bother when buying different brands of HDDs. You should always stress-test hardware, but it is foolish to advise someone to buy different brands of HDDs because of "good and bad batches."

You should buy HDDs that are the highest quality, the cheapest, the fastest, the most reliable and obviously the most suitable for you. Everything is a compromise, however, finding one HDD you really like and then finding an inferior HDD and buying that purely because it's not from the "same batch" seems ridiculous.

The Deathstar 75GXP was already mentioned. Those things were dying right and left. Seagate had a run of 7200.9's (?) with faulty firmware that was killing drives by the thousands. This isn't 1986. Drives don't need to be identical. The problem with buying the "highest quality" drives is that you later find out you bought a bunch of junk when they all start failing. I'll spread my risk a bit by not having identical drives, rather than finding out two years later that they're all part of the worst run of hard drives ever made.

notgoodatusernames wrote in post #16596682 (external link)
Again, nonsense.

Are you calling me a liar? You're telling me I haven't swapped all kinds of SATA drives in my Synology and Windows machines? SATA = hot swappable, provided the host is capable.




  
CBClicks
Hatchling
5 posts
Joined Feb 2013
Location: Alaska, USA
     
Jan 11, 2014 20:04 |  #79

mike_d wrote in post #16596739 (external link)
You're telling me I haven't swapped all kinds of SATA drives in my Synology and Windows machines? SATA = hot swappable, provided the host is capable.

You're correct. It's the same idea as having an external drive reader. While I'm not familiar with Synology, the basics of it are simple. The host only has to be able to eject the disk from the OS and then disconnect power to the drive bay.

Many servers come with hot-swappable drive bays so they don't have to be powered off (thereby shutting down services on the network) just to replace a hard drive.
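For the curious, on a Linux host the "eject from the OS first" step can even be scripted; a minimal sketch, assuming the disk about to be pulled shows up as /dev/sdb and everything on it is already unmounted (the device name is just an example):

# Detach a SATA disk from the OS before physically pulling it.
# Assumes /dev/sdb is the target and all of its filesystems are unmounted.
import subprocess

disk = "sdb"  # hypothetical device name; adjust to the real bay

subprocess.run(["sync"], check=True)              # flush dirty buffers first
with open(f"/sys/block/{disk}/device/delete", "w") as f:
    f.write("1")                                  # ask the kernel to drop the device
print(f"/dev/{disk} detached; safe to pull if the bay and controller support hotplug")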


5DMarkIII | 7D | 17-40mm F4L | 24-105mm F4L | 100mm F2.8L | 50mm 1.4 |

  
pwm2
"Sorry for being a noob"
8,626 posts
Likes: 2
Joined May 2007
Location: Sweden
     
Jan 11, 2014 20:28 |  #80

ImCBParker wrote in post #16578671 (external link)
It is not technically a backup, but by definition it creates a redundancy that helps in the event of a single drive failure.

Exactly - what RAID does is improve availability, i.e. the ability to continue to access the data and keep working despite one (or sometimes more) disks being broken.

But since there is (to the computer) only one copy of every file, a single oops will overwrite that file with a broken copy. So even with RAID-1 (mirror), a single overwrite kills whatever was overwritten.

I have heard very good things about the Synology devices as well.

Synology have decent prices, and lots of people seem to be very happy with them.

Just as quite a lot of people seem to have seen very interesting failures with Drobo.

If going with home-cooked hardware, then unRAID (external link) is a route that gives RAID-5-style security while making it easy to add more disks or replace individual disks with larger ones. And a dual-disk failure can't kill all the data. Each data disk can be read directly by any standard Linux machine, which isn't true of most RAID-based NAS solutions.
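For anyone wondering how a single parity disk buys you that security, the trick fits in a few lines of Python (toy byte strings standing in for disk contents):

# Toy illustration of single-parity protection (the mechanism behind RAID-5
# and unRAID's parity disk): parity = XOR of all data disks, so any one
# lost disk can be rebuilt from the survivors. Disk contents are made up.
from functools import reduce

def xor_blocks(*blocks):
    """Byte-wise XOR of equally sized blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

d1 = bytes([0x11, 0x22, 0x33])
d2 = bytes([0x44, 0x55, 0x66])
d3 = bytes([0x77, 0x88, 0x99])
parity = xor_blocks(d1, d2, d3)

# Pretend disk 2 died; rebuild it from the survivors plus parity:
rebuilt = xor_blocks(d1, d3, parity)
assert rebuilt == d2
print("rebuilt disk 2:", rebuilt.hex())

Lose two disks and the XOR no longer has enough information to rebuild them, but the surviving data disks are still plain file systems, which is why a dual-disk failure only costs the data on the failed disks rather than the whole array.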


5DMk2 + BG-E6 | 40D + BG-E2N | 350D + BG-E3 + RC-1 | Elan 7E | Minolta Dimage 7U | (Gear thread)
10-22 | 16-35/2.8 L II | 20-35 | 24-105 L IS | 28-135 IS | 40/2.8 | 50/1.8 II | 70-200/2.8 L IS | 100/2.8 L IS | 100-400 L IS | Sigma 18-200DC
Speedlite 420EZ | Speedlite 580EX | EF 1.4x II | EF 2x II

  
pwm2
"Sorry for being a noob"
8,626 posts
Likes: 2
Joined May 2007
Location: Sweden
     
Jan 11, 2014 20:50 |  #81

RileyNZL wrote in post #16582742 (external link)
There is also a roughly 40% chance of a 4x 2TB raid array failing to rebuild due to an unrecoverable read error. Not to mention all the other freak things that can go wrong, like the power going out during a rebuild etc. RAID 5 also doesn't have snapshots or silent error detection making it even worse for archival.

Sounds a bit like you might have read this article:
http://www.zdnet.com …stops-working-in-2009/162 (external link)

Well, I have a huge number of TB of data that gets regularly scanned. I just haven't seen these huge issues with unrecoverable read errors. 1 in 10^14 per bit read means one bit broken in every 3 billion 4kB reads or one bit broken in every 12.5 TB read.

I scan about 50TB every two months, so over the last year I scanned about 300 TB. Zero failed SHA-256 checksums. No failed scans in 2012 either, even though I only scanned about 200 TB that year. So most probably the real error rate is quite a bit better than the "less than 1 bit in 1E14 read" note in the HDD datasheets.
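For anyone who wants to run the same kind of scan, here is a minimal sketch of the idea in Python: hash every file and compare against a manifest saved by the previous run. The manifest name and layout are my own assumptions for illustration:

# Periodic integrity scan: SHA-256 every file under a root directory and
# compare against the hashes recorded by the previous run.
import hashlib, json, os, sys

def sha256(path, bufsize=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(bufsize), b""):
            h.update(chunk)
    return h.hexdigest()

def scan(root, manifest="checksums.json"):
    old = json.load(open(manifest)) if os.path.exists(manifest) else {}
    new, changed = {}, []
    for dirpath, _, names in os.walk(root):
        for name in names:
            p = os.path.join(dirpath, name)
            new[p] = sha256(p)
            if p in old and old[p] != new[p]:
                changed.append(p)  # silent corruption candidate (or a real edit)
    with open(manifest, "w") as f:
        json.dump(new, f, indent=1)
    return changed

if __name__ == "__main__":
    for p in scan(sys.argv[1]):
        print("CHANGED:", p)

On a mostly write-once photo archive, anything flagged as changed that you didn't deliberately edit deserves a closer look.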

But it is relevant to remember that we are getting to hard disk sizes where that unrecoverable read error figure starts to be important, which is why we should not put ourselves in a situation where the data is only available on a single disk. If we started with two disks and one fails, then we just might have such a read error on the remaining disk. And if we started with RAID-5 and one disk fails, then we just might have such a bit error on one of the remaining disks, preventing a successful recovery. And while a RAID-6 should survive one disk failing, it just might fail to recover from two failed disks because one of the non-failing disks has a one-bit unrecoverable error.
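For reference, here is the arithmetic behind that worry, taking the datasheet figure of one unrecoverable error per 1E14 bits read at face value (real drives seem to do better, as noted above):

# Unrecoverable-read-error (URE) arithmetic at the datasheet rate of 1e-14 per bit.
URE_PER_BIT = 1e-14

def p_clean_read(terabytes):
    """Probability of reading `terabytes` (decimal TB) without hitting a single URE."""
    bits = terabytes * 1e12 * 8
    return (1 - URE_PER_BIT) ** bits

print(1e14 / 8 / 1e12)                                    # 12.5 TB read per expected error
# Rebuilding a degraded 4 x 2TB RAID-5 means reading the three surviving disks:
print(f"clean rebuild odds: {p_clean_read(3 * 2):.0%}")   # roughly 62%

Which is where the "roughly 40% chance of a failed rebuild" figure quoted earlier comes from - at the datasheet rate, not the rate the drives appear to deliver in practice.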


5DMk2 + BG-E6 | 40D + BG-E2N | 350D + BG-E3 + RC-1 | Elan 7E | Minolta Dimage 7U | (Gear thread)
10-22 | 16-35/2.8 L II | 20-35 | 24-105 L IS | 28-135 IS | 40/2.8 | 50/1.8 II | 70-200/2.8 L IS | 100/2.8 L IS | 100-400 L IS | Sigma 18-200DC
Speedlite 420EZ | Speedlite 580EX | EF 1.4x II | EF 2x II

  
pwm2
"Sorry for being a noob"
8,626 posts
Likes: 2
Joined May 2007
Location: Sweden
     
Jan 11, 2014 21:12 |  #82

techhelp wrote in post #16587992 (external link)
This is bad advice; there's no logic in buying 4TB drives of different brands, 3TB drives are more reliable and it's better to stick with the same brand.

If building one RAID machine, you want identical disks in the RAID.

But for backup purposes, it's better to mix both Seagate and WD disks than to go for a single brand. This makes sure that any serious manufacturing or firmware oops doesn't hurt both disks. Like that little Seagate firmware oops that could accidentally brick the disks in a misguided attempt to protect the hardware - locking the customers out of their data at the same time. This is especially important when you have both a work disk and one or more backup disks that are all powered up and so aging at the same pace. Buy multiple disks with close serial numbers and they have quite similar life expectancy - if one fails after 12 months, there is a large probability that another disk will die within a short time span because it suffers from the same manufacturing problem that killed the first. Lots of people have had more disks pop in their RAID enclosures while trying to recover from the first disk failure.

Western digital greens should not be recommended for RAID anything.

Lots of people would like to rewrite that sentence to "should not be recommended for anything" - too many people suffer from the short head retract time, but don't know about it (or how to change that time) until it's too late.

NAS/RAID systems are overrated and used for simplicity. The RAID systems they use are poor and even if you put the most reliable HDDs in them, it won't matter because a few years down the line when you can no longer buy the same NAS model, you'll find it EXTREMELY difficult to swap your RAID array to another RAID card.

That's a simplification. As hardware speeds have improved, lots of these boxes are using software RAID instead of a hardware RAID card. A number of them use standard Linux software RAID (md), or some other open-source solution like ZFS, or a FreeBSD-based solution. Or unRAID, which uses standard Linux file systems with nothing RAID-specific on the data disks, plus a parity disk.
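As an illustration of how un-exotic that software RAID layer usually is: on any box that exposes a shell and runs Linux md, you can spot a degraded array straight from /proc/mdstat. A minimal sketch (the parsing assumptions are mine):

# Flag degraded Linux md (software RAID) arrays by reading /proc/mdstat.
import re

def degraded_arrays(path="/proc/mdstat"):
    """Return md device names whose member status (e.g. [UU] vs [U_]) shows a missing disk."""
    text = open(path).read()
    bad = []
    for block in re.split(r"\n(?=md\d+ :)", text):
        name = block.split(" ", 1)[0]
        m = re.search(r"\[([U_]+)\]\s*$", block, re.M)
        if m and "_" in m.group(1):
            bad.append(name)
    return bad

if __name__ == "__main__":
    print("degraded arrays:", degraded_arrays() or "none")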

They're fast to set up but in every other aspect they are inferior to building a small server.

The majority of customers don't know how to build a small server. So a small server they can't build would be the inferior option for them.

You're not going to get the benefits of uptime with a home computer because the HDDs you'll use are likely to be non hot swappable HDDs.

Note that small companies don't need 100% uptime. But they might need to be able to decide when they take the downtime, i.e. to be able to make a copy of some important files before they turn off the non-hotplug hardware and replace the broken disk. And if their NAS does run with a hot spare, then they can wait until the RAID is done synchronizing before powering it down to replace the broken disk.

So uptime and availability are not the same thing. Planned maintenance is way better than unscheduled maintenance.


5DMk2 + BG-E6 | 40D + BG-E2N | 350D + BG-E3 + RC-1 | Elan 7E | Minolta Dimage 7U | (Gear thread)
10-22 | 16-35/2.8 L II | 20-35 | 24-105 L IS | 28-135 IS | 40/2.8 | 50/1.8 II | 70-200/2.8 L IS | 100/2.8 L IS | 100-400 L IS | Sigma 18-200DC
Speedlite 420EZ | Speedlite 580EX | EF 1.4x II | EF 2x II

  
pwm2
"Sorry for being a noob"
8,626 posts
Likes: 2
Joined May 2007
Location: Sweden
     
Jan 11, 2014 21:17 |  #83

RTPVid wrote in post #16588482 (external link)
Interesting. And, counter-intuitive since WD Greens are supposedly designed for lower power consumption and have a lower spin rate (RPM).

I have a couple of 3TB Reds and 3TB Greens in a server machine, i.e. mounted close together with the same cooling. The Greens run 2°C warmer than the Reds, and the Blacks run 8°C warmer than the Reds.


5DMk2 + BG-E6 | 40D + BG-E2N | 350D + BG-E3 + RC-1 | Elan 7E | Minolta Dimage 7U | (Gear thread)
10-22 | 16-35/2.8 L II | 20-35 | 24-105 L IS | 28-135 IS | 40/2.8 | 50/1.8 II | 70-200/2.8 L IS | 100/2.8 L IS | 100-400 L IS | Sigma 18-200DC
Speedlite 420EZ | Speedlite 580EX | EF 1.4x II | EF 2x II

  
pwm2
"Sorry for being a noob"
8,626 posts
Likes: 2
Joined May 2007
Location: Sweden
     
Jan 11, 2014 21:40 |  #84

Wilt wrote in post #16593987 (external link)
Hitachi nickname found often on the web...'DeathStar'

The name came from a limited range of old IBM disks that had a firmware error where the head didn't regularly move on idle disks - so after a while enough oil could pool up around the head, because of the air flow inside the drive, and make the head drown. Everyone assumed the models had bad hardware, but the people who kept them spinning and busy never saw any issues with them.


5DMk2 + BG-E6 | 40D + BG-E2N | 350D + BG-E3 + RC-1 | Elan 7E | Minolta Dimage 7U | (Gear thread)
10-22 | 16-35/2.8 L II | 20-35 | 24-105 L IS | 28-135 IS | 40/2.8 | 50/1.8 II | 70-200/2.8 L IS | 100/2.8 L IS | 100-400 L IS | Sigma 18-200DC
Speedlite 420EZ | Speedlite 580EX | EF 1.4x II | EF 2x II

  
Daphatty
Senior Member
490 posts
Gallery: 1 photo
Likes: 13
Joined Jan 2012
     
Jan 12, 2014 00:59 |  #85

ImCBParker wrote in post #16578646 (external link)
Well aware of Kelby's issues, he is one guy. Go to any review site and look at the positive reviews vs. negative. Overwhelmingly positive, more so than most external devices. I certainly get the skepticism if Kelby was my only resource.

I have not had one drive go bad in either of my Drobos. Ditto for the handful of other photographers I know who use them. Any hard drive failure can be terrible. There are plenty of RAID/NAS devices besides Drobos. Given their prices are within consumer reach, it truly is the best scalable option for large data collections. Individual hard drives are fine up to their capacity, but like all external drives, good luck when either their boards or drives fail.

I've been working with Drobos since 2008. Every Drobo I've ever dealt with (5 unique models so far) has failed to provide the redundancy they tout. All of them were purchased for a corporate environment, and every single one failed to detect drive failures until it was too late. The most recent failure cost my company nearly $3k in data restoration costs.

Discredit the nay-sayers all you want. Drobos are utter crap and I will never recommend them to anyone.

That said, if the OP is thinking of going the NAS route, might as well get the largest drives you can afford. The obvious benefit is capacity, and keep in mind that it will be a long time before you have to upgrade your drives again.


Canon 5D III | EF 24-105mm f/4L IS | EF 70-200mm f/2.8L IS II | EF 40mm f/2.8
Nest NT-6295c Tripod | Bendo IB2 Ballhead

  
P51Mstg
Goldmember
1,336 posts
Likes: 2
Joined Feb 2007
Location: Mt. Carmel, TN
     
Jan 12, 2014 08:16 as a reply to  @ Daphatty's post |  #86

Daphatty... Utter and true words of wisdom....

To me Drobos have been toys. Friends who have had them have had problems with them. Can't think of one that worked well...


Too Much Camera Stuff......

  
Luckless
Goldmember
3,063 posts
Likes: 186
Joined Mar 2012
Location: PEI, Canada
     
Jan 12, 2014 09:52 |  #87

Daphatty wrote in post #16598489 (external link)
That said, if the OP is thinking of going the NAS route, might as well get the largest drives you can afford. The obvious benefit is capacity, and keep in mind that it will be a long time before you have to upgrade your drives again.

Before you go ahead and just get 'the largest drives you can afford', I would say step back and think about what kind of data you are producing and expect to produce.

If you are very conservative with your data production rates, and think you might have trouble filling even 500GB with image data over a few years, then even if you can afford to stock your NAS with a series of 4TB drives, you are probably better off sticking with far cheaper 1TB and 2TB drives for now. A low-volume shooter is not going to run out of room on 2TB drives in the next year or so, and by the time you are getting close, densities will have increased again - and maybe SSDs will even have dropped in price to the point of replacing platter drives completely.
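To put rough numbers on it, here is a tiny sketch; the per-file size and shooting rate are made-up illustration figures, not anyone's real numbers:

# Back-of-the-envelope: how long before a drive of a given size fills up?
# The per-file size and files-per-year figures are assumptions for illustration.
def years_until_full(drive_tb, mb_per_file=25, files_per_year=10_000):
    yearly_gb = mb_per_file * files_per_year / 1000
    return drive_tb * 1000 / yearly_gb

print(f"2TB drive: {years_until_full(2):.0f} years")   # ~8 years at these rates
print(f"4TB drive: {years_until_full(4):.0f} years")   # ~16 years

At rates like those, the cheaper drives comfortably outlast the point where bigger drives become cheaper anyway.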

Spending more money on today's technology to try to future-proof yourself is generally a 'very bad idea', as they say.


Canon EOS 7D | EF 28 f/1.8 | EF 85 f/1.8 | EF 70-200 f/4L | EF-S 17-55 | Sigma 150-500
Flickr: Real-Luckless (external link)

  
RTPVid
Goldmember
3,365 posts
Likes: 3
Joined Aug 2010
Location: MN
     
Jan 12, 2014 10:08 |  #88

CBClicks wrote in post #16596580 (external link)
...I use only WD Black (enterprise level) ...

FYI... WD Blacks are not marketed by WD as enterprise level.


Tom

  
RTPVid
Goldmember
3,365 posts
Likes: 3
Joined Aug 2010
Location: MN
     
Jan 12, 2014 10:15 as a reply to  @ RTPVid's post |  #89

notgoodatusernames wrote in post #16596682 (external link)
...This is complete nonsense... Hard drives do not have bad batches like state of the art motherboards do....

mike_d wrote in post #16596739 (external link)
The Deathstar 75GXP was already mentioned. Those things were dying right and left. Seagate had a run of 7200.9's (?) with faulty firmware that was killing drives by the thousands....

Well, there's this (external link), and there's this (external link). Sounds like a "bad batch" to me (or close enough).


Tom

  
nekrosoft13
Goldmember
4,087 posts
Gallery: 185 photos
Likes: 683
Joined Jun 2010
     
Jan 12, 2014 10:46 |  #90
Permanent ban

CBClicks wrote in post #16596580 (external link)
I use only WD Black (enterprise level) or Hitachi HDDs in my NAS units.

WD Black is a consumer hard drive line; WD's enterprise hard drives are the XE (performance series), RE (reliability series) and SE (scalability series).


Gear List

  