That's typical for a lot of large RAID rebuilds. In the clusters I maintain (big enterprise RAID boxes), I don't run the same manufacturer/make/model for all drives, because drives from the same batch commonly fail around the same time due to a shared flaw or design issue. RAID 5 is notorious for exactly your problem - a second or subsequent drive failing during the rebuild of another. The hard, constant reads needed to rebuild the parity stripe can push other weak drives over the edge and cause them to fail. Many an R5 array has been lost over the years because of this. Back in the day when drives were < 1TB, R5 was OK. With today's larger drives and extended rebuild times, nobody runs R5 anymore - it's too risky. It's R6 at minimum, but typically R50+1 or R60+1 for true data integrity. For backups, it's mostly just large R1 arrays with big drives.
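To put "extended rebuild times" in rough numbers - this is a best-case, back-of-the-envelope sketch, and the 200 MB/s sustained rate is an optimistic assumption (real rebuilds run slower under normal array load):

```python
def rebuild_hours(capacity_tb: float, sustained_mb_s: float) -> float:
    """Best-case rebuild time: stream the whole drive at a sustained rate."""
    seconds = (capacity_tb * 1e12) / (sustained_mb_s * 1e6)
    return seconds / 3600

# A 20TB drive at 200 MB/s sustained is over a day of full-tilt I/O,
# all while the rest of the array is being hammered by the same reads.
print(f"{rebuild_hours(20, 200):.1f} hours")  # -> 27.8 hours
```

That whole window is time the remaining drives spend under maximum stress, which is exactly when the next weak drive lets go.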
"Western Death" is well known too for cascading drive failures after the first one fails, especially on the cheaper consumer drives like the Reds.
The other issue is that Synology and other consumer NAS boxes usually don't live-test the drives; they rely on SMART data to predict impending failure. Often, SMART doesn't record any major issue until the drive has been shut down - then, during startup, the SMART data is checked, and that's when the problem gets noticed. There's also a grey area where the firmware predicts a possible failure somewhere between the "normal" and "failing" states. That threshold isn't always reported, and errors in that grey area don't always trip alarms. Only when the drive massively exceeds the error threshold, or racks up numerous errors in a short period, does the alarm finally trip.
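That grey zone is easier to see with a toy classifier. This is purely illustrative - real vendor thresholds and margins vary per attribute and are often opaque, and the `grey_margin` of 10 here is an invented number, not anything a drive actually reports:

```python
def classify_smart(value: int, threshold: int, grey_margin: int = 10) -> str:
    """Illustrative only: classify a normalized SMART attribute.
    value: current normalized attribute (higher = healthier, typically 1-253)
    threshold: vendor failure threshold for that attribute
    """
    if value <= threshold:
        return "failing"  # at/below threshold: SMART finally trips the alarm
    if value <= threshold + grey_margin:
        return "grey"     # degrading toward threshold, but no alarm yet
    return "normal"

# e.g. an attribute with a vendor threshold of 36:
print(classify_smart(100, 36))  # normal
print(classify_smart(40, 36))   # grey - the zone that never alarms
print(classify_smart(30, 36))   # failing
```

The point is that a drive can sit in that "grey" band for months, quietly accumulating reallocated sectors, and the NAS will happily report it as healthy.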
In your situation, the first drive may have run for months more before the warning picked up on it. Or the second drive may have run for months more if the rebuild hadn't driven it so hard. It's actually safer nowadays to run 2x 20TB in R1 (with each being a different brand) than 5x 5TB in R5. You 1) are less likely to encounter a rebuild error resulting in total array failure, 2) get faster performance, and 3) have faster rebuild times, as it simply has to copy data to the new drive instead of rebuilding from parity stripes.
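A rough sense of why the parity rebuild is the dangerous part. This is a sketch, not a rigorous model: the 1-in-1e14 unrecoverable-read-error (URE) rate is the usual consumer-drive spec-sheet number and an assumption here, and it treats errors as independent:

```python
import math

def p_unrecoverable_read(bytes_read: float, ure_per_bit: float = 1e-14) -> float:
    """Probability of hitting at least one URE while reading bytes_read,
    assuming independent errors at ure_per_bit (consumer spec-sheet rate)."""
    bits = bytes_read * 8
    return -math.expm1(bits * math.log1p(-ure_per_bit))

# R5 rebuild on 5x 5TB: must cleanly read all 4 surviving drives (20TB).
# On many controllers a single URE aborts the entire rebuild.
p_r5 = p_unrecoverable_read(4 * 5e12)
print(f"{p_r5:.2f}")  # roughly 0.80

# An R1 rebuild on 2x 20TB reads a similar amount of data, but a URE there
# typically costs you one sector or file - not the whole array.
```

That asymmetry in consequences, plus the simpler copy, is why big mirrors have aged better than R5.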
Synology, Areca, QNAP, etc. are all good devices. It's the drive quality and the underlying redundancy scheme that usually bite you. We don't run "NAS grade" drives - we use enterprise-grade drives that are designed to be driven hard. We also run higher RAID levels to ensure rebuildability. The last piece is to routinely test the drives AND have a preemptive replacement schedule where you roll old drives out before their expected failure. That's based on the usage of the cluster and the warranty of the drive. Ignore the "MTBF" number - that's a statistical guess at average drive life. If there's a 3-year warranty on the drive and it's a heavily used cluster, we start pulling drives around 2.5 years in. For a lightly used cluster, we may run those drives up to 4 years, if the system has a +1 or +2 hot standby ready to roll into the bay.
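That replacement policy can be sketched as a tiny helper. The 2.5y/4y figures come from the numbers above, but the exact offsets are a judgment call per shop, not a standard - treat this as illustrative:

```python
from datetime import date, timedelta

def pull_date(in_service: date, warranty_years: float, heavy_use: bool,
              has_hot_standby: bool = False) -> date:
    """Illustrative preemptive-replacement schedule.
    Heavy use: pull well before the warranty expires.
    Light use with hot standbys in the bay: you can stretch past warranty."""
    if heavy_use:
        years = warranty_years - 0.5   # e.g. 3y warranty -> pull ~2.5y in
    elif has_hot_standby:
        years = warranty_years + 1.0   # e.g. 3y warranty -> run to ~4y
    else:
        years = warranty_years         # no spare on deck: pull at warranty
    return in_service + timedelta(days=round(years * 365.25))

# Drive racked Jan 2022, 3y warranty, heavily used cluster:
print(pull_date(date(2022, 1, 1), 3, heavy_use=True))  # mid-2024
```

The useful part isn't the exact dates - it's that the pull date is decided when the drive goes in, not when SMART starts complaining.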