Hi All
I have a Deco M5 mesh and am pretty happy with the setup. I've now got a Seagate Plus external hard drive that needs to be connected to a router's USB port so it can be shared across the network. Unfortunately, the Deco M5 has no USB support. Looking around the web, the common suggestion is to use a Raspberry Pi as a NAS.
Is there any other way to get this sorted? I'm looking for a reliable solution.
Connect Hard Drive to TP-Link Deco M5
Last edited 31/08/2021 - 21:59 by 1 other user
Comments
- Buy a NAS
- Buy a hard drive with wifi
- Connect to a PC & share across the network
A Raspberry Pi can be used as a NAS, but be warned that USB isn't renowned for its reliability when it comes to long-term use and data transfer. If you're not too fussed about that, pick up a Pi 4 (I'd recommend 2-4GB; any more is a waste for NAS use and any less could be a bottleneck for your OS) and install OpenMediaVault (OMV) on an SD card for the Pi. Once the web UI is working, connecting and setting up the USB HDD should be trivial.
Be aware though that OMV doesn't support RAID arrays over USB, so if you were planning on adding another HDD and mirroring them, it won't work. Also, I'm not sure exactly what model drive you have, but if it isn't externally powered (i.e. only uses the USB port for power) and requires more than 1.2A to run, the Pi won't be able to power it. I don't know of an elegant way to solve that.
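If you go the Pi route, getting the drive visible outside of OMV's web UI is just standard Linux. A rough sketch (the device name and mount point here are assumptions; check `lsblk` on your own Pi first):

```shell
# Identify the external drive (assumed /dev/sda1 below - verify with lsblk!)
lsblk -f

# Mount it manually (OMV's web UI can manage this for you instead)
sudo mkdir -p /srv/usb-hdd
sudo mount /dev/sda1 /srv/usb-hdd

# For mounts that survive reboots, reference the UUID in /etc/fstab rather
# than /dev/sda1, since USB device names can change between boots:
sudo blkid /dev/sda1
# then add an fstab line like (UUID here is a placeholder):
# UUID=xxxx-xxxx  /srv/usb-hdd  ext4  defaults,nofail  0  2
```

The `nofail` option matters on a Pi: without it, the system can hang at boot if the USB drive is unplugged.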
Just a reminder that RAID is not a backup!
Well, mirroring HDDs is a backup. You're making a copy. RAID5 or 6 I'd agree isn't a backup though.
Yes and no. With mirroring (RAID 1) you're making a live copy of the entire drive.
All writes to one drive are replicated on the other. Those include deletions, overwrites, encryption, etc. So if you accidentally delete a file, save changes you didn't want to keep, or get hit by ransomware, it will affect the data on both drives and potentially lead to a loss of that data.
Any "backup" system that could potentially cause data loss is a pretty shit backup system, in my opinion.
RAID 1,5,7,9001 - whatever number - only protects you from drive failure, with the higher RAID numbers usually being able to handle more simultaneous failures.
RAID is good, and you should use it; but keep in mind what it's protecting you from.
I presume that the reason you consider RAID 5/6 "less" of a backup than RAID 1 is because outside of the array, the individual disks are unreadable, as data is striped across all the disks; whereas on RAID 1 both disks are (usually) independently readable, since they're just a copy of one another.
If independently readable disks are of specific interest to you, you could consider using something like MergerFS with SnapRAID. https://perfectmediaserver.com/tech-stack/snapraid.html
@Chandler: Yes, but as with most modern NAS systems these days, accidental file deletions, ransomware, encryption, etc. aren't very effective against snapshots. Almost all forms of "soft damage" can be reversed. Physical failure can't be reversed easily. Of course mirrored is not an ideal backup as the drives are in close proximity meaning fire damage, water damage, theft, etc. would destroy the backup at the same time but be that as it may, it's still a backup against the usual way drives fail, spontaneously or with age.
RAID is actually not good. Not anymore, at least. RAID 5/6 in particular isn't recommended for basically any consumer drive above 1TB, due to the increasingly likely event of a URE as capacities grow. Not to mention dismal performance and write holes. RAID has been deprecated for a long time.
I personally run 6x16TB (3x(2x16TB)) mirrored vdevs on TrueNAS Core, on a UPS-backed R720 running Proxmox, taking hourly snapshots which are kept for a month.
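For anyone wondering what that layout looks like in practice, a pool of mirrored vdevs is one `zpool` command. The pool, disk, and snapshot names below are made up for illustration:

```shell
# Three mirrored pairs in one pool (use stable /dev/disk/by-id/ paths on real hardware)
zpool create tank \
  mirror ata-disk1 ata-disk2 \
  mirror ata-disk3 ata-disk4 \
  mirror ata-disk5 ata-disk6

# Hourly snapshots, e.g. from cron (TrueNAS has a built-in periodic snapshot task):
# 0 * * * * /sbin/zfs snapshot tank@auto-$(date +\%Y\%m\%d-\%H00)

zpool status tank          # shows the three mirror vdevs
zfs list -t snapshot       # shows the accumulated snapshots
```

Writes are striped across the three mirrors (effectively RAID 10), so you get the capacity of three drives and the throughput of multiple vdevs.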
I'm curious what you're running.
I've looked into SnapRAID. Don't like the concept of it tbh. Gives me unRAID vibes.
@Jai: True - but now we're talking about other protections (i.e. snapshots), not RAID.
True, all forms of "soft damage" can be reversed - if you have a backup stored elsewhere, which was my point: RAID is not a backup. And you're correct in that two identical hard drives stored in an identical location under identical operating conditions will likely die at an identical time - it's why some people recommend purchasing drives at different times and from different vendors and manufacturers: to get different drives (and from different batches, if the same manufacturer).
These operating and environmental (fire/water/theft) concerns are exactly why RAID is not a backup. As I said: RAID only protects you from drive failure, and as you rightfully pointed out, running disks in a typical RAID arrangement increases their likelihood of failing.
I disagree that RAID is not good - what else can allow you to continue to use your data after a disk failure? RAID gives you a Redundant Array of Independent Disks. One (or many, dependent upon how many failures your chosen RAID supports) disk dies? No worries, I've got other copies of that data in the array - Keep Calm and Carry On!
What other system gives you that? No downtime, no restoring, it just keeps going like nothing happened (at least for the user). Yes, if your array is no longer protected you'll likely want to use it delicately until you've replaced the failed drive and the array is healthy again, but you can use it.
I agree that (in my opinion at least) higher levels of RAID are not beneficial, due to my aforementioned issue with disks from the array being unreadable outside of context.
You also say you use ZFS, whilst still saying RAID is not good, but ZFS is (AFAIK, and I'm massively simplifying) essentially software RAID on steroids, with a significant amount of extra data protection (e.g. against bitrot).
I learnt about SnapRAID thanks to the Self-Hosted Podcast and their efforts towards a Perfect Media Server. I'm yet to implement any of it (need to get some hardware), but from what I've seen so far it seems a good fit for what I want.
My experience has mostly been with a NETGEAR ReadyNAS Duo I got quite a few years ago. I've had a few drive failures and issues with their firmware that have been a pain to recover from, being (at the time) a non-Linux user: disks in ext3/4 and thus unreadable in Windows (without something like DiskInternals), and an OS that (from memory) became unstable thanks (I believe) to large log files. At one point I decided to set it up with the disks in RAID 0 (striped) for more storage; not a mistake I'll make again.
MergerFS + SnapRAID seems (to me) to give me the benefits of RAID (via SnapRAID) and the convenience of a single storage point, whilst keeping all disks in a readable state (MergerFS pools disks with independent file systems: you can pool disks with any file system - ext3, FAT, NTFS; even ZFS pools can be included).
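As a rough sketch of how the two pieces fit together (all paths and disk names here are hypothetical):

```shell
# 1. MergerFS pools the data disks into one mount point - an /etc/fstab line:
# /mnt/disk1:/mnt/disk2  /mnt/storage  fuse.mergerfs  defaults,allow_other,category.create=mfs  0  0

# 2. SnapRAID adds parity separately, configured in /etc/snapraid.conf:
# parity  /mnt/parity1/snapraid.parity
# content /var/snapraid/snapraid.content
# data d1 /mnt/disk1/
# data d2 /mnt/disk2/

# 3. Parity is only updated when you ask - typically scheduled nightly:
snapraid sync    # compute/update parity for data written since the last sync
snapraid scrub   # periodically verify existing data against parity
```

Note that because parity is computed on a schedule rather than on every write, files changed since the last `sync` aren't protected yet - a key difference from real-time RAID.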
I'm curious what these "unRAID vibes" are, not being overly knowledgeable about unRAID (or SnapRAID really, for that matter). I do know there are some who hate unRAID and those that love it, but have never looked into it.
In terms of what I'm actually running - nothing substantial at present; I've got:
- some old hardware I'm going to see if I can turn into a functioning server
- the ReadyNAS which once I'm satisfied I've got my data safely stored away I'll probably return to service
- a Raspberry Pi (3b) running Home Assistant (OS; which includes a few services via add-ons (containers))
I do have the Pi running Plex with a handful of frequently consumed media - I don't want to push it too hard over concerns for the SD card (which has been fine so far, touch wood!)
True, all forms of "soft damage" can be reversed - if you have a backup stored elsewhere, which was my point: RAID is not a backup. And you're correct in that two identical hard drives stored in an identical location under identical operating conditions will likely die at an identical time - it's why some people recommend purchasing drives at different times and from different vendors and manufacturers: to get different drives (and from different batches, if the same manufacturer).
These operating and environmental (fire/water/theft) concerns are exactly why RAID is not a backup. As I said: RAID only protects you from drive failure, and as you rightfully pointed out, running disks in a typical RAID arrangement increases their likelihood of failing.
Firstly, I don't think RAID 5/6 are viable backup options, as I've previously stated. I think mirrored drives are (sort of) a backup. There are definitely pitfalls but you still technically have two copies of your data which protects against drive failure (most of the time).
I disagree that RAID is not good - what else can allow you to continue to use your data after a disk failure? RAID gives you a Redundant Array of Independent Disks. One (or many, dependent upon how many failures your chosen RAID supports) disk dies? No worries, I've got other copies of that data in the array - Keep Calm and Carry On!
What other system gives you that? No downtime, no restoring, it just keeps going like nothing happened (at least for the user). Yes, if your array is no longer protected you'll likely want to use it delicately until you've replaced the failed drive and the array is healthy again, but you can use it.
Mirrored vdevs (or RAID 10) are good and RAID 5/6 are bad for a number of reasons. Firstly, as I don't think you fully understand, rebuilding a RAID 5 array means you need to read all your drives in their entirety. In the event you have even one unrecoverable read error (URE) during this process, your data is toast. "NAS" drives have, on average, I'd guess 1 URE per 10^14 bits read? My enterprise Exos drives have a URE rate of 1 in 10^15, so I think my estimate is pretty generous. Assuming that's correct, with a drive size of 8TB (what I'd say is most common for NAS devices) and say 4 drives in the array, you have a ~15% chance of successfully rebuilding your array after a drive failure (that's an average over the lifespan of the device - probably a much higher success rate when the drive is new, dropping rapidly once it leaves the warranty period). RAID 6 is better, but not by much, and definitely not within a range I'd be comfortable with.
Mirrored vdevs avoid this issue by putting disks in duplicate pairs. If a drive fails, its pair holds the information needed, and rebuilding is as simple as copying the data from the surviving twin onto a fresh drive. This significantly reduces rebuild time, disk thrashing, and URE chance (to near nothing) while maintaining all the benefits of RAID 5/6.
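The ~15% figure checks out if you model each bit read during the rebuild as an independent 1-in-10^14 URE chance. Rebuilding one failed drive in a 4x8TB RAID 5 means reading the 3 surviving drives end to end:

```shell
# P(no URE) = (1 - 1e-14)^bits, which is ~= exp(-bits * 1e-14) for tiny rates
awk 'BEGIN {
  bits = 3 * 8e12 * 8   # 3 surviving 8TB drives x 8 bits/byte = 1.92e14 bits
  printf "P(rebuild survives) = %.1f%%\n", 100 * exp(-bits * 1e-14)
}'
# prints: P(rebuild survives) = 14.7%
```

(UREs aren't really independent per bit in practice, so treat this as an order-of-magnitude model rather than a precise prediction.)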
You also say you use ZFS, whilst still saying RAID is not good, but ZFS is (AFAIK, and I'm massively simplifying) essentially software RAID on steroids (a significant amount of data protections i.e. bitrot)
I use mirrored vdevs, which is a storage scheme that utilises the ZFS filesystem. It's fundamentally different, with a multitude of superior technologies. I do think RAIDZ/2/3 are much better than RAID, but I'd still choose my approach over them for a few reasons. See this for the differences between RAIDZ and RAID.
My experience has mostly been with a NETGEAR ReadyNAS Duo I got quite a few years ago. I've had a few drive failures and issues with their firmware that have been a pain to recover from, being (at the time) a non-Linux user: disks in ext3/4 and thus unreadable in Windows (without something like DiskInternals), and an OS that (from memory) became unstable thanks (I believe) to large log files. At one point I decided to set it up with the disks in RAID 0 (striped) for more storage; not a mistake I'll make again.
If I didn't know better I'd assume you were a masochist.
MergerFS + SnapRAID seems (to me) to give me the benefits of RAID (via SnapRAID) and the convenience of a single storage point, whilst keeping all disks in a readable state (MergerFS pools disks with independent file systems: you can pool disks with any file system - ext3, FAT, NTFS; even ZFS pools can be included).
I've looked more into it and it doesn't seem terrible. These solutions obviously always come with the downside of being limited to single-drive speeds, and the lack of meaningful snapshots is a dealbreaker for me.
I'm curious what these "unRAID vibes" are, not being overly knowledgeable about unRAID (or SnapRAID really, for that matter). I do know there are some who hate unRAID and those that love it, but have never looked into it.
Same old story: limited to single-drive speeds, no choice of storage scheme (mirrored, 2/3-drive parity, etc.), closed source.
I do have the Pi running Plex with a handful of frequently consumed media - I don't want to push it too hard over concerns for the SD card (which has been fine so far, touch wood!)
I hope you're not implying you're storing your Plex server's media on an SD card.
Firstly, I don't think RAID 5/6 are viable backup options, as I've previously stated. I think mirrored drives are (sort of) a backup. There are definitely pitfalls but you still technically have two copies of your data which protects against drive failure (most of the time).
Mirrored vdevs (or RAID 10) are good and RAID 5/6 are bad for a number of reasons. Firstly, as I don't think you fully understand, rebuilding a RAID 5 array means you need to read all your drives in their entirety. In the event you have even one unrecoverable read error (URE) during this process, your data is toast. "NAS" drives have, on average, I'd guess 1 URE per 10^14 bits read? My enterprise Exos drives have a URE rate of 1 in 10^15, so I think my estimate is pretty generous. Assuming that's correct, with a drive size of 8TB (what I'd say is most common for NAS devices) and say 4 drives in the array, you have a ~15% chance of successfully rebuilding your array after a drive failure (that's an average over the lifespan of the device - probably a much higher success rate when the drive is new, dropping rapidly once it leaves the warranty period). RAID 6 is better, but not by much, and definitely not within a range I'd be comfortable with.
I think we're pretty much agreeing here - due to data striping, higher forms of RAID are not "better" than RAID 1 (mirrored drives). A drive failure means all drives in the array need to have every sector read to do a rebuild, which, as you've also said, just decreases the time until the next drive failure. And if you have another failure during a rebuild, bye-bye data (depending on how many failures your array can handle). With mirrored drives/vdevs, if you have more failures than the array can handle, you only lose the data on that set of drives/vdevs, not the entire array/pool.
However, even in RAID 1 or with mirrored vdevs, you're not protecting against "bad" writes, where by "bad" I mean saved modifications to files or encryption (ransomware). Those changes are written to files on both drives, and once those writes are done, they can't be undone. I might add here that I'm not saying you shouldn't use RAID/ZFS because of this - I'm saying that you (and not you personally - you seem to know what you're doing!) need to be aware that if you want to back up your data, it's not just a case of RAID/ZFS and bam, you're done.
Now back to the "bad" writes - ZFS does negate this with snapshotting, but what if you've made other changes that you don't want to lose? Have you kept track of every file you've modified since the last snapshot? Are you willing to reload the entire vdev/pool (not sure if you can just load a vdev) onto another set of drives just to get that one or so file/s you need to recover? This is why RAID and ZFS are not backups. They definitely should be part of a backup system/solution, but they're not the entire answer.
I use mirrored vdevs, which is a storage scheme that utilises the ZFS filesystem. It's fundamentally different, with a multitude of superior technologies. I do think RAIDZ/2/3 are much better than RAID, but I'd still choose my approach over them for a few reasons. See this for the differences between RAIDZ and RAID.
I think we're on the same page here - striping = bad, which is what RAID 5/6/7/Z1/Z2/Z3 all use.
If I didn't know better I'd assume you were a masochist.
It's been an experience, let me tell you. One positive to come out of it and my research since - I no longer trust drives: they will fail. It's not a matter of if, but when. Which is part of why I am so vehemently against those who say RAID is a backup. It should be part of a backup system/solution, but it is not the whole answer. If you choose to use RAID as a backup - including mirrored disks/vdevs - I just pray you don't have another failure during a rebuild.
I've looked more into it and it doesn't seem terrible. These solutions obviously always come with the downside of being limited to single-drive speeds, and the lack of meaningful snapshots is a dealbreaker for me.
I assume your issue here is with SnapRAID and not MergerFS? (I'm not sure what effect MergerFS has on data transfers, as it's playing middle-man between the OS and the disks themselves.) MergerFS is not really a RAID tool - its task is simply to pool an array of drives (well, mount points actually) so that they appear as one. These mount points could be ZFS pools or JBOD. It's the reason you even need to use SnapRAID (and SnapRAID doesn't have to be the tool you use for RAID, nor does RAID even need to be used - unless you want RAID to achieve something). There's a good blog post on using MergerFS with ZFS here.
I hope you're not implying you're storing your Plex server's media on an SD card.
I am. And yes, I know. See also your comment on masochism.
I think we're pretty much agreeing here - due to data striping, higher forms of RAID are not "better" than RAID 1 (mirrored drives). A drive failure means all drives in the array need to have every sector read to do a rebuild, which, as you've also said, just decreases the time until the next drive failure. And if you have another failure during a rebuild, bye-bye data (depending on how many failures your array can handle). With mirrored drives/vdevs, if you have more failures than the array can handle, you only lose the data on that set of drives/vdevs, not the entire array/pool.
Not quite. Yes the reading puts more strain on drives which could make them fail earlier, but I was specifically talking about UREs where a tiny section of the platter becomes unreadable and unreconstructable. Always happens eventually with HDDs but as long as the array is healthy it can be repaired. If it happens when the array is degraded though… yeah, everything's gone.
Again, not quite. Mirrored vdevs are equivalent to RAID 10. If a vdev fails, the pool it's a part of will fail too.
Now back to the "bad" writes - ZFS does negate this with snapshotting, but what if you've made other changes that you don't want to lose? Have you kept track of every file you've modified since the last snapshot? Are you willing to reload the entire vdev/pool (not sure if you can just load a vdev) onto another set of drives just to get that one or so file/s you need to recover? This is why RAID and ZFS are not backups. They definitely should be part of a backup system/solution, but they're not the entire answer.
Snapshots are viewable. I can go back in time to whatever date (within a month) and retrieve whatever file (with one hour on the hour granularity) I want, no need to clone and restore the snapshot to another set of drives.
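For anyone following along: this works because every ZFS dataset exposes a hidden, read-only `.zfs/snapshot` directory (the dataset, snapshot, and file names below are made up):

```shell
# Each snapshot appears as a browsable directory frozen at that point in time
ls /tank/documents/.zfs/snapshot/
# e.g. auto-20210830-1400/  auto-20210830-1500/  ...

# Restoring one file is just a copy - no rollback of the whole dataset needed
cp /tank/documents/.zfs/snapshot/auto-20210830-1400/report.odt ~/report.odt
```

The directory is hidden from normal `ls` of the dataset root, but can always be entered by path.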
It's been an experience, let me tell you. One positive to come out of it and my research since - I no longer trust drives: they will fail. It's not a matter of if, but when. Which is part of why I am so vehemently against those who say RAID is a backup. It should be part of a backup system/solution, but it is not the whole answer. If you choose to use RAID as a backup - including mirrored disks/vdevs - I just pray you don't have another failure during a rebuild.
Yes, I agree. RAID was never designed to back up data, just to keep it accessible through occasional drive failures (not catastrophes). I back up important information to the cloud and cold drives. My media collection, however, is not backed up. I don't care about it that much; it can be re-downloaded without too much trouble.
I am. And yes, I know. See also your comment on masochism.
That card will die. SD cards (usually) have f***ing terrible read/write endurance. I wouldn't expect it to last more than a year.
@Jai: I was bundling UREs in with all the other possible failure modes. Actually, that raises an interesting point - are reallocated sectors caused by UREs? (As in, the data held in the sector that gets a URE is moved: "reallocated".)
Mirrored vdevs are equivalent to RAID 10
So that would be why I've seen single vdev pools recommended - doesn't include the evil, evil striping.
Snapshots are viewable.
Shining! I did not know that about ZFS snapshots. Very handy indeed.
That card will die. SD cards (usually) have f***ing terrible read/write endurance. I wouldn't expect it to last more than a year.
I know. Not sure how long it's been going already, but it's done well so far :)
I was bundling UREs in with all the other possible failure modes. Actually, that raises an interesting point - are reallocated sectors caused by UREs? (As in, the data held in the sector that gets a URE is moved: "reallocated".)
Yes, in part. If a sector of the drive is deemed unreadable, unstable, or just untrusted in any way by the drive's firmware, it will reallocate that space to the reserve area of the drive, which is built for this situation. If the drive later discovers that the originally untrusted sector can be used again, the reserve sector used to replace it will be deallocated and the now-'good' sector will be used again.
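You can watch this happening on your own drives with smartmontools - SMART attribute 5 counts sectors already reallocated, and 197 counts sectors pending reallocation (the device name below is a placeholder):

```shell
# Requires the smartmontools package; replace /dev/sda with your drive
sudo smartctl -A /dev/sda | grep -E 'Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable'
# A non-zero (and especially a growing) raw value on attribute 5 or 197
# is an early warning that the drive is on its way out
```

Worth running periodically on any drive holding data you care about.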
So that would be why I've seen single vdev pools recommended - doesn't include the evil, evil striping.
Exactly.
I know. Not sure how long it's been going already, but it's done well so far :)
I wouldn't keep anything you're not ok losing on there, obviously.
A Synology NAS.