ddrescue? Nah.

Storage and data recovery: ddrescue vs. safecopy. It’s yet another of my useless ramblings, largely serving as my own personal notepad.

So if you haven’t been following the saga, I’ll catch you up. I had a 3TB drive fail in a NAS appliance that I had previously backed up to a 3TB USB hard drive… which also failed. In a rush, I bought a pair of 3TB drives and a cheap Synology.

I don’t really have much to say about the Synology. I don’t really need its silly desktop-based GUI-like configuration stuff. But for what I needed, a box with some embedded hardware to put storage on the network, it will suffice. I’ve already had to go in and “molest” the configuration files… manually mounting some NTFS drives, making custom Samba shares, and then mounting a drive image to a custom Samba share for some data recovery.
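For the curious, the manual tweaks looked roughly like this. The device names, mount points, and share name are placeholders for illustration, and be warned that DSM generates its own Samba config, so hand edits there may not survive a reboot or a settings change:

```
# Mount one of the NTFS drives by hand (device and mount point are
# examples; depending on the box you may need ntfs-3g instead)
mkdir -p /volume1/ntfs_disk
mount -t ntfs -o ro /dev/sdq1 /volume1/ntfs_disk

# A hand-rolled Samba share stanza; on a generic Linux box this goes
# in /etc/samba/smb.conf
#
# [recovery]
#     path = /volume1/recovery
#     browseable = yes
#     read only = no
```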

Yeah… next time I’m buying some cheap embedded x86-based thing with a lot of SATA capability. But now, what about the data on that drive? I had access to a lot of it; the bits and pieces I didn’t have access to, I didn’t need. I still needed to try to recover as much as possible. Most of the data recovery methods online talk about imaging the drive with (g)ddrescue, which I couldn’t even attempt until I had enough spare storage to hold the image.

So my first attempt involved plugging the USB drive into the server, attaching it to a VM, and letting ddrescue write to the Synology. This went from being a 2 or 3 day ordeal to a 60 day ordeal… according to ddrescue. As soon as it hit bad sectors, read speeds tanked. I decided to boot my one system into Linux and tried ddrescue over USB3; slightly better, but still stupid long times.
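For the record, the invocation in most of those guides looks something like the two-pass recipe below; device and file names here are placeholders:

```
# Pass 1: grab everything that reads cleanly, skipping the slow
# scraping of bad areas (-n), and keep a mapfile so the run can be
# interrupted and resumed
ddrescue -n /dev/sdX failed-drive.img failed-drive.map

# Pass 2: go back for the bad areas, retrying each up to 3 times,
# picking up where the mapfile left off
ddrescue -r3 /dev/sdX failed-drive.img failed-drive.map
```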

Now I had some idea of what critical data was good and what critical data was gone… and I assumed from the way the drive behaved (basically resetting itself) that no amount of careful data recovery would really help. So I decided to skip ddrescue and look into something else: safecopy.

The idea behind safecopy and ddrescue is largely the same, except safecopy does one thing differently: it reads the drive in a way that produces a quick initial image. When it hits a read error, it marks a whole chunk of blocks as bad and skips ahead. It doesn’t spend much effort determining how large the section of bad data actually is; you can do that on the second pass.
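safecopy bakes this workflow into presets. As I understand them, stage 1 records the skipped regions in a badblocks list (stage1.badblocks in the working directory) that stage 2 then uses as its worklist; device and file names below are placeholders:

```
# Stage 1: fast sweep; on a read error, mark a big chunk as bad,
# log it to stage1.badblocks, and skip well past it
safecopy --stage1 /dev/sdX failed-drive.img

# Stage 2: revisit only the regions stage 1 skipped and narrow down
# which blocks are genuinely unreadable (the slow part)
safecopy --stage2 /dev/sdX failed-drive.img
```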

So I did a stage 1 with safecopy, and after roughly 12 hours it produced an image of my disk. I started a stage 2, but that looked like it could take some serious time. I fired up all the usual things to figure out the partition offset and told the Synology to mount the image. It produced a warning about an unclean partition, did some things to it, and then I had my data. Well… most of it. A few folders had zero files in them, indicating they fell within a skipped block of data.
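The “usual things” amount to reading the partition table out of the image and mounting at the right byte offset; a sketch, with the start sector depending entirely on the disk in question:

```
# Find where the data partition starts inside the image
fdisk -l failed-drive.img
# ...suppose it starts at sector 2048; the byte offset is 2048 * 512

# Mount it through a loop device at that offset; read-only keeps the
# mount from scribbling on the image (ext filesystems may also want
# "noload" to skip journal replay)
mount -o ro,loop,offset=$((2048 * 512)) failed-drive.img /mnt/recovered
```

In hindsight, mounting it read-only like this would have kept the image pristine instead of letting the NAS “fix” the unclean filesystem in place.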

I spent the better part of my day copying off the data I absolutely wanted to check out and try to salvage… or see what was lost and needed replacing. I deleted the stage 1 image because I had modified it and really couldn’t afford the space. I still have a lot of data to shuffle around and need places to store it, at least temporarily. I do plan, at some point in the future, to connect that drive to a USB3-capable system, do another stage 1, and then let it spend however long it wants on stage 2. By then I’ll maybe have found all the corrupted stuff I need to replace.