Discussion in 'Technology' started by Innovativethinker, Aug 9, 2019.
If you have an SSD that is 4-5 years old, you may want to consider replacing it now.
I thought no moving parts meant bullet-proof? So old-school hard drives with disc and needle prevail? I have had old-school things last forever.
Care to elaborate?
We have some archival data on SSDs.
I've been buying SSDs since 2013, and they are now failing. I've had both Crucial and Intel drives fail.
Not a lot of options when they do fail. I've tried this to no avail: https://dfarq.homeip.net/fix-dead-ssd/
The reality is that SSDs have only so many writes. In 2013, a 250 GB drive was $450 and a 90 GB was $125, so we didn't buy big ones. This may have affected the number of writes we had available.
Here is a decent article on lifespan of an SSD: https://www.ontrack.com/blog/2018/02/07/how-long-do-ssds-really-last/
Perhaps "lifespan" is a poor choice; "usage limit" is more like it. When you hit that limit, the drive degrades or stops. Since we had smaller drives, every drive was used at least 40 hours a week, and just about every article I've read points to 5 years of use, I figure I have used up the effective life of my drives.
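That usage limit is usually quoted as a TBW (terabytes written) rating, and a rough lifespan estimate falls out of simple arithmetic. A minimal sketch, with assumed numbers that are illustrative rather than from any drive's datasheet:

```python
# Rough endurance estimate for a consumer SSD.
# The TBW rating and daily write volume below are illustrative
# assumptions, not figures from any specific drive.

def years_of_life(tbw_rating_tb, daily_writes_gb):
    """Years until the rated write endurance is consumed."""
    total_writes_gb = tbw_rating_tb * 1000
    return total_writes_gb / (daily_writes_gb * 365)

# e.g. a 250 GB drive rated for 72 TBW, written ~40 GB/day
print(round(years_of_life(72, 40), 1))  # ~4.9 years
```

Which lines up with the roughly 5-year figure the articles quote for heavily used small drives.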
We also never shut off our computers, so whatever stupid stuff Windows does while idle, including gigs and gigs of updates, may have contributed to the death of the drives.
Be aware that just about every SSD manufacturer has a utility to monitor the remaining life of your SSD. If you are using their drives, I strongly recommend you download and run it.
If you use Intel drives, you can download their SSD Toolbox. I ran this tool on my home system; the drives were installed, I think, in 2016, and they have only 25% of their life remaining. See screen shots:
[Screenshots: Intel SSD Toolbox showing estimated remaining drive life]
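On drives that expose wear through SMART, a similar remaining-life figure can be read without a vendor tool, e.g. via smartmontools. A minimal sketch that parses the NVMe "Percentage Used" field from `smartctl -A`-style output; the sample text here is illustrative, not captured from a real drive:

```python
# Sketch: extract the NVMe "Percentage Used" wear indicator from
# smartctl-style output. The sample text is illustrative; real output
# varies by drive and smartmontools version.
import re

sample_output = """\
SMART/Health Information (NVMe Log 0x02)
Critical Warning:                   0x00
Temperature:                        34 Celsius
Percentage Used:                    75%
Data Units Written:                 48,211,234 [24.6 TB]
"""

def percentage_used(smartctl_text):
    """Return the wear percentage (0-100+), or None if not found."""
    m = re.search(r"Percentage Used:\s+(\d+)%", smartctl_text)
    return int(m.group(1)) if m else None

print(percentage_used(sample_output))  # 75, i.e. 25% life remaining
```

75% used matches the "25% remaining" readout from the Intel tool above.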
If they are stuck in a data safe you probably don't have to worry, although we always create at *least* two copies and place them in separate physical locations.
If they are in a live system, I would create copies and stick those in a data safe.
OK, thanks. We have redundant backups of critical data: SSDs, DDN storage array (I think that's the jargon our system guys use), hard drives, etc.
Glad I'm not one of those who implemented a cloud/RAID solution relying exclusively on solid state drives.
Think I've had two old-school platter-type drives fail. One was probably 12 years old; the other was quite a bit older than that.
None of the HDs in any of my systems I'm currently running are LESS than 5 years old, but none of them are SSD, either.
A bit of history on this...
I was/am involved with extreme big-data analytics (zettabytes) and began using SSD storage around 2008 for special real-time analytic apps at some of the largest data-intensive companies in the world.
The storage vendors tried like mad to give enterprise-grade SSDs higher write counts, without much success. Side note: there is a reason you don't see commercial-grade SSDs very often, why you mostly see "inexpensive" consumer-grade SSDs, and why the mean time before failure on consumer drives is about 5 years.
When an SSD failure occurred, as pointed out by Mark, we called it a catastrophic "avalanche" failure. Unrecoverable.
During failure analysis, we observed data-write skewing, or biasing. Interestingly, some areas of silicon were more prone to "writing" than other areas that were supposed to have an equal chance to "write" data. Hence, we saw quite a few SSDs suffer pre-avalanche failures involving too many writes to a single location while many other locations had seen very few writes.
Several other industry people noticed this behavior. One Silicon Valley company, Violin Memory, around 2013, attempted to preempt excessive write skewing by building algorithmic firmware into the controller to programmatically distribute writes evenly across all addressable SSD write locations.
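The idea behind that firmware is wear leveling: steer each new write to the least-worn block so erase counts stay even instead of skewing toward a few hot locations. A toy sketch of the concept; real controllers are far more elaborate (dynamic vs. static leveling, logical-to-physical remapping, and so on):

```python
# Toy wear-leveling allocator: always write to the least-erased block,
# so wear stays evenly distributed across the flash instead of
# concentrating on a few hot locations.

class WearLeveler:
    def __init__(self, num_blocks):
        self.erase_counts = [0] * num_blocks

    def pick_block(self):
        # Choose the block with the fewest erases so far.
        block = min(range(len(self.erase_counts)),
                    key=self.erase_counts.__getitem__)
        self.erase_counts[block] += 1
        return block

wl = WearLeveler(4)
for _ in range(8):
    wl.pick_block()
print(wl.erase_counts)  # [2, 2, 2, 2] -- every block worn equally
```

Without a policy like this, repeated writes to the same logical address would hammer one physical location, which is exactly the skewing described above.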
They got acquired by EMC. After Dell acquired EMC, most of this research, to my limited knowledge, became shelf-ware.
At the large, analytics-heavy companies I have worked for, we historically architect storage into hot, warm, and cold tiers. Hot (SSD, redundant, expensive) is data needed in the moment or near term for high I/O and calcs. If data/results go unused after N periods of time, the data is auto-groomed, i.e. moved, to high-speed rotating disk (expensive/redundant/safe). After another N periods without access on high-speed disk, data is auto-groomed down to slow-speed (inexpensive) redundant disk.
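That grooming policy boils down to a simple idle-time threshold rule. A minimal sketch; the tier names and thresholds are illustrative assumptions, not anyone's production values:

```python
# Sketch of hot/warm/cold auto-grooming: data idle longer than each
# threshold migrates to the next, cheaper tier. Thresholds and tier
# names are illustrative assumptions.
HOT_IDLE_DAYS = 7     # SSD -> fast disk after a week unused
WARM_IDLE_DAYS = 90   # fast disk -> slow disk after ~3 months

def tier_for(days_since_last_access):
    """Return the storage tier a data set belongs on."""
    if days_since_last_access <= HOT_IDLE_DAYS:
        return "hot-ssd"
    if days_since_last_access <= WARM_IDLE_DAYS:
        return "warm-fast-disk"
    return "cold-slow-disk"

print(tier_for(2))    # hot-ssd
print(tier_for(30))   # warm-fast-disk
print(tier_for(400))  # cold-slow-disk
```

A real groomer would run this check on a schedule and move the files, but the tiering decision itself is this simple.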
For commercial heavy-I/O writes, the hot SSD tier began to give way to another approach. UC Berkeley's Amp Lab open-sourced the original Ignite, an Apache project, around 2014: move large, storage- and compute-intensive data across distributed caches on distributed servers. Cheap and fast for certain requirements.
For home, I use a 1 TB SSD on my workstation and have Apple's Time Machine back up all SSD data to a 10 TB external drive.
Best of FAST and "inexpensive/redundant/recoverable data".
SSDS SHOULD NOT BE USED FOR ARCHIVAL!!!!!!
OPTANE OR BUST
SPINNING RUST IS FOR THE BOOMER GENERATION
I have a 9-year-old one that is running fine, but it is in a computer I stopped working on daily about 3 years ago.
As already alluded to, one should always employ a variety of storage media. In the past, when media was more expensive per GB, we used to select media type depending on the write and read speeds required, as well as how often the written data would change.
My comments on this topic only apply to personal use rather than a commercial, multiuser environment.
IMO, the operating system and applications should go on the fastest SSD you can afford. Keep that volume sized so you maintain enough free space to avoid fragmented files (e.g. page files and updates), and keep the physical drive sized to just those types of files. If you cannot afford downtime, back up an image of the SSD to a cheap HD. That way the SSD is completely expendable if it craps out: you just replace the SSD and recover from the image backup.
I store all data on multiple fast removable hard drives. Any data left on my SSD is considered short-term, temporary, or expendable. This allows me to protect important long-term data simply by taking the drives with me, or, if you want, you can also mirror them. It also lets me move and remount the data HDs on different computers with the apps or hardware for particular tasks. HDs are extremely cheap and constantly improving in performance. Most data files are relatively small unless you deal a lot with video, very large images, or audio files, so you really don't need or benefit from SSD-level performance for most mundane data. It's really when you start up a new operating system process or fire up a large app that you get huge movements of data on storage media. If performance and recoverability of data are really important, I would look at RAID solutions.