Do HDDs or SSDs Need ‘Exercise’? The Rocket Yard Investigates

I’ve heard it said that an SSD or hard drive that isn’t used for extended periods of time will likely have performance issues, or worse, actually lose data in the span of a few years. I’ve even heard it said that SSDs could lose their information in less than a year, and in the worst case, within a few days.

Of course, I’ve heard a lot of things, and not all of them bear up well when looked at closely. So, let’s find out if we need to keep exercising our storage devices to maintain information and performance.

Data Retention
The ability of a storage device to keep the data it contains intact is known as the data retention rate. The actual rate cited for various devices is predicated on the storage device being non-powered, undergoing no refresh of the data it contains, and being kept in an ideal storage environment, usually mentioned as around 25 C / 77 F.

Under those ideal conditions, hard drives are predicted to retain their data for 9 to 20 years. The wide range is due to the different architectures used in the manufacturing of modern hard drives.

SSDs (Solid State Drives) have a reputation for having a very low data retention rate. Numbers commonly cited suggest one year for consumer grade SSDs, and as low as one week for enterprise class SSDs.

If you believe the reputation is true, then SSDs would need to be exercised at defined intervals to ensure they keep the data stored intact. However, is that reputation valid? We’ll find out in a bit, but first, let’s look at hard drives.

Hard Drive Failure Mechanisms
The length of time your data will be retained on a hard drive in storage, one that isn’t powered and kept in a controlled environment, is based on four primary factors:

Magnetic Field Deterioration: Permanent magnets generally lose their field strength at the rate of 1% per year. After 69 years, the field strength would have dropped by 50 percent. That much field strength loss will likely lead not only to general data corruption of the stored data, but also to the loss of the index tracking marks which tell a drive where a sector starts and stops. So, not only is the stored data lost, but the ability to read the drive may be gone as well.

Magnetic Field Corruption: Magnetic fields external to a stored hard drive can adversely affect the stored data by altering the magnetic state at one or more locations on the drive’s platters. Magnetic disruption can be caused by nearby high-power magnets, motors, or even by unusually strong geomagnetic storms triggered by coronal mass ejections from the sun.

Environmental Conditions: Humidity and temperature ranges for stored hard drives differ by drive manufacturer. Western Digital, for example, recommends storing its hard drives between 55 F and 90 F. Extremely high temperatures increase the risk of damage to mechanical components, such as warped heads or platters, while extreme cold can cause bearing failure or allow the spindle and motor to become misaligned. (Related: Keep Your Electronics Warm and Safe This Winter)

Mechanical Failure: Even with the proper storage conditions, mechanical failure, such as the platters failing to spin up due to motor failure, or spindle bearing failure, can happen. These types of failures tend to occur when drives are stored for exceptionally long periods of time without ever being powered on.
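The 69-year half-strength figure above follows from simple compounding. A quick sketch of the arithmetic, assuming a steady 1% loss per year:

```python
import math

# A 1% annual loss compounds: after n years the remaining field
# strength is 0.99**n. Solving 0.99**n = 0.5 gives the number of
# years until half the field strength is gone.
years_to_half = math.log(0.5) / math.log(0.99)
print(round(years_to_half, 1))  # roughly 69 years
```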

Mitigating Hard Drive Storage Failures
Of all the possible issues with hard drive storage, two of the most common ones can have their effects mitigated by exercising the drive. In the case of mechanical failure over long time frames, the simple approach is to power on the drive occasionally, ensuring the bearings, motor, and grease are all warmed up, and preventing them from becoming stuck in one location.

Refreshing the stored data can reduce magnetic field deterioration. This would require the drive to be powered on and connected to a computer system. Reading the stored data isn’t enough; to refresh the magnetic charge the data must be read and then rewritten to the drive. An easy way to accomplish this, assuming there’s enough room on the drive, would be to copy the content to a new location on the drive, or create a disk image and copy that to a new location on the drive. Another option would be to clone the drive to another storage device, and then clone the drive back again.
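As a rough illustration, the copy-and-replace approach above can be scripted. This is a minimal sketch, not a supported tool: `refresh_file` and `refresh_tree` are hypothetical helper names, and it assumes the drive has enough free space to hold a temporary copy of each file while it is rewritten.

```python
import os
import shutil

def refresh_file(path):
    """Rewrite a file so every block is freshly written.

    Copies the file to a temporary sibling, then atomically replaces
    the original, so the drive writes a fresh copy of every sector
    the file occupies rather than merely reading the old one.
    """
    tmp = path + ".refresh"
    shutil.copy2(path, tmp)   # read every byte, write it anew
    os.replace(tmp, path)     # atomic swap keeps the original name

def refresh_tree(root):
    """Walk a directory tree and refresh every regular file under it."""
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            refresh_file(os.path.join(dirpath, name))
```

Cloning the whole drive to another device and back achieves the same end without needing free space on the drive itself.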

How often you should perform this exercising of a hard drive is difficult to say, but once a year or once every two years would be a good starting point. While a longer time frame is actually possible between exercising a hard drive, the task tends to get overlooked when the time frame becomes longer. It’s much easier to remember a yearly exercise routine than to try to remember to perform this task once every x number of years.

(One option to rewrite disk data is to create a disk image.)

SSD Failure Mechanisms
A few years back, a presentation to the JEDEC Standards Committee on solid state drive requirements included a slide showing expected data retention rates for SSDs stored in a powered-off state. The slide suggested that a powered-off SSD retains data very poorly, citing the following retention rates:

Consumer grade SSD: 1 year at a 30 C storage temperature.

Enterprise grade SSD: 3 months at a 40 C storage temperature.

In both cases, as the powered-off storage temperature increases, the data retention period falls. Consumer grade models can see retention fall to one month at 50 C, while enterprise class SSDs can see less than one week at 50 C.
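Taken at face value, the retention figures cited above imply a steep temperature sensitivity. A quick sketch of the implied relationship, using only the consumer-grade numbers (this is an illustration of the slide’s arithmetic, not a JEDEC formula):

```python
import math

# Retention cited for consumer SSDs: 12 months at 30 C, 1 month at 50 C.
# Retention shrinks by a factor of 12 over a 20 C rise, which means it
# halves roughly every 20 / log2(12) degrees of extra storage heat.
months_at_30c = 12.0
months_at_50c = 1.0
shrink_factor = months_at_30c / months_at_50c          # 12x
degrees_per_halving = 20.0 / math.log2(shrink_factor)
print(round(degrees_per_halving, 1))  # about 5.6 C per halving
```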

Pundits quickly picked up this information and it spread around the Internet, leading to the poor reputation SSDs can have for data retention when powered off. The problem is that it’s simply not true. The information conveyed in the original presentation pertained to a worst-case scenario, one where the SSD in question has nearly reached its end-of-life, and has had its P/E count (Program/Erase cycle count) reach the point where data cells would start showing write failures. But when the background information was removed and only the information on the slide was presented, a legend, or at least a reputation, was born.

Mitigating Solid State Drive Storage Failures
In addition to backing up your data, the simplest way to avoid data loss is to make sure any SSD that is placed in long-term storage is powered on and used at least twice a year. There’s no reason to rewrite the data; simply powering the drive on and using it as you normally would for a few minutes should be sufficient to maintain data integrity.

Exercising Your Storage Device to Maintain Performance
So far, we’ve looked at the need to exercise an SSD or hard drive when they’re being used for long-term storage in a powered-off condition. But what about the storage devices we have that aren’t in storage, but may not be used every day; do they need a bit of exercise now and then to remain in tip-top shape?

The answer is mostly no, but there are exceptions. Hard drives that have been spun down and left in a sleep state for an extended time frame could exhibit issues similar to those of a powered-off hard drive whose motor, spindle, or bearings fail when powered back on. Performing a task that wakes a sleeping drive from time to time may be beneficial.

Drives of any type that are in a sleep state will momentarily degrade overall computer performance because of the time it takes to wake them up. But once awake, there should be no effect on performance.

Aside from exercising drives you have in long-term storage, the normal usage your drives see in everyday use is more than sufficient to keep data intact and performance on par.

Tom Nelson
Writer
Tom has been an enthusiastic Mac user since the Mac Plus. He’s also been known to dabble in the dark side, otherwise known as Windows, and has a well-deserved reputation for being able to explain almost anything to anybody. Tom’s background includes more than 30 years as an engineer, programmer, network manager, software tester, software reviewer, database designer, and computer network and systems designer. His online experience includes working as a sysop, forum leader, writer, and software library manager.

19 Comments

  • I have a WD Blue 1TB drive that I used for disk imaging. It spent most of its life in a box. I filled it up with disk images and put it back in its box. It was there for maybe 8 months. When I came to use it, instead of the read speed being several hundred MB/s, it had dropped catastrophically to 20 MB/s. The only way to fix it was to copy everything off (copying 1TB at 20 MB/s takes days), reformat, and copy it all back on. I don’t believe this is a fault, but simply the way SSD flash memory works.

  • To refresh your HDD: badblocks -nsv (a read-write, non-destructive test that detects bad blocks by writing a random pattern, re-reading it, and restoring the previous data on every sector).

    To detect bitrot: use a modern file system (BTRFS, ZFS, …), with integrated data integrity check.

  • I want to confirm that data does experience bit rot (flux), but on average, it happens when a mechanical hard drive sits unplugged for over 5 years, in my experience. The chance that you would lose anything within a year is possible but highly unlikely. I suggest NOT using Seagate drives for archiving and instead consider Western Digital. Never use SSDs for archiving.

    Reading over data is NOT enough. You have to actually do a complete WRITE of the data. Let this statement be set in stone on this random website and you should live by this for the rest of your life.

    Done.

  • Thanks for clearing up some misleading information, which I have found suspect for quite some time.

  • I’m in the process of testing some old HDDs and floppies (!) for potential re-use. Of 3 IDE 8.4GB drives that are about 20 years old and were last tested about 6 years ago, only 1 is readable. In a 22-yo laptop, the HDD that was OK 7 years ago is also unreadable.
    I’m wondering whether atmospheric moisture slowly causes corrosion of the heads and platter surfaces in unused drives, and when they are powered on again the corroded material gets between the heads and platters and causes mayhem.
    New drives are often shipped in sealed bags, often with a sachet of silica-gel desiccant. Maybe it would be a good idea to do the same when storing our drives offline. It would be best to seal them when they are still warm after testing, as the warmth would drive out most of any moisture from inside the drive.

  • I would use SpinRite on a hard drive at level 4 to refresh the drive, and use it on level 2 for an SSD. The author, Steve Gibson, is working on a new version that will work natively on a Mac. Go to http://www.grc.com for more details about SpinRite. It has performed miracles when fixing drive errors in order to recover data.

  • I have asked a lot of experts about this, as well as read a lot about it, and all I have read and heard says that merely connecting a magnetic drive and leaving it powered on for a day is all you need to do. According to them, internal diagnostics on the drive will find and rewrite any sectors that have weakened charges.

    • I can pretty much guarantee you this is not true. There’s no way a drive can automatically check the magnetic state of every sector on a hard drive in a day. That assumes hard drive diagnostics scrub an entire drive constantly when it’s powered on.

  • I, too, have used multiple hard drives for longer than a 5-year period without issues. But the suggestion to exercise a hard drive was dealing with extreme endurance, attempting to lengthen the possible lifetime of a drive in storage before failure occurred. The once a year suggestion was to make the process a routine one that is easy to schedule and perform. It wasn’t meant to suggest that if you don’t exercise a drive in storage every year that you’re likely to see a catastrophic failure.

    Data rewrite is part of a somewhat standard enterprise practice to optimize the reliability of data stored long term on hard drives.

    If you really need long-term archival storage, I suggest a better media than hard drives or SSDs, such as write-once BD-R HTL (High-To-Low) that, with proper storage, should be able to last for well over 100 years. (Theoretical limits on certain BD-R HTL with stabilized data layers put the breakdown time at 10,000 years, but the polycarbonate exterior layer at 1,000 years.)

    Tom N.

    • Disappointing about the SSDs and data retention. We use a number of them at my archive, although they are not being used for long-term archiving. The SSDs are almost all in use daily, with the longest rest being 3-4 weeks of no power.

      I’ve tested SanDisk SD camera cards and found the data fine after 5 years of no power. The 10 year test is still underway.

      Hi-tech companies are working on engraving data with a laser onto a small piece of thin quartz glass, as small as a Post-it note, to preserve digital data. They may hold 100 GB+ of data. But until that is on the market, the M-disc is as archival as it gets for digital. And in the big picture, if you drop the quartz glass on something hard it may shatter; the M-disc won’t. But that is just spec.

      If you are serious about your digital archive you would have the data backed up on both quartz and M-disc, as well as other HDDs, LTO, cloud, and any other forms of digital storage available to you.

      As curator and archivist for a photo, ephemera, ciné, VHS, and audio archive, I deal with a large amount of digital data that needs to be archived, backed up, and backed up some more. I am also a photographer and have a huge body of work of my own to preserve. For the last few years I’ve used all sizes of M-discs and found them to be an outstanding media option for storing digital data.

      The organic dye based DVDs are OK short term, as long as they are not exposed to strong light and heat. I just finished transferring a 53-DVD archive to M-disc that was originally burnt in the early-to-mid 2000s. Only one disc had issues, but luckily it had a backup that was OK. And 93% of the defective disc could still be salvaged with special software.

      If organic dye based DVDs are kept in dark storage and not exposed to heat they hold up OK. They can last 20 years and maybe a lot longer. Only time will tell. Gold MAM-A DVDs don’t discolor like silver DVDs and are marginally better than silver at resisting degradation from sunlight. But when gold DVDs are exposed to sun, they fail only a few days later than a silver organic dye based DVD.

      The old Kodak gold ‘100 year’ DVDs were better than the current crop of MAM-A gold DVDs, but again, not by much, only adding a few days more life in the sun than MAM-A gold DVDs. But none of these DVDs can vaguely compare to the M-disc when it comes to resistance to sunlight and heat.

      Testing of the Blu-ray M-discs is still underway. They look to be a different composition than the 4.7 GB M-disc. But tests completed so far show the Blu-ray M-discs far outlast organic dye based optical media as well. And standard BD-Rs hold up fairly well in the sun, lasting a lot longer than standard organic dye based DVDs.

      My only complaints are that they don’t make dual-layer M-discs or CD M-discs. The M-disc media is all slow to burn, but you learn to live with slow burning as a trade-off for archival preservation.

      With photography I go so far as to say the M-disc is more archival than film. Don’t believe me? Put your Ektachrome, Kodachrome, Fujichrome, or dye transfer prints in the sun for a few months and see what happens. Put an M-disc in the sun for a year and the data is still perfect.

      Daniel D.Teoli Jr. Archival Collection

  • If you need to refresh/rewrite the data on HDDs every year, then why have I (and pretty much everyone else) been able to reliably use HDDs for >5 years and still retrieve all the data when much of that data is static … and the timing/sectoring info (AFAIK) is _only_ written when the drive is low-level formatted, which is an operation that has not even been allowed (by the user) for over a decade. There’s something that doesn’t seem to add up (or I have some serious misunderstandings).

  • You never described the SSD failure mechanism, which presumably is loss of charge at a storage cell, much like a magnetic domain decaying. I question your assertion that just powering up an SSD and “using” it will refresh all cells, not just the ones that get re-written by a short use. (If I’m trying to save backup data, I don’t want to write to the drive!) Do modern SSDs have some sort of self-refresh mechanism?

    For crucial data up to 25G that you want to keep a REALLY long time, it would seem the best storage is M-DISCs. Have you done a writeup on them?

      • You did not answer the question about SSD’s merely needing to be used vs fully rewriting the data.

        I do appreciate this post. It reminded me that I have some old backup drives sitting around that need to be spun up.
        Q: Is there a way to mount a drive truly read-only so that nothing like Spotlight goes out and tries to modify the drive?

  • Interesting finding on SSDs. Do Compact Flash/Secure Digital cards (‘digital film’) have similar properties?

  • I imagine the military uses SSDs. Do you know if the (presumably higher-quality, hardened) drives they use have better cell data retention than typical consumer grade SSDs? Or does the military just use the enterprise-grade drives you mention above? I see OWC sells only one enterprise SSD, and it is (gulp!) $730 for a mere 200GB.

    • About 40 years ago the AWACS aircraft booted from magnetic tape and stored data on a magnetic drum, because hard drives of the time were not reliable if the plane hit turbulence. RAM was the old-fashioned iron cores that you see in history books. There was a switch under one operator’s console that initiated an erase-everything sequence. The crews didn’t bother with parachutes because none of the test dummies passed the bail-out test.