The Biggest Disk Defragmentation Myths

Founded in 1981, Diskeeper Corporation is an innovator in performance and reliability technologies. The company’s products make computer systems faster, more reliable, longer-lived and more energy-efficient, all with zero overhead.

Inventor of the first automatic defragmentation software in 1986, Diskeeper pioneered a breakthrough technology in 2009 that actually prevents fragmentation.

Diskeeper’s family of products is relied upon by more than 90% of Fortune 500 companies and more than 67% of The Forbes Global 100, as well as thousands of enterprises, government agencies, independent software vendors (ISVs), original equipment manufacturers (OEMs) and home offices worldwide.

Today, I’ll be interviewing Colleen Toumayan from Diskeeper.

What is disk defragmentation, and why is it important?

The weakest link in computer performance is the hard disk. It is at least 100,000 times slower than RAM and over 2 million times slower than the CPU. In terms of computer performance, the hard disk is the primary bottleneck. File fragmentation directly affects the access and write speed of that hard disk, steadily degrading computer performance to unviable levels. Because all computers suffer from fragmentation, this is a critical issue to resolve.
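
Those ratios are easy to sanity-check with round numbers. Here is a minimal Python sketch using ballpark figures of roughly 10 ms per disk seek, 100 ns per RAM access and 0.5 ns per CPU cycle (illustrative assumptions, not measurements from the interview):

```python
DISK_SEEK_S = 10e-3    # ~10 ms per mechanical disk seek (ballpark assumption)
RAM_ACCESS_S = 100e-9  # ~100 ns per RAM access (ballpark assumption)
CPU_CYCLE_S = 0.5e-9   # ~0.5 ns per cycle on a ~2 GHz CPU (ballpark assumption)

print(f"disk vs RAM: {DISK_SEEK_S / RAM_ACCESS_S:,.0f}x slower")  # 100,000x
print(f"disk vs CPU: {DISK_SEEK_S / CPU_CYCLE_S:,.0f}x slower")   # 20,000,000x
```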

What is fragmentation?

Fragmentation, by definition, means “the state of being fragmented,” or “something is broken into parts that are detached, isolated or incomplete.” Fragmentation is essentially little bits of data or information spread over a large area of the disk, forcing your hard drive to work harder, and more slowly, than it should just to read a single file, which hurts overall computer performance.

Imagine if you had a piece of paper and you tore it up into 100 small pieces and threw them in the air like confetti. Now imagine having to collect each of those pieces and put them back together again just to read the document. That is fragmentation.

Disk fragmentation is a natural occurrence that accumulates each and every time you use your computer. In fact, the more you use your PC, the more fragmentation builds up, and over time your PC is liable to experience random crashes, freeze-ups and eventually the inability to boot up at all. Sound familiar? And you thought you needed a new PC.

This is what happens to the data on your hard drive every time you save a file. The question is simple: why defrag your hard drive after the fact, when you can prevent the majority of fragmentation in the first place?

When files are written to the disk intelligently, without fragmentation, the hard drive’s read/write heads can read a file that is lined up contiguously in one location, rather than jumping to multiple spots just to access a single file.

Just like shopping, if you have to go to multiple stores to get what you want, it simply takes longer. By curing and preventing the fragmentation up front, and then instantly defragging the rest, you experience a whole new level of computer performance, speed and efficiency.

Fragmentation can take two forms: file fragmentation and free space fragmentation.

“File fragmentation causes performance problems when reading files, while free space fragmentation causes performance problems when creating and extending files.” In addition, fragmentation also opens the door to a host of reliability issues. Having just a few key files fragmented can lead to an unstable system and errors.

Problems caused by fragmentation include:

System Reliability:

  • Crashes and system hangs
  • File corruption and data loss
  • Boot-up failures
  • Aborted backups due to lengthy backup times
  • Errors in and conflict between applications
  • Hard drive failures
  • Compromised data security

Performance:

  • System slowdowns and performance degradation
  • Slow boot-up times
  • Increased time for each I/O operation and unnecessary I/O activity
  • Inefficient disk caching
  • Slower file reads and writes
  • High levels of disk thrashing (the constant writing and rewriting of small amounts of data)
  • Slow backup times
  • Long virus scan times
  • Unnecessary I/O activity on SQL servers and slow SQL queries

Longevity, Power Usage, Virtualization and SSD:

  • Accelerated wear of hard drive components
  • Wasted energy costs
  • Slower system performance and increased I/O overhead due to disk fragmentation compounded by server virtualization
  • Write performance degradations on SSDs due to free space fragmentation

Why do operating systems need to break files up and spread their contents across so many places? Why doesn’t it just optimize when writing the file?

Files are stored on a disk in smaller logical containers, called clusters. Because files can vary radically in size, a great deal of space on a disk would be wasted with larger clusters (a small file stored in a large cluster would consume only a fraction of the available cluster space). Thus clusters are generally (and by default) fairly small. As a result, a single medium-sized or large file can be stored in hundreds (or even tens of thousands) of clusters.
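
As a rough illustration of that cluster math, here is a minimal Python sketch assuming a 4 KB cluster, the common NTFS default (the sample file sizes are hypothetical):

```python
import math

CLUSTER_BYTES = 4 * 1024  # 4 KB: the common NTFS default cluster size

def clusters_needed(file_bytes: int) -> int:
    # A file always occupies whole clusters, so round up.
    return math.ceil(file_bytes / CLUSTER_BYTES)

for size in (700, 50 * 1024, 5 * 1024 * 1024, 300 * 1024 * 1024):
    n = clusters_needed(size)
    slack = n * CLUSTER_BYTES - size  # wasted space inside the last cluster
    print(f"{size:>12,} bytes -> {n:>7,} clusters ({slack:,} bytes of slack)")

# A 300MB file lands in 76,800 clusters at this cluster size: the
# "tens of thousands of clusters" described above.
```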

The logic that drives the native Windows operating system’s file placement is hardly ideal. While it has made some advances in recent years, Windows users at every level of engagement (from home notebooks all the way up to enterprise workstations) face an ever-increasing level of file fragmentation.

This is largely attributable to a lack of consolidated free space as well as pre-existing fragmentation (essentially: fragmentation begets fragmentation). The OS will try to place a file as conveniently as possible.

If a 300MB file is being written, and the largest available contiguous free space is 150MB, 150MB of that file (or close to it) will be written there. This process is then repeated with the remainder… 150MB of file left to write, 75MB free space extent, write… 75MB of file left to write, 10MB free space extent, write… and so on, until the file is fully written to the disk. The OS is fine with this arrangement, because it has an index which maps out where every portion of the file is located… but the speed at which the file could optimally be read is now vastly degraded.

As an extra consideration, that write process takes far longer than if there had simply been 300MB of contiguous free space available to drop the file into.
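
The placement process described above is easy to model. Here is a toy Python sketch of that largest-extent-first behavior, using the 150MB extent from the example plus hypothetical smaller ones; it illustrates the description, not the actual Windows allocator:

```python
def place_file(file_mb: int, free_extents_mb: list[int]) -> list[int]:
    """Write a file into the largest free-space extents first,
    returning the sizes of the fragments produced."""
    fragments, remaining = [], file_mb
    for extent in sorted(free_extents_mb, reverse=True):
        if remaining == 0:
            break
        piece = min(extent, remaining)  # fill this extent, or finish the file
        fragments.append(piece)
        remaining -= piece
    if remaining:
        raise RuntimeError("not enough free space on disk")
    return fragments

# Hypothetical free-space extents (in MB) on a fragmented volume:
print(place_file(300, [150, 75, 40, 20, 10, 5]))
# -> [150, 75, 40, 20, 10, 5]: one 300MB file written as six fragments
```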

If Windows already has a free disk defragmentation utility, why should I pay money for another one?

  1. Incomplete. Can’t get the job done.
  2. Won’t defrag free space.
  3. Resource intensive. Can’t be run when the system is active.
  4. Servers? It can’t keep up with them.
  5. Hangs up on big disks. You never know what the progress is.
  6. Eats up IT admin time administering.
  7. Takes forever. May never finish.
  8. Effects of fragmentation still not eliminated!

Inefficient defragmentation means higher help desk traffic, more energy consumption, shorter hardware life, less time to achieve proactive IT goals, and throughput bottlenecks on key consolidation economies such as virtualization.

What are some of the biggest misconceptions that people have when it comes to disk defragmentation?

Myth #1: The built-in defragmenter that comes with Windows® is good enough for most situations because it can be scheduled.

After-the-fact defrag solutions, including the built-in one, waste precious resources by writing fragmented files to the disk. Once a file is written, the defrag engine has to go to work to locate and reposition that file. When that file later expands, this double effort has to be repeated all over again; the approach remains a reactive band-aid for a never-ending problem. Not even the built-in defrag can keep pace with the constant growth of fragmentation between scheduled defrags. Manual defrags tie up system resources, so users just have to wait … and wait … and wait.

When you are only doing scheduled defrags (or nothing at all), your PC accumulates more and more fragmentation, which leads to slowdowns, lags and crashes. This problem cannot be handled with a freebie utility, even if it can be “scheduled”. Here’s why:

Systems accumulate fragmentation continually. When the computer is busiest (relied upon the most), the rate of fragmentation is highest. Most people don’t realize how much performance is lost to fragmentation and how fast it can occur. To maintain efficiency at all times, fragmentation must be eliminated instantly as it happens or proactively prevented before it is even able to happen. Only through fragmentation prevention or instant defrag can this be accomplished.

Scheduling requires planning. It’s a nuisance to schedule defrag on one computer, but on multiple PCs it can be a real drain on time and resources. Plus, if your PC isn’t on when the defrag process is scheduled, it will not run until you turn your PC on again. By then, you will need to use your computer, and you will experience performance slowdowns while you work, if you are able to work at all.

Scheduled defrag times are often not long enough to get the job done.

Myth #2: Active defragmentation is a resource hog and must be scheduled off production times.

This was very true with regard to manual defragmenters. They had to run at high priority or risk getting continually bounced off the job. In fact, these defragmenters often got very little done unless allowed to take over the computer. When the built-in defragmenter became schedulable, not much changed. The defrag algorithm was slow and resource heavy. Built-in defragmenters were really designed for emergency defragmentation, not as a standard performance tool.

Ever since it was first released in 1994, Diskeeper® performance software has been a “Set It and Forget It”®, schedulable defragmenter that backs off from system resources needed by computer operations. Times have changed, and a typical computer’s I/O rate (I/Os per second, or IOPS) has increased a hundredfold.

Because this drove the rate of fragmentation accumulation way up, Diskeeper Corporation saw the need for a true real-time defragmenter and developed a new technology, InvisiTasking® technology. This innovative breakthrough separates usable system resources into five areas capable of being accessed separately.

As a result, robust, fast defrag can occur even during peak workload times – and even on the busiest and largest mission-critical servers. In the latest version, Diskeeper incorporated a new feature called IntelliWrite® fragmentation prevention technology. This new feature prevents file system fragmentation from ever occurring in the first place.

By preventing up to 85% of fragmentation, rather than eliminating it after the fact, Diskeeper is able to improve system performance well beyond what the automatic defragmentation approach alone can achieve.

Myth #3: Fragmentation is not a problem unless more than 20% of the files on the disk are fragmented.

The files most likely to be fragmented are precisely the ones relied upon the most. In reality, these frequently accessed files are likely fragmented into hundreds or even thousands of pieces, and they got that way very quickly. This degree of fragmentation can cost you 90% or more of your computer’s performance when accessing the files you use most. Ever wonder why some Word docs take forever to load? Without fragmentation, they load in a flash. File load times are quicker, and backups, boot-ups and anti-virus scans are significantly faster.

Myth #4: You can wear out your hard drive if you defragment too often.

Exactly the opposite is true. When you eliminate fragmentation you greatly reduce the number of disk accesses needed to bring up a file or write to it. Even with the I/O required to defragment a file, the total I/O is much less than working with a fragmented file.

For example, if you have a file that is fragmented into 50 pieces and you access it twice a day for a week, that’s a total of 700 disk accesses (50 X 2 X 7). Defragmenting the file may cost 100 disk accesses (50 reads + 50 writes), but thereafter only one disk access will be required to use the file. That’s 14 disk accesses over the course of a week (2 X 7), plus 100 for the defrag process = 114 total. 700 accesses for the fragmented file versus 114 for the defragged file is quite a difference. In a real-world scenario, this difference would be multiplied hundreds of times for a true picture of the performance gain.
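
That arithmetic is simple enough to write out directly; a small Python sketch using the interview’s own numbers:

```python
FRAGMENTS = 50       # file broken into 50 pieces
USES_PER_DAY = 2     # file accessed twice a day
DAYS = 7             # over the course of a week

fragmented_io = FRAGMENTS * USES_PER_DAY * DAYS        # 50 x 2 x 7 = 700
defrag_cost = FRAGMENTS * 2                            # 50 reads + 50 writes = 100
defragged_io = defrag_cost + 1 * USES_PER_DAY * DAYS   # 100 + 14 = 114

print(f"fragmented: {fragmented_io} accesses; defragmented: {defragged_io}")
```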

With the release of Diskeeper 2011, we are tracking how many I/Os Diskeeper saves your system, making it even easier to gauge the benefits of running Diskeeper on your computer.

In addition to proactively curing and preventing fragmentation, Diskeeper 2011 now defaults to an optimized defragmentation method (for the fragments that were not prevented) that maximizes efficiency and performance while minimizing disk I/O by prioritizing which files, folders and free space should be defragmented. Those who wish to do so can switch to a more thorough defragmentation option from the Defragmentation Options tab in the Configuration Properties.

How has disk defragmentation changed over recent years? What trends are going to change it in the future?

With new storage technologies such as SAN and virtualization, defragmentation has changed to address their specific peculiarities. We have a specific product for virtualization, V-locity, that goes well beyond defrag, and we also have a SAN edition of Diskeeper 2011. I am sure the cloud will bring new technologies as well.

What should people look for when shopping around for disk defragmentation programs?

They will want a defragmenter that:

  • Prevents fragmentation before it even happens
  • Instantly handles any fragmentation that cannot be prevented
  • Efficiently targets the fragmentation that is impeding system performance
  • Has zero system resource conflict
  • Is specially designed with new storage technologies in mind

