Reliability is key

01/05/2012 | Channel: Technology, E-Business / IT, Business

Naveen Louis looks at SAN storage and RAID – maximising efficiency in a data centre

In running a data centre, is any single factor more important than system reliability? Probably not. Without it you do not have a data centre; you have an expensive failure. System reliability is so vital that entire industries are dedicated to ensuring it. RAID guards against disk failure: if one disk fails, the data still exists on the remaining disk(s). Mission-critical systems add fail-over: should a server or network component fail, another system takes over and the service stays online.

However, no amount of modern back-ups and fail-safes changes the very nature of Windows file systems: they fragment. Left unchecked, fragmentation causes problems for the user, the network and the company as a whole.

Fragmentation is a calculated liability built into every Windows operating system, deliberately designed to use disk space more efficiently. The downside is that the pieces of fragmented files end up scattered all over the disk, and any drive used anywhere, for any purpose, including RAID, mirrored systems and backups, is subject to the harmful effects.

In the IT world, the fact that fragmentation slows down performance is common knowledge. What might not be so well known is fragmentation’s impact on system reliability. From boot-up to shutdown, a fragmented drive can cause problems with almost any system-level action in Windows. A prime example is the disk-based page file, which the operating system uses constantly; reliable disk operation is therefore critical to reliable system operation. A fragmented page file can trigger ‘out of virtual memory’ errors and can even lead to data loss. Elsewhere, a heavily fragmented Master File Table (the index that NTFS, the Windows file system, uses to track every file on the volume) can slow the already lengthy boot process of a Windows computer.
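A quick way to gauge how far fragmentation has progressed on a particular volume is to ask Windows itself. The short sketch below is a minimal example that shells out to the built-in defrag.exe in analysis-only mode; it assumes a Windows host, an elevated prompt, and uses a placeholder drive letter.

```python
import subprocess

def analyse_volume(volume="C:"):
    # /A asks defrag.exe for an analysis-only report (no defragmentation
    # is performed); the command must run from an administrator prompt.
    result = subprocess.run(
        ["defrag", volume, "/A"],
        capture_output=True, text=True, check=False
    )
    # The report includes the amount of fragmented space and whether
    # Windows recommends defragmenting the volume.
    print(result.stdout or result.stderr)

if __name__ == "__main__":
    analyse_volume("C:")   # drive letter is a placeholder
```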

SAN Storage and NTFS
Today, using SANs to meet storage requirements has become the norm. SANs typically employ a clustered/SAN file system to pool disk arrays into a virtualised storage volume. This is not NTFS but proprietary software supplied by a SAN hardware or software vendor, and it essentially ‘runs on top of’ NTFS rather than replacing it. Bearing in mind that every file system is a ‘virtual’ disk, stacking one virtual component on another (i.e. one file system on top of another) is perfectly doable and increasingly common.

What the vendor of a SAN file system does within its own file system is irrelevant to NTFS. It may well be that the ‘SAN file system’ never needs defragmenting; the manufacturer is the expert on that layer and the source for setup tips, best practices and SAN I/O optimisation methodologies.

NTFS, however, still fragments, and that fragmentation causes the Windows OS to ‘split’ I/O requests for files sent into the SAN, creating a performance penalty. Because a SAN is only ever block-level storage, it does not know which I/Os belong to which files, so it cannot intelligently spread the fragments of a file across multiple disks. The mass of separate reads and writes generated for fragmented files (almost certainly interspersed with other simultaneous traffic) ends up spread non-optimally across the disks in the SAN storage pool: more fragments of a given file may land on one disk instead of the data being distributed evenly across all of them.
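To make the split-I/O point concrete, the toy model below treats the pool as a simple round-robin striped set and counts the requests a single file generates. It is a sketch only: the stripe size, disk count and fragment layout are assumed values, not any vendor’s actual placement algorithm.

```python
# Toy model: a block-level pool striped round-robin across DISKS disks in
# STRIPE_CLUSTERS-sized chunks. The array knows nothing about files, so each
# stripe unit touched by a file's extents becomes a separate I/O request.
# All figures are assumptions for illustration only.

STRIPE_CLUSTERS = 128
DISKS = 4

def requests_per_disk(extents):
    """extents: list of (start_cluster, length) runs that make up one file."""
    counts = [0] * DISKS
    for start, length in extents:
        cluster, remaining = start, length
        while remaining > 0:
            disk = (cluster // STRIPE_CLUSTERS) % DISKS
            span = min(remaining, STRIPE_CLUSTERS - cluster % STRIPE_CLUSTERS)
            counts[disk] += 1            # one request per stripe unit touched
            cluster += span
            remaining -= span
    return counts

# The same 1,000-cluster file: stored contiguously...
contiguous = [(10_000, 1_000)]
# ...versus split into 20 scattered fragments of 50 clusters each.
fragmented = [(10_000 + i * 7_919, 50) for i in range(20)]

for name, extents in (("contiguous", contiguous), ("fragmented", fragmented)):
    per_disk = requests_per_disk(extents)
    print(f"{name}: {sum(per_disk)} requests, per disk {per_disk}")
```

Even in this simplified model the fragmented copy of the file generates several times as many requests, and they fall unevenly across the disks.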

SAN file system vendors may offer optimisation strategies that relocate data across the disks as the system learns access patterns over time, precisely because typical data requests are not properly load-balanced across SAN spindles. Generally speaking, the same holds true for disk striping (RAID). SAN designers and developers agree that NTFS fragmentation is an issue and that advanced defragmentation is important; ‘basic’ defragmenters can actually cause worse problems.

File fragmentation also takes a serious physical toll on hard drives. Accessing the data in fragmented files increases disk head movement, and the more the heads move, the lower the mean time between failures (MTBF), shortening the life of the drive.
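As a rough illustration of the extra mechanical work involved, the back-of-the-envelope calculation below uses assumed figures for seek time, fragment count and read frequency rather than measured data.

```python
# Back-of-the-envelope arithmetic with assumed figures (not measurements):
# every extra fragment in a file costs roughly one additional head seek.
AVG_SEEK_MS = 9.0        # assumed average seek time for a 7,200 rpm drive
FRAGMENTS = 500          # fragments in one heavily fragmented file
READS_PER_DAY = 200      # assumed number of times the file is read per day

extra_seeks = (FRAGMENTS - 1) * READS_PER_DAY
extra_minutes = extra_seeks * AVG_SEEK_MS / 1000 / 60
print(f"{extra_seeks:,} extra seeks per day, "
      f"roughly {extra_minutes:.0f} minutes of additional head travel")
```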

The old days of scheduled defragmentation are over; such legacy procedures are no longer effective given the sheer size of today’s disks and storage. Running the built-in tool is simply not comprehensive enough to reap the necessary benefits or restore the performance your systems once boasted. Reliability is required 24/7, regardless of the backup or storage technology (RAID, SAN) in use. System up-time is imperative, and reliability is the key.

Naveen Louis
Naveen Louis has been in the IT industry for the last eight years and has extensive experience with hardware, system management and design, and network and application support. His main focus has been Windows environments, but he has also worked with Linux, as well as with virtualised environments such as VMware ESX/ESXi, vSphere and Microsoft’s Hyper-V. Today he specialises in storage performance and system management, and is the technical engineer at Diskeeper EMEA.

Diskeeper
Diskeeper data performance technology keeps systems free of fragmentation and delivers critical site-wide system efficiency automatically and cost-effectively. Diskeeper significantly increases the speed and reliability of even the busiest and most mission-critical laptops, workstations, servers and enterprise servers. For an all-in-one solution to performance and reliability issues, download free trialware of Diskeeper.

For further information visit www.diskeeper.com