Archives for: May 2010

Consolidate Your Storage to Improve Overall Resource Usage and Efficiency

Consolidate Storage

It’s a typical scenario. You’ve got two servers with their storage at only 20% capacity, and a third server whose disks are nearly full. To remedy the situation, you take money out of your budget and spend it on new hard drives. This way of managing storage is wasteful and unnecessary.

Why not just take the unused storage space from the other 2 servers and move it over to the overloaded machine? Of course, this can be difficult and complicated when we’re talking about physical boxes. But virtualization makes this fast and easy.

Although it’s true that the average cost of storage is steadily decreasing, the amount of data we need to store is growing at a much faster pace. Why spend money when you have perfectly good excess storage space available?

When you waste money in this way, you aren’t just losing out on hardware costs. You also need to consider the maintenance, energy and real-estate costs that come from having more hardware than you need.

The average single-purpose server today might use only about 30% to 40% of its total storage capacity. But when you place multiple virtual servers onto a single host device, you can easily raise this efficiency to 70% or 80%.
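As a rough back-of-the-envelope illustration in Python (the server names, capacities and usage figures below are made up for the example, not measurements), here is how pooling three half-empty servers raises overall utilization:

    # Hypothetical example: three standalone servers vs. one consolidated pool.
    servers = [
        {"name": "web",  "capacity_tb": 2.0, "used_tb": 0.4},   # 20% full
        {"name": "mail", "capacity_tb": 2.0, "used_tb": 0.4},   # 20% full
        {"name": "db",   "capacity_tb": 2.0, "used_tb": 1.9},   # nearly full
    ]

    total_capacity = sum(s["capacity_tb"] for s in servers)
    total_used = sum(s["used_tb"] for s in servers)

    for s in servers:
        print(f'{s["name"]}: {s["used_tb"] / s["capacity_tb"]:.0%} utilized')

    # Pooled together, the same data fits comfortably in a smaller shared
    # pool (here 3.5 TB instead of 6 TB) running at much higher utilization.
    print(f"Standalone utilization overall: {total_used / total_capacity:.0%}")
    print(f"Consolidated 3.5 TB pool:       {total_used / 3.5:.0%}")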

And if one of your servers needs more storage, there’s no need to buy new hardware or dismantle any equipment. You can simply reassign the necessary storage through the administration console. This process is much faster and more cost-effective than manually installing new hardware.

And because you’re hosting fewer boxes, you take up less space in your datacenter while saving on power and cooling costs.

Another bonus of consolidating your storage through virtualization is that this new, simpler storage structure will simplify regular maintenance and reduce unnecessary downtime. Disk replication, server migration and data backup are all much easier when you’re working with a single consolidated system instead of several individual boxes.

If your company is struggling to keep data storage costs under control, you may want to consider storage consolidation as a way of making better use of under-utilized disk space and reducing the need for hardware upgrades.

Image Source: http://www.flickr.com/photos/onaliencinema/3338682756/sizes/m/

Advantages of Tape Automation

Tape Automation

A modern genomics lab typically produces about 10 TB of data per day, and this number is growing exponentially as the technology and methods become more refined.

Can you imagine being part of the IT support staff at one of these facilities? You’d have to manage tens of thousands of tapes. If a user needed a specific record, the act of manually locating the right tape and loading it could take hours… or even days. And then you have to deal with the headaches of backup and archiving.
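To get a feel for the scale, here is a quick sketch in Python. The 10 TB-per-day figure comes from above; the 800 GB-per-cartridge capacity is an assumed, roughly LTO-4-class figure used only for illustration:

    # Rough sketch: how quickly 10 TB per day turns into a mountain of tapes.
    DATA_PER_DAY_TB = 10      # daily output cited above
    TAPE_CAPACITY_TB = 0.8    # assumed ~800 GB of native capacity per cartridge

    tapes_per_day = DATA_PER_DAY_TB / TAPE_CAPACITY_TB
    print(f"Tapes per day:  {tapes_per_day:,.1f}")
    print(f"Tapes per year: {tapes_per_day * 365:,.0f}")
    print(f"After 5 years:  {tapes_per_day * 365 * 5:,.0f}")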

Without tape storage automation, many organizations such as governments, Internet companies and television stations would grind to a halt. There’s simply no convenient way to handle this volume of data by hand.

Imagine a giant box about the size of a refrigerator, able to hold hundreds of tapes. And inside of the machine, a robotic arm is able to locate and load the proper tapes within seconds. Not only is this faster than a human, but it also takes up less space in the server room.

These systems are also capable of copying tapes for backups, and implementing archiving policies to make room for more data.

For organizations struggling to contain very rapid data growth, additional silos can also be added to the main unit. This extends their storage capacity to several petabytes, stored across tens of thousands of physical tapes.
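As a rough sketch of how that capacity adds up (the slot count per silo and the cartridge capacity below are assumed figures, not the specifications of any particular library):

    # Illustrative capacity math for an expandable tape library.
    SLOTS_PER_SILO = 1000         # hypothetical slots per expansion silo
    CARTRIDGE_CAPACITY_TB = 0.8   # assumed ~800 GB native per cartridge

    for silos in (1, 5, 10, 20):
        slots = silos * SLOTS_PER_SILO
        capacity_pb = slots * CARTRIDGE_CAPACITY_TB / 1000
        print(f"{silos:2d} silo(s): {slots:6,d} tapes, ~{capacity_pb:.1f} PB")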

Some systems are even equipped to handle boxes of tapes for bulk processing. This dramatically speeds up the maintenance process, and keeps archival data well-organized for off-site storage.

Automated tape libraries are one of the most important innovations in storage technology. Without such automated systems, there would be no cost-effective way to handle massive volumes of business data.

Image Credit: http://www.flickr.com/photos/onaliencinema/3338682756/sizes/m/

The Importance of Hardware and Component Redundancy

Hardware Redundancy

IT is the heart of the company. And when IT systems stop working, all operations come to a halt. This means lost sales, angry customers, unproductive employees, and potential legal problems. An hour of downtime during business hours can easily cause tens or hundreds of thousands of dollars in damages for a busy company.
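If you have never run that calculation for your own business, a simple estimate looks something like the sketch below; every input figure here is a placeholder to be replaced with your own numbers:

    # Back-of-the-envelope downtime cost estimate (illustrative inputs only).
    revenue_per_hour = 50_000     # sales lost while systems are down
    employees_idle = 200          # staff who cannot work during the outage
    loaded_hourly_rate = 40       # average fully loaded cost per employee-hour
    recovery_overtime = 5_000     # extra IT labour to restore service

    cost_per_hour = revenue_per_hour + employees_idle * loaded_hourly_rate + recovery_overtime
    print(f"Rough cost of one hour of downtime: ${cost_per_hour:,}")
    # -> about $63,000, before reputational damage or contractual penalties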

That’s why you need to make contingency plans for all possible causes of unplanned downtime. The most common causes are related to hardware, software, human error, or some sort of external event such as a fire or natural disaster.

In this article, we’ll focus on the hardware-related causes of server failure.

The most catastrophic server failures are the ones caused by damage to the internal components of the machine, such as the processor or motherboard. In these instances, the entire machine may need to be replaced. If a duplicate system isn’t already on-site, this may mean shutting down operations until a replacement box is delivered.

Then there is the failure of individual server components such as fans, power supplies, disk controllers, and network adapters.

Although not as serious as a critical server failure, these types of incidents also require time-intensive manual maintenance, where the server must be opened up and parts must be exchanged.

One way to protect against these types of physical hardware failures is to invest in heavy-duty, brand-name equipment and to make sure that proper maintenance schedules are adhered to.

An additional measure you can take is to add some sort of redundancy at the component level. Basic examples include redundant cooling to prevent overheating, and RAID disk configurations to keep operations running if one drive should fail.
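To see why mirroring helps, here is a deliberately simplified failure-probability sketch. The 3% annual drive failure rate is an assumption for illustration, and the model ignores rebuild windows and correlated failures:

    # Single disk vs. a RAID-1 mirror: chance of losing the data in a year.
    annual_failure_rate = 0.03          # assumed per-drive failure rate

    single_disk = annual_failure_rate
    mirrored_pair = annual_failure_rate ** 2   # both drives must fail

    print(f"Single disk lost within a year: {single_disk:.1%}")
    print(f"Both mirrored disks lost:       {mirrored_pair:.2%}")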

Of course, there are other potential causes of server downtime. Regardless of how much hardware redundancy is built into your system, you should always have a backup plan.

If uptime is critical, you should implement some sort of high-availability system that can allow you to quickly switch operations over to another device or datacenter in an emergency. Another possibility would be to implement a virtualization system that could allow you to distribute your servers across multiple redundant host machines. Or at the very least, have a spare machine on-site, which can be quickly swapped in for the defective server.
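For illustration only, the basic idea behind an automated failover check might look like the sketch below. The host names, port and switch_to() routine are hypothetical placeholders, and real high-availability clustering also handles quorum, fencing and split-brain situations that this toy loop ignores.

    import socket
    import time

    PRIMARY = ("primary.example.internal", 5432)   # hypothetical service endpoint
    STANDBY = "standby.example.internal"

    def is_alive(host, port, timeout=2):
        """Return True if a TCP connection to the service succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def switch_to(standby_host):
        # Placeholder: repoint DNS, a virtual IP or a load balancer at the standby.
        print(f"Failing over to {standby_host}")

    failures = 0
    while True:
        failures = 0 if is_alive(*PRIMARY) else failures + 1
        if failures >= 3:        # require several consecutive misses before acting
            switch_to(STANDBY)
            break
        time.sleep(10)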

Whatever approach you choose, your company needs to face the fact that hardware failure is a very real threat to the integrity of your IT systems. That’s why you need to make sure your infrastructure is robust and well maintained, and that you have a backup plan in case something goes wrong.

Image Credit: http://www.flickr.com/photos/louisabate/4507990236/sizes/m/

Virtualization Reduces Power Consumption

Power Consumption

It’s a well-known fact that the cost of hardware has been steadily falling for many years. But at the same time, the cost of powering these systems has also been rising steadily. In fact, many experts are estimating that the cost of powering servers will soon overtake hardware costs.

In some extreme cases, energy costs have been known to represent half of the total costs associated with datacenter operations.

This is such a major problem that the Environmental Protection Agency has even partnered with industry initiatives such as The Green Grid, which aim to raise awareness about the need to improve datacenter energy efficiency.

Although more efficient CPUs (especially multi-core processors) have been helping to contain those costs, the best way to minimize energy usage within the datacenter is simply to minimize the number of physical servers.

In the past, companies would typically base their server buying decisions on function rather than on resource requirements. If you needed a database server, a file server, and a Linux server, you would buy three physical boxes. Because hardware was relatively inexpensive, you would buy more storage and processing power than you needed… just to be safe.

But in today’s economic climate, companies need to squeeze out waste and improve efficiency anywhere they can.

The power and cooling costs for a typical server today can easily run around $500 per year. And this is for a system whose resources are highly underutilized.

A typical server might utilize only about 5% to 10% of its total processing capacity and less than 50% of its total disk storage capacity. The rest of the time, you’re paying to power a mostly idle system. When you consolidate systems onto a single physical device, you maximize total resource usage without having to power and cool a room full of redundant boxes.
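Here is a rough sketch of what that adds up to, using the roughly $500-per-year figure mentioned above; the server count and the 10:1 consolidation ratio are assumptions for the example:

    import math

    # Annual power-and-cooling saving from consolidating onto fewer hosts.
    POWER_AND_COOLING_PER_SERVER = 500    # dollars per year, figure cited above
    physical_servers_before = 20          # assumed fleet size
    consolidation_ratio = 10              # assumed virtual machines per host

    hosts_after = math.ceil(physical_servers_before / consolidation_ratio)
    saving = (physical_servers_before - hosts_after) * POWER_AND_COOLING_PER_SERVER
    print(f"Hosts after consolidation: {hosts_after}")
    print(f"Estimated annual power/cooling saving: ${saving:,}")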

Since virtualization allows you to consolidate all of your servers onto a single host machine, you no longer have to pay the energy costs of running more under-utilized systems than you need. And as a result, your cooling costs will also go down.

Virtualization is a great way to make your company more energy efficient without having to make significant capital investments. Conserving energy this way is not only good for the planet, but it’s also good business.

Image Credit: http://www.flickr.com/photos/adrianblack/2042296535/sizes/m/

Long Live Tape Storage

It’s really pretty amazing how tape storage has managed to stay relevant for well over half a century. Even with all of the advancements in technology, still nothing compares to tape in terms of stability, price, density or practicality.

In the early days of computing, all non-volatile data was stored on paper punch cards, which had 80 columns of data and a maximum reading speed of 100 cards per minute. Today, this would be the equivalent of reading about 133 characters every second.
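For anyone curious, the arithmetic behind that figure is simply:

    # 80 characters per card at 100 cards per minute.
    COLUMNS_PER_CARD = 80        # one character per column
    CARDS_PER_MINUTE = 100

    chars_per_second = COLUMNS_PER_CARD * CARDS_PER_MINUTE / 60
    print(f"{chars_per_second:.0f} characters per second")   # ~133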

Punchcards were slow, complicated and difficult to manage.

Then, in 1952, IBM announced its first tape system, which it called the 726 Magnetic Tape Recorder. This new recording device was much simpler to manage, provided denser storage, and offered a fifty-fold improvement in reading speed.

Shortly after, in 1956, the first disk storage systems were introduced. This launched a race for dominance which still continues today.

Of course, fast read speeds and random access capability have made hard disks the standard choice for storing live data and short-term backups. But when it comes to long-term archival storage, nothing comes close to the practicality and cost savings of tape.

First of all, tape offers the densest storage of any medium, slightly denser even than hard disk storage.

Tape is also far less expensive than any other storage method in terms of cost per GB. It’s about half the price of disk storage, and about one tenth the cost of Blu-ray disc storage.
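As a quick illustration of what those ratios mean for an archive budget (the baseline disk price and the 50 TB archive size are assumed placeholders; only the ratios come from the comparison above):

    # Relative cost-per-GB comparison using the ratios described above.
    disk_cost_per_gb = 0.10                     # assumed baseline, dollars per GB
    tape_cost_per_gb = disk_cost_per_gb / 2     # tape at about half the price of disk
    bluray_cost_per_gb = tape_cost_per_gb * 10  # Blu-ray at ~10x the cost of tape

    archive_size_gb = 50_000                    # a hypothetical 50 TB archive
    for name, cost in [("Tape", tape_cost_per_gb),
                       ("Disk", disk_cost_per_gb),
                       ("Blu-ray", bluray_cost_per_gb)]:
        print(f"{name:8s} ${cost:.2f}/GB -> ${cost * archive_size_gb:,.0f} for the archive")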

But there’s more to tape than just cost and density. Tape is also built to last.

  • Unlike a hard disk, tape storage separates the recording mechanism from the backup media. Backup tapes themselves are very simple devices with very few moving parts. This means they’re less likely to fail when you try to recover data after long-term storage.
  • Tape is designed specifically for backup, and new tape formats are usually made to be backward-compatible. If you save your archival data to tape today, you don’t need to worry about the readers, recorders or adapters becoming obsolete in 10 years. The same can’t be said for hard disks.
  • Tape is much more stable. If you scratch a tape, the entire cartridge does not become unreadable. A head crash, on the other hand, can leave an entire hard drive corrupted and unreadable.

Even with all of the advancements in solid state and optical storage, nothing yet compares to tape. This is a technology that’s likely here to stay for another half century.

Avoid Costly Datacenter Expansions Through Virtualization

Datacenter construction costs can be as high as $2000 or $3000 per square foot.

An average rack may take up only about 7 square feet, but it also requires roughly three times that much again to allow space for cabling and adequate air flow.
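Putting those two figures together, the real floor-space cost of a single rack works out roughly as follows:

    # Construction cost of the floor space behind one rack, using the figures above.
    RACK_FOOTPRINT_SQFT = 7
    OVERHEAD_MULTIPLIER = 3               # extra space for cabling and airflow
    total_sqft = RACK_FOOTPRINT_SQFT * (1 + OVERHEAD_MULTIPLIER)

    for cost_per_sqft in (2000, 3000):
        print(f"At ${cost_per_sqft}/sq ft: ${total_sqft * cost_per_sqft:,} "
              f"to build out {total_sqft} sq ft for one rack")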

Of course, physical space isn’t the only thing we have to think about when considering datacenter space usage. As servers become smaller and more powerful, they also use more power and produce more heat within a smaller space than older hardware.

This means that more energy is needed to run the machines. And the more heat these systems generate, the more power is required to cool them. It can be a bit like pushing down the brake and accelerator at the same time.

Since most servers run at only 5% to 10% utilization, most of the energy needed to power and cool them is wasted.

This waste also places stress on the power infrastructure of datacenters that were designed to supply energy and cooling to systems consuming only a fraction of the energy used by today’s compact blade systems. To keep up with new technology, either new power lines and cooling systems need to be installed, or the entire datacenter may need to be relocated.

Either way, we’re talking about expensive infrastructure costs and downtime. This is why it’s important to maximize your resource usage and efficiency while minimizing the physical space needed to house your systems.

By consolidating your servers into a virtualized system, you save space and reduce the amount of energy needed to operate your servers. When one server isn’t in use, those resources are allocated to other servers instead of simply idling and needlessly wasting electricity.

Depending on the types of systems you’re running, you can typically operate the same systems with only 1/5 of the space needed for individual servers. And your energy costs can be reduced by 50% or more.

Rather than buying a new piece of hardware to host a new server, consider reducing the burden on your datacenter facilities by consolidating your existing systems and adding the new server as a virtual machine.

This way, you can save money and help the environment while extending the life of your existing datacenter.