Archives for: June 2010

Yes Folks, I’ve Got A Twitter

I’m really happy about the progress of the blog, and all of the positive feedback I’ve been getting.

I’ve always been told that it can take at least eight months to a year for a blog to get off the ground.  That’s why it’s been so surprising for me to see the kind of reaction that Enterprise Features has been getting after just a month online.

Now that I’ve got good momentum going, I want to open up the dialog and make it easier for readers to reach me with questions and topic ideas.  So, if you’ve got Twitter, please feel free to add me to your list.

I look forward to hearing from you and interacting with my friends in the business tech community.

https://twitter.com/enterprise_feat

Say hello to Cloud Gaming

Cloud computing is one of the most talked-about data storage services for businesses and individuals who manage large networks with many devices and large volumes of data. The technology not only reduces the expense of managing a huge infrastructure, but also makes it easier to access data from anywhere. Because it is still a relatively new technology, vendors need to be chosen carefully. Enterprise Features, one of the leading business backup solution providers, offers up-to-date and reliable services; visit enterprisefeatures.com to learn more about them.

You can also learn more here about cloud computing and its role in large businesses. But cloud computing isn’t restricted to business use; it has reached gaming and gambling as well. Have you heard the myth that casinos can change a machine’s payback and theme with the flip of a switch? Or wondered whether a game’s status and progress could be stored on your own devices? Thanks to cloud computing, that myth is becoming reality. Whether you’re at home, at the office, or anywhere else, you can enjoy the full lure of the casino world on your desktop, laptop, tablet, or smartphone.

Worried that your internet connection might drop in the middle of a game? Wish you could store games for offline access? Worry no more. International Game Technology unveiled its IGT Cloud solution to help operators deliver a seamless gaming experience across IP-based, mobile, and online devices. This blend of cloud technology with traditional gaming lets casino managers control game content (graphics, speed, and so on) through a cloud floor manager, and gives them access to one of the largest game libraries on the market, helping them sustain the trust and interest of their customers.

Because server-based casinos are vulnerable to security issues, bringing cloud computing to online casinos is like a whiff of fresh air. Authentication is a core feature of cloud platforms, making them harder to attack than traditional server-based systems. Cloud hosting also removes the need to build a dedicated data centre, with all the electricity, water, and other utilities that would require. Easy accessibility, customizability, security, and simple backup are just a few of the features on offer.

A master computer connected to all of the server-based slot games makes it possible to change games, denominations, and bonus payouts with a single click at any time, something that isn’t practical with manually operated machines. Instead of buying a new slot machine, an operator simply pays to license a new game and, within seconds, the machine downloads it.

With demographic and statistical tools, operators can learn customers’ tastes and which games a particular person is likely to play. Imagine the reach of this innovation if you could control the games, themes, and cash flows of Las Vegas and Atlantic City, all from a single centre over the internet. Sounds amazing, right?

Advantages and Disadvantages of Inline Deduplication

In a previous post, we went over the difference between inline and postprocess deduplication. For those of you who haven’t already read that article, here’s all you need to remember:

  • Inline deduplication occurs BEFORE the file is written
  • Postprocess deduplication occurs AFTER the file is written

The main advantage of the inline process is that it minimizes I/O operations.

With the Postprocess method, you need to:

  • Input data
  • Write it to disk
  • Read the disk
  • Delete the data
  • Write a pointer to the hash table

As you can imagine, this is a very resource-intensive way to handle large amounts of data during the backup process. Inline deduplication reduces this to just “input and write the data,” which makes the deduplication process much faster.
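To make the I/O difference concrete, here’s a minimal sketch of the inline write path, assuming chunk-level SHA-256 hashing. The structures and names are purely illustrative, not any particular vendor’s implementation.

```python
import hashlib

# Illustrative in-memory structures; a real product would persist all of these.
hash_table = {}    # chunk hash -> index of the stored chunk
chunk_store = []   # unique chunks, in the order they were first seen
manifest = []      # per-backup list of pointers, used to "re-inflate" on restore

def inline_write(chunk: bytes) -> None:
    """Inline path: hash and deduplicate BEFORE anything is written."""
    digest = hashlib.sha256(chunk).hexdigest()
    if digest not in hash_table:
        chunk_store.append(chunk)             # the only disk write needed
        hash_table[digest] = len(chunk_store) - 1
    manifest.append(hash_table[digest])       # duplicates only record a pointer

def restore() -> bytes:
    """Re-inflate the backup by following the pointers in order."""
    return b"".join(chunk_store[i] for i in manifest)

# A duplicate chunk ("hello") never generates a second write.
for c in [b"hello", b"world", b"hello"]:
    inline_write(c)
assert restore() == b"helloworldhello"
```

Notice that a duplicate chunk never generates a write, a read-back, or a delete; only the pointer gets recorded.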

Another benefit of the inline method is that you don’t need a large buffer to hold the uncompressed data while it’s being processed as you might with the postprocess method.

And of course, the simplicity of the inline method makes it very easy to implement and operate.

However, there are also a few downsides to the inline method.

Because this method is very processor intensive, it might slow down the ingest speed of your backup server. For most businesses, this shouldn’t be a problem, since speeds are still very fast. But if you need to back up extremely large amounts of data very quickly, this may be something to think about.

Another disadvantage is that it gives you less control over how your backups are deduplicated.

But probably the biggest downside to this approach is that inline deduplication optimizes for storage, not for restorability. This means that it might store your data in such a way that backup recoveries are slow or inefficient. We’ll go into more detail about this in another post.

For most business applications, inline deduplication will work just fine, especially if it’s only being used for long-term backup or archival.

In another post, we’ll go over the pros and cons of compressing backup data using a postprocess deduplication methodology.

Image Credit: http://www.flickr.com/photos/question_everything/450953112/sizes/s/

WARNING: Don’t Use Disk For Archival or Long-Term Backup

Hard drive storage is great for most backup purposes.

Not only is it cheaper than other media like DVD, but its read/write speeds make it the ideal solution for lightning fast recovery in a pinch. And it’s getting cheaper all the time.

Also, it’s an extremely dense way to store data, which makes it ideal for conserving space in an already over-crowded datacenter.

And of course, nothing beats disk when it comes to speed. Disk gives you random access to your data, which makes it more convenient than tape when it comes to finding specific files of interest.

But hard drives also have some other issues that you should keep in mind.

First of all, disk is relatively fragile. If part of the disk fails, you’re very likely to lose all of your data. Contrast this with tape storage, where damage to one part of the tape is very unlikely to destroy the data on the rest of the device.

Second of all, disk drives are complicated devices which contain many moving parts. When stored over long periods of time, there’s an increased chance that these parts will deteriorate and cause the device to be completely unreadable. Once again, contrast this with the way tape separates the storage media from the reading devices.

Finally, unlike tape, hard drives are not built to be backward compatible. If you store a drive for 10 years, there’s a good chance that the adapters and drivers for the device will become obsolete over time. This means that your data will be trapped forever unless you can somehow locate a reading device that supports your format.

To make matters worse, these risks compound with scale. As the number of disks and the age of the archives continue to grow, so does the chance of potential data loss.
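To see how quickly that risk compounds, here’s a simple back-of-the-envelope model. The 5% annual failure rate is an assumption chosen purely for illustration, and the model treats every drive-year as an independent roll of the dice:

```python
# Assumption for illustration only: each archived drive has a 5% chance of
# becoming unreadable in any given year, and failures are independent.
ANNUAL_FAILURE_RATE = 0.05

def chance_of_any_loss(drives: int, years: int, p: float = ANNUAL_FAILURE_RATE) -> float:
    """Probability that at least one drive in the archive fails."""
    return 1 - (1 - p) ** (drives * years)

print(f"{chance_of_any_loss(1, 1):.1%}")    # 5.0%  -- one drive, one year
print(f"{chance_of_any_loss(20, 10):.1%}")  # ~100% -- twenty drives, ten years
```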

If you enjoy the speed and convenience of disk for backup, keep using it. But if you plan on storing these backups for longer periods of time, you may want to consider adding other types of backup media to the mix.

Image Credit: http://www.flickr.com/photos/almaz73/3564244382/sizes/s/

What’s The Difference Between Inline and Postprocess Deduplication?

It’s a common scenario that we’ve discussed before: IT budgets are frozen or shrinking, and data growth is rising faster than storage prices are dropping. Simply throwing new hardware at this problem won’t make it go away. Instead, we need to be smarter about how we use and manage our data.

In order to deal with an exponential problem, we need an exponential solution.

This is partly why everyone is talking about the convenience and practicality of deduplication. Deduplication technology allows your backup system to eliminate all duplicate or redundant content from your backups, and instead replace this information with a pointer to the original file.

If a file is found to be 100% original, it is written directly to the backup device. But if the file is a duplicate, a placeholder called a “pointer” will be written to a listing called a “hash table”. In the event of a restore, the hash table listing will be used to copy all of the duplicate data in order to “re-inflate” the full backup.

In some cases, this is enough to decrease your storage by 80% or more.

One of the great things about data deduplication as a storage strategy is that it scales well. The more data you store, the better it works.

When it comes to data deduplication, there are generally 2 methods that you can choose from:

  • Inline
  • Postprocess

With inline deduplication, all of the processing happens in memory, between RAM and the processor. The data is analyzed and deduplicated as it comes in, and then it’s either written to disk (if it’s an original file) or a pointer is added to the hash table (if it’s a duplicate).

In the case of postprocess deduplication, the files are first written to disk in their entirety. Once the files are written, the hard drive is scanned for duplicates and compressed.
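As a rough sketch of that after-the-fact scan (not any particular product’s implementation, just an illustration using content hashing):

```python
import hashlib

def postprocess_dedupe(raw_chunks):
    """Postprocess path: scan chunks that were already written in full,
    keep one copy of each, and replace duplicates with pointers."""
    hash_table = {}   # chunk hash -> index of the surviving copy
    chunk_store = []  # unique chunks kept after the scan
    manifest = []     # pointers that preserve the original chunk order

    for chunk in raw_chunks:                  # the extra read pass over the disk
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in hash_table:
            chunk_store.append(chunk)
            hash_table[digest] = len(chunk_store) - 1
        # This is where a real system would delete the duplicate copy
        # and write a pointer to the hash table instead.
        manifest.append(hash_table[digest])

    return chunk_store, manifest

store, pointers = postprocess_dedupe([b"hello", b"world", b"hello"])
assert store == [b"hello", b"world"] and pointers == [0, 1, 0]
```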

In other words, inline happens BEFORE the files are written and postprocess happens AFTER. Each approach has its benefits and its drawbacks. But we’ll have to save that debate for another post.

For now, at least you have a good idea of how both deduplication methodologies work.

3 Ways of Thinking About Fault Tolerance

When it comes to enterprise business continuity, the number of options can sometimes feel overwhelming. And one of the more confusing terms that gets tossed around is “fault-tolerance”.

Fault tolerance means different things when used in different contexts. In order to simplify our understanding of the term, I’ll assume that most of the definitions can be classified into one of 3 categories.

Component-Level Fault Tolerance

This keeps your servers protected against hardware malfunctions in components such as storage, network devices and controllers. If a component should fail, the server will be able to continue operating without interruption to the applications.

RAID storage is probably the best known example of component-level fault tolerance. If one disk fails, the others keep running until the faulty disk is repaired.
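As a toy illustration of the idea (a simplified, single-parity scheme along the lines of RAID 4/5), the surviving disks plus a parity block are enough to rebuild whatever was on the failed disk:

```python
def xor_blocks(*blocks: bytes) -> bytes:
    """Bytewise XOR of equal-length blocks."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            result[i] ^= byte
    return bytes(result)

# Three data "disks" plus one parity "disk", RAID 4/5 style.
d1, d2, d3 = b"AAAA", b"BBBB", b"CCCC"
parity = xor_blocks(d1, d2, d3)

# If disk 2 fails, its contents are rebuilt from the survivors plus parity.
rebuilt = xor_blocks(d1, d3, parity)
assert rebuilt == d2
```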

Server-Level Fault Tolerance

This will ensure that your company can endure a complete server failure (Or in the case of virtual servers, a complete host failure) without a second of downtime or interruption.

In order for this to happen, the system must ensure that memory state is maintained while clients remain connected. It should also ensure that active transactions continue to be processed while the system is in recovery mode.

In other words, neither the application nor the end-user should notice that anything strange has happened. And once the server has been repaired, the application should re-synchronize itself instantly without needing to restart the server or otherwise interrupt operations.

Geographic Fault Tolerance

This is very similar to server-level fault tolerance, where servers are hosted across multiple redundant hosts. However, in this instance, at least 2 of the host servers are spread across a wide geographical area.

This way, your company is protected from the most severe causes of downtime including fire, natural disasters, and in some extreme cases… even war.

Since datacenters are very expensive to build/rent, this type of fault tolerance is – by far – the most expensive to implement. But it’s also the safest.

Next time a vendor starts talking about how their service offers fault-tolerance, this classification system should help you ask better questions and get deeper insights about how these solutions can help your business.

The Problem Of Under-Utilized Servers

With server virtualization, you can typically expect a consolidation ratio of about 8:1 to 10:1. This means that you can replace eight different physical servers with just one box. At that rate, your server utilization goes from around 5% to as much as 50% or 60%.
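Here’s the back-of-the-envelope math, using a hypothetical 80-server fleet and the same ratios mentioned above:

```python
# Hypothetical fleet; the ratio and utilization figures echo the ones above.
physical_servers = 80
consolidation_ratio = 10        # 10:1 -- ten old boxes per virtualization host
utilization_before = 0.05       # roughly 5% per dedicated physical server

hosts_after = physical_servers // consolidation_ratio
utilization_after = utilization_before * consolidation_ratio

print(f"{physical_servers} physical servers -> {hosts_after} virtualization hosts")
print(f"utilization: {utilization_before:.0%} -> {utilization_after:.0%}")
# Output: 80 physical servers -> 8 virtualization hosts
#         utilization: 5% -> 50%
```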

Older datacenters are quickly becoming obsolete because they weren’t built around today’s technology requirements. Older servers were much larger and produced heat like an electric blanket. Modern blade servers, however, are far more compact and efficient, but they give off heat like a jet engine. This is complicated by the fact that equally robust cooling systems are now required to dissipate all that heat.

The same infrastructure that might’ve powered an entire server room 10 years ago might only be able to support about half of that floor space with today’s equipment.

Another problem with server under-utilization is total cost of ownership. With every new piece of equipment, you are adding more work for your IT staff. By maximizing server utilization, you can minimize the number of physical boxes that must be managed, maintained, upgraded and backed up. This way, you can free up IT resources for more productive areas of the business.

This is why it’s not uncommon for companies to save 50% or more when they switch over to virtualized systems. Since workload is consolidated to a smaller number of servers, management and power costs are significantly reduced.

Before you look into hiring new IT personnel or upgrading your datacenter facilities, you may want to look into the synergies offered by consolidating your existing systems.

The Advantages of Virtual Tape Libraries

The durability, low cost and high density of tape make it the ideal media for handling extremely large amounts of information. But tapes also come with one significant drawback: SPEED.

Automated tape libraries help a bit by finding and loading the right tape faster than a human ever could, but that still isn’t enough. In order to use the data on any given tape, you must first locate it, load it into the tape drive, and then read it sequentially until you find the file you’re looking for.

For a single tape, this is slightly inconvenient. But for a large tape library, it’s completely impractical.

In order to work effectively, automated tape libraries needed a significant boost in speed to take them to the next level. This is why Virtual Tape Libraries or VTLs were created.

Instead of reading and writing directly to tape, VTLs would keep a large disk-based buffer of virtual tapes that could be quickly written and read. Oftentimes, this buffer could represent as many as 256 or 512 tapes.

This approach provided two significant benefits:

Performance:

Virtual copies of tapes could be instantly accessed in the buffer without having to be loaded into the reader. Now you could access 20 different virtual tapes in the same time it would take to find, load and read just 1 or 2 tapes.

Capacity:

With simple automated tape libraries, storage media were often under-used. To save a file, you’d pick up a tape, write to it, and then put it back. With a VTL, you could fill an entire virtual tape before committing it to physical media, or better yet, optimize file storage across many virtual devices. For example, several smaller virtual tapes could be copied to one physical tape in order to save space.
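As a simplified sketch of that consolidation step, a first-fit packing pass can fold many half-empty virtual tapes onto far fewer physical cartridges. The 800 GB capacity and the tape sizes below are made-up numbers:

```python
def pack_virtual_tapes(virtual_tape_sizes_gb, physical_capacity_gb=800):
    """First-fit-decreasing packing of small virtual tapes onto physical
    cartridges, so no cartridge goes back to the shelf half empty."""
    physical_tapes = []  # each entry is the list of virtual-tape sizes it holds
    for size in sorted(virtual_tape_sizes_gb, reverse=True):
        for tape in physical_tapes:
            if sum(tape) + size <= physical_capacity_gb:
                tape.append(size)
                break
        else:
            physical_tapes.append([size])
    return physical_tapes

# Eight small virtual tapes end up on two physical cartridges instead of eight.
print(pack_virtual_tapes([300, 250, 200, 150, 150, 120, 100, 80]))
# [[300, 250, 200], [150, 150, 120, 100, 80]]
```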

When you think about how VTLs streamline the data management process, it’s almost hard to imagine that all of this work was once done by hand not too long ago. If it wasn’t for the speed and efficiency offered by these devices, many data-intensive industries simply could not exist.

The Only Way To Truly Maintain Uptime

In 2003, the entire north-eastern part of the United States, along with much of Ontario, Canada, was hit by a massive blackout that knocked out some of the busiest datacenters in North America. Companies reported outages lasting for days and causing millions in damages.

If your company has mission-critical applications that need to remain live 24 hours a day, would it survive an event like this? These are important things to think about as we sit in a time of worldwide financial and energy crisis.

The only way to truly ensure the resilience of your IT infrastructure is to have it geographically distributed across a wide area. This way, even a nuclear blast can’t take your worldwide business offline.

For larger companies with complex IT infrastructures, distributed computing may be a good option. This is one of the factors that have triggered the popularity of virtualization.

Of course, the idea of operating multiple datacenters may not be financially feasible for smaller organizations. Thankfully, the cloud has a number of cost-effective options.

On the upper end of the scale, there are companies that offer hosted “high-availability” services. These are similar to an online backup service: they continuously replicate your server to an off-site facility while also monitoring its availability. The moment your server goes down, operations are instantly switched over to the remote server until you decide to switch back. Instead of paying for a remote datacenter, you simply rent the space from the provider.
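Conceptually, the monitoring half of such a service boils down to a health-check loop that redirects traffic once the primary stops responding. The sketch below is purely illustrative: the hostnames, port and interval are invented, and a real service would repoint DNS or a load balancer rather than just print a message.

```python
import socket
import time

PRIMARY = ("primary.example.com", 443)   # hypothetical production server
STANDBY = ("standby.example.com", 443)   # hypothetical replicated copy
CHECK_INTERVAL_S = 10

def is_reachable(host_port, timeout=3) -> bool:
    """Crude health check: can we open a TCP connection at all?"""
    try:
        with socket.create_connection(host_port, timeout=timeout):
            return True
    except OSError:
        return False

def monitor():
    active = PRIMARY
    while True:
        if active is PRIMARY and not is_reachable(PRIMARY):
            # A real service would repoint DNS or a load balancer here,
            # then leave it to you to decide when to switch back.
            print("primary unreachable -- failing over to standby")
            active = STANDBY
        time.sleep(CHECK_INTERVAL_S)
```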

This makes it a very cost-effective option.

The next level down in price would be so-called “rapid recovery” systems like IBM FastBack. These systems are similar to online backup, but they let you restore your data through random access instead of downloading it in batches. This allows you to begin using your servers again within minutes… even while the download is still in progress. It’s not 100% high availability, but it is a cost-effective way to significantly reduce downtime.

Virtualization Datacenter Real-Estate Savings

Too often, we try to evaluate IT costs as a simple sum of hardware and software costs. But there’s actually much more to the total cost of ownership.

First of all, there are the real-estate costs associated with housing these systems. These days, it’s not uncommon for datacenter construction costs to run $1,000 or more per square foot. And density is another problem, since servers need breathing room: for a rack that takes up only about 7 square feet, you’ll probably end up needing over 25 square feet to accommodate ventilation and cabling.
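A quick back-of-the-envelope calculation shows how fast those numbers add up; the 40-rack figure below is hypothetical:

```python
# Rough floor-space math using the figures above; the 40-rack count is made up.
cost_per_sqft = 1_000           # build-out cost, USD per square foot
effective_footprint_sqft = 25   # rack plus ventilation and cabling clearance

racks = 40
space_needed = racks * effective_footprint_sqft
print(f"{racks} racks need ~{space_needed} sq ft "
      f"(~${space_needed * cost_per_sqft:,} to build out)")
# Output: 40 racks need ~1000 sq ft (~$1,000,000 to build out)
```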

On top of this, we also need to consider other ownership costs that come with physical infrastructure. Although most servers will only ever reach about 5% utilization, they still need to be cooled and powered 24/7. With rising energy costs and tightening IT budgets, this can be a real challenge.

A recent study by Gartner revealed that insufficient power and insufficient cooling were the two biggest facility challenges faced by datacenters.

As equipment becomes denser, it also requires more power and generates more heat. As companies load up their datacenters with more new physical servers, they can quickly exceed the original anticipated power requirements before they’ve filled the room to its holding capacity. When this happens, expensive new renovations or even a newly constructed datacenter will be required.

In order to keep datacenter costs low, companies need to make better use of this space and its resources. Power consumption needs to go down, and the number of physical boxes needs to be kept to an absolute minimum.

With virtualization, dozens of machines can be consolidated onto a single device. Instead of having 10 different boxes each running at 5% utilization, you can combine those workloads into a single box that does much more work.

This greatly reduces power and cooling costs, since you’re no longer powering and cooling boxes that sit idle 95% of the time. Server resources get used much more effectively, and the reduced power consumption helps stretch out the life of your datacenter without any infrastructure changes or a move to a new facility.

Another benefit of this approach is that packing virtual machines tightly into fewer physical boxes reduces the total floor space required for racks, ventilation and cabling. It’s not uncommon for companies to reduce their floor-space usage by 80% or more this way.

And finally, maintenance costs are also reduced because you have fewer physical devices to maintain.

The maintenance, power and space savings offered by virtualization are a great way to extend the life and functionality of your datacenter while also reducing the total cost of ownership for your IT infrastructure.

5 Biggest Backup Problems Facing Companies Today

Companies are struggling to cope with the challenges of rapid data growth, combined with increased retention requirements for legal purposes. At the same time, increased uptime requirements are forcing IT departments to improve the speed and efficiency of regular maintenance.

Current technology is barely keeping up with demand, and software/hardware companies are working hard to come up with new solutions to the biggest data protection challenges faced by enterprises today.

Backup Windows

Since it’s now more common for companies to operate around the clock, even a small amount of downtime can cause problems. This is why backups must be performed as quickly as possible, and preferably without locking up system resources.

Restore Times

Increased uptime hasn’t just made regular maintenance more difficult. It’s also placing pressure on IT staff to recover faster in an emergency. The days of driving to an off-site storage facility to pick up backup tapes are over. Today’s businesses measure restore times in seconds, not hours.

Reliability

Physical backup devices such as tapes or hard drives can degrade and fail as they get older. Also, companies need to be concerned about the possibility of accidentally damaging or losing these important digital business records. For this reason, the devices must be either redundantly copied or redundantly stored. Either way, this just causes more problems for IT staff.

Cost

With data rapidly growing, storage costs are certainly a concern, but there are other important factors when considering the total cost of ownership for backup media. Companies today also need to be concerned with the ever-shrinking server room space, labor costs associated with managing complex data stores, energy usage, server upgrades, and more. A small jump in each of these areas could cause a multiplier effect on the total cost of ownership.

Security

Now that backup theft is a favorite tool of hackers and identity thieves, companies must implement special measures to prevent tampering. Special security measures are also needed for regulatory compliance purposes, and to ensure the privacy of confidential client data.

Protecting Against Network Failures

Broken Network Connection

High uptime requirements aren’t just for banks and hospitals anymore. These days, it’s increasingly common for even smaller companies to find themselves operating in a 24/7 business environment… serving customers all over the world.

That’s why it’s now more important than ever for businesses to ensure they can minimize server downtime without letting IT costs spiral out of control. As you can imagine, this can be a major challenge for both business owners and IT administrators.

High-availability systems can continuously replicate your servers to an alternate emergency failover system. And virtualization technology now gives us the ability to host virtual servers across multiple physical devices for redundancy.

But even if your system hardware is 100% resilient, that still won’t protect you in the event of a network outage.

In order to be completely fault-tolerant, you need to configure your network in such a way that it has no single point of failure.

This might mean implementing redundant network paths, switches, routers and other network elements. Server connections should also be duplicated as a means of eliminating problems caused by failure of a single network component.

It’s also important to make sure you don’t allow network hardware to share common components. For example, multiple systems may suffer an outage at the same time if supposedly redundant connections share a single dual-ported network card. In order to achieve full redundancy, you MUST use a separate piece of hardware for each connection.
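A simple sanity check along these lines is to map each supposedly redundant connection to the physical device it actually rides on, and flag any device that shows up more than once. The sketch below is illustrative only; the path and NIC names are made up:

```python
def shared_hardware(paths: dict[str, str]) -> set[str]:
    """Return any physical device that carries more than one 'redundant' path.

    `paths` maps a logical connection name to the physical NIC it uses;
    all names here are made up for illustration.
    """
    first_user: dict[str, str] = {}
    shared: set[str] = set()
    for path, nic in paths.items():
        if nic in first_user:
            shared.add(nic)
        first_user[nic] = path
    return shared

# Both "redundant" paths ride the same dual-ported card: a single point of failure.
print(shared_hardware({"path-a": "nic0", "path-b": "nic0"}))  # {'nic0'}
print(shared_hardware({"path-a": "nic0", "path-b": "nic1"}))  # set()
```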

When looking into your business continuity strategy, make sure to pay special attention to your network configuration. Even the most robustly protected servers will become useless if the network should ever go down.

Image Credit: http://www.flickr.com/photos/mangee/313812439/sizes/m/