Archives for: March 2011

The Effect of Time On The Importance of Files

Traditionally, companies and individuals have manually performed their backups on a scheduled, periodic basis. Of course, there are many problems associated with this approach.

The first problem has to do with the flawed nature of humans. We always say that we’re going to do the backups on a regular basis. But as time goes on, we create excuses for ourselves to skip a day here and a day there. Pretty soon, we’re doing backups once per month, if at all.

Of course, companies tend not to have this problem so much because the task gets delegated to someone who’s paid to perform the task. And if they fail to do their job, there will naturally be consequences.

But at many companies today, the individual charged with performing the regularly scheduled manual backups isn’t properly trained. In fact, it’s often the lowest-paid person in the office. Of course, this opens the door to a lot of potential data loss due to human error.

But there is one more hazard that you may not have thought of.

According to many experts in the field, the behavior of backup users demonstrates a clear relationship between how recently a file was modified and how important it is to its owner.

For example, a PowerPoint file that you worked on this morning will probably be more important than a Word document that you wrote six months ago.

This is where regularly scheduled manual backups fail to deliver.

According to Murphy’s Law, data loss incidents always happen at the absolute worst moment. This is usually when you’re working on something very important that needs to be handed in the next day. If you’re only doing backups at the end of the day, one of these incidents could potentially leave you in a very tough situation.

But there is a solution.

Many online backup companies today are beginning to offer Continuous Data Protection, which backs up your files and uploads them to an offsite location as soon as you save your document. Instead of losing an entire day’s work, you can now recover the last version you saved just a few minutes ago.
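To make the idea concrete, here’s a minimal sketch of what a continuous data protection agent does behind the scenes. The folder path and the upload_to_offsite() call are hypothetical stand-ins for a provider’s real client and API, and real products hook into file-system change notifications rather than polling:

```python
import os
import time

WATCH_DIR = "/home/user/Documents"   # hypothetical folder being protected
POLL_SECONDS = 30

def upload_to_offsite(path):
    """Hypothetical stand-in for the backup provider's upload call."""
    print(f"backing up {path} to the offsite vault...")

def watch(directory):
    backed_up = {}   # path -> modification time of the last version we uploaded
    while True:
        for root, _dirs, files in os.walk(directory):
            for name in files:
                path = os.path.join(root, name)
                mtime = os.path.getmtime(path)
                # Only files changed since the last pass are sent offsite.
                if backed_up.get(path) != mtime:
                    upload_to_offsite(path)
                    backed_up[path] = mtime
        time.sleep(POLL_SECONDS)

if __name__ == "__main__":
    watch(WATCH_DIR)
```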

Continuous Data Protection can be a real life saver in a crisis.

If you’re looking into backup solutions for your home or business, make sure to look into a solution that protects your data continuously.

About the author

Storagepipe Solutions has been a pioneer in the field of continuous data protection as it relates to online backup applications.

Optimizing Your WAN And Minimizing Distance-Induced Latency

Certeon is the application performance company. aCelera, Certeon’s suite of software WAN optimization products, delivers automated, secure and optimized access to centralized applications for any user, accessing any application, on any device, across any network.

With Certeon, enterprises and cloud providers successfully realize key initiatives, including consolidation, virtualization, replication and application SLAs. Certeon’s aCelera is the only software-based product that is hypervisor-agnostic, hardware-agnostic and cost-effective. aCelera’s ability to provide automated, secure and optimized access to centralized applications at the lowest possible TCO makes it a cornerstone of enterprise and cloud-provider infrastructures.

In 2008, Certeon shifted from a hardware-based solution to a software-only approach. The company re-architected the aCelera product platform from scratch to focus on optimizing all applications running over the WAN. As a software-based solution, it now provides optimal performance flexibility and TCO reduction.

Today, I’ll be interviewing Donato Buccella, CTO and VP of engineering at Certeon.

Can you please explain what WAN optimization is? Can you provide some real-life examples?

WAN optimization is a phrase used to describe products that optimize access to applications over the WAN when users are remote and the application server has been centralized. Some of the specific techniques used in WAN optimization include data deduplication, compression and protocol optimization. WAN optimization can be used for many initiatives, including optimizing access to applications like SharePoint, reducing backup and replication times, and more.
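To illustrate two of those techniques in the simplest possible terms, here’s a generic sketch (not Certeon’s implementation) of deduplicating and compressing a byte stream before it crosses the WAN. Both ends would keep the same chunk cache, so repeated data travels as short fingerprints instead of full payloads:

```python
import hashlib
import zlib

CHUNK_SIZE = 4096   # fixed-size chunks keep the sketch simple

def optimize_for_wan(data, sent_hashes):
    """Return records: references for known chunks, compressed bytes for new ones."""
    records = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest in sent_hashes:
            # The far end already holds this chunk; send only its fingerprint.
            records.append(("ref", digest))
        else:
            # New data: compress it before it crosses the WAN.
            records.append(("new", digest, zlib.compress(chunk)))
            sent_hashes.add(digest)
    return records

# Sending the same payload twice: the second pass travels as references only.
cache = set()
payload = b"quarterly report " * 10_000
first = optimize_for_wan(payload, cache)
second = optimize_for_wan(payload, cache)
print(sum(r[0] == "new" for r in first), "chunks shipped on the first send")
print(sum(r[0] == "new" for r in second), "chunks shipped on the second send")
```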

A specific example of WAN optimization is Certeon’s customer Pathfinder, who uses the aCelera software-based WAN optimization solution to enhance global communications. Pathfinder has close to 900 employees globally, across more than 40 offices in 24 countries – all of whom need access to optimized application performance. aCelera has increased Pathfinder’s application performance dramatically across all its applications, from finance and data to communications and file sharing.

An example of reducing replication and backup times is our customer, HellermannTyton, which is using aCelera to optimize its global manufacturing business. Business continuity processes such as backup and replication are often crippled over the Wide Area Network (WAN), leaving most organizations to throw bandwidth at the problem. HellermannTyton instead chose to deploy Certeon’s software solution to cut its replication window times from 13 hours to less than one hour.

What should companies look for when evaluating WAN optimization solutions?

WAN optimization should not be considered a point solution to relieve pain at a couple of remote sites; instead, it should be thought of as a strategic initiative for the entire infrastructure.

When deployed across the organization, CIOs and IT administrators no longer have to worry about where the data is and where the users are. Data will be available instantly, at LAN-like speeds, from anywhere in the world.

Given that, the right WAN solution should:

  • Be a best-of-breed WAN optimization and application acceleration technology that delivers the best application acceleration performance and data reduction
  • Have value-based pricing, so that enterprises and cloud providers can afford to deploy it at every point of access in the network
  • Be scalable enough to be installed in the largest enterprise or cloud provider’s infrastructure
  • Support all use cases:
    • Branch-to-datacenter
    • Datacenter-to-datacenter
    • Mobile user
  • Be future-proof, growing in capacity as an organization’s needs increase
  • Be extendable to cloud provider networks as you leverage them

How is the new trend towards global business – including the trend towards distributed teams – changing the way companies work with their WAN optimization solutions?

Cloud services look like a $100 billion-plus opportunity by mid-decade, but is cloud computing worth this level of excitement? Think back to the Internet in 1997: companies were excited about the technology’s potential and worried about security, privacy, bandwidth, standards and more. In spite of those questions, what transformed communication and commerce? The ability to deliver business value.

In 2010 and beyond, cloud successes will be measured in business value. The units of measure will be the ability to increase business agility, decrease cost through on-demand provisioning and teardown of infrastructure and services, speed development, and improve reliability. The cloud must be utility-based, self-service, secure and, most importantly, deliver levels of application performance that improve productivity. With as many as 90 percent of workers scattered across the globe, away from datacenter sources of information, their teammates and management, user adoption of collaboration applications and their centralized data is the linchpin of any business value equation.

Cloud success requires integrating network services that are very far away and often owned by strangers.

Leveraging cloud computing and maximizing its business value requires full-featured, scalable, high-performance WAN optimization software that allows applications to perform as expected and can be part of any organization’s on-demand architecture, rather than part of a farm of tactical hardware or limited virtual appliance solutions.

Business information and resources are increasingly being accessed at global-scale distances, from enterprise and cloud sources, over Internet, VPN or MPLS connections. At the same time, expectations for application performance remain the same or are even rising.

Enterprises embracing the cost and scalability benefits of cloud computing, and service providers delivering consumption- and utility-based models, must balance the need for security against user expectations for access and application performance. Users don’t care if the resource is in a cloud or on the moon; they expect their applications to work quickly and flawlessly.

Bottom line: the success of cloud computing is irreversibly linked to software-based WAN optimization and application acceleration technologies, as a result of distance-induced latency and the need to provide ad hoc, secure, multi-tenant access. aCelera software WAN optimization’s ability to provide secure access, application performance and global scale makes it the ideal cornerstone of cloud environments, from private to public to hybrid.

Why should companies implement application acceleration? Why not just continue working the same way as always, with the aid of a remote collaboration system such as SharePoint?

Enterprises must align their IT infrastructure with their business strategy. As such, the ability to provide agility, contain IT costs and deal with regulatory changes means adopting a number of initiatives, including:

  • Consolidating hardware and centralizing data centers
  • Increasing globalization with more telecommuters, road warriors and other remote workers
  • Transitioning to network-based backup and disaster recovery replication
  • Leveraging public cloud services

All of these initiatives are ultimately moving end users further away from the applications they need to do their everyday job. While applications like SharePoint are certainly a way to aid in remote collaboration, they get so bogged down with data that communicating over the WAN and storing the information become an extremely slow process.

This in turn decreases employee productivity, ultimately affecting a company’s bottom line. Application acceleration helps companies to successfully take on those business agility initiatives.

Anything else you’d like to add?

WAN optimization is particularly important as more and more companies leverage the cloud. Resources will increasingly be accessed across the Internet or virtual private WAN clouds; and expectations for application performance will increase. Enterprises that embrace the cost and scalability benefits of cloud computing must simultaneously continue to meet standards for employee productivity and application performance. Users don’t care if the resource is in a cloud or on the moon; they expect their applications to work quickly and flawlessly.

The key to deriving value from WAN optimization in cloud environments is to integrate it with the underlying physical infrastructure and virtualization. It needs to be part of an enterprise’s virtualization stack in order to be cost effective and flexible enough to deliver real business value. In short, it must be software residing on a virtual machine.

SaaS Is Worthless Without An Offline Fallback Option

When I was fresh out of college, one of the first jobs that I had was working for a division of a major bank. These people really understood the importance of resiliency and disaster preparedness. And one day, I got to witness this first-hand.

Something happened to their internal systems, and the customer management system for our call center went off-line. This was a huge company, and each second offline was costing the organization thousands of dollars.

And just like clockwork, the floor managers ran out with pads of paper, handing them to everyone in the call center. They had created a paper version of our customer management system. Nearly every aspect of our job had been systematized so that it could be performed off-line.

As a result, we were able to continue serving our clients as if nothing had happened while the technical team brought the systems back online.

Our customers were only mildly inconvenienced, since we were still able to resolve about 70% of cases in this manner. And the truly serious cases could afford to wait a few minutes for service. As soon as the system came back online, it was back to business as usual. Our paper system was promptly handed off to another team for manual data entry.

They had planned well in advance for every possible contingency. This taught me an important lesson about professionalism.

With Software-as-a-Service, there are two possible points of failure: the provider’s servers could crash, or your internet connection could go down. And when either of these systems fails, your customers won’t care whose fault it was. All they’ll care about is the promise you failed to deliver on.

Now that most business applications are moving to “the cloud”, your company needs to start thinking about offline fallback plans for those unexpected emergencies.

  • If your company relies on POS systems to process credit card transactions, make sure that you have an old-fashioned “Katchink Katchink” roller device with lots of paper.
  • If you use any kind of online information management system, ask if they have a downloadable client that allows you to work offline and upload your work in batch.
  • If you’re backing up your servers to an online backup provider, make sure you also have an on-site backup appliance. And if you’re backing up laptops or PCs over the internet, make sure that the software also creates a local backup somewhere on the hard drive, following the pattern sketched after this list.
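For the software side of that last point, here’s a minimal sketch of the fallback pattern, assuming a hypothetical upload_to_provider() call in place of your vendor’s real client: keep a local copy first, then attempt the offsite upload, and queue it for retry if the network or provider is unavailable.

```python
import os
import shutil

LOCAL_BACKUP_DIR = "/var/backups/local"   # hypothetical on-site fallback location
pending_uploads = []                      # files waiting for connectivity to return

def upload_to_provider(path):
    """Hypothetical stand-in for the online backup provider's client call."""
    print(f"uploading {path} offsite...")

def back_up(path):
    # 1. Always write the local copy first, so an outage at the provider or
    #    the ISP never costs you the file.
    os.makedirs(LOCAL_BACKUP_DIR, exist_ok=True)
    shutil.copy2(path, LOCAL_BACKUP_DIR)
    # 2. Then attempt the offsite upload; if the network or provider is down,
    #    queue the file for a later retry instead of failing outright.
    try:
        upload_to_provider(path)
    except (ConnectionError, OSError):
        pending_uploads.append(path)

def retry_pending():
    """Call periodically; drains the queue once connectivity comes back."""
    while pending_uploads:
        upload_to_provider(pending_uploads.pop(0))
```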

I’m not trying to knock SaaS at all. In fact, I think that cloud software provides incredible value to businesses while helping to dramatically lower Total Cost of Ownership.

But if you decide to rely on a third-party provider to host your applications, you should make sure that your business won’t get knocked out if they ever become inaccessible.

Remember: Nobody will ever care about the safety and security of your company as much as you do. And it’s your responsibility – and yours alone – to ensure that your organization is prepared for the worst.

Today’s Business Challenge in the Storage Industry

Traditional means of deploying and managing storage have become outdated. The exponential growth in storage requirements has put an enormous strain on existing storage systems, and customers are very often forced to upgrade their systems to meet the growing needs. Often, this results in forklift upgrades at an enormous cost, coupled with the requirement of having a highly-trained administrator on staff to effectively manage the environment.

This adds time and expense to the deployment of new storage systems and increases the ongoing cost of managing the environment. In addition, these solutions often require massive licensing, support and maintenance fees, forcing IT environments to look for new ways to address their growing storage requirements. Particularly given the current economic landscape, many IT departments are turning to alternatives to high-cost proprietary storage.

Thankfully, the enterprise storage market is changing. OpenStorage, a new approach to storage that eliminates vendor lock-in through the use of an open source core, promises to return control to end-users and reduce the incredible margins businesses pay for traditional enterprise class storage. OpenStorage is a viable alternative to the proprietary vendor lock-in approach, providing:

  1. Freedom of choice – OpenStorage runs on industry standard hardware.
  2. Economies of scale – OpenStorage users can scale their storage needs without putting their budget in jeopardy.
  3. Enterprise rich features – OpenStorage is backed by a powerful community of innovators.
  4. Better storage economics – OpenStorage customers cut costs dramatically while experiencing equivalent or better functionality.

Advantages of OpenStorage

Freedom of choice: With OpenStorage, end users combine industry standard hardware with open source software. This breaks the bonds of vendor lock-in, allowing users greater flexibility and choice in the market.

The open source software found in OpenStorage solutions is supported by a community of thousands who have a passion to create better storage solutions. This community helps spur innovation, which in turn drives better storage economics. The systems can leverage industry standard servers and disk drives as well as an OpenStorage software stack to speed storage innovation.

Unlike traditional proprietary storage solutions, OpenStorage solutions offer freedom of choice at every level of the storage system stack.

Economies of Scale: In traditional storage, the storage stack is a closed stack—and usually three times the cost of an OpenStorage solution. End users are forced to buy components of their storage, including the head unit, storage controller and software from the same vendor. Along with the storage stack, end users are also typically required to pay for individual features.

Proprietary storage vendors’ licensing structure is designed to force the customer to pay for each component. These vendors also source their components and mark up the prices by as much as 5 times what these prices would be in the open market.

Customers choosing OpenStorage alternatives save significantly on all aspects of their storage solution.

Enterprise rich features: OpenStorage fosters faster innovation. Vendors pioneering the OpenStorage model can focus on bringing the latest advancements in storage software with no inhibitions due to hardware dependencies. Customers benefit by deploying solutions that provide the best price-performance ratio.

How OpenStorage breaks through Economic and Performance Barriers

With existing processor technology, servers can easily process more than 1 million IOPS. According to Moore’s law, compute power will continue to increase year over year. Unfortunately, the same cannot be said for mechanical disk drives, which cannot keep pace with server performance; in fact, today’s fastest drives are capable of only 300 to 400 IOPS.
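The rough arithmetic behind that gap (using the illustrative figures from the paragraph above) shows why simply adding spindles doesn’t close it:

```python
# Rough arithmetic behind the IOPS gap (illustrative figures only).
server_iops = 1_000_000   # what a modern server can process
drive_iops = 350          # midpoint of the 300-400 IOPS quoted for fast disks

drives_needed = server_iops / drive_iops
print(f"~{drives_needed:,.0f} mechanical drives to keep one such server busy")
# Roughly 2,857 drives, which is why vendors reach for 15K RPM spindles and
# large caches instead of simply adding disks.
```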

Traditional storage vendors are currently recommending 15K RPM disk drives to meet the requirements of data intensive applications, which usually results in more expensive caches to enable the workload to reside in memory.

As you can see, conventional storage architectures add complexity to storage, and highly complex solutions are always expensive to manage and operate.

The software tools that are used to manage these environments typically require a highly-trained on-site administrator, adding further costs for ongoing management and making it even more challenging to maintain high service levels for storage systems and related applications.

OpenStorage offers an alternative approach to these complex solutions by leveraging the improvements in the production of flash technology. SSDs and other flash storage devices have become much more cost effective, enabling a new approach to tiered storage. Flash technology provides customers with the perfect price/performance “sweet spot” between mechanical drives and DRAM.
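Here’s a minimal sketch of the tiering idea in practice: serve each read from the fastest tier that holds the block, and promote hot blocks into flash. The tier names and the promotion rule are illustrative only, not any particular vendor’s policy.

```python
# Illustrative three-tier read path: DRAM -> flash (SSD) -> mechanical disk.
dram_cache = {}    # small, fastest, most expensive per GB
flash_cache = {}   # larger and cheaper, still far faster than spinning disk
disk = {}          # bulk capacity on mechanical drives

def read_block(block_id):
    """Serve a read from the fastest tier that holds the block."""
    if block_id in dram_cache:
        return dram_cache[block_id]
    if block_id in flash_cache:
        data = flash_cache[block_id]
        dram_cache[block_id] = data      # promote hot data into DRAM
        return data
    data = disk[block_id]                # slowest path: the mechanical drive
    flash_cache[block_id] = data         # keep it on flash for the next read
    return data

disk["block-42"] = b"cold data"
print(read_block("block-42"))            # first read hits disk; repeats hit cache
```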

Multiple Options for Easy Scalability

OpenStorage solutions provide multiple options for customers to scale their infrastructure. Instead of monolithic upgrades of systems, customers can fine-tune their infrastructure based on the requirements of their applications. For example, customers can:

  1. Simply increase the horsepower of their head units by replacing the CPUs with more cores and compute power, along with additional server cache.
  2. Easily replace existing drives with high capacity drives to expand capacity, especially since high density drives are increasing in capacity year over year.
  3. Leverage hybrid storage pools with native SSD support to provide intelligent read and write caches.

All of these expansions and scalability efforts are transparent to clients and applications. Most importantly, businesses can use commodity hardware and industry standard technology to meet their needs. This significantly reduces customers’ exposure to locking into a single technology stack and keeps their options open for future growth.

Clearly, OpenStorage can outperform legacy storage by using a proven, open, business model. OpenStorage is at the forefront of a sea change in storage, and is poised to usher in a new era and break the stranglehold proprietary technologies and business models have on enterprise storage forever.

This is a guest article by Evan Powell, CEO, Nexenta Systems

Virtualization Is Changing The Way Businesses Protect Their Data

We are currently witnessing a convergence of three trends within IT: the explosion of virtualization, cloud computing and the growth of storage within these environments.

These trends, affecting the majority of businesses in some form or other, have left IT scrambling to manage very complex and rapidly changing environments, as many of the tools of the past were not designed with the future in mind.

Up until recently, virtualization was mostly deployed on the edge of computing infrastructures. However, as its benefits to the business are further realized, more organizations are deploying it in the mainstream and even running business-critical applications within these virtual environments.

These installations, as they become more prevalent, will continue to drive complexity upward inside the datacenter and raise the importance of protecting the data inside the virtual machines, as well as the virtual machines themselves.

The question will no longer be “will we lose money” if a disaster strikes our virtual infrastructure, and what the exposure to the business is.

Instead, just as in the physical world today, the question will become “how much money will we lose”, and what is the maximum amount of downtime in our virtual environment that the business can afford.

Many businesses today don’t protect their virtual data and infrastructure as regularly or on the same levels as they do their physical environment and applications. One of the biggest problems for IT in managing these environments is the very strength of virtualization.

The dynamic nature of virtualization makes protection and disaster recovery of the virtual environment very difficult: the environment is hard not only to keep track of, but also to properly protect so that compliance and SLA requirements are met.

What used to be considered mundane tasks, like backup or replication, can be very complex in a virtual environment that is typically distributed across multiple operating systems, servers and, in many cases, multiple environments.

Many of the tools on the market today to manage these types of data protection tasks were designed before virtualization and have left IT trying to fit the proverbial “square peg in a round hole”. Not exactly the position you want to be in to protect your business critical systems.

Because of the exposure to business critical applications and systems, disaster recovery strategies to encompass the virtual environment are being researched and implemented throughout the industry.

These DR strategies are also spanning the entire market, as virtualization is not just an “enterprise” product – small to midsize companies are looking for simplified disaster recovery strategies and solutions for their virtual environments as well.
As more and more workloads reside inside virtual machines, it opens up the opportunity for new alternatives to better manage this business-critical data across the virtual network.

Whether VMs are replicated to a secondary data center or a cloud storage provider for DR purposes, limitations arise when using legacy approaches. As the mass movement of VMs utilizes very high I/O network bandwidth and imposes operational complexities, the process quickly exceeds the capacity of existing network and storage resources.

Managing your virtual environment should be as simple as accessing a network drive – providing a single access & viewpoint that spans across all of your hypervisors as well as your back-end storage (SAN, NFS, DAS, etc.).

The IT administrator should be able to leverage existing tools and infrastructure to protect and manage the virtual environment, without having to deploy additional resources and drain budgets further.

Properly protecting the new virtual datacenter and enabling the mobility of VMs to create a more dynamic infrastructure has become critical as IT looks toward a future of computing in which highly fluid VMs can be moved anywhere at a moment’s notice with the assurance that they will always be protected.

Before we get to that point, IT today needs the assurance that as they deploy business critical applications and expand the reach of their virtual ecosystem, solutions are available to them – without draining their budgets – to allow for the proper protection and management of these environments.

About the Author

Mitch Haile is CTO and VP Product Management, Co-founder at Pancetera. Mitch was an early engineer at Data Domain where he led the Systems Management group for four years. He was the original author of the Data Domain Enterprise Manager product, which greatly expanded the company’s reach into the large enterprise market. Prior to Data Domain, Mitch held various engineering roles with Motorola and SnapLogic and also founded other start-up ventures. Mitch earned a degree in Computer Science from The College of Wooster, where he was an Arthur Holly Compton scholar.


8 Key Advantages That Cloud Computing Delivers to IT

Often resource constrained, IT departments in companies and government organizations are immersed in workday responsibilities needed to support the business. Cloud computing can help take pressure off IT staff while also helping deliver measurable business benefits. For instance, with cloud computing, organizations can leverage the benefits of a shared IT infrastructure without having to implement and administer it directly.

While it took virtualization many years to be widely accepted by businesses, cloud computing is experiencing a much shorter ramp-up period for acceptance. With cloud computing, the battle has already been won, in part, since organizations rely heavily on virtualization. The business benefits are also much clearer than they were initially with virtualization. At the end of the day, cloud computing can help businesses save money on day-to-day operations, making it an easy adoption decision for most organizations.

Cloud Advantages

Simplified Cost and Consumption Model. Prioritizing activities that align with core business needs and drive tangible business value and top-line revenue is a top IT concern. This focus has driven IT organizations to reassess the costs of procurement and maintenance of infrastructure and non-core applications. Cloud computing allows companies to better control the capex and opex associated with non-core activities.

Enterprise Grade Services and Management. Typically, 70 to 80 percent of IT budgets are devoted to maintenance of existing infrastructure – a massive overhead. Cloud computing offloads this burden from the shoulders of companies, freeing core IT resources to focus on initiatives that drive revenue growth.

Faster Provisioning of Systems and Applications. Traditional methods to buy and configure hardware and software are time consuming. Cloud computing provides a rapid deployment model that enables applications to grow quickly to match increasing usage requirements. It can accommodate “peak times” where a company needs to scale up dramatically, such as a holiday season or special event.

Right-Size to Address Business Changes. Clouds are elastic. They can contract if necessary to meet changing business needs. With an in-house datacenter, if a company over-provisions, it can’t scale back. In a cloud, an organization can quickly and easily right-size its environment if necessary.

Ease of Integration. An increasing number of enterprise applications require integration with third-party applications that are often hosted outside the enterprise firewall. The cloud, with its configuration flexibility, integrated security, and choice of access mechanisms, has a natural advantage in serving as a core platform and integration fabric for these emerging applications.

Highly Secure Infrastructure. By taking a system-based – not point-based – approach, cloud environments can perform security at all levels (applications, middleware, operating system, compute/store/network). This safely supports highly mobile users who need a variety of connection options – coming into the cloud from both secure and non-secure networks.

Compliant Facilities and Processes. Many midsize companies don’t have the resources needed to manage audit and certification processes for internal datacenters. Compliance standards cut horizontally – like Sarbanes-Oxley – and vertically, such as PCI DSS and HIPAA. Cloud facilities and processes that address both areas can help companies address regulatory and compliance processes.

Flexible and Resilient with Business Continuity/Disaster Recovery. Managing business continuity and recovery internally requires a dedicated focus so companies typically concentrate only on the most critical applications. Utilizing cloud environments allows organizations to safeguard their full IT infrastructure because the cloud’s inherent scalability integrates disaster recovery capabilities.

This guest post was written by Satish Hemachandran, Director of Product Management at SunGard Availability Services

Can Deduplication Really Reduce Data Storage By 95 Percent?

CA is a well-known world leader in the Enterprise IT Management Software space, and they have a lot of deep insight into the business issues surrounding deduplication.

Today, I’ll be interviewing Frank Jablonski, Senior Director of Product Marketing for the Data Management business at CA Technologies.

Backup deduplication is a hot topic in IT right now. What factors are causing businesses to embrace this technology?

It’s about having to do more with less. Companies need to reduce costs, and cutting backup-related storage is the low hanging fruit. It’s an easy place to start.

Deduplication helps companies defer CAPEX spending, as they postpone purchases and get more out of their existing infrastructure. What’s more, storage prices are continually on the decline, so the longer a purchase is deferred, the cheaper the storage becomes.

Deduplication also works well. Companies can trust deduplication to work as advertised, and data is recovered reliably and efficiently. Deduplication is in the general acceptance stage of the technology lifecycle.

Aside from storage costs, what are some other drawbacks of storing too much duplicate data?

With duplicate data, backup processing takes much longer, as all the data needs to be written to the disk for backup storage.

Reduced retention periods are also an issue with duplicate data. With deduplication, storage space frees up, which allows more recovery points to be kept online for fast recovery. It’s also possible to retain a longer history of recovery points.

I’ve heard you mention that deduplication can reduce data storage by as much as 95%. How is this kind of compression achieved?

The deduplication method is key to maximizing data reduction. Some backup and recovery vendors deduplicate data on a per-backup job basis as this allows comparison of like data for greater reductions. For example, comparing Exchange email data to Exchange email data will typically identify more duplicate data than comparing Exchange email to a SQL database.

Vendors might also employ a block-level, variable length target-side data deduplication technique. This provides a very granular comparison of the data which results in identifying more duplicate data and hence greater reductions in the overall data set.
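As a rough illustration of how block-level, variable-length deduplication works in general (a generic sketch, not CA’s actual implementation), the data stream is cut at boundaries determined by its content, so that an insertion early in a file doesn’t shift every later chunk, and each unique chunk is stored only once:

```python
import hashlib
import os

WINDOW = 48        # bytes in the sliding window used to pick chunk boundaries
MASK = 0x1FFF      # boundary when the window hash has 13 low zero bits (~8 KB chunks)

def chunk_boundaries(data):
    """Yield variable-length chunks cut at content-defined boundaries."""
    start = 0
    for i in range(WINDOW, len(data)):
        window = data[i - WINDOW:i]
        # A real product uses a rolling fingerprint (e.g. Rabin); hashing each
        # window from scratch keeps this sketch short, at the cost of speed.
        if int.from_bytes(hashlib.md5(window).digest()[:4], "big") & MASK == 0:
            yield data[start:i]
            start = i
    if start < len(data):
        yield data[start:]

def deduplicate(data, store):
    """Store each unique chunk once; return the recipe needed to rebuild the data."""
    recipe = []
    for chunk in chunk_boundaries(data):
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:
            store[digest] = chunk        # only genuinely new data is written
        recipe.append(digest)
    return recipe

store = {}
version1 = os.urandom(200_000)
version2 = version1 + os.urandom(1_000)  # a small append; most data is unchanged
deduplicate(version1, store)
already_stored = len(store)
deduplicate(version2, store)
print(f"{len(store) - already_stored} new chunks stored for the second version")
```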

What is target-side deduplication, and how is it different from other methods?

Target-side deduplication is performed at or behind the backup server. It can be performed on the backup server itself, or on a hardware device recognized as a backup target by the backup server. The big benefit of target-side deduplication is that no performance degradation occurs on the production server, as all deduplication processing is done on the backup server or the backup target device.

Many companies go to great lengths to make sure that their users can work without interruption or degraded performance in their key business applications. An alternative to target-side deduplication is source-side deduplication, whereby all deduplication processing takes place on the production server before the data is sent over the network to the backup server. While this results in reduced network traffic, many companies don’t view backup network traffic as much of a problem compared to reduced application performance. Companies often use dedicated backup networks or a Storage Area Network (SAN) to reduce network traffic, so there is no need to perform deduplication on a production application server.

What advice can you give when it comes to choosing and implementing a backup deduplication system?

Some companies sell deduplication separately from their base backup product. It may be possible to save money and benefit from tighter integration and ease-of-use by considering vendors who offer built-in deduplication functionality.

I’ve heard you mention that ease of deployment was the most important factor in selecting a deduplication solution. What are some key criteria that would indicate ease of implementation?

An ideal option would be built-in deduplication that does not require any extra software installation, configuration or licensing. Companies should also look for a simple, wizard-driven setup interface that takes no more than a couple of minutes to complete. Having the flexibility to seamlessly phase deduplication into existing backup processes – without having to buy additional hardware – is also a plus.

Anything else you’d like to add?

Customers should also look for reporting features. The more advanced deduplication products will also offer reporting that graphically illustrates the volume of deduplicated data, which servers are running deduplicated backup jobs, and other information.

Amazing Video: Upgrading Through Every Windows Version

This is absolutely incredible! Have a look.