Archives for: April 2011

A License to Print Money (Lowering software costs by eliminating license waste)

This is a guest post by Sumir Karayi, CEO of 1E

In the United States alone, unused software and shelfware account for more than $12.3 billion in preventable, ongoing costs.

My company, 1E, recently commissioned independent research on software waste in association with the International Association of Information Technology Asset Managers (IAITAM) and the Federation Against Software Theft Investors in Software (FAST IiS). We surveyed more than 500 IT professionals, each responsible for managing software licenses at a company with more than 500 employees – and what we found astounded us.


Audit Anxiety

Our research findings reveal that when managing software licenses and assets, most organizations in the United States focus on compliance, rather than on controlling costs.

This is particularly true given the potentially damaging results of failing a vendor audit – heavy fines, substantial back-charges and the threat of legal action. When coupled with the time and complexity involved in comparing the amount of software being used with the amount being paid for, it becomes easier for many organizations to over-provision licenses “just in case.”

However, by focusing solely on preparing for an audit, organizations are buying ever-increasing amounts of software, ignoring the very real danger that much of it will never be deployed, never be used and never add any value.

Unused Software

There are many scenarios that can lead to software licenses being installed but not used. Licenses may be allocated to users based on their department as opposed to their need, they may be bundled together by the vendor, or users may be given software for a specific need or role when they first join the company that is never reclaimed. Whatever the reason, unused software is a significant drain on IT budgets – on average costing organizations $415 per PC.

Shelfware, as its name suggests, is software that is left to sit, undeployed on the proverbial shelf in a forgotten corner of the IT department. But even though it remains undeployed, shelfware still incurs maintenance and support charges.

Through this research, we have found that shelfware costs every organization in the United States approximately $155 per user per year. The challenge with shelfware is that, though widespread, it can be hard to find in most organizations. Traditional systems management tools can only detect software that has been installed, so many organizations are aware of the problem of shelfware but unable to do much about it.

A Missed Opportunity

When asked directly, most senior managers accept that there is software waste within their own organizations. More than half admitted they had software waste, and nearly one-third (29 percent) admitted that they had no way to quantify the problem.

More than 80 percent of respondents agreed that there was more than $100 worth of waste per PC. Additionally, the majority of organizations (77 percent) have never reclaimed any unused software licenses.

Uninstalling unused software and redeploying wasted licenses would significantly drive down costs and deliver a better return on investment from existing assets. Yet despite the obvious benefits of software license reclaim and the staggering cost associated with unused software and waste, three quarters of organizations have never done it.

Almost half (48 percent) of respondents said that they had considered actively uninstalling and redeploying licenses, but that they are unable or unwilling to take the next step. Reclaiming unused software would mean that many organizations could avoid buying new licenses until they actually need them, which could have a very real financial impact on the value they get from their software.

The Financial Imperative

As organizations are so concerned about software vendors’ audits and compliance, they often overlook the huge waste involved in shelfware and unused software. This anxiety, coupled with the complexity of software licensing, is generating significant inefficiency and waste. These costs are largely preventable, though unfortunately, few organizations truly realize how much money can be tied up in software waste and how much money they could be saving.

Unfortunately, merely detecting unused software and shelfware across a myriad of systems, locations, PCs and servers presents a very real challenge for most, never mind reclaiming and reusing those licenses. In fact, 71 percent of those surveyed felt that software asset management is overly complex. This is perhaps not surprising, given the complicated nature of software license agreements, bundled license and maintenance deals and the general vagaries of product names, versions and editions.

My company, 1E, provides IT efficiency solutions that help organizations identify, financially quantify and eliminate software waste. By leveraging innovative efficiency tools, organizations are empowered to only use – and pay for – the software and licenses they truly need.

These findings reveal that the problem of waste is likely to get worse if organizations don’t focus on managing their software assets more efficiently. There is a clear financial imperative for every organization to do so.

To learn more, please click here or contact us directly.

Differential Incremental vs. Cumulative Incremental Backups

Making a full backup every day can consume a lot of time and get very expensive very quickly. To minimize backup windows while also saving money on storage media, backing up incremental changes is the only smart way to protect your data.

Incremental backup starts with the assumption that only a small portion of your data changes on any given day – typically, a company modifies less than 5% of its business data daily. If you can isolate those changes into a backup, you could theoretically cut storage costs and backup time by 95%.

One of the longest-standing debates within the backup space has to do with deciding which incremental data capture methodology is best: Differential Incremental or Cumulative Incremental?

What exactly does this mean? Well, incremental data changes can be captured in one of two ways.

The first – and most efficient – technique would be to perform a single full backup, and then only copy the data which has changed on a daily basis. This is often called the Differential Incremental approach. The main advantage of this approach is that it keeps the amount of backup storage to an absolute minimum.

Unfortunately, differential incremental can also be a logistical nightmare. If you do a full backup on day one and your server crashes on day 100, you'll need to load that first full backup along with every daily incremental backup taken since – close to 100 of them.

If any of those 100 incremental backups are corrupted, it could potentially cause major problems with the recovery. Also, this added complexity greatly increases the chance of possible human error during the recovery process.

Differential incremental backups also offer very slow recovery since you have to load a lot of redundant data during the recovery process. For example, you might load several hundred gigabytes worth of temporary files, only to delete them immediately after. Or you might need to load dozens of copies of a frequently accessed file when you only need to recover the most recent version.
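To make that recovery chain concrete, here is a minimal Python sketch of a differential incremental restore. The file names and data are hypothetical, and real backup products work at the block or file level with catalogs and verification, but the shape of the problem is the same: the full backup plus every daily incremental must be replayed in order, and a single unreadable incremental stops the restore.

```python
# Hypothetical sketch of a differential incremental restore: the full backup
# plus every daily incremental must be applied, in order, to rebuild state.

def restore_differential(full_backup, daily_incrementals):
    """Rebuild the latest state from a full backup plus daily change sets.

    full_backup: dict mapping file path -> contents at the time of the full.
    daily_incrementals: list of dicts, one per day, holding only the files
    that changed that day (None as contents means the file was deleted).
    """
    state = dict(full_backup)
    for day, changes in enumerate(daily_incrementals, start=1):
        if changes is None:
            # One corrupted or missing incremental breaks the whole chain.
            raise RuntimeError(f"Incremental for day {day} is unreadable; "
                               "recovery cannot continue past this point.")
        for path, contents in changes.items():
            if contents is None:
                state.pop(path, None)   # file deleted that day
            else:
                state[path] = contents  # file added or modified that day
    return state

# Example: a crash on day 3 means replaying the full backup plus 3 incrementals.
full = {"report.doc": "v1", "temp.dat": "scratch"}
incrementals = [
    {"report.doc": "v2"},   # day 1: report modified
    {"temp.dat": None},     # day 2: temporary file deleted
    {"budget.xls": "v1"},   # day 3: new file added
]
print(restore_differential(full, incrementals))
# {'report.doc': 'v2', 'budget.xls': 'v1'}
```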

There are two common ways to get around this problem.

One way is to perform full backups on a regular basis in order to cut down on the number of incremental backups which must be loaded in the event of a data disaster. Today, it’s common for companies to perform a full backup on the first of every month, followed by – at most – 30 daily differential incremental backups.
Another approach would be to consolidate all of the previous daily incremental backups into a single backup storage unit for easy recovery.

This is what the Cumulative Incremental approach tries to do. Every day, a backup is performed which copies only the data that has changed since the original full backup was performed. If you need to recover your systems, you only need to load two backup sets: the original full backup and the most recent cumulative incremental backup.

By simplifying backups in this manner, you can greatly speed up recovery while also reducing the potential for human error.

The downside of the cumulative incremental approach is that the required backup storage grows quickly, because each day's backup contains every change made since the last full backup.

If your data changes by only 10 GB per day, the cumulative backups retained over a month can still add up to several terabytes. Because of this, the cumulative incremental approach requires short cycles with frequent full backups in order to keep storage costs low.
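A rough back-of-the-envelope comparison makes the trade-off visible. The figures below (a 1,000 GB data set, 10 GB of non-overlapping changes per day, a 30-day cycle, every daily backup retained) are assumptions chosen for illustration only:

```python
# Rough comparison of backup storage consumed over one 30-day cycle,
# assuming (hypothetically) a 1,000 GB data set and 10 GB of changes per day
# with no overlap between days.

FULL_SIZE_GB = 1000
DAILY_CHANGE_GB = 10
DAYS_IN_CYCLE = 30

# Differential incremental: each day stores only that day's changes.
differential_total = FULL_SIZE_GB + DAILY_CHANGE_GB * DAYS_IN_CYCLE

# Cumulative incremental: each day stores everything changed since the full,
# so day N stores roughly N * DAILY_CHANGE_GB.
cumulative_total = FULL_SIZE_GB + sum(
    DAILY_CHANGE_GB * day for day in range(1, DAYS_IN_CYCLE + 1)
)

print(f"Differential incremental cycle: {differential_total} GB")  # 1300 GB
print(f"Cumulative incremental cycle:   {cumulative_total} GB")    # 5650 GB
```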

Backup administrators have to fight a constant battle in order to minimize storage costs, backup windows, recovery speeds and handling complexity. As data growth continues to accelerate, it quickly becomes apparent that a new approach is needed.

This new approach must have the efficiency and speed of a differential incremental backup and the simplicity of a cumulative incremental backup, and it should completely do away with the need for periodic full backups.
Thankfully, such a solution exists.

In recent years, the Progressive backup paradigm has completely changed the way IT administrators protect their data. With the progressive paradigm, you only need to perform a single full backup. From that point on, you perform only daily differential incremental backups, which are sent to the backup server.

Once at the server, each incremental backup is combined with the previously stored versions, and a new synthetic full backup is assembled. No matter how many times you repeat this process, you'll never have to perform another full backup again.

This is why the progressive paradigm is often referred to as “Incremental Forever.”
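Here is a small, hypothetical sketch of that server-side consolidation step. Real products do this at the block level with catalogs and retention policies; the point is simply that the server always holds an up-to-date full backup without the client ever sending one again.

```python
# Hypothetical sketch of the server-side consolidation step in an
# "incremental forever" scheme: each day's differential incremental is merged
# into the previously synthesized full, so a current full backup always
# exists without ever re-reading all of the client's data.

def synthesize_full(previous_full, todays_incremental):
    """Merge one day's changes into the stored full backup.

    Both arguments are dicts of file path -> contents; None marks a deletion.
    Returns the new synthetic full backup.
    """
    new_full = dict(previous_full)
    for path, contents in todays_incremental.items():
        if contents is None:
            new_full.pop(path, None)
        else:
            new_full[path] = contents
    return new_full

# The backup server repeats this every day; a restore then needs only the
# single most recent synthetic full (or any retained older version).
full = {"report.doc": "v1"}
full = synthesize_full(full, {"report.doc": "v2", "notes.txt": "v1"})
full = synthesize_full(full, {"notes.txt": None})
print(full)  # {'report.doc': 'v2'}
```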

If you ever need to recover your data, you simply load the appropriate version from the backup server as a single full backup. Not only does this speed up the recovery process, but it also greatly reduces the potential for human error.

If you’ve been struggling to maintain control over exponential growth within your differential or cumulative incremental daily backups, you may want to consider evaluating a managed backup service that offers a progressive “incremental forever” backup capability.

About The Author: Storagepipe Solutions is a leader in online server backup solutions and offers many time-saving options for protecting business data.

How does IT benefit from a move to the cloud?

Working in IT can be thankless work. Nobody appreciates all of the work that goes into keeping the network and servers running. But when something goes wrong, everyone blames the IT guys.
Part of this attitude comes from your end users' assumption that computers should be easy to maintain since they are completely automated.

They don’t understand all of the hard work that goes into setting up servers, swapping backup tapes, upgrading hardware, troubleshooting problems, and preventing the wiring from looking like spaghetti.

And when you’re not fiddling with hardware, you’re in board meetings trying to justify expensive projects and negotiating budget increases for future problems that may or may not materialize.

But what if it was easier? Cloud computing makes it easier.

When you move your servers to the cloud, you can extend the life of your current IT infrastructure while also eliminating many of the maintenance and financing headaches that come with IT management.

One of the biggest sources of waste within IT spending comes from the fact that you need to purchase, install and configure a new server in order to use it. This means a huge up-front capital outlay to run software that might not be a proper fit for your organization. And since IT growth can be somewhat unpredictable, you need to over-spend on hardware in order to anticipate potential problems.

Instead of buying tomorrow’s software today, what if you could buy it tomorrow… when prices will be lower? Cloud computing lets you do this.

When you host your servers in the cloud, you only pay for what you use on a pay-as-you-go basis. There’s no point in buying a 500 gig hard drive if you’re only going to use 100 gigs.

And because you’re only paying for what you use on a month-to-month basis, new projects can be quickly deployed, tested and abandoned if they turn out to be a poor fit for the organization.
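As a purely illustrative comparison, the sketch below puts the pay-for-what-you-use model next to an up-front hardware purchase. The prices, capacities and growth rate are made-up assumptions, not any provider's rates:

```python
# Illustrative-only comparison of buying capacity up front versus paying for
# what is actually used each month. The prices and growth figures below are
# made-up assumptions, not quotes from any provider.

UPFRONT_DRIVE_GB = 500           # capacity purchased "just in case"
UPFRONT_COST = 600.0             # assumed one-time hardware cost, in dollars
CLOUD_PRICE_PER_GB_MONTH = 0.10  # assumed pay-as-you-go storage price
MONTHS = 12

used_gb = 100                    # actual usage, growing a little each month
cloud_total = 0.0
for month in range(MONTHS):
    cloud_total += used_gb * CLOUD_PRICE_PER_GB_MONTH
    used_gb = min(UPFRONT_DRIVE_GB, used_gb * 1.05)  # ~5% monthly growth

print(f"Up-front purchase:   ${UPFRONT_COST:.2f} for {UPFRONT_DRIVE_GB} GB")
print(f"Pay-as-you-go, 1 yr: ${cloud_total:.2f} for what was actually used")
```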

With cloud computing, you don’t have to worry about cooling, power, or datacenter space allocation. All of these details are already taken care of for you. And if you ever decide that your cloud host isn’t right for you, it’s a simple matter to transfer your servers back to your datacenter or move them to another provider.

Although we would never advocate handing over your security obligations to a third party, cloud providers invest a fortune into the security, monitoring and protection of their datacenters. And for most small businesses, a cloud datacenter might very well be more secure than what they could affordably maintain within their own IT infrastructure.

When it comes to hardware maintenance, nothing could be easier than the cloud. Instead of driving over to the datacenter, taking the server offline and opening it up to replace hardware components, you can simply fill out a form online without ever leaving your house.

With cloud computing, your datacenter follows you everywhere you go. And every hardware maintenance task either gets automated or replaced with a software process.

Backup, recovery, mirroring and emergency migration are also much easier in the cloud than they would be with a physical box. In most cases, these processes can either be completely automated or reduced to a few mouse clicks.

But one of the best things about cloud computing is the fact that you get access to highly trained IT support staff… usually at no extra charge. And especially with SaaS, you can offload many of your technical support calls to the SaaS host.

If you’re looking to manage your IT infrastructure in a simpler, more flexible and more cost-efficient manner, you’ll want to look into what the cloud has to offer.

How To Optimize Your Datacenter To Reduce Energy Consumption By 60% Or More

Dimension Data is a specialist IT services and solutions provider that helps clients plan, build, support and manage their IT infrastructures. The company was founded in 1983 and is headquartered in Johannesburg, South Africa. Currently, Dimension Data operates in 49 countries across six continents.

Today, I’ll be interviewing Kris Domich, who is the principal data center consultant for Dimension Data Americas.

What sorts of problems are companies facing when it comes to the power requirements of their data centers?

Most data centers have or will experience constraints on how much conditioned power they can provide to some portion or all of the racks. This is commonly due to the increasing power densities of late model equipment and the adoption of such technologies without properly planning and anticipating the need for an adequate power distribution system.

What are some of the biggest mistakes that companies make which lead to inefficient data center power usage?

The most visible mistakes tend to be overcooling, or overuse of the wrong type of cooling. An inefficient cooling strategy will lead to the deployment of more cooling components than may actually be required — this will drive use of unnecessary power. Other examples include poorly planned space configurations or ones that were initially planned well but degraded over time. Lastly, a lack of a capacity planning and management regimen will also lead to energy and space inefficiencies. Much of this can be remedied by gaining alignment between IT and facility departments, and ensuring that capacity planning and management is a joint effort between both groups.

What are some tips that you can give when it comes to designing an efficient data center?

Understanding what critical loads must be supported over the planning horizon of the data center is paramount when designing power distribution and cooling subsystems. It is also critical to understand the availability requirements of your organization. This will allow you to choose the right amount of redundancy and reduce the risk of over-provisioning and driving inefficiency. Organize the physical data center based on load profiles, and if possible, create an area that is designated for higher-density equipment and other areas that are not. This will allow you to prescribe the proper cooling regimen for the various equipment types and lessen the chance that the entire data center is designed to support a single power density – resulting in over-cooling in some areas and potentially less cooling than some of your equipment requires in others.

What are some good best practices that companies can adhere to when it comes to implementing heating and cooling systems within their data centers?

Baseline critical load today and work with IT to develop a growth plan over at least a five year horizon. Develop notional power and cooling designs that meet the requirements over the entire planning horizon. Translate those designs to specific technologies that are modular enough to be expanded beyond the planning horizon.
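As a minimal illustration of that baseline-and-project exercise, the sketch below takes an assumed measured load today and compounds it over a five-year planning horizon. The starting load and growth rate are placeholder assumptions, not recommended values:

```python
# A minimal sketch of the baseline-and-project exercise described above:
# take today's measured critical load and project it across a five-year
# planning horizon. The starting load and growth rate are assumptions
# for illustration only.

BASELINE_CRITICAL_LOAD_KW = 120.0   # assumed measured IT load today
ANNUAL_GROWTH_RATE = 0.15           # assumed 15% compound growth per year
PLANNING_HORIZON_YEARS = 5

for year in range(PLANNING_HORIZON_YEARS + 1):
    projected_kw = BASELINE_CRITICAL_LOAD_KW * (1 + ANNUAL_GROWTH_RATE) ** year
    print(f"Year {year}: ~{projected_kw:.0f} kW of critical load")

# The power and cooling design (and its modular expansion steps) would then
# be sized against the year-5 figure rather than today's load alone.
```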

What are some of the most efficient data center heating and cooling systems? On what criteria do you base this decision?

Precision and variable output systems are the most efficient. These systems are designed to provide cold air where it is needed (i.e. at the equipment air inlets) and evacuate hot air from its origin (i.e. equipment exhaust points). This precision design is more efficient than the traditional means of “flooding” the room with cold air and having no deliberate means to evacuate heat. The variable aspect of these systems means that as loads increase and recede, so does the amount of cooling supplied, which results in a more efficient use of energy.

How can companies get a better idea of their data center resource utilization?

Implementing a meaningful set of instruments and paying attention to what they tell you will provide intelligence on a number of levels. Such instrumentation can initially provide visibility into what loads are being generated by specific equipment at a given time. This intelligence can aid in designing for nominal loads rather than designing to support maximum loads 100% of the time, since maximum loads are a rare occurrence.
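As a simple illustration of how such instrumentation data might be used, the sketch below separates a nominal design point (here, roughly the 95th percentile of sampled readings) from the rare peak. The readings are invented for the example:

```python
# A small sketch of using instrumentation data to separate the nominal load
# from the rare peaks mentioned above. The sample readings are invented;
# in practice they would come from branch-circuit or PDU metering.

readings_kw = [82, 85, 84, 88, 90, 86, 83, 84, 87, 118, 85, 86, 84, 89, 85]

readings_sorted = sorted(readings_kw)
peak_kw = readings_sorted[-1]
# Roughly the 95th-percentile value as a stand-in for the "nominal" design point.
p95_index = int(0.95 * (len(readings_sorted) - 1))
nominal_kw = readings_sorted[p95_index]

print(f"Peak load:    {peak_kw} kW (occurs rarely)")
print(f"Nominal load: {nominal_kw} kW (~95th percentile of samples)")
```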

What are some common power-hungry services which could easily be made more efficient?

On average, the cooling subsystem — including everything from the chiller/condenser through to the computer room air conditioners (CRACs) — is among the most power-hungry subsystems in the data center. It can be made more efficient through the use of precision and variable-output components. This will ensure that the cooling subsystem is not operating at levels which exceed the amount of critical load being generated by data center systems.

What kind of cost savings can a company expect from refining the efficiency of their data centers? Are there any other benefits besides cost savings?

Actual cost savings and avoidance will vary by organization and vary by the degree of inefficiency that is corrected. Historically, we have seen reductions of overall power consumption in excess of 30% and as much as 60% or more. Another way to look at savings is to examine the cost to run a given workload. When introducing efficient power and cooling strategies, organizations can more easily capitalize on more efficient computing platforms such as blade servers.

Adding virtualization to these platforms can dramatically reduce the cost to run a given workload and allow for the running of more simultaneous workloads. As a whole, the IT “machine” becomes more efficient and can respond to business requirements faster.
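To put those percentages in context, here is some illustrative arithmetic. The facility load and electricity price are assumptions made for the example, not figures from Dimension Data:

```python
# Illustrative arithmetic only: what a 30%-60% reduction in overall power
# consumption could mean in annual dollars. The load and electricity price
# below are assumptions, not figures from Dimension Data.

TOTAL_FACILITY_LOAD_KW = 500.0  # assumed average draw for the whole facility
PRICE_PER_KWH = 0.10            # assumed electricity price, dollars per kWh
HOURS_PER_YEAR = 24 * 365

baseline_annual_cost = TOTAL_FACILITY_LOAD_KW * HOURS_PER_YEAR * PRICE_PER_KWH

for reduction in (0.30, 0.60):
    savings = baseline_annual_cost * reduction
    print(f"{int(reduction * 100)}% reduction: ~${savings:,.0f} saved per year "
          f"on a ~${baseline_annual_cost:,.0f} baseline")
```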

Will Your Network Evolve To Meet Tomorrow’s Bandwidth Requirements?

Evolve IP was founded in 2006 to change the way that organizations buy, manage and secure their vital communications technologies. At that time, there were multiple point solutions providers, but few integrated providers for all unified communications, including Managed Telephony, Managed Networks, Security & Compliance and Hosted Business Applications (such as Microsoft Exchange, SharePoint and Hosted Data Backup and Recovery). Evolve IP was launched to address this gap in the marketplace.

Today, I’ll be interviewing Joseph Pedano, Vice President, Data Engineering at Evolve IP.

Can you please tell me about your Network Management System? How does it work, and how does it detect and repair potential problems that could lead to service degradation?

Most NMSs are built to be reactive instead of proactive. However, Evolve IP has developed a number of ways to ensure that we are responding to events before they become incidents. Technologies such as IP/SLA and Voice Health monitoring via Mean Opinion Scores (MOS) allow us to see performance issues before they turn into incidents that degrade network performance.
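As a rough illustration of MOS-based health monitoring, the sketch below estimates a MOS score from one-way delay and packet loss using a commonly cited simplification of the ITU-T E-model, then flags paths that fall below a threshold. The impairment terms and the 3.8 threshold are assumptions for the example, not a description of how Evolve IP's monitoring is implemented:

```python
# Rough sketch of a MOS-based voice health check. Uses a simplified,
# codec-agnostic approximation of the ITU-T E-model; treat the numbers as
# illustrative only, not as any vendor's actual monitoring logic.
import math

def estimate_mos(one_way_delay_ms, loss_fraction):
    # Delay impairment (simplified): grows faster once delay exceeds ~177 ms.
    delay_penalty = 0.024 * one_way_delay_ms
    if one_way_delay_ms > 177.3:
        delay_penalty += 0.11 * (one_way_delay_ms - 177.3)
    # Loss impairment (rough, codec-dependent assumption for illustration).
    loss_penalty = 30.0 * math.log(1 + 15.0 * loss_fraction)

    r_factor = 93.2 - delay_penalty - loss_penalty
    if r_factor <= 0:
        return 1.0
    if r_factor >= 100:
        return 4.5
    # Standard R-to-MOS conversion curve.
    return 1 + 0.035 * r_factor + 7e-6 * r_factor * (r_factor - 60) * (100 - r_factor)

# Flag any path whose estimated MOS drops below an assumed 3.8 threshold,
# so the issue is investigated before users start complaining.
for path, delay_ms, loss in [("HQ->Branch A", 40, 0.001), ("HQ->Branch B", 220, 0.03)]:
    mos = estimate_mos(delay_ms, loss)
    status = "OK" if mos >= 3.8 else "DEGRADED - investigate"
    print(f"{path}: MOS ~{mos:.2f} ({status})")
```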

Can you please explain Managed Networks (WAN and LAN) for my readers? What types of companies require managed networks, and what are some of the key business benefits?

Managed Networks are a higher-touch offering built on top of the standard telecommunications line or Ethernet cable (WAN and LAN); routers and switches are the technologies that provide the WAN and LAN services.

A Managed Network typically means configuration, monitoring and fault resolution on these technologies.

Any company can utilize Managed Networks, as this frees up internal resources to work on in-house projects instead of managing the network.

What are some best practices that companies should follow in order to ensure secure, optimal performance across the various elements of their networks?

With the advent of MPLS technology, most of the security and performance issues have receded into the background. Best practices come down to properly configuring the LAN and WAN infrastructure and implementing services like QoS, SNMP, IP/SLA and monitoring.

What are some of the biggest advantages of using private bandwidth over the Public Internet?

Cost, first and foremost. Beyond that, using the Public Internet instead of private bandwidth brings the added complexity of configuring VPNs and the limitation of upload speeds compared to download speeds.

What are some of the leading causes of network performance problems?

Improper configuration is by far the leading cause of performance issues. Improper use or outright lack of VLANs, lack of QoS (Quality of Service), or other physical faux pas (like daisy-chaining) are the most common causes of performance problems.

What are some of the biggest mistakes that companies make when managing their own network infrastructure?

Neglect. Most customers that manage their own network tend to install it and forget about it, because that is not their core competency. Configurations become outdated, the infrastructure is not properly monitored, and most businesses grow and continue to add to the existing infrastructure without planning for that growth initially.

What should business owners look for in a Managed Network provider?

The key factors to look for in a Managed Network provider are knowledgeable support personnel who can visually diagram and explain the current status of the WAN and LAN, and who can lay out a solid plan to stabilize and manage the infrastructure moving forward.

Other considerations are the breadth and depth of their Network and Security Operations Center (NSOC), as they will become your lifeline for support moving forward. Can you call in and get to a technician without waiting? Are they helpful? Do they understand your network on the first call? These are key considerations to evaluate.

What are some of the greatest networking issues that companies will begin facing in the near future?

Bandwidth constraints.

Every application has moved to IP Transport. For example, the bandwidth requirements for Voice, Video Conferencing and File Sharing across multiple locations have shot through the roof. While some LANs have kept pace, the WAN Infrastructure has not necessarily caught up and is starting to fall behind.

Some customers look at Gigabit Ethernet on the LAN side, which is a one-time investment. On the WAN side, big monthly dollar spends are usually needed to bring in adequate bandwidth to support all of these bandwidth-hogging applications.

While technologies such as EoC (Ethernet over Copper) and fiber-based networks (traditional or something like Verizon FiOS) are starting to become more mainstream, it is still hard to cost-effectively ensure high-speed WAN connectivity for your business.

Anything else you’d like to add?

As part of being a Managed Network provider, we see some customers and prospects that struggle with anticipating their network infrastructure needs.

In today’s economic environment, more and more customers/prospects fail to identify future opportunities for growing or fixing their existing infrastructure. I’ve seen many customers upgrade their LAN, and then a year later, rip out all the new switches because they don’t support PoE (Power over Ethernet) for VoIP handsets. One thing to ask a potential partner/provider is what other applications they are working with and what other technologies need to be taken into consideration.

We understand that IT budgets are shrinking, but I would hate to see a company have to rip out gear that’s a year old because there wasn’t a detailed plan of action in place.