Archives for: August 2011

How To Evaluate The Support Capabilities Of SaaS Providers

Some SaaS applications are very intuitive and easy to use. For services like these, you probably won’t need to worry about support.

But for more complex SaaS applications – such as CRM – you’ll want to make sure that help and training are available when you and your employees need it. Fast support and quick resolution will save you a lot of time and money, while also preventing more serious problems from adding up and snowballing.

Below, I’ve listed a number of the most important support features that you should evaluate when comparing SaaS applications.


Live Phone Support:

The best way to get fast, accurate answers to immediate problems is to speak with a live agent. When evaluating live phone support, ask whether the call center is open 24 hours a day or only during certain hours. This level of service might be included in the price of the SaaS service, or it may carry an extra fee.

Answering Services:

Some companies will send their calls to an outsourced call center that’s only equipped to handle a limited number of client issues. If the problem is outside of this limited scope, it’s escalated to the internal support staff and you should expect to get a response within 24 hours. This is a smart financial decision for the SaaS provider, since outsourced off-shore labour is cheap and 90% of inbound calls are about the same handful of problems.

However, the price of outsourced labour has been rising steadily over the past 5 years. And many companies are finding that the resulting drop in customer service quality costs them more than the money saved by outsourcing to low-quality call-center providers.

Live Video Support:

Video is changing the way we do business, and we’re now starting to see the emergence of video-chat in technical support. Video facilitates the interaction since most of our interpersonal communication is done through non-verbal cues.

Also, the video approach allows the support staff and the end-user to exchange pictures and diagrams instead of trying in vain to describe hard-to-visualize ideas over an auditory medium.

Call-Me-Back Service:

This is a newer type of phone support that has emerged within the past few years. It does a very elegant job of eliminating those long on-hold wait times. Instead, you enter your phone number into an online form and the call center will dial your number when an agent is available.

Email Support:

Some companies offer email-based support. Email threads can become disorganized if you have to deal with multiple agents about different issues. If possible, try to insist on always dealing with the same agent for support questions. This person would act as your liaison within the company, and you could hold them accountable for the resolution of your problems. And much like going to the same doctor every time you’re sick, going to the same agent will help you build a history and a relationship.

Trouble Tickets:

Trouble tickets are similar to email, except that there is a sophisticated back-end system that coordinates communications between various parties, tracks incident history, and ensures fast and complete resolution of all customer problems.

Live Chat:

Live chat is a more economical substitute to live phone support, since a single agent is able to manage many chat windows at the same time for better productivity. As with live phone support, you’ll want to check the hours when live chat is available.

One of the SaaS services I use has chat support that is only available for 4 hours per day.

One of the drawbacks with live chat is that – unlike with trouble tickets – these sessions don’t usually build or track history. So you might call in every week about the same problem, but the root cause can’t be determined without a detailed incident history.

Automated Chat:

Some companies try to save even more money on their support costs by getting an artificial-intelligence bot to handle customer inquiries. You may believe that you’re speaking to a human, but you’re actually talking to a computer. If the computer can’t resolve your problem through its database of solutions, it will take your information and escalate the issue to a human.

I happen to know of several high-profile companies that use chatbots, but you’d probably never guess unless I told you.

If you’d like to see how powerful computer logic can be, I’d suggest you visit



Online Videos:

In the early days of the Internet, all of the experts were claiming that “information wants to be free”. Today, information is so free that we’re overloaded with the sheer abundance of it. Today, information doesn’t just want to be free, but it also wants to be presented nicely.

If your SaaS provider has an online video library, it can help you learn new concepts faster… with a richness that would simply not be possible through text. And if they publish on YouTube, you’ll have access to additional question/answer capability in the comments section.

Online Documentation:

Does your SaaS provider have a detailed online library of all their functionality? This is probably the single most important support feature you should look for, since it will allow you to have quick self-service access to problem solutions. Unfortunately, most SaaS apps are built iteratively, with new features being added, removed and modified every day. And in their rush to optimize, improve and modify the software, documentation often gets forgotten.


Frequently Asked Questions:

The vast majority of support calls revolve around a small number of problems. With a well-organized Frequently Asked Questions section, you can quickly find the solution to the most common issues without having to dig through tons of boring and irrelevant technical documentation.


Books:

Many SaaS founders have been kind enough to write in-depth books that explain every aspect of their software. These can be very helpful when evaluating solutions, or when learning complex SaaS apps. Some of these books are sold in print, some can be downloaded as ebooks, and others are available through small self-publishing printers.



Dedicated Training Agents:

Some companies have on-staff agents that will provide detailed, personalized walk-throughs of their services. They don’t just provide support; they offer expert tips and advice for optimal performance. Some companies charge for this service, others offer it for free, and others only provide it to top clients who spend a minimum amount of money.

Google AdWords has become well-known for their excellent support in this area.


Conferences And Events:

Some organizations will hold conventions, networking events and get-togethers for their users, partners and enthusiasts. One of the most famous such events in the SaaS space is Dreamforce, held by Salesforce CRM.


Certified Trainers:

Many of the more complex SaaS applications have “certified trainer” programs. You can attend a live seminar with one of these trainers, or hire them to come in and train your employees. There is usually a fee for these services, since the trainers are independent of the SaaS company.


Online Communities:

A thriving online community is usually a sign of a healthy SaaS service. These communities can be online message boards or social media groups. When you join an online community, you can get answers from other users, and search historical comments for solutions to your problems.

These communities are also a great place to find out how to resolve newly discovered bugs and issues that the company may not have addressed yet.

How Cloud Computing Affects HIPAA Compliance

In the United States, every healthcare provider and every company that deals with protected healthcare information (PHI) must adhere to the guidelines stipulated under the HIPAA act. HIPAA is designed to protect patient privacy, and does so by enforcing strict rules over how medical information is collected, handled, protected, used and disclosed.

Although HIPAA does not apply directly to third-party service providers, healthcare organizations must require that these providers sign contracts obligating them to handle all PHI in accordance with HIPAA standards.

When it comes to storing or handling medical data in the cloud in a way that’s HIPAA compliant, there are a few things you should consider.

  • You must ensure that the data never leaves US soil. If the data is physically moved to another country, it will be out of US jurisdiction. When this data is stored abroad, it may be subject to international laws which would force your cloud provider to take actions that would put you out of compliance.

  • When data is stored in the cloud, you need to make sure there is a way for you to know exactly where the data is physically stored, how many copies of the data have been made, whether or not the data has been changed, or if the data has been completely deleted when requested. When you hand over control to a third-party, you have no direct control or access to their hardware. (Many companies get around this by separating their private and non-private data, and using cloud systems for non-private or non-identifiable information)

  • You need to ensure that your cloud provider has adequate physical security measures in place. All servers should be in cages, with redundant power supplies, alternate recovery sites, live security guards, fire suppression systems, etc… Otherwise, your data could potentially be destroyed in a fire or a thief could walk in and steal hard drives containing confidential patient data.

  • Is your data deleted or wiped? When data is simply deleted, only the index of the file is eliminated. The actual data blocks still reside on the hard drive until they are eventually overwritten. In a virtualized cloud environment – where virtual servers and data are frequently moved around – this could create a potential security hazard. The only safe way to eliminate any trace of a sensitive file would be to delete the index and overwrite the data blocks. (But how can you ensure this has been done?)

  • Under the Patriot Act, the government may make a request to access patient information which is stored on the cloud provider’s server. Additionally, a gag order may be issued to prevent the cloud provider from disclosing this access to the healthcare provider. In this case, the healthcare provider would be unable to notify the patient, as required under HIPAA.

  • How will the data be stored on the cloud provider’s server? If you don’t feel comfortable handing unencrypted patient data to a third party, you can encrypt the data before sending it to the cloud provider. Although this is very practical for backup, it’s not practical for applications where the data must be manipulated on the third-party server.

  • Under HIPAA, healthcare providers are required to provide patients with the details of their information handling practices. However, many cloud providers would be reluctant to discuss or disclose their internal information security.

  • Under HIPAA, patients have a right to access any information stored about them, and to correct any inaccuracies. Verifying the integrity of patient data may be a challenge when relying on third-party systems.
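
The deletion point above can be sketched in a few lines. This is a hypothetical, single-pass overwrite for illustration only; on SSDs, copy-on-write filesystems and virtualized cloud storage, a file-level overwrite does not guarantee the underlying blocks are actually destroyed.

```python
import os

def wipe_file(path: str, passes: int = 1) -> None:
    """Overwrite a file's data blocks before unlinking it.

    A plain delete removes only the directory entry; the data
    blocks remain on disk until they happen to be reused.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))  # replace contents with random bytes
            f.flush()
            os.fsync(f.fileno())       # push the overwrite to the device
    os.remove(path)
```

Even then, as noted above, you have no way to verify what a cloud provider’s storage layer did with earlier copies of the blocks.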

Of course, I am by no means a HIPAA expert, and some of the information outlined here may have changed since its publication. For this reason, this article should not be taken as legal advice. If you’d like to speak with experts in cloud computing for medical practices, I’d recommend our good friends at Argentstratus.

If you’d like to learn more about your obligations under HIPAA, you can visit the Department of Health and Human Services for the most detailed and up-to-date information on this legislation.

Who Owns Data In The Cloud?

An interesting question was posed on a recent teleconference on data security: when a healthcare practice uses a cloud-based Electronic Medical Record (EMR) system, whose data is it? This is more than a debating point because, as illustrated below, significant issues can arise as the volume of stored data increases. The leadership of today’s healthcare practice needs to consider questions such as “Whose data is it, anyway?”, “Who is responsible for notification in the event of a breach?”, and “What happens when you want to switch software platforms?” before moving to a cloud-based EMR system.

Two principal means exist for delivering a cloud-based EMR service: Software as a Service (SaaS) and Application Hosting Providers. SaaS is generally “vendor” based, meaning that your software developer stores the application and data in their private data center for their subscribers to access. Vendors generally build their SaaS sites around their suite of programs. Application Hosting Providers are usually independent businesses that host your licensed software on virtual servers. Regardless of which service you choose, the real question remains: “Whose data is it?” The answer is, not surprisingly, “It depends.”

With a Software as a Service (SaaS) offering, an argument could be made that the data no longer belongs to the practice. It is the vendor’s software and data storage system; you are just dropping data points into it. If this is the case, then potentially the vendor, not the practitioner, owns and controls the data. And if the vendor owns and controls the data, then in the event of a data breach, accountability should lie with the vendor, not the practice. That is actually the ideal situation in the event of a breach, but you should evaluate the unintended consequences of this position.

A significant unintended consequence pops up when you decide to switch software platforms.  Before moving to any Cloud-based system, you should get answers to the following questions and ensure they are in the Business Associate Agreement and the subscription/service agreement:

  • If the data belongs to the vendor, how do you switch to another software system?
  • What is the cost to move the data?
  • If there is a dispute and you withhold payment of the bill, can the vendor hold your data hostage?

Keep in mind that your practice’s database could soon be terabytes in size, and transferring it will be no easy feat. Today, a charge of, say, $10 a gigabyte to transfer may seem reasonable because your database is only 20GB.

In five years, that same database might be 250GB, which leads to a transfer charge of $2,500, even though your practice might not have grown during that time. Remember, you are now responsible for holding more data for longer periods, which means larger data files. In addition, with the requirement to deliver a portable patient file on demand, you cannot off-load parts of the database to deep storage, because your EMR system holds everything.
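
To make the arithmetic concrete, here is a trivial sketch. The $10-per-gigabyte rate is this article’s illustrative figure, not a real vendor price:

```python
def transfer_cost(db_size_gb: float, rate_per_gb: float = 10.0) -> float:
    """Vendor's charge to hand your database back during a platform switch."""
    return db_size_gb * rate_per_gb

print(transfer_cost(20))   # today's 20GB database: 200.0
print(transfer_cost(250))  # the same practice in five years: 2500.0
```

The point is that the fee scales with data retained, not with practice growth, so a rate that looks harmless today should be negotiated with the future database size in mind.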

Another point to think about when moving to a Cloud-based service is responsibility for completing a record transfer request.  If you put in the request on behalf of a patient and the vendor doesn’t respond, you could be held liable under HIPAA.  You will want to make sure that your cloud-based provider understands their part in responding to an electronic record request.

With an Application Hosting Provider, the lines of ownership are a little clearer. The AHP simply hosts your licensed software on their virtual server.  This means, at least in the case of Argentstratus, that the data is yours, not the AHP’s.  Since it is your data, the Business Associate Agreement should spell out the circumstances when either party is responsible for data breach notification.  You will also want to verify the charge, if any, for data transfer.

The best way to protect your business is to ask hard questions and then document the answers in the service agreement or, better yet, the Business Associate Agreement. Here you will need to spell out who owns the data and who is responsible for breach notification and data transfer. Ensure the Business Associate Agreement includes the relevant parts of the service agreement. These issues should not be left to chance.

About The Author: John Caughell is the Marketing Coordinator for Argentstratus. They are leading experts in the field of cloud technology for the medical industry. If you have any concerns about privacy and security for PII or PHI in the cloud, get in touch with them. (PHI Protection and PII Protection)

How SMBs Benefit From Cloud Computing

The SMB market has seen some of the fastest growth in adoption of cloud computing technologies. This has been especially true when it comes to Infrastructure-as-a-Service (servers hosted in the cloud) and Software-as-a-Service (cloud-based applications, hosted by remote providers).

Because of the way in which these cloud-based technologies help Small and Medium-Sized businesses save money and remain flexible, they present a number of special strategic advantages for these organizations.

What do I mean when I say “Small and Medium-Sized businesses”? This can be characterized either by employee size or by revenue.

  • Typically, these are start-ups that have matured and moved into a growth phase. This is the phase in a company’s lifetime when many of the core business processes and internal regulations are established. They are often positioned for a merger or buy-out, and need to set the groundwork for a smooth transition.
  • Other companies in the SMB category might be smaller organizations that have broken off from a larger unit. They still have a lot of legacy processes that must be maintained, but must now do so with a smaller IT staff.

Small and medium-sized organizations have a lot to gain from moving their IT infrastructure to the cloud.

When companies reach the SMB phase, they start to have many of the same security and compliance concerns as larger organizations. This can be a problem since the IT departments at these organizations are generally smaller, with a less diversified skill set.

And since these companies have smaller budgets than large enterprises, it can be much more difficult to justify new internal IT projects. In this case, cloud computing is an attractive option since it requires no hefty up-front investment. Also, the cloud makes it easy and cheap to test and destroy experimental projects, eliminating a lot of risk from IT decisions.

One major advantage that these smaller organizations have over large enterprises is agility. Since they have fewer employees involved in the decision-making process, they can adapt more quickly to changes and opportunities in the market.

In the case of mergers and acquisitions, the cloud enables a number of technologies – such as desktop virtualization and Service Oriented Architecture (SOA) – which make the transition and transfer of business processes much easier.

We’re now seeing rapid adoption of cloud computing amongst the SMB market, with increasing adoption of mixed infrastructures combining both private and public cloud topologies.

Private Cloud: Essentially, just a virtualized datacenter. In this example, the company is responsible for purchasing, hosting, and maintaining the hardware themselves.

Public Cloud: The term “Public Cloud” is usually used when discussing IaaS, although it could equally apply to any cloud-based service that’s hosted by a third party. With Public Cloud IaaS, servers are hosted on third-party hardware, which is owned and maintained by the remote host. Other than that, there should be very little difference between a private virtual server environment and a public IaaS service.

Hybrid Cloud: This term describes any IT infrastructure which combines both internally managed servers and publicly hosted IaaS accounts. You’ll usually see this with companies that require external data feeds or integration with their internal systems (ex: ecommerce), or where a portion of the company’s data is too sensitive to be trusted in the hands of a third party.

For SMBs, cloud computing is an excellent option, and can really improve the agility and responsiveness of the organization.

Tools and Timing For Moving Servers To The Cloud (Video 2)

This is the second video in CoreVault’s 3-part series about moving your systems to the cloud. Often, businesses imagine “the cloud” as being this overwhelming concept that’s the exclusive domain of technology uber-experts.

But the reality is that everything about cloud computing revolves around making technology simpler, easier and more accessible to businesses. When your servers are in the cloud, you don’t have to worry about handling, troubleshooting or upgrading any hardware. All of that dirty work is taken care of in the background by the experts at CoreVault.

Instead, hardware becomes a commodity just like the electricity in your walls or the water in your faucet.

In a past video, CoreVault discussed the importance of finding the right cloud partner. In this video, CoreVault talks about the Tools and Timing that are required when moving servers to the cloud.

You need to have the right tools in place during the transition process so that you can determine the amount of resources that the server is using and the amount of data that will be transferred during the migration. Using this data, combined with other information, the client can be sized into the most appropriate package for their requirements.
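
As a rough illustration of that sizing step, here is a sketch using only Python’s standard library to measure how much data sits on each volume that would have to move. Real migration tooling also samples CPU, memory and I/O over time; this only covers the data-volume half:

```python
import shutil

def migration_profile(volumes):
    """How much data on each volume would have to move during a migration."""
    profile = {}
    for vol in volumes:
        usage = shutil.disk_usage(vol)
        profile[vol] = {
            "used_gb": round(usage.used / 1e9, 1),    # data to transfer
            "total_gb": round(usage.total / 1e9, 1),  # helps size the target package
        }
    return profile

print(migration_profile(["/"]))
```

Numbers like these, gathered over a few weeks, are what let the provider fit the client into an appropriately sized package.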

Once a migration plan has been put in place, the IaaS provider will work with the client to establish the most appropriate timing for the transition. Timing is especially important with mission-critical 24/7 systems.

An experienced cloud provider can help you in making this switch as fast and painless as possible, and help avoid many of the unexpected last-minute problems that come up with complex technical projects. If uptime is absolutely critical, you might want to ask about the possibility of setting up server replication for the fastest possible cutover.

When you pick the right cloud provider, you shouldn’t have to think about these tiny details. The cloud provider should be taking control, leading the way, and making you feel safe during this sensitive transition period.

For more help with migrating your servers to the cloud, you can contact CoreVault directly by following this link.

How To Tell When Virtualization Is A Poor Fit For Your Organization

Virtualization is so hot, so hip, and so sexy that vendors and the media seem to make it sound like a magic pill that can fix all of your IT management problems. However, this is not always the case.

Despite all of the hype, there are certain situations where server virtualization might not be ideal for your organization. And I’ve outlined a few of the most important points below.

Organizational Resistance

It’s hard to take on new projects if you can’t get the funds or the authorization. In addition to deciding if virtualization is right for you, you’ll also need to figure out how to convince other stakeholders of its value to the organization.

Skills and Training

Does your in-house IT support team have the proper training to set up and support these new virtualized systems? The management of virtualized servers is very different from traditional server management, and it requires special training and re-education. You need to make sure that you have people within your company who are willing to learn this new skill set.

Financial Considerations

Consolidating your servers into a virtualized environment isn’t free or easy. The migration process will require time, effort and money. Before you decide to make the move, you must establish a target ROI and decide if this project can realistically meet your desired target. If you aren’t struggling with high energy bills, scarce datacenter space, hardware maintenance burdens, or underutilized hardware, you might be fine just the way you are.

In addition to your current requirements, you should also evaluate your anticipated future requirements. If your projections don’t justify the investment and effort, then this might not be the right approach for you.

Vendor Support

Software vendors will only support their applications when installed in the proper configuration. You might think that a Windows 2003 box is the same as a virtualized installation of 2003, but your software vendor might not agree with you. These are important questions to answer before you start changing everything around.

And if your applications have specific hardware or configuration requirements, you’ll obviously have no choice but to adhere to those requirements.

You should also speak to your software vendor about their willingness to resolve issues relating to the new virtualized environment. When dealing with unusual or complex configurations, it can be very tempting for vendors to blame any problems on the new configuration.

If this is the case, you might end up creating more problems than you solved by virtualizing.


Performance Overhead

When you place a hypervisor between the OS and the hardware, you’re adding another layer which will affect the performance of the system. For this reason, virtualization is usually best-suited to under-used servers.

If you have a large database server that processes millions of transactions per hour, then you might be better off installing this server on its own isolated hardware.

If any of these reasons hit too close to home, you may want to re-evaluate your plans for implementing virtualization.

After all, virtualization isn’t just some new tool that you add to your arsenal. It’s a complete strategic shift in the way you manage your IT infrastructure. And that requires a lot of time, effort, training, and capital investment.

What Is Application Packaging

Like it or not, we live in a Windows-dominated world. And nearly all of these machines have different hardware.

As we all know, you can’t just copy a program onto Windows and expect it to work. Instead, it must go through an installation process where its roots embed themselves deep into the system.

Usually, this process goes fine. But sometimes the installation of these programs affects other applications in surprising and unexpected ways. This is especially true of the malware and viruses that we’re constantly installing and removing from our computers.

This is why – every once in a while – you need to completely wipe out Windows and re-install your programs from scratch.

For a large company, the maintenance of laptops and PCs is a major contributor to IT costs. In fact, reinstallation of operating systems and applications is one of the most common daily routines for the tech support department.

If there was a way to install desktop applications within their own isolated “bubbles” on the system… in such a way that their installation does not affect the core OS or other programs… then this would reduce a lot of support calls and save the company a lot of money.

This is what application packaging tries to do.

Application packaging can almost be thought of as a type of virtualization that runs on top of the OS itself.

Before installing a new program, you use special software to give it its own “sandbox”, or virtual environment. Within this environment, the application has access to its own set of virtual system resources (registry, files, etc.).

This way, you can install the program without having it affect the underlying operating system. And the application itself will remain isolated in such a way that viruses and malware cannot affect it.
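
As a toy illustration of the “bubble” idea, the sketch below remaps every path a cooperating program asks for into a private root directory. This is not how real packaging tools work – they intercept filesystem and registry calls at the OS level, transparently to the application – but it shows the redirection principle:

```python
import os

class FileSandbox:
    """Remap an application's file paths into a private 'bubble' directory."""

    def __init__(self, virtual_root: str):
        self.root = os.path.abspath(virtual_root)
        os.makedirs(self.root, exist_ok=True)

    def resolve(self, path: str) -> str:
        # Strip the drive letter / leading slash so that an absolute
        # path like C:\App\config.ini lands inside the sandbox root.
        relative = os.path.splitdrive(path)[1].lstrip("\\/")
        return os.path.join(self.root, relative)

sandbox = FileSandbox("app_bubble")
print(sandbox.resolve("/etc/app/config.ini"))
```

Because every “system” path the application touches actually lives under one folder, uninstalling it is as simple as deleting that folder, and nothing it writes can clobber another program’s files.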

But this isn’t a magic pill. Application packaging will not help if end-users install all sorts of unauthorized software directly onto their machines. That’s why your company still needs to set up and enforce policies around this type of activity on company hardware.

Having said that… application packaging is great for prolonging the life of your employees’ desktop and laptop configurations and reducing the overall workload for IT support.

The Benefits Of Virtual Appliances

The “hardware appliance” delivery model is one that’s very convenient, and appeals to both consumers and vendors. It’s no wonder that pre-configured computer appliances have been so well received in both the consumer market and the enterprise. Today, you can purchase everything from storage servers, to NAS devices, to DVRs as all-in-one boxes that have all of the necessary hardware and software pre-installed and pre-configured.

For example, my house is currently watched by IP security cameras that have built-in computers which send the images to my storage server over our internal network. Having the computer built into the camera means that I don’t have to worry about any messy wires or new software installation.

And since all appliances have identical hardware and configuration, technical support and customization are much easier.

Of course, adding a new piece of single-purpose hardware to your datacenter cancels out all of the benefits that you might have obtained by installing this software yourself as a virtual machine.

This is where Virtual Appliances come in.

Virtual appliances are designed to give you the simplicity and ease-of-use of an appliance, but with the cost-savings and efficiency of a virtualized server.

Virtual Appliances are complete pre-packaged virtual servers which can be quickly added with little or no post-installation customization. All of the ugly software maintenance has already been done for you beforehand.

Virtual Appliances have a number of unique benefits that make them particularly appealing for use in virtualized environments:

  • Because virtual appliances come ready-to-go, they can be installed in only a fraction of the time that it would take to build a new server, and install or configure the operating system and applications.  This frees up a lot of time for already over-burdened IT staff.
  • Because the package is pre-configured, there will be fewer support calls into the vendor. And technical support calls are also shorter because the vendor already knows how the operating system and programs have been installed and configured.
  • The new servers can be installed and deployed without any special training beyond what would normally be required to install any other virtual server. This simplicity makes it very attractive to organizations with smaller IT departments and limited access to specially trained experts.
  • Because the vendor has already tuned and optimized the virtual server for you, there is less chance of error during the installation process and the new system will usually run more efficiently.

Currently, there are a number of vendors embracing the use of virtual appliances.

In much the same way that the introduction of the PC completely changed the way IT departments were structured, virtualization and virtual appliances will require IT administrators to re-define the way they work.

How Do You Move Your Servers To The Cloud? (Video 1)

On this blog, we’ve spent a lot of time talking about the amazing benefits of virtualization and cloud computing. But have you ever wondered what first steps you’d have to take before moving your servers to the cloud?

Well then, you’ll definitely want to check out this next series of videos from our friends over at CoreVault.

If you’ve been evaluating the many features that the cloud has to offer – and you’ve decided that you’d like to dip your feet in the water – there are some steps you can follow to ensure a smooth transition.

Amongst the most important pre-migration points that you’ll need to consider is which partner to choose. You’ll need to do your homework and make sure that you partner with a company that has the technical capabilities, certifications, staffing, reputation, service, support… and a range of other criteria.

In order to address all of the most important questions that you might have during the migration process, CoreVault has put together a helpful 3-part video series that covers everything you need to know about cloud migration.

If you’d like to learn more about cloud hosting, I’d encourage you to visit our good friends at CoreVault for more information.

Key Points to Consider When Upgrading Or Designing A New Datacenter or Server Room

Your business is facing an interesting challenge when it comes to designing your enterprise data center.  The previous method of having servers support each office no longer delivers an effective or secure solution.  So what should you consider as you plan your updated data center?  The first thing to weigh is the operating needs of the business.  Next, it is important to determine whether virtualization is a viable option.  Another area for leadership to study is how redundancy of computer components helps or hurts the business.  Finally, the executive team should study whether a Cloud service provider can deliver the needed resources safely and at a substantially reduced cost.  With these issues in mind, management can design a data center that benefits the business.

Far too often, the business creates a data center that does not support the strategy.  This potentially leads to higher operating costs and missed opportunities.  For example, a month ago we met with a large insurance company that decided to open several new branch offices.  Each office has a server, and these feed to the central data center for backup and archive services, but not for data sharing.  When speaking with the executive team, it became clear that they lacked an operational vision for how the data center should operate in support of their growth strategy.  In the absence of that vision, the IT department simply fell back on the old model of putting servers in each location.  The IT department needs leadership to design the best solution to support the vision, or old methods will continue to hamstring opportunities.

The next thing to consider is the role of virtualization.  Today’s data center can potentially reduce hardware costs by creating several virtual servers on a single physical server.  Where once there were many servers scattered throughout the country, an enterprise can now work with fewer servers in a central data center, substantially reducing risk and costs.  This method requires running virtualization software such as Microsoft’s Hyper-V or VMware’s vCloud.  Some important things to consider are the cost of the virtualization software and the technical skill required to run a virtualized data center.  It is also critical to make sure that sufficient bandwidth exists at both the data center and remote locations.

Another important issue when considering replacing servers is the role of redundancy.  Five to ten years ago, the concept of redundancy made sense, as a lost hard disk or drive controller effectively shut down the server.  Today the main problem with this concept, at least from a multi-server data center standpoint, is that computers, not components, are the redundancy.  Since each machine works as part of the whole, stripping out redundant components substantially reduces cost.  This approach, however, can lead to additional risk if too few servers make up the data center.

When planning your data center and server replacement policy, one concept to keep in mind is using a Cloud server solution to deliver the enterprise’s computing needs.  In a Cloud environment, the provider virtualizes the physical servers to keep utilization high and ensure that each client has the computing resources necessary to support the business.  Cloud server providers, such as Argentstratus, build servers to reduce the cost by stripping out unnecessary redundancy while employing virtualization managers to keep the servers running at high utilization and putting data on multiple drives to keep it safe.

Your business can no longer effectively compete without seriously considering the role of the server today.  Your data center must support your strategy; but this requires looking at leveraging today’s technology and understanding available options such as virtualization and reducing hardware costs.  To succeed however, you seriously need to consider if the costs of owning and operating an internal data center are substantially lower than outsourcing to a Cloud provider.  There are other issues to consider, such as data security and data center location, but if profits and cash flow are critical, outsourcing could provide a critical edge over your competition. In the end, however, it is essential that the executive team ensure IT supports the vision, not the other way around.

About The Author: Argentstratus provides specialized IT services and private virtual offices for the healthcare industry.

How Virtualization Helps With IT Maintenance Costs

When we think about the costs associated with owning and managing IT systems, maintenance costs and hardware costs are 2 of the first that come to mind.

And thanks to Moore’s law, hardware prices keep dropping year after year. But as these hardware costs continue to fall, IT maintenance costs increasingly stand out as the most important area for potential cost-savings.

As we all know, servers don’t manage themselves. There are a number of maintenance tasks which must be performed by live human beings:

  • Hardware monitoring, diagnosis and troubleshooting
  • Repairing, upgrading and replacing obsolete or defective hardware
  • Installing and configuring software and operating systems
  • Troubleshooting OS and software, and managing patches
  • Monitoring server resources such as disk and memory
  • Backup, archiving and disaster recovery planning

These are all jobs that – for the most part – can’t be outsourced to some off-shore company over the Internet. Instead, they must be done on-site by a real live flesh-and-blood human being. This is good for IT administrators, because it means their jobs will remain safe for a while.
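Some of these checks do lend themselves to partial automation, even if a human still has to act on the results. As a minimal sketch (assuming a POSIX-style filesystem path; thresholds and escalation policy are choices you would tune for your own environment), Python’s standard library can report disk usage for a monitoring script:

```python
import shutil

def disk_usage_pct(path="/"):
    """Percentage of the filesystem at `path` currently in use."""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total * 100

def needs_attention(path="/", warn_pct=90.0):
    """Flag the volume for an administrator once usage crosses warn_pct."""
    return disk_usage_pct(path) >= warn_pct
```

A cron job could run a check like this hourly and notify the administrator, so that the human time goes into fixing problems rather than finding them.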

As labour costs continue to take up a larger percentage of IT costs, virtualization increasingly becomes a more attractive option for companies.

By consolidating all of your servers onto just a few pieces of hardware, you can greatly simplify the physical management of datacenter hardware. Many organizations have reported operations cost-savings of 40% for every server moved to a virtualized environment.

Of course, virtualization is not a magic pill.

Although virtualization can help reduce the costs of managing the physical hardware, the software-related maintenance tasks remain the same.

(The cloud promises to offer further cost-savings by completely eliminating any hardware-related maintenance and enabling cost-effective outsourcing of backups. Also, SaaS can help eliminate software-related maintenance altogether. However, I’ll save this discussion for another post.)

We haven’t seen the end of the single-purpose box yet. Companies continue to add more servers and grow their data storage at a faster rate than ever. But virtualization allows these same companies to continue growing their datacenters without having to grow their operation costs in the process.

7 Reasons Why Testing Backups Is Critical

One of the oldest clichés in the data protection world is that 9 in 10 companies will fail after a major data loss incident. And this should not be surprising to anyone.

Given how critical digital information has become to the daily operations of our businesses, it’s astounding to see how many organizations still don’t have a process in place for the regular testing of their backups.

How can this be?

Many companies simply don’t feel that testing is a necessity, or aren’t aware of everything that can go wrong when recovering backups in an emergency.

That’s why we’ve put together a short list of key reasons why your organization should consider testing your backups on a regular basis.

Reason 1: Hardware Failure

When you think about failed backups, this should be the first scenario that comes to mind.

Given enough time, any tape, hard drive, flash drive or other physical device will eventually break down.

Reason 2: Human Error

Since the responsibility for backing up data – arguably the most important security-related task within your company – is usually assigned to the most junior employee, there is a lot of room for human error. At most companies, the backup administrator is somebody with little or no IT training. So when you need to make an emergency recovery, there’s a chance that your backups might’ve never been properly processed in the first place.

Reason 3: Physical Security

Many companies keep their backups on-site at the same location as the primary production server. This not only leaves them open to theft (backup tapes are highly prized amongst hackers), but also at risk if the primary location is destroyed (flood, fire, etc.).

You need to ensure that you have at least 2 copies of your data, stored at a significant physical distance from each other, and that this data can be quickly retrieved in an emergency.

Reason 4: Technology Change

Your IT infrastructure is constantly in a state of change. New servers are being added, modified, moved and removed… and your backup and recovery process has to take these changes into account. Nothing could be more devastating than adding a new critical database to the datacenter, but forgetting to notify the backup administrator. This is particularly common with companies that manage private clouds of in-house virtualized servers.

It’s also important to make sure that your backups are backward-compatible. Imagine having to restore a file, but being unable to locate the legacy program in which it was created.

Reason 5: Cost

By testing your backups frequently, you’ll also find new ways to save on storage costs while refining the speed and consistency of your backup process. This is an immediate benefit that puts money in your pocket.

Reason 6: Restore Speeds

Now that the world is moving towards twenty-four-hour business, the costs associated with downtime have skyrocketed. If your server goes down for just an hour or 2, Twitter and Facebook will spread the message, and your reputation will be hurt.

Testing helps you identify your most critical systems, and set priorities for their continuity and recovery.

Also, practice helps ensure that everyone knows their role during the emergency. A time of panic is no time to come up with improvised solutions.

Reason 7: Learning

Every recovery is slightly different from the last. When you test your backup recovery process, you’ll learn new things about your IT infrastructure that can help you reduce backup and storage costs, improve overall security, and improve backup and recovery speeds.

What if your servers crashed, and your IT guy had quit 6 months ago? These drills also give you an opportunity to share emergency recovery knowledge with others in the company. That way, your survival doesn’t need to be in the hands of any single person.

About The Author: The author’s online server backup solutions are among the fastest and most secure on the market. A free trial is available so you can try it in your environment.

Tape is Still Great

When asked recently where I didn’t agree with the status quo in IT, it came to me almost instantly:  tape.  It seems that tape has many more enemies than friends these days, but it still has its place and I’m tired of hearing “do you still use tapes in your business?  Quit wasting your time and move to our fabulous XYZ disk/cloud solution!”  I feel like I hear or see this message every day.

Most days I’m working with small businesses to better prepare them for technological disasters, and during that work I often design, install, and support backup and archive systems.  While disk or cloud based solutions often play a role, tape is still very useful for a number of situations.  Let’s take a look at the strengths of tape versus the strengths of disk based backup, why and where tape will continue to be useful, and along the way debunk some myths slung around about tape.

Benefits of Disk Based Backup/Archive

While trying to show that tape is awesome, or at least still relevant, we first need to address why disk based backup ever caught on: if tape was so great, how did disk gain so much ground?  Disk caught on because it does have some advantages, many of which are complementary to tape:

  • Quick and easy restores, especially small restore sets
  • No physical system access required to swap media
  • No moving parts to break (other than drives themselves)
  • Easily handles multiple, simultaneous backups

With disk based backup/recovery systems, it’s very quick and easy to recover files, especially if only a few files or small files are needed.  Also there is no fussing around with tapes to worry about; all of your capacity is online and available 24 hours a day.  That is convenient.

Tape systems often contain libraries with hundreds or more tapes, which require lots of moving parts to move tapes from slot to drive, in and out of the library, etc., which can increase the amount of maintenance and hand holding required to keep a library moving.

While only the best (and most expensive) hard drives beat the speed of modern tape drives, most disk systems contain an array of many drives, yielding high speed recoveries, especially when taking into account the time to retrieve, ship or pick up tapes from an off-site location.

Most large backup/archive systems take advantage of these disk benefits by first sending all data to disk, then moving it to tape.  This setup provides the advantages of both mediums:  restores of small or recent data happen quickly from disk, and rarely needed long-term data is still available at a lower cost per GB.  Backing up to disk first also enables sites to offload data quickly from all hosts during a small backup window (say midnight to 5am), as disk can more easily handle many streams of data at once.  Once all the data is on disk, the tape drives can then be kept running at high speed (tape behaves badly when sent data at changing rates; Google “tape shoe shining”).

Downside of Disk Based Backup/Archive

  • Disk systems must be replaced about every 5 years
  • Disk systems are difficult to move for off-site backup

I’m sure there are even more downsides, but that is what comes to mind.  Some will chime in that it’s very straightforward and convenient to use disk based systems and replication for off-site backup; this is true, but it only works if you have a relatively slowly changing data set, a small data set, or a very large network connection.

For example, say we have a small video production company in Los Angeles, California, with a subsidiary in New York City.  Video does not compress well, and a busy shop can easily generate around 5TB of new footage per day (uncompressed HD video runs well over a gigabit per second, so even a few hours of multi-camera recording adds up quickly).  Over a 10Mbps connection (a decent upstream connection for small to midsize business), it would take about 7 weeks to transmit just one day of recording.  Even with a 100Mbps upstream, which is very expensive for all but large businesses, it would still take about five days to copy (not fast enough to keep up).

Compare this to FedEx or a courier, which can deliver as many tapes as you want in less than 24 hours… in this case the equivalent of 213GB/hour, or 485Mbit/sec.

There are other issues as well when trying to synchronize data over a WAN (or Internet) connection, including the effect of latency on transfer rates, and other internet traffic sharing the same link (many suggest assuming the maximum transfer rate of about 80% of the actual speed, to account for overhead and other traffic).
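The back-of-the-envelope math above is easy to reproduce for your own data volumes. Here is a small sketch; the optional efficiency factor is the 80% rule of thumb just mentioned, and the example figures are the article’s, not a benchmark:

```python
def transfer_days(data_tb, link_mbps, efficiency=1.0):
    """Days needed to push data_tb terabytes through a link_mbps link."""
    bits = data_tb * 1e12 * 8                      # terabytes -> bits
    seconds = bits / (link_mbps * 1e6 * efficiency)
    return seconds / 86400

# 5TB over 10Mbps: about 46 days (nearly 7 weeks); at 80% efficiency, closer to 58
# 5TB over 100Mbps: still between 4 and 5 days at full rate
```

Running these numbers against a courier’s fixed 24-hour delivery makes it obvious why shipping tapes still wins for large, fast-changing data sets.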

Benefits of Tape Based Backup/Archive

  • Speed per drive (LTO-5: 140MB/sec)
  • Higher media reliability
  • Long media life (20-30 years)
  • Long term drive availability (~10 years)
  • Lower power consumption
  • Easily transportable

While tape has been around for over 60 years, today’s tape is much improved.  Consider that the first LTO generation, LTO-1, held just 100GB of data and had a maximum throughput of 20MB/sec.  Today’s latest LTO-5 generation operates at 140MB/sec and holds 1.5TB uncompressed, a speed unsurpassed by all but the fastest (and most expensive) enterprise hard disk drives.

The other strong point of tape is that it is built for backups and archives.  It’s not designed for random access or high-speed seeks (as hard drives are), but rather for longevity and reliability.  In fact, LTO has a bit error rate of about 1 in 10^16, an order of magnitude better than enterprise SAS/FC disk, and two better than SATA.  Tape drives also verify each block immediately after writing it to prove the data was written correctly, and the magnetic medium on which the data is stored is more stable than disk.

Many vendors and salespeople proclaim that tape is unreliable, and that up to 70%+ of tape restores fail, but that is neither my experience nor the experience of others with a documented story.  There may very well be many tape recovery failures, but they are almost always related to a site that rarely, if ever, tested recoveries until they needed them.

Green is a buzzword these days everywhere, including IT, and tape is better here in most cases as well.  While idle disk drives draw nearly as much power as active ones, idle, un-mounted tapes use no power at all.  Also, while you may have hundreds of tapes, you are likely to have just a few drives, each of which uses about 1-3 times the power of a hard drive in use.  Thus if you have three drives operating all the time, you may use about 9 hard drives’ worth of power; however, you can store as much data as hundreds of hard drives.

This results in a dramatic power use drop for data that can be moved to tape.

Home Run Tape Use Cases

  • Large backup data sets (lower price per GB)
  • Rarely accessed information (archives)
  • DR copies of large, uncompressible data

For businesses with relatively large data sets (say greater than 2-3TB), tape still makes a lot of sense as an off-site backup medium.  When businesses reach this size, often online backup systems do not work well or can be prohibitively expensive (often costing from $1-5/GB or $1,000+ per TB per month).  Things get even worse with more random data, such as engineering drawings, pictures, and videos, as deduplication technology is of no help (such as the example above under Downside of Disk Based Backup/Archive).

For companies that need to archive data (that is, store data that may not be needed for months or years, if ever), tape again can come to the rescue.  Many smaller businesses have fairly large storage systems with several terabytes of storage.  Storing archived data on these systems is expensive and of little benefit, costing at least $0.26/GB (RAID 10/Enterprise SATA disks, not including server costs) versus $0.05/GB on tape ($0.10/GB for two copies on tape).  Removing large quantities of data from the servers also lessens the burden on your backup system, as that data need not be copied or scanned each day.
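Those per-gigabyte figures make the comparison easy to run for your own archive sizes. A quick sketch (the default prices are the article’s estimates, including two tape copies, and are not current quotes):

```python
def archive_costs(gb, disk_per_gb=0.26, tape_per_gb=0.05, tape_copies=2):
    """Rough storage cost of keeping `gb` of archive data on disk vs. tape."""
    return {
        "disk": round(gb * disk_per_gb, 2),               # RAID 10 enterprise disk
        "tape": round(gb * tape_per_gb * tape_copies, 2), # two identical tape copies
    }

# 4TB of archives: about $1,040 on disk vs. $400 for two tape copies
```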

Keep in mind that even when using tapes for archives, you still need a backup.  I recommend keeping two identical tape copies for long term storage of archival data; one on-site, and one off.


In closing, I hope you are convinced tape is still great and that it is appropriate for some data backup and storage use cases.  While disk has advantages of faster access and less “hand holding,” tape takes the cake just about everywhere else; from the long term reliability to price per GB, tape is here to stay.

About the Author

Nick Webb is the founder of Seattle, Washington based Red Wire Services, LLC and has made a career of designing solutions that improve data availability and enable disaster recovery. Nick brings more than 10 years of experience planning, implementing and maintaining best practice solutions for systems management for a wide range of organizations. His firm, Red Wire Services, specializes in preparing small and medium-sized businesses to survive technological disasters — while also helping unprepared organizations recover their data and systems so they can return to operation.

Follow @RedWireServices on Twitter.

Resources/Related Articles

How Does 2-Factor Authentication Work?

When your employees are working from within the office, it’s a fairly simple matter to control who gets in and out of the building. But when employees are accessing the internal systems from external networks, security becomes a much more serious challenge.

If someone steals an employee password, how would you catch and stop the attacker?

This is where 2-factor authentication comes in.

2-factor authentication relies on adding an extra layer of security to the authentication process, and makes it more difficult for unauthorized individuals to use stolen credentials. Most 2-factor authentication schemes rely on asking the end-user to provide 2 of the following:

  • Something you know
  • Something you are
  • Something you have

Just to give you a better idea, here are a few examples of 2-factor authentication:

  • A hospital worker must also scan their fingerprint when entering their login/password information.
  • As mentioned earlier, PayPal is currently testing a new type of card which generates a time-sensitive code that must be used when authenticating a payment.
  • When you use your credit card, you must present both the chip and your pin code in order to complete the transaction.
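Many of these “something you have” tokens work by generating a time-sensitive code from a shared secret. A common scheme for this is TOTP (RFC 6238), which is simple enough to sketch with Python’s standard library; the base32 secret below is the RFC’s published test key, not a real credential:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """Time-based one-time password (RFC 6238, HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test key (the ASCII string "12345678901234567890" in base32)
SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
```

Because the code depends on the current 30-second window, a stolen code is useless moments later, which is exactly what makes this second factor valuable.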

Although many are saying that this trend will present new opportunities for biometric identification, I strongly doubt it. I believe that fragmented offices, outsourcing and remote workers are the future. And without on-site supervision, these biometric systems can be cheated. (Although biometrics will certainly have a bright future for other purposes)

Even the medical industry is moving to tablets and mobile devices, creating a whole new set of HIPAA challenges.

Given the anonymous nature of remote access, and people’s naturally lazy password protection habits, 2-factor authentication will continue to become more important in the future.

What’s The Difference Between Hashing and Encryption?

Many people confuse hashing with encryption. However, these are 2 completely different processes that serve completely different functions.

When we think of a typical encryption use scenario, we imagine one party scrambling a message using a secret key, then sending it to a second party who can decrypt the message because they also have access to this secret key.

But encryption has a few key problems associated with it: the keys can be stolen, deciphered or leaked. (Although deciphering is usually highly unlikely or very expensive)

Let’s suppose that you’re the owner of a large e-commerce web site. Keeping a database of usernames and passwords can be very dangerous, since a hacker could potentially break in and steal this information.

At first, you may think that encrypting all of the user passwords might be a good idea. However, these could still become compromised if a malicious person got their hands on the encryption key. (Most system attacks are internal)

So what do you do?

This is where a hash comes in handy.

A hashing algorithm will take a variable-length block of text and perform a one-way mathematical transformation which outputs a fixed-length digest.

For example, I could use a 160-bit hashing algorithm to convert this entire article into a 160-bit digest. If you suspected someone of having secretly altered the article, you could run it through the same hashing algorithm and compare your new hash to the one I had created earlier. If they are different, you know someone has changed the text.
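Here’s what that looks like in practice with SHA-1, a common 160-bit hash (used here purely for illustration; SHA-1 is no longer recommended where collision resistance matters):

```python
import hashlib

def digest(text):
    """160-bit SHA-1 digest of a block of text, as 40 hex characters."""
    return hashlib.sha1(text.encode("utf-8")).hexdigest()

article = "Many people confuse hashing with encryption."
tampered = "Many people confuse hashing with encryption!"

assert digest(article) == digest(article)   # same input, same digest
assert digest(article) != digest(tampered)  # one character changed, new digest
```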

But what really makes hashes unique is that the original message cannot be derived from the hash… even if you know exactly which hashing algorithm was used.

As seen in the example above, a web site could hash a password from a web form and compare that output to the digest which is stored in the database. If the 2 hash digests match, you know that the password was correct.

And this can be accomplished without storing any passwords in the database.
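A sketch of that login check, using Python’s standard library. Note that a real system should use a salted, deliberately slow construction such as PBKDF2 (shown here via `hashlib.pbkdf2_hmac`) rather than a single bare hash; the round count is an illustrative choice, not a recommendation:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, rounds=100_000):
    """Return (salt, digest); store both in the database, never the password."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, rounds)
    return salt, digest

def check_password(password, salt, stored, rounds=100_000):
    """Recompute the digest from the submitted password and compare."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, rounds)
    return hmac.compare_digest(candidate, stored)
```

The constant-time comparison (`hmac.compare_digest`) is a small extra precaution against timing attacks on the comparison itself.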

A good hash should be highly volatile, where a small change to the input should produce a large difference in output. Also, a good hash should make it very difficult for 2 sets of input data to produce the same output.

With a good 160-bit hash, the chances of any 2 inputs producing the same output digest would be about 1 in 2^160, or roughly 1 followed by 48 zeros. So you should be fairly safe from a brute force attack.

There are 2 scenarios where hashing is a better fit than encryption:  when you want to verify whether a block of information has been accidentally changed in transit, or when you want to compare 2 inputs to see if they match.

Most Popular Types of SaaS and Cloud Software

It’s really pretty amazing how far cloud computing has come in such a short time. Today, you can get just about any type of enterprise system on a pay-as-you-go basis, without having to buy and maintain any bulky hardware.

Just to show you how much variety there is in the cloud market space, I’ve put together a short list of the most common SaaS applications that are now available to businesses.

  • CRM Software – Manage customer information, automate marketing, and track sales through the pipeline.
  • ERP Software – Improve process efficiency and empower information sharing across the organization, while giving management better insight into workflow and productivity.
  • Accounting Software – Since finance is the “language of business”, you want to make sure your “grammar and spelling” are perfect. SaaS accounting software helps you keep your finances organized and properly tracked, without needing a finance degree.
  • Project Management Software – Track scope, requirements, progress, changes, communications and deadlines to ensure projects get completed in the shortest possible time while meeting stakeholder requirements.
  • Email Marketing Software – Automate email marketing and relationship building, while optimizing message delivery.
  • Billing and Invoicing Software – Save time and improve customer satisfaction by automating billing and invoicing: provide customers with self-service payment options, reduce data entry costs, and eliminate expensive errors.
  • Collaboration Software – Simplify communications across the organization, prevent information silos, and empower employees to follow complex interactions more easily. A good collaboration system allows for more effective communications and a more productive enterprise.
  • Web Hosting and Ecommerce – Everything you need to do business on the Internet. This includes web hosting, CMS systems, Message Boards, Shopping Carts and more.
  • HR Software – Track employee hours, hire more intelligently, schedule more effectively, automate payroll, and manage every other aspect of your human resources management.
  • Public Sector, Compliance and EDI – Certain industries have standardized rules that must be strictly followed by all parties. Thankfully, you can leverage cloud applications to simplify inter-organizational communications and ensure that you’re following the rules.
  • Vertical Applications – Everything you need to operate a photography studio, martial arts dojo, real-estate firm, law office, medical practice, or just about any other business. Every industry has its own management applications, designed to optimize productivity and profitability.
  • Transaction Processing – Accept credit cards, process bank transfers, keep a tab, publish and track coupons, barter or even run a loyalty rewards program. Cloud software offers unlimited ways to accept payment for your products.

Am I leaving anything out? Leave a comment below and let me know.

3 Main Security Concerns When it Comes To Storing Data In The Cloud

Since cloud computing offers such amazing convenience and cost savings for organizations, many people have been asking questions about the safety and security of data in the cloud.

This is a very complex and polarizing issue, and everyone in the technology industry has their own opinions on this matter. But the majority of the arguments fall into one or more of the following 3 issues:

  • Confidentiality of Data
  • Integrity of Data
  • Availability of Data

Although these arguments would apply just as well to PaaS or SaaS cloud services, end-users in these cases get less control and transparency when it comes to how their data is handled. For this reason, we should assume that these arguments are primarily aimed at IaaS (Infrastructure-as-a-Service) cloud services.

Confidentiality of Data Stored In The Cloud

Before sending sensitive customer information into the cloud, you first have to think about your legal compliance obligations in addition to your data security obligations. This will help you decide on a number of different factors relating to how this information is handled on the cloud provider’s servers.

When it comes to confidential or personally identifiable information, you will probably want to have it stored in encrypted format when not in use. If this is the case, there are 3 decisions that you’ll need to make regarding the encryption of your data:

  • Should you be encrypting the data on your end before sending it over to the cloud server, or will the encryption be done on the server side?
  • What kind of encryption methodology will you be using to protect the data? Generally, there is a trade-off as processing overhead will increase in proportion with the strength of the encryption algorithm used.
  • Will you be encrypting the data yourself, or relying on your cloud provider? If you choose to encrypt the data on the server-side, implementing your own data encryption will help ensure maximum control, transparency and security. If you decide to let your cloud provider encrypt this data for you, ask about their methodology. (ex: Does each client get their own encryption key, or is every client encrypted using the same key?)

If you’re at all sceptical about storing sensitive data in the cloud, another option would be to set up a hybrid cloud where only low-sensitivity data is sent off to the cloud… and all sensitive information is strictly processed on your own in-house servers.

Integrity of Data Stored In The Cloud

Once your data has been stored in the cloud, how can you be sure that it hasn’t been corrupted or modified? In addition to the loss of a valuable and costly business asset, corruption or loss of critical data can also put you out of compliance with your legal obligations.

That’s why it’s important to frequently check and monitor the integrity of your data when stored in the cloud.

But this presents a special challenge. Since cloud computing charges you for resource usage on an as-needed basis, how do you verify data integrity while keeping costs to a minimum? This is further complicated by the fact that cloud servers are in a constant state of change and dynamic movement, and customers have little or no way of knowing how and where the data is being physically stored.

Thankfully, there are a number of mathematical algorithms which accomplish this purpose in an efficient manner.
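One simple version of the idea (a much-simplified relative of the challenge-response schemes in the research literature, shown here only as a sketch) is to precompute a handful of keyed digests before uploading, then spend one per integrity check so the provider cannot simply replay a previously stored answer:

```python
import hashlib
import hmac
import os

def make_challenges(data, count=5):
    """Before upload: precompute (nonce, tag) pairs and keep them locally."""
    challenges = []
    for _ in range(count):
        nonce = os.urandom(16)                          # fresh random key per check
        tag = hmac.new(nonce, data, hashlib.sha256).digest()
        challenges.append((nonce, tag))
    return challenges

def verify(data_from_cloud, nonce, expected_tag):
    """Later: hash the retrieved data with one unused nonce and compare."""
    tag = hmac.new(nonce, data_from_cloud, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected_tag)
```

Each (nonce, tag) pair is used once; when they run out, you must re-read the data to mint more, which is one reason the published schemes are considerably more elaborate and bandwidth-efficient than this.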

An extra layer of complexity comes into play when you’re trying to verify the integrity of encrypted data.

Availability of Data Stored In The Cloud

If you’re going to be running critical business services out of the cloud, you have a lot to lose if any of these servers should ever become unavailable.

There are 3 basic areas that you need to think about when considering the availability of servers stored in the cloud.

Network Integrity: If a single cloud provider is storing many servers at a single datacenter, hackers could do a lot of damage by attacking the network which connects this datacenter to the outside world.

Cloud Provider Uptime: It’s important to monitor the uptime and performance of your cloud providers in order to ensure that your servers have the highest-possible availability. (In recent years, Amazon has acquired a poor reputation when it comes to this)

It’s also important to consider the financial health of your cloud service provider. You need to ensure that they won’t be going out of business any time in the near future. Nothing is worse than having to move your entire IT infrastructure to another host with only a few days’ notice.

Data Backup: Whose responsibility is it to back up data stored in the cloud? Will your provider be doing it for you, or is it your own responsibility? And where/how will this backup data be stored?

If your provider is backing up the data for you, will this service be done for free, or is there an extra charge? And how can you test your backup and recovery process?