Archives for: October 2012

Frequent Amazon Outages Raise Skepticism

The advantages of cloud computing are abundantly clear. Especially for smaller businesses, hosting applications in the cloud offers greater cost savings, better efficiency, stronger security, and a host of other critical benefits. It allows businesses to host their enterprise systems without the need to hire IT staff, buy hardware or build a datacenter.

Many leading ERP, CRM, WHM and other systems are now offered by vendors with a choice of either on-premises deployment or a cloud-based service. But a few perceived disadvantages have made many businesses reluctant to switch, preferring instead to keep their solutions on-premises.

One of the most important issues has been the resiliency and reliability of cloud-hosted systems. For many industries – such as retail, logistics and manufacturing – unexpected downtime can be extremely costly.

Much of the cloud’s poor reputation has come from the web hosting and online backup industries.

Many low-end backup providers will cut corners in order to offer more competitive prices. This often means placing limitations on what kind of data can be backed up, limitations on security, penalties for heavy users and even bandwidth throttling.

And within the web hosting space, we see many unlimited plans which aren’t unlimited at all. Instead, they cut off your storage after 4GB of space usage and cut off your account if you get more than 1000 visitors per day. These web hosts also offer extremely limited memory for running database scripts and PHP applications. But the worst complaint about shared web hosting has to do with the “bad neighbour effect”, where your site is brought down because another account on the same server had a spike in traffic or began using more than its fair share of resources.

Having a web site go down is bad enough. But when your critical business systems fail, the consequences can be significantly worse for most businesses.

The cloud has had a reputation for unreliability and instability in recent years, and much of this can be attributed to Amazon. The value offered by Amazon’s cloud services has made it a highly scalable, cost-effective platform for shoestring entrepreneurs and innovative start-ups.

And many of the companies which have built themselves on the Amazon platform are also SaaS solutions which serve as critical business systems for many smaller and medium-sized companies.

On October 22, 2012, Amazon suffered a major outage which affected many of the Internet’s top web properties, reportedly including Netflix, Imgur, Reddit, and Minecraft. These sites were unavailable for several hours.

According to DataCenterKnowledge, this outage was the fifth in only 18 months, and Amazon also experienced four outages within the space of one week during 2010.

The message is clear for businesses: if you’re going to host critical applications in the cloud, don’t cut corners. Do your homework, be vigilant, make sure you have the best, and ensure that your host’s services are backed by solid SLAs. If you require high availability, make sure you have a process in place which will back you up if your cloud option fails.


Offsite Data Recovery: Who can you trust?

The Information Age has empowered businesses with technologies that make business easier, faster, more profitable and completely on demand. It has also created a host of concerns for the business owner, such as hard drive backups, cloud computing and identity theft.

When it comes to storing your data, how do you pick a vendor you can trust? There are so many to choose from, and while you should always perform data backups locally, you should still have an offsite data backup option.
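For the technically inclined, the core of any offsite copy routine is simple: copy the data, then verify the copy. The sketch below is a minimal, generic illustration in Python (not any particular vendor’s tool, and the function names are my own invention):

```python
import hashlib
import shutil

def sha256_of(path):
    """Return the SHA-256 hex digest of a file's contents."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def backup_file(source, destination):
    """Copy source to destination, then verify the copy byte-for-byte."""
    shutil.copy2(source, destination)
    if sha256_of(source) != sha256_of(destination):
        raise IOError("backup verification failed for %s" % source)
    return destination
```

A real offsite vendor adds encryption, scheduling and redundant storage on top of this basic copy-and-verify loop, which is exactly where the trust questions below come in.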

Here are a few things that you should think about when scouting offsite backup solutions.


Company Focus

What is this company’s focus? You should feel a certain amount of assurance that the company you are choosing to work with cares about your needs as a client and isn’t simply in the business of taking as much of your money as it can. You need a company that is largely focused on your digital peace of mind.

Response Time

If you have a problem, you have to trust that it will be resolved with urgency and proficiency. The best way to test a company’s response time is to pose a complicated question about their service via phone or email. The quality of the response, coupled with the amount of time your message languished in the queue, will tell you whether the company is capable of handling your needs with the careful, quick service you need to stay on top of the business world.


Recommendations

A good, friendly recommendation never hurts. Your friends and colleagues have probably faced a similar question at one point or another. They’ve had to test the waters with a company and protect their data as well. Learn from their experience and go with someone your network trusts.


Online Reviews

You can always find reviews and customer opinions of a particular company online. PC Magazine regularly reports on tech businesses, and the best information is always a quick search away. Examine these reviews closely to see if there are recurring patterns of downtime, customer frustration or service issues; they will give you a broad-strokes picture of the company as a whole. A consistent pattern of positive reviews is a good sign that the company can be trusted.


Transparency

How clear are the company’s mission and objectives? You can’t develop trust with a company that has a hidden agenda. You should be able to believe the company truly has your best interests in mind. This is the Information Age: the more information the company is willing to provide, the better. In the business of data security, transparency is a must!

Ultimately, how a company treats you signals how it will serve your needs as your business grows. You have to establish what matters to you when it comes to your information needs and pursue a company that supplies it. You’re building a business relationship, and the most important thing a company can offer is trust.

About The Author: Kay Ackerman is a self-proclaimed tech geek and freelance writer, focusing on business technology, innovative marketing strategies, and small business. She contributes to and occasionally writes on behalf of StorageCraft. You can also find her on Twitter.

How to Survive a Cloud Outage

A cloud outage can affect many entities, including enterprise-level companies. Having said that, there are still many businesses that rely on the cloud. With its pay-as-you-go pricing structure and flexible options to choose from, the cloud serves many customer-related needs.

Even if just a minor glitch occurs with a cloud hosting provider, it can be a huge catastrophe for those relying on its services. If high-visibility companies go down because of an outage, the situation can become extremely complicated and can cost the business many potential clients. Nuvem Analytics monitors many accounts, including several which went down during the 2011 Amazon outage. Based on this monitoring, they reported that about 35% of their beta partners are highly vulnerable to an outage due to their dependence on the cloud.

This does not have to be the case; you don’t have to be a tech guru to figure out how to protect your cloud. This list provides five things you must do to protect your cloud from future outages and issues.

1. Maintain regular snapshots of your EBS (Elastic Block Store) volumes. If an outage damages a volume, you can simply create a new one from a snapshot. And because snapshots are tied to the region rather than to a single availability zone, if a zone remains down for a longer period of time you can restore the snapshot into a different availability zone.

2. It is important to keep copies of your cloud data files on hand, and to keep offsite file copies available as well. A third-party service is a good option for data backups.

3. Use multiple availability zones with elastic load balancing (ELB) in place. By balancing the incoming traffic, the system becomes more stable. And by using this across multiple zones, you can further enhance the fault tolerance of your system. So even if one zone goes down, you have other ELB-backed zones up and running in the infrastructure.

4. You must watch out for unhealthy instances behind your ELB. Because unhealthy instances do not receive traffic, a build-up of them can interfere with operations in the infrastructure even while other healthy ones are up and running.

5. Using external monitoring systems and tools is also imperative. Although AWS CloudWatch is a great service, its levels of interdependence with the rest of AWS aren’t clear, meaning that in the event of an outage the monitoring service itself might be unavailable. So an external monitoring system to alert you of outages is a great idea.
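An external monitor along the lines of tip 5 can start as small as a script, run from outside your cloud provider, that polls a health-check URL. This is a minimal standard-library sketch (the function name and URL handling are illustrative, not a specific monitoring product):

```python
from urllib.error import URLError
from urllib.request import urlopen

def endpoint_is_up(url, timeout=5):
    """Return True if the endpoint answers with a 2xx/3xx HTTP status."""
    try:
        with urlopen(url, timeout=timeout) as response:
            return 200 <= response.status < 400
    except (URLError, OSError):
        # Connection refused, DNS failure or timeout all count as "down".
        return False
```

In production you would run such a check on a schedule from a network independent of your provider, and wire failures into an alerting channel so someone is actually paged.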

These are important tips to help you cope with a cloud outage, but they are only a starting point for protection. Following these tips will not guarantee that your cloud system is going to run properly during an outage, but it should definitely help.

About The Author: Ray is a freelance writer who enjoys writing about cloud technology and its impact on Enterprise Resource Planning (ERP). You can find more of his work on his website about ERP Vendors.

The IBM i Sucks (iSeries, AS/400)

If I had a dollar for every time I heard “the IBM i sucks”…

Usually it comes from folks who don’t know the first thing about the platform: what it excels at and what it does not.

Time to dispel some myths and take a look at a few key areas where the IBM i platform can shine.

The IBM i operating system comes with a slew of tools out of the box, including a database system. And not just any database, but a full-blown DB2 database that is robust and scalable. DB2 is tightly integrated into the operating system, so you don’t have to worry about it crashing or the silly connectivity issues that plague other platforms.

These systems are ready and primed for the next generation of enterprise web-enabled apps that solve business problems.

People have a misconception that IBM i systems consist of tired old terminal “green screens” and outdated software. But the fact is you can run all the up-to-date web technologies and programming languages, like PHP, Java, Apache HTTP services and web frameworks.

And green screens are a thing of the past. The interfaces can run in fancy, stylish-looking GUIs with the clickable buttons users are fond of working in.

Big Data… have you heard of it? I frankly think it is funny to hear the term “big data” thrown around today, because older AS/400s from a decade ago handled big data. Storing very large tables with millions of records was not a problem for the system years ago, and it is a non-issue today. In fact, large organizations from Costco Wholesale on down to your local city utility trust their data to the IBM i.

Innovation is alive and strong. IBM is investing in advancements to the platform and its database engine, called the SQL Query Engine (SQE). It is not at all uncommon for SQL queries to see drastic improvements simply from updating the SQE, without any changes to the SQL statement and without requiring new indexes to be built or maintained.

Finally, let’s talk about security. As long as the system has been configured with a modicum of care, security compromises are a non-issue. You also need not worry about forced weekly security updates, zero-day exploits or any of the other garbage system administrators deal with when maintaining other server operating systems.

Now one of the drawbacks to using an IBM i is staffing and finding programmers.

The IBM i has a bad rap, in no small part due to old-school programmers who have not kept up with the pace of new technology… or even kept up with old technology. These tend to be the folks still cranking out programs exactly like they did in 1995. For example, I was working with a business in the mid-2000s and the programmer on staff was happily writing RPG II programs.

This was long after that language was past its prime, and he could not take advantage of technology that had been available for fifteen years.

I have culled my share of resumes for IBM i professionals and know it is rare to find a cutting-edge programmer who is current with new technology. But they do exist, and when you land one, the results can be dynamite.

About The Author: John Andersen is an irreverent and longtime IT manager of the IBM i platform. He writes a popular AS/400 and IBM i blog with practical tips, research and other musings at

(Image From IBM Press Resources, Copyright IBM)

The Importance of Visualizing Big Data

For those of us who love and understand big data, there’s never been a better time to be in the business. Thanks both to a host of new measurement technologies and the integration of these tools into popular culture, we’ve knocked holes in the dams that once stood between us and a formerly inaccessible frontier of data. The result: a flood of information, filled with boundless potential for providing the kind of insight that can radically change the way we live our lives, run our businesses, and interact with our planet.


But raw data, of course, isn’t readily translatable to most human minds, especially to the many big decision makers who don’t specialize in information sciences. And if data doesn’t make sense to the people who call the shots or to the masses for whom a small change of behavior could have significant outcomes for humanity as a whole, it doesn’t do much good. That’s why it’s so important to develop the right tools for compressing knowledge to highlight the most important take-aways, discovering the links between data sets to get a better sense of causal relationships, and communicating analysis in the kind of visual form that both experts and lay people alike can understand and engage with intuitively.

Breaking it into Digestible Pieces

Tell any big data vet, researcher or statistician that data is itself a story and you’ll likely get a roll of the eyes and a “tell me something I don’t know.” But even those of us who love the numbers aren’t made of numbers. Without any kind of business analytics software to help us break down a mass of data into manageable chunks, it’s hard to know even where to start our analysis. What’s more, when that data moves to the next link on the corporate chain, users won’t know what to make of a mass of numbers.

Breaking data down into subsets and visualizing those subsets in graphs, charts and infographics helps compress the knowledge into digestible pieces, allowing viewers to allocate their full attention and analytical tools to the presented information, which can then be pieced together in different ways in other visualizations to highlight different aspects of the analysis. Ironically, by distilling the larger picture into more limited subsets, we’re better able to grasp the larger picture. It shouldn’t be surprising, as this is how it goes for any type of learning. We can’t lay the first floor until the foundation is set, and we can’t move into the house until the whole thing has been built, layer by layer.
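As a toy example of that compression step (the regions and figures here are invented), collapsing raw records into a small per-category summary is what turns a wall of numbers into something chartable:

```python
from collections import defaultdict

# Hypothetical raw records: (region, month, sales)
records = [
    ("East", "Jan", 120), ("East", "Feb", 95),
    ("West", "Jan", 80),  ("West", "Feb", 140),
]

def summarize(records):
    """Collapse raw rows into per-region totals, a chart-ready subset."""
    totals = defaultdict(int)
    for region, _month, sales in records:
        totals[region] += sales
    return dict(totals)

print(summarize(records))  # {'East': 215, 'West': 220}
```

Each such subset can then feed a bar chart or infographic on its own, and several of them together re-assemble the larger picture.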

Finding Trends

What, exactly, does it mean to get lost in a sea of data? Losing our bearings, for one, something that’s far too easy to do when unprocessed data is dumped into a spreadsheet and expected to perform. Dashboard reporting is essential to alleviating this problem. Not only do dashboards allow users to personalize incoming data to suit their own processing styles and current focus, but they also provide a range of analysis and visualization tools that empower the user to easily find trends in the data and discern the relationship between various topically disparate factors.
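Much of what a dashboard’s trend view does can be sketched with something as humble as a moving average; the series below is invented purely for illustration:

```python
def moving_average(series, window):
    """Smooth a numeric series so the underlying trend is easier to see."""
    if not 1 <= window <= len(series):
        raise ValueError("window must be between 1 and len(series)")
    return [
        sum(series[i:i + window]) / window
        for i in range(len(series) - window + 1)
    ]

# Smoothing flattens day-to-day noise while keeping the upward drift:
daily_visits = [100, 102, 98, 120, 130, 128, 150]
trend = moving_average(daily_visits, 3)
assert trend[0] == 100.0 and trend[-1] == 136.0
```

A real dashboard layers interactivity and visual encoding on top, but the underlying idea is the same: transform raw points into a shape the eye can follow.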

Creating this kind of landscape – that is, parsing data in such a way that it makes intuitive sense to the viewer – is what will change data into knowledge into insight into action.

Right/Left Brain Understanding

While the depiction of brain lateralization has been both oversimplified and over-hyped in the popular press, discussing cognition in terms of the right and left mind can provide an illuminating framework. In fact, more and more evidence points to the importance of full-brain thinking – when both hemispheres apply their unique skills in tandem to process a difficult task. When data is effectively visualized, it speaks directly to the intuitive right brain to help us just get what the data is saying. Early on in analysis, this nurtures hunches that the left brain can then submit to cross-examination, further crunching the numbers and pinpointing the truth.

The result of this process can then be returned to the right brain to create an even more compelling story or visualization that will speak intuitively to the minds of a boss or a friend, who can then process the data in a similar way. Translating data between the two hemispheres of the brain doesn’t just give insight, it’s what insight is made of. That’s why it’s so important to embrace the kinds of tools that speak the language of each.

Latent within massive data sets lies the kind of wisdom that can turn a fledgling business into a profitable one, change lives, and better the whole of humanity. But we can’t act on those insights if we don’t understand the knowledge that’s being handed to us in the first place. Visualizing data is the key to unlocking those secrets. It’s the key to understanding, engagement, and, ultimately, change.


What is Heterogeneous Computing?

If you’ve been reading up on the latest innovations in microprocessor technology recently, you might have heard a number of new buzzwords such as Heterogeneous Architecture, HSA, and APUs. Many industry leaders such as Oracle and AMD have been very vocal about their belief that heterogeneous computing is the future of the industry.

Because of this, I thought that it might be a good idea to provide some background into what heterogeneous computing is, and how it differs from previous processor technologies.

Early computers were designed to automate data handling and calculations which had previously required lots of error-prone manual work. Central Processing Units (CPUs) were designed to take a list of complex calculations or data manipulation tasks, and sequentially perform these tasks at high speeds and with perfect accuracy.

Although the sophistication has increased a lot over the years, the essential idea behind a CPU has remained the same. CPUs are still the ideal mechanism for handling complex sequential tasks such as email or mathematical problem solving.

But when the PC era began, consumers saw that these machines had potential beyond strictly utilitarian functions. As consumers began to see the PC as an entertainment or gaming platform, they began demanding higher-quality graphical experiences. However, the sequential processing logic of traditional CPUs created a bottleneck which limited the speed at which high-quality graphics could be delivered. In response to this demand, Graphics Processing Units (GPUs) were created.

The main difference between GPUs and CPUs is that a GPU can perform operations in parallel, making it faster and more responsive than a CPU for certain types of applications. Because of this, GPUs are ideal for multimedia and gaming applications. But beyond entertainment, GPUs are also useful for many critical business tasks such as 3D modelling and crash test simulations.
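The contrast can be sketched in a few lines. This toy Python example only mimics the pattern (a real GPU runs thousands of hardware lanes, not a thread pool), but it shows the key property: each element is processed independently, so the work can be spread out:

```python
from concurrent.futures import ThreadPoolExecutor

def brighten(pixel):
    """A tiny per-pixel operation with no dependency on other pixels."""
    return min(pixel + 40, 255)

pixels = [10, 200, 250, 90, 130, 30]

# CPU-style: one pixel after another, in sequence.
sequential = [brighten(p) for p in pixels]

# GPU-style: every pixel transformed independently, side by side.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(brighten, pixels))

assert sequential == parallel  # same result, different execution model
```

It is exactly this independence between elements that makes graphics, simulation and other data-parallel workloads such a natural fit for GPUs.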

But many applications require both the complex processing capabilities of a CPU and the responsive parallel processing power of a GPU. This can lead to new bottlenecks, as information must be passed back and forth between the GPU and CPU when working on a task.

Heterogeneous computing aims to eliminate this bottleneck by allowing the GPU and CPU to work on a problem simultaneously. In a heterogeneous processing architecture, this is often accomplished by having a single address space which can be accessed by the CPU, GPU and other processing devices at the same time. This greatly speeds up processing because data can be used by all processors without first having to be moved.
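Python’s `memoryview` offers a loose, small-scale analogy for that shared address space (an analogy only, not an HSA implementation): two names can operate on one buffer with no copying in between.

```python
buffer = bytearray(b"\x01\x02\x03\x04")

# "Copy" model: the consumer duplicates the data before using it.
copied = bytes(buffer)        # a full copy is made up front

# "Shared address space" model: the consumer sees the very same memory.
view = memoryview(buffer)     # no copy; both names address one buffer
view[0] = 99                  # a write through the view...

assert buffer[0] == 99        # ...is immediately visible to the other side
assert copied[0] == 1         # while the up-front copy has gone stale
```

The copy-free path is the whole point: when every processor addresses the same memory, the expensive shuttling of data between devices disappears.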

The benefits of heterogeneous computing really fall into two major categories: development and efficiency.

  • This new architecture simplifies programming, and provides developers with greater flexibility and more powerful capabilities. It also has the potential to greatly improve the experience for gamers and people with graphically-intensive design and multimedia applications. Finally, this will lead to richer and more powerful web browser experiences, which will be increasingly important as productivity applications move primarily towards a SaaS-based delivery model.
  • This new architecture also has the potential to reduce power consumption for applications which require both GPUs and CPUs. This means that mobile devices such as tablets and smart phones can offer longer battery life. And it also promises to help reduce power consumption costs for high-performance computing applications and supercomputers.

It’s important to note that heterogeneous computing doesn’t only apply to CPUs and GPUs, but also applies to the incorporation of other types of specialized processors such as digital signal processors and application-specific integrated circuits.

(Image Source: AMD Press Resources. Copyright AMD)