Archives for: June 2013

Your Cloud Operations as — No, IS — A System of Systems

As many IT managers and CIOs have found over the years, growing from one major application to multiple applications was tough across a broad range of areas: planning, budgeting, implementation, user training, internal organization transformation management, support, interoperability, and more. Keeping up with application upgrades was tough enough, but when you were also building data warehouses and data marts on top of everything else, and then trying to become more flexible and agile to meet market and supply chain shifts, it simply got tougher. IT management started to feel the pressure of managing a system of systems — all these applications and products inter-operating with each other.

Eighteen (18) years ago I was involved in one effort — a “simple” move to an ERP to consolidate the operations of 13 business units globally. This led to the complex implementation and integration of SAP, Freight Rater, Manugistics, PeopleSoft, a global messaging gateway, and a large number of custom “apps” under/in SAP to meet the firm’s needs, PLUS the implementation of a second SAP instance to handle OLAP, since the core system was devoted to high-volume, real-time, 24x7 OLTP … and OLAP functions killed the OLTP system’s performance. They not-quite doubled the core system hardware costs with that realization — and synchronization of the databases became another major task to do, and manage. There were many other “things” we had to do along the way, and while the goal was simplification, those other “things” added complexity.

Recognized or not, cloud computing was and is very much a form of business process re-engineering, only now it’s what I call IT Systems and Services Re-engineering (ITSSR). With the Cloud vendors offering a broad range of PaaS, IaaS, SaaS, DaaS, and now MAaaS (Mobile Authentication as a Service), plus apps that support everyone from the seemingly old-fashioned desktop user to the mobile user in virtually any form, the bar has been raised — and perhaps even the stakes have gone up — in the use of Cloud Computing.

Regarding MAaaS, virtually all major application systems now offer remote user/mobile device support and integrated authentication services, and other firms offer third-party solutions. MAaaS simplifies the problems you may have had in maintaining separate authentication services and product features for each application by unifying authentication for multiple apps/devices into one consolidated service — in the cloud. I have no fundamental objection to the concept other than the consolidation of what is essentially access information and certain forms of security information in the Cloud, plus the ongoing issues of who owns data in the cloud and in which country that data might reside. No disrespect to anyone or any product, but now — IN THEORY — you need only hack one site for access to many user domains simultaneously … though you could do that a number of ways.

I may have missed something (possible), but in all the discussions on Cloud Computing, I have yet to see anyone discuss a user’s cloud computing environment (CCE) evolving into a complex System of Systems (SoS). Personally, I have no problem with this, having worked on many complex business and government SoS projects in the past. But many of those systems had a logical sequencing of activities that enabled structured operation, and sometimes that point, and that effort, seems to be lost in migrating to the new CCE.

If one takes Wikipedia as an authoritative source for definitions, a “System of systems is a collection of task-oriented or dedicated systems that pool their resources and capabilities together to create a new, more complex system which offers more functionality and performance than simply the sum of the constituent systems. Currently, systems of systems is a critical research discipline for which frames of reference, thought processes, quantitative analysis, tools, and design methods are incomplete.” The emphasis is mine.

Recently, I published a book, Understanding the Laws of Unintended Consequences, and its narrative takes you through the escalating journey of understanding the Laws, from their most basic form (i.e., every action has an intended consequence and, quite likely, at least ONE unintended consequence — one you may not even know about or see) to what happens, or can happen, in a system of systems. For all the occasional humor injected into the book, the subject is quite serious. The discussion of unintended consequences in complex systems and systems of systems should make one think — not stop, but think.

In August 2008 — before Cloud Computing really gained traction — the US Department of Defense published its Systems Engineering Guide for Systems of Systems. The publication makes for interesting reading for anyone looking at cloud computing or an in-house system of systems. The US military is built on a system of systems; in fact, so is the US government, or any government for that matter. So are most corporations, but those corporate SoS environments evolved over time in most cases, and were tuned and controlled as they went along.

Arguably, even major ERPs, with their many modules that do different things — many of which can be done independently of other modules — are SoSes. Some firms, driven by acquisition, have lived with SoSes that are inherently incompatible, but found simple ways to integrate and operate them effectively, focusing on data as the driver and not apps, for example. One major automotive supplier, with almost 200 plants acquired through growth and acquisition at nearly as many locations globally, flatly refused to attempt apps standardization and happily used a daily pull-down summary of essential business operations data for the operational information it needed. Simple, easy to backstop, efficient — and VERY effective. Inexpensive to implement, very low cost to operate, and 100% accurate 100% of the time (unless some plant was off-line).

But back to the DOD SE Guide. It points out many things, one of which jumped off the page at me: the emergent behavior of systems, an area of significant study now. I know this issue all too well from past experiences where well-defined and well-constructed SoSes suddenly seemed to take on a life of their own. “What the …?” is a phrase I absolutely do not like.

The DOD guide notes on pages 9-10, and all emphasis is mine:  “In SoS contexts, the recent interest in emergence has been fueled, in part, by the movement to apply systems science and complexity theory to problems of large-scale, heterogeneous information technology based systems. In this context, a working definition of emergent behavior of a system is behavior which is unexpected or cannot be predicted by knowledge of the system’s constituent parts.

“For the purposes of an SoS, ‘unexpected’ means unintentional, not purposely or consciously designed-in, not known in advance, or surprising to the developers and users of the SoS. In an SoS context, ‘not predictable by knowledge of its constituent parts’ means the impossibility or impracticability (in time and resources) of subjecting all possible logical threads across the myriad functions, capabilities, and data of the systems to a comprehensive SE process.

“The emergent behavior of an SoS can result from either the internal relationships among the parts of the SoS or as a response to its external environment. Consequences of the emergent behavior may be viewed as negative/harmful, positive/beneficial, or neutral/unimportant by stakeholders of the SoS.” [1]

Key word: HETEROGENEOUS. This whole emergent concept lands squarely on top of the Law of Unintended Consequences regarding a system of systems, which in part states: “In a system-of-systems, the Unintended Consequences may not be detected until it is far too late, but if detected early, more will be assured … that you become aware of. Never trust the obvious …”

It is fundamentally impossible to test all aspects of an SoS. Even an operating system has evolved into an SoS, with innumerable subroutines/processes running in parallel and doing different things for the OS as a whole. Network products are released with high confidence that they will work, but testing them in all configurations is impossible, as is testing bug fixes to complex problems. This latter point — that not all reported bugs in a major and ubiquitous software product get tested — was made by the head of product testing for a major network products firm on a panel I chaired at a networking products conference. You could hear 1,000 people in that room suck in all the air as he made that statement. This was news to them. And as some people commented to me later: “Well, that explained a lot.”
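
The arithmetic behind that impossibility is easy to sketch. A minimal back-of-envelope illustration, using purely hypothetical numbers, shows how fast the configuration space explodes:

```python
# All numbers hypothetical: a modest SoS of 10 interoperating systems,
# each shipping in 4 supported versions, across 3 network topologies.
systems, versions, topologies = 10, 4, 3

configurations = (versions ** systems) * topologies
print(f"{configurations:,} distinct configurations")   # 3,145,728

# At one hour of integration testing per configuration, running 24/7:
years = configurations / (24 * 365)
print(f"~{years:,.0f} years of serial test time")      # ~359 years
```

And that count ignores data-dependent logical threads, timing, and load, the very things the DOD guide says make a comprehensive SE process impractical.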

Which leads me back to the Cloud. As firms move at a measured or rapid pace to Cloud Computing — migrating in-house systems and products to the cloud, or using replacements provided by the Cloud Computing Environment suppliers — the adopters have begun to change how they conduct their business, ranging from slight change with direct 1:1 replacements to significant change as new products, services, and tools are bolted on and integrated.

As Cloud adopters use more third-party tools and products — SaaS, MAaaS, etc. — that inter-operate with or depend upon each other, they begin to draw or establish different linkages and connections between their current work environment and their IT. If users wind up using products from different vendors in different clouds — which is quite possible — they have added complexity and risk to the mix in excess of their current exposure. If you’ve decomposed applications and created SOA components, you just added to the mix. Therein lies the “trap,” so to speak. De-linkage creates opportunities for problems that simply magnify the comparable problems that might occur in more tightly linked or homogeneous environments.

But it goes beyond that: many of the SaaS offerings that people are using don’t know about each other’s systems/SaaS offerings, and integrating them into a cohesive working environment can be a challenge. A recent article I read about one firm’s adoption of a mobile CRM product indicated they had to “fill in the blanks,” so to speak, and write their own enhancements for the additional features/services they wanted that the CRM package did not [yet] offer. So even having found a “solution,” it was incomplete and they added to it — and thus added to the complexity of their overall systems management requirements. The app with enhancements worked fine … but it was one more thing to track and manage.

I could go on, but the message is simple: an SoS cannot just spin into existence. It needs to be as well defined, architected, and managed as any in-house system before it. If anyone forgets that, they will really find out what the scope of unintended consequences can be.

===

[1] Office of the Deputy Under Secretary of Defense for Acquisition and Technology, Systems and Software Engineering. Systems Engineering Guide for Systems of Systems, Version 1.0. Washington, DC: ODUSD(A&T)SSE, 2008. (Available online at http://www.acq.osd.mil/se/docs/SE-Guide-for-SoS.pdf)

Enterprise Cloud Backup Review: Zetta.net

Zetta.net has been in the enterprise cloud backup business since 2009 and the latest version of their DataProtect product offers a “3-in-1” server backup solution, combining backup, disaster recovery, and archiving. Zetta’s solution is currently the #1 ranked solution on our list of the top 25 enterprise cloud backup solutions, and here’s why:

1. Speed – Zetta’s backup performance is faster than any solution we’ve tested, and the company claims to have customers that have recovered up to 5TB in a 24-hour period (see the back-of-envelope sketch after this list).

2. No Appliance – Many well-regarded hybrid-cloud backup products are based on a PBBA, or purpose-built backup appliance. It’s EF’s opinion that these solutions, while great for on-premises backup, are limited in offsite backup capabilities.

3. Pricing – Capacity-based pricing (paying per GB or TB of backup storage used) strikes us as a better deal for most organizations. Since most backup admins would prefer a single backup solution for servers and endpoints, it’s cheaper than paying for software licenses per computer. Also, for deployments that include multiple remote offices, Zetta’s hardware-free solution avoids the cost of multiple PBBAs. Zetta’s pricing is all-inclusive (software, storage, and support) and starts at $225 a month.
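
To put the 5TB-in-24-hours claim in perspective, here is a quick back-of-envelope calculation (our arithmetic, not vendor-supplied data) of the sustained throughput such a restore implies:

```python
# Back-of-envelope: sustained throughput implied by "5 TB in 24 hours".
# Illustrative arithmetic only, not a benchmark of Zetta's service.
bytes_total = 5 * 10**12        # 5 TB, decimal
seconds = 24 * 60 * 60

mb_per_sec = bytes_total / seconds / 10**6
print(f"~{mb_per_sec:.0f} MB/s sustained")        # ~58 MB/s
print(f"~{mb_per_sec * 8:.0f} Mbit/s sustained")  # ~463 Mbit/s
```

In other words, a restore at that rate presumes roughly half a gigabit of sustained WAN bandwidth end to end, which is worth verifying against your own links before counting on it.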

Another reason we like Zetta’s solution is that it enables backup for both physical and virtual servers, with plug-ins available for Hyper-V, VMware, and Xen, in addition to the more standard physical SQL & Exchange servers. This is a key feature since the recent trend of separate backup solutions for physical and virtual servers has a tendency to increase overall costs and complicate backup processes even further.

Zetta also offers local backup in addition to their cloud-based snapshot and replication, allowing for faster recovery of large database or VM files, for example. In short, we like Zetta’s cloud backup solution because it provides local, offsite and remote backup without the need for new hardware or portable media – eliminating travel time to and from your remote offices.

We’ll continue trying backup solutions and reworking the top 25 list, but for now Zetta is the Enterprise Features #1.

What enterprise cloud backup solution do you consider the best? Leave your thoughts in the comments.

The 5 Most Important Steps to Ensuring Data Security in the Cloud

The use of cloud computing services makes a lot of sense for many businesses. By storing data on remote servers, businesses can access data whenever and wherever it’s needed, and cloud computing reduces the cost of the infrastructure required to manage their networks.

While cloud computing offers a number of benefits, it also has some drawbacks, most notably in the realm of security. Whenever data is transmitted between endpoints it’s vulnerable to loss or theft, and in an era when employees have 24-hour access to servers that are “always on,” security is of the utmost importance.

If you have already made the move to the cloud, or if you’re considering it, there are some steps to take to ensure that your data stays safe and secure.

Step 1: Choose the Right Vendor

The growth in cloud computing means an increase in cloud services providers. Before you choose a vendor to store your valuable data, find out how the vendor will keep your data safe. But don’t rely on sales presentations and literature from the vendor explaining their security policies and procedures. Perform your own background checks on the vendor, check references, and ask questions about where and how your data will be stored, as physical security is just as important as network security. Be sure that the vendor can provide proof of compliance with any governmental regulations regarding your data, and employs adequate encryption protocols.

Step 2: Encrypt Data at All Stages

Speaking of encryption, ensuring data security requires encrypting data at all stages — in transit and while in storage. When data is encrypted, if there is a security breach the data will be all but useless unless the criminals hold the encryption key. However, according to a recent survey, few companies actually encrypt the data at all stages, instead only encrypting during transit or while in storage, creating serious vulnerabilities.
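
As a concrete illustration of at-rest encryption, here is a minimal sketch in Python using the widely available cryptography package; transit encryption would be handled separately by TLS, and in a real deployment the key would live in a proper key management system. The data and names below are invented for illustration.

```python
# Minimal sketch: encrypt data client-side BEFORE it is uploaded, so a
# breach of the cloud store yields only ciphertext.
# Requires: pip install cryptography
from cryptography.fernet import Fernet

# Generate the key once and keep it in your own key management system,
# never stored alongside the data in the cloud.
key = Fernet.generate_key()
fernet = Fernet(key)

record = b"customer_id=4417,card_last4=1234"   # invented sample data
ciphertext = fernet.encrypt(record)            # safe to upload

# Without the key, stolen ciphertext is all but useless.
assert fernet.decrypt(ciphertext) == record
```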

Step 3: Manage Security In-House

Data breaches in cloud computing often occur because the right hand doesn’t know what the left is doing; in other words, no one really knows who is responsible for the security of the data in the cloud. One survey indicated that nearly half of all businesses believe that security is the vendor’s responsibility, while an additional 30 percent believe that the customer is responsible for securing its own data. The answer lies somewhere in the middle. While the cloud provider certainly has a responsibility for securing data stored on its servers, organizations using the cloud must take steps to manage their own security. This means, at minimum, encrypting data at endpoints, employing mobile device management and security strategies, restricting access to the cloud to only those who need it and employing strict security protocols that include two-factor authentication.
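
To make the two-factor point concrete, here is a minimal sketch of the TOTP flow that most authenticator apps implement, using the open-source pyotp library; the enrollment and verification steps shown are a simplification of a real deployment.

```python
# Minimal sketch of time-based one-time passwords (TOTP), the mechanism
# behind most two-factor authentication apps.
# Requires: pip install pyotp
import pyotp

# Enrollment: generate a per-user secret, share it with the user's
# authenticator app (usually via QR code), and store it server-side.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# Login: the user types the 6-digit code from their device; the server
# verifies it against the same secret and the current time window.
code = totp.now()   # stands in for the code the user would type
print("second factor valid" if totp.verify(code) else "rejected")
```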

Step 4: Provide User Training

One of the biggest mistakes that companies switching to cloud computing make is assuming that users know how to use the cloud and understand all of the security risks and protocols. For example, it’s not uncommon for employees using their own devices for work to log on to the cloud from the closest hotspot — which might not be ideal security-wise. Employees need to be trained and educated on how to properly maintain the security of their devices and the network to avoid security breaches.

Step 5: Keep Up With Advances in Security

One of the most common causes of devastating security breaches is a vulnerability created by failing to install security updates or patches to software. Malware is often designed to exploit vulnerabilities in common plug-ins or programs, and failing to keep up with updates can lead to disaster. All endpoints must be continuously updated to keep data secure. In addition, as the security industry changes protocols, best practices change as well. Understanding changes in best practices, technology and protocols and making changes accordingly will help prevent a costly disaster.
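
As one small illustration of staying on top of patch levels, the toy sketch below flags installed Python packages that fall below a minimum-version policy. The policy table is hypothetical, and a real organization would lean on its patch-management tooling rather than a script like this:

```python
# Toy sketch: flag installed packages older than the floor versions set
# by a (hypothetical) security policy. Not a vulnerability scanner.
from importlib.metadata import PackageNotFoundError, version

MINIMUM_SAFE = {                 # package -> lowest acceptable version
    "requests": (2, 31, 0),
    "cryptography": (41, 0, 0),
}

def parse(v: str) -> tuple:
    # Crude numeric parse; good enough for a sketch.
    return tuple(int(p) for p in v.split(".")[:3] if p.isdigit())

for pkg, floor in MINIMUM_SAFE.items():
    try:
        installed = version(pkg)
    except PackageNotFoundError:
        continue                 # not installed, nothing to patch
    if parse(installed) < floor:
        print(f"UPDATE {pkg}: {installed} is below "
              f"{'.'.join(map(str, floor))}")
```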

There are many factors that go into developing a robust security plan for data stored in the cloud, but at the very least, these five points must be taken into consideration. Without addressing these issues, even the best security technology and plan will leave your data susceptible to attack.


About the Author: Christopher Budd  is a seasoned veteran in the areas of online security, privacy and communications. Combining a full career in technical engineering with PR and marketing, Christopher has worked to bridge the gap between “geekspeak” and plain English, to make awful news just bad and help people realistically understand threats so they can protect themselves online. Christopher is a 10-year veteran of Microsoft’s Security Response Center, has worked as an independent consultant and now works for Trend Micro.

How the Failure of Windows 8 Could Destroy the PC Market

The release of Windows 8 was supposed to bolster PC sales. Instead, companies are seeing a record drop in sales since the release of the latest operating system.

In the past, some operating systems caused major disappointment in the computer world, while others drove significant increases in sales. In light of this, and with the popularity of tablets rising, could this new let-down affect the PC market? Let’s take a look.

Windows 8 Was Essentially Created for Tablets


A huge goal of Windows 8 was to be user-friendly on tablets. While it’s true that many people are switching over to tablets, there are still plenty who use PCs on a daily basis, particularly in the workplace. Some observers have noted that trying to create “one OS to rule them all” has resulted in compromised performance for PC users.

Tablets Have Different Operating System Requirements

Another reason that Windows 8 could crash the PC market is simply that tablets and PCs have entirely different sets of requirements when it comes to their OSes. Part of this has to do with the simple fact that PCs have a practically unlimited amount of resources available, while tablets are very limited by comparison. Subsequently, an operating system designed around a tablet’s constraints ends up compromised on hardware that has no such limits.

WiFi Capability

Another reason users may switch from PCs to tablets is the increasing number of WiFi hotspots. Tablets can connect to the internet via WiFi, which allows users to take them virtually anywhere. They can read their email, use social media sites, chat, and surf the web from anywhere around their home and in most public places.

New Operating Systems are Becoming Less User-Friendly

As new operating systems continue to be developed, they’re becoming less user-friendly. There are more updates to download on a regular basis, the programs users access regularly are not always compatible with the new operating systems, and older accessories, such as printers and scanners, often don’t work with them. If this trend keeps up, users will no longer turn to PCs as their first choice of computer.

Not Everyone Needs a New PC

One of the big fixes suggested for Windows 8 problems is getting a new PC that is more compatible with the latest operating system. But not everyone likes to update their PC on a regular basis. A lot of people prefer to keep their computers for as long as possible, until they absolutely have to buy a new one because theirs no longer works. If the solution to fixing problems with Windows 8 is to buy a new PC, users are going to give up on PCs altogether, rather than face the cost of replacing them every time a new operating system comes out.

Windows 8 may have solutions to its problems, but the biggest outcome could be that users stop using their PCs altogether. Switching to a tablet may be easier than dealing with the changes, especially with the latest Internet capabilities.


Data Consolidation Facts

Data consolidation is an important process used to summarize large quantities of information. Typically, that information is found in spreadsheets spread across multiple larger worksheets. Computers handle the data consolidation process, and Microsoft’s Excel is one of the most popular tools of choice. Data consolidation can be done on an automated basis with the help of tools incorporated within the program.

What Is Data Consolidation?

Briefly put, data consolidation refers to taking several data cells originating from a spreadsheet and compiling them into a different sheet. The process spares computer users from personally and manually recording individual data cells from particular reference points and then entering them into various other places in a brand-new spreadsheet. This way, the formatting, re-organization, and re-arranging of huge amounts of information can be considerably simplified.

Data consolidation programs require certain conditions to be met by the spreadsheets and files whose data needs to be consolidated. First of all, each of the large worksheets has to share the same information range along both of its axes. This requirement allows the data consolidation program to complete the complex calculations that determine how each data cell corresponds to the data belonging to other worksheets or pages. Once the process is completed, the program creates a brand-new worksheet that sets up a summary of all the data belonging to the respective worksheets.
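
For readers who prefer code to Excel menus, here is a minimal sketch of the same idea in Python with pandas. The two “worksheets” below are invented monthly sales tables that share the same layout along both axes, the same precondition described above:

```python
# Minimal sketch: consolidate two identically laid-out "worksheets"
# (invented monthly sales data) into one summary table with pandas.
import pandas as pd

january = pd.DataFrame(
    {"units": [120, 80], "revenue": [2400, 1600]},
    index=["store_a", "store_b"],
)
february = pd.DataFrame(
    {"units": [95, 110], "revenue": [1900, 2200]},
    index=["store_a", "store_b"],
)

# Stack the sheets, then sum matching cells by row label -- the
# equivalent of the new summary worksheet Excel would produce.
summary = pd.concat([january, february]).groupby(level=0).sum()
print(summary)
#          units  revenue
# store_a    215     4300
# store_b    190     3800
```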


Frequent Data Consolidation Users   

Usually, data consolidation is used in a wide array of fields for a more efficient organization of employees’ work. Data consolidation is also a process that can bring considerable improvements to one’s proficiency levels. Physicians normally use data consolidation to keep records of their patients and track treatment plans. Teachers can make full use of data consolidation to create fast summaries of their students’ grades, tests, projects, or assignments. Retailers can also find great use in data consolidation when they need to track down the stores or items of merchandise that generate the largest profits, and so on. Needless to say, this is not an exhaustive list of all the practical applications of data consolidation.

As a result, there are also a large number of people willing to pay for such services. Data consolidation software is also available for purchase. A particularity of these applications is that they are not fully automated: they work with several worksheets maintained in several formats/programs, and an individual is often needed to summarize data manually whenever the spreadsheets do not meet the previously established requirements.

In case you need the services of such a specialist, browse the internet and get in touch with a few people.