
MTBF: What Does It Really Mean?


You may see a disk manufacturer touting that its drives have a Mean Time Between Failures (MTBF) or Mean Time to Failure (MTTF) of a million hours or so. You will never run a disk for that long, so what does that figure mean in terms of what you can actually expect out of a disk? This is what disk manufacturer Seagate Technology LLC says about the matter:

“It is common to see MTBF ratings between 300,000 to 1,200,000 hours for hard disk drive mechanisms, which might lead one to conclude that the specification promises between 30 and 120 years of continuous operation. This is not the case! The specification is based on a large (statistically significant) number of drives running continuously at a test site, with data extrapolated according to various known statistical models to yield the results.

Based on the observed error rate over a few weeks or months, the MTBF is estimated and not representative of how long your individual drive, or any individual product, is likely to last… Historically, the field MTBF, which includes all returns regardless of cause, is typically 50-60% of projected MTBF.”

So, setting aside the MTBF metric, how often do disks fail? Three studies have come out over the last several years that address this topic. (Note that the data in these studies is not applicable to our own enterprise-class storage system, which makes heavy use of Flash and SSDs to boost performance and a RAIN 6 architecture for reliability, but it does apply to many of our customers.)

In 2007, both Google and Carnegie Mellon University (CMU) presented papers on their experiences at the 5th USENIX Conference on File and Storage Technologies (FAST07). CMU’s paper, Disk failures in the real world: What does an MTTF of 1,000,000 hours mean to you?, looked at data on about 100,000 disks, some with a lifespan of five years. They found that, while the MTTFs on the data sheets suggested an annual failure rate of no more than 0.88%, “in the field, annual disk replacement rates typically exceed 1%, with 2-4% common and up to 13% observed on some systems.” Disk failure rates were found to increase steadily with age, rather than setting in only after a nominal lifetime of five years. “Interestingly, we observe little difference in replacement rates between SCSI, FC and SATA drives, potentially an indication that disk-independent factors, such as operating conditions, affect replacement rates more than component specific factors.”
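The arithmetic behind that 0.88% figure is worth seeing. A drive running continuously accumulates 8,760 power-on hours per year, so under the constant-failure-rate assumption behind such specifications, the datasheet MTBF implies an annual failure rate of roughly 8,760 divided by the MTBF. A minimal sketch:

```python
HOURS_PER_YEAR = 24 * 365  # 8,760 hours of continuous operation

def afr_from_mtbf(mtbf_hours):
    """Annual failure rate implied by a datasheet MTBF, assuming a
    constant failure rate (the model such specifications are based on)."""
    return HOURS_PER_YEAR / mtbf_hours

# A 1,000,000-hour MTBF implies the ~0.88% annual rate cited by CMU:
print(f"{afr_from_mtbf(1_000_000):.2%}")  # -> 0.88%
```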

Google’s FAST07 paper Failure Trends in a Large Disk Drive Population looked at that company’s experience with more than 100,000 serial and parallel ATA consumer-grade HDDs running at speeds of 5,400 to 7,200 rpm. The disks comprised at least nine different models from seven different manufacturers, with sizes from 80 to 400 GB. Google found that disks had an annualized failure rate (AFR) of 3% for the first three months, dropping to 2% for the first year. In the second year the AFR climbed to 8%, and it stayed in the 6% to 9% range for years 3-5.

In January of this year, consumer online backup vendor Backblaze published the failure rates of the 27,134 consumer-grade disks it was using. The results varied widely by vendor. The Hitachi 2TB, 3TB and 4TB drives all came in with AFRs under 2%. Certain Seagate drives performed much worse, particularly in the 1.5TB size, where the Barracuda LP had a 9.9% AFR, the Barracuda 7200 a 25.4% AFR and the Barracuda Green a 120% AFR. However, these failure rates don’t mean that Backblaze is unhappy with all the Seagate drives. As Backblaze engineer Brian Beach stated, “The Backblaze team has been happy with Seagate Barracuda LP 1.5TB drives. We’ve been running them for a long time – their average age is pushing 4 years. Their overall failure rate isn’t great, but it’s not terrible either.” He also said, however, that “The non-LP 7200 RPM drives have been consistently unreliable. Their failure rate is high, especially as they’re getting older.”
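Backblaze derives its AFR figures from drive-days of service rather than raw drive counts, which is also how a rate above 100% (like the Barracuda Green’s 120%) can arise. A sketch of the calculation, with made-up numbers:

```python
def annualized_failure_rate(failures, drive_days):
    """AFR as failures per drive-year of service. A population that
    fails faster than once per drive-year yields a rate above 100%."""
    drive_years = drive_days / 365
    return failures / drive_years

# Hypothetical population: 1,000 drives in service for 90 days, 10 failures
print(f"{annualized_failure_rate(10, 1_000 * 90):.1%}")  # -> 4.1%
```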

So, what does all this mean? By all means read the full reports cited above for additional information on differences between models, what causes disks to fail and how to predict when a particular disk will fail. The main lesson is that disks do fail, and they fail a lot more often than many people expect. This is especially true of consumer-class drives. That doesn’t mean you shouldn’t use them; their cost advantage makes them a great choice for many applications.

But the high rate of disk failures, with all types of drives and under a wide variety of operating conditions, means that you definitely need a robust backup system no matter what you are using as your primary storage.

Virtualization: Are the Benefits Worth the Investment?

Can you really save money with server virtualization? While every business is different, the numbers are very encouraging. By some estimates, virtualization may reduce your hardware and operating costs by as much as 50 percent, and the energy it takes to run your servers by 80 percent.

Take a look at some of the benefits:

•   Reduced hardware expenses – Fewer physical servers means smaller upgrade and maintenance costs.
•   Lower power and cooling costs – Using less equipment reduces the amount of energy that must be expended for hardware upkeep.
•   Improved asset utilization – Virtualization makes it possible to get more out of the hardware that you already have.
•   Fewer management touch points – Consolidating servers, storage, networking, and software means fewer individual machines that must be maintained.
•   Better IT responsiveness – Processes are automated, meaning that you can speed deployment and provisioning, increase uptime, and recover from problems faster.
•   Reduced carbon footprint – Virtualization helps not only your bottom line, but the planet as well.
•   Reduced down time – Virtualization helps you get back on your feet quickly in the case of natural or man-made catastrophe.

Virtualization can make your business more flexible and agile, and by so doing, decrease your IT overhead. Virtualization also positions you to take advantage of cloud computing more easily, should your organization choose to do so.

You Don’t Have to Start Your Virtualization Project from Scratch

The good news is, virtualization may be less of an investment than you think. If you start from where you are and make progress slowly, you may be able to budget for these expenditures more easily. Here are a few tips:

•   Use the IT you already own. You may not need to replace much hardware—if any at all. Begin virtualizing the servers you have already.
•   Introduce new innovations a piece at a time. Virtualization allows for real-time allocation of resources, so the transition doesn’t have to disrupt your employees.
•   Virtualize your entire infrastructure, including servers, applications, storage, and networking.
•   Enable your company to respond dynamically to business demands.

Are you taking the virtualization plunge? What has been your experience so far?

Dell Virtualization Benefits
Author Bio:

Matt Smith works for Dell and has a passion for learning and writing about technology. Outside of work he enjoys entrepreneurship, being with his family, and the outdoors.

Enterprise Cloud Backup Solution Review: MozyPro

Summary: Created in 2005 (and acquired by EMC in 2007), MozyPro offers business backup for individual computers and for servers. EMC also offers Mozy Home for personal computer users, and MozyEnterprise.


Being part of EMC gives Mozy access to storage expertise and other resources. However, the per-computer pricing may be a concern for some prospects, as may the lack of Linux support and the absence of any mention of support for virtual machine environments. (Note: Parent company EMC also owns VMware.)


DATACENTERS:

Multiple, globally distributed.


BACKUP:

• Remote: Incremental; scheduled or automatic
• Local (Windows only) to external drive, via Mozy2Protect
• Can back up all open and locked files as well as common business applications running on Windows servers


PRICING:

• Per computer/gigabyte/month
• For servers, additional monthly fixed “Server pass” fee
• No set-up fees


SUPPORTS:

• Desktop: Windows, MacOS
• Servers: Windows 2012, 2008, and 2003 servers and Mac OS X 10.8, 10.7, 10.6, 10.5, 10.4; Exchange; SQL

REQUIRES: (hardware, software)

• “Data Shuttle” hard drive is shipped for initial backups of larger data sets.

SECURITY:

• Client-side encryption (256-bit AES)
• SSL-encrypted tunnel
• Data center: 256-bit AES or 448-bit Blowfish


OTHER FEATURES:

• Audits/Certifications: SSAE 16 audit, ISO 27001 certification
• Accessible via mobile apps (iOS, Android)
• Bulk upload via Data Shuttle hard drive

MozyPro is currently ranked #25 on the Enterprise Features Top 25 Enterprise Online Backup Solutions for Business Cloud Data Protection list.

Does Windows 8.1 Have Any Chance of Salvaging PC Sales?

The global PC market just took the biggest nosedive of the last twenty years, just in time for Microsoft’s new operating system Windows 8.1. The new OS has some great features, but they have been largely overshadowed by user distaste for the new interface.

Some are optimistic about the second quarter’s sales, speculating that back-to-school shoppers will boost numbers. Will that be enough to bring PCs back from the brink though? Here are some things to consider regarding this trend.

Sales are Down for PCs While Other Devices are Up

Let’s just acknowledge the elephant in the room up front. Wireless devices like smartphones and tablets are choking out the PC industry. With all of the advances in these devices over the last several years, it should not come as a surprise that they are replacing desktop and laptop computers. Simply put, PCs are a dying breed.

According to a Gartner report, an astonishing 70% of devices sold in 2012 were either tablets or smartphones. Sure, traditional computers will still likely be a standard fixture in home and corporate settings — for a while at least — but the emphasis is undeniably on other devices these days.

PCs Last Longer Than They Ever Have

For a long time PCs dominated the market, and they seemed untouchable. People just couldn’t imagine life without their standard desktop and laptop computers. The recession we have experienced over the last decade or so really changed the game.

People have stretched their dollars as far as they’ll go. In an age when the average consumer uses a home computer mainly to stream movies and games over high-speed internet, it’s easier to justify the monthly bill from providers like CenturyLink than the cost of new hardware. It’s easier to spend $100 on antivirus software, or have Best Buy’s Geek Squad give a computer a tune-up, than it is to drop several hundred on a new model.

Seasonal Boosts May Affect Numbers

It’s hard to say whether back-to-school shopping will boost sales, and if it does, who knows whether it will be enough to make up for the recent drop. HP’s recent numbers show that its PC sales fell by 18% compared to last year. Every market fluctuates, but this recent trend looks bleak for PC manufacturers.

People Love New Gadgets, But Crave Familiarity

When new technologies are introduced, the public usually snatches them up at lightning speed. The technology behind tablets and smartphones is still relatively new, but people eventually go back to what they know, if only in part. After all, tablets and smartphones are designed to work on their own, but also in tandem with traditional computers.

In short, no, the PC market will probably not recover from this recent decline in sales. That doesn’t mean PCs are dying out; it just means other devices are taking their place at the top of the food chain. Just as the typewriter did not disappear the minute the computer made its appearance, the traditional computer isn’t going anywhere. For a while at least.


Top 25 Enterprise Online Backup Solutions for Business Cloud Data Protection

Online backup isn’t just for laptops anymore. The best modern enterprise cloud backup services are able to deliver complete backup & DR functionality for multi-platform server environments with data sizes up to 100TB and beyond.

Here’s a quick checklist of what to look for:

1. Performance – Performance is the Achilles’ heel of practical cloud backup in the real world. Test performance in your own environment; the biggest mistake is to buy without trying.
2. Cost – Evaluations of cloud backup cost work best when the total costs are compared, not only the “teaser” price per-GB-per-month.
3. Security – Check that data is encrypted in flight and at rest in the data center. Also look for audited data-handling standards like SSAE 16.
4. Local backup capability – An obvious part of enterprise backup, and a must-have.
5. VM and physical server backup capability – To be considered among the best enterprise backup solutions, a product should be able to back up both server types.
6. Disaster Recovery – This is why offsite backup is done in the first place. Best practice is to evaluate recovery performance during a trial period.
7. Archiving – Not the most critical component, but large amounts of company data are never accessed, and storing that data offsite frees up primary storage.
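To make point 2 concrete, here is a minimal sketch of a total-cost comparison; all fee structures and numbers below are hypothetical, not any vendor’s actual rates:

```python
def total_monthly_cost(gb_stored, per_gb_rate, base_fee=0.0,
                       per_server_fee=0.0, servers=0):
    """Total monthly cost, not just the per-GB 'teaser' rate.
    All pricing terms here are hypothetical illustrations."""
    return gb_stored * per_gb_rate + base_fee + per_server_fee * servers

# Vendor A advertises a lower per-GB rate but adds per-server fees;
# Vendor B charges a flat, higher per-GB rate.
vendor_a = total_monthly_cost(2_000, 0.10, per_server_fee=20.0, servers=10)
vendor_b = total_monthly_cost(2_000, 0.15)
print(vendor_a, vendor_b)  # -> 400.0 300.0 (the 'cheaper' teaser costs more)
```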

Below, I’ve made a list of what I consider to be the 25 most important backup services for business-class server backup.

    Affordable enterprise-grade cloud backup that’s faster than anything else out there, with stated backup speeds of up to 5TB a day. Includes online & local backup software, disaster recovery functionality, cloud storage, and plug-ins for SQL, Exchange, VMware, Hyper-V and NetApp servers.
  2. MozyPro
    The business-class counterpart to one of the world’s most popular low-cost backup solutions. Backing from EMC is a major plus or a negative, depending on who you are.
  3. CrashPlan
    Offering both free and enterprise backup solutions with support for VMware, Sun, Linux, Windows and Mac. Large enterprises have had good results for endpoint backup, less so for servers.
  4. EVault
    Online backup backed by a strong name, and trusted by a broad client base.
  5. IDrive
    Pricing and support are well reviewed, but “1TB per week” is too slow for even small enterprises.
  6. Carbonite
    Very popular with consumers for home backup, Carbonite’s server backup pricing is low, but performance is slow at “up to 100GB.”
  7. DataBarracks
    Serious business backup service with support for many different operating systems.
  8. AmeriVault
    Enterprise online backup from Venyu, helping to maximize both data protection and availability.
  9. Novosoft Remote Backup
    Online cloud backup that is easy, affordable, secure and practical. They also offer your first 5 GB for free.
  10. SecurStore
    An industry-certified leader in cloud backup and corporate disaster recovery.
  11. LiveVault
    Iron Mountain’s entry into the business online backup market.
  12. BackupMyInfo
    Online backup from a talented and diverse group of entrepreneurs.
    Helping ensure that your company is thoroughly prepared for even the most menacing of data disasters.
  14. GlobalDataVault
    Advanced, full-featured backup service provider with a special focus on compliance.
  15. Backup-Technology
    A rapidly-growing innovator which has been providing online backup since 2005.
  16. Intronis
    Fast, secure online backup with an established partner network.
  17. StorageGuardian
    Award-winning backup provider, recommended by VARs for over 10 years.
  18. CentralDataBank
    A trade-only cloud backup provider, built on a network of over 50 independent reseller partners.
  19. Storagepipe
    The Canadian leader in online backup to the cloud, with a broad presence in the blogosphere.
  20. OpenDrive
    Business backup with additional services built in, such as file storage, synching and sharing.
  21. Yotta280
    Years of experience in providing scalable data protection to companies of many different sizes.
  22. DataProtection
    Fast, reliable premium backup company that offers world-class support at no extra charge.
  23. RemoteDataBackups
    A premium data protection provider that offers a free product trial. They have a long list of clients and testimonials available.
  24. DriveHQ
    Offering both cloud storage and IT services in the cloud, for a higher level of service.
  25. OnCoreIT
    Pure backup for service providers, businesses and individuals.

Your Cloud Operations as — No, IS — A System of Systems

As many IT managers and CIOs found over the years, growing from one major application to multiple applications was tough in a broad range of areas such as planning, budgeting, implementation, user training, internal organization transformation management, support, interoperability, and more. Keeping up with application upgrades was tough enough, but when you are building data warehouses and data marts on top of everything else, and then trying to become more flexible and agile to meet market and supply chain shifts, it simply got tougher. IT management started to feel the pressure of managing a system of systems — all these applications and products — inter-operating with each other.

Eighteen years ago I was involved in one such effort — a “simple” move to an ERP to consolidate the operations of 13 business units globally. This led to the complex implementation and integration of SAP, Freight Rater, Manugistics, PeopleSoft, a global messaging gateway, and a large number of custom “apps” under/in SAP to meet the firm’s needs, PLUS the implementation of a second SAP instance to handle OLAP, since the core system was devoted to high-volume real-time 24x7 OLTP — and OLAP functions killed the OLTP system’s performance. They not-quite doubled the core system hardware costs with that realization, and synchronization of the databases became another major task to do and manage. There were many other “things” we had to do along the way, and while the goal was simplification, those other “things” added complexity.

Recognized or not, cloud computing was and is very much a form of business process re-engineering, only now it’s what I call IT Systems and Services Re-engineering (ITSSR). With Cloud vendors offering a broad range of PaaS, IaaS, SaaS, DaaS, and now MAaaS — Mobile Authentication as a Service — plus apps that support everyone from the seemingly old-fashioned desktop user to the mobile user in virtually any form, the bar has been raised, and perhaps even the stakes have gone up, in the use of Cloud Computing.

Regarding MAaaS, virtually all major application systems now offer remote user/mobile device support and integrated authentication services. Other firms offer third-party solutions. MAaaS simplifies the problems you may have had in maintaining separate authentication services and product features for each application by unifying the authorization for multiple apps/devices into one consolidated service — in the cloud. I have no fundamental objection to the concept, other than the consolidation of what is essentially access information and certain forms of security information in the Cloud, plus the ongoing issues of ownership of data in the cloud and where, country-wise, that data might reside. No disrespect to anyone or any product, but now — IN THEORY — you need only hack one site to gain access to many user domains simultaneously… but you could do that a number of ways.

I may have missed something (possible), but in all the discussions on Cloud Computing, I have yet to see any discussion of a user’s cloud computing environment (CCE) evolving into a complex System of Systems (SoS). Personally, I have no problem with this, having worked on many complex business and government SoS projects in the past. But many of those systems had a logical sequencing of activities that enabled structured operation, and sometimes this point, and effort, seems to be lost in migrating to the new CCE.

If one takes Wikipedia as an authoritative source for definitions, a “System of systems is a collection of task-oriented or dedicated systems that pool their resources and capabilities together to create a new, more complex system which offers more functionality and performance than simply the sum of the constituent systems. Currently, systems of systems is a critical research discipline for which frames of reference, thought processes, quantitative analysis, tools, and design methods are incomplete.” The emphasis is mine.

Recently, I published a book, Understanding the Laws of Unintended Consequences, whose narrative takes you on an escalating journey of understanding the Laws, from their most basic form (i.e., every action has an intended consequence and, quite likely, at least ONE unintended consequence (that you may not even know about or see)) to what happens, or can happen, in a system of systems. For all the occasional humor injected into the book, the subject is quite serious. The discussion of unintended consequences in complex systems and systems of systems should make one think — not stop, but think.

In August 2008 — before cloud computing really gained traction — the US Department of Defense published its Systems Engineering Guide for Systems of Systems. The publication makes for interesting reading for anyone looking at cloud computing or an in-house system of systems. The US military is built on a system of systems; in fact, so is the US government, or any government for that matter. So are most corporations, but those corporate SoS environments evolved over time in most cases, and were tuned and controlled as they went along.

Arguably, even major ERPs, with their many modules that do different things — many of which can be done independently of other modules — are SoSes. Some firms, driven by acquisition, have lived with SoSes that are inherently incompatible, but found simple ways to integrate and operate them effectively — focusing on data as the driver rather than apps, for example. One major automotive supplier, with almost 200 plants acquired through growth and acquisition at nearly as many locations globally, flatly refused to attempt apps standardization and happily used an essential business-operations data summary pull-down every day for the operational data it needed. Simple, easy to backstop, efficient — and VERY effective. Inexpensive to implement, very low cost to operate, and 100% accurate 100% of the time (unless some plant was off-line).

But back to the DOD SE Guide. It points out many things, one of which jumped off the page at me: the emergent behavior of systems, an area of significant study now. I know this issue all too well from past experiences where well-defined and well-constructed SoSes suddenly seemed to take on a life of their own. “What the …?” is a phrase I absolutely do not like.

The DOD guide notes on pages 9-10, and all emphasis is mine:  “In SoS contexts, the recent interest in emergence has been fueled, in part, by the movement to apply systems science and complexity theory to problems of large-scale, heterogeneous information technology based systems. In this context, a working definition of emergent behavior of a system is behavior which is unexpected or cannot be predicted by knowledge of the system’s constituent parts.

“For the purposes of an SoS, ‘unexpected’ means unintentional, not purposely or consciously designed-in, not known in advance, or surprising to the developers and users of the SoS. In an SoS context, ‘not predictable by knowledge of its constituent parts’  means the impossibility or impracticability (in time and resources) of subjecting all possible logical threads across the myriad functions, capabilities, and data of the systems to a comprehensive SE process.

“The emergent behavior of an SoS can result from either the internal relationships among the parts of the SoS or as a response to its external environment. Consequences of the emergent behavior may be viewed as negative/harmful, positive/beneficial, or neutral/unimportant by stakeholders of the SoS.” [1]

Key word: HETEROGENEOUS. This whole emergent concept lands squarely on top of the Law of Unintended Consequences regarding a system of systems, which in part states: “In a system-of-systems, the Unintended Consequences may not be detected until it is far too late, but if detected early, more will be assured … that you become aware of. Never trust the obvious …”

It is fundamentally impossible to test all aspects of an SoS. Even an operating system has evolved into an SoS, with innumerable subroutines/processes running in parallel and doing different things for the OS as a whole. Network products are released with high confidence that they will work, but testing them in all configurations is impossible, as is testing bug fixes to complex problems. This latter point – that not all reported bugs in a major and ubiquitous software product are tested – was made by the head of product testing for a major network products firm on a panel I chaired at a networking products conference. You could hear 1,000 people in that room suck in all the air as he made that statement. This was news to them. And as some people commented to me later: “Well, that explained a lot.”

Which leads me back to the Cloud. As firms move at a measured or rapid pace to cloud computing, migrating in-house systems and products to the cloud or using replacements provided by the Cloud Computing Environment suppliers, the adopters have begun to change the nature of how they conduct their business, from slight change with direct 1:1 replacements to significant change as new products, services, and tools are bolted on and integrated.

As Cloud adopters use more third-party tools and products — SaaS, MAaaS, etc. — that inter-operate with or depend upon each other, they begin to draw or establish different linkages and connections between their current work environment and their IT. If users wind up using products from different vendors in different clouds — which is quite possible — they have added complexity and risk to the mix in excess of their current exposure. If you’ve decomposed applications and created SOA components, you’ve just added to the mix. Therein lies the “trap,” so to speak. De-linkage creates opportunities for problems that simply magnify comparable problems that might occur in more tightly linked or homogeneous environments.

But it goes beyond that: many of the SaaS offerings that people are using don’t know about each other’s systems or SaaS offerings, and integrating them into a cohesive working environment can be a challenge. A recent article I read about one firm’s adoption of a mobile CRM product indicated they had to “fill in the blanks,” so to speak, and write their own enhancements for the additional features/services they wanted that the CRM package did not [yet] offer. So even while finding a “solution,” it was incomplete and they added to it — and thus added to the complexity of their overall systems management requirements. The app with enhancements worked fine … but it was one more thing to track and manage.

I could go on, but the message is simple: an SoS cannot just spin into existence. It needs to be as well-defined, architected, and managed as any prior in-house system. If anyone forgets that, they will really find out what the scope of unintended consequences can be.


[1] Office of the Deputy Under Secretary of Defense for Acquisition and Technology, Systems and Software Engineering. Systems Engineering Guide for Systems of Systems, Version 1.0. Washington, DC: ODUSD(A&T)SSE, 2008.

The 5 Most Important Steps to Ensuring Data Security in the Cloud

The use of cloud computing services makes a lot of sense for many businesses. By storing data on remote servers, businesses can access data whenever and wherever it’s needed, and cloud computing reduces the cost of the infrastructure required to manage their networks.

While cloud computing offers a number of benefits, it also has some drawbacks, most notably in the realm of security. Whenever data is transmitted between endpoints it’s vulnerable to loss or theft, and in an era when employees have 24-hour access to servers that are “always on,” security is of the utmost importance.

If you have already made the move to the cloud, or if you’re considering it, there are some steps to take to ensure that your data stays safe and secure.

Step 1: Choose the Right Vendor

The growth in cloud computing means an increase in cloud services providers. Before you choose a vendor to store your valuable data, find out how the vendor will keep your data safe. But don’t rely on sales presentations and literature from the vendor explaining their security policies and procedures. Perform your own background checks on the vendor, check references, and ask questions about where and how your data will be stored, as physical security is just as important as network security. Be sure that the vendor can provide proof of compliance with any governmental regulations regarding your data, and employs adequate encryption protocols.

Step 2: Encrypt Data at All Stages

Speaking of encryption, ensuring data security requires encrypting data at all stages — in transit and in storage. When data is encrypted, a security breach leaves the data all but useless unless the criminals also hold the encryption key. However, according to a recent survey, few companies actually encrypt data at all stages, instead encrypting only in transit or only in storage, creating serious vulnerabilities.

Step 3: Manage Security In-House

Data breaches in cloud computing often occur because the right hand doesn’t know what the left is doing; in other words, no one really knows who is responsible for the security of the data in the cloud. One survey indicated that nearly half of all businesses believe that security is the vendor’s responsibility, while an additional 30 percent believe that the customer is responsible for securing its own data. The answer lies somewhere in the middle. While the cloud provider certainly has a responsibility for securing data stored on its servers, organizations using the cloud must take steps to manage their own security. This means, at minimum, encrypting data at endpoints, employing mobile device management and security strategies, restricting access to the cloud to only those who need it and employing strict security protocols that include two-factor authentication.
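The two-factor authentication mentioned above need not be exotic. The one-time codes generated by common authenticator apps follow the HOTP and TOTP standards (RFCs 4226 and 6238), which can be sketched with nothing beyond the Python standard library:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226, section 5.3)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, period: int = 30) -> str:
    """Time-based variant (RFC 6238): the counter is the current time window."""
    return hotp(secret, int(time.time()) // period)

# First test vector from RFC 4226, Appendix D:
print(hotp(b"12345678901234567890", 0))  # -> 755224
```

In practice you would use a vetted library rather than rolling your own, but the sketch shows that the second factor is just a shared secret plus a counter, which is exactly why the secret must be protected as carefully as a password.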

Step 4: Provide User Training

One of the biggest mistakes that companies switching to cloud computing make is assuming that users know how to use the cloud and understand all of the security risks and protocols. For example, it’s not uncommon for employees using their own devices for work to log on to the cloud from the closest hotspot — which might not be the most ideal situation security-wise. Employees need to be trained and educated on how to properly maintain the security of their devices and the network to avoid security breaches.

Step 5: Keep Up With Advances in Security

One of the most common causes of devastating security breaches is a vulnerability created by failing to install security updates or patches to software. Malware is often designed to exploit vulnerabilities in common plug-ins or programs, and failing to keep up with updates can lead to disaster. All endpoints must be continuously updated to keep data secure. In addition, as the security industry changes protocols, best practices change as well. Understanding changes in best practices, technology and protocols and making changes accordingly will help prevent a costly disaster.

There are many factors that go into developing a robust security plan for data stored in the cloud, but at the very least, these five points must be taken into consideration. Without addressing these issues, even the best security technology and plan will leave your data susceptible to attack.


About the Author: Christopher Budd  is a seasoned veteran in the areas of online security, privacy and communications. Combining a full career in technical engineering with PR and marketing, Christopher has worked to bridge the gap between “geekspeak” and plain English, to make awful news just bad and help people realistically understand threats so they can protect themselves online. Christopher is a 10-year veteran of Microsoft’s Security Response Center, has worked as an independent consultant and now works for Trend Micro.

Data Consolidation Facts

Data consolidation is an important process used to summarize large quantities of information. This information is typically found in the form of spreadsheets gathered into larger workbooks. Computers handle the data consolidation process, and Microsoft Excel is one of the most popular tools of choice: consolidation is done on an automated basis with the help of tools incorporated into the program.

What Is Data Consolidation?

Briefly put, data consolidation refers to taking data cells originating from several spreadsheets and compiling them into a single summary sheet. The process spares computer users from having to personally and manually copy individual data cells from particular reference points and re-enter them in various other places in a brand-new spreadsheet. This way, the formatting, re-organization, and re-arranging of huge amounts of information can be considerably simplified.

Data consolidation programs require certain conditions to be met by the spreadsheets and files whose data needs to be consolidated. First of all, each of the large worksheets has to share the same information range along both of its axes. This requirement helps the data consolidation program complete its calculations, which determine how each data cell corresponds with the data belonging to the other worksheets or pages. Once the process is complete, the program creates a brand-new worksheet that summarizes all the data from the respective worksheets.
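As a minimal illustration of these requirements, worksheets can be modelled as dictionaries keyed by (row, column) labels; consolidation first checks that every sheet covers the same range on both axes, then sums the corresponding cells. The quarterly figures below are hypothetical:

```python
# Each worksheet is modelled as a dict mapping (row_label, column_label) -> value.
# Consolidation requires every sheet to share the same range on both axes;
# the summary sheet then sums the corresponding cells.

def consolidate(sheets):
    """Sum corresponding cells across worksheets sharing the same range."""
    if not sheets:
        return {}
    cells = set(sheets[0])
    for sheet in sheets[1:]:
        if set(sheet) != cells:  # both axes must line up
            raise ValueError("worksheets do not share the same range")
    return {cell: sum(sheet[cell] for sheet in sheets) for cell in cells}

# Hypothetical quarterly sales sheets (region x product):
q1 = {("North", "Widgets"): 120, ("South", "Widgets"): 80}
q2 = {("North", "Widgets"): 150, ("South", "Widgets"): 95}
summary = consolidate([q1, q2])
print(summary[("North", "Widgets")])  # 270
```

This mirrors what Excel's Consolidate feature does by position: it only works when the sheets line up, which is why mismatched spreadsheets still need manual summarizing.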


Frequent Data Consolidation Users   

Usually, data consolidation is used in a wide array of fields to organize employees’ work more efficiently. Data consolidation is also a process that can bring considerable improvements to one’s proficiency levels. Physicians normally use data consolidation to keep records of their patients and treatment plans. Teachers can make full use of data consolidation to create fast summaries of their students’ grades, tests, projects, or assignments. Retailers can also find great use in data consolidation when they need to track down the stores or items of merchandise that generate the largest profits, and so on. Needless to say, this is not an exhaustive list of all of the practical applications of data consolidation.

Consequently, there are also a large number of people who are willing to pay for such services. Data consolidation software is also available for purchase. The particularity of these applications is that they are not fully automated: they work with several worksheets run in several formats and programs. An individual who can manually summarize data is often necessary when the spreadsheets do not meet the previously established requirements.

If you need the services of such a specialist, browse the internet and get in touch with a few candidates.

Internal Threats: Employees More Dangerous Than Hackers

One of the biggest responsibilities of any IT department is to maintain a high level of security and ensure that the company’s data is properly protected. The dangers of security breaches are very real, and the effects can be crippling to a business. Many IT departments tend to direct their focus and attention towards external threats such as hackers. However, more and more companies are coming to the realization that internal sources, such as employees, may present the biggest security risks. As technology continues to advance and the business landscape keeps evolving, IT departments are scrambling to keep up and protect their companies, and it is becoming clear that the best place to start is internally.

Dangers of a Security Breach

The threats and possible repercussions of a breach in security are the primary concerns of any company’s IT department. The damage that can be caused by these breaches can be devastating for a business of any size.

According to a study done by Scott & Scott LLP of over 700 businesses, 85% of respondents confirmed they had been the victims of a security breach. These types of breaches can be detrimental to a company in numerous ways. The most tangible damage caused by these breaches is the fines that are typically associated with them. The legal repercussions of a data breach, such as fines and lawsuits, can become costly in a hurry. Also, the loss in customer confidence is something that can continue to hurt business for years and something that some companies may never be able to overcome. Finally, if the compromised data from a security breach makes its way into the hands of a competitor it can be disastrous.

Employees Pose Largest Risk

To avoid the negative ramifications listed above, an IT department must first identify where potential risks for a breach exist. While outside sources like hackers do pose a threat, the biggest risk for a security breach to a company lies with its employees.

Unlike hackers, employees are granted access to important company data on a daily basis. This level of access to information is the reason employees represent such a large security risk. There are a number of ways and reasons that an employee can compromise the security of a company. For instance, disgruntled employees may intentionally try to leak information or a former employee could use their intimate knowledge of the company to attempt to breach security. However, the most common breaches happen when an employee either willingly ignores or fails to follow security protocols set forth by the IT department.

BYOD Increases Risk

The “bring your own device” or BYOD philosophy is one that is gaining momentum and popularity among many different industries. While this type of system has its benefits and can be a successful model for most companies, it unfortunately also increases the risk of data breach and makes it more difficult for a business to ensure its information is secure.

The main risk associated with BYOD is the danger of lost or stolen devices. This is one of the drawbacks of BYOD: although it allows an employee to continue working while out of the office, it also means that valuable data leaves the office with them. Allowing employees to work from their personal devices drastically increases the risk of data breach, as people take these devices everywhere with them. Devices such as phones or tablets are more susceptible to loss or theft because they are smaller and easier to misplace.

Another problem with storing important data on these kinds of devices is that their level of security tends to be quite low. Many users do not even have a protective password on their phones or devices, and those who do usually rely on a four-digit sequence that does not provide much security.
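The weakness of a four-digit code is easy to quantify. Assuming a modest guess rate of 10 attempts per second and no lockout (both figures are illustrative assumptions), the entire keyspace falls in under twenty minutes:

```python
# A four-digit PIN has only 10**4 possible values.
keyspace = 10 ** 4
print(keyspace)  # 10000

# At an assumed 10 guesses per second with no lockout policy,
# exhausting the whole keyspace takes under 17 minutes:
guesses_per_second = 10
worst_case_minutes = keyspace / guesses_per_second / 60
print(round(worst_case_minutes, 1))  # 16.7
```

Lockout policies and longer alphanumeric passcodes change this arithmetic dramatically, which is why both are standard recommendations.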

The other issue with BYOD in regards to security is that third parties can gain access to a device through mobile applications. This is a problem because the person who owns the device may download apps infected with malware, which can give undesired third parties access to a business’s sensitive information.

Ways to Protect Against Security Breaches Caused by Employees

Although numerous security threats are associated with employee activity, especially under a BYOD model, there are a few things that a company and its IT department can do to protect their valuable data.

The first thing to do is make sure that your employees are aware of these threats to security and the damage they can cause. As mentioned before, most breaches in security occur when an employee unwittingly compromises security because they have no idea that their actions are potentially dangerous.

Offering education and training programs to help employees familiarize themselves with security policies will make it easier for them to follow such policies. In the case of BYOD it may be necessary to include employees in the policy-making process. This will give them intimate knowledge of why the policies are in place and increase the likelihood that they will adhere to security protocols.

There are also apps available that can help separate the user’s personal life from business. These apps will help protect a company’s data from third-parties as they isolate information associated with business and deny third-party access from personal applications. A company may also elect to create a “blacklist” which informs employees of which apps to stay away from.

Due to their unparalleled access to company data and information, employees pose the biggest threat to security for an IT department. Employees often cause substantial damage to a company because they are careless or unaware of potential dangers. Although external hacking is always a threat and should not be ignored, the first place an IT department should start in regards to ensuring their company’s security is internally with its employees.

About The Author: Ilya Elbert is an experienced IT Support Specialist and Co-Owner of Geeks Mobile USA. When he’s not providing information on data security, he enjoys keeping up on the latest news and trends within the IT industry.

Virtualization Implementation—Taking It One Step at a Time

Leveraging virtualization technology has the potential to streamline processes, simplify management, and speed up system provisioning, but in order to take advantage of all the benefits of server virtualization, it’s important to have a well-thought-out plan for both implementation and monitoring. The “virtualization lifecycle” can be thought of as a process with four key phases. Virtualization implementation should include continuing iterations of the technology, with the organization seeing progressively greater benefits as the cycle moves forward.


Here’s a look at each of the four stages of implementation:

  • Plan phase – During this stage, you should identify long-term goals while prioritizing short-term projects that have the greatest potential to benefit from virtualization. You should also set goals and find metrics that will determine your success and conduct testing of your network to ensure that you have the necessary capacity and support to carry out the project. Each time you return to this phase, take the time to inventory applications and infrastructure with the best opportunities for improvements.
  • Provide phase – In this phase, you’ll begin to implement your virtualization plan. It’s important to allocate the resources necessary—from the processor to the hypervisor—to make the project successful. At this stage, effective workload migration is critical.
  • Protect phase – The protect phase needs to be planned for in advance and is generally carried out in conjunction with the “provide” stage. This stage is where you should set up backup and disaster recovery systems. You should also do some testing at this stage to ensure the reliability and performance of your project.
  • Operate phase – During this phase, you should be basically done implementing the technology, though you’ll continue to monitor virtual machine performance and make adjustments as necessary. Modern virtualization technology offers live migration, or the ability to reallocate resources from one physical machine to another without disruption.

Compliance—Checking at Every Phase

One thing that you should be sure to do at every phase of this process is to check for regulatory compliance. Be sure that you are in line with audit and security measures and controls so that you don’t have to overhaul everything later. You’ll also want to make sure that you have taken the necessary security precautions to protect your network and your data—is there antivirus and firewall software installed, for example?

The process of implementing a virtualization strategy into your business should be an ongoing effort. As you achieve your goals in one area, you’ll want to plan for other short-term projects that could benefit from the effects of virtualization and then start the cycle over.

Where are you at in the virtualization lifecycle? What are your tips for virtualization success?

About The Author: Matt Smith works for Dell and has a passion for learning and writing about technology. Outside of work he enjoys entrepreneurship, being with his family, and the outdoors.

The Top 10 Trends Driving the New Data Warehouse

The new data warehouse, often called “Data Warehouse 2.0,” is the fast-growing trend of doing away with the old idea of huge, off-site mega-warehouses stuffed with hardware and connected to the world through huge trunk lines and big satellite dishes.  The replacement moves away from that highly controlled, centralized, and inefficient ideal towards a more cloud-based, decentralized preference for varied hardware and widespread connectivity.

In today’s world of instant, varied access by many different users and consumers, data is no longer nicely tucked away in big warehouses.  Instead, it is often stored in multiple locations (often with redundancy) and overlapping small storage spaces that are often nothing more than large closets in an office building.  The trend is towards always-on, always-accessible, and very open storage that is fast and friendly for consumers yet complex and deep enough to appease the most intense data junkie.

The top ten trends for data warehousing in today’s changing world were compiled by Oracle in their Data Warehousing Top Trends for 2013 white paper.  Below is my own interpretation of those trends, based on my years of working with large quantities of data.

1. Performance Gets Top Billing

As volumes of data grow, so do expectations of easy and fast access.  This means performance must be a primary concern.  In many businesses, it is THE top concern.  As the amount of data grows and the queries into the database holding it gain complexity, this performance need only increases.  The ability to act on data quickly is huge and is becoming a driving force in business.

Oracle uses the example of Elavon, the third-largest payment processing company in the United States.  They boosted performance for routine reporting activities for millions of merchants in a massive way by restructuring their data systems.  “Large queries that used to take 45 minutes now run in seconds.”

Everyone expects this out of their data services now.

2. Real-time Data is In the Now

There’s no arguing that the current trends are in real-time data acquisition and reporting.  This is not going to go away.  Instead, more and more things that used to be considered “time delay” data points are now going to be expected in real-time.  Even corporate accounting and investor reports are becoming less driven by a tradition of long delays and more by consumer expectations for “in the now.”

All data sets are becoming more and more driven by just-in-time delivery expectations as management and departments expect deeper insights delivered faster than ever.  Much of this is driven by performance, of course, and the performance gains above will help, but with those increases come greater data acquisition and storage demands as well.

3. Simplifying the Data Center

Traditional systems weren’t designed to handle these types of demands.  The old single-source data warehouse is a relic, having too much overhead and complexity to be capable of delivering data quickly.  Today, data centers are engineered to be flexible, easy to deploy, and easy to manage.  They are often flung around an organization rather than centralized and they are sometimes being outsourced to cloud service providers.  Physical access to hardware is not as prevalent for IT management as it once was and so “data centers” can be shoved into closets, located on multiple floors, or even in geographically diverse settings.  So while this quasi-cloud may seem disparate on a map, in use it all appears to be one big center.

4. The Rise of the Private Cloud

These simplified systems and requirements mean that many organizations that may once have looked to outsource cloud data services are now going in-house, because it’s cheaper and easier than it’s ever been before.  Off-the-shelf private cloud options are becoming available and seeing near plug-and-play use by many CIOs.  Outsourcing still has many advantages, of course, and allows IT staff to focus on innovation in customer service rather than on internal needs.

5. Business Analytics Infiltrating Non-Management

Traditionally, business analytics for a business are conducted by upper-level management and staff.  Today, the trend is to spread the possibilities by opening up those analysis tools and data sets (or at least relevant ones) to department sub-heads, regional managers, and even localized, on-site personnel.  This is especially true in retail and telecommunications, where access to information for individual clients or small groups of them can make or break a deal being made.  For sales forces, customer loyalty experts, and more, having the ability to analyze data previously inaccessible without email requests and long delays is a boon to real-time business needs.

6. Big Data Is No Longer Just the Big Boys’ Problem

Until recently, the problem of Big Data was a concern only of very large enterprises and corporations, usually of the multi-national, multi-billion variety.  Today, this is filtering down and more and more smaller companies are seeing Big Data looming.  In addition, Big Data is only one type of storage, with real-time, analytic, and other forms of data also taking center stage.  Even relatively small enterprises are facing data needs as volumes of information grow near-exponentially.

7. Mixed Workloads

Given the varieties of data, workloads are becoming more mixed as well.  Some services or departments may need real-time data while others may want deeper, big data analysis, while still others need to be able to pull reports from multi-structured data sets.  Today’s platforms are supporting a wider variety of data with the same services often handling online e-commerce, financials, and customer interactions.  High-performance systems made to scale and intelligently alter to fit the needs at hand are very in-demand.

8. Simplifying Management With Analytics

Many enterprises are finding that management overhead is cut dramatically when data analytics is employed intelligently.  What was once an extremely expensive data outlay for storage, access, security, and maintenance is now becoming a simpler, lower-cost system, because using analysis to watch data-use trends enables more intelligent purchase and deployment decisions.

9. Flash and DRAM Going Mainstream

More and more servers and services are touting their instant-access Flash and DRAM storage sizes rather than hard drive access times.  The increased use of instant-access memory systems means less bottleneck in the I/O operations.  As these fast-memory options drop in cost, their deployments will continue to increase, perhaps replacing traditional long-term storage methods in many services.

10. Data Warehousing Must Be Highly Available

Data warehousing workloads are becoming heavier and demands for faster access more prevalent.  The storage of the increasing volumes of data must be both fast and highly available as data becomes mission-critical.  Downtime must be close to zero and solutions must be scalable.


There is no doubt that Data Warehouse 2.0, with its non-centralized storage, high availability, private cloud, and real-time access, is quickly becoming the de facto standard for today’s data transactions. Accepting these trends sooner rather than later will help you provide an adequate infrastructure for storing, accessing, and analyzing your data in efficient and cost-effective ways that are consistent with global industry trends.

About The Author: Michael Dorf is a professional software architect, web developer, and instructor with a dozen years of industry experience. He teaches Java and J2EE classes at, a San Francisco-based open source training school. Michael holds an M.S. degree in Software Engineering from San Jose State University and regularly blogs about Hadoop, Java, Android, PHP, and other cutting-edge technologies on his blog at

Cloud Chivalry – The Self-Rescuing Company

While responsibility for protection in the cloud starts with a trusted provider, companies can’t ignore their role in keeping data safe. Much like the princess, smart and savvy enough to rescue herself, IT professionals need to take control of their own cloud environment to maximize tech security.

Consider the Kingdom

By accessing software-as-a-service (SaaS) offerings through thin clients or using web-based apps, companies put themselves at risk. While it’s tempting to see this problem as purely technological – as something bigger, faster and stronger security systems will easily mitigate – this ignores one of the simplest (and most pervasive) problems in cloud security: employees.

The denizens of current technology kingdoms are far more tech-savvy than previous generations, able to bypass IT requirements as needed thanks to wi-fi and mobile technologies. To protect against unauthorized downloads or nefarious third-party programs, company admins have to start with local access. No matter how strict offsite requirements are for securing data, this good work can be undermined by easy user missteps.

First, admins must identify user needs and tailor access to specific tasks; no individual should ever have access to a server in its entirety. Next, it’s crucial for IT professionals to train employees in safe cloud use. Rather than trying to enforce a “no smartphones” rule, or tell workers they “can’t” when tech questions arise, admins need to develop sensible policies for access and back them up with solid authentication requirements. As a recent Dome 9 security article points out, two-factor authentication combined with detailed logs of all access requests make tracking and eliminating cloud weak spots a much simpler task.
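As a hedged sketch of the logging half of that advice, a detailed log of access requests can be scanned for repeated failures to surface weak spots. The log format, names, and threshold below are illustrative assumptions, not any particular vendor's API:

```python
from collections import Counter

# Hypothetical access-request log entries: (user, resource, succeeded)
access_log = [
    ("alice", "payroll-db", True),
    ("mallory", "payroll-db", False),
    ("mallory", "payroll-db", False),
    ("mallory", "payroll-db", False),
    ("bob", "wiki", True),
]

def flag_weak_spots(log, threshold=3):
    """Flag (user, resource) pairs with `threshold` or more failed requests."""
    failures = Counter((user, res) for user, res, ok in log if not ok)
    return [pair for pair, count in failures.items() if count >= threshold]

print(flag_weak_spots(access_log))  # [('mallory', 'payroll-db')]
```

Paired with two-factor authentication, a report like this makes tracking and eliminating suspicious access patterns a much simpler task.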

Deepen the Moat

In addition to considering how users inside a cloud get out, it’s also important to think about how malicious attacks get in. Firewalls, for example, remain critical defensive structures, so long as they are properly implemented. Rather than leaving SSH open to all traffic, essentially giving hackers carte blanche, admins need to open ports on a case-by-case basis. Taking this a step further are intelligent, next-generation firewalls. These solutions are able to scan incoming code and – if an unknown or malicious string is detected – isolate it in a virtual environment. The code is then permitted to run, but without the chance of harming company infrastructure, and the results are recorded for future use.
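The case-by-case, default-deny port policy described above can be sketched in a few lines. The allowlist and the trusted admin host are illustrative assumptions; the port numbers follow the usual conventions (22 = SSH, 443 = HTTPS):

```python
# Default-deny policy: a port is reachable only if explicitly opened,
# and SSH is limited to a known admin host rather than the whole internet.

ALLOWED_PORTS = {443}  # HTTPS opened deliberately; everything else stays closed

def permit(port, source_ip, trusted_sources=frozenset({"10.0.0.5"})):
    """Allow a connection only on an opened port; SSH only from trusted IPs."""
    if port == 22:                   # SSH: case-by-case, admin host only
        return source_ip in trusted_sources
    return port in ALLOWED_PORTS     # default deny for all other ports

print(permit(443, "203.0.113.9"))  # True  - HTTPS is open to all
print(permit(22, "203.0.113.9"))   # False - SSH closed to unknown hosts
print(permit(22, "10.0.0.5"))      # True  - trusted admin host
```

Real firewalls express the same logic as ordered rules, but the principle is identical: nothing is reachable unless someone deliberately opened it.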

Through controlled user access and the proactive use of defensive technology, companies are able to provide their own form of cloud chivalry. In combination with dedicated provider oversight, this creates secure gates for user exit and deep moats for data entry.

About The Author: Doug Bonderud is a freelance writer, cloud proponent, business technology analyst and a contributor on the Dataprise Cloud Services website.

The Impact BYOD Has on Your Backup & Recovery Strategy

For years, mobile devices have increasingly helped employees around the globe access important documents and emails while sitting in a cab, standing in line for coffee or waiting in an airport. Most recently the trend has turned towards Bring Your Own Device (BYOD) for businesses of all sizes.  As the name implies, BYOD gives employees the freedom to “bring in” and use their own personal devices for work, connect to the corporate network, and often get reimbursed for service plans. BYOD allows end-users to enjoy increased mobility, improved efficiency and productivity and greater job satisfaction.

However, BYOD also presents a number of risks, including security breaches and exposed company data, which can take extra money and resources to rectify. What happens when an employee’s mobile device is lost or stolen? Who is responsible for the backup of that device, the employee or the IT department?

According to a recent report by analyst firm Juniper Research, the number of employee-owned smartphones and tablets used in the enterprise is expected to reach 350 million by 2014. These devices will represent 23% of all consumer-owned smartphones and tablets.

BYOD has a direct impact on an organization’s backup and disaster recovery planning. All too often IT departments fail to have a structured plan in place for backing up data on employees’ laptops, smartphones and tablets. Yet it is becoming imperative to take the necessary steps to prevent these mobile devices from becoming security issues and racking up unnecessary costs. Without a strategy in place, organizations are risking the possibility of security breaches and the loss of sensitive company data, spiraling costs for data usage and apps, and compliance issues between the IT department and company staff.

The following best practices can help businesses incorporate BYOD into their disaster recovery strategies:

  1. Take Inventory: According to a SANS Institute survey in 2012, 90% of organizations are not ‘fully aware’ of the devices accessing their network. The first step is to conduct a comprehensive audit of all the mobile devices and their usage to determine what is being used and how. While an audit can seem to be a daunting task for many organizations, there are mobile device management (MDM) solutions available to help simplify the audit process. Another integral part of the inventory is asking employees what programs and applications they are using.  This can help IT better determine the value and necessity of various applications accessing the network.
  2. Establish Policies: Once you have a handle on who has what device and how they are being used, it is important to have policies in place to ensure data protection and security. This is crucial for businesses that must adhere to regulatory compliance mandates. If there is not a policy in place, chances are employees are not backing up consistently, or in some cases at all.  You may want to determine a backup schedule frequency with employees or deploy a solution that can run backups even if employees’ devices are not connected to the corporate network.
  3. Define the Objective: Whether you have 10 BYOD users or 10,000, you will need to define your recovery objectives in case there is a need to recover one or multiple devices.  Understand from each department or employee which data is critical to recover immediately from a device, and find a solution that can be customized based on device and user roles. The ability to exclude superfluous data such as personal email and music from corporate backups can also be helpful.
  4. Implement Security Measures: Data security for mobile devices is imperative. Educating employees can go a long way in helping to change behavior. Reminders on password protection, WiFi security and auto-locking devices when not in use may seem basic but can be helpful in keeping company data secure for a BYOD environment. Consider tracking software for devices or the ability to remotely lock or delete data if a device is lost or stolen.
  5. Employee Adoption: The last best practice for a successful BYOD deployment that protects mobile devices across your organization is to monitor employee adoption. In a perfect world, all employees will follow the procedures and policies established. However if you are concerned about employees not following policies, you may want to consider leveraging silent applications that can be placed on devices for automatic updates. These can run on devices without disrupting device performance.

About The Author: Jennifer Walzer is CEO of Backup My Info! (BUMI), a New York City-based provider which specializes in delivering online backup and recovery solutions for small businesses.

6 Important Stages in the Data Processing Cycle

Much of data management is essentially about extracting useful information from data. To do this, data must go through a data mining process in order to get meaning out of it. There is a wide range of approaches, tools and techniques for doing this, and it is important to start with the most basic understanding of processing data.

What is Data Processing?

Data processing is simply the conversion of raw data to meaningful information through a process. Data is manipulated to produce results that lead to a resolution of a problem or improvement of an existing situation. Similar to a production process, it follows a cycle where inputs (raw data) are fed to a process (computer systems, software, etc.) to produce output (information and insights).

Generally, organizations employ computer systems to carry out a series of operations on the data in order to present, interpret, or obtain information. The process includes activities like data entry, summary, calculation, storage, etc. Useful and informative output is presented in various appropriate forms such as diagrams, reports, graphics, etc.

Stages of the Data Processing Cycle

1) Collection is the first stage of the cycle, and is very crucial, since the quality of data collected will impact heavily on the output. The collection process needs to ensure that the data gathered are both defined and accurate, so that subsequent decisions based on the findings are valid. This stage provides both the baseline from which to measure, and a target on what to improve.

Some types of data collection include census (data collection about everything in a group or statistical population), sample survey (collection method that includes only part of the total population), and administrative by-product (data collection is a byproduct of an organization’s day-to-day operations).

2) Preparation is the manipulation of data into a form suitable for further analysis and processing. Raw data cannot be processed as-is and must be checked for accuracy. Preparation is about constructing a dataset from one or more data sources to be used for further exploration and processing. Analyzing data that has not been carefully screened for problems can produce highly misleading results; the outcome is heavily dependent on the quality of the data prepared.
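A minimal sketch of this screening step, using made-up records and a plausibility check on one field:

```python
# Hypothetical raw records collected from several sources; preparation screens
# out rows that would mislead later analysis (missing or impossible values).

raw = [
    {"id": 1, "age": 34},
    {"id": 2, "age": None},   # missing value
    {"id": 3, "age": -5},     # impossible value
    {"id": 4, "age": 41},
]

def prepare(records, lo=0, hi=120):
    """Keep only records whose age is present and plausible."""
    return [r for r in records if r["age"] is not None and lo <= r["age"] <= hi]

clean = prepare(raw)
print([r["id"] for r in clean])  # [1, 4]
```

Real preparation pipelines do far more (deduplication, type coercion, reconciling sources), but every one of them reduces to rules like these applied systematically.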

3) Input is the task where verified data is coded or converted into machine readable form so that it can be processed through a computer. Data entry is done through the use of a keyboard, digitizer, scanner, or data entry from an existing source. This time-consuming process requires speed and accuracy. Most data need to follow a formal and strict syntax, since a great deal of processing power is required to break down the complex data at this stage. Due to the costs, many businesses are resorting to outsourcing this stage.

4) Processing is when the data is subjected to various means and methods of manipulation: the point at which a computer program is executed, containing the program code and its current activity. Depending on the operating system, the process may be made up of multiple threads of execution that simultaneously execute instructions. While a computer program is a passive collection of instructions, a process is the actual execution of those instructions. Many software programs are available for processing large volumes of data within very short periods.

5) Output and interpretation is the stage where processed information is transmitted to the user. Output is presented to users in various formats, such as printed reports, audio, video, or on-screen displays. Output needs to be interpreted so that it can provide meaningful information to guide the future decisions of the company.

6) Storage is the last stage in the data processing cycle, where data, instruction and information are held for future use. The importance of this cycle is that it allows quick access and retrieval of the processed information, allowing it to be passed on to the next stage directly, when needed. Every computer uses storage to hold system and application software.

The Data Processing Cycle is a series of steps carried out to extract information from raw data. Although each step must be taken in order, the order is cyclic: the output and storage stage can lead back to the data collection stage, resulting in another cycle of data processing. The cycle provides a view of how the data travels and transforms from collection to interpretation and, ultimately, to use in effective business decisions.
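The six stages above can be sketched as a toy Python pipeline; the stage functions and sample data are purely illustrative:

```python
# A toy end-to-end sketch of the six stages described above, each stage as
# a function. Real pipelines are far more elaborate; this only shows how
# the stages chain together and how output can feed a new collection cycle.

def collect():                      # 1) Collection
    return ["12", "7", "null", "3"]

def prepare(raw):                   # 2) Preparation: drop unusable values
    return [v for v in raw if v.isdigit()]

def input_stage(prepared):          # 3) Input: convert to machine-readable form
    return [int(v) for v in prepared]

def process(values):                # 4) Processing
    return sum(values)

def output(result):                 # 5) Output and interpretation
    return f"total = {result}"

storage = []                        # 6) Storage: held for future cycles

report = output(process(input_stage(prepare(collect()))))
storage.append(report)
print(report)  # total = 22
```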

About The Author: Phillip Harris is a data management enthusiast who has written numerous blogs and articles on effective document management and data processing.


Four Tips For Successful Data Consolidation

Successful data consolidation can bring about a number of benefits to an organisation, and many go-getting firms are seeing consolidation as standard practice.  But the complexities of consolidation and the bringing together of the appropriate resources and people can require careful thought and consideration.  Here are four tips to make sure your data consolidation is a successful exercise.

Think long-term 

Any project that embarks on consolidating a company’s IT infrastructure needs to be well-planned and thoroughly analysed before setting the wheels in motion. There are a whole host of strategic considerations, based around fundamental aspects relating to your business.

Firstly, think about the financial savings you might gain from data consolidation.  Don't just consider short-term savings, but how much you'll benefit in the long run.  And where would you want to steer investment with the extra income those savings create?

What are the company’s plans for future growth?  If you are considering expanding into new markets or making acquisitions, then this could impact on any data consolidation projects that you embark upon.

Take a holistic view of your IT suite and infrastructure.  What are your current (and future) needs?  Will you be planning any upgrades and how will your company integrate new changes in technology into your infrastructure?

Consider, also, the current licensing options for your software.  When will these need renewing and how will they impact on any data consolidation project your company might embark on?

Adopt a strategic approach 

A well-thought-out approach to data consolidation should be considered strategically rather than tactically.  A strategic mindset shows that you've thought about the long-term consequences of the project's outcomes and the goals that you want to achieve.  Strategic thinking is also more likely to gain approval from the financial powers that be within the organisation, giving the project the green light to go ahead.

Review your current IT suite 

If you’re going to be embarking on migrating applications to new or upgraded environments as part of a data consolidation project, it makes sense to give your current hardware and software suite an overall audit at the same time.  If you’re not using the most up-to-date IT applications and infrastructure, then now could be the perfect time to consider upgrading, which can benefit the users and increase productivity within the organisation.

Improved hardware designs and network performance can provide efficiencies and cost savings within a firm.

Much-improved software comes with better functions and features, increasing ease of use and efficiency.

Follow the rules 

Successful data consolidation projects require sticking to some tried and tested rules, so stay on course if you want to reap the benefits.  For example, when designing a modernised platform, build a balanced and aligned architecture to avoid creating bottlenecks in your system.

Invest in training the appropriate team members who will be critical in the project.  Key players will need to learn about new features before they can design or build a platform prior to implementation.  Use professionals who know what they are doing and have experience of these kinds of projects.

Build a business critical and non-critical database solution, so that non-project team members can understand what the implications of the consolidation process will be.

Leave the trickiest aspects to the end.  By the time you get round to dealing with them, it could be that you have adapted plans to replace these aspects instead.

This article was written by Lauren R, a blogger who writes on behalf of DSP, providers of managed IT services in London.

Cloud X.0 (and Then Some?)

HP’s latest announcement that it was stepping away from an Intel-based chip model and planning a whole new GENUS (not generation) of servers raises the bar in the server race. And every bar-raising event comes with some questions. Some questions are pure curiosity, and some more probative.

In its press release, HP noted: “With nearly 10 billion devices connected to the internet and predictions for exponential growth, we’ve reached a point where the space, power and cost demands of traditional technology are no longer sustainable,” said Meg Whitman, president and chief executive officer, HP. “HP Moonshot marks the beginning of a new style of IT that will change the infrastructure economics and lay the foundation for the next 20 billion devices.” (Emphasis is mine.)

I’ve been waiting for something like this — and since long before the announcement of Moonshot 18 months ago. At Sperry, nearly 40 years ago, we were actually developing a mainframe that was dynamically software configurable … but it got shelved after $50 million based on two factors. One was that Fairchild, then leading the fab world, said the design was too advanced to be realizable in chips (THEN). The other was customer-base resistance to ANY manner or scale of actual conversion that might be required (and yes, some would be required for the very oldest of Sperry systems). This next generation of HP servers (and its forthcoming competition) soared over the first hurdle and made the second one moot.

Back in the late ’70s, Fairchild offered to help Sperry build the facility to R&D and produce the chips on its (Sperry’s) own. But that project got shelved, 800 hardworking people went on to do something else, and it led to the evolution of Sperry’s (later Unisys’) product family. I was part of the future product planning group that periodically reviewed the product family and its intent, status, etc., and we collectively gave it a (giddy with anticipation) thumbs up. Sigh.

The impact of this new form of server can and should be seen as positive. Having energy-efficient, multi-module, functionally-specific servers is, or should be, a customer’s dream. Have racks of them, and cabinets of them. This means customers can align the specifics of their data center to be tuned to the actual nature of the systems they run, rather than having one “standard” server architecture approach meet all needs. And with greater cost/expense efficiency. AND if the workload changes and you have to adapt, there likely is (or will be) one or more modules out there you can plug in to support it.

But did HP just adopt the “cellular phone market and product” model, preparing to offer new versions of the servers every three to six months or so? “It’s a software-defined server, with the software you’re running determining your configuration,” said David Donatelli, the general manager of H.P.’s server, storage and networking business. “The ability to have multiple chips means that innovations will happen faster.”  That’s probably quite the understatement. NO slight intended, but it sounds like the cell/smartphone model to me.

Possibilities are endless, but the top two to me are: (1) those holding onto their own data centers will go berserk and the cloud data center guys are going to be having heart attacks; (2) or maybe everyone will step back and carefully plan their futures.  Servers used to be just servers, more or less. Now servers would or could seem to be anything you want them to be. That hits a lot of critical issues, from efficiency in processing to surge processing accommodation to … (feel free to keep adding).

We have reached [yet] another branch in the “Cloud Journey,” as I call it. We can’t call it either a revolution or an evolution. We are on a fast-track journey that keeps picking up speed.  We’re not even noticing speed bumps anymore.

You can have ONE datacenter with a (potentially) constantly changing mix of servers to meet specific organization/user systems and user community needs, or maybe move to a SOA-like CLOUD services provider model in which specific cloud suppliers provide best-in-class services for specific applications types, e.g., graphics, scientific, data analytics, etc.

Now what this means to the software vendors (SaaS and others) is still to be fully defined, and I personally imagine a lot of head scratching and perhaps some serious angst over how to cope with this. Or some vendors might just go back about their business. But a shift to a consistent per-user pricing model versus a per instance model has some interesting implications, for example. Or create a whole new metric.

Personally, I see this as really re-dimensioning the evolution of cloud computing, and possibly rethinking what goes into the cloud and what stays in house (if anything), or even pulling some items back in house. It also leads to the potential for more comprehensive federations of linked special-purpose cloud providers individually focusing on niche markets as firms try to slice and dice. SOA in the Cloud carried to the extreme — or maybe it is an evolution of the CLOUD AS SOA. IF you can get the energy and operational economics back in tow, and the management of all of this is NOT back-breaking, AND you have software economics benefits, you might even see the migration to the cloud slow, or morph into something different.

This could really change the forecasts for % of the cloud held by the various players in the SaaS, PaaS and IaaS space, such as shown in the Telco2Research graphics and other prognostications/forecasts. It will open and close some doors, for sure.

In my recent book Technology’s Domino Effect – How A Simple Thing Like Cloud Computing and Teleworking Can De-urbanize the World, I tackled a simplified management overview of  the implications of JUST cloud computing and teleworking on American society (which can be scaled to a global appraisal as well).  My last article here on Enterprise Features, “Farewell Bricks and Mortar,” addressed the implications of mass (or even just materially increased) telecommuting/telework.

This server announcement re-dimensions the scope of that impact. And with that re-dimensioning comes an increased concern for many things and likely number 1 on that list is security, and number 2 will likely be technology-proficient skill sets for that (or each) generation of the servers as they come to market.

But a solid #3 is the need for companies to assess just what this does to THEM … how they will operate, what they will do with this new capability, how best to apply it … This technological baseline shift is a reason for firms to become even better at their on-going self-appraisal and dynamic planning.

The new servers reflect pressures on prices, and certainly the need to tune or enhance typical server performance.   The new capabilities need to be understood and effectively adopted by organizations. I can easily see vector and scalar computing plug-in modules; graphics; GIS; … in fact, a single cabinet could house up to hundreds of modules with a nice mix for many types of users in vertical areas. It opens up massively parallel computing and federated assets. It does many things.

But it points to something else.

Mr. Donatelli indicated that “It’s the Web changing the way things get done.” Other firms will be launching their own versions of that type of server. With the plug-in module approach they are using, the server platform will remain static but the modules can be changed. The mental analogy is replacing a PC chip with a new one with ease, almost as if you were plugging in a USB device.

Eric Schmidt’s observation on the web ought to be stenciled onto every cabinet enclosure. (I was going to say engraved, but stenciling is cheaper.)   And I’m starting to think that Schmidt, like Murphy, was an optimist. I have to wonder when that type of dynamic software definability will hit PCs, laptops, tablets, smartphones … and even our TVs and other devices. We get firmware/software  updates all the time. This is the next step, more or less.

I’m not sure if the dog is biting the man or the man is biting the dog in this case.  But I do have a paw print on my chest and I am none too sure I am happy about that.  The paw print is the size of a stegosaurus’ footprint.

I asked a handful of colleagues what they would do with this. Some had no immediate response, some said they needed to assess the impact and sort out new planning models.    Several others just grunted.  One often-beleaguered soul smiled and asked me when someone was going to announce SkyNet and he could retire and not have to worry about it. (Really.) Let the machine run the machine.

And the Cloud marches on. But now, it  seems to have its own cadence.

Network Attached Storage for Startups

Network attached storage (NAS) is becoming more and more common in small businesses that still require an effective file server. There are a number of options currently available, and companies could opt for cloud solutions or even direct attached storage solutions. However, due to the dedicated storage functionality and specific data management tools inherent in most modern NAS appliances, as well as the comparatively lower price, many startups are migrating toward network storage.

Understanding NAS Systems

Modern businesses need a reliable and effective solution to securely store data while still keeping it immediately accessible to those who need it. In the simplest terms, these systems are data storage appliances that are connected to multiple computers over a network. These systems are not directly attached to any one specific computer, but instead have their own IP address and function independently.

The hardware for these systems is also relatively simple to understand, and the comparatively compact appliances have a small footprint and can stay out of the way. The most common models have multiple hard drives for redundant storage and added security, and companies do not have to start with each bay filled. Smaller companies can start with just a couple of HDDs and then add more as the business grows.
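As a rough guide to how redundancy affects the capacity you actually get as bays are filled, here is a back-of-envelope Python sketch using the standard mirroring and single-parity formulas; the drive sizes are illustrative assumptions:

```python
# Approximate usable capacity for a NAS as drive bays fill up, under a few
# common redundancy schemes. Textbook formulas; real appliances vary.

def usable_tb(drives, drive_tb, scheme):
    """Approximate usable capacity in TB for a given redundancy scheme."""
    if scheme == "mirror":          # RAID 1 style: half the raw capacity
        return (drives // 2) * drive_tb
    if scheme == "single-parity":   # RAID 5 style: one drive lost to parity
        return (drives - 1) * drive_tb if drives >= 3 else 0
    return drives * drive_tb        # no redundancy

print(usable_tb(2, 4, "mirror"))          # 4  (two 4 TB drives, mirrored)
print(usable_tb(4, 4, "single-parity"))   # 12 (four 4 TB drives, one parity)
```

The point for a startup is simply that redundancy costs capacity, so "two drives" does not mean "twice the space".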

These are intended to be plug-and-play systems, and generally just need to be plugged into an open port on a router. Keep in mind, though, that most of these solutions don't recommend a wireless connection. When reliability and uptime are so important, it isn't worth risking a slower and less-consistent connection.

Startup Benefits

Network attached storage allows companies to centralize storage solutions and provide simple management tools. As more startups begin using them, though, they are discovering many other uses. A business could, for example, use them to share printers, stream media, and connect them to surveillance systems that support IP cameras.

Multiple and swappable hard drives make this a more affordable option for new startups because it is easier to fit the device into the budget and let it expand as the company grows. Also, backups can be made on a specific hard drive and then swapped out and stored somewhere else for added security. Whether you are creating entire system backups or just working with a lot of sensitive customer data, these features can be very important.

Matching the System to the Company

Despite its plug-and-play capabilities, one size does not fit all when it comes to NAS systems. Startups generally have a very limited budget, and that means these purchases need to be made carefully to ensure that the company gets everything it needs without overspending. So when you are ready to implement your own storage solutions, consider the following elements:

Access – Who can see what and when? Centralizing your data storage offers a lot of convenience for the entire company, but that doesn’t mean you want the entire company to see everything stored there. Some models only offer basic controls, allowing the manager to mark some data as read only, while more advanced systems will have tools that allow companies to set specific permissions for different individuals.

Connectivity – How will staff members connect with the device? How many connections can it support at once? Will it allow remote access? A NAS system will usually have multiple Ethernet ports that can be used simultaneously (for redundancy in the connection and better uptime), and remote access makes it possible for employees in other locations to access the data they need.

Capacity – The flexibility in storage space means there's no reason to over-purchase NAS capacity. While it may be recommended to get as much storage as the startup budget allows, there's no reason to stretch those dollars too far. As long as there are open internal drive bays, you will be able to expand later, or even swap out smaller HDDs for larger models.
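The per-user permissions described under "Access" above can be pictured with a minimal sketch like the following; the shares, users, and permission levels here are hypothetical, not any vendor's actual configuration format:

```python
# A minimal sketch of per-user share permissions on a NAS. More advanced
# appliances expose exactly this kind of mapping through their admin UI.
# All share and user names below are made up for illustration.

PERMISSIONS = {
    "finance":   {"alice": "read-write", "bob": "read-only"},
    "marketing": {"bob": "read-write"},
}

def can_write(user, share):
    """True only if the user has read-write access to the share."""
    return PERMISSIONS.get(share, {}).get(user) == "read-write"

print(can_write("alice", "finance"))   # True
print(can_write("bob", "finance"))     # False: read-only
print(can_write("carol", "finance"))   # False: no entry at all
```

Basic models may offer only the read-only flag; the value of the more advanced systems is precisely this per-individual, per-share granularity.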

What type of storage system are you currently using in your organization? Would you consider making the switch to NAS? Let us know in the comments.

About The Author: Paul Mansour is enthusiastic about start-ups along with consumer and small business technology. Working within, he needs to stay up-to-date on the latest products and solutions and best-in-class ecommerce strategies. In his spare time he can’t resist taking apart his latest gadget and forgetting how to put it back together.

A Data Centre Worthy of Any Metropolis

The internet is the current driving force behind industrialisation around the world. The internet is very important for Africa, as more governments and corporate entities are looking to invest in internet-enabled technology to drive their operations. One important issue in this area is the establishment of dependable facilities to satisfy the increasing demand for products and services. At the moment, IBM – in conjunction with various other corporate entities – has established its line of products through the implementation of a data centre in the heart of Lagos, Nigeria.

Some of the Typical Data Centre Beneficiaries

The banking sector is probably one of the major beneficiaries of this project. The Bankers Warehouse facility runs on IBM smart modular technology and is likely to go live by 2014. For IBM, Africa is the next frontier after China and India, which already host similar projects that are generating substantial revenue; it is no wonder, then, that India is among the leading destinations for off-shore business solutions. With an established data centre in Lagos, financial institutions, telecommunications companies, and energy sector players are likely to benefit immensely from efficient and improved service provision.

The Main Factors to a Successful Implementation of the Data Centre

The plans and infrastructure for a facility like the Bankers Warehouse data centre in Lagos required financing. Trico Capital International, through its chief executive officer Austine Ometoruwa, was helpful in structuring and funding the project while they worked closely with Chiltern Group as advisory partners. It is expected that this facility would have an uptime rating of no less than three gold stars.

What IBM’s and Google’s Focus on Africa Means

It is not only IBM that is looking to Africa. Google has also expressed interest in establishing serious internet technology facilities on the continent. This has been spurred by the availability of an affordable labour force with impressive, innovative skills.

With funding forthcoming through initiatives by CEOs like Austine Ometoruwa, the technological future of Africa could not be brighter. Although Africa may not be capable of single-handedly establishing such data centres, financiers are available and willing to outsource such services in order to improve business efficiency.

Some of the business elements that can be positively affected by the introduction of such data centres include:

Sales and marketing: whereby the current and future needs of their customers can be identified and factored into a business’s strategic plan. Technological innovations also suggest that better service provision can be attained, especially when a company outsources certain processes to professional service providers like with IBM and the Bankers Warehouse data centre in Lagos, Nigeria.

An efficient customer support system can be a direct outcome of the outsourcing process. At the moment, some of the big technology firms have off-shore data centres that provide customer support. With this kind of approach, these firms are making substantial savings in terms of labour costs.

Ultimately, establishing a data centre worthy of any metropolis requires planning and financing. Getting all the stakeholders on board for such a project is not an easy task and may require advisory services. It is most probable that when an operator has limited technological capacity to implement such a facility, outsourcing is a more viable option to consider.

Could Cloud-Based Virtual Desktop Infrastructure Die Before It Ever Takes Off?

Virtual Desktop Infrastructure (VDI) is one of the hottest trends in the enterprise computing space.

By replacing physical machines with virtual desktops, you greatly reduce maintenance costs and security risks, while also extending the useful life of existing hardware and enabling employees to access their workplace desktop from home or from any device. And in the event that a laptop is stolen, a Virtual Desktop will help ensure that no critical data is lost or leaked. And finally, a Virtualized Desktop Infrastructure can help lower the increasingly costly electric bills associated with running computers.

Unfortunately, VDI is only accessible to larger companies with large IT budgets. The costs associated with network infrastructure, server hardware, and licensing are still too high for budget-sensitive businesses. But these costs will surely drop in the future.

Of course, we all know what the solution is. The market is currently waiting for the introduction of cost-effective cloud-based Virtual Desktop Infrastructure services to emerge. Although a few managed service providers have begun offering VDI, the costs can still be significant due to the same infrastructure and licensing challenges that made it expensive to implement in-house.

However, another important trend has emerged within the past few years which may effectively cut down cost-effective cloud VDI before it ever gets a chance to take off.

Mobile computing and tablets have taken off in popularity in recent years. And – on a lesser note – so has the popularity of Linux. It’s expected that 5% of all PCs sold will ship with Linux within the next year, and this doesn’t include the millions of Linux-based consumer electronics sold every year. So now, we’re seeing the emergence of mobile workers wanting to access their data on Android, Windows, Mac, Linux, and other operating systems.

As a result, demand for platform independence has emerged, as consumers and workers are increasingly becoming accustomed to web-based interfaces which can access and manipulate their information from any location.

  • Email
  • Calendars
  • Chat
  • Project Management
  • Databases and Software Development
  • Spreadsheets
  • Word Processing
  • Photo Editing
  • Invoicing
  • Presentation
  • Video Editing

All of these are examples of applications which used to run as installed desktop applications, but now also offer an in-browser web-based option.

Although the quality of these applications can’t yet match the performance and functionality of the established software leaders, developers are hard at work in a race to catch up. And as Internet bandwidth becomes more accessible and RAM continues to become cheaper, we’ll begin to see web-based apps overtake installed software.

In addition to this, these cloud-based applications will benefit from added functionality which can only exist in a cloud-based delivery model.

And I’m not the only one who thinks this. Microsoft is currently moving in this direction with Office 365. And in the future, most custom business applications will be designed with web-based interfaces in mind.

So this brings up an interesting question. Once employees can access all proprietary systems, collaboration and communication systems, and productivity applications through a web-based interface, what else would you need a Virtual Desktop for? It really doesn’t leave much else.

IaaS, PaaS and SaaS. What’s the difference and why are they important?

These days, it’s almost impossible to read any kind of technology publication without coming across the term “cloud computing” multiple times. Often, these cloud discussions come mixed up with strange acronyms such as IaaS, SaaS, PaaS and others.

My aim in this piece is to clarify some of the leading “aaS” acronyms you may come across.

SaaS – Software As A Service

The most common form of cloud computing would be Software-as-a-Service, or SaaS. As the name would imply, Software-as-a-Service is (usually) software that doesn’t need to be installed on your local machine. Instead, you access a remote system which hosts the software for you.

In essence, any interactive web site could technically be considered a SaaS application.  When you access Gmail, you don’t get access to the Gmail source code and you don’t need to install Gmail on your computer. You simply access a web site, log in, and start using the application.

Video players – which formerly had to be installed – were replaced by YouTube’s SaaS video player.

Although “ideally” SaaS should never require local software installations, you may sometimes need to install a “client” which serves as an interface to the remote server which performs most of the work. For example, you may need to install a small application to access a Virtual Desktop Infrastructure which resides on a remote server. In this context, even your web browser could be considered a client application which must be installed to access your SaaS apps.

The important thing to keep in mind with SaaS is that the critical functions of the service are performed on the SaaS company’s remote servers, not on your local machine.

Today, there is a growing preference amongst developers to release business productivity applications through browser-based interfaces. This ensures that systems can be accessed through any OS, whether Windows, Linux, Mac, Smartphone or Tablet.

PaaS – Platform As A Service

Platform-as-a-Service is a slightly more complicated embodiment of cloud computing which is more suited to developers. In essence, PaaS providers host development environments featuring a number of libraries and tools which can be used to develop and deploy custom cloud-based applications. PaaS simplifies development by eliminating the need to host and manage the underlying infrastructure which supports custom applications.

IaaS – Infrastructure As A Service

Infrastructure-as-a-Service is one of the fastest-growing flavors of cloud computing within the business space. It is also based on a very simple concept, which has wide-ranging implications and benefits for IT administrators.

Most servers today are hosted in “virtualized” environments. In other words, a single server may contain dozens of software-based “virtual machines” which trick the operating system into believing that it resides on physical hardware. Virtualization offers a number of core advantages in terms of efficiency, cost-saving, disaster recovery and deployment. (But that’s a discussion for another article.)
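To make the efficiency point concrete, here is a back-of-envelope Python sketch of how many virtual machines a single physical host might carry; the resource figures and overcommit ratio are illustrative assumptions, not vendor guidance:

```python
# A rough consolidation estimate: VM density on one physical host, limited
# by whichever resource runs out first. CPU is commonly overcommitted
# because most VMs idle; RAM usually is not. All figures are illustrative.

def vms_per_host(host_cores, host_ram_gb, vm_cores, vm_ram_gb,
                 cpu_overcommit=4):
    """Approximate VMs per host, bounded by RAM and overcommitted CPU."""
    by_cpu = (host_cores * cpu_overcommit) // vm_cores
    by_ram = host_ram_gb // vm_ram_gb
    return min(by_cpu, by_ram)

# A 32-core, 256 GB host running 2-vCPU, 8 GB VMs:
print(vms_per_host(32, 256, 2, 8))  # 32 (RAM-bound in this example)
```

Numbers like this are why a rack of virtualized hosts can replace rooms full of lightly-loaded physical servers.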

With the IaaS model, you simply host a virtual server in a remote third-party datacenter in the same way that you would host a virtual machine on your own hypervisor inside of your datacenter.

IaaS is attractive to small businesses because building your own server room can be expensive, and these IaaS facilities have lots of features which would simply be cost-prohibitive for smaller companies.

Larger companies also like IaaS because it allows them to obtain temporary on-demand capacity in the event of a sudden short-term spike in systems requirements. This is a much more cost-effective option than simply purchasing new hardware to fill a temporary need.

And there we have the 3 main flavors of cloud computing: IaaS, SaaS and PaaS. You’ll see other versions of this acronym, such as BaaS, CaaS, DaaS, FaaS, etc., but these are usually more of a marketing ploy than anything else. If you break it all down to its essential elements, most “aaS” acronyms can be placed into one of these 3 broad categories.

How to Leverage WAN Optimization for High-Performance Cloud Services

Driven by agility and convenience benefits, companies around the world and across all market sectors are in some phase of moving their business applications to the cloud. Many of those that have taken the plunge have experienced significant cost savings and productivity gains; however, some have also experienced the performance and availability issues inherent in many public cloud offerings. As a result of performance concerns, organizations have delayed migrating business-critical functions to the cloud in favor of testing the waters with applications and services that are not essential to the organization.

As the public cloud IaaS market continues to evolve, providers are beginning to recognize the need to increase performance levels by adding enhanced computing capabilities to their offerings. One area that these providers have turned their attention to in particular is the wide area network (WAN). The WAN is the foundation of any globally connected cloud service offering, enabling cloud providers to speed up data transfer between locations. With WAN optimization embedded as part of the cloud provider’s infrastructure, data transfers run faster and more efficiently, delivering consistent service levels to all cloud service consumers. This helps cloud users overcome the latency and bandwidth constraints often associated with public cloud services.

While some cloud providers allow organizations to install their own virtual WAN optimization client on a virtual server to optimize traffic and performance, it is more efficient to place WAN optimization between cloud data centers to transparently accelerate traffic for enterprise users. By having WAN optimization integrated into a cloud platform on a global scale, the speed of transfer between locations is optimized, resulting in efficient throughput. Additionally, by using compression, caching and optimizing network protocols (TCP, UDP, CIFS, HTTP, HTTPS, NFS, MAPI, MS-SQL) and databases (Oracle 11i and 12i), organizations can experience a more dramatic improvement in performance.
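Compression alone can make a dramatic difference on redundant enterprise traffic. As a small illustration using Python's standard zlib library rather than any particular WAN optimizer:

```python
import zlib

# Illustrating the compression lever mentioned above: highly redundant
# data (as in replication or backup traffic) costs far fewer bytes on the
# wire when compressed, one of several techniques WAN optimizers apply
# transparently alongside caching and protocol optimization.

payload = b"record,value\n" * 2000          # 26,000 bytes of repetitive data
compressed = zlib.compress(payload, level=6)

ratio = len(payload) / len(compressed)
print(f"{len(payload)} bytes -> {len(compressed)} bytes "
      f"({ratio:.0f}x smaller)")

# Lossless: the original payload is fully recoverable at the far end.
assert zlib.decompress(compressed) == payload
```

Real-world gains depend heavily on how compressible the traffic is; already-compressed or encrypted data sees far less benefit.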

With a core that is optimized for acceleration, organizations can speed the process of migrating their data and applications to the cloud. This performance enhancement is particularly critical for resource-intensive enterprise functions such as database replication, file synchronization, backup and disaster recovery between data centers. When evaluating cloud services today, it is important to ensure that WAN optimization capabilities are offered as part of the core service and not provided as an add-on service with an additional charge.

Although WAN optimization is a key part of the equation, another key component in ensuring the performance of any cloud offering is the service-level agreement (SLA). A strong SLA should include provisions around uptime, response time and latency guarantees. When it comes to uptime, the network and servers should be up and running more than 99% of the time. Response times for any emergency incident should be less than 30 minutes, and latency should be less than one millisecond (1ms) for the transfer of data packets from one cloud server to another within the same cloud network. Organizations negotiating cloud contracts should make sure strong SLAs are part of the service.
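To put the uptime figure in perspective, a quick back-of-the-envelope calculation shows how much annual downtime each SLA level actually permits (a sketch in Python; the percentages are common illustrative SLA tiers, not quotes from any particular provider):

```python
# Annual downtime budget implied by a given uptime percentage.
HOURS_PER_YEAR = 365 * 24  # 8760

for uptime_pct in (99.0, 99.9, 99.99):
    downtime_hours = HOURS_PER_YEAR * (1 - uptime_pct / 100)
    print(f"{uptime_pct}% uptime -> {downtime_hours:.1f} hours of downtime/year")
```

Even a 99% SLA permits more than three and a half days of outage per year, which is one reason many enterprises push for 99.9% or better.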

As companies begin to tap into cloud offerings that seamlessly integrate WAN optimization, they should immediately realize significant increases in application performance while at the same time streamlining operations, improving performance visibility, reducing the overall cost of IT infrastructure – and most importantly, delivering on the business and technology promises of the cloud.

About The Author: Yogesh Rami is the senior director of the cloud solutions business unit with Dimension Data.

Having the Enterprising Mindset

The mind is a powerful aspect of our body. It is the key ingredient in establishing a successful business and thriving career. It is also the secret factor in creating an attractive public brand that automatically attracts opportunities and clients.

The enterprising mindset refers to the ability to focus completely on the opportunity and possibility in every current situation, and to take action as quickly as possible. The success of productive professionals and industry moguls alike is often fuelled by this mindset.

What Enterprising Mindset Does

Having the enterprising mindset allows an individual to rebound fully from failures by drawing on the lessons learned and applying them to new projects and programs. It is the outlook needed to continuously and consistently propel the action that produces success.

Without the right enterprising mindset, energy and time are wasted lamenting what has not happened. It is therefore important to see different possibilities and keep your mind open to opportunities. Focus matters as well: setting goals and concentrating on accomplishing them goes a long way.


Personal Commitment

Along with an enterprising mindset, it is very important to have a commitment to discipline. Some businesspeople create a schedule for themselves and stick strictly to it. There should also be good surroundings that reflect positive energy. Success is often restricted or propelled by the people around you, so it is important to surround yourself with positive people and experiences.

You also need to be relaxed, not dwelling on the pressure surrounding you. Have time for yourself and do the things you enjoy, whatever hobbies you may want to engage in. Study the lives of people who could become your role models, and identify those who have reached success in your particular area of interest.

It is also very important to take decisive action, since success often results from good action. A strong and solid foundation of faith helps as well; many times, faith and confidence are what make a healthy enterprising life possible. In the end, this will become a very rewarding endeavor.


Emerging Tech: External Solid State Hard Drives

Understanding External Hard Drive Components

For quite some time, people with PCs have had the option of using an external hard drive to enhance their computer’s storage capabilities and to provide a backup for their important data. Of course, technology continues to advance, and a new kind of hard drive has made its way onto the scene, gaining in mainstream usage. Solid state drives (SSDs), available for both internal and external needs, have been around for a while, but only recently have they begun to gain more popularity. When contemplating what kind of external drive is right for your needs, it can be helpful to understand some of the main differences between SSDs and traditional drives.

Mechanical Differences

Though they do share common features, SSDs have some fundamental functional differences from traditional hard disk drives. Both pieces of tech are, of course, data storing devices that can hold data even when the device is not plugged in. However, whereas hard disk drives use electromagnetic spinning disks and movable parts, SSDs do not. This lack of moving parts makes an external hard drive with solid state tech less susceptible to physical trauma and mechanical failure and accounts for several other contrasts.

Speed Divergences

In general, SSDs work faster than traditional drives, so if you need your external hard drive to be capable of high speeds, then a solid state external drive may be right for you. Let’s break it down: SSDs tend to start up quicker (within a few milliseconds) than traditional drives because they do not have mechanical parts, which usually require a few seconds to spin up. SSDs also get you your information faster because random access time is lower and data transfer happens at speeds on par with or greater than those of traditional drives.
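If you want to see the effect of random versus sequential access on your own drive, a rough comparison is easy to sketch in Python. This is a minimal illustration, not a rigorous benchmark: on a warm OS page cache both patterns will look fast, so the gap only really shows up when reading cold data from a mechanical disk.

```python
import os
import random
import tempfile
import time

BLOCK = 4096   # read in 4 KiB blocks
BLOCKS = 4096  # 16 MiB scratch file

# Create a scratch file to read back.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(BLOCK) * BLOCKS)
    path = f.name

def timed_read(offsets):
    """Read one block at each offset, returning elapsed seconds."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        for off in offsets:
            f.seek(off)
            f.read(BLOCK)
    return time.perf_counter() - start

sequential = [i * BLOCK for i in range(BLOCKS)]
randomized = random.sample(sequential, len(sequential))

print(f"sequential: {timed_read(sequential):.3f}s")
print(f"random:     {timed_read(randomized):.3f}s")
os.unlink(path)
```

On a spinning disk, the randomized pattern forces the head to seek constantly, while an SSD services both patterns at nearly the same speed, which is the core of the advantage described above.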

Noise Distribution

Those moving parts in hard disk drives produce some noise while they are moving, as can the fans used to cool these devices. By contrast, a solid state drive’s lack of moving parts makes this device virtually silent. Incidentally, no moving parts also mean that SSDs create less heat and can withstand higher temperatures than traditional drives, even without fans to cool them off.

Fragmentation Discrepancies

If your external hard drive is a solid state device, then fragmentation becomes a negligible issue. Fragmentation is the process by which a storage device becomes less able to lay out data in sequence, due in large part to files being rewritten. SSDs suffer very little from fragmentation and, unlike hard disk drives, don’t need to be regularly defragmented to keep performance levels up, meaning performance stays consistent without extra maintenance.


Differences between mechanical parts, speeds, noise, and fragmentation are just a few of the disparities between an external hard drive with solid state technology and a traditional hard disk drive. As you learn more about these devices, you will see that there are others, including a price jump. As you might conclude from the points above, SSDs tend to be much more durable and reliable than hard disk drives, but their higher cost might not be justified if you simply need to increase your computer’s storage space. Unless you absolutely cannot afford to lose the data you plan to store, carefully consider the cost-to-benefit ratio when deciding whether or not solid state drives are right for you.

About The Author: Jared Jacobs has professional and personal interests in technology. As an employee of Dell, he has to stay up to date on the latest innovations in large enterprise solutions and consumer electronics buying trends. Personally, he loves making additions to his media rooms and experimenting with surround sound equipment. He’s also a big Rockets and Texans fan.

Start Your Cloud Journey with a Cloud Readiness Assessment

As a CIO, you’re probably in the early throes of determining how best to move to the cloud.  You may have taken some preliminary steps such as using some SaaS offerings, done a small proof of concept, maybe even moved a function like test/dev to the cloud – but now you’re feeling the heat to embrace the cloud in a more substantial way.  Before taking the plunge, there are a myriad of activities that need to be considered and steps that need to be taken to ensure that any move you make will ultimately be the right one.  Take heart, as you are in good company with what you are facing.

Get Off on the Right Foot

Step one in any cloud journey should be a Cloud Readiness Assessment (CRA) where you do a frank evaluation of your actual level of cloud readiness (basic, moderate, advanced or optimized) compared to your desired level, determine a timeframe to achieve this, and the steps you need to take to get from A to B.  Without undertaking an initiative such as this, you are basically adopting a “hit or miss” approach as opposed to implementing a well thought out and actionable strategy.

Today, most companies are in the basic to moderate level of readiness, and most have a goal of moving to an advanced level of readiness over time.  In addition, most companies will admit they are not prepared when it comes to how to move to the cloud and could use help moving forward.

When undertaking a CRA, it is important to ensure that key business owners are included in the process.  While IT should lead the process, key personnel from the business side of the shop need to be involved to truly “legitimize” the process.

Your Current Level of Readiness

There are two key activities that should be carried out as part of a CRA.  The first is identifying your current level of cloud readiness. This is accomplished by evaluating your organization from four perspectives. Two are focused more on the business side of the shop – business alignment and organizational change, and two are associated with IT – infrastructure readiness and applications and workloads. Within each of these categories, there are five to six attributes that need to be assessed, rated against standard definitions of readiness, and then totaled to come up with a score for each category.  As an example, within business alignment, key attributes include strategy, finance, sourcing, compliance, metrics, and culture, while within infrastructure readiness, key attributes include network, virtualization, security, storage, backup/DR, efficiency/agility, and communications.

It is important to note that I use the term “standard” above rather loosely as there is no industry standard that has been established for cloud readiness, just fairly consistent definitions that have been put forward by companies that provide tools and/or consulting services for determining cloud readiness.
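As a rough illustration of how such a scoring exercise might be tallied, here is a hypothetical sketch in Python. The category and attribute names come from the examples above, but the 1–4 rating scale (mapping to basic, moderate, advanced and optimized) and the sample ratings are invented for illustration only:

```python
# Hypothetical CRA scoring: each attribute is rated 1 (basic) to 4
# (optimized) and the ratings are averaged per category.
ratings = {
    "business alignment": {
        "strategy": 2, "finance": 1, "sourcing": 2,
        "compliance": 3, "metrics": 1, "culture": 2,
    },
    "infrastructure readiness": {
        "network": 3, "virtualization": 3, "security": 2,
        "storage": 2, "backup/DR": 1, "efficiency/agility": 2,
        "communications": 2,
    },
}

LEVELS = {1: "basic", 2: "moderate", 3: "advanced", 4: "optimized"}

for category, attrs in ratings.items():
    score = sum(attrs.values()) / len(attrs)
    print(f"{category}: {score:.1f} ({LEVELS[round(score)]})")
```

A sketch like this makes the gap analysis concrete: repeating the exercise with target ratings for the future state yields a per-attribute delta that can be prioritized into a roadmap.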

Your Future Level of Readiness

Once you have determined your current state, you are ready to go through the evaluation process again with emphasis on the future state – where you want to be in the short and longer term.  As was done when determining current state, you will use the “standard” definitions of cloud readiness to determine future state.  Once this is done, you will be in a position to build a preliminary roadmap focused on closing the delta from where you are today to where you want to be tomorrow. Activities and deliverables will need to be prioritized and broken down into monthly increments.

Moving to the cloud can seem like a daunting task, especially since many organizations don’t have the knowledge or resources to figure out where to start and how to ensure that all of the necessary business and technology related factors are taken into consideration. However, with a solid CRA, an enterprise’s migration to the cloud can be smoother and faster and deliver the expected benefits.

About The Author: Geoff Sinn is the principal cloud consultant for Dimension Data Americas.


The Taboo Danger of Cloud Computing

Cloud computing has been a godsend for the IT industry. Delivering applications in a hosted model does away with maintenance, hardware and licensing costs… and makes it much faster and easier to launch and provision new services.

Cloud-based applications also provide rich access to collaboration features and API data feeds in a way that would’ve been impossible with stand-alone installed software. And in the case of browser-based SaaS software, applications can be accessed from anywhere – including tablets, smartphones, desktops and laptops – meaning that employees are mobile and empowered.

But there is one major risk that comes with cloud-based applications… and it’s particularly applicable to Software-as-a-Service applications such as Gmail, Salesforce and others.

One of the most promoted benefits of SaaS applications is the ease of purchase and deployment. Anyone with a credit card can set up and launch their own CRM, ERP, email server, accounting system, collaboration suite, etc…

But I would argue that – rather than being a benefit – this is actually a very serious flaw which could have disastrous consequences for many companies.

With traditional IT deployment models, there was centralized control over every aspect of information management within the company. It would be impossible to create a new server without the involvement of IT.

But the reality today is very different. Any marketing manager with a credit card can set up a Salesforce account and implement a web server with ecommerce capabilities. And they can do it without the knowledge of IT or any other key people who should normally be involved in the process. This is simply a recipe for disaster.

  • What happens if the ecommerce system, although completely secure in itself, is implemented in an insecure and non-compliant way by this marketing manager? This could lead to a privacy breach that puts the company at risk of a major lawsuit.
  • And what happens if this marketing manager needs to be fired? They’ve spent years compiling valuable, critical and irreplaceable customer information. Now, they’re the only ones with the passwords. When they leave, they could potentially take the entire company down with them.

Cloud computing is great, and its benefits are substantial. But for Software-as-a-Service to truly provide value, it must be delivered in a secure way.

This means maintaining tight and appropriate controls over all information assets owned by the company. The company needs to put strict policies in place that forbid employees from using cloud services without explicit permission from those in charge of information governance within the organization.

If the IT department can’t have administrative control over a cloud application, then it should not be used. IT should have centralized insight, and the ability to easily close or lock any user accounts on applications being used for work purposes.

Otherwise, you run the risk of having a rogue employee run off with critical business information… and possibly misuse these critical resources. A single serious incident could be all it takes to put your company out of business.
