Archives for: April 2013

Cloud Chivalry – The Self-Rescuing Company

While responsibility for protection in the cloud starts with a trusted provider, companies can’t ignore their role in keeping data safe. Much like the proverbial princess who is smart and savvy enough to rescue herself, IT professionals need to take control of their own cloud environments to maximize security.

Consider the Kingdom

Whether they access software-as-a-service (SaaS) offerings through thin clients or use web-based apps, companies put themselves at risk. While it’s tempting to see this problem as purely technological – as something bigger, faster and stronger security systems will easily mitigate – this ignores one of the simplest (and most pervasive) problems in cloud security: employees.

The denizens of current technology kingdoms are far more tech-savvy than previous generations, able to bypass IT requirements as needed thanks to Wi-Fi and mobile technologies. To protect against unauthorized downloads or nefarious third-party programs, company admins have to start with local access. No matter how strict offsite requirements are for securing data, this good work can be undermined by simple user missteps.

First, admins must identify user needs and tailor access to specific tasks; no individual should ever have access to a server in its entirety. Next, it’s crucial for IT professionals to train employees in safe cloud use. Rather than trying to enforce a “no smartphones” rule, or telling workers they “can’t” when tech questions arise, admins need to develop sensible policies for access and back them up with solid authentication requirements. As a recent Dome9 security article points out, two-factor authentication combined with detailed logs of all access requests makes tracking and eliminating cloud weak spots a much simpler task.
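To make that concrete, here is a minimal Python sketch of the idea: verify a time-based one-time password (TOTP) as the second factor and log every access request. It assumes the third-party pyotp library and a securely stored per-user secret; the function and file names are illustrative, not taken from any particular product.

```python
# Minimal sketch: TOTP second factor plus an audit log of access requests.
# Assumes the pyotp library and a securely stored, per-user base32 secret.
import logging
import pyotp

logging.basicConfig(filename="cloud_access.log",
                    format="%(asctime)s %(message)s",
                    level=logging.INFO)

def verify_access(user: str, resource: str, totp_code: str, shared_secret: str) -> bool:
    """Check the user's one-time code and record the attempt either way."""
    allowed = pyotp.TOTP(shared_secret).verify(totp_code)
    logging.info("user=%s resource=%s granted=%s", user, resource, allowed)
    return allowed

# Hypothetical usage:
# if verify_access("alice", "finance-share", code_from_app, secret_for_alice):
#     ...grant the session...
```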

Deepen the Moat

In addition to considering how users inside a cloud get out, it’s also important to think about how malicious attacks get in. Firewalls, for example, remain critical defensive structures, so long as they are properly implemented. Rather than leaving SSH open to 0.0.0.0/0, essentially giving hackers carte blanche, admins need to open ports on a case-by-case basis. Taking this a step further are intelligent, next-generation firewalls. These solutions are able to scan incoming code and – if an unknown or malicious string is detected – isolate it in a virtual environment. The code is then permitted to run, but without the chance of harming company infrastructure, and the results are recorded for future use.
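As one hedged example of auditing for exactly that kind of exposure, the sketch below uses Python and boto3 to flag AWS security groups that leave SSH (port 22) open to 0.0.0.0/0. It assumes an AWS environment with credentials already configured; other platforms would need their own equivalent check.

```python
# Sketch: flag security groups that expose SSH (port 22) to the whole internet.
# Assumes AWS credentials are configured and the boto3 library is installed.
import boto3

def find_open_ssh_groups():
    ec2 = boto3.client("ec2")
    risky = []
    for group in ec2.describe_security_groups()["SecurityGroups"]:
        for rule in group.get("IpPermissions", []):
            all_traffic = rule.get("IpProtocol") == "-1"
            from_port, to_port = rule.get("FromPort"), rule.get("ToPort")
            covers_ssh = all_traffic or (
                from_port is not None and from_port <= 22 <= to_port)
            open_to_world = any(r.get("CidrIp") == "0.0.0.0/0"
                                for r in rule.get("IpRanges", []))
            if covers_ssh and open_to_world:
                risky.append(group["GroupId"])
    return risky

if __name__ == "__main__":
    for group_id in find_open_ssh_groups():
        print(f"SSH open to the world in security group {group_id}")
```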

Through controlled user access and the proactive use of defensive technology, companies are able to provide their own form of cloud chivalry. In combination with dedicated provider oversight, this creates secure gates for user exit and deep moats for data entry.

About The Author: Doug Bonderud is a freelance writer, cloud proponent, business technology analyst and a contributor on the Dataprise Cloud Services website.

The Impact BYOD Has on Your Backup & Recovery Strategy

For years, mobile devices have increasingly helped employees around the globe access important documents and emails while sitting in a cab, standing in line for coffee or waiting in an airport. Most recently the trend has turned towards Bring Your Own Device (BYOD) for businesses of all sizes. As the name implies, BYOD gives employees the freedom to “bring in” and use their own personal devices for work, connect to the corporate network, and often get reimbursed for service plans. BYOD allows end-users to enjoy increased mobility, improved efficiency and productivity, and greater job satisfaction.

However, BYOD also presents a number of risks, including security breaches and exposed company data, which can require extra money and resources to rectify. What happens when an employee’s mobile device is lost or stolen? Who is responsible for the backup of that device, the employee or the IT department?

According to a recent report by analyst firm Juniper Research, the number of employee-owned smartphones and tablets used in the enterprise is expected to reach 350 million by 2014. These devices will represent 23% of all consumer-owned smartphones and tablets.

BYOD has a direct impact on an organization’s backup and disaster recovery planning. All too often IT departments fail to have a structured plan in place for backing up data on employees’ laptops, smartphones and tablets. Yet it is becoming imperative to take the necessary steps to prevent these mobile devices from becoming security issues and racking up unnecessary costs. Without a strategy in place, organizations are risking the possibility of security breaches and the loss of sensitive company data, spiraling costs for data usage and apps, and compliance issues between the IT department and company staff.

The following best practices can help businesses incorporate BYOD into their disaster recovery strategies:

  1. Take Inventory: According to a 2012 SANS Institute survey, 90% of organizations are not ‘fully aware’ of the devices accessing their network. The first step is to conduct a comprehensive audit of all the mobile devices and their usage to determine what is being used and how. While an audit can seem a daunting task for many organizations, there are mobile device management (MDM) solutions available to help simplify the process (a minimal sketch of the idea follows this list). Another integral part of the inventory is asking employees what programs and applications they are using. This can help IT better determine the value and necessity of the various applications accessing the network.
  2. Establish Policies: Once you have a handle on who has which device and how it is being used, it is important to have policies in place to ensure data protection and security. This is crucial for businesses that must adhere to regulatory compliance mandates. If there is no policy in place, chances are employees are not backing up consistently, or in some cases at all. You may want to determine a backup frequency with employees or deploy a solution that can run backups even when employees’ devices are not connected to the corporate network.
  3. Define the Objective: Whether you have 10 BYOD users or 10,000, you will need to define your recovery objectives in case there is a need to recover one or multiple devices. Understand from each department or employee which data is critical to recover immediately from a device, and find a solution that can be customized based on device and user roles. The ability to exclude superfluous data such as personal email and music from corporate backups can also be helpful.
  4. Implement Security Measures: Data security for mobile devices is imperative. Educating employees can go a long way in helping to change behavior. Reminders on password protection, WiFi security and auto-locking devices when not in use may seem basic, but they can help keep company data secure in a BYOD environment. Consider tracking software for devices, or the ability to remotely lock or delete data if a device is lost or stolen.
  5. Employee Adoption: The last best practice for a successful BYOD deployment that protects mobile devices across your organization is to monitor employee adoption. In a perfect world, all employees will follow the procedures and policies established. However, if you are concerned about employees not following policies, you may want to consider leveraging silent applications that can be placed on devices for automatic updates. These can run on devices without disrupting device performance.
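As referenced in step 1, here is a minimal, hypothetical Python sketch of the inventory idea: compare the devices seen on the network against a registered-device list and flag the unknowns. The CSV file names and columns are assumptions for illustration, not the format of any specific MDM product.

```python
# Sketch: flag devices seen on the network that are missing from the inventory.
# The CSV layouts are illustrative; a real MDM or network tool would supply this data.
import csv

def load_macs(path, column="mac_address"):
    with open(path, newline="") as handle:
        return {row[column].strip().lower() for row in csv.DictReader(handle)}

def unregistered_devices(seen_csv="devices_seen.csv",
                         registered_csv="devices_registered.csv"):
    return sorted(load_macs(seen_csv) - load_macs(registered_csv))

if __name__ == "__main__":
    for mac in unregistered_devices():
        print(f"Unregistered device on the network: {mac}")
```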

About The Author: Jennifer Walzer is CEO of Backup My Info! (BUMI), a New York City-based provider which specializes in delivering online backup and recovery solutions for small businesses.

6 Important Stages in the Data Processing Cycle

Much of data management is essentially about extracting useful information from data. To do this, data must go through a mining and processing workflow that draws meaning out of it. There is a wide range of approaches, tools and techniques for doing this, so it is important to start with the most basic understanding of data processing.

What is Data Processing?

Data processing is simply the conversion of raw data to meaningful information through a process. Data is manipulated to produce results that lead to a resolution of a problem or improvement of an existing situation. Similar to a production process, it follows a cycle where inputs (raw data) are fed to a process (computer systems, software, etc.) to produce output (information and insights).

Generally, organizations employ computer systems to carry out a series of operations on the data in order to present, interpret, or obtain information. The process includes activities like data entry, summary, calculation, storage, etc. Useful and informative output is presented in various appropriate forms such as diagrams, reports, graphics, etc.
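To make the input-process-output idea concrete, here is a small illustrative Python sketch of the cycle applied to a toy sales dataset. The stage names mirror the sections below, and the data and file name are invented for the example.

```python
# Toy walk-through of the cycle: collect -> prepare -> process -> output -> store.
import json

def collect():
    # Raw input; in practice this would come from forms, sensors, logs, etc.
    return [{"region": "north", "sales": "120"},
            {"region": "south", "sales": ""},      # missing value
            {"region": "north", "sales": "80"}]

def prepare(raw):
    # Screen out unusable records and convert types.
    return [{"region": r["region"], "sales": int(r["sales"])}
            for r in raw if r["sales"].strip()]

def process(rows):
    # Summarise sales per region.
    totals = {}
    for row in rows:
        totals[row["region"]] = totals.get(row["region"], 0) + row["sales"]
    return totals

def output(totals):
    for region, value in sorted(totals.items()):
        print(f"{region}: {value}")

def store(totals, path="totals.json"):
    with open(path, "w") as handle:
        json.dump(totals, handle)

if __name__ == "__main__":
    totals = process(prepare(collect()))
    output(totals)
    store(totals)
```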

Stages of the Data Processing Cycle

1) Collection is the first stage of the cycle, and it is crucial, since the quality of the data collected heavily impacts the output. The collection process needs to ensure that the data gathered are both well-defined and accurate, so that subsequent decisions based on the findings are valid. This stage provides both the baseline from which to measure and a target for what to improve.

Some types of data collection include census (data collection about everything in a group or statistical population), sample survey (collection method that includes only part of the total population), and administrative by-product (data collection is a byproduct of an organization’s day-to-day operations).

2) Preparation is the manipulation of data into a form suitable for further analysis and processing. Raw data cannot be processed as-is and must first be checked for accuracy. Preparation is about constructing a dataset from one or more data sources for further exploration and processing. Analyzing data that has not been carefully screened for problems can produce highly misleading results; the outcome is heavily dependent on the quality of the data prepared.
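As a hedged example of what that screening can look like in practice, assuming a tabular CSV source and the pandas library (the file and column names are invented for illustration):

```python
# Sketch: basic screening of a raw CSV before analysis.
# Assumes pandas is installed; "orders.csv" and its columns are illustrative.
import pandas as pd

def screen_orders(path="orders.csv"):
    df = pd.read_csv(path)
    df = df.drop_duplicates()                      # remove repeated records
    df = df.dropna(subset=["order_id", "amount"])  # drop rows missing key fields
    df = df[df["amount"] > 0]                      # discard implausible values
    return df
```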

3) Input is the task where verified data is coded or converted into machine-readable form so that it can be processed by a computer. Data entry is done through a keyboard, digitizer, scanner, or import from an existing source. This time-consuming process requires speed and accuracy. Most data need to follow a formal and strict syntax, since a great deal of processing power is required to break down complex data at this stage. Because of the costs, many businesses outsource this stage.

4) Processing is when the data is subjected to various means and methods of manipulation – the point at which a computer program actually executes against the prepared data. Depending on the operating system, that work may be spread across multiple processes or threads that execute instructions simultaneously; while a program is a passive collection of instructions, a process is the actual execution of those instructions. Many software programs are available for processing large volumes of data within very short periods.
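As a small, hedged illustration of spreading that work across worker processes, the sketch below uses Python's standard concurrent.futures module to summarise chunks of records in parallel; the chunk size and the summary function are invented for the example.

```python
# Sketch: process chunks of prepared records in parallel worker processes.
from concurrent.futures import ProcessPoolExecutor

def summarise(chunk):
    # Example computation: total the "amount" field of each record in the chunk.
    return sum(record["amount"] for record in chunk)

def parallel_total(records, chunk_size=1000):
    chunks = [records[i:i + chunk_size]
              for i in range(0, len(records), chunk_size)]
    with ProcessPoolExecutor() as pool:
        return sum(pool.map(summarise, chunks))

if __name__ == "__main__":
    data = [{"amount": n} for n in range(10_000)]
    print(parallel_total(data))  # 49995000
```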

5) Output and interpretation is the stage where processed information is transmitted to the user. Output is presented in various formats such as printed reports, audio, video, or on-screen displays. Output needs to be interpreted so that it can provide meaningful information to guide the company’s future decisions.

6) Storage is the last stage in the data processing cycle, where data, instructions and information are held for future use. The importance of this stage is that it allows quick access and retrieval of the processed information, which can be passed directly to the next stage when needed. Every computer uses storage to hold system and application software.

The Data Processing Cycle is a series of steps carried out to extract information from raw data. Although each step must be taken in order, the sequence is cyclic: the output and storage stage can lead back to data collection, resulting in another round of processing. The cycle provides a view of how data travels and transforms from collection to interpretation and is ultimately used in effective business decisions.

About The Author: Phillip Harris is a data management enthusiast who has written numerous blogs and articles on effective document management and data processing.

Four Tips For Successful Data Consolidation

Successful data consolidation can bring about a number of benefits to an organisation, and many go-getting firms are seeing consolidation as standard practice.  But the complexities of consolidation and the bringing together of the appropriate resources and people can require careful thought and consideration.  Here are four tips to make sure your data consolidation is a successful exercise.

Think long-term 

Any project that embarks on consolidating a company’s IT infrastructure needs to be well-planned and thoroughly analysed before setting the wheels in motion. There are a whole host of strategic considerations, based around fundamental aspects relating to your business.

Firstly, think about the financial savings you might gain from data consolidation. Don’t just consider short-term savings; think about how much you’ll benefit in the long run. And where would you want to steer the investment made possible by that extra income?

What are the company’s plans for future growth?  If you are considering expanding into new markets or making acquisitions, then this could impact on any data consolidation projects that you embark upon.

Take a holistic view of your IT suite and infrastructure.  What are your current (and future) needs?  Will you be planning any upgrades and how will your company integrate new changes in technology into your infrastructure?

Consider, also, the current licensing options for your software.  When will these need renewing and how will they impact on any data consolidation project your company might embark on?

Adopt a strategic approach 

A well-thought-out approach to data consolidation should be considered strategically rather than tactically. A strategic mindset shows that you’ve thought about the long-term consequences of the project’s outcomes and the goals that you want to achieve. Strategic thinking is also more likely to gain approval from the financial powers that be within the organisation, giving the project the green light to go ahead.

Review your current IT suite 

If you’re going to be embarking on migrating applications to new or upgraded environments as part of a data consolidation project, it makes sense to give your current hardware and software suite an overall audit at the same time.  If you’re not using the most up-to-date IT applications and infrastructure, then now could be the perfect time to consider upgrading, which can benefit the users and increase productivity within the organisation.

Improved hardware designs and network performance can provide efficiencies and cost savings within a firm.

Much-improved software comes with better functions and features, increasing ease of use and efficiency.

Follow the rules 

Successful data consolidation projects require sticking to some tried and tested rules, so stay on course if you want to reap the benefits. For example, when designing a modernised platform, build a balanced and aligned architecture to avoid creating bottlenecks in your system.

Invest in training the appropriate team members who will be critical in the project.  Key players will need to learn about new features before they can design or build a platform prior to implementation.  Use professionals who know what they are doing and have experience of these kinds of projects.

Build a business-critical and non-critical database solution, so that non-project team members can understand what the implications of the consolidation process will be.

Leave the trickiest aspects to the end.  By the time you get round to dealing with them, it could be that you have adapted plans to replace these aspects instead.

This article was written by Lauren R, a blogger who writes on behalf of DSP, providers of managed IT services in London.

Cloud X.0 (and Then Some?)

HP’s latest announcement that it was stepping away from an Intel-based chip model and planning a whole new GENUS (not generation) of servers raises the bar in the server race. And every bar-raising event comes with questions: some are pure curiosity, some more probing.

In its press release, HP noted: “With nearly 10 billion devices connected to the internet and predictions for exponential growth, we’ve reached a point where the space, power and cost demands of traditional technology are no longer sustainable,” said Meg Whitman, president and chief executive officer, HP. “HP Moonshot marks the beginning of a new style of IT that will change the infrastructure economics and lay the foundation for the next 20 billion devices.” (Emphasis is mine.)

I’ve been waiting for something like this — and since long before the announcement of Moonshot 18 months ago. At Sperry, nearly 40 years ago, we were actually developing a mainframe that was dynamically software configurable, but it got shelved after $50 million based on two factors. One was that Fairchild, then leading the fab world, said the design was too advanced to be realizable in chips (THEN). The other was customer-base resistance to ANY manner or scale of actual conversion that might be required (and yes, some would be required for the very oldest of Sperry systems). This next generation of HP servers (and its forthcoming competition) soared over the first hurdle and made the second one moot.

Back in the late ’70s, Fairchild offered to help Sperry build the facility to R&D and develop the chips on its (Sperry’s) own. But that project got shelved, 800 hardworking people went on to do something else, and it led to the evolution of Sperry’s (later Unisys’) product family. I was part of the future product planning group that periodically reviewed the product family and its intent, status, etc., and we collectively gave it a (giddy with anticipation) thumbs up. Sigh.

The impact of this new form of server can and should be seen as positive. Having energy-efficient, multi-module, functionally specific servers is (or should be) a customer’s dream. Have racks of them, and cabinets of them. It means customers can tune their data centers to the actual nature of the systems they run, rather than having one “standard” server architecture meet all needs, and with greater cost efficiency. AND if the workload changes and you have to adapt, there likely is (or will be) one or more modules out there you can plug in to support it.

But did HP just adopt the “cellular phone market and product” model, preparing to offer new versions of the servers every three to six months or so? “It’s a software-defined server, with the software you’re running determining your configuration,” said David Donatelli, the general manager of HP’s server, storage and networking business. “The ability to have multiple chips means that innovations will happen faster.” That’s probably quite the understatement. No slight intended, but it sounds like the cell/smartphone model to me.

Possibilities are endless, but the top two to me are: (1) those holding onto their own data centers will go berserk and the cloud data center guys are going to be having heart attacks; or (2) maybe everyone will step back and carefully plan their futures. Servers used to be just servers, more or less. Now servers would or could seem to be anything you want them to be. That hits a lot of critical issues, from efficiency in processing to surge processing accommodation to … (feel free to keep adding).

We have reached [yet] another branch in the “Cloud Journey,” as I call it. We can’t call it a revolution nor an evolution. We are on a fast-track journey that keeps picking up speed.  We’re not even noticing speed bumps anymore.

You can have ONE datacenter with a (potentially) constantly changing mix of servers to meet specific organization/user systems and user community needs, or maybe move to a SOA-like CLOUD services provider model in which specific cloud suppliers provide best-in-class services for specific applications types, e.g., graphics, scientific, data analytics, etc.

Now what this means to the software vendors (SaaS and others) is still to be fully defined, and I personally imagine a lot of head scratching and perhaps some serious angst over how to cope with this. Or some vendors might just go about their business. But a shift to a consistent per-user pricing model versus a per-instance model has some interesting implications, for example. Or it could create a whole new metric.

Personally, I see this as really re-dimensioning the evolution of cloud computing and possibly rethinking what goes into the cloud and what stays in house — if anything, or even pulling some items back in house. It also leads to the potential for more comprehensive federations of linked special-purpose cloud providers individually focusing on niche markets as firms try to slice and dice. SOA in the Cloud carried to the extreme — or maybe it is an evolution of the CLOUD AS SOA. IF you can get the energy and operational economics back in tow, and the management of all of this is NOT back-breaking, AND you have software economics benefits, you might even see the migration to the cloud slow, or morph into something different.

This could really change the forecasts for % of the cloud held by the various players in the SaaS, PaaS and IaaS space, such as shown in the Telco2Research graphics and other prognostications/forecasts. It will open and close some doors, for sure.

In my recent book Technology’s Domino Effect – How A Simple Thing Like Cloud Computing and Teleworking Can De-urbanize the World, I tackled a simplified management overview of  the implications of JUST cloud computing and teleworking on American society (which can be scaled to a global appraisal as well).  My last article here on Enterprise Features, “Farewell Bricks and Mortar,” addressed the implications of mass (or even just materially increased) telecommuting/telework.

This server announcement re-dimensions the scope of that impact. And with that re-dimensioning comes an increased concern for many things and likely number 1 on that list is security, and number 2 will likely be technology-proficient skill sets for that (or each) generation of the servers as they come to market.

But a solid #3 is the need for companies to assess just what this does to THEM … how they will operate, what they will do with this new capability, how best to apply it … This technological baseline shift is a reason for firms to become even better at their on-going self-appraisal and dynamic planning.

The new servers reflect pressures on prices, and certainly the need to tune or enhance typical server performance.   The new capabilities need to be understood and effectively adopted by organizations. I can easily see vector and scalar computing plug-in modules; graphics; GIS; … in fact, a single cabinet could house up to hundreds of modules with a nice mix for many types of users in vertical areas. It opens up massively parallel computing and federated assets. It does many things.

But it points to something else.

Mr. Donatelli indicated that “It’s the Web changing the way things get done.” Other firms will be launching their own versions of this type of server. With the plug-in module approach they are using, the server platform remains static but the modules can be changed. The mental analogy is replacing a PC’s chip with a new one with ease, almost as if you were plugging in a USB device.

Eric Schmidt’s observation on the web ought to be stenciled onto every cabinet enclosure. (I was going to say engraved, but stenciling is cheaper.)   And I’m starting to think that Schmidt, like Murphy, was an optimist. I have to wonder when that type of dynamic software definability will hit PCs, laptops, tablets, smartphones … and even our TVs and other devices. We get firmware/software  updates all the time. This is the next step, more or less.

I’m not sure if the dog is biting the man or the man is biting the dog in this case.  But I do have a paw print on my chest and I am none too sure I am happy about that.  The paw print is the size of a stegosaurus’ footprint.

I asked a handful of colleagues what they would do with this. Some had no immediate response, some said they needed to assess the impact and sort out new planning models.    Several others just grunted.  One often-beleaguered soul smiled and asked me when someone was going to announce SkyNet and he could retire and not have to worry about it. (Really.) Let the machine run the machine.

And the Cloud marches on. But now, it  seems to have its own cadence.

Network Attached Storage for Startups

Network attached storage (NAS) is becoming more and more common in small businesses that still require an effective file server. There are a number of options currently available, and companies could opt for cloud solutions or even direct attached storage. However, thanks to the dedicated storage functionality and specific data management tools inherent to most modern NAS appliances, as well as the comparatively lower price, many startups are migrating toward network storage.

Understanding NAS Systems

Modern businesses need a reliable and effective solution to securely store data while still keeping it immediately accessible to those who need it. In the simplest terms, NAS systems are data storage appliances connected to multiple computers over a network. They are not directly attached to any one specific computer, but instead have their own IP address and function independently.

The hardware for these systems is also relatively simple to understand, and the comparatively compact appliances have a small footprint and can stay out of the way. The most common models have multiple hard drives for redundant storage and added security, and companies do not have to start with each bay filled. Smaller companies can start with just a couple HDDs and then add more as the business grows.

These are intended to be plug-and-play systems and generally connect to an open port on a router. Keep in mind, though, that most vendors don’t recommend a wireless connection for them. When reliability and uptime are so important, it isn’t worth risking a slower and less consistent link.

Startup Benefits

Network attached storage allows companies to centralize storage and provides simple management tools. As more startups begin using these systems, though, they are discovering many other uses. A business could, for example, use them to share printers, stream media, or connect to surveillance systems that support IP cameras.

Multiple and swappable hard drives make this a more affordable option for new startups because it is easier to fit the device into the budget and let it expand as the company grows. Also, backups can be made on a specific hard drive and then swapped out and stored somewhere else for added security. Whether you are creating entire system backups or just working with a lot of sensitive customer data, these features can be very important.

Matching the System to the Company

Despite its plug-and-play capabilities, one size does not fit all when it comes to NAS systems. Startups generally have a very limited budget, and that means these purchases need to be made carefully to ensure that the company gets everything it needs without overspending. So when you are ready to implement your own storage solutions, consider the following elements:

Access – Who can see what and when? Centralizing your data storage offers a lot of convenience for the entire company, but that doesn’t mean you want the entire company to see everything stored there. Some models only offer basic controls, allowing the manager to mark some data as read only, while more advanced systems will have tools that allow companies to set specific permissions for different individuals.

Connectivity – How will staff members connect with the device? How many connections can it support at once? Will it allow remote access? An NAS system will usually have multiple Ethernet ports that can be used simultaneously (for redundancy in the connection and better uptime), and remote access makes it possible for employees in other locations to access the data they need.

Capacity – The flexibility in storage space means there’s no reason to over-purchase a NAS system. While it may be recommended to get as much storage as the startup budget allows, there’s no reason to stretch those dollars too far. As long as there are multiple internal hard drive bays, you will be able to expand later, or even swap out smaller HDDs for larger models.
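As a rough sketch of keeping an eye on that capacity once a share is mounted, the Python below checks free space on a mounted NAS path using only the standard library. The mount point and the 10% threshold are assumptions for the example, not recommendations from any vendor.

```python
# Sketch: warn when a mounted NAS share drops below a free-space threshold.
# "/mnt/nas" and the 10% floor are illustrative values.
import shutil

def check_free_space(mount_point="/mnt/nas", min_free_fraction=0.10):
    usage = shutil.disk_usage(mount_point)
    free_fraction = usage.free / usage.total
    if free_fraction < min_free_fraction:
        print(f"Warning: only {free_fraction:.0%} free on {mount_point}; "
              "consider adding or swapping in larger drives.")
    return free_fraction

if __name__ == "__main__":
    check_free_space()
```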

What type of storage system are you currently using in your organization? Would you consider making the switch to NAS? Let us know in the comments.

About The Author: Paul Mansour is enthusiastic about start-ups along with consumer and small business technology. Working within Dell.com, he needs to stay up-to-date on the latest products and solutions and best-in-class ecommerce strategies. In his spare time he can’t resist taking apart his latest gadget and forgetting how to put it back together.

A Data Centre Worthy of Any Metropolis

The internet is a driving force behind industrialisation around the world, and it is especially important for Africa, as more governments and corporate entities look to invest in internet-enabled technology to drive their operations. One important issue in this area is the establishment of dependable facilities to satisfy the increasing demand for products and services. At the moment, IBM – in conjunction with various other corporate entities – is establishing its line of products through the implementation of a data centre in the heart of Lagos, Nigeria.

Some of the Typical Data Centre Beneficiaries

The banking sector is probably one of the major beneficiaries of this project. The Bankers Warehouse facility runs on IBM smart modular technology and is likely to go live by 2014. For IBM, Africa is the next frontier after China and India, which already host similar projects that generate substantial revenue; it is no wonder, then, that India is among the leading destinations for offshore business solutions. With an established data centre in Lagos, financial institutions, telecommunications companies, and energy sector players are likely to benefit immensely from efficient and improved service provision.

The Main Factors to a Successful Implementation of the Data Centre

The plans and infrastructure for a facility like the Bankers Warehouse data centre in Lagos required financing. Trico Capital International, through its chief executive officer Austine Ometoruwa, was helpful in structuring and funding the project while they worked closely with Chiltern Group as advisory partners. It is expected that this facility would have an uptime rating of no less than three gold stars.

What IBM’s and Google’s Focus on Africa Means

It is not only IBM that is looking to Africa. Google has also expressed interest in establishing serious internet technology facilities on the continent. This has been spurred by the availability of an affordable labour force with impressive, innovative skills.

With funding forthcoming through initiatives by CEOs like Austine Ometoruwa, the technological future of Africa could not be brighter. Although Africa may not be capable of single-handedly establishing such data centres, financiers are available and willing to outsource such services in order to improve business efficiency.

Some of the business elements that can be positively affected by the introduction of such data centres include:

Sales and marketing, where the current and future needs of customers can be identified and factored into a business’s strategic plan. Technological innovations also mean that better service provision can be attained, especially when a company outsources certain processes to professional service providers, as with IBM and the Bankers Warehouse data centre in Lagos, Nigeria.

An efficient customer support system can be a direct outcome of the outsourcing process. At the moment, some of the big technology firms have off-shore data centres that provide customer support. With this kind of approach, these firms are making substantial savings in terms of labour costs.

Ultimately, establishing a data centre worthy of any metropolis requires planning and financing. Getting all the stakeholders on board for such a project is not an easy task and may require advisory services. It is most probable that when an operator has limited technological capacity to implement such a facility, outsourcing is a more viable option to consider.