
The Biggest Disk Defragmentation Myths

Founded in 1981, Diskeeper Corporation is a technology innovator in system performance and reliability. The company’s products make computer systems faster, more reliable, longer-lived and more energy efficient, all with zero overhead.

Inventors of the first automatic defragmentation in 1986, Diskeeper pioneered a new breakthrough technology in 2009 that actually prevents fragmentation.

Diskeeper’s family of products are relied upon by more than 90% of Fortune 500 companies and more than 67% of The Forbes Global 100, as well as thousands of enterprises, government agencies, independent software vendors (ISVs), original equipment manufacturers (OEMs) and home offices worldwide.

Today, I’ll be interviewing Colleen Toumayan from Diskeeper.

What is disk defragmentation, and why is it important?

The weakest link in computer performance is the hard disk. It is at least 100,000 times slower than RAM and over 2 million times slower than the CPU. In terms of computer performance, the hard disk is the primary bottleneck. File fragmentation directly affects the access and write speed of that hard disk, steadily degrading computer performance to unviable levels. Because all computers suffer from fragmentation, this is a critical issue to resolve.

What is fragmentation?

Fragmentation, by definition, means “the state of being fragmented,” or “something is broken into parts that are detached, isolated or incomplete.” Fragmentation is essentially little bits of data or information that are spread over a large disk area causing your hard drive to work harder and slower than it has to just to read a single file, thus affecting overall computer performance.

Imagine if you had a piece of paper and you tore it up into 100 small pieces and threw them up in the air like confetti. Now imagine having to collect each of those pieces and put them back together again just to read the document. That is fragmentation.

Disk fragmentation is a natural occurrence and is constantly accumulating each and every time you use your computer. In fact, the more you use your PC, the more fragmentation builds up, and over time your PC is liable to experience random crashes, freeze-ups and eventually the inability to boot up at all. Sound familiar? And you thought you needed a new PC.

That is what happens to the data on your hard drive every time you save a file. The question is simple: why defrag your hard drive after the fact, when you can prevent the majority of fragmentation in the first place?

By intelligently writing files to the disk without fragmentation, your hard drive read/write heads can then read a file that is all lined up side by side in one location, rather than jumping to multiple spots just to access a single file.

Just like shopping, if you have to go to multiple stores to get what you want, it simply takes longer. By curing and preventing the fragmentation up front, and then instantly defragging the rest, you experience a whole new level of computer performance, speed and efficiency.

Fragmentation can take two forms: file fragmentation and free space fragmentation.

“File fragmentation causes performance problems when reading files, while free space fragmentation causes performance problems when creating and extending files.” In addition, fragmentation also opens the door to a host of reliability issues. Having just a few key files fragmented can lead to an unstable system and errors.

Problems caused by fragmentation include:

System Reliability:

  • Crashes and system hangs
  • File corruption and data loss
  • Boot up failures
  • Aborted backups due to lengthy backup times
  • Errors in and conflict between applications
  • Hard drive failures
  • Compromised data security

Performance:

  • System slowdowns and performance degradation
  • Slow boot up times
  • Increase in the time for each I/O operation or generation of unnecessary I/O activity
  • Inefficient disk caching
  • Slower file read and write speeds
  • High level of disk thrashing (the constant writing and rewriting of small amounts of data)
  • Slow backup times
  • Long virus scan times
  • Unnecessary I/O activity on SQL servers or slow SQL queries

Longevity, Power Usage, Virtualization and SSD:

  • Accelerated wear of hard drive components
  • Wasted energy costs
  • Slower system performance and increased I/O overhead due to disk fragmentation compounded by server virtualization
  • Write performance degradations on SSDs due to free space fragmentation

Why do operating systems need to break files up and spread their contents across so many places? Why doesn’t it just optimize when writing the file?

Files are stored on a disk in smaller logical containers, called clusters. Because files can vary radically in size, a great deal of space on a disk would be wasted with larger clusters (a small file stored in a large cluster would consume only a fraction of the available cluster space). Thus clusters are generally (and by default) fairly small. As a result, a single medium-sized or large file can be stored in hundreds (or even tens of thousands) of clusters.
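To make the cluster arithmetic concrete, here is a minimal sketch (plain Python, not anything taken from Windows or Diskeeper) that counts how many clusters a file occupies, assuming the common NTFS default cluster size of 4 KB:

```python
# Illustrative sketch only: how many clusters a file occupies,
# assuming a 4 KB cluster size (a common NTFS default).
import math

def clusters_needed(file_size_bytes: int, cluster_size_bytes: int = 4096) -> int:
    """A file always consumes whole clusters, so round up."""
    return math.ceil(file_size_bytes / cluster_size_bytes)

# A 200 MB file at a 4 KB cluster size spans 51,200 clusters, each of which
# could in principle end up in a different spot on the disk.
print(clusters_needed(200 * 1024 * 1024))   # 51200
print(clusters_needed(1_000))               # 1 -- even a tiny file uses a whole cluster
```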

The logic which fuels the native Windows Operating System’s file placement is hardly ideal. While it has made some advances in recent years, a Windows user at any level of engagement (home notebook all the way up to enterprise workstation) is faced with an ever-increasing level of file fragmentation.

This is largely attributable to a lack of consolidated free space as well as pre-existing fragmentation (essentially: fragmentation begets fragmentation). The OS will try to place a file as conveniently as possible.

If a 300MB file is being written, and the largest available contiguous free space is 150MB, 150MB of that file (or close to it) will be written there. This process is then repeated with the remainder… 150MB of file left to write, 75MB free space extent, write… 75MB of file left to write, 10MB free space extent, write… and so on, until the file is fully written to the disk. The OS is fine with this arrangement, because it has an index which maps out where every portion of the file is located… but the speed at which the file could optimally be read is now vastly degraded.

As an extra consideration, that write process takes considerably longer than if there had simply been 300MB of contiguous free space available to drop the file into.
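The placement behavior described above can be pictured with a toy model. The sketch below is only an illustration of a “largest free extent first” policy under assumed free-space sizes, not the actual Windows allocator, but it shows how one 300MB file ends up in several fragments when free space is scattered:

```python
# Toy model of the allocation behavior described above -- not the real
# Windows allocator, just an illustration of how one file becomes many fragments.
def write_file(file_mb: int, free_extents_mb: list[int]) -> list[int]:
    """Greedily place a file into the largest free extents available."""
    fragments = []
    remaining = file_mb
    for extent in sorted(free_extents_mb, reverse=True):
        if remaining <= 0:
            break
        piece = min(extent, remaining)
        fragments.append(piece)
        remaining -= piece
    if remaining > 0:
        raise IOError("disk full")
    return fragments

# Free space is scattered: the largest hole is only 150 MB.
print(write_file(300, [150, 75, 40, 20, 10, 5]))
# -> [150, 75, 40, 20, 10, 5]: the 300 MB file is written as six fragments,
# so reading it back requires six separate seeks instead of one.
```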

If Windows already has a free disk defragmentation utility, why should I pay money for another one?

  1. Incomplete. Can’t get the job done.
  2. Won’t defrag free space.
  3. Resource intensive. Can’t be run when the system is active.
  4. Servers? It can’t keep up with them.
  5. Hangs up on big disks, with no indication of progress.
  6. Eats up IT admin time.
  7. Takes forever. May never finish.
  8. Effects of fragmentation still not eliminated!

Inefficient defragmentation means higher help desk traffic, more energy consumption, shorter hardware life, less time to achieve proactive IT goals, and throughput bottlenecks on key consolidation economies such as virtualization.

What are some of the biggest misconceptions that people have when it comes to disk defragmentation?

Myth #1: The built-in defragmenter that comes with Windows® is good enough for most situations because it can be scheduled.

After-the-fact defrag solutions, including the built-in one, allow precious resources to be wasted by writing fragmented files to the disk. Once a file is written, the defrag engine has to go to work to locate and reposition that file. When that file later expands, this doubled effort has to repeat itself all over again — this approach remains a reactive band-aid to a never-ending problem. Not even the built-in defrag can keep pace with the constant growth of fragmentation between scheduled defrags. Manual defrags tie up system resources, so users just have to wait … and wait … and wait.

When you are only doing scheduled defrags (or nothing at all), your PC accumulates more and more fragmentation, which leads to PC slows, lags and crashes. This problem cannot be handled with a freebie utility even if it can be “scheduled”. Here’s why:

Systems accumulate fragmentation continually. When the computer is busiest (relied upon the most), the rate of fragmentation is highest. Most people don’t realize how much performance is lost to fragmentation and how fast it can occur. To maintain efficiency at all times, fragmentation must be eliminated instantly as it happens or proactively prevented before it is even able to happen. Only through fragmentation prevention or instant defrag can this be accomplished.

Scheduling requires planning. It’s a nuisance to schedule defrag on one computer, but on multiple PCs it can be a real drain of time and resources. Plus, if your PC isn’t on when the defrag process is scheduled, it will not run until you turn your PC on again. By then, you will need to use your computer and you will experience PC performance slowdowns while you work – that is, if you are able to work at all.

Scheduled defrag times are often not long enough to get the job done.

Myth #2: Active defragmentation is a resource hog and must be scheduled off production times.

This was very true with regard to manual defragmenters. They had to run at high priority or risk getting continually bounced off the job. In fact, these defragmenters often got very little done unless allowed to take over the computer. When the built-in defragmenter became schedulable, not much changed. The defrag algorithm was slow and resource heavy. Built-in defragmenters were really designed for emergency defragmentation, not as a standard performance tool.

Ever since it was first released in 1994, Diskeeper® performance software has been a “Set It and Forget It”®, schedulable defragmenter that backed off system resources needed by computer operations. Times have changed, and a typical computer’s I/Os per second (IOPS) have accelerated a hundredfold.

Because this drove the rate of fragmentation accumulation way up, Diskeeper Corporation saw the need for a true real-time defragmenter and developed a new technology, InvisiTasking® technology. This innovative breakthrough separates usable system resources into five areas capable of being accessed separately.

As a result, robust, fast defrag can occur even during peak workload times – and even on the busiest and largest mission-critical servers. In the latest version, Diskeeper incorporated a new feature called IntelliWrite® fragmentation prevention technology. This new feature prevents file system fragmentation from ever occurring in the first place.

By preventing up to 85% of fragmentation, rather than eliminating it after the fact, Diskeeper is able to improve system performance much more dynamically and beyond what can be done with just the automatic defragmentation approach.

Myth #3: Fragmentation is not a problem unless more than 20% of the files on the disk are fragmented.

The files most likely to be fragmented are precisely the ones relied upon the most. In reality, these frequently accessed files are likely fragmented into hundreds or even thousands of pieces. And they got that way very quickly. This degree of fragmentation can cost you 90% or more of your computer’s performance when accessing the files you use most. Ever wonder why some Word docs take forever to load? Without fragmentation, they load in a flash. File load times are quicker, and backups, boot-ups and anti-virus scans are significantly faster.

Myth #4: You can wear out your hard drive if you defragment too often.

Exactly the opposite is true. When you eliminate fragmentation you greatly reduce the number of disk accesses needed to bring up a file or write to it. Even with the I/O required to defragment a file, the total I/O is much less than working with a fragmented file.

For example, if you have a file that is fragmented into 50 pieces and you access it twice a day for a week, that’s a total of 700 disk accesses (50 X 2 X 7). Defragmenting the file may cost 100 disk accesses (50 reads + 50 writes), but thereafter only one disk access will be required to use the file. That’s 14 disk accesses over the course of a week (2 X 7), plus 100 for the defrag process = 114 total. 700 accesses for the fragmented file versus 114 for the defragged file is quite a difference. But in a real world scenario, this difference would be multiplied hundreds of times for a true picture of performance gain.
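For readers who want to check the numbers, the back-of-the-envelope arithmetic looks like this (assuming, as the example does, one disk access per contiguous fragment):

```python
# The arithmetic from the example above, spelled out.
fragments = 50
accesses_per_day = 2
days = 7

fragmented_io   = fragments * accesses_per_day * days       # 50 * 2 * 7 = 700
defrag_cost     = fragments * 2                              # 50 reads + 50 writes = 100
defragmented_io = defrag_cost + 1 * accesses_per_day * days  # 100 + 14 = 114

print(fragmented_io, defragmented_io)   # 700 vs 114
```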

With the release of Diskeeper 2011, we are tracking how many I/Os Diskeeper helps save your system, making it even easier to gauge the benefits of running Diskeeper on your computer.

In addition to proactively curing and preventing fragmentation, Diskeeper 2011 also now defaults to an optimized defragmentation method (for the fragments that were not prevented) to maximize efficiency and performance while minimizing disk I/O by prioritizing which files, folders and free space should be defragmented. This setting can be changed to a more thorough defragmentation option from the Defragmentation Options tab in the Configuration Properties for those who wish to do so.

How has disk defragmentation changed over recent years? What trends are going to change it in the future?

With new storage technologies such as SAN and virtualization, defragmentation has changed to address their specific peculiarities.  We have a specific product for virtualization, V-locity, that goes well beyond defrag, and we also have a SAN edition of Diskeeper 2011.  I am sure the cloud will bring new technologies as well.

What should people look for when shopping around for disk defragmentation programs?

They will want one that prevents fragmentation before it even happens; one that has technology to instantly handle any fragmentation that cannot be prevented; one that is efficient and able to really target the fragmentation that is impeding system performance; one that has zero system resource conflict; and one that is specially designed with new storage technologies in mind.

Which Programming Language Is Best? (Important Career Advice For Programmers and Developers)

Are you having trouble converting social media leads into paying customers? CoupSmart is an incredibly clever tool that lets you use coupon marketing with the “Insider’s Club” that is your social media contact list.

In addition to being the current CTO of CoupSmart, Troy Davis is an experienced developer with a long list of successes behind him. Through his years in the software industry, he’s noticed some key career trends that separate successful developers from the ones who never reach their potential.

Troy has been nice enough to sit down with me and share some of his insights. Enjoy the interview below.

Can you please give me a bit of background about yourself and CoupSmart?

I started out as a webmaster for ad agencies in 1995, worked as the software developer and default IT manager for a few companies as my career progressed. One product I worked on got some investment money in 2008 and I’ve been working for startup companies as CTO since then. I’ve concentrated mostly on Web applications most of my career, but have also developed a few desktop and embedded apps as well. In 2001 I started a group called the Cincinnati Programmers Guild, an educational non-profit that focused on broadening the knowledge of its members by endorsing no specific technologies, which is much different from most technical groups. Instead, the focus was placed on learning new ideas no matter what technologies were used. We had consistent monthly meetings for 5 or 6 years, and it was a great experience.

CoupSmart was started in 2009 by CEO Blake Shipley. Originally centered around an iPhone app, we shifted focus at the end of last year to try some interesting ideas we came up with regarding the economic, business and social dynamics of coupons. We’ve been offering a Web/Facebook promotions system for a couple of months now; it allows people to share offers with their friends to earn a higher-value offer. Our customers are mostly in Cincinnati at the moment. We’re focused on tying social media to the physical world for our customers, and have recently developed a hardware device for point of sale to assist in this effort. This is in beta testing with a few customers at the moment.

What is your biggest beef with the mindset of the software development community?

It’s not so much the community of software developers that presents a problem; most people who make the effort to seek out and talk with other programmers are seeking knowledge themselves, and usually want to learn how others meet their challenges. This often leads to trying multiple languages, vendors, platforms, etc., and that’s all good.

The problem I’m discussing is more often seen in solitary developers. The technologies they use are initially attractive simply because the job ads show higher starting salaries for developers with experience in them. After some classes and much trial and error, the developer becomes minimally competent in a narrow aspect of software development, and lands a job by interviewing (often) with a non-technical HR person that can’t screen developers well.

After a few years, the developer achieves a certain level of proficiency with the tasks often assigned to them, and is assumed to be a software development professional. And they often coast for however long they can at this level.

Some time later, a new person takes over the department and doesn’t care for the technologies used by their predecessor. So a migration effort begins, and everyone is expected to adapt to the new technologies quickly or find some other kind of work for themselves. If a developer had taken an interest in keeping up to date with their field of work, they’d have recommendations for the new systems to contribute, and would likely find a comfortable place in the new structure. But those who rested on their laurels often respond defensively, obstructing change because they simply fear it. And ultimately they get pink slips.

When I was active with the Cincinnati Programmers Guild, I saw many mainframe developers who had been laid off in waves, and once unemployed were desperate to pick up that one key idea they needed to get another job doing the same thing they were doing for the last 15 or 20 years. Many of these people came to their first Guild meetings having only written COBOL or Fortran their entire careers. They’d never bothered to learn anything else. And most of them seemed to have the idea that learning just one new language was all that they needed to regain their previous role and stature.

Some got certifications in .Net or Java, spending thousands of dollars to reboot their careers. A few got new jobs and I never saw them at Guild meetings again; they reverted to their solitary existences, I suppose. But most of them lingered in unemployment for years, taking this class or that class as they could afford it with temp jobs. Very few of them tried to go freelance, either. They all wanted back into large companies, it seemed. The illusion of job security was prevalent, despite the obvious evidence to the contrary: a room full of people with similar career conundrums attending the same meetings.

If a developer has had success under a certain platform or language, what’s wrong with specializing and becoming an expert in that particular area?

There’s nothing wrong with becoming an expert in a particular area of study, it’s overspecialization that’s the problem: Focusing on one set of technologies to the exclusion of all others. So just because you happen to like writing C++ on Linux doesn’t mean you should pretend like it’s the only way to write decent software. You might fool a few people into believing you, but ultimately you’re just fooling yourself.

An example: A programmer I used to work with just absolutely hated Windows and everything that went with it. Just couldn’t stand to be in the same room with it. I worked on a Mac, so I was exempt somehow. But we had a web application that needed to be compatible with all the major browsers, and that was the plan. This developer fought with just about everyone on the staff to simply not support Internet Explorer, which would have been an almost certain act of suicide for any SaaS company. I was in charge of this group, so it was my job to try to persuade her to just get the compatibility work done despite her qualms. It didn’t work out very well; several fits of yelling and rage happened both in person and over the phone. I tried having her work with us as a freelancer only for a while to see if some isolation would help, but it didn’t, and ultimately she was laid off.

Another example: A Linux developer worked with me at the first ad agency I worked at in the late 90s. He was young and very opinionated about how great his chosen technologies were. He frequently insulted coworkers for what he saw as their technical incompetence, so he was not popular with the staff. But he wrote code nobody else at the company knew how to write at the time, and it was important code, so his social eruptions were tolerated. I decided to learn more about Linux and C at that point, and within a couple of months had a pretty good understanding of what this guy was doing every day. And it wasn’t much. His claims of technical superiority had become a crutch, and he used others’ lack of knowledge to justify not working very hard at all. Ultimately he was let go after a particularly nasty exchange with a few coworkers. The next day he logged into a client’s server from his home and deleted their entire website, along with several log files that might have implicated him in doing so. But he missed one, and it had an IP address, which we confirmed later that day with his ISP to be assigned to his login at that time. We lost the client anyway, but he got special oversight by law enforcement authorities for years afterward. Might still be monitored, I’m not sure.

Many would argue that it’s a smart bet and a smart career move to align your efforts with the strongest or most dominant platform. What’s wrong with this mindset?

Nothing, so long as you understand that what you’re focusing on is just the current flavor of the month, and will inevitably be replaced with some other technology at some point. So just be prepared by learning about the alternatives before it’s time to switch.

My point is that programming is a career path where any valued proficiency has an expiration date attached, and as time goes on, that expiration period grows shorter. This is commensurate with how fast hardware is changing: the microprocessor industry has stayed pretty close to the predictions of Moore’s law for over 30 years now, and many academics say that this is evidence that we are still in the infancy of computing. It would be unwise to assume that we’ve reached any kind of sustainable plateau with these technologies yet.

So the mainframe folks I mentioned earlier that had the same tasks for 15 or 20 years are likely to be the last of their kind. A lone, overspecialized programmer entering the field today may only get away with her current skill set for 5-10 years. This continual decrease in the longevity of newly learned computing skills agrees with the technological singularity concept, something that might be worth checking out:

http://en.wikipedia.org/wiki/Technological_singularity

What’s the worst that can happen? What’s wrong with sticking to techniques that are “good enough”, and more familiar?

I see possible dangers including a widespread deficit of capable programmers because of overspecialization in now-antiquated technologies (large companies have been using this claim to justify increasing numbers of tech worker visas for decades), and massive amounts of money spent unnecessarily to prop up an aging technology due to internal resistance to change. These ultimately make the entire economy less productive / profitable. That means fewer jobs for everyone and a smaller economy overall as valuable resources are spent in non-productive efforts, trying to catch tempests in cups of various sizes and compositions instead of inventing what’s actually needed for the foreseeable future.

And there’s a long-running developer philosophy that “good enough” techniques really are good enough, as long as you know your options well. That’s not at odds with the value of continual learning, however. Most of the time, “good enough” has to do with a judgement of how much time it would take to implement a more complex solution to a problem, versus choosing a simpler method which has known drawbacks, but will probably not manifest as a problem. Using an old technique to deliver desired functionality faster isn’t inherently wrong, it might be the best way to get the system working as desired. But being unaware of the alternatives for that decision can be costly for many more people than the developer and their employer. Software inadequacies get repeated over and over with growing numbers of people, so a bad decision of one developer can have a disproportionately large impact on the lives of many more people over time.

But more to the point, I don’t think it’s possible to back up a claim that any single software technology will be “good enough” to address a wide variety of problems over a long time span. We’re just not at that stage of technological development yet.

What was your development philosophy when working on CoupSmart, and what kind of results did it bring?

The software development work had already been started by two part-time developers when I joined CoupSmart, so the language had already been chosen, and it was PHP. It’s not my favorite language, but it’s perfectly suitable for modern web apps, so I wasn’t concerned. There’s also an advantage in that more developers fresh out of college have a working knowledge of PHP, whereas fewer are familiar with Ruby, which is the language I might have chosen if the variables had been different.

And although it’s probably too early to tell if our programming language choice had a direct impact on the success of CoupSmart as a company, we are often praised by other entrepreneurs in our circle of friends whose software development teams are working in .Net or Java and are apparently far less productive given the same resources. So I’ll count that as a tentative win.

Why do you think other developers are so resistant to new ideas?

I think you can answer this with the same reasons people resist change in general: fear of the unknown, self-doubt, overwhelming choices, etc. It’s really no different. We develop habits because it’s easier than rethinking every single past decision; it’s just faster. But when you fail to reevaluate your prior decisions for too long, there are always consequences as you and the rest of society drift apart ideologically.

One woman that I met through the Guild was a COBOL mainframe programmer who got laid off after over 20 years writing the same kind of code every work day. They replaced the mainframe with a more modern system, and she had not transferred to the team doing the new work. She thought their project would fail, apparently. It certainly could have; lots of software projects fail. But this one didn’t, and she was laid off shortly after the mainframe was decommissioned. She decided that the big new thing in modern programming that had been absent from her work was object orientation (aka OO), a layer of abstraction that makes it easier to design large software systems. I encouraged her to learn a new language in order to become familiar with OO concepts, but she seemed afraid somehow. Months later, she told me that she had finally registered for a class in .Net. That would certainly cover OO topics, so I tried to give some positive reinforcement. But I think her conception of how novel this concept was may have gotten in the way of simply using it until she understood it. She remains out of work to this day, over 5 years later.

I’m acquainted with a developer who worked for a competing ad agency. I talked with him over lunch one time, and he admitted rather shyly that he was still doing most of his work in ColdFusion, a programming environment that is arguably not aging very well.  I asked him if he was thinking about trying anything more modern for new projects, and he claimed to have investigated a few other options, but just wasn’t willing to give up his favorite environment, which he just loved and felt very comfortable with. About two years later, I heard that his company had shut down his department and laid him off: not enough clients wanted their work done in ColdFusion any more, and he just wasn’t willing to try something else, so the department stopped getting new projects, and after a while the company couldn’t justify keeping him full-time just for maintenance work on old apps.

The Difference Between MLC (Multi Level Cell) and SLC (Single Level Cell) SSDs (Solid State Drives)

MDL Technology, LLC is a Kansas City IT company that specializes in worry-free computer support by providing solutions for around-the-clock network monitoring, hosting, data recovery, off-site backup security and much more. MDL Technology, LLC is dedicated to helping businesses place time back on their side with quick and easy IT solutions.

Today, I’ll be interviewing TJ Bloom, who is the Chief Operations Officer at MDL Technology. And he’ll be giving us a brief overview of SSD technology.

What is the difference between MLC (Multi Level Cell) and SLC (Single Level Cell) solid state drives?

Multi-Level Cell is a memory technology that stores multiple bits of information, at multiple charge levels, in each cell. Because of this, MLC drives have a higher storage density and a lower per-MB manufacturing cost, but there is a higher chance of error on the drive. This type of drive is typically used in consumer-based products. Single Level Cell stores only one bit of information per cell. This decreases power consumption and allows for faster transfer speeds. This technology is typically reserved for higher-end or enterprise memory cards where speed and reliability are more important than cost.
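As a rough way to picture that density trade-off, consider the toy calculation below; the cell count is an assumed figure for illustration only, not the specification of any real drive:

```python
# Hypothetical illustration of the SLC vs MLC density trade-off described above.
cells = 8_000_000_000  # assumed number of physical flash cells, for illustration only

slc_capacity_bits = cells * 1   # SLC: 1 bit per cell
mlc_capacity_bits = cells * 2   # MLC: 2 (or more) bits per cell from the same silicon

print(slc_capacity_bits / 8 / 1e9, "GB (SLC)")   # 1.0 GB
print(mlc_capacity_bits / 8 / 1e9, "GB (MLC)")   # 2.0 GB, cheaper per GB but more error-prone
```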

How does the endurance of SSD drives compare to traditional hard drives? What factors contribute to SSD durability?

SSD drives are much faster than a traditional HDD. SSD drives use NAND flash memory, which means they have no moving parts. The removal of moving parts allows for faster data retrieval times and better stability and durability. Furthermore, SSD drives can withstand a much higher shock rate before sustaining damage to the drive than an HDD. This, too, is due to the lack of moving parts.

What types of solid state drives are best-suited to laptops or typical PC use?

What you are willing to spend on performance and reliability determines the best drive for your PC. If you replace or purchase a laptop or PC with an MLC SSD, you should see significant performance and reliability increases over a standard HDD in your machines. MLC SSD drives are typically used in PCs or non-critical environments. In enterprise and critical environments it is advised to use SLC SSD drives. This will increase performance over an HDD and add speed and durability. An SSD is also much quieter than an HDD.

What types of solid state drives are best-suited to servers?

The Intel® X25-E Extreme SATA Solid-State Drive is one option for running your server on SSD drives. This will be much quieter, more stable and faster than your traditional HDD.

What advice can you give for someone looking to get the following benefits from their SSD purchase:

  • Overall Cost – At the server level, the cost of SSD drives is still very high compared to HDD.
  • Cost-per-gigabyte – Cost-per-GB for SSD drives can be as low as $1.87 per GB, but HDDs, coming in at under $0.13 per GB, still make SSD drives hard to justify on cost alone.
  • Speed – If you are looking for speed, SSD is the way to go.
  • Reliability – It has no moving parts, so it has better reliability and longevity.
  • Longevity – It has no moving parts, so it has better reliability and longevity.

What are some good general tips for picking the best solid-state drives, or deciding between SSD and traditional hard drives?

I like to read the reviews on what I am purchasing. http://www.ssdreview.com/

This will help you to ask the right questions and help you choose the best options for your application.

How do you see the future for SSD technology?

I think in the future you will start seeing more and more SSD drives built onto the motherboard of the computer. At some point we will be using just one big chip.

CRM Advice For Wholesale Telesales and Teleprospecting (B2B CRM)

Frank Hurtte is a consultant, speaker and author of over 150 published articles and multiple books – including an ebook, Telesales Prospecting for Industrial Distribution. His expertise is with wholesale distribution companies. These are people who sell things like electrical supplies, gears, belts, hydraulic machinery, automation, heating and air conditioning… the list goes on and on.

Many of these companies tried to set up CRM systems in the past and the efforts collapsed. Now they are in what some would call CRM 2.0. They know they need one – the cost has gone down, the capabilities gone up – so now they are in the market again.

Today, I’ll be interviewing Frank Hurtte from River Heights Consulting.

Can you please tell me about yourself and your background as it relates to telesales and tele-prospecting?

I’m a consultant who works with wholesale distributors in the sectors that serve manufacturing in the US and Canada.  These are the people who sell things like automation products, electrical products, piping and related controls, hydraulics and mechanical components.

All of these companies are basically sales organizations, buying the products of others and combining them to solve the issues faced by their customers.  These companies are looking for ways to improve their sales process, and CRM systems have been on their radar screen for many years.  Unfortunately, many of those who purchased the systems had no plans as to how to use them.  They have the same issues that many companies face – software installed and nothing happens: not maintained, poor data entered, lack of salesperson cooperation with the company marketing team, etc.

Tele-prospecting is something I developed while working in the industry and then later turned into a book format.  The difference is that a prospector is assigned to keep the data in order and to mine for additional contacts within existing customers.  This is important, because sales people tend to continue to work with the same contacts at the accounts rather than broaden the scope of the company.  This provides additional value in that they line up “leads” for the salesperson to call on based on the needs of the selling organization and the issues within the customer account.

How does telesales for wholesale prospecting differ from direct sales? What are some of the unique challenges?

The main difference in selling in this wholesale world and selling in the “direct selling” world is the amalgamation of products from various suppliers into a complete package.

Additionally, wholesalers tend to maintain relationships with companies over the long haul – the sale is not a one-time event.  Further, a single account may contain dozens of sales contacts.  For instance, the salesperson might call on the head of maintenance, several engineers, the plant manager and the head of safety – each person has their own specific needs and drivers for purchasing the product.

The unique challenges of this industry come from the fact that often there are multiple distributors serving the same geographical territory.  In order to keep the sale from becoming a “price driven event” the distributors must learn to sell their own version of service and work on the value they add in addition to the product.

What are some of the biggest problems that your clients have had when it came to setting up their CRM systems?

The biggest issues that everyone has with CRM systems come from not fully understanding how they intend to use the information once they have it.  If the data gathered is incorrectly configured, it becomes almost unusable.  Further, if data is entered inconsistently by the salespeople, it becomes impossible to draw conclusions or make decisions.  For instance, something as simple as company names creates issues.  As an example, Deere and Company might be entered by various people as:  Deere, John Deere, Deere and Company, Deere & Company, or Deere Company.  All of these make sense to a human, but sorting information in the CRM system becomes troublesome.
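One common way to tame that problem is to put drop-down lists or a small normalization layer in front of the CRM so that free-typed variants collapse to a single canonical account. The sketch below is a hypothetical illustration, not part of any particular CRM product:

```python
# Hypothetical sketch: collapse free-typed company-name variants onto one
# canonical account name so reports and sorts see a single "Deere & Company".
ALIASES = {
    "deere": "Deere & Company",
    "john deere": "Deere & Company",
    "deere and company": "Deere & Company",
    "deere company": "Deere & Company",
}

def canonical_account_name(raw: str) -> str:
    """Normalize what a salesperson typed, then look it up in the alias table."""
    key = " ".join(raw.lower().replace("&", "and").replace(",", " ").split())
    return ALIASES.get(key, raw.strip())

for entry in ["Deere", "John Deere", "Deere and Company", "Deere & Company", "Deere Company"]:
    print(entry, "->", canonical_account_name(entry))
# Every variant reports as the same account.
```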

How has the new generation of SaaS CRM systems changed the game when it comes to wholesale distribution companies?

SaaS systems allow for easier implementation and greater ease of entering data – for instance, the salesperson no longer must be in the office to access his/her information.  These systems often open the CRM to the world of smartphones and portable devices, which adds a level of convenience.  The ability to quickly access information from the parking lot of a customer is huge.

What are some tips that wholesale distribution companies should keep in mind when implementing a CRM system?

Start out with a plan.  How do you intend to use the data?  What is the end game?  Without this you will certainly be backtracking later on in the process.  Nothing will turn off your sales people more than starting up a system – spending hundreds of man hours loading data – only to discover the wrong information is loaded or the right information is loaded in a way that requires major rework.

Here are some points that need to be in the plan:

  1. Customer segmentation – determine how you should segment customers based on size, industry, geographical areas and sales territories
  2. Customer contact segmentation – not every person at a specific customer will be interested in every product you offer.  Determine how you plan to segment the customer contacts based on their own personal and professional interests.  Examples: Engineers may be interested in product features, Plant Managers will be interested in results but not in the specific features, etc.
  3. Create drop-down lists – Accounts, product lines, and other commonly used terms should be part of a user-specified drop-down list.  This minimizes errors in sorting and using the data at a later point.
  4. Involve your selling team in setting up the system – this group will understand not only current segmentation needs but will also understand potential future needs.
  5. Before implementation, develop your sales process.  What are reasonable expectations for loading and updating information and activities?
  6. Training – Start the training prior to the go live date.  Your people need to know what their roles will be in the system.
  7. Training after the fact – plan to review activities and determine where your people are having issues.  These need to be addressed on an ongoing basis for several months after the system is starting to be used.
  8. Publicize the areas where the system helped you get more business or better business.  The sales team needs to understand that the system works and how it has helped make their job go more smoothly.
  9. Plan to scrub the data.  Nearly every CRM system has data that must be purged or changed.  Customers change names, contacts shift in their responsibility at the customer.
  10. Consider assigning a person as the CRM system expert.  This person is the go to for mining data, determining if new fields should be added and other tasks.
  11. Have your people sign a non-disclosure agreement that prohibits the use of the data if they should leave the company.  CRM systems offer an easy method for someone to steal a number of your company’s trade secrets.

What sorts of processes should be put in place, along with the CRM?

A CRM system has the power to become the center of a sales process.  If you do not have a sales process laid out, you should do so.  Define responsibilities, time frames, and other customer interactions as part of laying out the system.

How do you measure the success of a CRM implementation, and how do you identify opportunities for refinement?

Measures of success:

  1. Marketing is easily able to engage in activities such as email blasts which are sent to specific narrowly targeted groups.
  2. Outbound customer activity is easily measured and compared against sales results.
  3. Success within specifically designed customer segments is easy to identify and replicate in other territories.
  4. Information can be easily shared with selling partners – vendor partners, systems integration firms and others.

Image Source: http://www.flickr.com/photos/haiflymachine

Future CIOs and CTOs: The Secret To Creating And Executing A Winning Information Technology Career Plan

Mark Herschberg is a CTO who has hired over 100 people, interviewed over 1000, and taught career management to engineering students at MIT and mid-career people at SUNY.  He also ran the Job Discussion section of www.JavaRanch.com (a 200,000-person website for software engineers).

Mark knows what it takes to reach the highest levels within an IT career. And today, I’ve been lucky enough to have him share some of his insightful career wisdom, and to share it with my readers.

First, a bit more background on Mark.

Mark Herschberg is a smart guy who was educated at MIT (with degrees in physics and EE/CS, and a master’s in cryptography).

Mark has spent his career launching and fixing new ventures at startups, Fortune 100s, and academia.  Mark has worked at and consulted to a number of startups, typically taking on roles in general management, operations, and technology.

He has been involved from inception and fundraising through growth and sale of the company. Mark was instrumental in launching ServiceLive.com, Sears’ online home services labor market; he also helped fix NBC’s online video marketplace (now Hulu.com).

In academia Mark spent a year at HBS working with two finance professors to create the upTick system now used to teach finance at many of the top business schools.

I’ve heard you use a “Ship in the Ocean” metaphor when it comes to career planning. Can you elaborate on this?

Imagine a ship in the middle of the ocean.  Left to itself, the ship will drift with the currents.  You may wind up in Boston or you may wind up in Rio.  If you leave yourself to the current, you don’t control it.  Most people will choose to steer their ship.  Sometimes they’ll sail with the currents and sometimes against them.  A storm may ultimately blow you off course.  But if you don’t steer your ship, the odds of having the currents take you where you want to go are pretty slim.  Your career is at the whim of many currents; you had best learn to steer your ship if you want to wind up somewhere in particular.

Most career planners suggest thinking about the next 3 to 5 years. But I’ve noticed that you actually suggest planning your entire 50 year career in advance.  Why such an extreme position?

This goes back to the ship analogy.  When sailing you may turn the wheel based on the conditions of the moment, but you also think miles ahead and ultimately plan hundreds of miles ahead.  Whether steering a ship or planning your career, you have more clarity in the near term than the long term, but you still need to think ahead.

For the past 10 years I’ve been telling software developers, “Watch out.  Writing good code will get you a job today and it will get you a job tomorrow.  But someday–maybe 5 years from now, maybe 20 years from now–when communication tools shrink distances even further, and when students in developing nations have access to the same tools you do, they’ll write the same good code for less. If you want to have a successful software career 20 years from now you need to offer things someone 5,000 miles away can’t.  Learn the business and understand it in a way remote contractors can’t; that’s your competitive advantage.”

I’ve noticed that many C-level executives come from a Finance or Marketing background. But technical fields seem to be a dead-end for many people. Why do you think this is? What are some of the career challenges that are unique to IT?

This is what we focus on in my MIT teaching at UPOP ( http://upop.mit.edu/about/ ).  In engineering there are well defined problems with right and wrong answers.  Being good at solving those problems makes you a great engineer.  Executives solve a different set of problems, usually ill defined and without clear right and wrong answers.  Engineers typically haven’t been taught or encouraged to think that way.  The path from developer or sys admin to the corner office begins by getting better at those engineering skills and then suddenly shifts to being better at fuzzy skills.  If you don’t realize that, your career runs smack into a brick wall.

What should go into a career plan? What sort of questions should be asked?

  • Professional & personal interests
  • Needs & desires (financial, familial, geographic, and other responsibilities and constraints)
  • Personality type
  • Cultural preferences
  • etc.

If one of my readers wanted to put together a career plan for their IT careers, who else should they seek input from?

Ask everyone for help–your manager, HR, friends, mentors, family, co-workers.  Everyone should create a personal “board of advisors” who can help guide them.  But remember: “No one is more committed to your career than you.”  Your manager/company has goals that are best for them; your significant other may want you to take more risk or less risk, or spend more at home, or not move, etc.  The recruiter wants to place you in that new job to get his commission.  They may mean well but many also stand to gain or lose from your choices.  That doesn’t mean they are insincere but recognize their bias.

How often should the career plan be revised?

It should be revised whenever new opportunities appear. This might mean revising every 12 months, at company reviews, or when changing jobs.

What are some key career skills that IT professionals generally need to work on?

They often don’t focus enough on the soft skills (or the term we use at MIT, “firm skills”).  This includes leadership, communication skills, networking, conflict resolution, negotiations, etc.

What are some special areas of concern that IT professionals should focus on when putting together their career plan?

Having one. :-)

Beyond that, recognizing what each rung of the ladder is and what skills are needed for each rung.  (This goes back to the earlier comments about needing different skills later in the career.)

What advice can you give when it comes to networking for an IT career?

Do it!

Network, network, network.

I have a talk on this too–but that’s a whole other topic.  Basically always be networking.

Remember that networking is about building relationships, not simply getting someone’s contact info or adding them on LinkedIn.

What are some key concepts to keep in mind when executing the career plan?

Be flexible.  It’s never going to work out exactly as planned, but odds are if you plan well you’ll wind up where you want to be.

Anything else you’d like to add?

Never stop learning.  The world is constantly changing; if you’re not, you’re going to get left behind.

Attract PC Repair Clients And Promote Your IT Consulting Business Using Game Shows

Ted Jordan has a Master’s in Engineering from UC Berkeley, so he’s very strong in technology.

Ted used to run a computer repair company called JordanTeam Computing LLC and would do a lot of presentations to promote his company. A mom was impressed, and asked if he knew how to teach kids how to make computer games.

That started it all.

Today, I’ll be interviewing Ted Jordan – of Funutation Tekademy – about a killer marketing tactic that he used to use for attracting clients to his old IT consulting business.

Can you tell me more about your “Family Feud” promotional stunt? How did it work? Where did you get the idea? Where would you do it?

I belong to a group called BNI (www.bni.com).  It’s  a non-competitive referral-based marketing group.  We have to give a 10 minute presentation once or twice a year and I wanted to make an impact.

I used flip chart paper to prepare for the “Family Feud” game, with only 5 answers, and these were covered by paper so that the participants would have to guess what was under the “hidden answers”.

There were several variations, but one of the best was to have the group guess 5 of the 10 most popular websites.  I would split the room into two halves and ask for a volunteer from each half.  I had a clacker on a table that one of the two people would grab if they thought they had the answer.

If they had one right, I would uncover the answer and that side of the room would get points.  I then would vary from the real game: each side had a chance to choose an answer.

That part would go fast and so we would play the game again but this time they would guess our 5 most popular classes.  As the answers were uncovered I would tell a 30-second story of what the kids learned, and how they enjoyed the class.

If one of our readers wanted to promote their own IT consulting services using a similar theatrical tactic, how would they get access to an audience?

Chambers of Commerce in their area.  They are always looking for speakers.  I also did this for an association of accountants and lawyers.  I just searched for organizations on the web and contacted them one by one.

I prepared a short paragraph to email to each group after a phone introduction with speaking topics.

Do you have any interesting stories that took place during one of these events?

Attendees had a great time, and there were a lot of laughs.  The quietest people in the room would get up & grab the clacker sometimes wanting to take over the show.

What were some of the biggest lessons that you learned from this stunt?

Until I got to our services section, I hadn’t realized that people didn’t know what I did, especially at BNI, where we do a 60-second promo every week.  It really opened my eyes to what can be done if you build more awareness.

How successful was this tactic for you?

Ultimately it led to more business referrals.

Do you want to attract more local customers to your IT consulting business? We can help.

Image Source: http://www.flickr.com/photos/pushbuttonart/2959716138

Optimizing And Personalising Online Customer Experiences Through Predictive Analytics

24/7 Customer is the pioneer in Predictive Customer Experience solutions. They help companies with several thousand agents move their phone contacts online through a unique integration of predictive SaaS technology and contact center operations.

Consumers are constantly looking for smarter, better solutions, and often get frustrated when they cannot solve their problem in a better way. 24/7 Customer started with the question “with so much interaction data available, why can’t we predict and solve a customer’s problem even before they ask, be it in sales or service?”

Their strong operational background, combined with their analytics and software background (these were the same people who founded and ran BEI, an eCRM/chat software company, in the early 90s), helped 24/7 Customer create a number of patented and patent-pending systems that power their predictive customer experience solutions.

Today, I’ll be interviewing PV Kannan, the CEO and cofounder of 24/7 Customer.

Can you please explain what Predictive Experience means? How does this technology “change the game”?

Predictive experiences are all around us. Google’s predictive search and Amazon, Netflix and Facebook all provide predictive recommendations on what a consumer may be interested in based on his/her behavior and profile. However, the harsh reality is that the same has not been true in customer service. The service experience on websites fails to meet consumer expectations. Predictive Customer Experience addresses that growing need in customer service and sales interactions.

By continuously analyzing, identifying and predicting consumer behavior on the website and call center, we help companies understand which customers are unable to resolve which specific problems online that result in calls to the call center.

Then for those specific problems, we provide a personalized, predictive service interaction that resolves it step by step. However, that is not enough.

After we predict and start guiding the consumer through their journey, it is important to provide a helping hand should they get stuck. We must also learn from the resulting interaction so we can understand where self service failed and fix it – all in an automated fashion. The result is that consumers do not need to call the 1-800 number.

Online chat has been around for a long time. How have you improved on it?

Online chat has been around for more than a decade.

However, its role in customer service is very low, channeling less than 10% of customer interactions at best, even in large implementations. This is due to the perception that chat is simply a “good to have” channel on websites, not a critical proactive or reactive conduit for customer service.

In the reactive online service world, the customer has to reach out and pull, and in the proactive online service world, the technology reaches out, but does not specifically predict the issue and take it to resolution.

In the predictive customer experience world, companies can figure out which customer has which issue, what form of assistance they will look for and when, and then resolve it on the device. In addition, we can mine interactions constantly to make them smarter for both the consumer and operations.

The typical difference between traditional proactive chat/self service and predictive customer experiences is anywhere from 15% to 30% better performance. Since it is a very sticky experience, there is higher consumer adoption.

For example, one of our clients had less than 3% of their service online, in chat, email and web self service, and the rest were in phones. By applying our technology and operations, we moved it to 40% online in just 15 months.

How is it possible to predict the who, what, where, why and how of online customer needs? How is this accomplished without violating customer privacy?

We analyze customer behavior and customer journeys tied to specific issues across the customer lifecycle and across channels, instead of storing or accessing specific customer data, which is where customer privacy concerns arise.

Many companies argue that having limited information on a web site is good because it gets customers to call in. But I see that you promote a lower call rate as a benefit. Can you elaborate on this?

Research shows that 6 out of 10 customers go to a website first to resolve a problem, and only 1 of those 6 is able to do so. Think of it from a consumer’s perspective: it is very frustrating that, in the day and age of the second internet, consumers are not able to solve their issues where they want and when they want.

Even for companies, more calls are not good, especially when the issues could have been solved online; it only increases costs and also impacts customer experience. Predictive customer experiences solve a dual problem, both for the consumer and for the company.

How do you see the future of online chat, as a means of providing customer service and support?

Chat has been around for a decade and will continue to be around, but chat, if not done right, will not deliver the expected benefit. Chat will undergo a lot of significant changes in the coming years as a channel that can transform customer experience.

IBM Reveals Their Predictions About The Future Of Social Customer Relationship Management (CRM)

Sandesh Bhat is the VP of Web and Unified Collaboration Software for IBM Collaboration Solutions.

He has some unique insight into emerging trends in CRM. Particularly, how social media is changing the way businesses manage customer interactions.

For more information on social CRM – including a recent study on the topic – you can visit the IBM Institute for Business Value.

Sandesh has been nice enough to sit down with me and answer a few questions. I think you’ll find his input very insightful.

CRM has been around since the 1990s, and IBM was there from the very start. What are some of the biggest changes that you have seen in the CRM space over the past 20 years?

Maintaining relationship information has moved quickly from spreadsheets and rolodexes carried by sellers to sophisticated systems that connect sellers/users with customers and partners ubiquitously on smartphones and handheld devices. The CRM system is shifting away from being a rigid packaged application, heavy on process and forced execution, toward placing greater emphasis on value to the seller.

There is also less of a belief that everything has to be absorbed into one monolithic application with all the data in one place. Especially in enterprises with significant investments in existing applications, the notion of a single CRM system is not really possible – whether because of scale, geography or separate business units. Data is now shared across sales systems, marketing environments and even contact centers. There is also more emphasis on delivering analytics and dashboard capabilities to make sellers productive in the field and to support up-line managers operating remotely.

What are some of the biggest trends that you see in the next 10 years within the CRM space?

Socialization of business and analytics is a major trend, which will have a large-scale impact on evolving sales processes, marketing and customer support alike. It’s a promising new direction: the idea that a customer’s public network can give visibility into the account, and can be combined in real time with what a business already knows of that contact, to create a robust, collaborative conversation.

Today, when a seller/user retrieves a CRM record on a contact or a company, they would only know what their own systems have previously captured. Extending that to the social fabric of the company (and externally) brings more value and concurrency to relationship data and company information.

In traditional legacy/CRM environments, capturing status, giving updates and reporting to up-line management can be a time-consuming manual process. Use of predictive analytics and BI/dashboards to analyze and report on data, forecast sales, and manage pipelines efficiently takes this burden off of high-value sales resources. Another promising trend involves using advanced collaboration and Unified Communications technologies embedded in the CRM environment to create a culture of effective real-time teaming across sellers, their management and executives, individuals managing customer sales transactions, and partners.

What is “Social CRM”? And how will it force companies to change the way they think about their sales and marketing?

We believe Social CRM – the integration of social media and analytics with customer relationship management strategies – is the next frontier for organizations that want to exploit the power of social media to get closer to customers, old and new. Social networking sites (e.g. Facebook, LinkedIn), microblogging capabilities (Twitter, Jaiku), media sharing capabilities (YouTube, SlideShare), social bookmarking sites (Digg, Delicious), and review sites (Yelp, Trip Advisor) will play a crucial role in successfully transforming sales.

Traditional CRM strategy focuses on internal operations designed to manage customer segments based on value and profitability. But with social media, customers are in control of the relationship. Social CRM strategy is about meeting the needs of the customer within the context of a virtual social community, while also meeting the objectives of the business.

With Social CRM, companies manage the customer relationship through the experience of the engagement itself. It will trigger pro-active suggestions to generate leads, connect individuals to experts and resources, create influential relationships and close opportunities quickly.

In what ways will companies need to integrate social tools within their business processes?

Social tools become the business process. For example, the running view of a deal can be captured in a social dialog, rather than a static update. Analytics can then use this data to give indications on where the deal stands.

This is already happening. Take, for example, the case of an airline that’s started to pro-actively monitor Twitter feeds to identify problems with travelers in real-time – and address them immediately. Any business could monitor and analyze social media feeds for indicators and predictions on market shifts, customer satisfaction issues, customer/brand loyalty – or to identify other upcoming challenges for pro-active response. Besides business process transformation, companies will also need to focus on the cultural and behavioral aspects of employees’ social media use to ensure they are ‘socially responsible’.

What are some of the main reasons that a company might want to communicate with customers using social media vs. phone or email?

Being “social” is really about empowering the customer – and not as much what a company wants to gain from it. According to a study by the IBM Institute for Business Value, customers engage with companies via social media primarily to get discounts, purchase products, and for product reviews/ratings.

From a company perspective, social media is useful in:

  1. Driving brand awareness
  2. Testing new messages/early market research
  3. Getting information out about new services/products
  4. Driving down cost through cost deflection (think about a community member who is not on your company’s payroll helping someone solve a technical issue)
  5. Collaboration around new products and services

In addition, “Crowdsourcing” seems to have more influence nowadays. The value of communicating via social media is that it persists. It can be commented on, appended to, and it evolves in full view. Instead of a series of 1-1 discussions, we get group collaboration.

How will the integration of social tools help with building customer loyalty and customer support?

Social tools will help build loyalty by having conversations and making the customer part of the solution – whether as a leading indicator of product issues in the marketplace (monitoring Twitter feeds, for example), through support via call deflection, or by providing better behavioral data on the customer (to help answer the question: should I be trying to sell to them or save them?). Social tools will help reinforce customer expectations of “know my business.”

What effect will this new “social business” trend have on privacy and confidentiality?

Social business implies being more open inside and outside the company. For many large enterprises this will be a challenge, as they will need to reexamine policies without compromising privacy and confidentiality. Companies need to educate their associates/employees and enhance ‘business conduct guidelines’ for employees. They must emphasize some of the unique risks associated with social media and ensure social responsibility when sharing statements, data, content, etc., externally – as well as how they represent their company, business, customer information etc.

In what ways will CRM evolve if it’s to become more “social”?

Today, CRM is built around a point in time. It is very static. Social tools change the conversation from being transaction focused to more relationship focused. CRM will rely deeply on social media, but it will use analytics capabilities to filter ‘noise’ and bring transformation and value to sellers, marketers and customer support communities alike. Traditional Sales, Service and Marketing roles aren’t going anywhere – but how we work and collaborate, both internally and externally, is going to change.

What can a customer’s preference of communication channel tell you about their needs and buying motives? And how can companies capitalize on this?

It is the entire collection of a customer’s interactions that gives information about their needs and motives. For example, consider someone who just bought a new iPhone. If they are researching how to synch the iPhone with a headset online, and then they call in, I should be able to 1) solve their problem faster, because I already know what their intent was online and 2) think about selling them protection on their recently purchased phone. So ultimately, it’s more about using all the data from all the channels to deliver what the customer wants when we connect.
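To make the cross-channel idea concrete, here is a minimal sketch of how recent online events for a caller might be surfaced to an agent. The event log, customer ID and the cross-sell rule are all hypothetical illustrations of the scenario described above, not any vendor's actual system.

```python
# Minimal sketch: pull a caller's recent online activity together so the agent
# starts with the customer's likely intent. All data and rules here are
# hypothetical placeholders.

from datetime import datetime, timedelta

events = [
    {"customer": "c-1842", "channel": "web", "topic": "sync iphone with headset",
     "at": datetime(2011, 3, 1, 9, 15)},
    {"customer": "c-1842", "channel": "purchase", "topic": "iphone",
     "at": datetime(2011, 2, 27, 14, 2)},
]

def context_for_call(customer_id: str, now: datetime) -> dict:
    """Return the caller's likely intent and a suggested offer, if any."""
    recent = [e for e in events
              if e["customer"] == customer_id and now - e["at"] < timedelta(days=7)]
    recent.sort(key=lambda e: e["at"], reverse=True)
    likely_intent = next((e["topic"] for e in recent if e["channel"] == "web"), None)
    offer = "device protection plan" if any(e["channel"] == "purchase" for e in recent) else None
    return {"likely_intent": likely_intent, "suggested_offer": offer}

print(context_for_call("c-1842", datetime(2011, 3, 1, 10, 0)))
```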

How To Optimize Your Datacenter To Reduce Energy Consumption By 60% Or More

Dimension Data is a specialist IT services and solutions provider that helps clients plan, build, support and manage their IT infrastructures. The company was founded in 1983 and is headquartered in Johannesburg, South Africa. Currently, Dimension Data operates in 49 countries across six continents.

Today, I’ll be interviewing Kris Domich, who is the principal data center consultant for Dimension Data Americas.

What sorts of problems are companies facing when it comes to the power requirements of their data centers?

Most data centers have or will experience constraints on how much conditioned power they can provide to some portion or all of the racks. This is commonly due to the increasing power densities of late model equipment and the adoption of such technologies without properly planning and anticipating the need for an adequate power distribution system.

What are some of the biggest mistakes that companies make which lead to inefficient data center power usage?

The most visible mistakes tend to be overcooling, or overuse of the wrong type of cooling. An inefficient cooling strategy will lead to the deployment of more cooling components than may actually be required — this will drive use of unnecessary power. Other examples include poorly planned space configurations or ones that were initially planned well but degraded over time. Lastly, a lack of a capacity planning and management regimen will also lead to energy and space inefficiencies. Much of this can be remedied by gaining alignment between IT and facility departments, and ensuring that capacity planning and management is a joint effort between both groups.

What are some tips that you can give when it comes to designing an efficient data center?

Understanding what critical loads must be supported over the planning horizon of the data center is paramount when designing power distribution and cooling subsystems. It is also critical to understand the availability requirements of your organization. This will allow you to choose the right amount of redundancy and reduce the risk of over-provisioning and driving inefficiency. Organize the physical data center based on load profiles, and if possible, create an area that is designated for higher density equipment and other areas that are not. This will allow you to prescribe the proper cooling regimen for the various equipment types and lessen the chance that the entire data center is designed to support a single power density – which can result in over-cooling some areas while providing less cooling than some of your equipment may require.

What are some good best practices that companies can adhere to when it comes to implementing heating and cooling systems within their data centers?

Baseline the critical load today and work with IT to develop a growth plan over at least a five-year horizon. Develop notional power and cooling designs that meet the requirements over the entire planning horizon. Translate those designs into specific technologies that are modular enough to be expanded beyond the planning horizon.
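As a rough illustration of that planning exercise, here is a minimal sketch that projects critical load over a planning horizon under an assumed compound growth rate. The 250 kW baseline and 12% annual growth are hypothetical placeholders, not Dimension Data figures.

```python
# Minimal sketch: project data center critical load over a planning horizon so
# power and cooling designs can be sized up front. Baseline load and growth
# rate are assumptions for illustration only.

baseline_kw = 250.0        # today's measured critical IT load (assumed)
annual_growth = 0.12       # assumed compound growth in load per year
horizon_years = 5

for year in range(horizon_years + 1):
    projected = baseline_kw * (1 + annual_growth) ** year
    print(f"Year {year}: ~{projected:,.0f} kW of critical load")
```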

What are some of the most efficient data center heating and cooling systems? On what criteria do you base this decision?

Precision and variable output systems are the most efficient. These systems are designed to provide cold air where it is needed (i.e. at the equipment air inlets) and evacuate hot air from its origin (i.e. equipment exhaust points). This precision design is more efficient than the traditional means of “flooding” the room with cold air and having no deliberate means to evacuate heat. The variable aspect of these systems means that as loads increase and recede, so does the amount of cooling supplied, which results in a more efficient use of energy.
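To illustrate the variable-output idea, here is a minimal sketch of a cooling setpoint that tracks the measured IT load instead of running flat out. The load samples, the output range and the 1.25 cooling factor (to cover fan and distribution losses) are illustrative assumptions, not a vendor's control logic.

```python
# Minimal sketch: a variable-output cooling loop that scales with measured IT
# load rather than "flooding" the room at a fixed rate. Constants are assumed.

MIN_OUTPUT_KW, MAX_OUTPUT_KW = 50.0, 400.0
COOLING_FACTOR = 1.25   # assumed kW of cooling per kW of IT load

def cooling_setpoint(it_load_kw: float) -> float:
    """Scale cooling output with the load, clamped to the unit's range."""
    target = it_load_kw * COOLING_FACTOR
    return max(MIN_OUTPUT_KW, min(MAX_OUTPUT_KW, target))

for load in (80.0, 180.0, 320.0):   # hypothetical readings over a day
    print(f"IT load {load:.0f} kW -> cooling output {cooling_setpoint(load):.0f} kW")
```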

How can companies get a better idea of their data center resource utilization?

Implementing a meaningful set of instruments and paying attention to what they tell you will provide intelligence on a number of levels. Such instrumentation can initially provide visibility into what loads are being generated by specific equipment at a given time. This intelligence can aid in meeting the needs of nominal loads and not designing to support maximum loads 100% of the time, which is a rare occurrence.
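Here is a minimal sketch of what that instrumentation data enables: summarizing per-rack power readings so cooling is sized to nominal load rather than a worst-case peak. The readings and the 95th-percentile rule of thumb are made-up illustrations.

```python
# Minimal sketch: compare average, nominal (~95th percentile) and peak load
# from instrumentation samples. The readings are hypothetical.

readings_kw = [4.1, 4.3, 4.0, 4.4, 4.2, 6.8, 4.1, 4.3, 4.5, 4.2]

samples = sorted(readings_kw)
peak = samples[-1]
nominal = samples[int(0.95 * (len(samples) - 1))]   # ~95th percentile sample
average = sum(samples) / len(samples)

print(f"average {average:.1f} kW, 95th percentile {nominal:.1f} kW, peak {peak:.1f} kW")
# Sizing cooling to the 95th percentile instead of the peak avoids provisioning
# for a load level that occurs only rarely.
```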

What are some common power-hungry services which could easily be made more efficient?

On the average, the cooling subsystem — including everything from the chiller/condenser through to the computer room air conditioners (CRACs) — is among the most power hungry subsystems in the data center. They can be made more efficient through the use of precision and variable-output components. This will ensure that the cooling subsystem is not operating at levels which exceed the amount of critical load being generated by data center systems.

What kind of cost savings can a company expect from refining the efficiency of their data centers? Are there any other benefits besides cost savings?

Actual cost savings and avoidance will vary by organization and vary by the degree of inefficiency that is corrected. Historically, we have seen reductions of overall power consumption in excess of 30% and as much as 60% or more. Another way to look at savings is to examine the cost to run a given workload. When introducing efficient power and cooling strategies, organizations can more easily capitalize on more efficient computing platforms such as blade servers.

Adding virtualization to these platforms can dramatically reduce the cost to run a given workload and allow for the running of more simultaneous workloads. As a whole, the IT “machine” becomes more efficient and can respond to business requirements faster.
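To put those percentage reductions in perspective, here is a minimal sketch that converts a power reduction into annual cost savings. The 500 kW baseline and $0.10/kWh tariff are assumptions for illustration, not figures from the interview.

```python
# Minimal sketch: translate a percentage reduction in data center power draw
# into annual cost savings. Baseline draw and tariff are assumed values.

baseline_kw = 500.0
tariff_per_kwh = 0.10
hours_per_year = 8760

for reduction in (0.30, 0.60):
    saved_kwh = baseline_kw * reduction * hours_per_year
    print(f"{reduction:.0%} reduction -> ~${saved_kwh * tariff_per_kwh:,.0f} saved per year")
```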

Will Your Network Evolve To Meet Tomorrow’s Bandwidth Requirements?

Evolve IP was founded in 2006 to change the way that organizations buy, manage and secure their vital communications technologies. At that time, there were multiple point solutions providers, but few integrated providers for all unified communications, including Managed Telephony, Managed Networks, Security & Compliance and Hosted Business Applications (such as Microsoft Exchange, SharePoint and Hosted Data Backup and Recovery). Evolve IP was launched to address this gap in the marketplace.

Today, I’ll be interviewing Joseph Pedano, Vice President, Data Engineering at Evolve IP.

Can you please tell me about your Network Management System? How does it work, and how does it detect and repair potential problems that could lead to service degradation?

Most NMSs are built to be reactive instead of proactive. However, Evolve IP has developed a number of ways to ensure that we are responding to events before they become incidents. Technologies such as IP/SLA and voice health monitoring via Mean Opinion Scores (MOS) allow us to see performance issues before they turn into incidents that degrade network performance.
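To make the MOS idea concrete, here is a minimal sketch that estimates a MOS score from measured delay and packet loss, using the standard R-factor-to-MOS conversion associated with the ITU-T E-model. The simplified impairment terms, the probe values and the alert threshold are illustrative assumptions, not Evolve IP's actual monitoring logic.

```python
# Minimal sketch: estimate voice quality (MOS) from an IP/SLA-style probe and
# flag degradation before it becomes an incident. Impairment constants and the
# threshold are illustrative assumptions.

def r_factor(delay_ms: float, loss_pct: float) -> float:
    """Very simplified E-model: start from a default R of 93.2 and subtract
    delay and loss impairments (illustrative coefficients)."""
    delay_impairment = 0.024 * delay_ms + (
        0.11 * (delay_ms - 177.3) if delay_ms > 177.3 else 0.0
    )
    loss_impairment = 2.5 * loss_pct        # assumed per-percent-loss penalty
    return 93.2 - delay_impairment - loss_impairment

def mos(r: float) -> float:
    """Standard mapping from R-factor to Mean Opinion Score (1.0 - 4.5)."""
    if r < 0:
        return 1.0
    if r > 100:
        return 4.5
    return 1 + 0.035 * r + r * (r - 60) * (100 - r) * 7e-6

probe = {"delay_ms": 280.0, "loss_pct": 4.0}    # hypothetical probe result
score = mos(r_factor(**probe))
if score < 3.6:                                  # assumed "business quality" floor
    print(f"MOS {score:.2f} below threshold - investigate before users notice")
```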

Can you please explain Managed Networks (WAN and LAN) for my readers? What types of companies require managed networks, and what are some of the key business benefits?

Managed Networks are a higher-touch offering built on top of the standard telecommunications line or Ethernet cable (WAN and LAN). Routers and switches are the technologies providing the WAN and LAN services.

A Managed Network typically means configuration, monitoring and fault resolution on these technologies.

Any company can utilize Managed Networks, as this frees up internal resources to work on in-house projects instead of managing the network.

What are some best-practices that companies should take in order to ensure secure, optimal performance amongst the various elements of their networks?

With the advent of MPLS technology, most of the security and performance issues have receded into the background. Best practices include proper configuration of the LAN and WAN infrastructure and implementing services like QoS, SNMP, IP/SLA and monitoring.

What are some of the biggest advantages of using private bandwidth over the Public Internet?

Cost, first and foremost – the Public Internet is far cheaper. However, the disadvantages of using the Public Internet rather than private bandwidth include the added complexity of configuring VPNs and the limitation of upload speeds compared to download speeds.

What are some of the leading causes of network performance problems?

Improper configuration is by far the leading cause of performance issues. Improper use or lack of VLANs, lack of QoS (Quality of Service), or other physical faux pas (like daisy-chaining) are the most common causes of performance problems.

What are some of the biggest mistakes that companies make when managing their own network infrastructure?

Neglect. Most customers that manage their own network tend to install it and forget about it, because that is not their core competency. Configurations become outdated, the infrastructure is not properly monitored, and most businesses grow and continue to add to the existing infrastructure without planning for that growth initially.

What should business owners look for in a Managed Network provider?

The key factors to look for in a Managed Network provider are knowledgeable support personnel that can visually diagram and explain the current status of the WAN and LAN and lay out a solid plan to stabilize and manage the infrastructure moving forward.

Other considerations are the breadth and depth of their Network and Security Operations Center (NSOC), as they will become your lifeline for support moving forward. Can you call in and get to a technician without waiting? Are they helpful? Do they understand your network on the first call? These are key considerations to evaluate.

What are some of the greatest networking issues that companies will begin facing in the near future?

Bandwidth constraints.

Every application has moved to IP transport. For example, the bandwidth requirements for voice, video conferencing and file sharing across multiple locations have shot through the roof. While some LANs have kept pace, the WAN infrastructure has not necessarily caught up and is starting to fall behind.

Some customers look at Gigabit Ethernet on the LAN side, which is a one-time investment. On the WAN side, big monthly dollar spends are usually needed to bring in adequate bandwidth to support all of these bandwidth-hogging applications.

While technology such as EoC (Ethernet over Copper) and fiber-based networks (traditional or something like Verizon FiOS) is starting to become more mainstream, it is still hard to cost-effectively ensure high-speed WAN for your business.

Anything else you’d like to add?

As part of being a Managed Network provider, we see some customers and prospects that struggle with anticipating their network infrastructure needs.

In today’s economic environment, more and more customers/prospects fail to identify future opportunities for growing or fixing their existing infrastructure. I’ve seen many customers upgrade their LAN, and then a year later, rip out all the new switches because they didn’t support PoE (Power over Ethernet) for VoIP handsets. One thing to ask a potential partner/provider is what other applications they are working with and what other technologies need to be taken into consideration.

We understand that IT budgets are shrinking, but I would hate to see a company have to rip out gear that’s a year old because there wasn’t a detailed plan of action in place.

Optimizing Your WAN And Minimizing Distance Induced Latency

Certeon is the application performance company. aCelera, Certeon’s suite of software WAN optimization products, delivers automated secure and optimized access to centralized applications for any user, accessing any application, on any device, across any network.

With Certeon, enterprises and cloud providers successfully realize key initiatives, including consolidation, virtualization, replication and application SLAs. Certeon’s aCelera is the only software-based product that is hypervisor agnostic, hardware agnostic and cost effective. aCelera’s ability to provide automated, secure and optimized access to centralized applications at the lowest possible TCO makes it a cornerstone of enterprise and cloud-provider infrastructures.

In 2008, Certeon shifted from a hardware-based solution to a software-only approach. The company re-architected the aCelera product platform from scratch to focus on optimizing all applications running over the WAN. As a software-based solution, it now provides optimal performance flexibility and TCO reduction.

Today, I’ll be interviewing Donato Buccella, CTO and VP of engineering at Certeon.

Can you please explain what WAN optimization is? Can you provide some real-life examples?

WAN optimization is a phrase used to describe products that optimize access to applications via the WAN when users are remote and that application server has been centralized. Some of the specific techniques used in WAN optimization include data deduplication, compression and protocol optimization. WAN optimization can be used for many initiatives, including optimizing access to applications like SharePoint, reducing back up and replication times, and more.
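As an illustration of two of the techniques named above, here is a minimal sketch combining byte caching (a crude form of data deduplication) with compression of whatever is left before it crosses the WAN. It is a sketch of the general idea only, not Certeon's aCelera implementation; block size and test payload are assumptions.

```python
# Minimal sketch: count how many bytes would cross the WAN if previously-seen
# blocks are replaced with short references and new blocks are compressed.

import hashlib
import zlib

seen_blocks = set()      # digests of blocks the far-side appliance already holds
BLOCK_SIZE = 4096        # assumed fixed block size for simplicity

def bytes_to_send(payload: bytes) -> int:
    """Return how many bytes would actually cross the WAN for this payload."""
    wire_bytes = 0
    for i in range(0, len(payload), BLOCK_SIZE):
        block = payload[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).digest()
        if digest in seen_blocks:
            wire_bytes += len(digest)            # send a short reference only
        else:
            seen_blocks.add(digest)
            wire_bytes += len(zlib.compress(block))
    return wire_bytes

doc = b"quarterly report boilerplate " * 2000    # hypothetical file
first = bytes_to_send(doc)
second = bytes_to_send(doc)                      # same file sent again later
print(f"first transfer ~{first} bytes, repeat transfer ~{second} bytes")
```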

A specific example of WAN optimization is Certeon’s customer Pathfinder, who uses the aCelera software-based WAN optimization solution to enhance global communications. Pathfinder has close to 900 employees globally, across more than 40 offices in 24 countries – all of whom need access to optimized application performance. aCelera has increased Pathfinder’s application performance dramatically across all its applications, from finance and data to communications and file sharing.

An example of reducing replication and backup is our customer, HellermannTyton, which is using aCelera to optimize its global manufacturing business. Business continuity processes such as backup and replication are often crippled over the Wide Area Network (WAN) leaving most organizations to throw bandwidth at the problem. HellermannTyton instead chose to deploy Certeon’s software solution to cut its replication window times from 13 hours to less than one hour.

What should companies look for when evaluating WAN optimization solutions?

WAN optimization should not be considered a point solution to solve pain at a couple of remote sites; instead, it should be thought of as a strategic initiative for the entire infrastructure.

When deployed across the organization, CIOs and IT administrators no longer have to worry about where the data is and where the users are. Data will be available instantly, at LAN-like speeds, from anywhere in the world.

Given that, the right WAN solution should:

  • Be a best of breed WAN optimization and application acceleration technology that delivers the best application acceleration performance and data reduction
  • Have a cost that is value based such that enterprises and cloud providers can afford to deploy it at every point of access in the network.
  • Be scalable to be installed in the largest enterprise or a cloud provider’s infrastructure.
  • Support all use cases:
    • Branch-to-datacenter
    • Datacenter-to-datacenter
    • Mobile User
  • Be future proof to grow in capacity as an organization’s needs increase
  • Be extendable to cloud provider networks as you leverage those

How is the new trend towards global business – including the trend towards distributed teams – changing the way companies work with their WAN optimization solutions?

Cloud services look like a $100 billion-plus opportunity by mid-decade, but is cloud computing worth this level of excitement? Think Internet, 1997. Companies were excited about the technology’s potential and worried about security, privacy, bandwidth, standards and more. In spite of those questions, what transformed communication and commerce? The ability to deliver business value.

In 2010 and beyond, cloud successes will be measured in business value. The units of measure will be the ability to increase business agility, decrease cost through on-demand provisioning and teardown of infrastructure and services, speed development, and improve reliability. It must be utility-based, self-service, secure and, most importantly, deliver levels of application performance that improve productivity. With as many as 90 percent of workers scattered across the globe, away from data center sources of information, their teammates and management, user adoption of collaboration applications and their centralized data is the linchpin of any business value equation.

Cloud success requires integrating network services that are very far away and often owned by strangers.

Leveraging cloud computing and maximizing its business value requires full-featured, scalable, high-performance WAN optimization software that allows applications to perform as expected and can be part of any organization’s on-demand architecture, rather than part of a farm of tactical hardware or limited virtual appliance solutions.

Business information and resources are increasingly being accessed at global scale distances, from enterprise and cloud sources using Internet or VPN or MPLS connections. At the same time, expectations for application performance remain the same and are even rising.

Enterprises embracing the cost and scalability benefits of cloud computing, and service providers delivering consumption- and utility-based models, must balance the need for security with user expectations for access and application performance. Users don’t care if the resource is in a cloud or on the moon; they expect their applications to work quickly and flawlessly.

Bottom line: the success of cloud computing is irreversibly linked to software-based WAN optimization and application acceleration technologies as the result of distance induced latency and the need to provide ad-hoc secure and multi-tenant access. aCelera software WAN optimization’s ability to provide secure access, application performance and global scale make it the ideal cornerstone of cloud environments, from private to public to hybrid.

Why should companies implement application acceleration? Why not just continue working the same way as always, with the aid of a remote collaboration system such as SharePoint?

Enterprises must align their IT infrastructure with their business strategy. As such, the ability to provide agility, contain IT costs and deal with regulatory changes, means adopting a number of initiatives, including:

  • Consolidating hardware and centralizing data centers
  • Increasing globalization with more telecommuters, road warriors and other remote workers
  • Transitioning to network based backup and disaster recovery network replication
  • Leveraging public cloud services

All of these initiatives are ultimately moving end users further away from the applications they need to do their everyday job. While applications like SharePoint are certainly a way to aid in remote collaboration, they get so bogged down with data that communicating over the WAN and storing the information become an extremely slow process.

This in turn decreases employee productivity, ultimately affecting a company’s bottom line. Application acceleration helps companies to successfully take on those business agility initiatives.

Anything else you’d like to add?

WAN optimization is particularly important as more and more companies leverage the cloud. Resources will increasingly be accessed across the Internet or virtual private WAN clouds; and expectations for application performance will increase. Enterprises that embrace the cost and scalability benefits of cloud computing must simultaneously continue to meet standards for employee productivity and application performance. Users don’t care if the resource is in a cloud or on the moon; they expect their applications to work quickly and flawlessly.

The key to deriving value from WAN optimization in cloud environments is to integrate it with the underlying physical infrastructure and virtualization. It needs to be part of an enterprise’s virtualization stack in order to be cost effective and flexible enough to deliver real business value. In short, it must be software residing on a virtual machine.

Can Deduplication Really Reduce Data Storage By 95 Percent?

CA is a well-known world leader in the Enterprise IT Management Software space, and they have a lot of deep insight into the business issues surrounding deduplication.

Today, I’ll be interviewing Frank Jablonski, Senior Director of Product Marketing for the Data Management business at CA Technologies.

Backup deduplication is a hot topic in IT right now. What factors are causing businesses to embrace this technology?

It’s about having to do more with less. Companies need to reduce costs, and cutting backup-related storage is the low hanging fruit. It’s an easy place to start.

Deduplication helps companies defer CAPEX spending, as they postpone purchases and get more out of their existing infrastructure. What’s more, storage prices are continually on the decline, so the longer a purchase is deferred, the cheaper the storage becomes.

Deduplication also works well. Companies can trust deduplication to work as advertised, and data is recovered reliably and efficiently. Deduplication is in the general acceptance stage of the technology lifecycle.

Aside from storage costs, what are some other drawbacks of storing too much duplicate data?

With duplicate data, backup processing takes much longer, as all the data needs to be written to the disk for backup storage.

Reduced retention time periods are also an issue with duplicate data. With deduplication, storage space frees up which allows for more recovery points on-line for fast recovery. It’s also possible to retain a longer history of recovery points.

I’ve heard you mention that deduplication can reduce data storage by as much as 95%. How is this kind of compression achieved?

The deduplication method is key to maximizing data reduction. Some backup and recovery vendors deduplicate data on a per-backup job basis as this allows comparison of like data for greater reductions. For example, comparing Exchange email data to Exchange email data will typically identify more duplicate data than comparing Exchange email to a SQL database.

Vendors might also employ a block-level, variable length target-side data deduplication technique. This provides a very granular comparison of the data which results in identifying more duplicate data and hence greater reductions in the overall data set.
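For intuition, here is a minimal sketch of variable-length, content-defined chunking, the core idea behind block-level deduplication: chunk boundaries are chosen by the content itself, so a small edit near the start of a file changes only nearby chunks rather than shifting every fixed-size block after it. The rolling-hash rule, chunk sizes and test data are illustrative assumptions, not CA's actual algorithm. (For scale, a 95% reduction corresponds to roughly a 20:1 deduplication ratio.)

```python
# Minimal sketch: content-defined chunking plus a hash-indexed chunk store.
# A second backup with a small edit stores only the chunk(s) that changed.

import hashlib
import random

MIN_CHUNK = 1024              # assumed minimum chunk size
MASK = (1 << 12) - 1          # boundary when low 12 bits are zero (~4 KB average)

def chunks(data: bytes):
    """Yield variable-length chunks whose boundaries depend on the content."""
    start = 0
    rolling = 0
    for i, byte in enumerate(data):
        rolling = ((rolling << 1) + byte) & 0xFFFFFFFF
        if i - start >= MIN_CHUNK and (rolling & MASK) == 0:
            yield data[start:i + 1]
            start = i + 1
    if start < len(data):
        yield data[start:]

store = {}    # chunk hash -> chunk contents (the deduplication store)

def dedup_write(data: bytes) -> int:
    """Store only chunks not seen before; return bytes of genuinely new data."""
    new_bytes = 0
    for chunk in chunks(data):
        key = hashlib.sha256(chunk).digest()
        if key not in store:
            store[key] = chunk
            new_bytes += len(chunk)
    return new_bytes

random.seed(0)
monday = bytes(random.randrange(256) for _ in range(200_000))   # first full backup
tuesday = monday[:500] + b"a few changed bytes" + monday[500:]  # small edit next day

print("Monday backup, new bytes stored:", dedup_write(monday))
print("Tuesday backup, new bytes stored:", dedup_write(tuesday))
```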

What is target-sided deduplication, and how is it different from other methods?

Target-side deduplication is performed at or behind the backup server. It can be performed on the backup server, or on a hardware device recognized as a backup target by the backup server. The big benefit of target-side deduplication is that no performance degradation occurs on the production server, as all deduplication processing is done on the backup server or the backup target device.

Many companies go to great lengths to make sure that their users can work without interruption or degraded performance with their key business applications. An alternative to target-sided deduplication is source-side deduplication, whereby all deduplication processing takes place on the production server before the data is sent over the network to the backup server. While this results in reduced network traffic, many companies don’t view backup network traffic as much of a problem as compared to reduced application performance. Companies often use dedicated backup networks or a Storage Area Network (SAN) to reduce network traffic so there is no need to perform deduplication on a production application server.

What advice can you give when it comes to choosing and implementing a backup deduplication system?

Some companies sell deduplication separately from their base backup product. It may be possible to save money and benefit from tighter integration and ease-of-use by considering vendors who offer built-in deduplication functionality.

I’ve heard you mention that ease of deployment was the most important factor in selecting a deduplication solution. What are some key criteria that would indicate ease of implementation?

An ideal option would be built-in deduplication that does not require any extra software installation, configuration or licensing. Companies should also look for a simple, wizard-driven setup interface that takes no more than a couple of minutes. Having the flexibility to seamlessly phase deduplication into existing backup processes – without having to buy additional hardware – is also a plus.

Anything else you’d like to add?

Customers should also look for reporting features. The more advanced deduplication products will also offer reporting that graphically illustrates the volume of deduplicated data, which servers are running deduplicated backup jobs, and other information.

What Are The Business Benefits Of Mac Servers In The Datacenter?

Today, I’ll be interviewing Ben Greisler, owner and founder of Kadimac Corp.

Kadimac is Ben’s second technology company. His first was founded back in 1999, after he spent 8 years in the publishing field.

That first company was also Mac-oriented, but he left to join another firm as their CTO.

He decided to leave that company to found Kadimac, with the goal of providing enterprise-style solutions for integrating the Macintosh and other Apple technologies into other environments, be it Windows, Linux or other Unixes.

He saw the increase in OS X uptake by industry and recognized it was a great area to be in.

Here are some highlights from my interview with Ben Greisler:

Why are companies starting to rediscover Apple products? (Servers in particular)

A few reasons:

  • Less expensive than Windows Server, especially in the area of client licensing.
  • Easier to manage. You don’t need an MCSE to get an OS X Server running.
  • Features people wanted for their systems.
  • Easy integration with existing systems. It doesn’t have to be one or the other.

How can companies benefit from deploying Snow Leopard within their IT environments?

Since many of the office applications for Windows are also available on the OS X platform, workers can do their jobs in an environment they like. There is some resistance from IT staff about supporting Macs, but after a while they realize they need to provide less support for the Macs once they are deployed in a proper manner.

What are some key areas or scenarios where Mac servers are able to outperform Windows or Linux servers, or can serve as a better alternative? Can you give me some examples?

Lower cost is a big item. With no CAL costs, you can add services for very little money. The trick is to match the server implementation with the OS X Server (OSXS) solution. There is a sweet spot, and it isn’t appropriate for every situation. OSXS is easier to manage than Linux, and that can translate into lower costs too. Don’t forget that there are still zero viruses for OS X after 10 years of being around.

Mac hardware seems to be quite a bit more expensive than other systems. Why is that, and how are these extra costs justified?

Apple has chosen to play the game at a certain level. The hardware is quite cost competitive with other systems if you do the proper comparison.

The standard comparison with cars still stands:

You can say BMW is more expensive than Hyundai, which is true, but it is hard to compare a Sonata to a 5 Series. A fun comparison is to build a Mac Pro on Apple’s site, then build a truly comparable machine on Dell’s or HP’s site.

More often than not, the Apple hardware comes out cheaper. Apple just chooses to play at a different level than the other guys. At that level, they are very much a value.

Go price the 12 core machines and see.

What are some of the ROI advantages that a Mac server would have over a typical server? Any examples you can share?

Ease of setup and lower cost. Example: I set up a Mac Mini Server ($999 and comes with a full version of OS X Server) and connected it to a small RAID unit also sold by Apple (4TB for $799) for a marketing firm. This unit was replacing another Mac server that had been running almost 7 years (good ROI there!).

The original purpose was simple file sharing, but I demo’ed the built-in wiki/blog software and the calendaring solution. They loved it and 30 minutes later they were populating those services with data. Had that been Windows, the cost to configure it would have been crazy.

I’ve noticed that Apple will be discontinuing Xserve, and is asking their users to transition to Mac OS X Server Snow Leopard. Can you comment on this? What are the advantages of Snow Leopard over Xserve?

Xserve is hardware and Snow Leopard Server (OSXS) is the operating system.

OSXS can run on the Xserve, Mac Pro or Mac Mini. Apple is discontinuing the Xserve, true, but you can purchase OSXS preinstalled on a Mac Pro or Mac Mini. In fact, the Mac Mini server is one of the best-selling servers in Apple’s lineup.

What isn’t there to like: $999 for the hardware and server license. The server license is worth $499 by itself! Prior versions of OSXS cost $999, so it is like buying the server license and getting the hardware for free.

I recently did a job in North Carolina for a school district replacing all 34 DNS servers with Mac Mini Servers.

They get an EDU discount, so the hardware is even less than $999, and they get a fully supported server that is about an inch and a half high and 8 inches square. The units take up very little room and use very little power.

A perfect solution to replace all the aging gear they are currently using. They will probably save enough in power and cooling costs to pay for much of the hardware.

What do you see in the future for Apple servers?

I see a significant change coming down the pike, but it is too early to comment. What I will say is that it will probably be a disruptive change and change the way we look at the product.

Anything else you’d like to add?

Security: There has been much said about security in OS X and the claim that it has more vulnerabilities than other OSes, but if you look at actual exploits, OS X stands head and shoulders above Windows.

Ten years later and still no viruses. This is not because Apple has a lower market share, but because it has a more secure OS. I can go on and on about this and the craziness you read about the topic.

So there you have it. Mac servers aren’t just for students. They also provide substantial business value in the server room. For more information about Ben Greisler and Kadimac, check out their web site.

(Sorry, I couldn’t resist)

Virtual Servers Are On The Verge of Outnumbering Physical Servers

Enterasys Networks, a Siemens Enterprise Communications Company, is a premier global provider of wired and wireless network infrastructure and security solutions. With roots going back to its founding as Cabletron in 1983, the company delivers solutions that enable organizations to drive down IT costs while improving business productivity and efficiency through a unique combination of automation, visibility and control capabilities.

Today, I’ll be interviewing Mark Townsend, who is the Director of Solutions Management for Enterasys Networks.

What kinds of changes do you see in the virtualization space for 2011?

Server virtualization is going to increase in adoption and virtual servers will outnumber physical servers. This shift will stress organizations that have not adopted processes and tools to manage the lifecycle of virtual servers.

Companies should be concerned about server sprawl and the effect it has on the performance of the virtualized environment, as well as the potential for compliance issues caused by unintentionally hosting virtual machines with different security postures/profiles on the same host.

There are solutions available today that connect the virtualization platform with existing lifecycle workflows and compliance tools. Enterasys Data Center Manager is a great example of such a solution.

How can companies use virtualization to help contain operational costs?

There is an immediate benefit from the reduction in the number of systems that previously served the organization. Consolidation of virtual machines provides consolidation of hardware, not only the physical servers but also adjacent systems such as the network.

This consolidation provides annual OPEX benefits in lowered recurring maintenance and operational (power, cooling) costs. There are also benefits in staffing costs with the reduction in managed devices.

How does virtualization improve business agility?

The elasticity virtualization provides offers companies the ability to expand and contract services in the data center based on demand cycles. Virtualization also accelerates the ability of an organization to add new services in near real-time.

For example, in the past, time would be spent specifying, ordering and implementing the hardware to run a particular service. Today, it is often as easy as cloning an existing system and deploying the new service within minutes or hours versus what was previously days or weeks.

It is important, however, to not lose sight of good process. The ease with which new services are added can degrade virtualization’s performance. Good lifecycle processes should be brought forward from the physical to the virtual environment.

How does virtualization help with business continuity?

The ability to move virtual machines from one host to another, often without powering down the VM, removes the fallibility of hardware as a concern. Larger data centers benefit from orchestrating the virtualization platform with the network, ensuring services are available to end-users.

How can companies maximize the ROI on their virtualization investment?

Companies can maximize the ROI on their virtualization investment by having server and network teams improve their collaboration in order to reduce problems associated with the new systems, utilize existing infrastructure and stay on top of compliance standards.

How can companies ensure that their virtualized environment works in harmony with other external systems? Can you give some examples?

The larger the environment, the larger the benefit of integrating the virtualization environment with adjacent systems such as network and storage. The dynamic nature of the virtualization environment challenges these systems’ ability to “keep up” with the mobility of VMs.

Enterprises need the assurance the systems model of the virtual environment is consistent with the physical network it replaced. Compliance and security models must be replicated. This can be difficult as these controls have traditionally been static and lack the malleability needed to keep pace with the dynamic nature of virtualized environments. The traditional physical controls need to be able to bend with the virtual environment but not break.

Anything else you’d like to add?

While the focus has been principally on server virtualization, desktop virtualization initiatives will strain data centers that fail to integrate the virtualization environment and external systems such as network infrastructures. We’ve seen desktop to server ratios for large businesses at 25:1.

Companies that fail at implementing good processes for server virtualization are not setting a good foundation for future desktop virtualization initiatives.

More About Enterasys:

Enterasys Data Center Manager integrates with leading virtualization platforms today from VMware, Citrix and Microsoft. The product provides the utility needed to unify the virtual and physical environments to ensure a harmonious experience for both. As virtual machines are added, retired or moved within the data center, Enterasys Data Center Manager orchestrates the configuration of both the virtual and physical networks. This orchestration ensures that proper lifecycle controls are followed (adding new machines), that postures/profiles are kept in sync with the original design (not mixing VMs from different domains on a single host) and that service level agreements can be met.

The Healthcare Industry Is Going Paperless And Mobile To Serve Patients Better – What Does This Mean For Your Privacy?

SAFE-BioPharma is the industry IT standard developed to transition the biopharmaceutical and healthcare communities to paperless environments.

The SAFE-BioPharma standard is used to verify and manage digital identities involved in electronic transactions and to apply digital signatures to electronic documents. SAFE-BioPharma was developed by a consortium of biopharmaceutical and related companies with participation from the US Food and Drug Administration and the European Medicines Agency. www.safe-biopharma.org

Today, I’ll be interviewing Safe-BioPharma CEO Mollie Shields-Uehling to get more insight on this issue.

What are some of the biggest changes that will affect the healthcare industry in 2011?

From an IT perspective, we expect that 2011 will be a watershed year in migrating paper away from medicine and healthcare.

For the first time, millions of licensed medical personnel in the US will be able to download to their cell phones and other electronic devices digital credentials that are uniquely linked to their proven identities. They will give physicians and others the ability to participate in systems that allow controlled access to electronic medical records.

These credentials also will facilitate application of legally-binding digital signatures to electronic forms, prescriptions and other documents.

What are some of the biggest opportunities and pitfalls associated with mobile computing in the healthcare industry?

In order to comply with a host of patient privacy and other regulations, it is essential that a system can trust the cyber-identity of its participants.

Digital credentials that comply with the SAFE-BioPharma standard are uniquely linked to the individual’s proven identity. They are also interoperable with US Federal Government credentials and those of other cyber-communities. As such, they mitigate legal, regulatory and other business risk associated with electronic transactions.

When used to apply digital signatures to electronic documents, the signature is legally enforceable, non-repudiable, and instantly auditable.
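To show the mechanics behind such a signature, here is a minimal sketch in which a signer's private key signs a hash of a document and anyone with the matching public key can verify that the content has not changed since signing. It uses the open-source Python `cryptography` package for illustration; it is not the SAFE-BioPharma credentialing infrastructure, which additionally binds the key pair to a vetted individual identity, and the document content is hypothetical.

```python
# Minimal sketch of a digital signature: sign a document with a private key,
# verify it with the public key. Keys are generated on the fly here; a real
# deployment would use credentials issued against a proven identity.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

document = b"Prescription: drug X, 10 mg, once daily"   # hypothetical content

signature = private_key.sign(
    document,
    padding.PKCS1v15(),
    hashes.SHA256(),
)

# verify() raises InvalidSignature if the document or signature was altered,
# which is what makes tampering instantly detectable.
public_key.verify(signature, document, padding.PKCS1v15(), hashes.SHA256())
print("signature verified - document unchanged since signing")
```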

How will digital credentials help licensed medical professionals offer better services to patients?

Because they establish identity trust within a cyber-context, digital credentials eliminate paper-reliance (think of the forms, records, etc. when you see a doctor) and accelerate processes associated with patient care. Patient privacy will be better protected. Patient records will be available across different health systems. The prescription-issuance process will no longer be paper-based.

What is the current state of digital signatures and other forms of electronic credentials in the medical industry, and how do you see this changing in the next few years?

Use of digital credentials in health care is nascent. However, in a move designed to advance electronic health data sharing, in January Verizon Business will begin issuing medical identity credentials to 2.3 million U.S. physicians, physician assistants and nurse practitioners.

This first-of-its-kind step will enable U.S. health care professionals to meet federal requirements contained in the 2009 Health Information Technology for Economic and Clinical Health (HITECH) Act, which call for the use of strong identity credentials when accessing and sharing patient information electronically beginning in mid-2011.