Archives for: January 2011

7 Tips To Build Your Server Room With Future Growth In Mind

Mark Dryer has been the founding president of MDL Technology since 2003. Dryer has been an IT professional for 13 years; he holds a Bachelor of Science in Accounting along with a number of certifications, including MCSE, CNE, CNA, MCP, MCDBA, MS Dynamics Installation Specialist, VSP and VTSP, and he is currently working toward his VCP.

MDL Technology, LLC is a Kansas City IT company that specializes in worry-free computer support by providing solutions for around-the-clock network monitoring, hosting, data recovery, offsite backup, security and much more. MDL Technology, LLC is dedicated to helping businesses place time back on their side with quick and easy IT solutions.

Recently, I had a chance to interview Mark about his thoughts on planning a server room. Here’s what he had to say:

What are some of the most important things that a company should think about when designing and building a new server room?

One important factor to evaluate when designing and building a server room is the location.

If possible, build the datacenter in a location that is on two different power grids. This way, if one power grid goes out, the datacenter is not completely in jeopardy. This is a simple idea that can save a datacenter from many potential setbacks.

What should a company take into account when planning for future growth?

Some of the most important areas to consider when planning for future growth are space, power, cooling and bandwidth requirements.

How will they benefit from space, cooling, power and bandwidth requirements planning?

  1. Space – Plan for a space that can be easily secured and monitored. Make sure that the space will allow for future growth as a company. This should include room for cable management ladders, equipment and future rack space. Many people buy or build a new building, and the last thing on their mind is where the IT staff will put their equipment. Planning out space in a datacenter environment will save time and money in the long run: it will prevent the company from having to remodel the space later, and keep other departments from losing office space to the datacenter.
  2. Power – Make sure that the datacenter has enough power capacity for the load to double or triple over the next few years. When building a datacenter, try to install more outlets in the server room than you currently need. Take into consideration the capacity of the service to the building, backup generators and Uninterruptible Power Supplies (UPSs). A rough sizing sketch follows this list.
  3. Cooling – Plan for the future with a scalable cooling system. A cooling system should be easily accessible in order to add more capacity and redundancy if required. Plan for ways to make the cooling more efficient; this will save money.
  4. Bandwidth – Bandwidth can be expensive and a point of failure for a datacenter. Make sure that there are multiple Internet service provider (ISP) options available. Usually, the more providers available, the less the service will cost. A company can provide redundancy if there are multiple providers at a datacenter location.
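To make the “double or triple” power guideline concrete, here is a minimal back-of-the-envelope sketch; the per-rack wattages, cooling overhead and growth factor are hypothetical placeholders, not figures from the interview.

    # Rough power-headroom estimate for a server room build-out.
    # All figures below are hypothetical placeholders.

    racks = [4.2, 3.8, 5.0, 2.5]      # measured kW draw per rack today
    cooling_overhead = 0.5            # extra kW of cooling per kW of IT load
    growth_factor = 3                 # plan for load to triple, per the interview

    it_load_kw = sum(racks)
    total_today_kw = it_load_kw * (1 + cooling_overhead)
    planned_kw = total_today_kw * growth_factor

    print(f"IT load today: {it_load_kw:.1f} kW")
    print(f"With cooling overhead: {total_today_kw:.1f} kW")
    print(f"Size service/UPS/generator for roughly {planned_kw:.1f} kW")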

What should a company take into account when planning their future needs for:

  1. Space – Try to find an area that will be free of dust, debris and foot traffic.
  2. Power – Plan for building service capacity, UPS capacity and generator capacity.
  3. Cooling – Plan for the future with a scalable cooling system.
  4. Security – Make sure that the datacenter is in an area with low foot traffic. This will allow a company to control who accesses it and when. This becomes increasingly important as companies grow.
  5. Disaster Protection – Make sure to have good offsite backups and a disaster recovery site that can be moved if needed.
  6. Networking/Bandwidth – Make sure to have multiple ISP options available to you. This will save money and increase redundancy.
  7. Maintenance – Build more than what will simply get the company by; build the datacenter to take the company well into the future.

It’s hard to believe that just 5 or 6 years ago, nobody had even heard of YouTube or Facebook. Given how fast technology changes, how can companies prepare for unforeseen technology trends when designing their datacenters?

If technology is important to you and remains a priority, it is easier to plan for the possibilities.

I’ve heard you talk about the “importance of doing simple things right”. What exactly do you mean by this? Can you share some examples?

  • Cable management is a simple thing that, when done right, can make a big difference in a datacenter. If cables are well organized, then they will not have to be re-routed. This will save time when trying to grow a datacenter.
  • Label everything. This is a simple way to keep organized in a datacenter. This is especially handy in a disaster or during downtime. It allows anyone to go straight to the cable, or piece of equipment, even if they are not familiar with the datacenter.
  • Take the time to document the datacenter. Create a diagram showing each piece of equipment, where it is located, and where all of the cables connect. (A minimal machine-readable version of this kind of documentation is sketched after this list.)
  • Don’t unpack boxes in your datacenter. Unpacking elsewhere keeps cardboard dust out of the room and reduces fire risk.
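As a loose illustration of the labeling and documentation advice above, the snippet below keeps a machine-readable inventory that doubles as a label source; the field names and the room-rack-unit label scheme are assumptions for the example, not a standard.

    # Minimal machine-readable datacenter inventory.
    # Field names and the room-rack-unit label scheme are illustrative assumptions.
    import csv

    inventory = [
        {"label": "DC1-R01-U10", "device": "web-01",   "connects_to": "DC1-R01-U42 port 3"},
        {"label": "DC1-R01-U12", "device": "db-01",    "connects_to": "DC1-R01-U42 port 4"},
        {"label": "DC1-R01-U42", "device": "switch-1", "connects_to": "ISP handoff panel A"},
    ]

    # Dump to CSV so anyone can print labels or trace a cable during an outage,
    # even if they have never seen the datacenter before.
    with open("inventory.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["label", "device", "connects_to"])
        writer.writeheader()
        writer.writerows(inventory)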

What are some tips that you can give which will help organizations conserve space in their server room?

Virtualization of physical servers allows a company to consolidate many physical servers onto a single physical piece of hardware. This allows a company to maximize ROI on physical hardware while reducing space requirements, power consumption, cooling requirements and lowering future hardware costs.
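As a back-of-the-envelope illustration of those savings, the sketch below compares power costs before and after consolidation; the server counts, wattages and electricity price are hypothetical placeholders.

    # Hypothetical consolidation math: 20 lightly loaded physical servers
    # collapsed onto 3 virtualization hosts. All numbers are placeholders.

    physical_servers, watts_each = 20, 350
    hosts, host_watts = 3, 600
    hours_per_year, dollars_per_kwh = 8760, 0.10

    before_kwh = physical_servers * watts_each * hours_per_year / 1000
    after_kwh = hosts * host_watts * hours_per_year / 1000
    print(f"Power cost before: ${before_kwh * dollars_per_kwh:,.0f}/yr")  # ~$6,132
    print(f"Power cost after:  ${after_kwh * dollars_per_kwh:,.0f}/yr")   # ~$1,577

    # Cooling savings scale roughly with the power savings, and rack space
    # in this sketch drops from 20 servers' worth to 3.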

What Are The Business Benefits Of Mac Servers In The Datacenter?

Today, I’ll be interviewing Ben Greisler, owner and founder of Kadimac Corp.

Kadimac is Ben’s second technology company. He founded his first in 1999, after spending 8 years in the publishing field.

That first company was also Mac-oriented, but he eventually left to join another firm as their CTO.

He later left that company to found Kadimac, with the goal of providing enterprise-style solutions for integrating the Macintosh and other Apple technologies into Windows, Linux and other Unix environments.

He saw industry uptake of OS X increasing and recognized it as a great area to be in.

Here are some highlights from my interview with Ben Greisler:

Why are companies starting to rediscover Apple products? (Servers in particular)

A few reasons:

  • Less expensive than Windows Server, especially in the area of client licensing.
  • Easier to manage. You don’t need an MCSE to get an OS X Server running.
  • Features people wanted for their systems.
  • Easy integration with existing systems. It doesn’t have to be one or the other.

How can companies benefit from deploying Snow Leopard within their IT environments?

Since many of the office applications for Windows are also available on the OS X platform, workers can do their jobs in an environment they like. There is some resistance from IT staff about supporting Macs, but after a while they realize they need to provide less support for the Macs once they are deployed in a proper manner.

What are some key areas or scenarios where Mac servers are able to outperform Windows or Linux servers, or can serve as a better alternative? Can you give me some examples?

Lower cost is a big item. With no CAL costs you can add services for very little money. The trick is to match the server implementation with the OSXS solution. There is a sweet spot, and it isn’t appropriate for every situation. OSXS is easier to manage than Linux, and that can translate into lower costs too. Don’t forget that there are still zero viruses for OSX after 10 years of being around.

Mac hardware seems to be quite a bit more expensive than other systems. Why is that, and how are these extra costs justified?

Apple has chosen to play the game at a certain level. The hardware is quite cost competitive with other systems if you do the proper comparison.

The standard comparison with cars still stands:

You can say BMW is more expensive than Hyundai, which is true, but it is hard to compare a Sonata to a 5 Series. A fun comparison is to build a Mac Pro on Apple’s site, then build a truly comparable machine on Dell’s or HP’s site.

More often than not the Apple hardware comes out cheaper. Apple just chooses to play at a different level than the other guys. At that level, they are very much a value.

Go price the 12 core machines and see.

What are some of the ROI advantages that a Mac server would have over a typical server? Any examples you can share?

Ease of setup and lower cost. Example: I set up a Mac Mini Server ($999 and comes with a full version of OS X Server) and connected it to a small RAID unit also sold by Apple (4TB for $799) for a marketing firm. This unit was replacing another Mac server that had been running almost 7 years (good ROI there!).

The original purpose was simple file sharing, but I demo’ed the built-in wiki/blog software and the calendaring solution. They loved it and 30 minutes later they were populating those services with data. Had that been Windows, the cost to configure it would have been crazy.

I’ve noticed that Apple will be discontinuing Xserve, and is asking their users to transition to Mac OS X Server Snow Leopard. Can you comment on this? What are the advantages of Snow Leopard over Xserve?

Xserve is hardware and Snow Leopard Server (OSXS) is the operating system.

OSXS can run on the Xserve, Mac Pro or Mac Mini. Apple is discontinuing the Xserve, true, but you can purchase OSXS preinstalled on a Mac Pro or Mac Mini. In fact, the Mac Mini server is one of the best-selling servers in Apple’s lineup.

What isn’t to like about it? $999 for the hardware and the server license. The server license is worth $499 by itself! Prior versions of OSXS cost $999 for the license alone, so it is like buying the server license and getting the hardware for free.

I recently did a job in North Carolina for a school district replacing all 34 DNS servers with Mac Mini Servers.

They get an EDU discount so the hardware is even less than $999 and they get a fully supported server that is about an inch and a half high and 8 inches square. They take up very little room and little power.

A perfect solution to replace all the aging gear they are currently using. They will probably save enough in power and cooling costs to pay for much of the hardware.

What do you see in the future for Apple servers?

I see a significant change coming down the pike, but it is too early to comment. What I will say is that it will probably be a disruptive change and change the way we look at the product.

Anything else you’d like to add?

Security: There has been much said about security in OSX, and claims that it has more vulnerabilities than other OS’en, but if you look at actual exploits, OSX stands head and shoulders above Windows.

Ten years later and still no viruses. This is not because Apple has a lower market share, but because the OS is more secure. I can go on and on about this and the craziness you read about the topic.

So there you have it. Mac servers aren’t just for students. They also provide substantial business value in the server room. For more information about Ben Greisler and Kadimac, check out their web site.

(Sorry, I couldn’t resist)

The Difference Between Disaster Recovery and Business Continuity

These days, there’s a lot of talk about Disaster Recovery and Business continuity planning.

Although most of these discussions tend to group the 2 terms together as a single “BC/DR Planning” unit, it’s important to understand that these are – in fact – 2 very different and separate processes.

In order to illustrate this better, we’ll start with a discussion of Disaster Recovery.

Disaster Recovery (DR)

Disaster recovery is the older of the 2 functions. DR planning is an essential part of business planning that – too often – gets neglected.

Part of this has to do with the fact that making a Disaster Recovery plan requires a lot of time and attention from busy managers and executives from every functional department within the company.

But without a proper DR plan in place, your company has very little chance of surviving a major technological, human, natural, or political catastrophe.

Too many companies think that DR is simply backup. But it’s not.

Disaster recovery plans detail all of the precautions that a company must take before a disaster, during a disaster, and after a disaster. It’s especially important to outline these plans well in advance so that you can assign these roles to key people within the company and rehearse them.

When a disaster such as a fire, power outage, computer virus or civil unrest takes place, structure begins to break down and trying to organize a team effort can feel like herding cats. Also, this lack of authority structure can open your company up to fraud and exploitation. (As we saw with FEMA after Hurricane Katrina)

If a vendor sees that you’re desperate, they may take advantage of the situation to gouge you on price. (I’ve seen this in the manufacturing industry when it comes to rush replacement hardware for broken machinery) That’s why it helps to create vendor relationships and establish emergency purchasing plans in advance.

Business Continuity (BC)

Business continuity is a newer term, first popularized as a response to the Y2K bug.

Customers are much less patient than they were 30 years ago. As soon as your company stops providing services, your clients will stop paying and leave you for competitors. But your creditors will still remain loyal and continue to charge you for their services.

In order to stop your company from bleeding money in these situations, you need a plan that will allow the organization to continue generating revenue and providing services – although possibly with lower quality – on a temporary basis until the company has regained its bearings.

The trick to good Business Continuity planning is to only focus on the most critical business functions in order to reduce costs, and to enable these systems using the most cost-effective means possible without significantly affecting quality of service.

Many call centers will keep paper versions of their CRM and call tracking systems in case of emergency. If IT systems are ever taken offline, the call center can continue to process calls and track customer issues using this paper-based system. And these logs can then be transcribed into the CRM and call tracking systems once the servers come back online.
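To make the catch-up step concrete, here is a minimal sketch of replaying transcribed paper logs into the CRM once it is back online; the record_call function and the CSV columns are hypothetical stand-ins for whatever import interface the real CRM exposes.

    # Replay transcribed paper call logs into the CRM after an outage.
    # record_call is a hypothetical stand-in for the real CRM's import API.
    import csv

    def record_call(customer_id: str, timestamp: str, issue: str) -> None:
        # In a real system this would call the CRM's import endpoint.
        print(f"imported call: {customer_id} @ {timestamp}: {issue}")

    with open("paper_call_log.csv", newline="") as f:
        for row in csv.DictReader(f):  # columns: customer_id,timestamp,issue
            record_call(row["customer_id"], row["timestamp"], row["issue"])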

Of course, you can see that – although these 2 processes are very different – there is still quite a lot of overlap between the 2 areas. That’s why companies will tend to group Disaster Recovery and Business Continuity together as 2 halves of a single plan.

This is fine as long as you understand that these are 2 separate functions, and that your plan satisfies the requirements of each.

Virtual Servers Are On The Verge of Outnumbering Physical Servers

Enterasys Networks, a Siemens Enterprise Communications Company, is a premier global provider of wired and wireless network infrastructure and security solutions. With roots going back to its founding as Cabletron in 1983, Enterasys delivers solutions that enable organizations to drive down IT costs while improving business productivity and efficiency through a unique combination of automation, visibility and control capabilities.

Today, I’ll be interviewing Mark Townsend, who is the Director of Solutions Management for Enterasys Networks.

What kinds of changes do you see in the virtualization space for 2011?

Server virtualization is going to increase in adoption and virtual servers will outnumber physical servers. This shift will stress organizations that have not adopted processes and tools to manage the lifecycle of virtual servers.

Companies should be concerned about server sprawl and the effect it has on performance of the virtualized environment, and about the potential for compliance issues from unintentionally hosting virtual machines with different security postures/profiles on the same host.
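That compliance concern can be checked mechanically. The sketch below flags hosts that mix VMs from different security zones; the inventory structure and zone names are illustrative assumptions, not any particular vendor’s API.

    # Flag virtualization hosts that mix VMs from different security zones.
    # The inventory data and zone names are illustrative assumptions.
    from collections import defaultdict

    vms = [
        {"name": "web-01", "host": "esx-a", "zone": "dmz"},
        {"name": "db-01",  "host": "esx-a", "zone": "internal"},
        {"name": "app-01", "host": "esx-b", "zone": "internal"},
    ]

    zones_by_host = defaultdict(set)
    for vm in vms:
        zones_by_host[vm["host"]].add(vm["zone"])

    for host, zones in sorted(zones_by_host.items()):
        if len(zones) > 1:
            print(f"WARNING: {host} mixes security zones: {sorted(zones)}")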

There are solutions available today that connect the virtualization platform with existing lifecycle workflows and compliance tools. Enterasys Data Center Manager is a great example of such a solution.

How can companies use virtualization to help contain operational costs?

There is an immediate benefit from the reduction in the number of systems that previously served the organization. Consolidation of virtual machines provides consolidation of hardware, not only the physical servers but also adjacent systems such as the network.

This consolidation provides annual OPEX benefits in lowered recurring maintenance and operational (power, cooling) costs. There are also benefits in staffing costs with the reduction in managed devices.

How does virtualization improve business agility?

The elasticity virtualization provides offers companies the ability to expand and contract services in the data center based on demand cycles. Virtualization also accelerates the ability of an organization to add new services in near real-time.

For example, in the past, time would be spent specifying, ordering and implementing the hardware to run a particular service. Today, it is often as easy as cloning an existing system and deploying the new service within minutes or hours versus what was previously days or weeks.

It is important, however, to not lose sight of good process. The ease with which new services are added can degrade virtualization’s performance. Good lifecycle processes should be brought forward from the physical to the virtual environment.
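As one concrete flavor of that cloning step (the interview doesn’t name a platform), here is a minimal sketch using the virt-clone CLI from a libvirt-based environment; the VM names are placeholders.

    # Clone an existing VM as the starting point for a new service.
    # Assumes a libvirt environment with the virt-clone CLI installed;
    # the VM names are placeholders.
    import subprocess

    subprocess.run(
        ["virt-clone", "--original", "template-vm",
         "--name", "new-service-vm", "--auto-clone"],
        check=True,
    )

What once took days or weeks of hardware procurement becomes a single command, which is exactly why the lifecycle controls described above still matter.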

How does virtualization help with business continuity?

The ability to move virtual machines from one host to another, often without powering down the VM, removes hardware as a single point of failure. Larger data centers benefit from orchestrating the virtualization platform with the network, ensuring services remain available to end-users.

How can companies maximize the ROI on their virtualization investment?

Companies can maximize the ROI on their virtualization investment by having server and network teams improve their collaboration in order to reduce problems associated with the new systems, utilize existing infrastructure and stay on top of compliance standards.

How can companies ensure that their virtualized environment works in harmony with other external systems? Can you give some examples?

The larger the environment, the larger the benefit of integrating the virtualization environment with adjacent systems such as network and storage. The dynamic nature of the virtualization environment challenges these systems to “keep up” with the mobility of VMs.

Enterprises need assurance that the systems model of the virtual environment is consistent with the physical network it replaced. Compliance and security models must be replicated. This can be difficult, as these controls have traditionally been static and lack the malleability needed to keep pace with the dynamic nature of virtualized environments. The traditional physical controls need to be able to bend with the virtual environment but not break.

Anything else you’d like to add?

While the focus has been principally on server virtualization, desktop virtualization initiatives will strain data centers that fail to integrate the virtualization environment and external systems such as network infrastructures. We’ve seen desktop to server ratios for large businesses at 25:1.

Companies that fail at implementing good processes for server virtualization are not setting a good foundation for future desktop virtualization initiatives.

More About Enterasys:

Enterasys Data Center Manager integrates with leading virtualization platforms today from VMWare, Citrix and Microsoft. The Enterasys Data Center Manager product provides the utility needed to unify the virtual and physical environments to ensure a harmonious experience for both. As virtual machines are added, retired or moved within the data center, Enterasys Data Center Manager orchestrates the configuration of both virtual and physical networks. This orchestration ensures that proper lifecycle controls are followed (adding new machines), that posture/profiles are kept in sync with the original design (not mixing VMs from different domains on a single host) and that service level agreements can be met.

The Healthcare Industry Is Going Paperless And Mobile To Serve Patients Better – What Does This Mean For Your Privacy?

SAFE-BioPharma is the industry IT standard developed to transition the biopharmaceutical and healthcare communities to paperless environments.

The SAFE-BioPharma standard is used to verify and manage digital identities involved in electronic transactions and to apply digital signatures to electronic documents. SAFE-BioPharma was developed by a consortium of biopharmaceutical and related companies with participation from the US Food and Drug Administration and the European Medicines Agency. For more information, visit www.safe-biopharma.org.

Today, I’ll be interviewing SAFE-BioPharma CEO Mollie Shields-Uehling to get more insight on this issue.

What are some of the biggest changes that will affect the healthcare industry in 2011?

From an IT perspective, we expect that 2011 will be a watershed year in migrating medicine and healthcare away from paper.

For the first time, millions of licensed medical personnel in the US will be able to download digital credentials, uniquely linked to their proven identities, to their cell phones and other electronic devices. These credentials will give physicians and others the ability to participate in systems that allow controlled access to electronic medical records.

These credentials will also facilitate the application of legally binding digital signatures to electronic forms, prescriptions and other documents.

What are some of the biggest opportunities and pitfalls associated with mobile computing in the healthcare industry?

In order to comply with a host of patient privacy and other regulations, it is essential that a system can trust the cyber-identity of its participants.

Digital credentials that comply with the SAFE-BioPharma standard are uniquely linked to the individual’s proven identity. They are also interoperable with US Federal Government credentials and those of other cyber-communities. As such, they mitigate legal, regulatory and other business risk associated with electronic transactions.

When these credentials are used to apply digital signatures to electronic documents, the signatures are legally enforceable, non-repudiable, and instantly auditable.
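To illustrate what applying and verifying a digital signature involves at a technical level, here is a generic sketch using the Python cryptography library; it is not SAFE-BioPharma’s implementation, just the basic sign-and-verify mechanics that make tampering detectable.

    # Generic digital-signature sketch (not SAFE-BioPharma's implementation):
    # sign a document with a private key, verify it with the public key.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    document = b"Prescription: example document bytes"

    signature = private_key.sign(
        document,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                    salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )

    # verify() raises InvalidSignature if the document was altered, which is
    # what makes the signature auditable and non-repudiable in practice.
    private_key.public_key().verify(
        signature, document,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                    salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )
    print("signature verified")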

How will digital credentials help licensed medical professionals offer better services to patients?

Because they establish identity trust within a cyber-context, digital credentials eliminate paper-reliance (think of the forms, records, etc. when you see a doctor) and accelerate processes associated with patient care. Patient privacy will be better protected. Patient records will be available across different health systems. The prescription-issuance process will no longer be paper-based.

What is the current state of digital signatures and other forms of electronic credentials in the medical industry, and how do you see this changing in the next few years?

Use of digital credentials in health care is nascent. However, in a move designed to advance electronic health data sharing, in January Verizon Business will begin issuing medical identity credentials to 2.3 million U.S. physicians, physician assistants and nurse practitioners.

This first-of-its kind step will enable U.S. health care professionals to meet federal requirements contained in the 2009 Health Information Technology for Economic and Clinical Health (HITECH) Act that call for the use of strong identity credentials when accessing and sharing patient information electronically beginning in mid-2011.

Will Tape Storage Make A Datacenter Comeback This Year? What The Experts Have To Say Might Surprise You

It’s that time of year when lots of experts reflect on the changes of the previous twelve months and examine these trends to predict what’s to come. I expect to see the following major and minor revolutions in tape, disk and cloud storage in 2011.

For the first time in years, tape will be reclaiming a leading role in the major storage market, through a confluence of technologies, requirements, economics and demand. Tape will grow in deployed capacity for data archive due to developments in:

  • Technology: The interface to data archived on tape increasingly will look just like data users access on their computers: the data will be displayed through standard file systems as files and directories, with the standard look and feel of pulling up any file a user might view or edit. This interface technology will accelerate tape’s move into a platform for actively used data. This method of data storage and access is widely referred to as Active Archive.
  • Economics: Active use of data stored on tape offloads expensive primary disk. File interface technology speeds retrieving data from tape, so that it takes minutes instead of hours, and makes it unnecessary to restore data from tape to make it searchable. As economic pressures continue, the affordable nature of tape will drive approximately 80 percent of all electronic archival data to be retained on tape. Tape remains the greenest available storage; it doesn’t take any power to store data once the data has been written to tape (see the power sketch after this list).
  • Requirements: Use of tape ensures data integrity over the long-term to meet retention requirements. Rapid accessibility of data stored on tape supports meeting e-discovery requirements, and encryption options ensure privacy in accordance with HIPAA and other legislation and regulation. As the use of tape for active data storage advances, hardware-based data verification will become standard for ensuring data written to tape archives remains accessible.
  • Demand: As the business climate remains intensively competitive, access to increasing amounts of data becomes increasingly important. Active Archive serves as an affordable and intuitive method of providing access to all of an organization’s data. At the same time, open-systems storage models will continue to take market share from proprietary technologies. LTO will expand its market dominance.
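To see why “no power at rest” matters economically, here is a rough comparison sketch for the point raised in the Economics bullet above; the archive size, wattages and electricity price are hypothetical placeholders.

    # Rough at-rest power comparison for a 500 TB archive.
    # All sizes, wattages and prices below are hypothetical placeholders.

    archive_tb = 500
    disk_watts_per_tb = 8          # spinning disk kept online
    tape_watts_per_tb = 0          # tape on a shelf draws nothing at rest
    hours_per_year, dollars_per_kwh = 8760, 0.10

    disk_cost = archive_tb * disk_watts_per_tb * hours_per_year / 1000 * dollars_per_kwh
    tape_cost = archive_tb * tape_watts_per_tb * hours_per_year / 1000 * dollars_per_kwh
    print(f"Disk archive power: ${disk_cost:,.0f}/yr")  # ~$3,504/yr
    print(f"Tape archive power: ${tape_cost:,.0f}/yr")  # $0 at rest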

Disk will continue its presence in data protection and archival markets through widespread adoption of SAS (Serial Attached SCSI) and increasing use of SSD (solid state disk). SSD will become more commonly available in commoditized arrays with lower cost per GB. At the same time, deduplication will increasingly be implemented through applications instead of appliances. Further, x86 architectures will put extreme stress on traditional high-end storage providers by 2013.
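As a loose illustration of application-level deduplication, the sketch below stores each unique chunk once, keyed by its content hash; the fixed chunk size and the in-memory store are simplifying assumptions.

    # Toy content-hash deduplication: store each unique chunk exactly once.
    # Fixed-size chunks and an in-memory store are simplifying assumptions.
    import hashlib

    CHUNK = 4096
    store = {}  # hash -> chunk bytes; each unique chunk kept once

    def dedup_write(data):
        """Split data into chunks, store new ones, return the hash recipe."""
        recipe = []
        for i in range(0, len(data), CHUNK):
            chunk = data[i:i + CHUNK]
            digest = hashlib.sha256(chunk).hexdigest()
            store.setdefault(digest, chunk)
            recipe.append(digest)
        return recipe

    recipe = dedup_write(b"A" * 10000 + b"B" * 10000)
    print(f"{len(recipe)} chunks referenced, {len(store)} unique chunks stored")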

Not more than 10 percent of the data stored in the US will be in the Cloud by the end of 2011. As users of the Public Cloud move from early adopters to mainstream users, technology and network limitations, costs, and data security pitfalls will become increasingly evident and limit the speed of adoption.

Here are Spectra Logic’s Top 10 Data Storage Predictions for 2011:

  1. Data centers will be implementing Active Archives to tape as a means of offloading primary disk storage.
  2. Tape will be the preferred medium for 80 percent of all data in electronic archives.
  3. Hardware-based data verification will be considered a requirement in all archive storage platforms.
  4. SAS disk will be replacing SATA for archive/backup storage by the end of FY11.
  5. SSD will be arriving in commoditized arrays, dropping cost per GB and positioning it as the preferred medium for performance disk.
  6. Dedicated deduplication appliances will be falling out of favor. Deduplication will preferably occur in file systems and backup software applications.
  7. By the end of 2011, public “cloud storage” will contain no more than 10 percent of the US data storage.
  8. Lower-cost disk storage solutions based on x86 architectures will put extreme stress on traditional high-end storage providers by 2013.
  9. Enterprise tape drives will continue to give way to LTO based architectures.
  10. Traditional backup practices will begin to shift. Data centers will begin to move to online, file-based archives for their long-term data retention instead of utilizing offline backups in proprietary formats.

About The Author:

Molly Rector is the vice president of product management and marketing for Spectra Logic Corporation, where she leads channel and market development, creative and marketing communications, and product management. For over 30 years, Spectra Logic has been defining, designing and delivering innovative data protection through tape, deduplication and disk-based backup, recovery and archive storage solutions. For more information, visit www.SpectraLogic.com.