Archives for: July 2010

7 Reasons to Dump Your External Hard Drive For NAS

Too often, people confuse NAS with external hard drives. However, NAS is much more than just a storage device. It’s more like a SUPER-INTELLIGENT external hard drive on STEROIDS.

 

If you’ve been using an external hard drive to store your data, here are some reasons you might want to switch to a NAS device.

  • NAS devices support gigabit networks, which are much faster than USB cables. This comes in handy when you need to upload or download large files.
  • NAS devices are software-independent. You can use them on Windows, Mac, Linux or any other operating system without having to worry about compatibility issues.
  • Instead of the low-cost consumer disks used in most external hard drives, NAS devices use more reliable high-end hard drives arranged in a RAID array. This means a single drive can fail without any data being lost.
  • NAS devices are built for data protection, and have built-in tools for backup automation. You can even set up another NAS device at a remote location and have your system backed up over the internet!
  • Once your external hard drive fills up, you need to buy a new one. But NAS boxes are designed so that storage can be quickly and easily upgraded.
  • NAS devices allow you to securely share information across your network, and even assign storage space to different users based on security rules that you put in place. You can even access your data from the NAS from the Internet while on the road.
  • A NAS can act as the hub of your network, without having to invest in expensive server software.
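The RAID point above is easy to quantify. Here’s a quick sketch (Python is my choice, since the post has no code of its own) of usable capacity under the common RAID levels, assuming equal-sized drives:

```python
def raid_usable_tb(drives, drive_tb, level):
    """Rough usable capacity (in TB) for an array of equal-sized drives."""
    if level == 0:    # striping: all capacity, but no redundancy
        return drives * drive_tb
    if level == 1:    # mirroring: half the capacity
        return drives * drive_tb / 2
    if level == 5:    # single parity: lose one drive's worth of space
        return (drives - 1) * drive_tb
    if level == 6:    # double parity: lose two drives' worth of space
        return (drives - 2) * drive_tb
    raise ValueError("unsupported RAID level")

# A 4-bay NAS with 2 TB drives in RAID 5: 6 TB usable,
# and it survives one drive failure with no data loss.
print(raid_usable_tb(4, 2, 5))  # → 6
```

So the redundancy isn’t free, but trading one drive’s worth of space for crash protection is exactly what an external hard drive can’t offer.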

Trust me. Once you’ve tried NAS, you’ll ditch that wimpy external hard drive and never look back.

The Importance of Web Services

You’ve likely noticed that vendors are increasingly touting the wonders of Web Services or other associated technologies such as SOAP, XML, WSDL, etc… And it’s with good reason.

Web Services Let Apps Work In Unison

I really feel that this is an important feature to keep in mind when purchasing new business systems. This is especially true for earlier-stage companies that expect to grow in the near future.

Today’s article isn’t intended to be a technical overview of how Web Services works. I’ll save that for a future post.

Instead, I just want to devote this post to telling you why your company should be concerned about Web Services, and what this new technology can do for you and your company.

When you first start growing your company, you’ll probably implement a number of independent systems that are designed with very specific functions in mind:

  • Accounting
  • Invoicing
  • Marketing Automation
  • CRM
  • Etc…

And this is great. These programs work “out of the box” and start benefiting your company immediately. You get the proper tool for each job.

But over time, your staff, product line and customer base will all grow.

When this happens, the act of sharing and processing information becomes more cumbersome. Employees have to switch between applications, re-enter data multiple times, and spend more time searching for key data. This leads to less productivity, higher costs, mistakes, and reduced overall customer satisfaction.

When a company gets to this size, they would normally hire developers to create custom patches and tools to help make these processes more efficient. But this can be very costly, since each of these legacy applications was written by a different company.

  • If these legacy applications aren’t open-source, it’s impossible to alter the code for your purposes.
  • Even if the applications are open-source, each system may be written in a different language (PHP, C++, COBOL, Visual Basic, etc…)
  • All of them might run on different operating systems (Windows, Mac, Linux, iSeries, etc…)
  • All of them store data in different formats (SQL, Access DB, Flat Files, Spreadsheets, etc…)

It’s easy to see how custom coding can get very expensive in these scenarios.

With Web Services, each application adheres to a set of standardized protocols for sharing and accessing data. This way, two programs can speak to each other regardless of operating system, database or programming language. Everyone simply agrees on a set of rules by which these interactions will take place.
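To make that idea concrete, here’s a minimal sketch of two programs agreeing on a shared XML message format (the field names are my own invention, not from any real system). One side produces the message; any other program, in any language, can parse it back:

```python
import xml.etree.ElementTree as ET

def build_invoice_xml(customer, amount):
    """Producer side: serialize a record into the agreed-upon XML format."""
    root = ET.Element("invoice")
    ET.SubElement(root, "customer").text = customer
    ET.SubElement(root, "amount").text = str(amount)
    return ET.tostring(root, encoding="unicode")

def parse_invoice_xml(xml_text):
    """Consumer side: any program that knows the format can read it back."""
    root = ET.fromstring(xml_text)
    return root.findtext("customer"), float(root.findtext("amount"))

msg = build_invoice_xml("ACME Corp", 149.99)
print(parse_invoice_xml(msg))  # → ('ACME Corp', 149.99)
```

Real Web Services wrap this same idea in heavier machinery (SOAP envelopes, WSDL contracts), but the core is just this: a text format both sides agree on.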

This is important for young businesses, because it means that you can reap all of the productivity benefits today and still leverage the capabilities of these applications as you grow.

When your internal processes change, you can easily develop custom tools at a very low cost. And because you can keep using these applications longer, you delay the cost and headache of having to migrate your data to a new system.

But here’s where I see the REAL value of Web Services…

I personally believe that phone support is a dying beast. Right now, your customers expect a 24/7 self-service portal from their cable companies, phone companies, web hosts, and just about every other service offered by large companies.

And I personally believe that, within the near future, clients are going to start demanding self-service consoles from even smaller companies. Organizations that fail to meet this demand on time will lose business.

From your client’s perspective, it’s really a no-brainer. Why pick a provider that expects you to call a 1-800 number between 9 and 5, when their competitor makes fast, accurate service available 24/7… without having to wait on hold or argue with anyone?

From a strategic perspective, this is a HUGE trend. And by picking solutions with Web Services support, you can create and implement customer-facing service consoles quickly and inexpensively using data feeds from your off-the-shelf software systems.
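As a sketch of that idea: once your order data is exposed as a machine-readable feed, a self-service lookup is only a few lines of glue code. (The feed format here is hypothetical, standing in for whatever your off-the-shelf system exports.)

```python
import json

# A hypothetical data feed exported from an off-the-shelf order system.
FEED = json.loads("""
[
  {"order_id": "1001", "status": "shipped"},
  {"order_id": "1002", "status": "processing"}
]
""")

def order_status(order_id):
    """What a customer-facing console does: answer 24/7, no phone call."""
    for record in FEED:
        if record["order_id"] == order_id:
            return record["status"]
    return "unknown order"

print(order_status("1001"))  # → shipped
```

Hook a lookup like this to a web page and your customers stop calling the 1-800 number for routine questions.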

Those are just a few of the key business benefits that I see driving Web Services.

What’s the Difference Between Grid and Cloud Computing?

“Grid computing” and “cloud computing” are terms that get confused and mixed up a lot. Although they’re somewhat similar in theory, they’re also very different… and not interchangeable.

Cloud computing is the one that most of us are familiar with. This is where you get access to the resources of an independent 3rd party over the internet. In other words, they are remotely hosted applications.

Common examples include:

  • Web-based email
  • Web-based accounting systems
  • Web-based credit card software

Also within the “cloud computing” family are what’s known as “Software as a Service” or SaaS applications. These are lightweight software clients that are installed on your own machine, but do all of their heavy lifting on someone else’s infrastructure. Some common examples include:

  • VOIP phones
  • Online backup
  • Video games with online play capability

Grid computing, however, occurs when the processing power of an application or service is distributed across multiple systems. This is usually done in order to increase processing capacity or improve system resiliency.

Some common grid computing examples include:

  • SETI@home
  • Peer-to-Peer file sharing

Grid computing also has special applications within the enterprise space. For example, a company may decide to virtualize their database system so that the application can be hosted across multiple redundant datacenters. In the event that one of these systems fails, the application can keep running without interruption.
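A toy illustration of the grid idea: split one job into pieces, hand each piece to a separate worker, and combine the partial results. (Here threads stand in for separate machines; a real grid distributes the chunks over a network.)

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    """The unit of work each 'node' in the grid performs independently."""
    return sum(chunk)

numbers = list(range(1, 101))
# Split the job into 4 chunks, one per worker.
chunks = [numbers[i:i + 25] for i in range(0, 100, 25)]

# Distribute the chunks across workers, then combine the partial results.
with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(partial_sum, chunks))

print(total)  # → 5050
```

SETI@home works on exactly this pattern, just with millions of volunteer machines instead of four local threads.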

This may seem like a small difference, but it’s important.

If, instead of hosting their database across multiple servers, they decided to host it with a third-party company that specialized in hosting these types of databases… then it would be a Database-as-a-Service. This is a type of SaaS.

The system is no longer distributed, because the company’s role has changed from systems operator to client. Someone else is handling the entire infrastructure for them.

With cloud computing, your company gets cost-savings and convenience. With grid computing, your company gets power and flexibility. When developing your IT strategy, consider the benefits of both approaches.

When applied with the appropriate strategic focus, each of these strategies can be effective tools in helping to meet your cost, quality, security and efficiency objectives.

The Difference Between SaaS (Software as a Service) and SOA (Service-Oriented Architecture)

The terms “Software-as-a-Service” and “Service-Oriented Architecture” get thrown around a lot when it comes to discussions about business computing. And much too often, people use these terms in an improper context. Although they might seem similar, they could not be more different.

Explaining the difference can be hard, since some of the concepts involved are difficult to grasp. But the most important thing to know is that:

  • SaaS is a software delivery method, and that’s all
  • SOA is a methodology for designing and automating business processes
  • One is tactical, and the other is strategic

Software as a Service

This is the one that everyone should be familiar with by now. SaaS simply describes a business system where everything – or almost everything – is hosted remotely on the provider’s server. This can be a public cloud where you pay a monthly fee to access an application on a service provider’s system. Or it can be a private cloud where employees do all of their work on a primary server through remote terminals.

Each one of us has used a SaaS application at some point. Most of us have web-based email and web hosting accounts, and we probably manage our taxes online as well.

I won’t go into too much detail, since this concept is fairly easy to understand.

Service-Oriented Architecture

This one right here is the hard part.

SOA is fairly difficult to grasp, and many people have required months of education just to get their heads around it. In order to touch on some of the core concepts behind SOA, I’d like to present a short story that illustrates some of its more practical applications.

Bill’s Clothing has been in business for over 5 years, and they’ve grown to the point where they can no longer operate efficiently as a single business unit. So Bill splits the company into 4 subsidiaries.

  • Bill’s Clothing Group Of Companies (Head office)
  • Bill’s Shirts
  • Bill’s Pants
  • Bill’s Hats

When it was just one company, all customers were treated the same regardless of what they were purchasing. So the company only needed a single invoicing and shipping system. But, after the company had split up, the nature of the work in each division changed drastically. And these changes became more pronounced as the company continued to grow.

In order to work more efficiently, each division adapted their invoicing/billing systems to suit their new business processes. Although this created efficiencies and cost-savings across each division, it also caused a lot of problems between the different business units.

Because there was a lack of standardization, similar transactions were given different names, treated differently, and recorded differently in the financial systems. As a result, errors were common and the companies wasted a lot of money performing duplicate tasks.

This also made budgeting and strategic planning difficult, since the company had a hard time understanding where they stood financially.

With Service-Oriented Architecture, each division could develop and adhere to their own processes, while also sharing a common invoicing and shipping system with the rest of the company. Instead of giving each division their own systems to modify, “invoicing and shipping” becomes a “service” which can be made accessible to developers.

A service is similar to a “class” or “function” that a programmer would use. Except that a service is not a block of computer code. Instead, it’s a business process that has been computerized.

All of the business units share the same service on the corporate system, but they modify it to suit their purposes. If one business unit needs to make a change to the shared service, this change is applied to the service itself. The end-user does not get their own isolated custom-application.

Because everyone shares the same service, any new features must be added based on a set of principles that apply across all units, and all similar information is treated in the same manner.

Business rules can be applied and enforced more effectively, the company runs more efficiently, and applications can be deployed and modified much more quickly.
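In code terms, the shared service might look something like this sketch (the names are illustrative, not from any real SOA toolkit): every business unit calls the same service, and unit-specific behavior lives in rules registered with the service, not in private forked copies of the system.

```python
class InvoicingService:
    """One shared 'service' for the whole company. Units register their
    policies with it; they do not fork their own copy of the system."""

    def __init__(self):
        self.discount_rules = {}  # per-unit policy, applied centrally

    def register_rule(self, unit, discount):
        self.discount_rules[unit] = discount

    def invoice(self, unit, amount):
        # Every invoice flows through the same logic, so similar
        # transactions are named and recorded the same way company-wide.
        discount = self.discount_rules.get(unit, 0.0)
        return round(amount * (1 - discount), 2)

service = InvoicingService()
service.register_rule("Bill's Hats", 0.10)  # one unit's custom policy
print(service.invoice("Bill's Hats", 100.0))    # → 90.0
print(service.invoice("Bill's Shirts", 100.0))  # → 100.0
```

Notice that Bill’s Hats got its custom behavior without ever touching code that Bill’s Shirts depends on, which is the whole point.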

This example gives a very rough overview of how SOA works.

The main thing that should stick out is that SOA makes no reference to whether or not these services are deployed over the web. You could deliver it via the Internet if you wanted to, but that would have nothing to do with whether or not it’s a true Service-Oriented Architecture.

Also, the invoicing and shipping system mentioned at the beginning of this story could very well be hosted through a SaaS provider. But because this isn’t true SOA, individual units could modify these systems themselves and define data differently without adhering to any agreed-upon standards. For this reason, you can’t call it SOA. It’s just a web-based application.

I know I’ve gone into a lot of detail in this article. So I’ll cut it short for now.

But if you have any other questions relating to these concepts, please leave a note in the comments below and I’ll gladly try to answer them to the best of my abilities.

EDIT: The comments are temporarily disabled due to spam/time constraints. Please feel free to contact me directly with your questions. I will try to respond in a timely manner.

Image Credit: http://www.flickr.com/photos/ethnocentrics/213327033/sizes/s/

The Cloud Saved My Life Today

So there I was, ready for a busy day of work. When suddenly… paf! My whole system went down, wiping out my hard drive. I was in a big rush, and I didn’t have time for problems like this.

It could’ve been much worse. But thankfully, I use SaaS software for just about every area of my business.

  • My email is through Gmail
  • I use Google Docs for writing and spreadsheets
  • I back up all of my data online with an automated backup service
  • My CRM is web-based
  • I write the Enterprise Features blog using WordPress

All I had to do was swap my machine for an old junker I had lying around, and I was back in business again within minutes.

My personal life, however, did not fare so well. I had a number of entertainment, graphic design, gaming, programming, system tools and miscellaneous programs sitting on my hard drive that needed to be fully installed. It took me over 4 hours to reload Windows, download & install each program again, and reconfigure the settings to the way they were before.

How much would it cost you if one of your employees were taken out of commission for 4 hours when they had an important deadline coming up? How would your business be affected if a virus knocked out every machine in your company, and you had to wait several days for every individual laptop and desktop to be reconfigured again?

Say what you want about the threats of working in the cloud, but I just find it so incredibly convenient.

SaaS is awesome!

Image Credit: http://www.flickr.com/photos/strep72/4262185609/sizes/m/

Reducing Maintenance Costs With Server Virtualization

We’re living at an interesting time in computer history. With hardware costs rapidly dropping, maintenance costs are now emerging as a leading contributor to the Total Cost of Ownership for IT systems.

This presents unique opportunities for forward-thinking companies that are willing to take a more strategic approach to their IT planning. By taking advantage of the time-saving features of virtualization technology, companies can free up IT time while reducing maintenance, upgrade and energy costs.

By consolidating multiple virtual servers onto a single device, you can now configure new servers in 1/3 of the time it would normally take to set up a physical box. No more going from machine to machine with a CD-ROM. Now you just point, click, and the server installs itself based on the settings you’d preconfigured or the image you’d copied.

Instead of installing new hardware components such as disks, virtualization allows you to easily assign system resources on-the-fly as needed. This leads to greater overall system resource utilization, more energy efficiency and lower hardware costs.

Also, many virtualization systems will allow you to get a consolidated view of the physical and virtual system resource usage across all your servers. This is much more efficient than doing it on a machine-by-machine basis. It also greatly simplifies the process of performance monitoring, vulnerability scanning, and patch management.

Another advantage of virtualization – when it comes to system maintenance – is that you can now deploy servers & systems in bulk using a single mouse click. This means new servers can be added very quickly, and servers can also be replaced just as easily.
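The bulk-deployment point boils down to a loop over a template. Here’s a sketch; the `clone_vm` function is a stand-in for whatever API your hypervisor actually provides, not a real library call:

```python
def clone_vm(template, name):
    """Stand-in for a hypervisor API call that clones a template image."""
    return {"name": name, "image": template, "state": "running"}

def deploy_fleet(template, count):
    """Spin up many identical servers from one preconfigured image."""
    return [clone_vm(template, f"web-{i:02d}") for i in range(1, count + 1)]

fleet = deploy_fleet("ubuntu-base.img", 3)
print([vm["name"] for vm in fleet])  # → ['web-01', 'web-02', 'web-03']
```

Compare that loop to racking, imaging and patching three physical boxes by hand, and the maintenance savings are obvious.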

Since IT departments are already overworked, this will free up IT resources and reduce the need for hiring additional IT staff.

And finally, virtualization makes migration easier than ever. A virtualized server can be easily migrated from one physical system to another, with much less headache and downtime than would be involved in transferring a standard physical device.

If your IT staff is overworked, or your company is struggling to attract & budget for new qualified IT staff, you may want to consider virtualization as a cost-effective way of reducing the workload of your existing team instead.

Image Credit: http://www.flickr.com/photos/onefish2/3189204362/sizes/s/

Quick Links: 6 Great Articles on How To Set Up Your Own Apache Web Server (Windows and Linux)

Yes folks, today’s a double-poster.

A common question that I get is “How do I set up my own Ubuntu/RedHat/Windows web server?” And my answer is usually… YOU DON’T.

There are other people & companies who are very good at this sort of thing, and can do it better and more cheaply than you can in-house.

But if you want to tinker with a box to soothe your curiosity or to serve as an internal development environment, you should at least have access to some good information.

Below, you’ll find my list of favorites when it comes to this topic.

Using Linux (Free)

Using Windows (Cha-Ching)

And PLEASE folks… if you’re going to run your own web server… do not install it on a mission-critical machine. As soon as you turn this puppy on, you’ll immediately become the target of bots and hackers.

You’re essentially creating a new, extremely vulnerable hole in your network.

Web server security is a very complicated thing to understand and manage. And if you’re not an expert, you’re probably going to get burned. So make sure your box is expendable.

Ignore my advice at your own peril. :-)

FYI: The image at the top of this post is of a label written by Tim Berners-Lee… on the very first Web Server at CERN.

Photo Credit: http://www.flickr.com/photos/scobleizer/2251820987/

Guest Bloggers – Write For Us!

Do you have a strong opinion or special insight into the world of business technology? I want to hear what you have to say!

Get in touch to have me review your writing. If I like what you have to say, I will gladly post it on our blog with a link to your site.

What is Forward-Referencing Deduplication?

Forward-referencing deduplication is still a fairly new technology, but it’s been gaining a lot of press lately. The main benefit of this methodology is that it allows for more efficient decompression in the event that you need to restore a recent backup.

Traditional deduplication methods are optimized for restoration of the oldest full backup. If you perform full backups on a monthly basis, a 30-day-old backup would restore much more efficiently than a backup from yesterday evening. This is because the deduplication process is based on the original backup, and all subsequent backup deduplications were “seeded” from this original.

This becomes a problem because most backup recoveries don’t go more than 1 or 2 versions into the past. That creates a lot of extra work for the system, which leads to slower (and more costly) recoveries.

Forward-referencing deduplication takes the opposite approach. Once a full backup has been performed, deduplication is performed in such a way that the most recent full backup version becomes the “seed”, and all of the older versions are deduplicated based on this newest version.

Compressing in this way requires a lot more work. But when it comes time to restore in an emergency, all of your most recent data is already decompressed and ready to be transferred. Since downtime is much more expensive than backup window time, this is a good trade.
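A rough sketch of the idea, using fixed-size chunks: the newest backup is stored whole (the “seed”), and each older version stores only the chunks that differ, plus references into the seed. This is a toy model to show the direction of the references, not any vendor’s actual implementation.

```python
def split(data, size=4):
    """Break a backup into fixed-size chunks for deduplication."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def forward_dedup(backups):
    """Newest backup becomes the seed; older versions reference into it."""
    seed = backups[-1]
    seed_chunks = set(split(seed))
    deltas = []
    for old in backups[:-1]:
        deltas.append([("ref", c) if c in seed_chunks else ("new", c)
                       for c in split(old)])
    return {"seed": seed, "deltas": deltas}

store = forward_dedup(["AAAABBBBCCCC", "AAAABBBBDDDD"])
# Restoring the most recent backup is trivial: it's stored intact.
print(store["seed"])  # → AAAABBBBDDDD
```

Traditional (reverse-referencing) dedup points the arrows the other way, toward the oldest full backup, which is why restoring last night’s backup takes the most reassembly work.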

If your company has been evaluating data deduplication as a means of controlling rapid data growth, but you’ve been hesitating because of potentially long restoration times… then forward-referencing deduplication might be the right solution for you.

Underground Hackers Have Trouble Reaching The Cloud

In the tech industry, there is constant debate about the safety and security of SaaS. This is a long and complicated debate that would require more than just this article.

For this post, I would just like to isolate one important factor for you to think about. Hopefully, this can help put some of these issues into context.

One of the most common arguments against cloud computing is that… by hosting the application in house… you will have better control over security.

However, IT departments always have great hopes about future projects that they want to implement. But time and resources are limited… and important business processes often take priority over regular maintenance. Daily routines become established, good habits sometimes fall by the wayside, and other details get overlooked.

Unless you have a large budget that allows you to hire a dedicated security person, it’s very likely that each employee within your IT department will wear many hats. When you have several people who are jacks of all trades and masters of none, it can often lead to minor slip-ups or procrastination on important tasks.

These types of organizational vulnerabilities are what hackers and viruses prey on.

SaaS providers – on the other hand – are a completely different beast. For them, security is a core business process. Their entire reputation depends on their ability to secure their internal networks and protect their client data.

Just one slip up can destroy their company overnight.

Although you do lose some control over the applications when you hand your data over to a company like this, you can rest assured that network security is a much greater concern for them when handling your data than it would be to one of your overworked internal IT staff. And you also know that these companies will probably spend much more money and resources on security than you would internally.

For them, there is simply more to lose in the event of a breach.

Of course, no system is perfect. Once in a while, a cloud provider will get exploited… just as ordinary companies get hacked into every day. But your odds of a security breach when dealing with SaaS are still much lower than they would be if you’d hosted the application yourself. (In my opinion)

Photo Credit: http://www.flickr.com/photos/lhl/241180672/

Importance Of Load Trends When Planning For Server Consolidation

The benefits of virtualization have already been stated here many times, so I won’t go into them again. It’s really no wonder that server consolidation has become one of the hottest business IT trends of all time.

Instead, I’d like to draw your attention to one of the challenges faced by virtualized environments.

IT professionals like to brag about what percentage of their systems are virtualized, and often envision themselves increasing this percentage in a linear manner… over time. But the reality is that there will be a plateau very early on in the migration process.

Usually, the first 20% of consolidation consists mainly of less critical systems such as DNS or test servers.

When it comes to the really demanding or critical systems such as databases and email, the challenge becomes much more difficult. In order to consolidate these systems, you need to think about future trends in resource usage.

  • What would happen if 3 systems experienced a spike at the same time? This is a common occurrence during major events such as the Christmas Eve rush. Could you accurately predict the resources required to support such a spike? And what effect would this over-provisioning have on your ROI?
  • What are the future trends for usage within your company? Has usage growth stagnated, or is it growing exponentially? How accurately can you predict your trends over the next week, month or year? Do you have visibility into these trends on a server-by-server basis?
  • How far can you push your implementation before you risk violating your Service Level Agreement?

Part of the “magic” of virtualization is that it lets you exploit complementary peak periods, and use them to your advantage. For example, email systems experience a major spike at 9:00 AM, then the load is much lower for the rest of the day.

In this case, a failure at 9:00 would be catastrophic. That’s why email servers are over-engineered to support a much higher load than they would ever be faced with.

There would be no point in consolidating your email server with another system that spikes at 9:00 am. You’ll just end up building a server that’s twice as over-engineered. It’s simply too much work for too little benefit. A better approach would be to virtualize your systems in order of ROI, starting with the ones that offered the greatest cost savings.

That’s why it pays to know what the loading trends are for your servers. Consolidation of your email server and backup systems would be a perfect synergy, since one spikes in the morning… and the other spikes in the evening.
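That intuition is easy to check once you have the numbers. A sketch, with made-up hourly load profiles: compare the combined peak of the two candidates against the sum of their separate provisioned peaks.

```python
# Made-up hourly load (% of one host's capacity) over a workday.
email  = [10, 80, 30, 20, 15, 10, 10, 10]  # spikes in the morning
backup = [ 5,  5,  5, 10, 10, 15, 30, 75]  # spikes in the evening

# Peak load if both workloads share one physical host:
combined_peak = max(e + b for e, b in zip(email, backup))

# Capacity you'd provision if each kept its own over-engineered box:
separate_peaks = max(email) + max(backup)

print(combined_peak)   # → 85
print(separate_peaks)  # → 155
```

Because the spikes don’t overlap, a combined peak of 85% fits comfortably on one box, while provisioning for the sum of the separate peaks (155%) would waste nearly half a server.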

But this covers just a very simple case involving only 2 basic services. The problem becomes much more complicated when we’re dealing with 200 servers across 10 locations.

Before you can begin consolidating, you need to gain insight into resource usage. But how? What tools or techniques has your company been successfully using? I’d love to hear your comments.