Archives for: July 2011

Top 10 SaaS Software Providers In Every Category For August 2011

Top 10 SaaS Customer Relationship Management (CRM)

  1. Infusion Software
  2. CLP Suite
  3. Commence CRM
  4. Appshore
  5. NetSuite
  6. SageCRM.com
  7. Salesboom
  8. Microsoft Dynamics
  9. eGain
  10. SofFront

Top 10 Hosted Exchange Providers

  1. NetNation
  2. Hostirian
  3. FuseMail
  4. SherWeb
  5. Apptix
  6. MyOfficePlace
  7. Kerio Mail Hosting
  8. Apps4Rent
  9. Exchange My Mail
  10. Evacloud

Top 10 SaaS Invoicing Software Services

  1. Zoho Invoicing
  2. Bamboo Invoice
  3. SimplyBill
  4. BlinkSale
  5. LessAccounting
  6. Intuit Billing Manager
  7. CurdBee
  8. BillingBoss
  9. CashBoard
  10. FreshBooks

Top 10 Managed Web Hosting and Dedicated Servers

  1. AYKsolutions
  2. Limestone Networks
  3. Rackspace
  4. Hostgator
  5. KnownHost
  6. The Planet
  7. aplus.net
  8. FIREHOST
  9. MegaNetServe
  10. ServInt

Top 10 Online Backup Services For Servers

  1. Zetta
  2. CoreVault
  3. SecurStore
  4. Backup-Technology
  5. Novosoft Remote Backup
  6. CrashPlan
  7. LiveVault
  8. AmeriVault
  9. BackupMyInfo
  10. DSCorp.net

Top 10 Web Analytics Services

  1. GetClicky
  2. Woopra
  3. OneStat
  4. WordStream
  5. At Internet
  6. Logaholic
  7. MetaSun
  8. AdvancedWebStats
  9. Extron
  10. Piwik

Top 10 Virtual Private Network Providers (VPN)

  1. VPN Tunnel
  2. Always VPN
  3. Personal VPN
  4. Black Logic
  5. Pure VPN
  6. DataPoint
  7. Golden Frog
  8. Strong VPN
  9. Cyberghost VPN
  10. Anonymizer

Top 10 SaaS Accounting and Bookkeeping Software Providers

  1. Wave Accounting
  2. Netsuite
  3. Xero
  4. Clear Books
  5. Accounts Portal
  6. NolaPro
  7. Skyclerk
  8. Envision Accounting
  9. Yendo
  10. Merchant’s Mirror

Top 10 SaaS Online Payroll Software Providers

  1. Paylocity
  2. Evetan
  3. Simple Payroll
  4. Amcheck
  5. Perfect Software
  6. Paycor
  7. WebPayroll
  8. Compupay
  9. Triton HR
  10. Patriot Software

Biggest Challenges When Backing Up Databases

Database backups are very different from traditional flat-file backups, and there are a number of important points you should consider when protecting the databases that support your critical systems.

Database Data is More Vulnerable

Because databases typically experience much higher load volumes than other types of data, they are more vulnerable to failure. And a database failure can often have much more serious consequences than the failure of other types of IT systems.

Database Data is More Complex

Database data is stored as a series of logical data elements. This means that – unlike a simple Word document – a database table might be spread out across many files or locations on the hard drive. In addition to this, all of the elements within a database relate to each other and must be perfectly synchronized. This complexity makes database backups especially difficult, since it’s easy to end up with referential integrity problems.
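
To make the referential integrity risk concrete, here is a minimal Python sketch (standard library only, with SQLite standing in for a production database engine; the file names are illustrative) contrasting a raw file copy with an engine-aware backup:

    import shutil
    import sqlite3

    def naive_file_copy(db_path, dest_path):
        # Risky: if the database is mid-transaction, the copy can capture
        # half-written pages and break referential integrity between tables.
        shutil.copyfile(db_path, dest_path)

    def consistent_backup(db_path, dest_path):
        # Safer: the engine's backup API takes a transactionally consistent
        # snapshot, even while other connections are writing.
        src = sqlite3.connect(db_path)
        dst = sqlite3.connect(dest_path)
        with dst:
            src.backup(dst)
        dst.close()
        src.close()

    if __name__ == "__main__":
        consistent_backup("orders.db", "orders-backup.db")  # illustrative file names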

Database Data is More Critical

Databases are densely packed with the most important information that a company needs for its daily operations. If your organization loses a gigabyte of Word documents, it might cause a nuisance. But losing a gigabyte's worth of database tables could spell the end of your business.

Database Data Is Hard To Back Up

Thanks to Microsoft Small Business Server, every small company can now set up and run its own database servers. But backing up these databases requires special expertise and training. If done improperly, this process can lead to major problems when it comes time to restore.

Databases Read and Write More Frequently

Because databases read and write data more frequently, there is a greater chance that the drives will encounter an error. This means that database servers have a higher chance of experiencing hard drive problems than servers with lighter workloads.

Database Data Must Comply With Legal Requirements

There are a number of regulations – such as HIPAA and SOX Section 404 – which strictly dictate how customer information should be handled and impose severe penalties on companies that fail to comply. Since databases usually store the most sensitive and critical information within a company, they are particularly at risk.

Databases Must Often Be Running On 24/7 Schedules

Because so many companies now operate on a twenty-four hour schedule, serving clients all over the world, there is now much less tolerance for downtime. This is particularly true of database servers which support the most critical revenue-generating processes for these companies.

Database Data Is Growing Exponentially

Much like Moore's law for processors, our capacity to consume and produce information is estimated to grow at a rate of 50-100 percent per year. And this certainly applies to database data as much as it does to flat files.

Tips For Securing Virtual Servers In The Cloud

When moving virtualized servers into the cloud, you offload much of the security burden for your hardware and underlying hypervisors onto an experienced third party with ample resources and trained on-staff expertise.

But this brings up another security concern:

By moving to the cloud, you are giving hackers a single, centralized point of attack. All they have to do is snoop around and pick off any poorly-secured servers they come across. That’s why it’s especially important to pay extra attention to the security of your virtualized server configurations in the cloud.

Below, I've included a few pieces of advice that should help keep you safe:

Create a hardened cookie-cutter image

Create a secure system image that has only the most basic services and features required for most applications. This secure configuration should be the default template on which you base all of your virtual servers. Then enable new capabilities only as required for individual instances.

Keep track of the number of virtual machines created

Virtual server sprawl is one of the new problems created by virtualization. Because it's now so easy to create and deploy new servers, it's common to see people creating new servers and then forgetting about them. And deleting these forgotten servers can be difficult and risky, since you face the chance of accidentally deleting a virtual machine that serves a critical function.

Limit the number of virtual servers you deploy

For the same reasons outlined above, you want to avoid virtual server sprawl by avoiding the creation of unnecessary servers. Not only does this help reduce your overall “attack footprint”, but you’ll also save on licensing and maintenance costs.

Run a host firewall

You should block off any ports – such as those for printing, FTP, databases or network file services – that aren't absolutely necessary to the operation of the virtual server.
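
As a starting point for deciding what to block, a small Python sketch like the following (standard library only; the host and port list are illustrative assumptions) can help you audit which TCP ports on a host are actually accepting connections:

    import socket

    # Illustrative port list: FTP, SSH, SMTP, HTTP, RPC, SMB, printing, databases
    PORTS_TO_CHECK = [21, 22, 25, 80, 111, 139, 445, 631, 1433, 3306, 5432]

    def open_ports(host, ports, timeout=0.5):
        found = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                if s.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                    found.append(port)
        return found

    if __name__ == "__main__":
        print("Ports accepting connections:", open_ports("127.0.0.1", PORTS_TO_CHECK))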

Protect private keys and passwords

This one should go without saying. You need to take precautions to ensure that only the right individuals can gain access to the private keys and authentication credentials used to access your virtual servers. And – whenever possible – avoid storing any authentication information on the cloud server.

Keep logs and run a host Intrusion Detection System

Because of the increased exposure, it's important to keep an eye out for threats. You should enable event logging and system auditing, and have a host IDS installed. If you suspect that a server has been compromised or targeted, you should shut it down immediately and take a snapshot and backup for further examination.

Are there any additional tips that should be on this list? Leave a message below and let us know.

Beyond Deduplication: Implementing a Comprehensive Data Protection Strategy that Meets Today’s Data Demands

IDC predicts that the total volume of data will reach 35,000 exabytes in 2020, compared to 1,200 exabytes in 2010, representing a 29-fold increase in ten years. Yet organizations are still using technology designed to back up and recover megabytes of data rather than terabytes. Data protection strategies are in need of an overhaul.


In August of 2010, Gartner released a research report, entitled “Broken State of Backup,” concluding that enterprises both large and small are now facing enormous challenges in protecting data. But haven’t we been protecting data for decades? Why are these challenges arising now?


The core of the problem is data growth. Structured data grew as more and more business processes were computerized. End-user productivity applications like word processing and spreadsheets reached every desktop and unstructured data files grew like weeds. Yet, the central backup technology remains the same: copy files from a source to a target. It has never been an efficient model, but we’ve managed to get by because our IT infrastructures have increased greatly in capacity – servers have vastly more computing power, disk drives and tape drives are much faster, and network capacity has increased by orders of magnitude.


However, data quantities have increasingly made data protection cost prohibitive. Even though hard drive prices trended downward, disk was still far more costly per gigabyte than tape. The nature of file backup exacerbated the problem. Put bluntly, file backup is not smart and is highly inefficient. It copies the same content over and over again and stores great quantities of redundant data.


So why not just get rid of the duplicate data? A simple and rather ingenious idea, and it worked exceedingly well once disk-based deduplication technology became commercially viable. What used to take 100 terabytes to store now takes only 10 or 15 terabytes. Deduplication lets users reduce or even eliminate the use of tape. Replication transferred data off-site to a second device, eliminating the need for mobile media and truck transport.


So with deduplication technology such a success, why is backup still broken? Although disk-based deduplication fixed a problem that backup created, it is analogous to applying sunscreen at the end of the beach party. Many issues remain unresolved and users find they still have numerous pain points.


In a traditional setting, deduplication allows organizations to reduce data sets and decrease the processing time of backups simply by backing up only the data that has changed. However, as organizations move to virtual environments, add unstructured data such as social media analytics and utilize more data each day, deduplication's impact diminishes.


While consolidating multiple workloads onto a single physical server offers substantial cost savings, it stresses traditional backup systems that rely heavily on excess system resources. Many organizations fail to consider the backup process and its impact in their entirety. Lifting and moving data from source servers to a target device affects everything in the backup path: server CPU and memory, disk or tape usage, and network I/O. Growing data volumes and virtualization amplify the problems; however, deduplication addresses only one aspect – the disk/tape target. Backup data has a huge impact on the amount of tape/disk you need and the associated costs, so while deduplication minimizes disk/tape cost and usage, it ignores all the other points in the backup cycle.


Organizations must move away from antiquated backup and restore models that can't keep up with today's data volumes, and turn their attention to strategies that attack the root cause of the broken state of backup and move beyond deduplication. Deduplication should not be abandoned, but incorporated into a larger strategy.


Organizations that build a strategy with growing data volumes and impact reduction in mind will come out ahead. Many solutions are incorporating the concept of block-based backup, which tracks data at the block level and backs up only the disk blocks that have changed between backup cycles. This method has consistently demonstrated 95% less CPU and disk I/O impact. It also allows for multiple recovery points, letting organizations recover information faster because an entire file set doesn't need to be restored to access one block of data.
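
For illustration only, here is a simplified Python sketch of the block-level idea: hash fixed-size blocks of a file and flag only the blocks whose hashes changed since the previous cycle. The block size is an arbitrary assumption, and real products track changed blocks at the volume or hypervisor layer rather than re-reading files like this:

    import hashlib

    BLOCK_SIZE = 4 * 1024 * 1024  # 4 MB blocks; purely illustrative

    def block_hashes(path):
        # Hash the file in fixed-size blocks.
        hashes = []
        with open(path, "rb") as f:
            while True:
                block = f.read(BLOCK_SIZE)
                if not block:
                    break
                hashes.append(hashlib.sha256(block).hexdigest())
        return hashes

    def changed_block_indices(previous, current):
        # Only these block positions need to be sent in this backup cycle.
        return [i for i, h in enumerate(current)
                if i >= len(previous) or previous[i] != h]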


At a time of growing data volumes and the need for instant recovery, organizations should look for a comprehensive data protection strategy that addresses the inefficiencies of current backup processes. The problems with backup and recovery don't reside in the IT department alone. In fact, the impact on data access can affect virtually every department across an organization and, in extreme cases, even hurt bottom-line business results. There is a better way for businesses to address data protection challenges. By challenging and rethinking the way backup and recovery has always been done, and taking a more holistic approach that improves efficiencies at every step of the backup cycle, organizations just might be surprised by the results.

About The Author: Peter Eicher is the Senior Product Manager for Data Protection at Syncsort, a smart backup company that helps organizations rethink the economics of data.

Why Are Companies Resistant To Implementing Cloud Technology?

Although I’m personally a huge fan of cloud computing, I think it’s important to show both sides of the argument and let you decide for yourself.

There are a number of counter-arguments against the adoption of cloud technology. But they mostly revolve around a few common themes. Below, I’ve listed some of the most important ones.

Security

Since cloud computing is a very new technology, many people are still apprehensive about trusting their business data in the hands of a third party and having it stored in a multi-tenant environment.

Although I agree with this argument to a certain extent, I think it mainly applies to very large companies. If you have a smaller organization, your cloud provider will probably have a better security infrastructure than your in-house datacenter.

Privacy

When you hand over your private customer data to a third party, you open up a whole series of questions surrounding your legal privacy obligations.

You should talk about this with your lawyer and your cloud provider, since there are a number of measures you can take to ensure the privacy of your data in the cloud.

Connectivity

Because cloud computing requires a reliable internet connection, companies can run into trouble if there is a break in the connection between the cloud provider and the company.

Since most cloud providers have multiple redundant connections in their datacenters, the break in connection is more likely to happen on the client’s end. This is going to become less of an issue in the future as worldwide internet connectivity is improving at a healthy rate.

Another point to take into consideration is the fact that employees are increasingly accessing internal business applications through external networks using laptops and mobile devices. In this case, the risk of a break in connectivity would have less of an impact if the servers were hosted in the cloud.

I must admit – however – that DDoS attacks will probably continue to be a problem. As more companies consolidate their servers with just a few cloud providers, it creates a single point of failure which serves as a tempting target.

Reliability

In my experience, most cloud services have been pretty good about maintaining uptime and reliability. However, Amazon seems to crash a lot… and this is giving the rest of the cloud industry a bad name. I realize this is to be expected since they are in such a rapid growth phase, and I hope they get this fixed soon.

Political and Legal Issues

As discussed in the section about privacy, you have to be careful about not violating any information compliance regulations, contracts, or partner agreements.

For example, many organizations will store their data in a foreign country without considering the possibility that the privacy regulations in the host country might conflict with their own. As a result, there may come a day when the host country forces them to take actions which violate the law in their home country.

This is why it helps to discuss the cloud with an intellectual property lawyer before moving your whole datacenter.

Just thinking off the top of my head, these seem to be the major objections that people use when advising against the cloud. Are there any that I’m missing? Leave a note below.

15 Neato Things You Didn’t Know Microsoft Exchange 2010 Could Do

Exchange is a great resource with a plethora of functions that would probably take you months of testing to figure out on your own. So I did it for you instead. Here are 15 neat features you may not have known Exchange 2010 can do.

1. MailTips

MailTips lets administrators display informational messages to users as they compose email. Say a sender is about to write to a certain contact: MailTips will show the recipient's status, such as whether the recipient is out of office or the message size exceeds the sender's limit, along with other pertinent information that saves you the hassle of sending an undeliverable message.

How to configure MailTips

2. Remote PowerShell

One of the biggest changes to come with Exchange 2010 is that all Exchange Management Shell administration is now carried out through PowerShell remoting. Basically, it allows you to connect to a remote server running Microsoft Exchange Server 2010 and perform administrative tasks anytime, anywhere, without any Exchange admin tools installed on your local computer.

3. Voicemail Preview

Voicemail Preview uses automatic speech recognition so you can read your voicemail. This means you can have your voicemail sent to you via texts on your cell phone. While the voice recognition is not perfectly accurate, it is a really handy feature in emergency situations when you may not be around a computer.

4. Ignore Conversation

This can be set up ahead of time to ignore certain emails that waste your precious time, i.e. a mute option for your email.

5. Database Availability Group (DAG)

DAG is essential in a database failure situation. It provides automatic database-level recovery and could even be called a "fail-safe" for Exchange. When disaster strikes, any server in a DAG can host a copy of a mailbox database from any other server in the DAG, providing automatic recovery from failures affecting mailbox databases.

6. Conversation View

Another great feature for admins-on-the-go is Conversation view. On your mobile device, you can sync and scroll through conversations and get the administration work done remotely.

7. Outlook Web App

Outlook Web App (OWA) allows you to access everything from instant messages, calendars, contacts, voicemail and email to SMS texts anywhere you go. This makes it easier for you to manage your email more efficiently.

8. Remote Move

With Remote Move, whenever you need to move a mailbox, you can specify each and every detail of the process. This level of micromanagement is essential to running a smooth system on the server.

9. Automation

Automation reduces the possibility of human error, improves efficiency, and cuts down on issues that waste time and energy. It works with email to send and automate actions that would previously require hours of work, letting a professional handle routine tasks without detailed knowledge of every step.

10. PowerShell

PowerShell offers scripting options that let you automate complex procedures on Exchange 2010. It helps ensure routine tasks are always accomplished in a consistent way.

11. Collaboration

Your team can use this to work together across the network. This is beneficial for obvious reasons. The staff can work through issues easily while directly connected to the server.

12. Remote Connectivity Analyzer

Gone are the days of chasing down connectivity problems. Now you can use this tool to locate issues and fix them quickly and efficiently. The Remote Connectivity Analyzer is web-based and extremely helpful.

13. Performance Monitor

Performance monitor is a cool tool that gathers information about the performance of your messaging system and then lets you create graphs, log performance metrics and perform historical trend analysis to track changes in your Exchange environment.

14. Unified Messaging Tools

Get statistics in a flash with unified messaging tools. This is a lifesaver when you need to track calls and logs of messages. You can query and find exactly what you need in no time. This organizational tool is an efficient way to manage communication within the network server.

15. Details Templates Editor

This tool allows you to control the appearance of object properties accessed through Outlook. It can be a time saver as well: it keeps everything organized, tidy and neat. Think of it as a janitor for the server.

About The Author: Vanessa is an IT blogger who reviews Exchange Server 2010 from Sherweb.com.

What Is Cloud Data Privacy, and How Is It Defined?

The question of "What exactly is privacy?" is a very difficult one to answer.

Part of this difficulty comes from the fact that privacy has a fluid and constantly changing definition. Privacy requirements and standards can vary based on technology, regions, and industries. And new events, regulations and precedents come into play every year, which makes this definition even more difficult to nail down.

So if data privacy is so hard to define, how do you measure the effectiveness of your privacy policies and practices?

There is a popular data lifecycle framework – usually attributed to KPMG – that serves as a model for many organizations. This lifecycle can be broken down into the following 7 stages:

Stage 1: Generation

Who owns this data, and how will it be classified? This will help limit potential access and exposure of the PII. And what rules and controls will be in place to verify that this data remains protected during use and storage throughout its lifecycle?

Stage 2: Use

Will this information only be used internally, or will external third-parties have access to the data? And how will you define “appropriate” use of this data? When considering third-party access, you should also think about what would happen if this data were part of a subpoena or discovery request.

Stage 3: Transfer

Will this data be transferred exclusively over private networks, or will it also travel outside of the internal network infrastructure? And what measures will be taken to encrypt this data and ensure that it can only be accessed by people with proper authorization?

Stage 4: Transformation

Will this data be preserved in its original form, or will it be further processed or aggregated with other data sources? Also, what measures will be taken to ensure the integrity of the data during processing?

Stage 5: Storage

Will this data be stored in a structured or unstructured format? Will it be stored in encrypted or unencrypted format? And what measures will be taken to prevent improper/unauthorized access or modification of the data while in storage?
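
As one hedged example of what "encrypted format" can look like in practice, the sketch below uses the third-party Python cryptography package (pip install cryptography) to encrypt a record before it is written to storage. The sample record is fictitious, and key management (a KMS or HSM, never a key stored next to the data) is deliberately out of scope:

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()      # in practice, stored in a KMS/HSM, not beside the data
    cipher = Fernet(key)

    record = b"customer_id=1842;status=active"     # fictitious sample record
    ciphertext = cipher.encrypt(record)            # safe to write to disk or object storage
    assert cipher.decrypt(ciphertext) == record    # decryption fails loudly if the data was tampered with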

Stage 6: Archival

What are the minimum and maximum retention periods for this data, and what measures will be taken to ensure the integrity and confidentiality of the data and its storage media during archival storage? These questions often depend on regulatory requirements.

Stage 7: Destruction

What measures will be taken to ensure the secure and complete destruction of the PII? How can you ensure that every copy has been destroyed?

6 Ways Spam Filters Block Out Unwanted Messages Before Ever Reading Them

With spam growing at such an alarming rate, blocking it can be a very resource-intensive undertaking. In order to deal with this threat efficiently, spam-protection companies have developed a number of tools for filtering out obnoxious or malicious email messages before they've even had a chance to pass through the network.

Whitelisting and Blacklisting

A whitelist is simply a list of senders that have been pre-authorized by you. If you work in an environment where security and confidentiality are important, you might want to prevent internal employees from receiving messages from anyone outside of the company.

A blacklist is the opposite of a whitelist: a list of IP addresses and email addresses of known offenders.

The problem with these two approaches is that the lists need to be managed by hand. They must also be continually updated as new spammers emerge and old spammers change their tactics.
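
The filtering logic itself is trivial, which is why list maintenance is the real cost. A minimal Python sketch, with purely illustrative addresses:

    # Addresses and domains below are illustrative placeholders.
    WHITELIST = {"partner.example.com", "ourcompany.example.com"}
    BLACKLIST = {"203.0.113.45", "spam-source.example.net"}

    def accept_sender(sender_domain, sender_ip, whitelist_only=False):
        if sender_ip in BLACKLIST or sender_domain in BLACKLIST:
            return False
        if whitelist_only:
            return sender_domain in WHITELIST
        return True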

Sender Policy Framework Checking (SPF)

The Sender Policy Framework (see http://www.openspf.org/) lets domains publish the mail servers and IP addresses that are permitted to send email on their behalf. This helps prevent email "spoofing", which exploits a long-standing weakness in the SMTP protocol. When an email is received, its sender domain and IP address can be compared against the published record to see if there is a match. If not, the message is blocked before it's ever accepted.

Header Syntax Checking

This is one of the simplest spam checks, but it's also one of the most effective. It simply consists of checking the syntax of incoming SMTP commands, such as the RCPT TO string. Poor syntax is suspicious, since professional mail server hosts adhere to very strict guidelines. Spammers, on the other hand, are generally much sloppier.

Reverse DNS Checking

The sending IP address can be checked against DNS to see if it's coming from a dial-up account or home computer. A professional email server would never be hosted this way, and the terms of service of most ISPs forbid hosting servers on a consumer home connection.
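
A rough Python sketch of this check using only the standard library; the keyword list is an illustrative assumption, since real filters rely on curated DNSBL data rather than hostname guessing:

    import socket

    # Illustrative tokens that often appear in consumer connection hostnames.
    SUSPICIOUS_TOKENS = ("dyn", "dialup", "dsl", "cable", "pool", "ppp")

    def looks_like_consumer_connection(ip_address):
        try:
            hostname, _, _ = socket.gethostbyaddr(ip_address)
        except socket.herror:
            return True  # no reverse DNS record at all is itself suspicious
        return any(token in hostname.lower() for token in SUSPICIOUS_TOKENS)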

Bounce Tag Address Validation

Have you ever mistyped an email address and had the message bounced back? Well, some spammers will try to trick you into thinking that one of your own messages has bounced. The way to protect against this trick is to insert a secret code in the header of every email you send out. If a "bounced" message comes back without this code in the header, you know it's a fake.
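
One way to implement that secret code is an HMAC tag, similar in spirit to Bounce Address Tag Validation. The following Python sketch is illustrative; the key and the choice of tagging the message ID are assumptions, not a description of any particular product:

    import hashlib
    import hmac

    SECRET_KEY = b"rotate-me-regularly"  # illustrative; keep it out of source control

    def make_bounce_tag(message_id):
        # Stamp every outgoing message with this tag in a custom header.
        return hmac.new(SECRET_KEY, message_id.encode(), hashlib.sha256).hexdigest()[:16]

    def is_genuine_bounce(message_id, tag):
        # A "bounce" without a valid tag was never sent by us.
        return hmac.compare_digest(make_bounce_tag(message_id), tag)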

Message Deferring

When a new message comes in, the receiving server sends a defer response to the sending server. This is a bit like saying "We're busy right now. Please try again in 5 minutes." When the message is sent again a few minutes later, that mail server passes the test. Spam bots are not likely to retry the message as requested the way a legitimate server would.
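
This technique is commonly known as greylisting. A minimal Python sketch of the decision logic, with an illustrative delay and an in-memory store that a real implementation would persist:

    import time

    DEFER_SECONDS = 300     # "please try again in 5 minutes"
    first_seen = {}         # in-memory for the sketch; real systems persist this

    def should_defer(ip, sender, recipient):
        key = (ip, sender, recipient)
        now = time.time()
        if key not in first_seen:
            first_seen[key] = now
            return True                                 # defer the first attempt
        return (now - first_seen[key]) < DEFER_SECONDS  # accept once the wait has passed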

These are just a few basic ways that unwanted messages can be deflected before ever being received by your mail server. After that, you can begin a more advanced spam-detection process.

How Does A SYN Flood Attack Work?

SYN flood attacks can be used to take down servers by overwhelming their capacity for open connections.

Let’s suppose that there was a radio call-in show that you wanted to take off the air, and you know that they have a phone system capable of holding 7 callers on hold at the same time. So you go to the store and buy 7 cell phones. When the lines open up, you call the switchboard with all 7 phones simultaneously so that the show can’t accept any legitimate callers.

SYN floods work in a very similar manner.

When you contact a web server (or any other system), there is a standard “handshake” which takes place.

  • First, you send a packet which contains a SYN or “Synchronize” flag. This signals the start of a conversation.
  • If the server is willing to start a conversation, it replies with a packet carrying both the SYN and ACK (Acknowledgement) flags, and you complete the handshake by sending back a final ACK. The connection then remains open until you close it or it times out.
    • If the connection is refused, the server SHOULD send you a packet with a RST or Reset flag. But many systems and firewalls don’t reply at all to rejected connections for security reasons.
  • Once the conversation has ended, you send a packet with a FIN or Finish flag to close the connection and free up these resources for others to use.

In a SYN attack, the attacker issues multiple packets with spoofed sender addresses. Each of these packets contains a SYN flag that lets the server know this spoofed user wants to open a connection.

Of course, the server will reply to each of these with a SYN-ACK and hold a slot open while it waits for the final ACK, but the spoofed sender will never receive the reply or respond. But that's ok.

The server will hold these half-open connections until they time out. And since open connections are a finite resource, the victim's computer will temporarily become unable to accept new incoming connections.
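
On the defensive side, one rough symptom to watch for is a spike in half-open connections. The Linux-specific Python sketch below counts sockets in the SYN_RECV state by reading /proc/net/tcp; the alert threshold is an arbitrary assumption:

    SYN_RECV = "03"          # state code for half-open connections in /proc/net/tcp
    ALERT_THRESHOLD = 200    # arbitrary; tune to your normal baseline

    def count_half_open():
        with open("/proc/net/tcp") as f:
            next(f)  # skip the header line
            return sum(1 for line in f if line.split()[3] == SYN_RECV)

    if __name__ == "__main__":
        half_open = count_half_open()
        if half_open > ALERT_THRESHOLD:
            print("Possible SYN flood: %d half-open connections" % half_open)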

Although this type of attack doesn’t cause any actual damage to the remote host, it can cause a major nuisance by making it temporarily unavailable to legitimate users.

Advantages of iSCSI for Storage Area Networks (SAN)

In a typical Storage Area Network configuration, you might have a few servers sharing a pool of storage in a disk array.

Ideally, the servers should be able to see these networked storage devices as local drives and run applications on them in the same way that you would run applications on the C drive of your computer.

When a pool of storage can be shared in this manner, it can lead to some interesting benefits such as:

  • Better overall storage usage: When each machine has its own hard drive, you may have some boxes running wastefully at 10% capacity, and others running dangerously at 90% capacity. With a Storage Area Network, there is less waste because storage can be dynamically assigned as needed.
  • Better resiliency and disaster recovery: When you pool storage at a single point, you greatly speed up and simplify the processes associated with backup and recovery. This kind of storage pool also allows you to spread out your data in such a way that no data will be lost if one of the drives in the array fails.
  • Lower maintenance costs: Since all of the data is stored at a single point, you only have a single device to maintain.
  • Energy cost savings: Drives that spin idle and servers that must be cooled despite doing little work are a major source of IT energy waste; consolidation reduces both.
  • Datacenter space savings: When you consolidate multiple physical devices into a single location, you free up space to grow and delay or eliminate the need for expensive server room upgrades.

As you might imagine, a networked server/storage configuration like this requires some powerful tools and extremely fast connectivity.

Traditionally, these types of SANs would be implemented using Fibre Channel technology. Although Fibre Channel is incredibly fast and robust, it's also very expensive and difficult for smaller businesses to implement.

  • The problem with Fibre Channel is that it relies heavily on proprietary hardware and expensive optical cabling. With iSCSI, you can get performance similar to FC using off-the-shelf commodity hardware.
  • Also, the 4 Gb/s optical cabling used in FC networks can be overkill, since disk I/O is often too slow to take advantage of this speed. For most applications, iSCSI lets you get comparable performance over a 1 Gb/s Ethernet connection.
  • Many would argue that iSCSI is actually easier to manage than a Fibre Channel SAN because it's based on standard Ethernet technology, so network administrators require no special training.

If your company has been thinking about setting up a Storage Area Network, but doesn't want to spend a fortune on Fibre Channel technology, iSCSI may be a good starting point to consider.

Why are iSeries, System i, AS/400, etc… owners so fanatical?

IBM’s servers are somewhat enigmatic. Computer experts are always praising the virtues of these systems, but it’s often difficult to get specifics on WHY.

And IBM isn't much help either. They've been successful despite doing a HORRIBLE job of marketing their systems.

In fact, you can visit their web site right now and ask them for information via their live chat. You'll probably find that even their own employees can't explain the basics of iSeries.

Another reason that you’ve never heard of iSeries servers is that they’re incredibly resilient. With Windows systems, you’re constantly hearing about crashes, viruses, hacks and malware. And although Linux systems don’t crash as often, they usually cause more damage when they go down. iSeries servers simply don’t make headlines like this.

IBM has already sold hundreds of thousands of iSeries and AS/400 servers over the years, and most of the System i installations are in smaller businesses. (Usually under 100 employees)

System i has been around for a VERY long time. (About 4 years longer than PCs) One of the reasons that it’s survived so long is that it’s outlasted many other competitors such as:

  • Univac
  • NCR
  • Control Data
  • Wang VS
  • DEC
  • Burroughs

In fact, many argue that it would've also beaten UNIX if the World Wide Web had never been created.

But despite being old, iSeries is not obsolete. And that's another reason you never hear about them: other systems like Windows are designed to be obsolete within a few years. You need to keep buying the same software over and over, every few years… just so you can stay current. This requires a lot of marketing.

Instead of marketing their product aggressively, IBM has focused on building the best product they could. iSeries owners keep their systems for much longer than with other vendors, and they’re incredibly loyal. This has created enough repeat business and word-of-mouth for IBM to maintain market dominance by letting their clients do the selling for them.

iSeries servers are strong like a tank, and incredibly robust & resilient. Many companies have reported owning them and running them continuously for several years, without any downtime at all.

Also, iSeries servers are a completely integrated solution. IBM supplies everything, including software, OS and hardware. Although this locks you into a single vendor, you end up making your money back with lower TCO, cheaper integration and increased reliability.

That’s a good general overview of the reasons behind iSeries fanaticism. I’ll go in a bit deeper to show some of the technical features that make IBM systems so incredibly different and noteworthy.

(Image Source: IBM Media Resources)

Beware of Virtualization Sprawl

Although virtualization is amazing for reducing power consumption, cutting down maintenance costs, improving disaster recovery and resiliency, and just doing more with less…  there’s another aspect of virtualization that you need to be careful with.

One of the key things that you'll notice when you start running virtualized systems is that it's incredibly easy to add new servers. Although this might seem like a benefit at first… over time, it can actually become a hindrance.

Because of this convenience, people within your organization will constantly be tempted to add new systems whether they need them or not. This will result in higher maintenance costs and more complicated systems.

Here are a few examples:

  • When there is poor control over decisions to add new servers, you will often end up with a mish-mash of dissimilar servers and configurations. This complexity adds to the costs and difficulties associated with maintenance, support and data protection of the virtual servers.
  • Whenever you install unnecessary servers, you also unnecessarily increase the licensing costs associated with those systems.
  • If you add a new server without telling anyone or documenting its implementation, you could be creating a serious hole in the backup process which could lead to costly data loss.
  • A forgotten system may be poorly-configured, unmaintained or un-patched. This could lead to potential internal security problems later on.
  • Often, you may have installed a server and forgotten its original purpose. Although simply removing this virtual server may be tempting, there is always a risk that the server you are deleting supports a critical business process.
  • Before installing a new server, you must understand what your privacy and compliance obligations will be in regards to the information that will be kept and processed by this machine.

Remember, the whole point of virtualizing the company’s servers is to simplify the work load.

You need to make sure that there’s a process in place for evaluating the need for new business systems.  Then, cut scope relentlessly.  Make sure that only the most essential new services are added, and that no unnecessary servers are created.

Otherwise, you’re only asking for more management headaches.

How DDoS Attacks Get Around IP Address Filters

So now that DDoS (Distributed Denial Of Service) attacks are becoming more common, you might be wondering why web hosts and network admins don’t simply block traffic from suspicious locations.

The answer is a bit complex, but very interesting.

The TCP/IP protocol is the universally agreed-upon set of rules that dictates how machines talk to each other over the web. When this protocol was first designed in the 1970s, computers were few, and usually the exclusive domain of large corporations.

Back then, it was unthinkable that there would someday be billions of dirt-cheap machines on a single worldwide network… mostly owned by unsophisticated or technologically illiterate users. Or that someone might buy a machine with the intention of exploiting the trust of their peers.

When designing the mechanism by which these systems would talk to each other, the main focus was on efficiency, practicality and functionality.

Whenever you send out a packet of information over the web, these packets are coded with blocks of information such as:

  • Source address/port
  • Destination address/port
  • Total length
  • Protocol
  • Data
  • Etc…

When you send out a packet, the server on the other end will reply to the address supplied in this packet. It would not make sense to falsify the sender IP address in this packet, since the reply would just go into empty space.

DDoS attacks take advantage of this flaw in the TCP/IP packet structure in order to attack targets… even when those targets are blocking or filtering incoming traffic sources.

Let’s suppose that Server A is configured to only receive packets from a trusted machine called Server B. Server X sends a packet to Server A which contains the following information:

  • Source: Server B
  • Destination: Server A

Although the packet was sent from an unauthorized machine called Server X, Server A will accept the packet since the source is listed as Server B… a trusted source. It will then process the contents and send a response to Server B. But, since Server B was not expecting a response, the packet will be ignored.

Server X will never get a response, but that’s ok. If Server X is part of a large group of malicious machines… all acting in unison… they can overload Server A with bogus requests, causing it to become unavailable.

And this is done despite the fact that Server A is only configured to accept traffic from Server B.
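
To see why address-based trust fails, here is a deliberately simplified Python sketch. The packet structure and addresses are illustrative, and the point is only that the source field is whatever the sender chose to write:

    from dataclasses import dataclass

    TRUSTED_SOURCES = {"10.0.0.2"}   # "Server B"

    @dataclass
    class Packet:
        source: str        # entirely under the sender's control
        destination: str
        payload: bytes

    def naive_accept(packet):
        return packet.source in TRUSTED_SOURCES

    # "Server X" forges the source field, and the filter happily lets it through.
    spoofed = Packet(source="10.0.0.2", destination="10.0.0.1", payload=b"bogus request")
    assert naive_accept(spoofed)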

This type of attack is difficult to stop, and the actual source of these invalid packets can be difficult to trace. Thankfully, a number of useful tools have been developed to help with this problem over the years. And I’ll be discussing those in another post.

But for now, I hope you can see the futility of relying exclusively on IP filtering to stop unwanted traffic.

Protect Yourself From “Dark Clouds” and EDoS Attacks When Hosting Servers In The Cloud

Any time a new technology empowers people to do good, there will be people who use this power to do evil. And this is especially true on the Internet.

As a result, the amazing popularity of cloud computing has opened up a number of new security threats.

With Infrastructure-as-a-Service (IaaS), computing power becomes a commodity. You are no longer paying to own a physical box that will run your software, or paying for the support and maintenance services that come along with this.

Instead, you only pay for the bandwidth, processing power and storage that you use. This is similar to the way you only pay for the water and electricity that you use as part of your monthly utilities bill.

And because cloud servers are much more robust and scalable than typical in-house systems, they are also less vulnerable to traffic spikes and DDoS (Distributed Denial of Service) attacks.

But part of the power of cloud server hosting can also be used against the server owner for a new kind of attack.

Economic Denial of Sustainability attacks – also called EDoS attacks – don’t aim to overload and crash a server in the way a typical DDoS attack would. Instead, the goal of an EDoS attack is to use up cloud resources in such a way as to cause the server hosting costs to break the victim’s budget.

This option is particularly attractive right now, since many cloud hosting providers make it difficult or impossible to place caps or limits on usage. This will hopefully change in the near future, as this threat becomes more pronounced.

The cloud is also changing the way these kinds of attacks are being deployed.

In a typical DDoS attack, a virus would infect computers in such a way that all of their resources can be combined to attack specified targets in unison. Because these are usually PCs or laptops with limited computing power, thousands – or even millions – of machines must be infected in order to execute an effective DDoS attack.

But thanks to cloud computing, hackers can now search for unsecured servers hosted on the IaaS provider’s infrastructure. Because each one of these cloud servers has access to massive amounts of resources, a single hacked system can inflict a lot of damage. This makes cloud servers very attractive to hackers.

In this case, there isn’t much that the IaaS provider can do, since the responsibility of configuring, patching and securing the guest servers rests entirely on the client’s shoulders.

Hosting your servers in the cloud is great, as long as you take security seriously.


Cost Reduction Benefits Of Cloud Hosting (Video 3)

Over the past few days, we've featured a series of video interviews from CoreVault discussing the flexibility and availability benefits of cloud computing. But there is one major factor that makes cloud computing especially attractive to businesses: cost savings.

And that’s the topic of today’s video, which is the last of the 3-part series.

The economy has been up and down a lot these past two years, and I don't know a single company that isn't watching its expenses very closely. This is especially true when a company needs to replace equipment such as servers. Other triggers for cost reduction talks include eliminating capital expenses or reducing ongoing monthly expenses. Whenever any of these areas comes up for discussion, your business should be evaluating Cloud Hosting as a viable solution to these financial challenges.

You can also learn more about pricing and how Cloud Hosting can improve these important areas of your business by going to http://www.corevault.com/quickquote.

7 Ways The Current Economy Is Affecting IT Spending

Things are pretty bad right now for businesses, and things seem to only be getting worse. In order to survive the extended financial storm, companies need to squeeze every possible bit of unnecessary spending out of all corporate cost-centers.

One of the areas being most harshly affected by these cutbacks has been the IT department.

Below, I’ve listed 7 of the most common ways companies are changing their IT spending, and how IT vendors can align themselves to take advantage of these opportunities.

Investments in training and education have been reduced.

This applies both to front-line staff and IT support staff. Companies are demanding simpler, more intuitive interfaces… sometimes with fewer features. Also, this reduction in training is leading to lower ROI, since employees don't leverage tools to their full potential.

In the near future, we may start to see more serious consequences emerge from this trend in the areas of compliance and disaster recovery… which are constantly increasing in complexity.

Companies are postponing their IT hiring decisions.

IT staff are being pressured to do more work in less time with the same headcount. This can lead to poor prioritization, where important long-term projects get pushed back in order to put out smaller fires.

Also, employees might start pushing themselves past their limits. When this happens, staff rush to deliver quickly without laying down the proper foundation, or projects get delivered late and the quality of the work suffers.

The end-result is that projects end up having to be scrapped and re-done multiple times.

Hardware and software projects are cancelled or postponed.

Companies that try to cut costs by postponing critical upgrades will usually end up paying more as a result of higher Total Cost of Ownership. However, these additional costs may be harder to track and quantify.

Productivity, downtime and maintenance all need to be taken into account when evaluating the decision to postpone spending.

There are also other risks that come up as software vendors stop supporting older systems, and new “quick-fix” applications get added which could become a potential source of conflict later on.

Companies will cut staff.

Unfortunately, this is a tactic that’s increasing in frequency.

Whenever a company cuts staff, it has a devastating effect on morale and productivity. High-quality employees begin planning their exit, while low-quality employees become apathetic to the needs of the organization. Also, staffing cuts can cause hostility and political disputes amongst upper management.

Whenever a long-time employee leaves, you also lose a lot of critical undocumented company knowledge that can’t be replaced.

Asking IT staff to work longer hours.

An employee might gladly stay late to work for a day or two. But after three or four times, it will begin to feel like an imposition. To your IT staff, staying late effectively amounts to a reduction in salary and an intrusion on their personal lives.

Over-working your employees might also have the opposite effect of actually lowering their productivity. There’s an old expression that says “a slave can get more work done in 6 days than he can in 7”.

Asking your employees to work late can also cause other legal, morale and staff turnover problems.

Outsourcing IT functions.

Outsourcing can be difficult because the outsourcing company must take the time to understand your IT systems and business processes. Also, there is some risk in the fact that an outsourcing vendor may also be working with competitors or other companies that may cause a conflict of interest. And if you ever have a problem with your vendor, there may be long-term contracts in place and the transition process will also be very difficult.

Implementing SaaS and Managed Services

Many companies are turning to SaaS instead of implementing new in-house systems because it requires no up-front capital investments. However, many companies are concerned about security and privacy of their business data when relying on a third-party provider. Business continuity could also be a concern if the SaaS provider ever goes out of business.

What Is Operating System Virtualization?

When we typically think about virtualization, we imagine a hardware device which can host multiple different operating systems within the same box, with each of these operating systems having its own separate resources and acting independently of the other guest operating systems.

In this example, the operating systems reside on top of the virtualization software. With operating system virtualization, the operating system is installed first, and then the virtualization software is installed on top of it.

Unlike with other forms of virtualization, operating system virtualization will only let you create servers whose configurations are identical or similar to the host OS.

Doesn’t this seem like a major downside? What could possibly be a practical application for this kind of virtualization?

You’re staring at it right now.

Web hosting companies will place many virtual web servers on a single box or blade. Each of these servers is identical to the host system and independent of the other servers on that same device.

  • This is very convenient because patches or modifications can be made to the host server, and they will instantly be applied to all of the “containers” on this device.
  • Also, this is much more efficient than traditional virtualization since you don’t have to install a new OS for each guest.
  • Not only can you run more guest operating systems on a single server, but you can do so without having to purchase new server licenses for each guest OS since there is actually only a single core server running.

Another example would be a company that has to manage multiple SQL databases or any other scenario where many similar or identical servers need to be hosted or managed within the same datacenter.

Although it lacks flexibility, operating system virtualization is very good at managing large numbers of homogeneous servers.

Accessibility Of Data And Services Hosted In The Cloud (Video 2)

Our friends at CoreVault have recently published the second video in their 3-part series on cloud hosting. If you've ever been curious about the accessibility of your servers when moving to the cloud, I'd definitely recommend checking out this video.

Here is a short description, according to CoreVault:

Accessibility is the name of the game these days. Businesses need to be up and running 24/7, with little room for downtime, if any at all, depending on your business needs. The mobility of workforces and the diverse needs of organizations require 24/7 access to data. Cloud hosting can provide that level of access while improving your disaster recovery and business continuity plans at the same time.

To learn more about pricing and how Cloud Hosting can improve these important areas of your business, go to http://www.corevault.com/quickquote.

The Difference Between IaaS, SaaS and PaaS

With every new technology comes a new set of trendy IT acronyms that we must memorize. In the case of cloud computing, three of the most important are IaaS, SaaS and PaaS.

In order to fully understand the implications of these acronyms, you first need to understand virtualization. Since virtualization is the underlying technology that enabled cloud computing to flourish, I felt it was important to add it as a 4th term in the series.

Virtualization (Private Cloud)

Virtualization was the major technological leap that caused the “cloud computing” trend to finally take off. Although virtualization technology has been around for several decades, it’s only recently been available for commodity x86 chips. And this is what really took it out of the mainframe and into common small business server rooms.

In layman’s terms, a private cloud is a physical server – owned by you – which can run multiple operating systems at the same time. In order to accomplish this, you need to add an extra layer of software between the physical hardware and the operating system. This software – often called a “hypervisor” – allows each operating system to exist inside of an isolated bubble that allows it to run with all of its own resources… as if it was the only OS in the box… although it’s actually sharing resources with other operating systems.

In the case of servers, this is accomplished using a variety of different methods. (paravirtualization, hardware emulation, OS virtualization, etc…) For now, you don’t need to know the deeper technical details of how this is accomplished.

IaaS (Infrastructure-As-A-Service)

IaaS is very similar to a private cloud, except for the fact that you do not own the server. Instead, a third party allows you to install your own virtual server on their IT infrastructure in exchange for a rental fee.

This is the simplest form of “as-a-service” variations. The biggest difference between IaaS and Private Clouds is that the customer (you) does not have control over the virtualization layer (hypervisor) or the actual hardware. But everything above those layers is 100% within your control.

SaaS (Software-As-A-Service)

SaaS is the easiest and most frequently used “as-a-service” variation.

This term is used to describe any application that is managed and hosted by a third party, and whose interface is accessed from the client side.

Thanks to new technologies such as AJAX and HTML5, the vast majority of SaaS applications run directly from the web browser without requiring any additional downloads or installations on the client side. Common examples include Gmail, Salesforce and YouTube.

There are also other examples – such as Skype or online backup – that require software downloads from the SaaS provider.

PaaS (Platform-As-A-Service)

PaaS is the most complicated of the acronyms.

Essentially, PaaS provides developers with a framework that they can build upon in order to develop their own applications or customize existing SaaS applications.

This is similar to how you would create your own macros in your favourite spreadsheet, except you’re doing it using software resources and components which are owned and controlled by a third-party.

Four of the best-known examples of PaaS are Amazon Web Services, Google Code, Salesforce PaaS and Windows Azure.

The main advantages of PaaS are that it allows for rapid application development and testing, and that applications built on it scale really well.

And there you have it in a nutshell: the differences between IaaS, SaaS and PaaS.

The Difference Between CDP and Scheduled Backups

When shopping around for online backup services, you'll often find that these services come in one of two varieties: Continuous Data Protection (CDP) and Scheduled Backup.

So how do you know which is most appropriate for you? Let’s take a look at both methodologies, and compare their features and benefits.

With CDP backups, the backup software is constantly on the alert for any changes in your files. As soon as a document or file has been changed, these changes are immediately sent over for backup.
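
Conceptually, a CDP agent is a change detector wired to an uploader. The naive Python sketch below polls for modification-time changes; real CDP products hook the file system instead of polling, and the directory, interval, and backup_func callback are illustrative assumptions:

    import os
    import time

    WATCH_DIR = "/data/documents"   # illustrative
    POLL_SECONDS = 30               # illustrative

    def snapshot(root):
        # Map each file path to its last-modified time.
        state = {}
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                state[path] = os.path.getmtime(path)
        return state

    def watch_and_backup(backup_func):
        previous = snapshot(WATCH_DIR)
        while True:
            time.sleep(POLL_SECONDS)
            current = snapshot(WATCH_DIR)
            for path, mtime in current.items():
                if previous.get(path) != mtime:
                    backup_func(path)   # ship the new or changed file right away
            previous = current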

Of course, there are a few things you should consider before looking into CDP backups:

  • CDP has one obvious major benefit in the fact that your backups will always be current as of a few minutes ago. Compare this with a scheduled backup, where you’ll always risk losing a fixed amount of data… depending on the interval you’ve selected. (ex: With a daily scheduled backup, you always stand to lose up to 24 hours worth of data)
  • If you’re a laptop user, you’ll want to make sure that your CDP backup has the ability to track changes while you’re disconnected from the network… and to automatically synchronize once you reconnect.
  • Because of the increase in network traffic caused by “pure” CDP backup, you’ll want to ensure that your CDP software has measures in place that can reduce the bandwidth burden. This can be accomplished through a number of techniques including compression and block-level incremental uploads.
  • Certain types of files – such as flat-file databases and Outlook PST files – are poorly suited for CDP since they change very frequently. This frequent updating can slow down your machine or take up unnecessary bandwidth. Files of this type are better suited to scheduled backups.
  • Since CDP backups spread the load evenly throughout the day, there is no need for a daily “backup window”.
  • CDP is best-suited to live working data, and poorly suited to environments where a lot of static or rarely-accessed information must be produced and stored. A classic example would be a fax program that receives lots of scanned contracts. (Once the contracts are saved, they will never again be modified) These types of files are best-suited to an archiving system. (A completely different breed of backup product)
  • Because CDP is constantly accessing the Internet and sending data over the network, it’s important to have extra layers of encryption implemented. For example, you’ll want to use a VPN service when accessing public WiFi hotspots (In case the SSL connection is compromised), and ensure that the data backup packets are encrypted from the client side before transmission.
  • It's important to note that most CDP backups will only protect the flat files on your system, and will not restore the actual OS. If this is a concern, you have one of two options at your disposal: look for an online backup service that offers bare metal recovery, or keep a copy of your system image on hand for emergencies.

Despite the benefits of CDP, it’s not recommended that you rely exclusively on continuous backups for protection. Ideally, the best solution would be to have an online backup service that combines both: CDP and Scheduled backup. And for rapidly growing volumes of rarely-accessed or static data, you should also implement an Archiving service in addition to your online backup.

About The Author: Storagepipe has been helping companies with their laptop and server backups for over a decade.

Why Companies Should Consider Cloud Hosting (Video 1)

This is the first in a 3-part series about the benefits of cloud hosting. It was put together by our good friends at CoreVault, and I strongly recommend checking this out.

We recently introduced our new Cloud Hosting services to the marketplace, leveraging VMware’s industry-leading technology. As with any new product – and especially with all the buzz around the Cloud – we wanted to help businesses understand which factors are most important to them when evaluating Cloud Hosting as a service.

So we have developed a new 3-part video series with CoreVault’s CIO, Raymond Castor, and Jeff Cato, V.P. of Marketing, to help shed more light on this topic. Here is Reason #1 – Flexibility.

Flexibility is absolutely critical when you need access to your applications and data from anywhere, at any time. The 24/7 environment we operate in these days demands exactly that kind of flexibility. Need to test some applications before making them live? That’s no problem with Cloud Hosting.

If you are interested in learning more about our Cloud Hosting Services, or would even like a cost assessment, go to http://www.corevault.com/quickquote. Join us again soon as we unveil Reason #2 for why you should be considering Cloud Hosting.

Key Trends Driving Server Virtualization

There are a number of key trends driving the current popularity of server virtualization. In this article, I’d like to highlight a few of what I consider to be the most important ones.

The Ability To Run On Commodity Processors

Virtualization technology has been around for several decades, but until recent years it was mainly confined to expensive mainframes at large corporations. Lately, however, we’ve seen the introduction of new hypervisors capable of running on cheap x86 commodity processors. This has made virtualization accessible to a much wider market.
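As an aside, a quick way to check whether a commodity x86 machine can run one of these modern hypervisors efficiently is to look for the Intel VT-x (vmx) or AMD-V (svm) CPU flags. The snippet below is a rough, Linux-only sketch that simply reads /proc/cpuinfo; it is an illustration, not a substitute for your hypervisor vendor’s compatibility checker.

    # Checks a Linux machine's /proc/cpuinfo for the hardware virtualization
    # flags that modern x86 hypervisors rely on (rough sketch, Linux-only).
    def supports_hardware_virtualization(cpuinfo_path="/proc/cpuinfo"):
        with open(cpuinfo_path) as f:
            flags = {
                word
                for line in f
                if line.startswith("flags")
                for word in line.split()
            }
        return "vmx" in flags or "svm" in flags   # Intel VT-x or AMD-V

    if __name__ == "__main__":
        print("Hardware virtualization supported:", supports_hardware_virtualization())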

Exponential Technology Improvements

We’ve all heard of Moore’s Law. And the same kind of exponential improvement applies not just to processors and chips, but also to storage, networking and other areas of computing.

As servers become cheaper, more powerful and more plentiful, the only way to keep the datacenter organized while minimizing costs is to consolidate all of those servers onto the fewest possible physical devices.

Green Computing

Every year, chips get denser and more powerful, and each new generation of servers tends to draw more power than the last. As a result, we’re now seeing datacenters full of servers that are using 15% or less of their total processing capacity.

Also, these systems need to be cooled using energy-hungry cooling and ventilation systems.

As a result, energy bills are quickly becoming one of the top costs associated with IT management. We’re seeing a sharp rise in the number of companies that have to upgrade the electrical (hydro) connections in their datacenters and create a separate electricity billing account for the IT department.

Server consolidation helps minimize the number of hardware devices and reduce overall power usage.

IT Maintenance Costs

Companies are continually adding new servers to their datacenters, and these machines must all be physically installed and maintained by on-site IT workers.

By consolidating servers onto the fewest possible number of physical devices, you can keep growing your server infrastructure without having to hire new IT staff. Many organizations have reported maintenance cost savings of roughly 40% per virtualized machine.

Under-Utilized Hardware

Datacenters are full of servers running at partial capacity. You’ll often see hard drives that are only half-full and processors that will never use more than 15% of their total processing power. And while these boxes are sitting idle – gobbling power and taking up expensive datacenter real-estate – you have to pay on-site staff to maintain these servers.

Consolidation can squeeze out the waste and eliminate lots of unneeded hardware so that you can keep your datacenter nice and tidy.

Disaster Recovery and High-Availability

One of the less often talked-about benefits of server virtualization has to do with data protection.

When you consolidate all of your storage to a single point, the backup process becomes much faster and easier. This is especially important for companies that are concerned about backup windows.

In case of hardware failure, virtual machines can easily be moved to another piece of hardware. And virtual servers can be rebuilt within minutes, as opposed to physical servers, which can take hours or days to rebuild.

How strong admin safeguards protect your practice (HIPAA IT Disaster Protection)

It is 3am and you receive the dreaded phone call from your alarm company.  There is a fire in progress at your practice and the fire department is on scene.  You quickly scramble out of bed to dress and head to the clinic to inspect the damage.

You are relieved when you get there.  The fire gutted the business next door and there was very little fire damage to yours.  Until you walk inside.  Then you notice that the intense heat, smoke and water ruined everything.  The loss of your furniture is bad enough, but your computer equipment, and most importantly, your server, is now unusable.  What can you do to get your practice up and running?

Disaster recovery is an important part of the administrative safeguards for your practice.  While there is a slight risk that you will have your computers hacked or that employees will share information they shouldn’t, there is a much greater likelihood that your practice will suffer from some sort of disaster that affects your operation for at least 3 days.  What steps can you take to get your practice up and running immediately after an event like a fire?

Moving a copy of your data back-up off-site is critical to recovery.  The more recent the data on the back-up, the quicker your recovery in the event of a disaster.  Having only last week’s back-up in the safe deposit box potentially means that a week’s worth of data is lost if something happens and the most recent back-up (still at the office) cannot be used.  Even in today’s modern small healthcare practice, a week of work can amount to millions of data points that might be lost and unrecoverable.  Seriously consider moving your data back-up offsite daily, using an online service to ensure availability.
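For practices that want to automate this, here is a very rough sketch of what a nightly offsite backup job might look like. The directory paths and the upload_to_online_service function are hypothetical placeholders for whatever online backup service you choose; treat this as an illustration of the routine, not a ready-to-use script.

    # Rough sketch of an automated nightly job that packages practice data for offsite storage.
    import datetime
    import tarfile

    def make_nightly_archive(data_dir="/srv/practice-data", dest_dir="/tmp"):
        stamp = datetime.date.today().isoformat()
        archive_path = f"{dest_dir}/practice-backup-{stamp}.tar.gz"
        with tarfile.open(archive_path, "w:gz") as archive:
            archive.add(data_dir, arcname=f"practice-data-{stamp}")
        return archive_path

    def upload_to_online_service(archive_path):
        # Stand-in for the vendor's upload client or API.
        print("handing", archive_path, "to the online backup service")

    if __name__ == "__main__":
        upload_to_online_service(make_nightly_archive())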

Keep the CDs and licensing information for all the software in your practice in the safe deposit box.  Restoring data is relatively easy; reinstalling software without the install codes could delay your practice’s reopening.  No matter what software developers tell you, restoring applications from a backup seldom goes smoothly.  Keeping software only at the office can lead to problems in the event of disaster recovery.

Find a practice with a similar set-up in your area and work out a site-sharing arrangement in the event of a disaster.  It should be a mutual agreement in which each practice can use the other’s space in the event of specific damage to its respective building.  This type of agreement can really help reduce patient stress, especially if practitioners are going to be out of their building for a period in excess of three days.

Practice a part of your disaster recovery plan quarterly.  One of the most important areas to test is restoring the back-up.  Work out: Who will get the back-up tape from the safe deposit box?  Where do we get the equipment to replace lost computers?  What technical problems come up during the restore?  Addressing these issues before a disaster strikes can help smooth the recovery cycle should something happen.  Your IT professional should be able to provide test equipment to facilitate this vital drill.

As part of your disaster recovery planning, consider whether an off-premise service may offer a better solution.  Almost all off-premise services are located in specially designed facilities with advanced fire suppression and data recovery systems.  These services have powerful generators to keep the building operating during power loss, and battery systems to keep all the equipment online while the generators spin up.  With your applications hosted and your data stored on off-premise servers, you can be assured of 24/7 access regardless of any physical damage to your practice.

Remember, administrative safeguards help protect you against data loss.  While hacking and data theft are the scarier prospects, you run far more risk of data loss from fire, vandalism or severe weather.  Putting solid disaster recovery plans in place – perhaps including moving your applications and data to an off-premise service – can not only protect your patients’ data, but also speed up recovery if something happens.  Understanding your risks allows you to plan appropriately, and a good recovery plan gets your practice back into operation that much faster, reducing stress for everyone.

About The Author: John Caughell is the Marketing Coordinator for Argentstratus. They are leading experts in the field of cloud technology for the medical industry. If you have any concerns about privacy and security for PII or PHI in the cloud, get in touch with them. (PHI Protection and PII Protection)

Computing 101: What Exactly Is Virtualization?

Virtualization is currently one of the hottest topics in IT. This is the technology that’s enabling a number of other major trends such as Green Computing, Cloud Computing and High Availability to become practical and cost-effective for companies.

But before I can explain WHY virtualization is important, you must first understand exactly what it is that everyone is talking about when they use this futuristic-sounding word.

If you’re reading this as a non-technical person – or if you wish to explain this concept to a layman – it would be helpful to think about virtualized computing as part of a broader step in the evolution of computing technology.

The very first computer was the difference engine, designed by Charles Babbage. This was a purpose-built device that could only serve a single function. If you wanted to modify its purpose in any way, you had to modify or change physical components within the machine.

Although this was a major limitation, mechanical computers continued to be used for a number of practical applications.

By the mid-20th century, analog and digital electronic computers had been developed which could be reprogrammed. But just as with the original mechanical difference engine, they could only run a single program at a time.

For a modern-day analogy, imagine having to purchase a computer for MS Word, another computer for Web Browsing, and another separate computer for PowerPoint. My laptop currently has over 30 applications on it, so I would need to purchase a stack of 30 laptops.

Then, we began to see the emergence of computers with operating systems.

Operating systems were a revolutionary idea in computing, because they added a separating layer between the computer hardware and the software running on it. So the computer could act as a payroll processing unit first, then a sales forecasting unit, and then an invoicing machine. This functionality could be switched over quickly by loading different programs.

In this sense, an operating system could almost be thought of as a type of virtualization, where multiple processes can run simultaneously and share hardware through the use of an intermediary program which allocates resources efficiently and ensures that all of the programs play nicely together.

Now remember the laptop analogy I’d given you earlier, where you’d need a single machine for every purpose or program? Well, something similar to this happens within datacenters.

A company will purchase a physical server for email, a physical server for databases, a physical server for SharePoint, and another server for file storage. All of these servers take up a lot of room, use lots of electricity, and require a lot of maintenance.

This is a problem because many servers will never use more than 10% of their full processing capability during their entire lifetime. Wouldn’t it be great if this wasted 90% could be put to better use?

This is the problem that virtualization tries to solve.

Virtualization allows multiple operating systems to run and share resources on a single piece of hardware… much in the same way that an operating system acts as a “traffic cop” which allows multiple programs to run on a single piece of hardware.

This is accomplished by placing an additional layer of software between the operating system and the hardware. This software is often called a “hypervisor” or “Virtual Machine Manager”.

Just like how an operating system acts as a supervisor to the programs that you’re running, a hypervisor acts as a “supervisor to the supervisors” that run your programs.
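To see this hypervisor-as-supervisor idea in action, the sketch below asks a local QEMU/KVM hypervisor (through the libvirt management library) which guest operating systems it is currently running. It assumes the libvirt-python bindings are installed and a local libvirt daemon is available; it is an illustration, not part of any particular vendor’s product.

    # List the guest machines a local hypervisor is currently managing (via libvirt).
    import libvirt

    def list_guests(uri="qemu:///system"):
        conn = libvirt.openReadOnly(uri)   # connect to the hypervisor, read-only
        try:
            for dom in conn.listAllDomains():
                state = "running" if dom.isActive() else "stopped"
                print(f"{dom.name():20s} {state}")
        finally:
            conn.close()

    if __name__ == "__main__":
        list_guests()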

Virtualization is NOT a new concept. In fact, IBM mainframes have been able to run virtualized systems since 1967. However, these were huge, expensive machines that were impractical for all but the largest companies.

It wasn’t until recent years, when virtualization technology was redesigned to work on commodity x86 microprocessors – similar to the one in your computer right now – that virtualization became a major trend.

Now, smaller companies could reap all of the benefits of virtualizing and consolidating their servers using cheap off-the-shelf technology and software.

In another article, I’ll go into more detail about the features, benefits, and trends that are drawing attention to this very important technology.

Factors to Consider Before Setting Up An Intrusion Prevention System (IPS)

Intrusion Prevention Systems are designed to analyze your network traffic, and block or report any suspicious or malicious traffic activity. Intrusion Prevention Systems come in many different flavours, each with its own particular customer scenario in mind.

It’s definitely not a one-size-fits-all product.

Before your company starts evaluating Intrusion Prevention Systems, you need to have a clear idea about what goals you want to achieve, and what problems you want to solve. Below, I’ve included a few questions that you should ask before evaluating solutions for your organization.

Will this IPS be used as a primary defence mechanism, or will it supplement other systems?

If your IPS is designed to serve as a primary defence mechanism, it may include some basic features such as the ability to “shun” known and suspected attackers. However, shunning can backfire if it generates false-positives. That’s why some companies prefer to implement simpler IPS systems as just one part of a broader network protection strategy.

Will you require any kind of forensics capability? And if so, what kind of information do you require?

Any basic IPS can detect and prevent attacks without too much supervision. But if you require forensic information, this could drive up the cost and complexity of your IPS implementation.

Are you looking to stop specific types of attacks such as viruses or hackers, or will you just be monitoring for suspicious behaviour?

Although there may be some overlap between these two, most applications will lean more towards one side than the other.

Are you more concerned with traffic patterns or traffic volume?

As with the previous example, most Intrusion Prevention Systems offer both “rate-based” traffic volume analysis (ex: for DDoS prevention) and “signature-based” traffic pattern analysis (ex: for hacking detection) features. However, since each product is designed with a different customer profile in mind, each IPS will lean more to one side or the other.
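To illustrate the distinction, here is a toy sketch of the two detection styles applied to a stream of (source IP, payload) events: a rate-based check that flags too many events from one source within a short window, and a signature-based check that looks for known-bad byte patterns. The thresholds and signature patterns are made-up examples, not real IPS rules.

    # Toy illustration of rate-based vs. signature-based detection.
    import re
    import time
    from collections import defaultdict, deque

    RATE_LIMIT = 100          # events per source per window
    WINDOW_SECONDS = 10
    SIGNATURES = [re.compile(rb"(?i)union\s+select"),   # crude SQL injection pattern
                  re.compile(rb"\.\./\.\./")]           # crude path traversal pattern

    recent_events = defaultdict(deque)   # source ip -> timestamps of recent events

    def inspect(source_ip, payload, now=None):
        """Return a list of alerts raised for this event."""
        now = now if now is not None else time.time()
        alerts = []

        # Rate-based: count events from this source inside the sliding window.
        window = recent_events[source_ip]
        window.append(now)
        while window and now - window[0] > WINDOW_SECONDS:
            window.popleft()
        if len(window) > RATE_LIMIT:
            alerts.append(f"rate: {source_ip} sent {len(window)} events in {WINDOW_SECONDS}s")

        # Signature-based: look for known-bad byte patterns in the payload.
        for sig in SIGNATURES:
            if sig.search(payload):
                alerts.append(f"signature: {source_ip} matched {sig.pattern!r}")

        return alerts

    # Example: inspect("203.0.113.7", b"GET /page.php?id=1 UNION SELECT password FROM users")

A real IPS works on decoded network traffic and keeps far richer state than this, but the split between volume analysis and pattern analysis is the same.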

Do you want to protect servers or individual users?

An IPS that’s designed to protect individual users should be able to prevent incoming attacks while watching for outbound activity that might indicate an infected machine. However, a server-focused IPS will offer a much more granular ability to focus in on specific services and applications.

Are you focused on the core of your network, or the perimeter?

A core-focused IPS should be designed with high availability, overload control and raw performance in mind, while a perimeter-focused intrusion prevention system will weigh other factors, such as traffic inspection and latency, more heavily.