by Paul Rudo on 25/09/12 at 12:18 pm
Devices such as tablets and smartphones have popularized the touch screen to such an extent that it has become the default, expected interface for many other consumer devices. What’s more, the ease of use and ubiquity of touch screen designs in consumer products is having a knock-on effect on expectations for user interfaces in industrial equipment, medical products and a host of other non-consumer designs. And for designers looking to deliver the optimum user experience by unlocking the power of touch-based control, the future is, undoubtedly, multi touch.
Most people are already familiar with the concept of multi touch displays through the two-fingered ‘pinch, zoom, rotate’ features now standard on tablets and smartphones. However, these basic functions are really only the tip of the iceberg when it comes to what multi touch can deliver. The next step is to build on this familiarity by extending the power of multi touch to deliver tangible benefits in markets that are smaller than the consumer sector and where interface and usability challenges are much more complex. This means developing user-centric interfaces that leverage multi touch to deliver more sophisticated navigation and object manipulation and enhance and improve opportunities for collaboration.
Among the growing number of examples of good multi touch implementations are the 19-inch ‘digital canvas’ built around Baanto’s ShadowSense™ 2D and 3D tracking technology and the work done by Perceptive Pixel. The former replicates ‘traditional’ painting thanks to the tracking technology’s ability to simultaneously detect the location of the touch and the size and transparency of the object touching the screen. The latter includes 3D modeling and data visualization applications that allow engineers to interact directly with on-screen information without having to reach for a touchpad, mouse or keyboard. This is on top of designs that support improved collaboration through more fluid, simultaneous data and object manipulation and analysis.
Take, for example, Perceptive Pixel’s 27-inch multi touch display with an active stylus. Here the stylus can be used for writing and marking while the display can be manipulated through touch – in effect replicating the real-life use of pen and paper but with many more options and much more flexibility.
The great thing about all of these applications is that they go beyond consumer ‘gimmickry’ and support very practical ‘real world’ applications. The Perceptive Pixel multi touch display and stylus, for example, could quickly bring improved productivity to the desk of an engineer or an architect through a more natural and familiar environment for manipulating objects and designs. In addition, the digital canvas has the potential to allow graphic designers to do much more than is possible using a ‘conventional’ mouse, keyboard and graphics package.
But what about multi touch applications beyond the relatively ‘benign’ office environment? What about outdoor kiosks? Or on the factory floor? In the operating theatre? These multi touch systems typically need to be more robust and reliable and may need to be protected against accidental (or malicious) damage, unaffected by dust and fluids, and in some cases compatible with operators who will be wearing gloves.
Again, manufacturers are rising to the challenge – the next generation of projected capacitive multi touch technologies, for example, offers 10-finger touch that can be operated with bare or gloved fingers. Furthermore, multi touch implementations that enable areas of a screen to be locked and unlocked can address health and safety requirements by eliminating false activations, or provide the ‘child lock’ level of protection that, until now, has only been possible through physical controls.
Irrespective of the underlying technology deployed, the very best multi touch implementations will be those that put the user at the heart of the design process. All too often interfaces are developed by software engineers with no real experience of – or relationship with – the user, who end up simply meeting the interface specification at a functional level. An iterative, user-centric process in which the user’s requirements inform both the initial design and ongoing development is essential to harnessing the power of multi touch to deliver a truly optimized user experience.
About The Author: Rob Anders is the chief executive of andersDX, a user interface technology specialist dedicated to optimising the display Xperience (DX) of retail, industrial, medical and other ‘non-consumer’ applications.
Image Source: Wikipedia