by Paul Rudo on 08/10/12 at 11:30 am
If you’ve been reading up on the latest innovations in microprocessor technology, you may have come across new buzzwords such as heterogeneous computing, Heterogeneous System Architecture (HSA), and Accelerated Processing Units (APUs). Many industry leaders, such as Oracle and AMD, have been very vocal about their belief that heterogeneous computing is the future of the industry.
Because of this, I thought that it might be a good idea to provide some background into what heterogeneous computing is, and how it differs from previous processor technologies.
Early computers were designed to automate data handling and calculations which had previously required lots of error-prone manual work. Central Processing Units (CPUs) were designed to take a list of complex calculations or data manipulation tasks, and sequentially perform these tasks at high speeds and with perfect accuracy.
Although the sophistication has increased a lot over the years, the essential idea behind a CPU has remained the same. CPUs are still the ideal mechanism for handling complex sequential tasks such as email or mathematical problem solving.
But when the PC era began, consumers saw that these machines had potential beyond strictly utilitarian functions. As consumers began to see the PC as an entertainment or gaming platform as well, they began demanding higher-quality graphical experiences. However, the sequential processing logic of traditional CPUs created a bottleneck which limited the speed at which high-quality graphics could be delivered. In response to this demand, Graphics Processing Units (GPUs) were created.
The main difference between GPUs and CPUs is that a GPU can perform many operations in parallel, making it faster and more responsive than a CPU for workloads that apply the same operation to large amounts of data. Because of this, GPUs are ideal for multimedia and gaming applications. But beyond entertainment, GPUs are also useful for many critical business tasks such as 3D modelling and crash test simulations.
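To make the parallel-vs-sequential distinction concrete, here is a minimal sketch in Python. It is not GPU code; it simply shows the *pattern* a GPU exploits: one independent operation (here, a hypothetical pixel-brightening step) applied to every element of a large dataset, so the work can be split across many workers at once. The function names and the brightness example are illustrative assumptions, not part of any real graphics API.

```python
from concurrent.futures import ThreadPoolExecutor

def brighten(pixels, amount=50):
    # One independent operation applied to every element -- the pattern
    # that maps naturally onto a GPU's thousands of parallel cores.
    return [min(255, p + amount) for p in pixels]

def brighten_parallel(pixels, workers=4):
    # Split the data into chunks and hand each chunk to a worker.
    # On an actual GPU, each pixel would get its own hardware lane.
    size = max(1, len(pixels) // workers)
    chunks = [pixels[i:i + size] for i in range(0, len(pixels), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(brighten, chunks)
    # Reassemble the chunks; the result is identical to the sequential
    # version, it was just computed by several workers at once.
    return [p for part in parts for p in part]
```

A CPU shines when each step depends on the result of the previous one; the GPU pattern above only works because no pixel's result depends on any other pixel's.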
But many applications require both the complex sequential processing capabilities of a CPU and the responsive parallel processing power of a GPU. This can lead to new bottlenecks, as information must be passed back and forth between the GPU and CPU while they work on a task.
Heterogeneous computing aims to eliminate this bottleneck by allowing the GPU and CPU to work on a problem simultaneously. In a heterogeneous processing architecture, this is often accomplished by having a single address space which can be accessed by the CPU, GPU, and other functional units at the same time. This greatly speeds up processing because data can be used by all processors without first having to move it.
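The copy overhead that a shared address space removes can be sketched in a few lines of Python. This is purely a conceptual model, not real driver or HSA code: the "CPU stage" and "GPU stage" below are hypothetical stand-ins for work done by each processor, and the explicit list copies stand in for transfers over a bus such as PCIe.

```python
def discrete_memory_pipeline(data):
    # Discrete model: CPU and GPU each have their own memory, so the
    # intermediate result must be copied across before the GPU can
    # touch it, and copied back again afterwards.
    cpu_out = [x + 1 for x in data]       # CPU stage
    gpu_mem = list(cpu_out)               # copy: CPU memory -> GPU memory
    gpu_out = [x * 2 for x in gpu_mem]    # GPU stage
    return list(gpu_out)                  # copy: GPU memory -> CPU memory

def shared_memory_pipeline(data):
    # Unified model: both processors operate on the very same buffer,
    # so only a handle to the data is exchanged, never the data itself.
    buf = list(data)
    for i in range(len(buf)):
        buf[i] += 1                       # CPU stage, in place
    for i in range(len(buf)):
        buf[i] *= 2                       # GPU stage, same buffer
    return buf
```

Both pipelines compute the same result; the difference is that the second one performs no intermediate copies, which is exactly the saving a single address space delivers when the data being shared is megabytes of textures or simulation state rather than a short list.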
The benefits of heterogeneous computing really fall into two major categories: development and efficiency.
- Development: This new architecture simplifies programming, and provides developers with greater flexibility and more powerful capabilities. It also has the potential to greatly improve the experience for gamers and for people using graphically intensive design and multimedia applications. Finally, it will lead to richer and more powerful web browser experiences, which will be increasingly important as productivity applications move primarily towards a SaaS-based delivery model.
- Efficiency: This new architecture also has the potential to reduce power consumption for applications which require both GPUs and CPUs. This means that mobile devices such as tablets and smartphones can offer longer battery life. It also promises to help reduce power costs for high-performance computing applications and supercomputers.
It’s important to note that heterogeneous computing isn’t limited to CPUs and GPUs; it also covers the incorporation of other types of specialized processors, such as digital signal processors (DSPs) and application-specific integrated circuits (ASICs).
(Image Source: AMD Press Resources. Copyright AMD)