Researchers have unveiled an ‘inexact’ computer chip that improves power and resource efficiency by allowing for occasional errors. Prototypes presented at the ACM International Conference on Computing Frontiers in Italy a couple of weeks ago are apparently up to 15 times more efficient than current microchips.
The research was carried out by Rice University in Houston, Singapore’s Nanyang Technological University (NTU), Switzerland’s Centre for Electronics and Microtechnology (CSEM) and the University of California, Berkeley. It was selected as the best paper at the ACM (Association for Computing Machinery) conference.
The idea is that power use can be reduced by allowing processing components — like hardware for adding and multiplying numbers — to make a few mistakes. By managing the probability of errors and limiting which calculations produce errors, it’s possible to both cut energy demands and boost performance.
One example given is ‘pruning’: trimming away some of the rarely used portions of digital circuits. The prototype chip showed that pruning could cut energy demands by a factor of 3.5 in chips that deviated from the correct value by an average of 0.25%. Study co-author Avinash Lingamneni said that “When we factored in size and speed gains, these chips were 7.5 times more efficient than regular chips. Chips that got wrong answers with a larger deviation of about 8% were up to 15 times more efficient.”
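To make the trade-off concrete, here is a minimal Python sketch of one well-known approximate-adder construction, a ‘lower-part-OR’ adder, which ORs the low-order bits together instead of adding them and so dispenses with part of the carry chain. This is an illustrative stand-in for the pruned circuits described above, not the team’s actual design; the 16-bit operand width and the choice of four approximated bits are my own arbitrary assumptions.

```python
import random

def inexact_add(a: int, b: int, k: int = 4) -> int:
    """Approximate a + b by OR-ing the k low-order bits instead of adding them.

    Skipping the low-order carry chain is what would save energy in hardware;
    the cost is an occasional small underestimate of the true sum.
    """
    mask = (1 << k) - 1
    high = ((a >> k) + (b >> k)) << k   # exact addition on the high bits
    low = (a & mask) | (b & mask)       # OR instead of add: no carries computed
    return high + low

# Measure the average relative error over random 16-bit operands.
random.seed(0)
errors = []
for _ in range(10_000):
    a, b = random.randrange(1 << 16), random.randrange(1 << 16)
    exact = a + b
    if exact:
        errors.append((exact - inexact_add(a, b)) / exact)
print(f"mean relative error: {100 * sum(errors) / len(errors):.4f}%")
```

Because the OR simply drops carries, the approximate sum never exceeds the exact one, and the average deviation stays small; the hardware analogue is removing the gates that would have computed those carries, which is the spirit of pruning even if the paper’s circuits differ in detail.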
The inexact hardware is also a key component of ISAID’s (Rice-NTU Institute for Sustainable and Applied Infodynamics) low-cost I-slate educational tablet designed for Indian classrooms with no electricity and too few teachers. Pruned chips are expected to cut power requirements in half and allow the I-slate to run on solar power from small panels. The first I-slates to contain pruned chips are expected by 2013.
When you first read that the new chip ‘challenges the industry’s 50-year pursuit of accuracy’, you start to wonder what’s going on here, and the information coming out of these research institutes doesn’t help much.
But project co-investigator Christian Enz, who leads the CSEM arm of the collaboration, made things a little clearer when he pointed out that “Particular types of applications can tolerate quite a bit of error. For example, the human eye has a built-in mechanism for error correction. We used inexact adders to process images and found that relative errors up to 0.54% were almost indiscernible, and relative errors as high as 7.5% still produced discernible images.”
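Enz’s observation is easy to simulate in software. The sketch below runs a toy neighbour-averaging filter over a random grayscale scanline twice, once with exact addition and once with an approximate ‘lower-part-OR’ adder standing in for the inexact adders he describes (the adder construction, pixel data and parameters here are all my own assumptions, not the paper’s), and reports the aggregate relative error.

```python
import random

def approx_add(a: int, b: int, k: int = 3) -> int:
    """OR the k low-order bits instead of adding them (drops small carries)."""
    mask = (1 << k) - 1
    return (((a >> k) + (b >> k)) << k) + ((a & mask) | (b & mask))

def blur_row(row, add):
    """Average each pixel with its right-hand neighbour using the given adder."""
    return [add(row[i], row[i + 1]) // 2 for i in range(len(row) - 1)]

random.seed(1)
scanline = [random.randrange(256) for _ in range(64)]  # synthetic 8-bit pixels

exact = blur_row(scanline, lambda a, b: a + b)
approx = blur_row(scanline, approx_add)

rel_err = sum(abs(e - p) for e, p in zip(exact, approx)) / sum(exact)
print(f"aggregate relative error: {100 * rel_err:.2f}%")
```

At this scale the approximation shifts a pixel’s brightness by at most a few grey levels out of 256, which illustrates why relative errors well under 1% are visually indiscernible.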
In fact, imaging is the only real example given in the news release (I haven’t read the research paper itself); other examples of how ‘inaccurate’ computer chips could be applied would have been useful. No doubt there are many things that computers do that don’t require 100% accuracy, where these sorts of margins of error are acceptable. But I’m surprised that we haven’t yet seen headlines in some parts of the press pointing out the consequences when it comes to calculating your phone bill, or docking with the International Space Station.