Posits: Coming Soon To Hardware Near You

With applications in artificial intelligence and extensive hardware compatibility, high-accuracy posit arithmetic is set to rewrite the standard for computing.

Better answers, fewer bits

With posits, complex operations can often be turned into much simpler ones. For computers, multiplication is a particularly taxing operation and is prone to inaccuracies, especially when it involves non-integer real numbers.

The logarithmic number system is a case in point. Converting numbers to their logarithms lets multiplication be replaced by addition, but those logarithms have always been approximations rather than exact values. The result is long strings of digits that must be cut off to fit within a computer's bit limits, often leading to rounding errors.
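To see where those rounding errors come from, here is a minimal Python sketch of a toy logarithmic number system; the precision parameter and the encode and decode helpers are illustrative choices rather than any particular hardware format.

```python
import math

# Toy logarithmic number system (LNS): store each value x as round(log2(x) * 2**F)
# with F fractional bits, so that multiplication becomes integer addition of codes.
F = 8                                            # fractional bits kept for the logarithm

def lns_encode(x: float) -> int:
    """Quantize log2(x) to a fixed-point integer code."""
    return round(math.log2(x) * (1 << F))

def lns_decode(code: int) -> float:
    """Map a fixed-point log code back to an ordinary real value."""
    return 2.0 ** (code / (1 << F))

a, b = 3.0, 7.0
product_code = lns_encode(a) + lns_encode(b)     # multiplication carried out as addition
approx = lns_decode(product_code)

print(f"exact product : {a * b}")                # 21.0
print(f"LNS result    : {approx:.6f}")           # close to 21, but not exact, because
                                                 # log2(3) and log2(7) were rounded
```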

But in a recent breakthrough for posit arithmetic, a multiplication table of real numbers can be matched, entry for entry, with an addition table of small integers.

Because no decoding steps or mapping back to the original representation are needed, the calculations are simple and low power yet deliver exact answers. Such a perfect mapping works best at the low precisions used by artificial intelligence (AI) applications, Gustafson shared.
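The following is a minimal sketch of that idea in Python, using a toy set of power-of-two values. It only illustrates the principle that products can map exactly onto sums of small integers; it is not the actual construction behind the breakthrough, whose details are not given here.

```python
# Toy value set: powers of two, labelled by small integers. The set and the
# labelling are illustrative only, not Gustafson's actual construction.
values = [2.0 ** k for k in range(-3, 4)]            # 0.125, 0.25, ..., 8.0
label = {v: k for k, v in zip(range(-3, 4), values)}

for x in values:
    for y in values:
        if x * y in label:                           # stay inside the toy value set
            # Adding the integer labels reproduces the real product exactly,
            # with no decoding step and no rounding error.
            assert label[x] + label[y] == label[x * y]

print("every in-range product matches an exact small-integer addition")
```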

While it may sound like a negative attribute, low precision simply refers to systems running on 16 or fewer bits, compared with the 32 bits of standard single precision. In other words, posits do much more with less.

Consider the balancing act of compression, such as for an image or audio file, where file size is reduced as much as possible without noticeably sacrificing image sharpness or sound quality. Similarly, 16-bit posits are essentially lossless when they reproduce data signals, whereas the deeply entrenched IEEE 32-bit floats lose too much information.

Because posits provide better answers with fewer bits, Gustafson highlighted that they could spark an immense jump in computational power and speed for AI applications.

“We might be able to build hardware that’s actually not just a better design for accuracy, but much faster than anything we have right now on supercomputers,” he added.

Among the technologies that stand to benefit most are deep neural networks (DNNs), multilayered algorithms that attempt to emulate how the human brain processes information.

While simpler AI models analyze data according to a given set of rules, neural networks learn on their own by working through vast amounts of training data and assigning labels, much as we distinguish between different types of dogs or as doctors identify damaged tissue in medical scans.
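To make that learning process concrete, here is a minimal Python sketch using a single artificial neuron rather than a full deep network; the toy data, learning rate and number of steps are arbitrary choices for illustration, but the principle of nudging weights to better reproduce training labels is the same one DNNs apply at scale.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                 # 200 toy samples, 2 features each
y = (X[:, 0] + X[:, 1] > 0).astype(float)     # label is 1 when the features sum to > 0

w, b = np.zeros(2), 0.0                       # the "network" starts knowing nothing
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))    # sigmoid activation -> predicted label
    grad_w = X.T @ (p - y) / len(y)           # how much each weight should change
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w                         # nudge the weights toward better answers
    b -= 0.5 * grad_b

accuracy = float(np.mean((p > 0.5) == (y > 0.5)))
print(f"training accuracy after learning: {accuracy:.0%}")
```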

As such, DNN applications are typically developed for highly complex tasks like image processing and autonomous driving. However, Gustafson noted that these systems are extremely resource-intensive.

“The training can go on for days, just for the simplest task, so you’re talking about many kilowatt hours of energy consumption,” he explained. “It’s a bottleneck.”

Posits have the potential to markedly enhance efficiency in DNN training and inference—the latter referring to the predictive capabilities of AI models, which put their learning to work in finding patterns in new data.

Working with Gustafson, a US research team showed that 8-bit posits outperformed floats of the same size and performed comparably to 32-bit floats in DNN inference. Classification accuracy ranged from a promising 86 to 99 percent across tasks such as distinguishing malignant from benign breast cancer tumors and recognizing digits written in varying handwriting styles.

Aside from high accuracy, the posit representation closely matches the shape of the sigmoid function, an activation commonly used in neural networks, accelerating learning and computing with fewer resources.
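Here is a minimal sketch of that shortcut, assuming the fast sigmoid approximation Gustafson and Yonemoto described for 8-bit posits with no exponent bits (es = 0): flipping the sign bit of x's posit pattern and shifting it right by two places lands close to sigmoid(x). The decode_posit8 and fast_sigmoid helpers below are written purely for illustration, not as production posit hardware.

```python
import math

def decode_posit8(bits: int) -> float:
    """Decode an 8-bit posit bit pattern (es = 0) into a Python float."""
    if bits == 0x00:
        return 0.0
    if bits == 0x80:
        return float("nan")                        # NaR, "not a real"
    negative = bool(bits & 0x80)
    if negative:                                   # negative posits decode from the
        bits = (-bits) & 0xFF                      # two's complement of the pattern
    # The regime is a run of identical bits after the sign bit; its length sets the
    # power-of-two scale, and whatever follows its terminator is the fraction.
    regime_bit = (bits >> 6) & 1
    i, run = 6, 0
    while i >= 0 and ((bits >> i) & 1) == regime_bit:
        run += 1
        i -= 1
    k = run - 1 if regime_bit else -run
    frac_bits = max(i, 0)                          # bits below the regime terminator
    frac = bits & ((1 << frac_bits) - 1)
    value = (2.0 ** k) * (1.0 + frac / (1 << frac_bits))
    return -value if negative else value

def fast_sigmoid(posit_bits: int) -> float:
    """Approximate 1 / (1 + exp(-x)) with two bit operations on x's posit pattern."""
    return decode_posit8(((posit_bits ^ 0x80) & 0xFF) >> 2)

# 0x00, 0x40, 0x70 and 0xC0 are the posit8 patterns for 0, 1, 4 and -1.
for bits, x in [(0x00, 0), (0x40, 1), (0x70, 4), (0xC0, -1)]:
    exact = 1.0 / (1.0 + math.exp(-x))
    print(f"x = {x:>2}: exact sigmoid {exact:.3f}, posit shortcut {fast_sigmoid(bits):.3f}")
```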

“The ability to use fewer bits means that you’re going to use a lot less energy and a lot less space on a chip, and all the costs go down,” Gustafson said.

Erinne Ong reports on basic scientific discoveries and impact-oriented applications, ranging from biomedicine to artificial intelligence. She graduated with a degree in Biology from De La Salle University, Philippines.
