Beating Floating Point at its Own Game: Posit Arithmetic

Authors

  • John L. Gustafson, A*STAR-CRC and National University of Singapore
  • Isaac T. Yonemoto, Interplanetary Robot and Electric Brain Co.

DOI:

https://doi.org/10.14529/jsfi170206

Abstract

A new data type called a posit is designed as a direct drop-in replacement for IEEE Standard 754 floating-point numbers (floats). Unlike earlier forms of universal number (unum) arithmetic, posits do not require interval arithmetic or variable size operands; like floats, they round if an answer is inexact. However, they provide compelling advantages over floats, including larger dynamic range, higher accuracy, better closure, bitwise identical results across systems, simpler hardware, and simpler exception handling. Posits never overflow to infinity or underflow to zero, and “Not-a-Number” (NaN) indicates an action instead of a bit pattern. A posit processing unit takes less circuitry than an IEEE float FPU. With lower power use and smaller silicon footprint, the posit operations per second (POPS) supported by a chip can be significantly higher than the FLOPS using similar hardware resources. GPU accelerators and Deep Learning processors, in particular, can do more per watt and per dollar with posits, yet deliver superior answer quality.
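To make the drop-in-replacement claim concrete, the sketch below decodes a posit bit pattern using the sign/regime/exponent/fraction layout the paper describes. It is an illustrative sketch only, not a reference implementation; the function name and the default posit⟨8, 0⟩ configuration are choices made here for the example.

```python
import math

def decode_posit(bits, n=8, es=0):
    """Decode an n-bit posit with es exponent bits into a float.

    Illustrative sketch of the sign/regime/exponent/fraction layout
    described in the paper; not a reference implementation.
    """
    mask = (1 << n) - 1
    bits &= mask
    if bits == 0:
        return 0.0
    if bits == 1 << (n - 1):                 # the single non-real bit pattern
        return math.inf
    sign = -1.0 if bits >> (n - 1) else 1.0
    if sign < 0:
        bits = (-bits) & mask                # negative posits use two's complement
    s = format(bits, '0{}b'.format(n))[1:]   # bit string without the sign bit
    run = len(s) - len(s.lstrip(s[0]))       # length of the regime run
    k = run - 1 if s[0] == '1' else -run     # regime value
    rest = s[run + 1:]                       # skip the regime terminator bit
    e_field = (rest + '0' * es)[:es]         # exponent field, zero-padded if truncated
    exp = int(e_field, 2) if e_field else 0
    f_bits = rest[es:]                       # fraction bits; hidden bit is always 1
    frac = 1.0 + (int(f_bits, 2) / (1 << len(f_bits)) if f_bits else 0.0)
    useed = 1 << (1 << es)                   # useed = 2^(2^es)
    return sign * useed**k * 2.0**exp * frac
```

For example, in posit⟨8, 0⟩ the pattern `0b01010000` (sign 0, regime `10`, fraction `10000`) decodes to 1.5, and the most negative regime run yields minpos rather than underflowing to zero.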

A comprehensive series of benchmarks compares floats and posits for decimals of accuracy produced for a set precision. Low precision posits provide a better solution than “approximate computing” methods that try to tolerate decreased answer quality. High precision posits provide more correct decimals than floats of the same size; in some cases, a 32-bit posit may safely replace a 64-bit float. In other words, posits beat floats at their own game. 
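The benchmark comparisons rest on a "decimals of accuracy" measure. Assuming the paper's definition of decimal accuracy as the negative base-10 logarithm of the decimal relative error |log₁₀(computed/exact)|, it can be computed as follows (the function name is illustrative):

```python
import math

def decimals_of_accuracy(computed, exact):
    """Decimal digits of agreement between a computed and an exact value.

    Assumes the decimal-accuracy measure -log10|log10(computed/exact)|;
    both arguments must be nonzero and of the same sign.
    """
    if computed == exact:
        return math.inf
    return -math.log10(abs(math.log10(computed / exact)))
```

Under this measure, approximating π by 3.14159 gives roughly 6.4 decimals of accuracy, which is the kind of per-bit answer-quality score the benchmarks tabulate for floats and posits of equal width.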

References

James Demmel, John L. Gustafson, William E. Kahan, “The Great Debate @ ARITH23,” https://www.youtube.com/watch?v=KEAKYDyUua4; full transcription available at http://www.johngustafson.net/pdfs/DebateTranscription.pdf

David Goldberg, “What Every Computer Scientist Should Know About Floating-Point Arithmetic,” ACM Computing Surveys, March 1991, Association for Computing Machinery, Inc.

John L. Gustafson. The End of Error: Unum Computing. CRC Press, 2015.

John L. Gustafson, “Beyond Floating Point: Next Generation Computer Arithmetic,” Stanford Seminar: https://www.youtube.com/watch?v=aP0Y1uAA-2Y&t=2847s

John L. Gustafson, “A Radical Approach to Computation with Real Numbers,” Supercomputing Frontiers and Innovations, Vol. 3, No. 2, 2016. DOI: 10.14529/jsfi160203.

IEEE Computer Society (August 29, 2008), “IEEE Standard for Floating-Point Arithmetic,” IEEE. doi:10.1109/IEEESTD.2008.4610935. ISBN 978-0-7381-5753-5. IEEE Std. 754-2008.

Ulrich W. Kulisch and Willard L. Miranker, A New Approach to Scientific Computation, Academic Press, New York, 1983.

Isaac Yonemoto, https://github.com/interplanetary-robot/SigmoidNumbers.

Published

2017-07-23

How to Cite

Gustafson, J. L., & Yonemoto, I. T. (2017). Beating Floating Point at its Own Game: Posit Arithmetic. Supercomputing Frontiers and Innovations, 4(2), 71–86. https://doi.org/10.14529/jsfi170206