2015-03-21

[1502.02551] Deep Learning with Limited Numerical Precision

http://arxiv.org/abs/1502.02551
Our results show that deep networks can be trained using only 16-bit wide fixed-point number representation when using stochastic rounding, and incur little to no degradation in the classification accuracy.
Apparently even a 16-bit fixed-point representation is good enough, as long as stochastic rounding is used. (Note: the paper uses fixed-point, not half-precision floating point.)
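A minimal sketch of the stochastic-rounding idea (not the paper's code; the function name and the choice of NumPy are my own): each value is rounded to one of the two neighboring fixed-point grid values, with probability proportional to its distance from each, so the rounding is unbiased in expectation.

```python
import numpy as np

def stochastic_round_fixed(x, frac_bits=8, rng=None):
    """Round x onto a fixed-point grid with `frac_bits` fractional bits.

    A value between two grid points is rounded up with probability equal
    to its fractional distance from the lower point, so E[round(x)] = x.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float)
    scale = 2.0 ** frac_bits
    scaled = x * scale
    floor = np.floor(scaled)
    frac = scaled - floor                       # distance to lower grid point, in [0, 1)
    up = rng.random(x.shape) < frac             # round up with probability `frac`
    return (floor + up) / scale
```

With `frac_bits=1` the grid step is 0.5, so 0.3 rounds to 0.0 with probability 0.4 and to 0.5 with probability 0.6; averaged over many samples the result stays close to 0.3, which is why gradient updates smaller than the grid step are not systematically lost.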