FP16 Support? - Mailing list pgsql-hackers

From Kohei KaiGai
Subject FP16 Support?
Date
Msg-id CAOP8fzY_rZXHp+G7oGT96oOK0r=HyMrXrUPR=s-9cgFtteTUog@mail.gmail.com
Responses Re: FP16 Support?  (Tom Lane <tgl@sss.pgh.pa.us>)
Re: FP16 Support?  (Thomas Munro <thomas.munro@enterprisedb.com>)
List pgsql-hackers
Hello,

What are your thoughts on supporting half-precision floating point, FP16 for short?
https://en.wikipedia.org/wiki/Half-precision_floating-point_format
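
For reference, here is a minimal standalone C sketch (not taken from any
existing code) that decodes the 1 sign / 5 exponent / 10 fraction bit layout
described on that page; 0x3C00 is the FP16 encoding of 1.0:

#include <math.h>
#include <stdint.h>
#include <stdio.h>

int
main(void)
{
    /* 0x3C00 encodes 1.0: sign 0, biased exponent 15 (bias = 15), fraction 0 */
    uint16_t h = 0x3C00;

    int sign = (h >> 15) & 0x1;     /* 1 sign bit      */
    int expo = (h >> 10) & 0x1F;    /* 5 exponent bits */
    int frac = h & 0x3FF;           /* 10 fraction bits */

    /* normal FP16 value: (-1)^sign * 2^(expo - 15) * (1 + frac / 1024) */
    double value = (sign ? -1.0 : 1.0) * ldexp(1.0 + frac / 1024.0, expo - 15);

    printf("sign=%d exponent=%d fraction=%d value=%g\n", sign, expo, frac, value);
    return 0;
}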

It probably does not make sense for most of our known workloads. The hardware
platforms we support have no native FP16 operations, so FP16 arithmetic would
be carried out with FP32 logic instead.

On the other hand, machine-learning folks say that FP32 provides more precision
than necessary, and that FP16 can deliver twice the computational throughput of FP32.
In fact, recent GPU models have begun to support FP16 operations in hardwired logic.
https://en.wikipedia.org/wiki/Pascal_(microarchitecture)

People often ask about managing the data sets to be processed for machine
learning. As the name says, a DBMS is software for database management, and it
offers some advantages here, such as flexible selection of the parent population,
pre- or post-processing of the data set (some algorithms require the input data
to be normalized to the [0.0, 1.0] range), and so on.
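
For example, that normalization step is just a generic min-max rescaling,
something like the following sketch (not code from any particular engine):

/* Rescale x from [min, max] into [0.0, 1.0]; assumes max > min. */
static float
normalize01(float x, float min, float max)
{
    return (x - min) / (max - min);
}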

If we allow FP16 data to be calculated, manipulated, and stored in a
binary-compatible form, it becomes a much more efficient way to fetch binary
data for machine-learning engines.

No machine-learning-specific operations are needed; the usual arithmetic
operations, type casts, and array operations would already be useful, even
though they would internally use FP32 hardware operations on CPUs.
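
To make that concrete, here is a rough sketch of how such operators could work
on CPUs: widen to FP32, use the native float hardware, and narrow back. It is
only an illustration and handles normal values only; zeros, subnormals,
infinities and NaN would need extra care in a real implementation.

#include <stdint.h>
#include <string.h>

/* Widen an FP16 bit pattern to float (normal values only). */
static float
half_to_float(uint16_t h)
{
    uint32_t sign = (uint32_t) (h >> 15) << 31;
    uint32_t expo = (h >> 10) & 0x1F;       /* biased by 15 */
    uint32_t frac = h & 0x3FF;
    uint32_t bits = sign | ((expo - 15 + 127) << 23) | (frac << 13);
    float result;

    memcpy(&result, &bits, sizeof(result));
    return result;
}

/* Narrow a float back to an FP16 bit pattern, truncating extra mantissa bits
 * (normal values only; overflow/underflow handling omitted). */
static uint16_t
float_to_half(float f)
{
    uint32_t bits;
    uint16_t sign;
    int32_t expo;
    uint32_t frac;

    memcpy(&bits, &f, sizeof(bits));
    sign = (uint16_t) ((bits >> 16) & 0x8000);
    expo = (int32_t) ((bits >> 23) & 0xFF) - 127 + 15;
    frac = bits & 0x7FFFFF;

    return (uint16_t) (sign | ((uint32_t) expo << 10) | (frac >> 13));
}

/* An FP16 "+" operator implemented with the CPU's FP32 hardware. */
static uint16_t
half_add(uint16_t a, uint16_t b)
{
    return float_to_half(half_to_float(a) + half_to_float(b));
}

With something like this underneath, the SQL-visible operators and casts could
be declared much like the existing float4/float8 ones.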

Any opinions?

Thanks,
-- 
HeteroDB, Inc / The PG-Strom Project
KaiGai Kohei <kaigai@heterodb.com>

