Stephen Frost <sfrost@snowman.net> writes:
> This was using just a straight-up 'numeric' data type though. Perhaps
> for that case we could drop the unnecessary zeros?
That would make numeric useless for the common scientific/engineering
usage where you write the number of decimal places you think are
significant in your measurement. In that usage, "1.0" and "1.000"
do have different meanings.
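
(For illustration, Python's decimal module makes the same distinction: values with different numbers of trailing zeros compare equal numerically, but the stored exponent records the implied precision, so "1.0" and "1.000" round-trip differently.)

```python
from decimal import Decimal

a = Decimal("1.0")
b = Decimal("1.000")

# Numerically equal...
print(a == b)                     # True
# ...but the trailing zeros (the implied measurement precision) survive:
print(a, b)                       # 1.0 1.000
# The exponent field shows how many decimal places each value carries.
print(a.as_tuple().exponent)      # -1
print(b.as_tuple().exponent)      # -3
```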
regards, tom lane