Hi
I am writing an application that involves a lot of floating-point number crunching.
My data is stored in tables of the form:
TABLE data (
    date_id INT,
    value FLOAT
)
I have just noticed in the documentation that the FLOAT data type is stored
in 8 bytes (i.e. 64 bits), as opposed to the REAL data type, which uses 4 bytes
(i.e. 32 bits).
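To get a feel for the difference outside the database, I tried a quick sketch in Python (the `array` typecodes 'f' and 'd' correspond to a 4-byte C float and an 8-byte C double, which I'm assuming mirror REAL and FLOAT; the data is just made up):

```python
import array
import timeit

N = 100_000

# Typecode 'f' = C float (4 bytes, REAL-like);
# typecode 'd' = C double (8 bytes, FLOAT-like).
reals = array.array('f', (0.1 * i for i in range(N)))
floats = array.array('d', (0.1 * i for i in range(N)))

print(reals.itemsize, floats.itemsize)  # bytes per element: 4 and 8

# Rough timing of a full pass over each array. On x86 the FPU performs
# the arithmetic at the same internal width either way, so any gap here
# mostly reflects memory and cache traffic, not faster arithmetic.
print(timeit.timeit(lambda: sum(reals), number=10))
print(timeit.timeit(lambda: sum(floats), number=10))
```

The half-sized elements mean twice as many values fit in cache and half the bytes cross the memory bus, which seems to be where any speedup would come from.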
All things being equal, does this mean that if I change my data type to REAL
(and lose some precision) I will see a significant performance increase on
my 32-bit Pentium 4?
Or, if I keep my data type as FLOAT, will I see a significant performance
increase by changing to a 64-bit CPU?
Regards
John Duffy