Checking for data changes across a very large table - Mailing list pgsql-general

From Stephen Hutchings
Subject Checking for data changes across a very large table
Msg-id 5503C113FD3EF745927F96031B470A0B39111B701B@mail-jba1.jbanorthwest.co.uk
Responses Re: Checking for data changes across a very large table  ("Greg Sabino Mullane" <greg@turnstep.com>)
List pgsql-general

I’d like some general guidance on a security issue please. This may belong in another list, so please push me there if appropriate.

 

We have an application design which includes a potential 2 billion row table (A). When the application user kicks off an analysis process, there is a requirement to perform a check on that table to verify that the data within hasn’t changed.

 

The initial intent was to hold a single “checksum” or equivalent in another small table (B). This checksum value would have been calculated over a subset of row values held within the 2bn row table. When the user chooses to kick off their processing, the application would re-calculate this checksum for table A in real time and compare the new result with the original value stored in table B. We might obtain the subset from table A like this:

 

SELECT * FROM (
    SELECT *,
           (row_number() OVER (ORDER BY "ID")) % 10000000 AS rn
    FROM "TableA") aa
WHERE aa.rn = 1 LIMIT 20000

 

... and then perform an operation on the column data from that subset (in this example, a 100,000th of the original data) to calculate a checksum value. Obviously, a change would only be detected if it affected rows within that subset.
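
For concreteness, the end-to-end check might look something like the sketch below. The md5/string_agg combination and the "checksum"/"table_name" columns on table B are placeholders rather than settled design:

-- Sketch only: recompute a checksum over the sampled rows and compare it
-- with the value previously stored in the small table B.
WITH sampled AS (
    SELECT * FROM (
        SELECT *,
               (row_number() OVER (ORDER BY "ID")) % 10000000 AS rn
        FROM "TableA") aa
    WHERE aa.rn = 1 LIMIT 20000
)
SELECT md5(string_agg(md5(sampled::text), '' ORDER BY sampled."ID")) =
       (SELECT checksum FROM "TableB" WHERE table_name = 'TableA') AS unchanged
FROM sampled;

A false (or NULL) result would indicate that the recomputed value no longer matches the stored one.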

 

It is a requirement that users have direct access to the DB (I hate this, but I am overruled). I envisage that non-admin users would have read-only access, and the above validation would be an extra check on the data.
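
For context, the read-only access I have in mind would be along these lines; the role, database and schema names below are only illustrative:

-- Illustrative only: role/database/schema names are not finalised.
CREATE ROLE analyst NOLOGIN;
GRANT CONNECT ON DATABASE appdb TO analyst;
GRANT USAGE ON SCHEMA public TO analyst;
GRANT SELECT ON "TableA", "TableB" TO analyst;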

 

But my main concern is performance - I fear this check won't complete in a few seconds rather than minutes. Does the general approach make any sense? Can anyone suggest a better starting point?

 

 

This is Postgres 9.0. The minimum hardware spec is small and Windows-based (64-bit, Core i7 processor, 8GB RAM, single 1TB hard disk). My evaluation DB has ended up with the following config tweaks:

 

shared_buffers 2048 MB

work_mem 2047 MB

checkpoint_segments 128

 

 

Thanks in advance.

 


