Re: idea for concurrent seqscans - Mailing list pgsql-hackers

From: Neil Conway
Subject: Re: idea for concurrent seqscans
Date:
Msg-id: 42206DFD.3040205@samurai.com
In response to: Re: idea for concurrent seqscans (Jeff Davis <jdavis-pgsql@empires.org>)
List: pgsql-hackers
Jeff Davis wrote:
> I have a newer version of my Synchronized Scanning patch which hopefully
> makes it closer to a real patch, the first one was more of a proof of
> concept.

A few minor comments:

- context diffs (diff -c) are the preferred format. Also, folks usually 
send patches as a single diff; an easy way to generate that is via `cvs 
diff', or `diff -r old_dir new_dir'.

- needlessly reindenting code makes it difficult to understand what 
you've changed. You should probably follow the PG coding conventions WRT 
indentation, brace placement, and so forth, although this will be fixed 
by a script later in any case. See Chapter 43 of the docs.

- you don't need to (and should not) declare `static' functions in 
header files. If your additions to heapam.c aren't used outside of 
heapam.c, they needn't be declared in the header file at all (the first 
sketch at the end of these comments shows what I mean).

- PG has an abstraction layer for using shared memory that you should 
take advantage of. You should do something like: (1) create a function 
that returns the amount of shared memory you require; (2) invoke that 
function from CreateSharedMemoryAndSemaphores(); (3) create/attach to 
and initialize the shared memory during startup, via ShmemInitStruct(). 
See how InitProcGlobal() works, for example, and the second sketch at 
the end of these comments.

- it makes me quite nervous to be reading and writing shared data 
without using locks. If acquiring and releasing a lock for each page 
traversed is too much of a performance hit, what about only updating 
the shared memory stats every K pages? The third sketch below shows one 
way to do that.
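
Here are the sketches I mentioned. All of this is untested, and the 
names are made up for illustration, not taken from your patch.

First, the static-function point: if a helper is only called from 
heapam.c, something like this is enough, and nothing goes into heapam.h:

/* near the top of heapam.c -- a forward declaration, not in heapam.h */
static BlockNumber ss_hint_start_page(Oid relid);

static BlockNumber
ss_hint_start_page(Oid relid)
{
    /* look up where another backend is scanning this relation;
     * body elided -- whatever your patch actually does goes here */
    return 0;
}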
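
Second, the shared-memory pattern. SyncScanShared, SyncScanShmemSize() 
and SyncScanShmemInit() are placeholder names:

/* heapam.c already pulls in postgres.h; these are the extra headers */
#include "storage/shmem.h"
#include "storage/spin.h"

/* hypothetical shared state for synchronized scans */
typedef struct SyncScanShared
{
    slock_t     mutex;      /* protects the fields below */
    Oid         relid;      /* relation currently being scanned */
    BlockNumber cur_page;   /* last page reported by a scanning backend */
} SyncScanShared;

static SyncScanShared *SyncScan = NULL;

/*
 * Amount of shared memory we need; call this from
 * CreateSharedMemoryAndSemaphores() when sizing the segment.
 */
Size
SyncScanShmemSize(void)
{
    return sizeof(SyncScanShared);
}

/*
 * Create or attach to the shared state during startup.
 */
void
SyncScanShmemInit(void)
{
    bool    found;

    SyncScan = (SyncScanShared *)
        ShmemInitStruct("Synchronized Scan State",
                        SyncScanShmemSize(), &found);

    if (!found)
    {
        /* first backend through: initialize the struct */
        SpinLockInit(&SyncScan->mutex);
        SyncScan->relid = InvalidOid;
        SyncScan->cur_page = 0;
    }
}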
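
Third, the every-K-pages idea, using the struct from the previous 
sketch; the interval is arbitrary:

#define SYNC_SCAN_REPORT_INTERVAL 16    /* pages between shmem updates */

/*
 * Called once per page from the seqscan loop.  We only take the
 * spinlock every SYNC_SCAN_REPORT_INTERVAL pages, so the per-page
 * locking overhead stays negligible.
 */
static void
ss_report_location(Oid relid, BlockNumber page)
{
    if ((page % SYNC_SCAN_REPORT_INTERVAL) != 0)
        return;

    SpinLockAcquire(&SyncScan->mutex);
    SyncScan->relid = relid;
    SyncScan->cur_page = page;
    SpinLockRelease(&SyncScan->mutex);
}

A spinlock is fine here because the critical section is just a couple 
of assignments; an LWLock would also work.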

-Neil

