Add free-behind capability for large sequential scans - Mailing list pgsql-hackers

From Amit Kumar Khare
Subject Add free-behind capability for large sequential scans
Date
Msg-id 20020212195814.37928.qmail@web10102.mail.yahoo.com
Whole thread Raw
Responses Re: Add free-behind capability for large sequential scans  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-hackers
Hi All,

(1) I am Amit Kumar Khare; I am doing an MCS from UIUC, USA, off-campus from India.

(2) We have been asked to enhance PostgreSQL in one of our assignments, so I have chosen "Add free-behind capability for large sequential scans" from the TODO list. Many thanks to Mr. Bruce Momjian, who helped me out and suggested making a patch for this problem.

(3) As Mr. Bruce explained to me, the problem is that if, say, the cache size is 1 MB and a sequential scan is done through a 2 MB file over and over again, the cache becomes useless, because by the time the second read of the table happens, the first 1 MB has already been forced out of the cache. Thus the idea is not to cache very large sequential scans, but to cache index scans and small sequential scans.

(4) I think the problem arises because of the default LRU page replacement policy, so we may have to make use of MRU or LRU-K page replacement policies instead.
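To convince myself of the effect, here is a toy simulation I wrote (my own sketch, not PostgreSQL code, and the cache/table sizes are arbitrary): it repeatedly scans a table larger than the cache and compares hit rates under LRU eviction versus MRU eviction.

```python
# Toy buffer-cache simulation: repeatedly scan a table that is twice
# the cache size and measure the hit rate under two eviction policies.

def scan_hit_rate(policy, cache_pages, table_pages, passes):
    cache = []            # page ids, ordered oldest -> most recently used
    hits = 0
    for _ in range(passes):
        for page in range(table_pages):
            if page in cache:
                hits += 1
                cache.remove(page)
                cache.append(page)      # refresh recency on a hit
            else:
                if len(cache) >= cache_pages:
                    if policy == "lru":
                        cache.pop(0)    # evict least recently used
                    else:
                        cache.pop()     # evict most recently used
                cache.append(page)
    return hits / (passes * table_pages)

# A 200-page table scanned twice through a 100-page cache:
lru = scan_hit_rate("lru", 100, 200, 2)
mru = scan_hit_rate("mru", 100, 200, 2)
print(f"LRU hit rate: {lru:.2f}")   # every page is evicted just before reuse
print(f"MRU hit rate: {mru:.2f}")   # the front of the table stays cached
```

With LRU, the hit rate is exactly zero, which matches the problem description above; MRU keeps roughly the first cache-full of pages resident across passes, so later passes get hits on them.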

(5) But I am not sure, and I would welcome more input on the problem description from you all. I have started reading the buffer manager code, and I found that freelist.c may need to be modified, and maybe some other files too, since we have to identify the large sequential scans.
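For identifying a large sequential scan, one simple idea (purely my assumption, not anything in the current code) would be to compare the relation's size in pages against the buffer pool size when the scan starts; the function name and the one-quarter threshold below are both hypothetical, just for illustration:

```python
# Hypothetical heuristic: treat a sequential scan as "large" (and so a
# candidate for free-behind / special replacement) when the relation
# exceeds some fraction of the buffer pool. The 0.25 fraction is an
# arbitrary illustrative choice, not a tested value.

def is_large_seqscan(rel_pages, nbuffers, fraction=0.25):
    return rel_pages > nbuffers * fraction

print(is_large_seqscan(2000, 1000))  # table twice the pool size -> True
print(is_large_seqscan(100, 1000))   # small table -> False
```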

Please help me out.

Regards
Amit Kumar Khare



