Re: Anyone working on better transaction locking? - Mailing list pgsql-hackers

From Shridhar Daithankar
Subject Re: Anyone working on better transaction locking?
Date
Msg-id 200304121639.56596.shridhar_daithankar@nospam.persistent.co.in
In response to Re: Anyone working on better transaction locking?  (Kevin Brown <kevin@sysexperts.com>)
Responses Re: Anyone working on better transaction locking?  (Kevin Brown <kevin@sysexperts.com>)
List pgsql-hackers
On Saturday 12 April 2003 16:24, you wrote:
> A better answer is that a database engine that can handle lots of
> concurrent requests can also handle a smaller number, but not vice
> versa.  So it's clearly an advantage to have a database engine that
> can handle lots of concurrent requests because such an engine can be
> applied to a larger number of problems.  That is, of course, assuming
> that all other things are equal...
>
> There are situations in which a database would have to handle a lot of
> concurrent requests.  Handling ATM transactions over a large area is
> one such situation.  A database with current weather information might
> be another, if it is actively queried by clients all over the country.
> Acting as a mail store for a large organization is another.  And, of
> course, acting as a filesystem is definitely another.  :-)

Well, there is another aspect one should consider. Tuning a database engine 
for a specific workload is a hell of a job, and shifting it to an altogether 
different paradigm must be justified.

OK. PostgreSQL is not optimised to handle lots of concurrent connections, at 
least not enough to give each Apache request handler its own connection. Then 
middleware connection pooling, like what PHP does, might be a simpler solution 
than redoing the PostgreSQL internals. Because it works.
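The pooling idea above can be sketched in a few lines. This is a minimal illustration in Python, not anything PHP or PostgreSQL actually ships; the pool size, the queue-based hand-off, and the `fake_connect` stand-in for an expensive database connect are all assumptions for demonstration:

```python
import queue

class ConnectionPool:
    """Minimal connection pool: hand out already-open connections
    instead of opening a new one per request."""
    def __init__(self, factory, size):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(factory())

    def acquire(self):
        return self._pool.get()      # blocks until a connection is free

    def release(self, conn):
        self._pool.put(conn)

# Stand-in for an expensive PostgreSQL connect() call.
opened = 0
def fake_connect():
    global opened
    opened += 1
    return object()

pool = ConnectionPool(fake_connect, size=2)

# Ten "requests" share the two pooled connections.
for _ in range(10):
    conn = pool.acquire()
    pool.release(conn)

print(opened)  # 2 -- only the pool-size connections were ever opened
```

The point being: the expensive setup happens pool-size times, not once per request, without touching the server at all.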

> This is true, but whether you choose to limit the use of threads to a
> few specific situations or use them throughout the database, the
> dangers and difficulties faced by the developers when using threads
> will be the same.

I do not agree. Let's say I put threading functions in PostgreSQL that do not 
touch the shared memory interface at all. They would be a hell of a lot simpler 
to code and maintain than converting PostgreSQL to a one-thread-per-connection 
model.
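A rough sketch of what "threads that do not touch shared memory" can look like: each thread keeps its own private state and communicates only through message queues, so nothing needs the locking discipline that shared structures would. This is a generic Python illustration under that assumption, not real backend code:

```python
import threading
import queue

def worker(inbox: queue.Queue, outbox: queue.Queue):
    """The thread owns all of its state; the only communication
    is via queues, so no shared-memory structure needs locking."""
    local_total = 0                  # thread-private, never shared
    while True:
        item = inbox.get()
        if item is None:             # shutdown sentinel
            break
        local_total += item
    outbox.put(local_total)

inbox, outbox = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker, args=(inbox, outbox))
t.start()
for n in (1, 2, 3):
    inbox.put(n)
inbox.put(None)
t.join()
result = outbox.get()
print(result)  # 6
```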

> Of course, back here in the real world they *do* have to worry about
> this stuff, and that's why it's important to quantify the problem.
> It's not sufficient to say that "processes are slow and threads are
> fast".  Processes on the target platform may well be slow relative to
> other systems (and relative to threads).  But the question is: for the
> problem being solved, how much overhead does process handling
> represent relative to the total amount of overhead the solution itself
> incurs?

That is correct. However, it would be a fair assumption on the part of the 
PostgreSQL developers that a process, once set up, does not carry much 
processing overhead as such, given the state of modern server-class OSes and 
hardware. So PostgreSQL as it is fits that model. I mean, it is fine that 
PostgreSQL has heavyweight connections. The simpler solution is to pool them.

That gets me wondering. Has anybody ever benchmarked how much a database 
connection weighs in terms of memory, CPU, and I/O bandwidth, for different 
databases on different platforms? Is PostgreSQL really that slow?
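For what it's worth, one crude way to measure just the process-setup piece of that cost (PostgreSQL forks a backend per connection) is to time fork()/exit()/wait() cycles. This is POSIX-only and only a proxy for a real connection handshake, and the numbers vary widely across OS and hardware:

```python
import os
import time

# Time N fork()+exit()+wait() cycles as a rough proxy for the
# per-connection process-setup cost. This measures only process
# creation, not authentication or catalog-cache warm-up.
N = 50
start = time.perf_counter()
for _ in range(N):
    pid = os.fork()
    if pid == 0:
        os._exit(0)          # child exits immediately
    os.waitpid(pid, 0)       # parent reaps the child
elapsed = time.perf_counter() - start
print(f"avg fork+exit: {elapsed / N * 1000:.3f} ms")
```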
Shridhar


