Re: RES: Priority to a mission critical transaction - Mailing list pgsql-performance

From Brian Hurt
Subject Re: RES: Priority to a mission critical transaction
Date
Msg-id 456DC19B.4020607@janestcapital.com
In response to Re: RES: Priority to a mission critical transaction  (Ron Mayer <rm_pg@cheapcomplexdevices.com>)
Responses Re: RES: Priority to a mission critical transaction
List pgsql-performance
Ron Mayer wrote:
> Brian Hurt wrote:
>> Mark Lewis wrote:
>>> On Wed, 2006-11-29 at 08:25 -0500, Brian Hurt wrote:
>>>> I have the same question.  I've done some embedded real-time
>>>> programming, so my innate reaction to priority inversions is that
>>>> they're evil.  But, especially given priority inheritance, is there any
>>>> situation where priority inversion provides *worse* performance than
>>>> running everything at the same priority?
>>> Yes, there are certainly cases where a single high priority
>>> transaction will suffer far worse than it otherwise would have.
OK.

Although I'm tempted to make the issue more complex by throwing Software Transactional Memory into the mix:
http://citeseer.ist.psu.edu/shavit95software.html
http://citeseer.ist.psu.edu/anderson95realtime.html

That second paper is interesting in that it says that STM solves the priority inversion problem.  Basically, the higher priority process forces the lower priority process to abort its transaction and retry it.

Is it possible to recast Postgres' use of locks to use STM instead?  How would STM interact with Postgres' existing transactions?  I don't know.  This would almost certainly require Postgres to write its own locking, with all the problems that entails (does the source currently use inline assembly anywhere?  I'd guess not.).
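
(Just to make the retry idea concrete -- a toy sketch in C, nothing to do with
Postgres' actual locking, and the function name is made up:)

/* Toy illustration of the STM-style retry idea: an optimistic update that
 * never blocks.  If some other task -- possibly a higher-priority one --
 * commits first, the compare-and-swap fails and we simply abort and retry,
 * so there is no held lock for a high-priority task to get stuck behind. */
#include <stdatomic.h>

static void txn_add(atomic_int *shared, int delta)
{
    int observed = atomic_load(shared);
    for (;;) {
        int updated = observed + delta;

        /* "Commit": succeeds only if nobody else committed since we read. */
        if (atomic_compare_exchange_weak(shared, &observed, updated))
            return;

        /* Commit failed: someone beat us to it.  Abort and retry with the
         * freshly observed value (the failed CAS reloads it into 'observed'). */
    }
}
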
> Apparently there are plenty of papers stating that priority inversion
> is a major problem in RDBMSs for workloads where specific deadlines
> have to be met (such as in real time systems).
It's definitely a problem in realtime systems, not just realtime DBMSs.  In that setting, running everything at the same priority doesn't work and isn't an option.

> The question in my mind is whether overall the benefits outweigh
> the penalties - in much the same way that qsort can have O(n^2)
> behavior but in practice still beats many alternatives.
Also, careful choice of pivot values, and switching to other sorting methods like heapsort when you detect you're in a pathological case, help.  Make the common case fast and the pathological case not something that causes the database to fall over.
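
(Roughly what I mean, as a sketch -- the helper names here are made up, and
this isn't anyone's actual qsort:)

/* Sketch of the fallback idea: quicksort until the recursion depth suggests a
 * pathological input (unlucky pivots), then hand the rest to heapsort so the
 * worst case stays O(n log n) instead of O(n^2). */
#include <stddef.h>

void heapsort_ints(int *a, size_t n);               /* assumed helper */
size_t median_of_three_partition(int *a, size_t n); /* assumed helper */

static void guarded_quicksort(int *a, size_t n, int depth_budget)
{
    while (n > 1) {
        if (depth_budget-- <= 0) {
            heapsort_ints(a, n);            /* pathological case: bail out */
            return;
        }
        size_t p = median_of_three_partition(a, n);
        guarded_quicksort(a, p, depth_budget);   /* sort left of the pivot */
        a += p + 1;                              /* loop on the right part */
        n -= p + 1;
    }
}
/* Callers would start with depth_budget around 2 * log2(n). */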

Setting priorities would be a solution to a problem I haven't hit yet, but can see myself needing to deal with, which is why I'm interested in this issue.  If setting priorities can make things better and doesn't make things worse, that's great.  If setting priorities can make things better but occasionally makes things much worse, that's a problem.


>> Of course, this is a little tricky to implement.  I haven't looked at
>> how difficult it'd be within Postgres.
> ISTM that it would be rather OS-dependent anyway.  Different OS's
> have different (or no) hooks - heck, even different 2.6.* Linux kernels
> (pre 2.6.18 vs post) have different hooks for priority
> inheritance - so I wouldn't really expect to see cpu scheduling
> policy details like that merged with postgresql except maybe from
> a patched version from a RTOS vendor.
Hmm.  I was thinking of the POSIX setpriority() call.
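
(To sketch the difference -- rough, untested C, with error handling mostly
omitted.  The first call is the ordinary per-process nice level I had in mind;
the second is one example of the priority-inheritance hooks Ron mentions,
which needs kernel support and mainly matters under real-time scheduling:)

#include <sys/resource.h>
#include <unistd.h>
#include <pthread.h>
#include <stdio.h>

int main(void)
{
    /* Whole-process priority: renice this backend downward
     * (a higher nice value means a lower CPU priority). */
    if (setpriority(PRIO_PROCESS, getpid(), 10) != 0)
        perror("setpriority");

    /* Per-lock priority inheritance: a mutex whose low-priority holder gets
     * temporarily boosted while a higher-priority thread is blocked on it. */
    pthread_mutexattr_t attr;
    pthread_mutex_t lock;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    pthread_mutex_init(&lock, &attr);

    pthread_mutex_lock(&lock);
    /* ... critical section ... */
    pthread_mutex_unlock(&lock);

    pthread_mutex_destroy(&lock);
    pthread_mutexattr_destroy(&attr);
    return 0;
}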

Brian
