Thread: Re: partitioned table and ORDER BY indexed_field DESC LIMIT 1

Re: partitioned table and ORDER BY indexed_field DESC LIMIT 1

From
"Luke Lonergan"
Date:

And I repeat - 'we fixed that and submitted a patch' - you can find it in the unapplied patches queue.

The patch isn't ready for application, but I'd expect someone could implement it quickly.

- Luke

Msg is shrt cuz m on ma treo

 -----Original Message-----
From:   Heikki Linnakangas [mailto:heikki@enterprisedb.com]
Sent:   Saturday, October 27, 2007 05:20 AM Eastern Standard Time
To:     Anton
Cc:     pgsql-performance@postgresql.org
Subject:        Re: [PERFORM] partitioned table and ORDER BY indexed_field DESC LIMIT 1

Anton wrote:
> I repost here my original question "Why it no uses indexes?" (on
> partitioned table and ORDER BY indexed_field DESC LIMIT 1), if you
> mean that you miss this discussion.

As I said back then:

The planner isn't smart enough to push the "ORDER BY ... LIMIT ..."
below the append node.
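
For reference, here is a minimal sketch of the shape being discussed,
with names patterned loosely after Anton's n_traf schema (the real
setup has twelve monthly partitions and a composite index on
(date_time, login_id)):

  -- Parent table plus one monthly child, each indexed on the sort column.
  CREATE TABLE n_traf (login_id integer, date_time timestamp NOT NULL);
  CREATE TABLE n_traf_y2007m01 (
      CHECK (date_time >= '2007-01-01' AND date_time < '2007-02-01')
  ) INHERITS (n_traf);
  CREATE INDEX n_traf_y2007m01_date_time ON n_traf_y2007m01 (date_time);

  -- The ORDER BY ... LIMIT sits above the Append over all children, so
  -- the planner sorts the whole appended result instead of reading one
  -- row backwards from each child's index and stopping.
  EXPLAIN SELECT * FROM n_traf ORDER BY date_time DESC LIMIT 1;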

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com


Re: partitioned table and ORDER BY indexed_field DESC LIMIT 1

From
Simon Riggs
Date:
On Sat, 2007-10-27 at 15:12 -0400, Luke Lonergan wrote:
> And I repeat - 'we fixed that and submitted a patch' - you can find it
> in the unapplied patches queue.

I got the impression it was a suggestion rather than a tested patch,
forgive me if that was wrong.

Did the patch work? Do you have timings/different plan?

--
  Simon Riggs
  2ndQuadrant  http://www.2ndQuadrant.com


Re: partitioned table and ORDER BY indexed_field DESC LIMIT 1

From
Gregory Stark
Date:
"Luke Lonergan" <LLonergan@greenplum.com> writes:

> And I repeat - 'we fixed that and submitted a patch' - you can find it in the unapplied patches queue.

I can't find this. Can you point me towards it?

Thanks

--
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com
  Ask about EnterpriseDB's PostGIS support!

Re: partitioned table and ORDER BY indexed_field DESC LIMIT 1

From
"Luke Lonergan"
Date:
Sure - it's here:
  http://momjian.us/mhonarc/patches_hold/msg00381.html

- Luke


On 10/29/07 6:40 AM, "Gregory Stark" <stark@enterprisedb.com> wrote:

> "Luke Lonergan" <LLonergan@greenplum.com> writes:
>
>> And I repeat - 'we fixed that and submitted a patch' - you can find it in the
>> unapplied patches queue.
>
> I can't find this. Can you point me towards it?
>
> Thanks



Re: partitioned table and ORDER BY indexed_field DESC LIMIT 1

From
Mark Kirkwood
Date:
Luke Lonergan wrote:
> Sure - it's here:
>   http://momjian.us/mhonarc/patches_hold/msg00381.html
>
>

To clarify - we've fixed this in Greenplum db - the patch as submitted
is (hopefully) a hint about how to fix it in Postgres, rather than a
working patch... as it's full of non-Postgres functions and macros:

CdbPathLocus_MakeHashed
cdbpathlocus_pull_above_projection
cdbpullup_findPathKeyItemInTargetList
cdbpullup_makeVar
cdbpullup_expr

Cheers

Mark

Re: partitioned table and ORDER BY indexed_field DESC LIMIT 1

From
Tom Lane
Date:
"Luke Lonergan" <llonergan@greenplum.com> writes:
> Sure - it's here:
>   http://momjian.us/mhonarc/patches_hold/msg00381.html

Luke, this is not a patch, and I'm getting pretty dang tired of seeing
you refer to it as one.  What this is is a very-selective extract from
Greenplum proprietary code.  If you'd like us to think it is a patch,
you need to offer the source code to all the GP-specific functions that
are called in the quoted additions.

Hell, the diff is *against* GP-specific code --- it removes calls
to functions that we've never seen, eg here:

-    /* Use constant expr if available.  Will be at head of list. */
-    if (CdbPathkeyEqualsConstant(pathkey))

This is not a patch, and your statements that it's only a minor porting
matter to turn it into one are lie^H^H^Hnonsense.  Please lift the
skirts higher than the ankle region if you want us to get excited.

            regards, tom lane

Re: partitioned table and ORDER BY indexed_field DESC LIMIT 1

From
Gregory Stark
Date:
"Mark Kirkwood" <markir@paradise.net.nz> writes:

> Luke Lonergan wrote:
>> Sure - it's here:
>>   http://momjian.us/mhonarc/patches_hold/msg00381.html
>
> To clarify - we've fixed this in Greenplum db - the patch as submitted is
> (hopefully) a hint about how to fix it in Postgres, rather than a working
> patch... as its full of non-postgres functions and macros:

Oh, that was the problem with the original patch and I thought Luke had said
that was the problem which was fixed.

> cdbpathlocus_pull_above_projection

In particular this is the function I was hoping to see. Anyway, as Tom pointed
out previously, there's precedent in Postgres for subqueries as well, so I'm
sure I'll be able to do it.

(But I'm still not entirely convinced putting the append member vars into the
eclasses would be wrong btw...)

--
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com
  Ask me about EnterpriseDB's On-Demand Production Tuning

Re: partitioned table and ORDER BY indexed_field DESC LIMIT 1

From
"Luke Lonergan"
Date:
BTW - Mark has volunteered to work up a Postgres patch.  Thanks Mark!

- Luke


On 10/29/07 10:46 PM, "Mark Kirkwood" <markir@paradise.net.nz> wrote:

> Luke Lonergan wrote:
>> Sure - it's here:
>>   http://momjian.us/mhonarc/patches_hold/msg00381.html
>>
>>
>
> To clarify - we've fixed this in Greenplum db - the patch as submitted
> is (hopefully) a hint about how to fix it in Postgres, rather than a
> working patch... as its full of non-postgres functions and macros:
>
> CdbPathLocus_MakeHashed
> cdbpathlocus_pull_above_projection
> cdbpullup_findPathKeyItemInTargetList
> cdbpullup_makeVar
> cdbpullup_expr
>
> Cheers
>
> Mark



hardware for PostgreSQL

From
Ketema Harris
Date:
I am trying to build a very robust DB server that will support 1000+
concurrent users (I have already seen a max of 237, with no pooling
being used).  I have read so many articles now that I am just
saturated.  I have a general idea but would like feedback from others.

I understand query tuning and table design play a large role in
performance, but taking that factor away
and focusing on just hardware, what is the best hardware to get for
Pg to work at the highest level
(meaning speed at returning results)?

How does pg utilize multiple processors?  The more the better?
Are queries spread across multiple processors?
Is Pg 64 bit?
If so what processors are recommended?

I read this : http://www.postgresql.org/files/documentation/books/
aw_pgsql/hw_performance/node12.html
POSTGRESQL uses a multi-process model, meaning each database
connection has its own Unix process. Because of this, all multi-cpu
operating systems can spread multiple database connections among the
available CPUs. However, if only a single database connection is
active, it can only use one CPU. POSTGRESQL does not use multi-
threading to allow a single process to use multiple CPUs.

It's pretty old (2003), but is it still accurate?  If this statement is
accurate, how would it affect connection pooling software like pg_pool?

RAM?  The more the merrier, right?  I understand shmmax and the pg
config file parameters for shared mem have to be adjusted to use it.
Disks?  Standard RAID rules, right?  1 for safety, 5 for the best mix
of performance and safety?
Any preference of SCSI over SATA?  What about using a high-speed
(fibre channel) mass storage device?

Who has built the biggest baddest Pg server out there and what do you
use?

Thanks!





Re: hardware for PostgreSQL

From
Ben
Date:
It would probably help you to spend some time browsing the archives of
this list for questions similar to yours - you'll find quite a lot of
consistent answers. In general, you'll find that:

- If you can fit your entire database into memory, you'll get the best
   performance (a quick way to check your database's size is shown below).

- If you cannot (and most databases cannot) then you'll want to get the
   fastest disk system you can.

- For reads, RAID5 isn't so bad but for writes it's near the bottom of the
   options. RAID10 is not as efficient in terms of hardware, but if you
   want performance for both reads and writes, you want RAID10.

- Your RAID card also matters. Areca cards are expensive, and a lot of
   people consider them to be worth it.

- More procs tend to be better than faster procs, because more procs let
   you do more at once and databases tend to be i/o bound more than cpu
   bound.

- More or faster procs put more contention on the data, so getting more or
   better cpus just increases the need for faster disks or more ram.

- PG is 64 bit if you compile it to be so, or if you install a 64-bit
   binary package.

....and all that said, application and schema design can play a far more
important role in performance than hardware.
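
As a quick check on that first point, the built-in size functions
(available since 8.1) give you a number to compare against the RAM you
plan to buy:

  -- Size of the database you're connected to.
  SELECT pg_size_pretty(pg_database_size(current_database()));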


On Wed, 31 Oct 2007, Ketema Harris wrote:

> I am trying to build a very Robust DB server that will support 1000+
> concurrent users (all ready have seen max of 237 no pooling being used).  i
> have read so many articles now that I am just saturated.  I have a general
> idea but would like feedback from others.
>
> I understand query tuning and table design play a large role in performance,
> but taking that factor away
> and focusing on just hardware, what is the best hardware to get for Pg to
> work at the highest level
> (meaning speed at returning results)?
>
> How does pg utilize multiple processors?  The more the better?
> Are queries spread across multiple processors?
> Is Pg 64 bit?
> If so what processors are recommended?
>
> I read this :
> http://www.postgresql.org/files/documentation/books/aw_pgsql/hw_performance/node12.html
> POSTGRESQL uses a multi-process model, meaning each database connection has
> its own Unix process. Because of this, all multi-cpu operating systems can
> spread multiple database connections among the available CPUs. However, if
> only a single database connection is active, it can only use one CPU.
> POSTGRESQL does not use multi-threading to allow a single process to use
> multiple CPUs.
>
> Its pretty old (2003) but is it still accurate?  if this statement is
> accurate how would it affect connection pooling software like pg_pool?
>
> RAM?  The more the merrier right? Understanding shmmax and the pg config file
> parameters for shared mem has to be adjusted to use it.
> Disks?  standard Raid rules right?  1 for safety 5 for best mix of
> performance and safety?
> Any preference of SCSI over SATA? What about using a High speed (fibre
> channel) mass storage device?
>
> Who has built the biggest baddest Pg server out there and what do you use?
>
> Thanks!
>
>
>
>
>
>

Re: hardware for PostgreSQL

From
"Scott Marlowe"
Date:
On 10/31/07, Ketema Harris <ketema@ketema.net> wrote:
> I am trying to build a very Robust DB server that will support 1000+
> concurrent users (all ready have seen max of 237 no pooling being
> used).  i have read so many articles now that I am just saturated.  I
> have a general idea but would like feedback from others.

Slow down, take a deep breath.  It's going to be ok.

You should definitely be looking at connection pooling.  pgbouncer from
skype has gotten some good coverage lately.  I've used pgpool and
pgpool II with good luck myself.

> How does pg utilize multiple processors?  The more the better?
> Are queries spread across multiple processors?
> Is Pg 64 bit?
> If so what processors are recommended?

Generally more is better, up to a point.  PG runs each query on at
most one processor; i.e. it doesn't spread a single query out over
multiple CPUs.
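
As a quick illustration of the one-backend-per-connection model, every
connection (and whatever it is currently running) shows up as its own
row/process in pg_stat_activity; the column names below are the 8.x
ones (later releases rename procpid and current_query):

  -- One row per connection / backend process.
  SELECT procpid, datname, usename, current_query
  FROM pg_stat_activity;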

Yes, it's 64 bit, if you use a 64 bit version on a 64 bit OS.

Right now both Intel and AMD CPUs seem pretty good.

> RAM?  The more the merrier right?

That depends.  If 16 GB of RAM runs 33% faster than 32 GB would, the
16 GB will probably be better, especially if your data set fits in
16 GB.  But all things being equal, more memory = good.

> Understanding shmmax and the pg
> config file parameters for shared mem has to be adjusted to use it.

Don't forget all the other parameters like work_mem and the fsm
settings, and regular vacuuming / autovacuuming.
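
For example (values here are purely illustrative, not recommendations):

  -- Per-session experiment; a bare number is in kB, so this is 32MB.
  SET work_mem = 32768;
  SHOW work_mem;

  -- The free space map settings are server-wide (postgresql.conf) on
  -- 8.x, but you can at least inspect what's configured:
  SHOW max_fsm_pages;

  -- And keep statistics and free space up to date if autovacuum isn't
  -- doing it for you:
  VACUUM ANALYZE;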

> Disks?  standard Raid rules right?  1 for safety 5 for best mix of
> performance and safety?

Neither of those is optimal for a transactional database.  RAID 5
isn't particularly safe, since losing two disks kills your whole
array.  RAID-10 is generally preferred, and RAID 50 or 6 can be a good
choice.

> Any preference of SCSI over SATA? What about using a High speed
> (fibre channel) mass storage device?

What's most important is the quality of your controller.  A very high
quality SATA controller will beat a mediocre SCSI controller.  A
write-back cache with battery backup is a must.  Escalade, Areca, LSI
and now apparently even Adaptec all have good controllers.  Hint:  If
it costs $85 or so, it's likely not a great choice for RAID.

I've seen many <$200 RAID controllers that were much better when you
turned off the RAID software and used kernel SW mode RAID instead
(witness the Adaptec 14xx series).

Mass storage can be useful, especially if you need a lot of storage or
expansion ability.

> Who has built the biggest baddest Pg server out there and what do you
> use?

Not me, but we had a post from somebody with a very very very large
pgsql database on this list a few months ago...  Search the archives.

Re: hardware for PostgreSQL

From
Joe Uhl
Date:
I realize there are people who discourage looking at Dell, but I've been
very happy with a larger ball of equipment we ordered recently from
them.  Our database servers consist of a PowerEdge 2950 connected to a
PowerVault MD1000 with a 1 meter SAS cable.

The 2950 tops out at dual quad core cpus, 32 gb ram, and 6 x 3.5"
drives.  It has a Perc 5/i as the controller of the in-box disks but
then also has room for 2 Perc 5/e controllers that can allow connecting
up to 2 chains of disk arrays to the thing.

In our environment we started the boxes off at 8gb ram with 6 15k SAS
disks in the server and then connected an MD1000 with 15 SATA disks to
one of the Perc 5/e controllers.  Gives tons of flexibility for growth
and for tablespace usage depending on budget and what you can spend on
your disks.  We have everything on the SATA disks right now but plan to
start moving the most brutalized indexes to the SAS disks very soon.

If you do use Dell, get connected with a small business account manager
for better prices and more attention.

Joe

Ketema Harris wrote:
> I am trying to build a very Robust DB server that will support 1000+
> concurrent users (all ready have seen max of 237 no pooling being
> used).  i have read so many articles now that I am just saturated.  I
> have a general idea but would like feedback from others.
>
> I understand query tuning and table design play a large role in
> performance, but taking that factor away
> and focusing on just hardware, what is the best hardware to get for Pg
> to work at the highest level
> (meaning speed at returning results)?
>
> How does pg utilize multiple processors?  The more the better?
> Are queries spread across multiple processors?
> Is Pg 64 bit?
> If so what processors are recommended?
>
> I read this :
> http://www.postgresql.org/files/documentation/books/aw_pgsql/hw_performance/node12.html
>
> POSTGRESQL uses a multi-process model, meaning each database
> connection has its own Unix process. Because of this, all multi-cpu
> operating systems can spread multiple database connections among the
> available CPUs. However, if only a single database connection is
> active, it can only use one CPU. POSTGRESQL does not use
> multi-threading to allow a single process to use multiple CPUs.
>
> Its pretty old (2003) but is it still accurate?  if this statement is
> accurate how would it affect connection pooling software like pg_pool?
>
> RAM?  The more the merrier right? Understanding shmmax and the pg
> config file parameters for shared mem has to be adjusted to use it.
> Disks?  standard Raid rules right?  1 for safety 5 for best mix of
> performance and safety?
> Any preference of SCSI over SATA? What about using a High speed (fibre
> channel) mass storage device?
>
> Who has built the biggest baddest Pg server out there and what do you
> use?
>
> Thanks!
>
>
>
>
>

Re: hardware for PostgreSQL

From
Ron St-Pierre
Date:
Joe Uhl wrote:
> I realize there are people who discourage looking at Dell, but i've been
> very happy with a larger ball of equipment we ordered recently from
> them.  Our database servers consist of a PowerEdge 2950 connected to a
> PowerVault MD1000 with a 1 meter SAS cable.
>
>
We have a similar piece of equipment from Dell (the PowerEdge), and when
we had a problem with it we received excellent service from them. When
our raid controller went down (machine < 1 year old), Dell helped to
diagnose the problem and installed a new one at our hosting facility,
all within 24 hours.

fyi

Ron



Re: hardware for PostgreSQL

From
"Joshua D. Drake"
Date:
On Wed, 31 Oct 2007 14:54:51 -0400
Joe Uhl <joeuhl@gmail.com> wrote:

> I realize there are people who discourage looking at Dell, but i've
> been very happy with a larger ball of equipment we ordered recently
> from them.  Our database servers consist of a PowerEdge 2950
> connected to a PowerVault MD1000 with a 1 meter SAS cable.
>
> The 2950 tops out at dual quad core cpus, 32 gb ram, and 6 x 3.5"
> drives.  It has a Perc 5/i as the controller of the in-box disks but
> then also has room for 2 Perc 5/e controllers that can allow
> connecting up to 2 chains of disk arrays to the thing.

The new Dells based on Woodcrest (which is what you are talking about)
are a much better product than what Dell used to ship.

Joshua D. Drake
--

      === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564   24x7/Emergency: +1.800.492.2240
PostgreSQL solutions since 1997  http://www.commandprompt.com/
            UNIQUE NOT NULL
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/


Attachment

Re: hardware for PostgreSQL

From
Paul Lambert
Date:
Ron St-Pierre wrote:
> Joe Uhl wrote:
>> I realize there are people who discourage looking at Dell, but i've been
>> very happy with a larger ball of equipment we ordered recently from
>> them.  Our database servers consist of a PowerEdge 2950 connected to a
>> PowerVault MD1000 with a 1 meter SAS cable.
>>
>>
> We have a similar piece of equipment from Dell (the PowerEdge), and when
> we had a problem with it we received excellent service from them. When
> our raid controller went down (machine < 1 year old), Dell helped to
> diagnose the problem and installed a new one at our hosting facility,
> all within 24 hours.
>
> fyi
>
> Ron
>

This is good to know - I've got a new Dell PowerEdge 2900 quad-core,
4GB RAM, 6*146GB SAN disks on a RAID-5 controller arriving in the next
week or two as a new database server. Well, more like a database
development machine - but it will host some non-production databases
as well. Good to know Dell can be relied on these days; I was a bit
concerned about that when purchasing sent me a copy invoice from Dell
- particularly since I was originally told an IBM was on the way.

Good ol' purchasing, can always rely on them to change their minds last
minute when they find a cheaper system.

--
Paul Lambert
Database Administrator
AutoLedgers


Re: hardware for PostgreSQL

From
Magnus Hagander
Date:
Ron St-Pierre wrote:
> Joe Uhl wrote:
>> I realize there are people who discourage looking at Dell, but i've been
>> very happy with a larger ball of equipment we ordered recently from
>> them.  Our database servers consist of a PowerEdge 2950 connected to a
>> PowerVault MD1000 with a 1 meter SAS cable.
>>
>>
> We have a similar piece of equipment from Dell (the PowerEdge), and when
> we had a problem with it we received excellent service from them. When
> our raid controller went down (machine < 1 year old), Dell helped to
> diagnose the problem and installed a new one at our hosting facility,
> all within 24 hours.

24 hours?! I have a new one for my HP boxes onsite in 4 hours, including
a tech if needed...

But I assume Dell also has service-agreement deals you can buy to get
the level of service you'd want. (But you won't get that for a
non-brand-name server, most likely.)

Bottom line - don't underestimate the service you get from the vendor
when something breaks. Because eventually, something *will* break.


//Magnus

Re: hardware for PostgreSQL

From
Joe Uhl
Date:
Magnus Hagander wrote:
> Ron St-Pierre wrote:
>
>> Joe Uhl wrote:
>>
>>> I realize there are people who discourage looking at Dell, but i've been
>>> very happy with a larger ball of equipment we ordered recently from
>>> them.  Our database servers consist of a PowerEdge 2950 connected to a
>>> PowerVault MD1000 with a 1 meter SAS cable.
>>>
>>>
>>>
>> We have a similar piece of equipment from Dell (the PowerEdge), and when
>> we had a problem with it we received excellent service from them. When
>> our raid controller went down (machine < 1 year old), Dell helped to
>> diagnose the problem and installed a new one at our hosting facility,
>> all within 24 hours.
>>
>
> 24 hours?! I have a new one for my HP boxes onsite in 4 hours, including
> a tech if needed...
>
> But I assume Dell also has service-agreement deals you can get to get
> the level of service you'd want. (But you won't get it for a
> non-brand-name server, most likely)
>
> Bottom line - don't underestimate the service you get from the vendor
> when something breaks. Because eventually, something *will* break.
>
>
> //Magnus
>
Yeah the response time depends on the service level purchased.  I
generally go with 24 hour because everything is redundant so a day of
downtime isn't going to bring services down (though it could make them
slow depending on what fails) but you can purchase 4 hr and in some
cases even 2 hr.  I had a "gold" level support contract on a server that
failed a while back, and within 3 net hours they diagnosed and fixed the
problem by getting onsite and replacing the motherboard and a CPU.  I
haven't had any of our 24hr support level devices fail yet so don't have
anything to compare there.

If you do go with Dell and want the higher support contracts, I'll
restate that a small business account is the way to go.  Typically the
prices are better to the point that a support level upgrade appears free
when compared to the best shopping cart combo I can come up with.

Joe

Re: partitioned table and ORDER BY indexed_field DESC LIMIT 1

From
Mark Kirkwood
Date:
Gregory Stark wrote:
> cdbpathlocus_pull_above_projection
>
>
> In particular this is the function I was hoping to see. Anyways as Tom pointed
> out previously there's precedent in Postgres as well for subqueries so I'm
> sure I'll be able to do it.
>
> (But I'm still not entirely convinced putting the append member vars into the
> eclasses would be wrong btw...)
>
>
I spent today looking at getting this patch into a self-contained state.
Working against HEAD I'm getting bogged down in the PathKeyItem to
PathKey/EquivalenceClass/EquivalenceMember(s) change. So I figured I'd
divide and conquer to some extent, and initially provide a patch:

- against 8.2.(5)
- self-contained (i.e. no mystery functions)

The next step would be to update it to HEAD. That would hopefully
provide some useful material for others working on this.

Thoughts/suggestions?

regards

Mark


Re: partitioned table and ORDER BY indexed_field DESC LIMIT 1

From
Gregory Stark
Date:
"Mark Kirkwood" <markir@paradise.net.nz> writes:

> I spent today looking at getting this patch into a self contained state.
> Working against HEAD I'm getting bogged down in the PathKeyItem to
> PathKey/EquivalenceClass/EquivalenceMember(s) change. So I figured I'd divide
> and conquer to some extent, and initially provide a patch:
>
> - against 8.2.(5)
> - self contained  (i.e no mystery functions)

That would be helpful for me. It would include the bits I'm looking for.

> The next step would be to update to to HEAD. That would hopefully provide some
> useful material for others working on this.

If that's not too much work then that would be great but if it's a lot of work
then it may not be worth it if I'm planning to only take certain bits. On the
other hand if it's good then we might just want to take it wholesale and then
add to it.

--
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com
  Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL training!

Re: partitioned table and ORDER BY indexed_field DESC LIMIT 1

From
Mark Kirkwood
Date:
Gregory Stark wrote:
> "Mark Kirkwood" <markir@paradise.net.nz> writes:
>
>
>> I spent today looking at getting this patch into a self contained state.
>> Working against HEAD I'm getting bogged down in the PathKeyItem to
>> PathKey/EquivalenceClass/EquivalenceMember(s) change. So I figured I'd divide
>> and conquer to some extent, and initially provide a patch:
>>
>> - against 8.2.(5)
>> - self contained  (i.e no mystery functions)
>>
>
> That would be helpful for me. It would include the bits I'm looking for.
>
>
>> The next step would be to update to to HEAD. That would hopefully provide some
>> useful material for others working on this.
>>
>
> If that's not too much work then that would be great but if it's a lot of work
> then it may not be worth it if I'm planning to only take certain bits. On the
> other hand if it's good then we might just want to take it wholesale and then
> add to it.
>
>

Here is a (somewhat hurried) self-contained version of the patch under
discussion. It applies to 8.2.5 and the resultant code compiles and
runs. I've left in some unneeded parallel stuff (PathLocus struct),
which I can weed out in a subsequent version if desired. I also removed
the 'cdb ' from  most of the function names and (I  hope) any Greenplum
copyrights.

I discovered that the patch solves a slightly different problem... it
pulls up index scans as a viable path choice (but not for the DESC
case), but does not push down the LIMIT to the child tables... so the
actual performance improvement is zero - however, hopefully the patch
provides useful raw material to help.

e.g. - using the example schema from the OP's email - but removing the DESC
from the query:

part=# set enable_seqscan=off;
SET
part=# explain SELECT * FROM n_traf ORDER BY date_time LIMIT 1;
                                  QUERY PLAN
---------------------------------------------------------------------------------------------------------------------
 Limit  (cost=198367.14..198367.15 rows=1 width=20)
   ->  Sort  (cost=198367.14..200870.92 rows=1001510 width=20)
         Sort Key: public.n_traf.date_time
         ->  Result  (cost=0.00..57464.92 rows=1001510 width=20)
               ->  Append  (cost=0.00..57464.92 rows=1001510 width=20)
                     ->  Index Scan using n_traf_date_time_login_id on n_traf  (cost=0.00..66.90 rows=1510 width=20)
                     ->  Index Scan using n_traf_y2007m01_date_time_login_id on n_traf_y2007m01 n_traf  (cost=0.00..4748.38 rows=83043 width=20)
                     ->  Index Scan using n_traf_y2007m02_date_time_login_id on n_traf_y2007m02 n_traf  (cost=0.00..4772.60 rows=83274 width=20)
                     ->  Index Scan using n_traf_y2007m03_date_time_login_id on n_traf_y2007m03 n_traf  (cost=0.00..4782.12 rows=83330 width=20)
                     ->  Index Scan using n_traf_y2007m04_date_time_login_id on n_traf_y2007m04 n_traf  (cost=0.00..4818.29 rows=83609 width=20)
                     ->  Index Scan using n_traf_y2007m05_date_time_login_id on n_traf_y2007m05 n_traf  (cost=0.00..4721.85 rows=82830 width=20)
                     ->  Index Scan using n_traf_y2007m06_date_time_login_id on n_traf_y2007m06 n_traf  (cost=0.00..4766.56 rows=83357 width=20)
                     ->  Index Scan using n_traf_y2007m07_date_time_login_id on n_traf_y2007m07 n_traf  (cost=0.00..4800.44 rows=83548 width=20)
                     ->  Index Scan using n_traf_y2007m08_date_time_login_id on n_traf_y2007m08 n_traf  (cost=0.00..4787.55 rows=83248 width=20)
                     ->  Index Scan using n_traf_y2007m09_date_time_login_id on n_traf_y2007m09 n_traf  (cost=0.00..4830.67 rows=83389 width=20)
                     ->  Index Scan using n_traf_y2007m10_date_time_login_id on n_traf_y2007m10 n_traf  (cost=0.00..4795.78 rows=82993 width=20)
                     ->  Index Scan using n_traf_y2007m11_date_time_login_id on n_traf_y2007m11 n_traf  (cost=0.00..4754.26 rows=83351 width=20)
                     ->  Index Scan using n_traf_y2007m12_date_time_login_id on n_traf_y2007m12 n_traf  (cost=0.00..4819.51 rows=84028 width=20)
(18 rows)







Attachment

Re: partitioned table and ORDER BY indexed_field DESC LIMIT 1

From
Gregory Stark
Date:
"Mark Kirkwood" <markir@paradise.net.nz> writes:

> Here is a (somewhat hurried) self-contained version of the patch under
> discussion. It applies to 8.2.5 and the resultant code compiles and runs. I've
> left in some unneeded parallel stuff (PathLocus struct), which I can weed out
> in a subsequent version if desired. I also removed the 'cdb ' from  most of the
> function names and (I  hope) any Greenplum copyrights.

Thanks, I'll take a look at it.

> I discovered that the patch solves a slightly different problem... it pulls up
> index scans as a viable path choice, (but not for the DESC case) but does not
> push down the LIMIT to the child tables ... so the actual performance
> improvement is zero - however hopefully the patch provides useful raw material
> to help.


> SET
> part=# explain SELECT * FROM n_traf ORDER BY date_time LIMIT 1;
>                                   QUERY PLAN
> ---------------------------------------------------------------------------------------------------------------------
> Limit  (cost=198367.14..198367.15 rows=1 width=20)
>   ->  Sort  (cost=198367.14..200870.92 rows=1001510 width=20)
>         Sort Key: public.n_traf.date_time
>         ->  Result  (cost=0.00..57464.92 rows=1001510 width=20)
>               ->  Append  (cost=0.00..57464.92 rows=1001510 width=20)
>                     ->  Index Scan using n_traf_date_time_login_id on n_traf  (cost=0.00..66.90 rows=1510 width=20)

That looks suspicious. There's likely no good reason to be using the index
scan unless it avoids the sort node above the Append node. That's what I hope
to do by having the Append executor code do what's necessary to maintain the
order.

From skimming your patch previously I thought the main point was when there
was only one subnode. In that case it was able to pull the subnode entirely
out of the append node and pull up the paths of the subnode. In Postgres that
would never happen because constraint exclusion will never be able to prune
down to a single partition because of the parent table problem but I expect
we'll change that.
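
The "parent table problem" is easy to see with the schema from this
thread: constraint exclusion can drop children whose CHECK constraints
contradict the WHERE clause, but the (normally empty) parent relation
always stays in the Append, so the plan never collapses to a single
table:

  SET constraint_exclusion = on;
  EXPLAIN SELECT * FROM n_traf
  WHERE date_time >= '2007-03-01' AND date_time < '2007-04-01';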

--
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com
  Ask me about EnterpriseDB's On-Demand Production Tuning

libgcc double-free, backend won't die

From
Craig James
Date:
This is driving me crazy.  I have some Postgres C function extensions in a shared library.  They've been working fine.
I upgraded to Fedora Core 6 and gcc4, and now every time psql(1) disconnects from the server, the serverlog gets this
message:

   *** glibc detected *** postgres: mydb mydb [local] idle: double free or corruption! (!prev): 0x08bfcde8

and the backend process won't die.  Every single connection that executes one of my functions leaves an idle process,
like this:

  $ ps -ef | grep postgres
  postgres 12938 12920  0  23:24 ?    00:00:00 postgres: mydb mydb [local] idle

This error only happens on disconnect.  As long as I keep the connection open, I can keep working with no problems.

Worse, these zombie Postgres processes won't die, which means I can't shut down and restart Postgres unless I "kill -9"
all of them, and I can't use this at all because I get zillions of these dead processes.

I've used valgrind on a test application that runs all of my functions outside of the Postgres environment, and not a
single problem shows up even after hours of processing.  I tried setting MALLOC_CHECK_ to various values, so that I
could trap the abort() call using gdb, but once MALLOC_CHECK_ is set, the double-free error never occurs.  (But malloc
slows down too much this way.)

I even read through the documentation for C functions again, and carefully examined my code.  Nothing is amiss; some of
the functions are quite simple yet still exhibit this problem.

Anyone seen this before?  It's driving me nuts.

  Postgres 8.1.4
  Linux kernel 2.6.22
  gcc 4.1.1

Thanks,
Craig

Re: libgcc double-free, backend won't die

From
Alvaro Herrera
Date:
Craig James wrote:
> This is driving me crazy.  I have some Postgres C function extensions
> in a shared library.  They've been working fine.  I upgraded to Fedora
> Core 6 and gcc4, and now every time psql(1) disconnects from the
> server, the serverlog gets this message:
>
>   *** glibc detected *** postgres: mydb mydb [local] idle: double free or corruption! (!prev): 0x08bfcde8

Do you have any Perl or Python functions or stuff like that?

>  Postgres 8.1.4

Please upgrade to 8.1.10 and try again.  If it still fails we will be
much more interested in tracking it down.

--
Alvaro Herrera       Valdivia, Chile   ICBM: S 39º 49' 18.1", W 73º 13' 56.4"
Maybe there's lots of data loss but the records of data loss are also lost.
(Lincoln Yeoh)

Re: libgcc double-free, backend won't die

From
Craig James
Date:
Alvaro Herrera wrote:
> Craig James wrote:
>> This is driving me crazy.  I have some Postgres C function extensions
>> in a shared library.  They've been working fine.  I upgraded to Fedora
>> Core 6 and gcc4, and now every time psql(1) disconnects from the
>> server, the serverlog gets this message:
>>
>>   *** glibc detected *** postgres: mydb mydb [local] idle: double free or corruption! (!prev): 0x08bfcde8
>
> Do you have any Perl or Python functions or stuff like that?

There is one Perl function, but it is never invoked during this test.  I connect to Postgres, issue one "select
myfunc()",and disconnect. 

>>  Postgres 8.1.4
>
> Please upgrade to 8.1.10 and try again.  If it still fails we will be
> much more interested in tracking it down.

Good idea, but alas, no difference.  I get the same "double free or corruption!" message.  I compiled 8.1.10 from source
and installed it, then rebuilt all of my code from scratch and reinstalled the shared object. Same message as before.

Here is my guess -- and this is just a guess.  My functions use a third-party library which, of necessity, uses
malloc/free in the ordinary way.  I suspect that there's a bug in the Postgres palloc() code that's walking over memory
that regular malloc() allocates.  The third-party library (OpenBabel) has been tested pretty thoroughly by me and others
and has no memory corruption problems.  All mallocs are freed properly.  Does that seem like a possibility?

I can't figure out how to use ordinary tools like valgrind with a Postgres backend process to track this down.

Thanks,
Craig


Re: libgcc double-free, backend won't die

From
Tom Lane
Date:
Craig James <craig_james@emolecules.com> writes:
> This is driving me crazy.  I have some Postgres C function extensions in a shared library.  They've been working
fine. I upgraded to Fedora Core 6 and gcc4, and now every time psql(1) disconnects from the server, the serverlog gets
this message:
>    *** glibc detected *** postgres: mydb mydb [local] idle: double free or corruption! (!prev): 0x08bfcde8

Have you tried attaching to one of these processes with gdb to see where
it ends up?  Have you checked to see if the processes are becoming
multi-threaded?

            regards, tom lane

Re: libgcc double-free, backend won't die

From
Craig James
Date:
Tom Lane wrote:
> Craig James <craig_james@emolecules.com> writes:
>> This is driving me crazy.  I have some Postgres C function extensions in a shared library.  They've been working
fine. I upgraded to Fedora Core 6 and gcc4, and now every time psql(1) disconnects from the server, the serverlog gets
this message:
>>    *** glibc detected *** postgres: mydb mydb [local] idle: double free or corruption! (!prev): 0x08bfcde8
>
> Have you tried attaching to one of these processes with gdb to see where
> it ends up?  Have you checked to see if the processes are becoming
> multi-threaded?
>
>             regards, tom lane
>


# ps -ef | grep postgres
postgres 31362     1  0 06:53 ?        00:00:00 /usr/local/pgsql/bin/postmaster -D /postgres/main
postgres 31364 31362  0 06:53 ?        00:00:00 postgres: writer process
postgres 31365 31362  0 06:53 ?        00:00:00 postgres: stats buffer process
postgres 31366 31365  0 06:53 ?        00:00:00 postgres: stats collector process
postgres 31442 31362  0 06:54 ?        00:00:00 postgres: craig_test craig_test [local] idle
root     31518 31500  0 07:06 pts/6    00:00:00 grep postgres
# gdb -p 31442
GNU gdb Red Hat Linux (6.5-15.fc6rh)
Copyright (C) 2006 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.

[snip - a bunch of symbol table stuff]

0x00110402 in __kernel_vsyscall ()
(gdb) bt
#0  0x00110402 in __kernel_vsyscall ()
#1  0x0082fb8e in __lll_mutex_lock_wait () from /lib/libc.so.6
#2  0x007bfce8 in _L_lock_14096 () from /lib/libc.so.6
#3  0x007befa4 in free () from /lib/libc.so.6
#4  0x00744f93 in _dl_map_object_deps () from /lib/ld-linux.so.2
#5  0x0074989d in dl_open_worker () from /lib/ld-linux.so.2
#6  0x00745c36 in _dl_catch_error () from /lib/ld-linux.so.2
#7  0x00749222 in _dl_open () from /lib/ld-linux.so.2
#8  0x00858712 in do_dlopen () from /lib/libc.so.6
#9  0x00745c36 in _dl_catch_error () from /lib/ld-linux.so.2
#10 0x008588c5 in __libc_dlopen_mode () from /lib/libc.so.6
#11 0x00836139 in init () from /lib/libc.so.6
#12 0x008362d3 in backtrace () from /lib/libc.so.6
#13 0x007b3e11 in __libc_message () from /lib/libc.so.6
#14 0x007bba96 in _int_free () from /lib/libc.so.6
#15 0x007befb0 in free () from /lib/libc.so.6
#16 0x001f943a in DeleteByteCode (node=0x890ff4) at chains.cpp:477
#17 0x00780859 in exit () from /lib/libc.so.6
#18 0x081a6064 in proc_exit ()
#19 0x081b5b9d in PostgresMain ()
#20 0x0818e34b in ServerLoop ()
#21 0x0818f1de in PostmasterMain ()
#22 0x08152369 in main ()
(gdb)


Re: libgcc double-free, backend won't die

From
Alvaro Herrera
Date:
Craig James wrote:

> Here is my guess -- and this is just a guess.  My functions use a
> third-party library which, of necessity, uses malloc/free in the
> ordinary way.  I suspect that there's a bug in the Postgres palloc()
> code that's walking over memory that regular malloc() allocates.  The
> third-party library (OpenBabel) has been tested pretty thoroughly by
> me an others and has no memory corruption problems.  All malloc's are
> freed properly.  Does that seem like a possibility?

Not really.  palloc uses malloc underneath.

--
Alvaro Herrera       Valdivia, Chile   ICBM: S 39º 49' 18.1", W 73º 13' 56.4"
"La vida es para el que se aventura"

Re: libgcc double-free, backend won't die

From
Craig James
Date:
Alvaro Herrera wrote:
> Craig James wrote:
>
>> Here is my guess -- and this is just a guess.  My functions use a
>> third-party library which, of necessity, uses malloc/free in the
>> ordinary way.  I suspect that there's a bug in the Postgres palloc()
>> code that's walking over memory that regular malloc() allocates.  The
>> third-party library (OpenBabel) has been tested pretty thoroughly by
>> me an others and has no memory corruption problems.  All malloc's are
>> freed properly.  Does that seem like a possibility?
>
> Not really.  palloc uses malloc underneath.

But some Postgres code could be walking off the end of a malloc'ed block, even if palloc() is allocating and
deallocating correctly.  Which is why I was hoping to use valgrind to see what's going on.

Thanks,
Craig



Re: libgcc double-free, backend won't die

From
Alvaro Herrera
Date:
Craig James wrote:
> Alvaro Herrera wrote:
>> Craig James wrote:
>>
>>> Here is my guess -- and this is just a guess.  My functions use a
>>> third-party library which, of necessity, uses malloc/free in the
>>> ordinary way.  I suspect that there's a bug in the Postgres palloc()
>>> code that's walking over memory that regular malloc() allocates.  The
>>> third-party library (OpenBabel) has been tested pretty thoroughly by
>>> me an others and has no memory corruption problems.  All malloc's are
>>> freed properly.  Does that seem like a possibility?
>>
>> Not really.  palloc uses malloc underneath.
>
> But some Postgres code could be walking off the end of a malloc'ed
> block, even if palloc() is allocating and deallocating correctly.
> Which is why I was hoping to use valgrind to see what's going on.

I very much doubt it.  Since you've now shown that OpenBabel is
multithreaded, that's a much more likely cause.

--
Alvaro Herrera                  http://www.amazon.com/gp/registry/5ZYLFMCVHXC
"When the proper man does nothing (wu-wei),
his thought is felt ten thousand miles." (Lao Tse)

Re: libgcc double-free, backend won't die

From
Craig James
Date:
Alvaro Herrera wrote:
> Craig James wrote:
>> Alvaro Herrera wrote:
>>> Craig James wrote:
>>>
>>>> Here is my guess -- and this is just a guess.  My functions use a
>>>> third-party library which, of necessity, uses malloc/free in the
>>>> ordinary way.  I suspect that there's a bug in the Postgres palloc()
>>>> code that's walking over memory that regular malloc() allocates.  The
>>>> third-party library (OpenBabel) has been tested pretty thoroughly by
>>>> me an others and has no memory corruption problems.  All malloc's are
>>>> freed properly.  Does that seem like a possibility?
>>> Not really.  palloc uses malloc underneath.
>> But some Postgres code could be walking off the end of a malloc'ed
>> block, even if palloc() is allocating and deallocating correctly.
>> Which is why I was hoping to use valgrind to see what's going on.
>
> I very much doubt it.  Since you've now shown that OpenBabel is
> multithreaded, then that's a much more likely cause.

Can you elaborate?  Are multithreaded libraries not allowed to be linked to Postgres?

Thanks,
Craig

Re: libgcc double-free, backend won't die

From
Alvaro Herrera
Date:
Craig James wrote:
> Alvaro Herrera wrote:
>> Craig James wrote:
>>> Alvaro Herrera wrote:
>>>> Craig James wrote:
>>>>
>>>>> Here is my guess -- and this is just a guess.  My functions use a
>>>>> third-party library which, of necessity, uses malloc/free in the
>>>>> ordinary way.  I suspect that there's a bug in the Postgres palloc()
>>>>> code that's walking over memory that regular malloc() allocates.  The
>>>>> third-party library (OpenBabel) has been tested pretty thoroughly by
>>>>> me an others and has no memory corruption problems.  All malloc's are
>>>>> freed properly.  Does that seem like a possibility?
>>>> Not really.  palloc uses malloc underneath.
>>> But some Postgres code could be walking off the end of a malloc'ed
>>> block, even if palloc() is allocating and deallocating correctly.
>>> Which is why I was hoping to use valgrind to see what's going on.
>>
>> I very much doubt it.  Since you've now shown that OpenBabel is
>> multithreaded, then that's a much more likely cause.
>
> Can you elaborate?  Are multithreaded libraries not allowed to be
> linked to Postgres?

Absolutely not.

--
Alvaro Herrera                         http://www.flickr.com/photos/alvherre/
"La gente vulgar solo piensa en pasar el tiempo;
el que tiene talento, en aprovecharlo"

Re: libgcc double-free, backend won't die

From
Tom Lane
Date:
Craig James <craig_james@emolecules.com> writes:
> GNU gdb Red Hat Linux (6.5-15.fc6rh)
> Copyright (C) 2006 Free Software Foundation, Inc.
> GDB is free software, covered by the GNU General Public License, and you are
> welcome to change it and/or distribute copies of it under certain conditions.

> [snip - a bunch of symbol table stuff]

Please show that stuff you snipped --- it might have some relevant
information.  The stack trace looks a bit like a threading problem...

            regards, tom lane

Re: libgcc double-free, backend won't die

From
Craig James
Date:
Alvaro Herrera wrote:
>>> ...Since you've now shown that OpenBabel is
>>> multithreaded, then that's a much more likely cause.
>> Can you elaborate?  Are multithreaded libraries not allowed to be
>> linked to Postgres?
>
> Absolutely not.

Ok, thanks, I'll work on recompiling OpenBabel without thread support.

Since I'm not a Postgres developer, perhaps one of the maintainers could update the Postgres manual.  In chapter
32.9.6, it says,

  "To be precise, a shared library needs to be created."

This should be amended to say,

  "To be precise, a non-threaded, shared library needs to be created."

Cheers,
Craig



Re: libgcc double-free, backend won't die

From
Craig James
Date:
Tom Lane wrote:
> Craig James <craig_james@emolecules.com> writes:
>> GNU gdb Red Hat Linux (6.5-15.fc6rh)
>> Copyright (C) 2006 Free Software Foundation, Inc.
>> GDB is free software, covered by the GNU General Public License, and you are
>> welcome to change it and/or distribute copies of it under certain conditions.
>
>> [snip - a bunch of symbol table stuff]
>
> Please show that stuff you snipped --- it might have some relevant
> information.  The stack trace looks a bit like a threading problem...

# gdb -p 31442
GNU gdb Red Hat Linux (6.5-15.fc6rh)
Copyright (C) 2006 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as "i386-redhat-linux-gnu".
Attaching to process 31442
Reading symbols from /usr/local/pgsql/bin/postgres...(no debugging symbols found)...done.
Using host libthread_db library "/lib/libthread_db.so.1".
Reading symbols from /usr/lib/libz.so.1...(no debugging symbols found)...done.
Loaded symbols for /usr/lib/libz.so.1
Reading symbols from /usr/lib/libreadline.so.5...(no debugging symbols found)...done.
Loaded symbols for /usr/lib/libreadline.so.5
Reading symbols from /lib/libtermcap.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib/libtermcap.so.2
Reading symbols from /lib/libcrypt.so.1...
(no debugging symbols found)...done.
Loaded symbols for /lib/libcrypt.so.1
Reading symbols from /lib/libresolv.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib/libresolv.so.2
Reading symbols from /lib/libnsl.so.1...(no debugging symbols found)...done.
Loaded symbols for /lib/libnsl.so.1
Reading symbols from /lib/libdl.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib/libdl.so.2
Reading symbols from /lib/libm.so.6...
(no debugging symbols found)...done.
Loaded symbols for /lib/libm.so.6
Reading symbols from /lib/libc.so.6...(no debugging symbols found)...done.
Loaded symbols for /lib/libc.so.6
Reading symbols from /lib/ld-linux.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib/ld-linux.so.2
Reading symbols from /lib/libnss_files.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib/libnss_files.so.2
Reading symbols from /usr/local/pgsql/lib/libchmoogle.so...done.
Loaded symbols for /usr/local/pgsql/lib/libchmoogle.so
Reading symbols from /lib/libgcc_s.so.1...done.
Loaded symbols for /lib/libgcc_s.so.1
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/jaguarformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/jaguarformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/libopenbabel.so.2...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/libopenbabel.so.2
Reading symbols from /usr/lib/libstdc++.so.6...done.
Loaded symbols for /usr/lib/libstdc++.so.6
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/fastaformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/fastaformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/cansmilesformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/cansmilesformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/APIInterface.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/APIInterface.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/mmodformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/mmodformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/molreportformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/molreportformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/fhformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/fhformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/chemkinformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/chemkinformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/mmcifformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/mmcifformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/thermoformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/thermoformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/carformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/carformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/ghemicalformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/ghemicalformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/turbomoleformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/turbomoleformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/xmlformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/xmlformat.so
Reading symbols from /usr/lib/libxml2.so.2...done.
Loaded symbols for /usr/lib/libxml2.so.2
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/rxnformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/rxnformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/reportformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/reportformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/acrformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/acrformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/nwchemformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/nwchemformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/hinformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/hinformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/bgfformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/bgfformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/shelxformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/shelxformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/yasaraformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/yasaraformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/viewmolformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/viewmolformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/mdlformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/mdlformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/CSRformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/CSRformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/cacaoformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/cacaoformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/gaussformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/gaussformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/titleformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/titleformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/gamessformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/gamessformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/zindoformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/zindoformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/fingerprintformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/fingerprintformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/balstformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/balstformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/cssrformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/cssrformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/cdxmlformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/cdxmlformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/crkformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/crkformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/xedformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/xedformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/chemdrawcdxformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/chemdrawcdxformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/cmlformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/cmlformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/mpdformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/mpdformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/amberformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/amberformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/smilesformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/smilesformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/chemtoolformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/chemtoolformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/pubchem.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/pubchem.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/fchkformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/fchkformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/qchemformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/qchemformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/mopacformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/mopacformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/PQSformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/PQSformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/fastsearchformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/fastsearchformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/freefracformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/freefracformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/chem3dformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/chem3dformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/inchiformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/inchiformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/cccformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/cccformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/mpqcformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/mpqcformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/copyformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/copyformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/cifformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/cifformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/unichemformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/unichemformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/boxformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/boxformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/mol2format.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/mol2format.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/tinkerformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/tinkerformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/featformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/featformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/alchemyformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/alchemyformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/pngformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/pngformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/pcmodelformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/pcmodelformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/dmolformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/dmolformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/gausscubeformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/gausscubeformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/povrayformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/povrayformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/xyzformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/xyzformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/cacheformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/cacheformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/chemdrawctformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/chemdrawctformat.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/gromos96format.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/gromos96format.so
Reading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/pdbformat.so...done.
Loaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/pdbformat.so

0x00110402 in __kernel_vsyscall ()
(gdb) bt
#0  0x00110402 in __kernel_vsyscall ()
#1  0x0082fb8e in __lll_mutex_lock_wait () from /lib/libc.so.6
#2  0x007bfce8 in _L_lock_14096 () from /lib/libc.so.6
#3  0x007befa4 in free () from /lib/libc.so.6
#4  0x00744f93 in _dl_map_object_deps () from /lib/ld-linux.so.2
#5  0x0074989d in dl_open_worker () from /lib/ld-linux.so.2
#6  0x00745c36 in _dl_catch_error () from /lib/ld-linux.so.2
#7  0x00749222 in _dl_open () from /lib/ld-linux.so.2
#8  0x00858712 in do_dlopen () from /lib/libc.so.6
#9  0x00745c36 in _dl_catch_error () from /lib/ld-linux.so.2
#10 0x008588c5 in __libc_dlopen_mode () from /lib/libc.so.6
#11 0x00836139 in init () from /lib/libc.so.6
#12 0x008362d3 in backtrace () from /lib/libc.so.6
#13 0x007b3e11 in __libc_message () from /lib/libc.so.6
#14 0x007bba96 in _int_free () from /lib/libc.so.6
#15 0x007befb0 in free () from /lib/libc.so.6
#16 0x001f943a in DeleteByteCode (node=0x890ff4) at chains.cpp:477
#17 0x00780859 in exit () from /lib/libc.so.6
#18 0x081a6064 in proc_exit ()
#19 0x081b5b9d in PostgresMain ()
#20 0x0818e34b in ServerLoop ()
#21 0x0818f1de in PostmasterMain ()
#22 0x08152369 in main ()
(gdb)

Re: libgcc double-free, backend won't die

From
Magnus Hagander
Date:
On Tue, Dec 11, 2007 at 07:50:17AM -0800, Craig James wrote:
> Alvaro Herrera wrote:
> >>>...Since you've now shown that OpenBabel is
> >>>multithreaded, then that's a much more likely cause.
> >>Can you elaborate?  Are multithreaded libraries not allowed to be
> >>linked to Postgres?
> >
> >Absolutely not.
>
> Ok, thanks, I'll work on recompiling OpenBabel without thread support.
>
> Since I'm not a Postgres developer, perhaps one of the maintainers could
> update the Postgres manual.  In chapter 32.9.6, it says,
>
>  "To be precise, a shared library needs to be created."
>
> This should be amended to say,
>
>  "To be precise, a non-threaded, shared library needs to be created."
>

Just before someone goes ahead and writes it (which is probably a good idea
in general), don't write it just like that - because it's platform
dependent. On win32, you can certainly stick a threaded library to it -
which is good, because most (if not all) win32 libs are threaded... Now, if
they actually *use* threads explicitly things might break (but most likely
not from that specifically), but you can link with them without the
problem. I'm sure there are other platforms with similar situations.


//Magnus

Re: libgcc double-free, backend won't die

From
Tom Lane
Date:
Alvaro Herrera <alvherre@alvh.no-ip.org> writes:
> Craig James wrote:
>> Can you elaborate?  Are multithreaded libraries not allowed to be
>> linked to Postgres?

> Absolutely not.

The problem is that you get into library-interaction bugs like the
one discussed here:
http://archives.postgresql.org/pgsql-general/2007-11/msg00580.php
http://archives.postgresql.org/pgsql-general/2007-11/msg00610.php

I suspect what you're seeing is the exact same problem on a different
glibc internal mutex: the mutex is left uninitialized on the first trip
through the code because the process is not multithreaded, and then
after OpenBabel gets loaded the process becomes multithreaded, and then
it starts trying to use the mutex :-(.
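
To make that failure class concrete, here is a toy C sketch (emphatically not
glibc's code; the flag, the functions, and the failure are invented purely for
illustration) of a "skip the lock while single-threaded" fast path getting out
of step once a loaded library turns the process multithreaded:

    /* Toy illustration only: locking decisions taken while the process was
     * single-threaded stop matching reality after a dlopen()'d library makes
     * it multithreaded.  Build with: cc -pthread toy.c */
    #include <pthread.h>
    #include <stdio.h>

    static int is_threaded = 0;     /* "has anybody created a thread yet?" */
    static pthread_mutex_t guard = PTHREAD_MUTEX_INITIALIZER;

    static void enter_critical(void)
    {
        if (is_threaded)            /* fast path: skip locking while single-threaded */
            pthread_mutex_lock(&guard);
    }

    static void leave_critical(void)
    {
        if (is_threaded)
            pthread_mutex_unlock(&guard);
    }

    int main(void)
    {
        enter_critical();           /* single-threaded, so the lock is skipped   */
        is_threaded = 1;            /* roughly what loading a threaded .so does  */
        leave_critical();           /* unlocks a mutex that was never locked,
                                     * undefined behaviour for a default mutex   */
        printf("lock state and thread state are now out of sync\n");
        return 0;
    }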

Since the glibc boys considered the other problem to be their bug,
they'd probably be interested in fixing this one too.  Unfortunately,
you picked a Fedora version that reached EOL last week.  Update to
FC7 or FC8, and if you still see the problem, file a bugzilla entry
against glibc.

But having said all that, that still only addresses the question of
why the process hangs up during exit().  Why the double-free report is
being made at all is less clear, but I kinda think that unexpected
multithread behavior may be at bottom there too.

            regards, tom lane

Re: libgcc double-free, backend won't die

From
Tom Lane
Date:
Craig James <craig_james@emolecules.com> writes:
>> Please show that stuff you snipped --- it might have some relevant
>> information.  The stack trace looks a bit like a threading problem...

> Using host libthread_db library "/lib/libthread_db.so.1".

That's pretty suspicious, but not quite a smoking gun.  Does "info
threads" report more than 1 thread?

> Reading symbols from /usr/lib/libstdc++.so.6...done.
> Loaded symbols for /usr/lib/libstdc++.so.6

Hmm, I wonder whether *this* is the problem, rather than OpenBabel
per se.  Trying to use C++ inside the PG backend is another minefield
of things that don't work.

            regards, tom lane

Re: libgcc double-free, backend won't die

From
Tom Lane
Date:
Magnus Hagander <magnus@hagander.net> writes:
> On Tue, Dec 11, 2007 at 07:50:17AM -0800, Craig James wrote:
>> Since I'm not a Postgres developer, perhaps one of the maintainers could
>> update the Postgres manual.  In chapter 32.9.6, it says,
>>
>> "To be precise, a shared library needs to be created."
>>
>> This should be amended to say,
>>
>> "To be precise, a non-threaded, shared library needs to be created."

> Just before someone goes ahead and writes it (which is probably a good idea
> in general), don't write it just like that - because it's platform
> dependent.

I can find no such text in our documentation at all, nor any reference
to OpenBabel.  I think Craig must be looking at someone else's
documentation.

            regards, tom lane

Re: libgcc double-free, backend won't die

From
"Joshua D. Drake"
Date:

On Tue, 11 Dec 2007 11:25:08 -0500
Tom Lane <tgl@sss.pgh.pa.us> wrote:

> Magnus Hagander <magnus@hagander.net> writes:
> > On Tue, Dec 11, 2007 at 07:50:17AM -0800, Craig James wrote:
> >> Since I'm not a Postgres developer, perhaps one of the maintainers
> >> could update the Postgres manual.  In chapter 32.9.6, it says,
> >> 
> >> "To be precise, a shared library needs to be created."
> >> 
> >> This should be amended to say,
> >> 
> >> "To be precise, a non-threaded, shared library needs to be
> >> created."
> 
> > Just before someone goes ahead and writes it (which is probably a
> > good idea in general), don't write it just like that - because it's
> > platform dependent.
> 
> I can find no such text in our documentation at all, nor any reference
> to OpenBabel.  I think Craig must be looking at someone else's
> documentation.

It's actually 33.9.6 and it is in:

http://www.postgresql.org/docs/8.2/static/xfunc-c.html#DFUNC

He is looking directly at our documentation :)

Sincerely,

Joshua D. Drake



-- 
The PostgreSQL Company: Since 1997, http://www.commandprompt.com/ 
Sales/Support: +1.503.667.4564   24x7/Emergency: +1.800.492.2240
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
SELECT 'Training', 'Consulting' FROM vendor WHERE name = 'CMD'



Re: libgcc double-free, backend won't die

From
Craig James
Date:
Tom Lane wrote:
> Magnus Hagander <magnus@hagander.net> writes:
>> On Tue, Dec 11, 2007 at 07:50:17AM -0800, Craig James wrote:
>>> Since I'm not a Postgres developer, perhaps one of the maintainers could
>>> update the Postgres manual.  In chapter 32.9.6, it says,
>>>
>>> "To be precise, a shared library needs to be created."
>>>
>>> This should be amended to say,
>>>
>>> "To be precise, a non-threaded, shared library needs to be created."
>
>> Just before someone goes ahead and writes it (which is probably a good idea
>> in general), don't write it just like taht - because it's platform
>> dependent.
>
> I can find no such text in our documentation at all, nor any reference
> to OpenBabel.  I think Craig must be looking at someone else's
> documentation.


http://www.postgresql.org/docs/8.1/static/xfunc-c.html#DFUNC

Craig

Re: libgcc double-free, backend won't die

From
Gregory Stark
Date:
"Magnus Hagander" <magnus@hagander.net> writes:

> On Tue, Dec 11, 2007 at 07:50:17AM -0800, Craig James wrote:
>
>> This should be amended to say,
>>
>>  "To be precise, a non-threaded, shared library needs to be created."
>
> Just before someone goes ahead and writes it (which is probably a good idea
> in general), don't write it just like that - because it's platform
> dependent. On win32, you can certainly stick a threaded library to it -
> which is good, because most (if not all) win32 libs are threaded... Now, if
> they actually *use* threads explicitly things might break (but most likely
> not from that specifically), but you can link with them without the
> problem. I'm sure there are other platforms with similar situations.

Even on Unix there's nothing theoretically wrong with loading a shared library
which uses threads. It's just that there are a whole lot of practical problems
which can crop up.

1) No Postgres function is guaranteed to be thread-safe, so you had better
   protect against concurrent calls to Postgres API functions. Also, Postgres
   functions use longjmp, which can restore the stack pointer to a value that
   may have been set earlier, possibly by another thread, which wouldn't work.

So you're pretty much restricted to calling Postgres API functions from the
main stack which means from the original thread Postgres loaded you with.
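
As a sketch of what that restriction looks like in practice (untested, and the
function and type names below are invented, not anything that ships with
Postgres): the worker thread does pure computation only, with no palloc, no
ereport, nothing that can longjmp, and every backend call happens on the
original thread.

    /* Sketch only: a loadable C function that uses a thread for pure
     * computation and keeps every Postgres call on the main backend thread.
     * Build with -pthread in addition to the usual shared-library flags. */
    #include "postgres.h"
    #include "fmgr.h"

    #include <pthread.h>

    PG_MODULE_MAGIC;

    typedef struct
    {
        double      in;
        double      out;
    } job_t;

    /* runs in the worker thread: no palloc, no elog/ereport, no longjmp */
    static void *
    worker(void *arg)
    {
        job_t      *job = (job_t *) arg;

        job->out = job->in * job->in;
        return NULL;
    }

    PG_FUNCTION_INFO_V1(threaded_square);

    Datum
    threaded_square(PG_FUNCTION_ARGS)
    {
        job_t       job;
        pthread_t   tid;

        job.in = PG_GETARG_FLOAT8(0);
        job.out = 0.0;

        if (pthread_create(&tid, NULL, worker, &job) != 0)
            ereport(ERROR, (errmsg("could not create worker thread")));

        pthread_join(tid, NULL);    /* back on the main stack before ...  */

        PG_RETURN_FLOAT8(job.out);  /* ... touching any backend API again */
    }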

Then there's

2) Some OSes have bugs (notably glibc for a specific narrow set of versions)
   and don't expect to have standard library functions called before
   pthread_init() then called again after pthread_init(). If they expect the
   system to be either "threaded" or "not threaded" then they may be surprised
   to see that state change.

That just means you have to use a non-buggy version of your OS. Unfortunately
tracking down bugs in your OS to figure out what's causing them and whether
it's a particular known bug can be kind of tricky.

--
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com
  Ask me about EnterpriseDB's On-Demand Production Tuning

Re: libgcc double-free, backend won't die

From
Tom Lane
Date:
"Joshua D. Drake" <jd@commandprompt.com> writes:
>> I can find no such text in our documentation at all, nor any reference
>> to OpenBabel.  I think Craig must be looking at someone else's
>> documentation.

> It's actually 33.9.6 and it is in:
> http://www.postgresql.org/docs/8.2/static/xfunc-c.html#DFUNC

[ shrug... ]  That documentation is not intended to address how to
configure OpenBabel.  It's talking about setting up linker commands,
and "threaded" is not a relevant concept at that level.

            regards, tom lane

Re: libgcc double-free, backend won't die

From
James Mansion
Date:
Gregory Stark wrote:
> 1) No Postgres function is guaranteed to be thread-safe so you better protect
>    against concurrent calls to Postgres API functions. Also Postgres functions
>    use longjmp which can restore the stack pointer to a value which may have
>    been set earlier, possibly by another thread which wouldn't work.
>
>
That's a whole different thing to saying that you can't use a threaded
subsystem under a Postgres process.

> 2) Some OSes have bugs (notably glibc for a specific narrow set of versions)
>    and don't expect to have standard library functions called before
>    pthread_init() then called again after pthread_init(). If they expect the
>    system to be either "threaded" or "not threaded" then they may be surprised
>    to see that state change.
>
>
Is there any particular reason not to ensure that any low-level threading
support in libc is enabled right from the get-go, as a build-time option?
Does it do anything that's not well defined in a threaded process?  Signal
handling and atfork (and posix_ exec) are typical areas, I guess.  While this
can potentially make malloc slower, Postgres already wraps malloc, so using a
caching thread-aware malloc substitute such as nedmalloc should be no
problem.

I don't see any issue with the setjmp usage - so long as only one thread
uses any internal API, which can be checked rather easily at runtime with low
cost in a debug build.
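
Something along these lines, say (invented names, not code from the Postgres
tree): remember which thread first entered the backend, and have
assert-enabled builds check it at API entry points.

    /* Sketch of a debug-build "only one thread uses the internal API" check.
     * The names are invented; nothing here exists in the Postgres sources. */
    #include <assert.h>
    #include <pthread.h>

    static pthread_t backend_main_thread;
    static int       backend_main_thread_known = 0;

    /* call once, early, from the backend's startup path */
    void
    remember_backend_main_thread(void)
    {
        backend_main_thread = pthread_self();
        backend_main_thread_known = 1;
    }

    /* drop this at the top of exported entry points in assert builds */
    #define ASSERT_BACKEND_MAIN_THREAD() \
        assert(!backend_main_thread_known || \
               pthread_equal(backend_main_thread, pthread_self()))
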
> That just means you have to use a non-buggy version of your OS. Unfortunately
> tracking down bugs in your OS to figure out what's causing them and whether
> it's a particular known bug can be kind of tricky.
>
Is that really much of an issue on the current version of any major OS
though?  It's reasonable to limit the use of a threaded library (in
particular, the runtimes for most embeddable languages, or libraries for RPC
runtimes, etc.) to 'modern' platforms that support threads effectively.  On
many such platforms these will already implicitly link libpthread anyway.

James



Re: libgcc double-free, backend won't die

From
Tom Lane
Date:
James Mansion <james@mansionfamily.plus.com> writes:
> Is there any particular reason not to ensure that any low-level
> threading support in libc is enabled right
> from the get-go, as a build-time option?

Yes.

1) It's of no value to us

2) On many platforms there is a nonzero performance penalty

            regards, tom lane

Re: libgcc double-free, backend won't die

From
Gregory Stark
Date:
"Tom Lane" <tgl@sss.pgh.pa.us> writes:

> James Mansion <james@mansionfamily.plus.com> writes:
>> Is there any particular reason not to ensure that any low-level
>> threading support in libc is enabled right
>> from the get-go, as a build-time option?
>
> Yes.
> 1) It's of no value to us
> 2) On many platforms there is a nonzero performance penalty

And the only reason to do that would be to work around one bug in one small
range of glibc versions. If you're going to use a multi-threaded library
(which isn't very common since it's hard to do safely for all those other
reasons) surely using a version of your OS without any thread related bugs is
a better idea.


--
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com
  Ask me about EnterpriseDB's Slony Replication support!

Re: libgcc double-free, backend won't die

From
Craig James
Date:
Gregory Stark wrote:
> "Tom Lane" <tgl@sss.pgh.pa.us> writes:
>
>> James Mansion <james@mansionfamily.plus.com> writes:
>>> Is there any particular reason not to ensure that any low-level
>>> threading support in libc is enabled right
>>> from the get-go, as a build-time option?
>> Yes.
>> 1) It's of no value to us

Who is "us"?  Some of us would like to use the system for advanced scientific work, and scientific libraries are
usuallywritten in C++. 

>> 2) On many platforms there is a nonzero performance penalty

I'm surprised you say this, given that you're usually the voice of reason when
it comes to rejecting hypothetical statements in favor of tested facts.  If
building Postgres using thread-safe technology is really a performance burden,
that could be easily verified.  A "nonzero performance penalty", what does
that mean, a 0.0001% slowdown?  I find it hard to believe that the performance
penalty of a thread-safe version would even be measurable.

If nobody has the time to do such a test, or other priorities take precedence,
that's understandable.  But the results aren't in yet.

> And the only reason to do that would be to work around one bug in one small
> range of glibc versions. If you're going to use a multi-threaded library
> (which isn't very common since it's hard to do safely for all those other
> reasons) surely using a version of your OS without any thread related bugs is
> a better idea.

You're jumping ahead.  This problem has not been accurately diagnosed yet.  It
could be that the pthreads issue is completely misleading everyone, and in
fact there is a genuine memory corruption going on here.  Or not.  We don't
know yet.  I have made zero progress fixing this problem.

The "one small range of glibc versions" is a giveaway.  I've seen this problem in FC3, 5, and 6 (I went through this
seriesof upgrades all in one week trying to fix this problem).  With each version, I recompiled Postgres and OpenBabel
fromscratch.  I'm going to try FC7 next since it's now the only "official" supported version, but I don't believe glibc
isthe problem. 

Andrew Dalke, a regular contributor to the OpenBabel forum, suggests another
problem: it could be a result of linking the wrong libraries together.  The
gcc/ld system has a byzantine set of rules and hacks that (if I understand
Andrew's posting) select different versions of the same library depending on
what it thinks you might need.  It's possible that the wrong version of some
system library is getting linked in.
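
One cheap way to see what a plugin actually drags into a process is a small
dlopen() harness like the sketch below (purely illustrative; link it with
-ldl, and the RTLD_NOW | RTLD_GLOBAL mode is my assumption about how the
backend loads libraries on Linux).  If "pthread_create" goes from unresolvable
to resolvable across the dlopen() call, the plugin pulled libpthread in behind
your back:

    /* Hypothetical diagnostic: does loading this shared object make the
     * process "threaded", i.e. drag in libpthread?  Assumes a glibc where
     * pthread_create lives in libpthread rather than in libc itself. */
    #define _GNU_SOURCE             /* for RTLD_DEFAULT on glibc */
    #include <dlfcn.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        if (argc < 2)
        {
            fprintf(stderr, "usage: %s /path/to/plugin.so\n", argv[0]);
            return 1;
        }

        printf("pthread_create before load: %p\n",
               dlsym(RTLD_DEFAULT, "pthread_create"));

        if (dlopen(argv[1], RTLD_NOW | RTLD_GLOBAL) == NULL)
        {
            fprintf(stderr, "dlopen failed: %s\n", dlerror());
            return 1;
        }

        printf("pthread_create after load:  %p\n",
               dlsym(RTLD_DEFAULT, "pthread_create"));
        return 0;
    }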

Craig

Re: libgcc double-free, backend won't die

From
Gregory Stark
Date:
"Craig James" <craig_james@emolecules.com> writes:

> Gregory Stark wrote:
>
>> And the only reason to do that would be to work around one bug in one small
>> range of glibc versions. If you're going to use a multi-threaded library
>> (which isn't very common since it's hard to do safely for all those other
>> reasons) surely using a version of your OS without any thread related bugs is
>> a better idea.
>
> You're jumping ahead. This problem has not been accurately diagnosed yet. It
> could be that the pthreads issue is completely misleading everyone, and in fact
> there is a genuine memory corruption going on here. Or not. We don't know yet.
> I have made zero progress fixing this problem.

Well, no, that would be you jumping ahead then... You proposed Postgres
changing the way it handles threaded libraries based on Tom's suggestion that
your problem was something like the glibc problem previously found. My comment
was based on the known glibc problem. From what you're saying it's far from
certain that the problem would be fixed by changing Postgres's behaviour in
the way you proposed.


--
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com
  Ask me about EnterpriseDB's On-Demand Production Tuning

Re: libgcc double-free, backend won't die

From
James Mansion
Date:
Tom Lane wrote:
> Yes.
>
> 1) It's of no value to us
>
> 2) On many platforms there is a nonzero performance penalty
>
>
I think you have your head in the sand, but it's your prerogative.  *You*
might not care, but anyone wanting to use thread-aware libraries (and I'm
*not* talking about threading in any Postgres code) will certainly value it
if they can do so with some stability.

There's a clear benefit to being able to use such code.  I suggested a build
option, but you reject it out of hand.  And in doing so, you also lock out
the benefits that you *could* have as well, in future.  It seems religious,
which is unfortunate.

Are you suggesting that the performance penalty, apart from the malloc
performance (which is easily dealt with), is *material*?  An extra
indirection in access to errno will hurt so much?  Non-zero I can accept,
but clinging to 'non-zero' religiously isn't smart, especially if it's a
build-time choice.

We'll clearly move to multiple cores, and the clock speed enhancements will
slow (at best).  In many cases, the number of available cores will exceed
the number of instantaneously active connections.  Don't you want to be able
to use all the horsepower?

Certainly on the sort of systems I work on in my day job (big derivative
trading systems) it's the norm that the cache hit rate on Sybase is well
over 99%, and such systems are typically CPU bound.  Parallelism matters,
and will matter more and more in future.

So, an ability to start incrementally adding parallel operation of some
actions (whether scanning or updating indices or pushing data to the peer)
is valuable, as is the ability to use threaded libraries - and the
(potential?) ability to use embedded languages and more advanced libraries
in Postgres procs is one of the advantages of the system itself.  (I'd like
to discount the use of a runtime in a separate process - the latency is a
problem for row triggers and functions.)

James


Re: libgcc double-free, backend won't die

From
Bruce Momjian
Date:
James Mansion wrote:
> I think you have your head in the sand, but it's your prerogative.
> *You* might not care, but anyone wanting to use thread-aware libraries
> (and I'm *not* talking about threading in any Postgres code) will
> certainly value it if they can do so with some stability.

I suggest you find out the cause of your problem and then we can do more
research.  Talking about us changing the Postgres behavior from the
report of one user who doesn't even have the full details isn't
productive.

--
  Bruce Momjian  <bruce@momjian.us>        http://momjian.us
  EnterpriseDB                             http://postgres.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

Multi-threading friendliness (was: libgcc double-free, backend won't die)

From
Craig James
Date:
Bruce Momjian wrote:
> James Mansion wrote:
>> I think you have your head in the sand, but it's your prerogative.
>> *You* might not care, but anyone wanting to use thread-aware libraries
>> (and I'm *not* talking about threading in any Postgres code) will
>> certainly value it if they can do so with some stability.
>
> I suggest you find out the cause of your problem and then we can do more
> research.  Talking about us changing the Postgres behavior from the
> report of one user who doesn't even have the full details isn't
> productive.

I think you're confusing James Mansion with me (Craig James).  I'm the one with the unresolved problem.

James is suggesting, completely independently of whether or not there's a bug
in my system, that a thread-friendly option for Postgres would be very useful.

Don't confuse thread-friendly with a threaded implementation of Postgres
itself.  These are two separate questions.  Thread-friendly involves
compile/link options that don't affect the Postgres source code at all.
Craig

Re: Multi-threading friendliness

From
James Mansion
Date:
Craig James wrote:
> Don't confuse thread-friendly with a threaded implementation of
> Postgres itself.  These are two separate questions.  Thread-friendly
> involves compile/link options that don't affect the Postgres source
> code at all.
Indeed.  I'm specifically not suggesting that Postgres should offer an API
that can be called from anything except the initial thread of its process -
just that library subsystems might want to use threads internally, and that
should be OK.  Or rather, it should be possible to build Postgres so that
it's OK.  Even if there's a small slowdown, the benefit of running the full
JVM or CLR might outweigh that quite easily *in some circumstances*.

I've also hinted that at some stage you might want to thread some parts of
the implementation, but I'm not suggesting that would be an early target.  It
seems to me that making it straightforward to take baby steps in that
direction in future would be a reasonable thing to do.  As would being
friendly to dynamically loaded C++ code.  If you create the framework (and
we're talking the barest of scaffolding), then others can work to show the
cost/benefit.

I fail to see why this would be a controversial engineering approach.

James