Thread: Oracle rant

Oracle rant

From
mlw
Date:
I just wanted to post this note.

I have been in Oracle hell for four days now, and in between the 5 
minutes of work and the hours of waiting, dealing with table spaces, 
extents, and all that, I just keep thinking about how much easier 
PostgreSQL is to work with.

We all may bitch and moan about bugs and stuff, but my project would 
have been easier with PostgreSQL.

Has anyone ever noticed that Oracle has all these nice little arcane 
ways to fail?



Re: Oracle rant

From
Gavin Sherry
Date:
On Wed, 15 Jan 2003, mlw wrote:

> I just wanted to post this note.
> 
> I have been in Oracle hell for four days now, and in between the 5 
> minutes of work and the hours of waiting, dealing with table spaces, 
> extents, and all that, I just keep thinking about how much easier 
> PostgreSQL is to work with.
> 
> We all may bitch and moan about bugs and stuff, but my project would 
> have been easier with PostgreSQL.
> 
> Has anyone ever noticed that Oracle has all these nice little arcane 
> ways to fail?

Yes.

I was doing some work with a company. I wanted to introduce
Postgres. They're traditionally an oracle shop. "Our Oracle DBAs don't
know Postgres, we're going to have to employ *another* DBA". No they
don't. :-)



Re: Oracle rant

From
mlw
Date:
Gavin Sherry wrote:
> On Wed, 15 Jan 2003, mlw wrote:
> 
> > I just wanted to post this note.
> > 
> > I have been in Oracle hell for four days now, and in between the 5 
> > minutes of work and the hours of waiting, dealing with table spaces, 
> > extents, and all that, I just keep thinking about how much easier 
> > PostgreSQL is to work with.
> > 
> > We all may bitch and moan about bugs and stuff, but my project would 
> > have been easier with PostgreSQL.
> > 
> > Has anyone ever noticed that Oracle has all these nice little arcane 
> > ways to fail?
> 
> Yes.
> 
> I was doing some work with a company. I wanted to introduce
> Postgres. They're traditionally an oracle shop. "Our Oracle DBAs don't
> know Postgres, we're going to have to employ *another* DBA". No they
> don't. :-)

This is the truth. We have had an Oracle box for two and a half years, and we
have had 4 PostgreSQL boxes alongside it. The Oracle system is on a 4 CPU Sun
box. The PostgreSQL systems are on 2 CPU PIII boxes.

We had "certified Oracle DBA"s set up the Oracle box. I set up the PostgreSQL
boxes. The PostgreSQL boxes have NEVER had an unscheduled interruption in
service. The Oracle system stops from time to time because of various arcane
reasons. You get the error message, look it up on alltheweb.com, and fix it.
The whole system is bogus. It DEMANDS a full time DBA. PostgreSQL does not.

Re: Oracle rant

From
Mark Kirkwood
Date:
<snippage>

>   The Oracle system stops from time to time because of various arcane 
> reasons. You get the error message, look it up on alltheweb.com, and 
> fix it. The whole system is bogus. It DEMANDS a full time DBA. 
> PostgreSQL does not.
>
I could be accused of being cynical here (gosh)... but I think that's the 
whole idea - (hook 'em with the product and leverage "consulting" or "expert 
dba"...). One could be excused for thinking that "it's all about money".

<extra rant>
Once upon a time I did the Oracle 7.3 certification thing; however, I 
subsequently felt that I really don't *need* to buy into this "DBA 
Guild" mentality that the whole business seemed to be about (i.e. arcane 
little "need to know" things that trap all but the initiated... and of 
course certification is all about *being* the initiated... oh... and... 
maybe the exam fees help perpetuate this thing too...).
</extra rant>

Thanks to you guys for providing the opportunity to share this  ;-)

Mark



Re: Oracle rant

From
Adrian 'Dagurashibanipal' von Bidder
Date:
> (i.e. arcane
> little "need to know" things that trap all but the initiated...

So, for postgres, that means:
- a good thing the autovacuum thingy is coming along
- postgres should auto-tune itself - the *cost could perhaps be
  adjusted after some statistics have been collected, and there should be
  some sensible way to determine an optimal setting for the famous
  shared_buffers (and the default should be something that gets reasonable
  performance on common cases)
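For concreteness, the knobs being discussed look like this in a 7.3-era
postgresql.conf. The parameter names are real; the values shown are the stock
defaults as I recall them (which is exactly the complaint), not tuning advice:

```
# postgresql.conf fragment (illustrative; values are the shipped defaults)
shared_buffers = 64        # in 8kB pages; famously too small for real loads
random_page_cost = 4       # one of the "*cost" planner knobs
cpu_tuple_cost = 0.01      # another planner cost estimate
```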

No, I don't expect the second one soon - I know how hard it is. And no, I'm
not disputing that PostgreSQL is much, much, much easier to
administer and set up than Oracle. I'm just saying that there are
*some* small arcane details in postgres, too (although, at least, they
don't affect stability, just performance).

cheers
-- vbi

--
pub  1024D/92082481 2002-02-22 Adrian von Bidder
     Key fingerprint = EFE3 96F4 18F5 8D65 8494  28FC 1438 5168 9208 2481

Re: Oracle rant

From
Mark Kirkwood
Date:
Adrian 'Dagurashibanipal' von Bidder wrote:

> I'm just saying that there are
>*some* small arcane details in postgres, too (although, at least, they
>don't affect stability, just performance).
>  
>
Indeed you are right... Pg has its own collection of arcane details too,
but hopefully the culture of PostgreSQL (in common with all open source 
projects) is to "expose and educate" rather than "confine to a group of 
the initiated".

Does that sound better? (i.e. not so rabid Oracle bashing)

Cheers

mark




Re: Oracle rant

From
Adrian 'Dagurashibanipal' von Bidder
Date:
On Thu, 2003-01-16 at 08:29, Mark Kirkwood wrote:
> Adrian 'Dagurashibanipal' von Bidder wrote:
>
> > I'm just saying that there are
> >*some* small arcane details in postgres, too (although, at least, they
> >don't affect stability, just performance).
> >
> >
> Indeed you are right... Pg has its own collection of arcane details too,
> but hopefully the culture of PostgreSQL (in common with all open source
> projects) is to "expose and educate" rather than "confine to a group of
> the initiated".
>
> Does that sound better? (i.e. not so rabid Oracle bashing)

Yes, sounds better. Seriously: I absolutely agree that Oracle is not
inclined to make it easier to use their product - after all, as was
said, they sell courses and certifications, while pg tries to be easy to
use. I just got the impression from the first few messages that some
people think that pg has no secret tricks you're supposed to know at
all. Experience counts. With all systems. (And knowing the secret tricks
is what experience comes down to, basically).

cheers
-- vbi

--
signature virus v1.0 - please use me in your own mail.

Re: Oracle rant

From
"Fred Zellinger"
Date:
I work in an all-Oracle shop, with server instances around the world.  At
least 20 servers are 400Gb+ and a couple are 4 Terabyte+.  I took $15k worth
of Oracle training, have set up my own instances and done Perl/CGI/Apache
work along with setting up really big data warehousing apps for factories and
engineers.

I also am a Linux nut and use Postgres whenever possible because I like the
freedom of access to the HACKERS mailing list...something only a few highly
renowned DBA snobs have with Oracle.

I have been impressed, however, with the degree to which Oracle (synonymous
with Ellison) has attempted to become "Open".  Oracle is getting into Linux
almost as heavily as IBM, mostly prompted by their common rivalry with M$ and
SQLServer.  Oracle's licensing policy of "download it if you want and we'll
trust you to do the right thing and give us money if you meet the criteria"
does build a sense of trust with the technical world.  And their
documentation is fairly open and detailed.  What their docs don't cover, a
search on Google or something else (like attending Oracle World events) will
generally illuminate.

So, as far as "Openness" goes, I would say that PostgreSQL is more open than
Oracle, but Oracle is pretty good.

The one thing I notice about PostgreSQL however, is this tendency to keep the
DBA away from considerations of hardware..."don't worry about the man behind
the curtains folks...." mentality.

With Oracle, you can screw around with files and tablespaces and extents and
segments and partition striping and local and global indexing and block sizes
and 400+ other tuning parameters to your heart's content.  And, if you
happened to put your data on separate server instances, you can use database
links to join the data together.  With Oracle's transaction logging and
rollback segments, a paranoid DBA can do all sorts of replication schemes,
backup schemes, and point-in-time recovery...to ANY POINT IN TIME, whether or
not there was a crash or simply a user who issued a really dumb SQL
statement.  Perhaps this is a tremendous waste of time and leads to a lot of
crashes with arcane error messages, but it gives the DBA control.

I am a control freak and I think a lot of other people are too.  Oracle is
tremendously complex and has a steep learning curve, but it gives me control.
With PG, a lot of effort has been made to simplify.  This removes DBA
control, but probably also contributes to the stability you guys think you
have over Oracle.  Perhaps Oracle's supposed instability is partially due to
allowing DBAs to fiddle with too much.  I know that some is sometimes due to
Oracle releasing poorly coded features too soon, but I think a lot of it is
also due to DBAs screwing with stuff too.

Of course, if the boss just wanted me to get the DB running and quit screwing
with coalescing free extents in tablespaces, then I would just run PG.

If PostgreSQL were to open up all the internals of storage and become as
complex as Oracle, there probably would be a lot of high-profile crashes and
PG would get a bad reputation.  However, I think that having a mode of
operation (controlled at compile time) where all the dirty details of storage
were made accessible in the data dictionary would be something good to pacify
the control freaks.

Food for thought.  If you need someone to play devil's advocate in the Oracle
vs. PG debates, I'll do it.  I think that a little critique of PG


Fred


On 1/16/2003, "Adrian 'Dagurashibanipal' von Bidder" <avbidder@fortytwo.ch>
wrote:

>On Thu, 2003-01-16 at 08:29, Mark Kirkwood wrote:
>> Adrian 'Dagurashibanipal' von Bidder wrote:
>>
>> > I'm just saying that there are
>> >*some* small arcane details in postgres, too (although, at least, they
>> >don't affect stability, just performance).
>> >
>> >
>> Indeed you are right... Pg has its own collection of arcane details too,
>> but hopefully the culture of Postgesql (in common with all open source
>> projects) is to "expose and educate" rather than "confine to a  group of
>> the initiated".
>>
>> Does that sound better ? ( i.e no so rabid Oracle bashing)
>
>Yes, sounds better. Seriously: I absolutely agree that Oracle is not
>inclined to make it easier to use their product - after all, as was
>said, they sell courses and certifications, while pg tries to be easy to
>use. I just got the impression from the first few messages that some
>people think that pg has no secret tricks you're supposed to know at
>all. Experience counts. With all systems. (And knowing the secret tricks
>is what experience comes down to, basically).
>
>cheers
>-- vbi
>
>--
>signature virus v1.0 - please use me in your own mail.


Re: Oracle rant

From
Jeff
Date:
On Wed, 15 Jan 2003, mlw wrote:

> I just wanted to post this note.
>
> I have been in Oracle hell for four days now, and in between the 5
> minutes of work and the hours of waiting, dealing with table spaces,

I've been in Informix hell for a month or so.
At first, we were getting the message "No more extents" - so I looked up the
documentation and it says the only way to fix it is to either delete data
or unload and reload the data.  Ok, fine. I talked with a bunch of people
and they agreed that is how to fix the problem.  Unfortunately, we cannot
have much downtime and a reload would take days and days.  So I went about
writing a little application to do it (since I could not use a lovely sql
statement to do it because of a lack of logical log space - turning the
logs off is also not something I want to do.. especially on a multi-day
operation).

So the thing runs.  A week later the data is loaded, indexes built. We're
about to switch the tables and I'm replaying my "audit trail" to make sure
the new table is up to date when I get "No more extents".  Luckily I had a
bottle of rum handy.  After some searching around on newsgroups and
message boards it turns out that the problem is not that there are not
enough extents, but that there are 16.7M pages of data in the table. The only
fix for THAT is to fragment the table (kind of like an internal union
view type thing - which I couldn't do because Informix doesn't let you do
a union with a text blob).  So after lots of testing, and finding out that the
version of Informix we run has a bug where max() doesn't work if your
index is fragmented, we've finally started the copy again.

A long time ago I managed to win a battle to get a PG system (7.0.1) for a
thing I developed.  That machine has NEVER <knock knock> had a problem.
And since that time we've moved lots of stuff onto it... currently it's only
about 4GB in size. Someday I'll upgrade to a newer version, but if it
ain't broke, don't fix it.

So with all that, you gotta appreciate both sides - the fact pg "just
works" and the tunability of bigger db's (Oh yeah - and we've actually had
Informix on the horn about the problem - their solution was "upgrade to
9.4 - it'll be out in March").

Hopefully this last thing will complete and I'll be done with it.

------------------------------------------------------------------------------
Jeff Trout <jeff@jefftrout.com>                  http://www.jefftrout.com/
Ronald McDonald, with the help of cheese soup,
controls America from a secret volkswagon hidden in the past
-------------------------------------------------------------------------------




Options for growth

From
"D'Arcy J.M. Cain"
Date:
Due to the fact that we are growing out of our current system (PostgreSQL on 
PCs) we are looking for ways to expand, and one of the suggestions has been to 
toss PostgreSQL in favour of Oracle with Remote Access Cluster (RAC) 
software.  The theory is that you can just plug machines into the cluster if 
the database appears to be straining and they automagically take over some of 
the load.  Not knowing much about it, I can only argue about price and source 
code availability.  The first has some value but the second is harder to 
argue without knowing about RAC.  Is it really as simple as it sounds, or 
would we just be giving up the other two for a new set of problems?

My idea is to create a new middleware layer that allows me to split things up 
based on various criteria without changing my application.  RAC sounds like 
it does that at the database/SQL level.  Does it?

We are also looking at hardware solutions, multi-CPU PCs with tons (24GB) of 
memory.  I know that memory will improve access if it prevents swapping but 
how well does PostgreSQL utilize multiple CPUs?

And finally, if you had your dream machine to run on, what would it be?  We 
are also looking outside of PC hardware but we are concerned about not having 
access to that nice, cheap, generic hardware for when we need to grow again 
or for redundant backup.

Thanks for any tips and suggestions.

-- 
D'Arcy J.M. Cain <darcy@{druid|vex}.net>   |  Democracy is three wolves
http://www.druid.net/darcy/                |  and a sheep voting on
+1 416 425 1212     (DoD#0082)    (eNTP)   |  what's for dinner.


Re: Oracle rant

From
"Ross J. Reedstrom"
Date:
On Thu, Jan 16, 2003 at 11:17:42AM -0500, Jeff wrote:
> On Wed, 15 Jan 2003, mlw wrote:
> 
> So with all that, you gotta appreciate both sides - the fact pg "just
> works" and the tunability of bigger db's (Oh yeah - and we've actually had
> informix on the horn about the problem - their solution was "upgrade to
> 9.4 - it'll be out in march").

Here lies the real secret to why Open Source of all types keeps the
techies (like us) maximally happy: I know I've seen Tom Lane (and
others) often suggest an upgrade as the real fix for a problem, but  the
suggestion to upgrade to a not yet released version invariably includes
the option of applying the patch yourself. Not something Oracle can offer.

Ross


Re: Options for growth

From
Adrian 'Dagurashibanipal' von Bidder
Date:
On Thu, 2003-01-16 at 17:42, D'Arcy J.M. Cain wrote:

> We are also looking at hardware solutions, multi-CPU PCs with tons (24GB) of
> memory.  I know that memory will improve access if it prevents swapping but
> how well does PostgreSQL utilize multiple CPUs?

At most one CPU is used for any single postgres backend (that means for
any single database connection). So, if your load problem is single
queries being too slow, there's nothing you can do by adding more CPUs.
If your problem is many connections maxing out the db, PostgreSQL can
take full advantage of multiple CPUs.

Of course, most db apps still are not cpu bound, so you'd have to do
some careful benchmarking first or you'll be spending too much money.
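The process-per-connection point above can be illustrated with a toy model.
This is ordinary Python multiprocessing standing in for backends as an assumed
analogy, not PostgreSQL code: one "connection" is one OS process on one core,
while several concurrent "connections" can occupy several cores.

```python
# Toy model of process-per-connection scaling: each "backend" is a
# separate OS process, so one CPU-bound query uses one core, but four
# concurrent connections can be scheduled across four cores.
from multiprocessing import Pool

def backend_query(n):
    # stand-in for one backend doing CPU-bound work on its own core
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # one connection: a single process, so at most one CPU is busy
    single = backend_query(10_000)

    # four connections: four processes, schedulable on up to four CPUs
    with Pool(processes=4) as pool:
        results = pool.map(backend_query, [10_000] * 4)

    # same answers either way; the wall-clock win comes from concurrency
    assert results == [single] * 4
```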

cheers
-- vbi

--
get my gpg key here: http://fortytwo.ch/gpg/92082481

Re: Options for growth

From
Neil Conway
Date:
On Thu, 2003-01-16 at 11:42, D'Arcy J.M. Cain wrote:
> Is [Oracle RAC] really as simple as it sounds or would we just be
> giving up the other two for a new set of problems.

That's a question you should be asking to an authority on Oracle RAC
(which pgsql-hackers is not).

> My idea is to create a new middleware layer that allows me to split things up 
> based on various criteria without changing my application.

Personally, I would not be very eager to use home-brew replication for a
heavy-load, production-critical application (which is what your app
sounds like). But YMMV...

> We are also looking at hardware solutions, multi-CPU PCs with tons (24GB) of 
> memory.  I know that memory will improve access if it prevents swapping but 
> how well does PostgreSQL utilize multiple CPUs?

The estimates I've heard from a couple parties are that PostgreSQL tends
to scale well up to 4 CPUs. I've been meaning to take a look at
improving that, but I haven't had a chance yet...

Another option is to put some money toward the current development
effort to get truly scalable replication for PostgreSQL. In the end, I'd
think the cost of subsidizing some of that development would be a
fraction of the license fees you'll end up paying Oracle over the
years...

Cheers,

Neil
-- 
Neil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC





Re: Oracle rant

From
Petru Paler
Date:
On Thu, Jan 16, 2003 at 10:50:49AM -0600, Ross J. Reedstrom wrote:
> suggestion to upgrade to a not yet released version invariably includes
> the option of applying the patch yourself. Not something Oracle can offer.

Not for a sane price, I guess. I believe the high end support contracts
include getting custom patched versions in a short period of time.


Petru


Re: Oracle rant

From
Peter Eisentraut
Date:
Adrian 'Dagurashibanipal' von Bidder writes:

>  - postgres should auto-tune itself - the *cost could perhaps be
> adjusted after some statistics have been collected, and there should be
> some sensible way to determine an optimal setting for the famous
> shared_buffers (and the default should be something that gets reasonable
> performance on common cases)

Over the last couple of years PostgreSQL has transformed from hardly
configurable to fully configurable.  Currently we're in a mode where we
add new configuration parameters whenever there's a degree of uncertainty.
Sooner rather than later we need to shift to the next phase, which is as
you say autoconfiguration, because ease of administration is one of the
great advantages of PostgreSQL.

-- 
Peter Eisentraut   peter_e@gmx.net



Re: Oracle rant

From
mlw
Date:
Peter Eisentraut wrote:
> Adrian 'Dagurashibanipal' von Bidder writes:
> 
> >  - postgres should auto-tune itself - the *cost could perhaps be
> > adjusted after some statistics have been collected, and there should be
> > some sensible way to determine an optimal setting for the famous
> > shared_buffers (and the default should be something that gets reasonable
> > performance on common cases)
> 
> Over the last couple of years PostgreSQL has transformed from hardly
> configurable to fully configurable.  Currently we're in a mode where we
> add new configuration parameters whenever there's a degree of uncertainty.
> Sooner rather than later we need to shift to the next phase, which is as
> you say autoconfiguration, because ease of administration is one of the
> great advantages of PostgreSQL.

I think the idea of adding a parameter whenever you are not sure is a great
idea. That does preclude, however, a process within PostgreSQL from analyzing
the metrics and updating the parameter file or table.

Re: Options for growth

From
"Christopher Kings-Lynne"
Date:
> Due to the fact that we are growing out of our current system (PostgreSQL on
> PCs) we are looking for ways to expand and one of the suggestions has been to
> toss PostgreSQL in favour of Oracle with Remote Access Cluster (RAC)
> software.

You mean Real Application Clusters?

Chris




Re: Oracle rant

From
Mark Kirkwood
Date:
Fred Zellinger wrote:

>I also am a Linux nut and use Postgres whenever possible because I like the
>freedom of access to the HACKERS mailing list...something only a few highly
>renowned DBA snobs have with Oracle.
>  
>
Indeed, I think this is a significant component of the appeal of open 
source

>I have been impressed however, with the degree to which Oracle(synonymous
>with Ellison) has attempted to become "Open".  Oracle is getting into Linux
>almost as heavily as IBM, mostly prompted by their common rivalry with M$ and
>SQLServer.  
>
I wonder if the "conversion" to openness may more a mechanism to 
distinguish themselves from Microsoft, than a heart-felt belief in the 
principles themselves.... but its nice anyway !

regards

Mark




Re: Options for growth

From
"D'Arcy J.M. Cain"
Date:
On Thursday 16 January 2003 12:23, Neil Conway wrote:
> On Thu, 2003-01-16 at 11:42, D'Arcy J.M. Cain wrote:
> > Is [Oracle RAC] really as simple as it sounds or would we just be
> > giving up the other two for a new set of problems.
>
> That's a question you should be asking to an authority on Oracle RAC
> (which pgsql-hackers is not).

True but I already have their perspective.  Now I am looking for reasons to 
stay with PostgreSQL.

> > My idea is to create a new middleware layer that allows me to split
> > things up based on various criteria without changing my application.
>
> Personally, I would not be very eager to use home-brew replication for a
> heavy-load, production-critical application (which is what your app
> sounds like). But YMMV...

Not replication per se although I suppose that could be built in.  What I am 
talking about is an application that knows our business logic and brokers 
requests for data from the database(s) in an OO way.  The idea is to split 
data up in the middleware whenever necessary.

> > We are also looking at hardware solutions, multi-CPU PCs with tons (24GB)
> > of memory.  I know that memory will improve access if it prevents
> > swapping but how well does PostgreSQL utilize multiple CPUs?
>
> The estimates I've heard from a couple parties are that PostgreSQL tends
> to scale well up to 4 CPUs. I've been meaning to take a look at
> improving that, but I haven't had a chance yet...

Cool.  I am looking at Tyan boards that have 4 CPUs and 24GB memory.

> Another option is to put some money toward the current development
> effort to get truly scalable replication for PostgreSQL. In the end, I'd
> think the cost of subsidizing some of that development would be a
> fraction of the license fees you'll end up paying Oracle over the
> years...

This is definitely an option.  I can probably put both people and money into 
such an effort.  I would be happy to hear from people who can work on that.

-- 
D'Arcy J.M. Cain <darcy@{druid|vex}.net>   |  Democracy is three wolves
http://www.druid.net/darcy/                |  and a sheep voting on
+1 416 425 1212     (DoD#0082)    (eNTP)   |  what's for dinner.


Re: Options for growth

From
"D'Arcy J.M. Cain"
Date:
On Thursday 16 January 2003 20:54, Christopher Kings-Lynne wrote:
> > toss PostgreSQL in favour of Oracle with Remote Access Cluster (RAC)
> > software.
>
> You mean Real Application Clusters?

Oops, yes.

-- 
D'Arcy J.M. Cain <darcy@{druid|vex}.net>   |  Democracy is three wolves
http://www.druid.net/darcy/                |  and a sheep voting on
+1 416 425 1212     (DoD#0082)    (eNTP)   |  what's for dinner.


Re: Options for growth

From
"D'Arcy J.M. Cain"
Date:
On Thursday 16 January 2003 11:59, Adrian 'Dagurashibanipal' von Bidder wrote:
> On Thu, 2003-01-16 at 17:42, D'Arcy J.M. Cain wrote:
> > We are also looking at hardware solutions, multi-CPU PCs with tons (24GB)
> > of memory.  I know that memory will improve access if it prevents
> > swapping but how well does PostgreSQL utilize multiple CPUs?
>
> At most one CPU is used for any single postgres backend (that means for
> any single database connection). So, if your load problem is single
> queries being too slow, thee's nothing you can do with adding more CPUs.
> If your problem is many connections maxing out the db, PostgreSQL can
> take full advantage of multiple CPUs.

I most definitely have multiple queries running at once.  My main issue is 
whether PostgreSQL scales up properly or gets bogged down with too 
many locked queries.

> Of course, most db apps still are not cpu bound, so you'd have to do
> some careful benchmarking first or you'll be spending too much money.

Natch.

-- 
D'Arcy J.M. Cain <darcy@{druid|vex}.net>   |  Democracy is three wolves
http://www.druid.net/darcy/                |  and a sheep voting on
+1 416 425 1212     (DoD#0082)    (eNTP)   |  what's for dinner.


createlang failed!

From
"John Liu"
Date:
When I run createlang plpgsql template1
on Linux 7.3 (pg version 7.3.1), it fails -

createlang plpgsql template1
ERROR:  stat failed on file '$libdir/plpgsql': No such file or directory
createlang: language installation failed

Can someone point out where this could be going wrong?

thanks.

johnl


Re: createlang failed!

From
Tom Lane
Date:
"John Liu" <johnl@synthesys.com> writes:
> When I run createlang plpgsql template1
> on linux 7.3 (pg version 7.3.1), it failed -

> createlang plpgsql template1
> ERROR:  stat failed on file '$libdir/plpgsql': No such file or directory
> createlang: language installation failed

What does 'pg_config --pkglibdir' say?  Is there a plpgsql.so in there?
Is there any additional information about the failure in the
postmaster's log?  (If you are routing the postmaster's stderr to
/dev/null, now is a good time to stop doing that --- what I'm wondering
about is dynamic-linker errors, which will only appear on stderr.)
        regards, tom lane
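Pulling Tom's checklist together as commands - this is a sketch only, meant to
be run on the database server itself; the paths it reports are
installation-specific:

```shell
# Sketch of the diagnostic steps suggested above (output varies per install)
if command -v pg_config >/dev/null 2>&1; then
    libdir=$(pg_config --pkglibdir)
    echo "pkglibdir is: $libdir"
    # createlang's '$libdir/plpgsql' should resolve to plpgsql.so in here
    ls -l "$libdir"/plpgsql* 2>/dev/null || echo "no plpgsql.so in $libdir"
else
    echo "pg_config not found in PATH"
fi
```

If plpgsql.so is present but the error persists, check the postmaster's stderr
for dynamic-linker messages, as Tom suggests.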


Re: Options for growth

From
Daniel Kalchev
Date:
>>>"D'Arcy J.M. Cain" said:> On Thursday 16 January 2003 11:59, Adrian 'Dagurashibanipal' von Bidder wrot    e:> > On
Thu,2003-01-16 at 17:42, D'Arcy J.M. Cain wrote:> > > We are also looking at hardware solutions, multi-CPU PCs with
tons(24GB    )> > > of memory.  I know that memory will improve access if it prevents> > > swapping but how well does
PostgreSQLutilize multiple CPUs?> >> > At most one CPU is used for any single postgres backend (that means for> > any
singledatabase connection). So, if your load problem is single> > queries being too slow, thee's nothing you can do
withadding more CPUs.> > If your problem is many connections maxing out the db, PostgreSQL can> > take full advantage
ofmultiple CPUs.> > I most definitely have multiple queries running at once.  My main issue is > whether PostgreSQL
scalesup properly or does it get bogged down with too > many locked queries.
 

That would depend on the OS. Not many PC-based Unixes support over 4 GB of 
memory; some don't even go that far.

If memory is an issue, have you considered going to 64bit CPU?

Memory is indeed an issue for a complex database setup, especially if you want 
to give the backends enough shared and sort memory.

As already said, PostgreSQL will utilize multiple CPUs - as effectively as 
your OS can do this of course. PostgreSQL is not an OS by itself and does not 
really control these resources.

I have also found it very helpful to split the database from the application 
servers (I wish I did it as often as I recommend it :) - thus you can optimize 
the part that needs the most resources. In many cases the requirements are 
quite different. With today's gigabit LANs, bandwidth between machines 
shouldn't be an issue.

By the way, I too wonder which supported OS platform would support over 4GB of 
memory on a PC..

Daniel



Re: Options for growth

From
Adrian 'Dagurashibanipal' von Bidder
Date:
[no cc:s please]

On Mon, 2003-01-20 at 10:31, Daniel Kalchev wrote:
> >>>"D'Arcy J.M. Cain" said:
>  > On Thursday 16 January 2003 11:59, Adrian 'Dagurashibanipal' von Bidder wrot
>      e:
>  > > On Thu, 2003-01-16 at 17:42, D'Arcy J.M. Cain wrote:
>  > > > We are also looking at hardware solutions, multi-CPU PCs with tons (24GB
>      )
>  > > > of memory.  I know that memory will improve access if it prevents
>  > > > swapping but how well does PostgreSQL utilize multiple CPUs?
>  > >
>  > > At most one CPU is used for any single postgres backend (that means for
>  > > any single database connection). So, if your load problem is single
>  > > queries being too slow, thee's nothing you can do with adding more CPUs.
>  > > If your problem is many connections maxing out the db, PostgreSQL can
>  > > take full advantage of multiple CPUs.
>  >
>  > I most definitely have multiple queries running at once.  My main issue is
>  > whether PostgreSQL scales up properly or does it get bogged down with too
>  > many locked queries.
>
> That would depend on the OS. Not many 'pc-based unix' support over 4 GB of
> memory, some don't even go that far.

> By the way, I too wonder which supported OS platform would support over 4GB of
> memory on a PC..

Linux? I don't think there's any problem handling more than 4G memory in
the system. On 32bit architectures, there's of course the 3G (I think)
per process limit, but as postgres uses multiprocess and not
multithreading, this issue doesn't hit so soon. Of course, if the per
process memory is the problem, you'd have to go to 64bit.

cheers
-- vbi

--
featured link: http://fortytwo.ch/gpg/intro

Re: Options for growth

From
Sean Chittenden
Date:
> > That would depend on the OS. Not many 'pc-based unix' support over
> > 4 GB of memory, some don't even go that far.
>
> > By the way, I too wonder which supported OS platform would support
> > over 4GB of memory on a PC..
>
> Linux? I don't think there's any problem handling more than 4G
> memory in the system. On 32bit architectures, there's of course the
> 3G (I think) per process limit, but as postgres uses multiprocess
> and not multithreading, this issue doesn't hit so soon. Of course,
> if the per process memory is the problem, you'd have to go to 64bit.

Heh, don't kid yourself.  x86 can only handle 4GB of memory
addressing.  The hack that Linux uses is to swap out 2GB sections of
RAM to a 4GB+ memory range, then copy the memory range it needs down
into usable memory space.  Can we say large page tables?  :)

You need an actual 64bit CPU to access more than 4GB of RAM without
paying for it through the nose.  -sc
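The 4 GB figure isn't arbitrary - it falls straight out of pointer width, as a
trivial check shows (the 3 GiB/1 GiB user/kernel split is the common 32-bit
Linux layout mentioned earlier in the thread):

```python
# 32-bit pointers give 2**32 distinct byte addresses = 4 GiB total.
addressable = 2 ** 32
print(addressable // 1024 ** 3)   # 4 (GiB)

# Common 32-bit Linux layout: 3 GiB of user space per process, with the
# remainder reserved for the kernel (the "3G per process limit" above).
user_space = 3 * 1024 ** 3
print((addressable - user_space) // 1024 ** 3)   # 1 (GiB) for the kernel
```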

--
Sean Chittenden

Re: Oracle rant

From
Curt Sampson
Date:
On Thu, 16 Jan 2003, Fred Zellinger wrote:

> With Oracle, you can screw around with files and tablespaces and
> extents and segments and partition striping and local and global
> indexing and block sizes and 400+ other tuning parameters to your
> heart's content. ... I am a control freak and I think a lot of
> other people are too. Oracle is tremendously complex and has a
> steep learning curve, but it gives me control. With PG, a lot
> of effort has been made to simplify. ... If PostgreSQL were to
> open up all the internals of storage and become as complex as
> Oracle, there probably would be a lot of high profile crashes and PG
> would get a bad reputation. However, I think that having a mode of
> operation(controlled at compile time) where all the dirty details of
> storage was made accessible in the data dictionary, would be something
> good to pacify the control freaks.

The reason you don't have all of these storage parameters available to
you, and they can't be "opened up" is that they simply aren't there.
Oracle uses raw devices and does all of its own management of disk space
at the block level. Postgres relies on the filesystem and the operating
system to deal with block allocation and a lot of the I/O scheduling.

Adding an efficient layer to replace this is a major project that would
probably provide few gains, since operating systems have gotten a lot
smarter about block allocation and I/O scheduling over the years. There
has been some discussion about this before, if you look back in the
archives.

There are various other places in postgres that the coders could work
on that are likely to provide more performance gain for less effort.
The optimizer comes to mind. At the low level, dropping shared memory
and moving to mmap might (but it's not certain) provide some noticeable
improvement for not too much implementation effort.

cjs
-- 
Curt Sampson  <cjs@cynic.net>  +81 90 7737 2974  http://www.netbsd.org
    Don't you know, in this new Dark Age, we're all light.  --XTC


Re: Options for growth

From
Curt Sampson
Date:
On Wed, 22 Jan 2003, Sean Chittenden wrote:

> > > By the way, I too wonder which supported OS platform would support
> > > over 4GB of memory on a PC..
> >
> > Linux? I don't think there's any problem handling more than 4G
> > memory in the system. On 32bit architectures, there's of course the
> > 3G (I think) per process limit, but as postgres uses multiprocess
> > and not multithreading, this issue doesn't hit so soon. Of course,
> > if the per process memory is the problem, you'd have to go to 64bit.
>
> Heh, don't kid yourself.  x86 can only handle 4GB of memory
> addressing.  The hack that Linux uses is to swap out 2GB sections of
> RAM to a 4GB+ memory range, then copy the memory range it needs down
> into usable memory space.  Can we say large page tables?  :)
>
> You need an actual 64bit CPU to access more than 4GB of RAM without
> paying for it through the nose.  -sc

No, you do not. If you need to access more than two to three GB
(depending on the OS) of RAM on a 32-bit machine *within a single
process* (as mentioned above), you have a problem. But this problem does
not necessarily involve copying; you could use, e.g., mmap to remap
chunks of your address space.
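[A minimal sketch of the remapping idea above, in Python's mmap wrapper around the POSIX call. The file and window size here are stand-ins: a real 32-bit scenario would map 1 GB-style chunks of a multi-gigabyte table, but the mechanism is the same, and no data is copied — the kernel just rewrites the page tables for this process.]

```python
import mmap
import os
import tempfile

# Smallest legal step for a mapping offset on this platform.
CHUNK = mmap.ALLOCATIONGRANULARITY

# A stand-in for a large table file (two chunks: all "A", then all "B").
fd, path = tempfile.mkstemp()
try:
    os.write(fd, b"A" * CHUNK + b"B" * CHUNK)

    # Map the first chunk of the file into our address space.
    m = mmap.mmap(fd, CHUNK, offset=0)
    assert m[0:1] == b"A"
    m.close()

    # Reuse the same address-space budget for a later chunk: nothing is
    # copied; only this process's page tables change.
    m = mmap.mmap(fd, CHUNK, offset=CHUNK)
    assert m[0:1] == b"B"
    m.close()
finally:
    os.close(fd)
    os.unlink(path)

print("remapped ok")
```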

If you have multiple processes, and your OS is sensibly written, no
memory copying is necessary on the process side. All you do is change
the page tables, and the appropriate physical memory, no matter where in
the physical address space it resides, will be mapped into the 32-bit
virtual memory address space.

That's not to say that there might not be other issues with I/O on, say,
32-bit PCI buses. IIRC, typically PCI bus controllers use physical,
not virtual addresses on the bus for DMA, so you're going to have to
use bounce buffers if you wish a 32-bit PCI card to do I/O outside the
bottom 4 GB of memory. But on the other hand, if you're spending the
money on a motherboard that can take more than 4 GB of RAM, you're
almost certainly getting a few 64-bit PCI slots, and probably you'd also
be spending the money to buy 64-bit PCI disk controllers.

This is not to say you shouldn't go for a 64-bit system, especially
given that the AMD ones are probably going to get awfully cheap fairly
soon. But postgres itself is today not equipped to take any more
advantage of one than it is of a 32-bit system with a greater than
32-bit physical address space. (And there's been doubt about whether
the techniques that would take advantage of this would provide all that
much of a performance improvement, anyway.) Still, it seems to me that
it would be pretty cool, when you're doing I/O on a table, just to say,
with one system call, "mmap this entire file containing the table into
my address space," and not have to worry about running out of address
space when you do this on multiple large tables. (And yes, I know this
would actually be, "map this 1 GB chunk of this large table" in the
current postgres implementation.)

cjs
-- 
Curt Sampson  <cjs@cynic.net>  +81 90 7737 2974  http://www.netbsd.org
    Don't you know, in this new Dark Age, we're all light.  --XTC


Re: Options for growth

From
Curt Sampson
Date:
On Thu, 16 Jan 2003, D'Arcy J.M. Cain wrote:

> Due to the fact that we are growing out of our current system
> (PostgreSQL on PCs) we are looking for ways to expand and one of the
> suggestions has been to toss PostgreSQL in favour of Oracle with
> Remote Access Cluster (RAC) software. The theory is that you can just
> plug machines into the cluster if the database appears to be straining
> and they automagically take over some of the load.
> ...
> My idea is to create a new middleware layer that allows me to split
> things up based on various criteria without changing my application.

It's a basic principle of clustering that doing it in an application-
aware way will always be more efficient than trying to hide it from the
application.

If you've not read it already, I strongly suggest reading _In Search of
Clusters_ by Gregory F. Pfister.
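[A hypothetical sketch of what an application-aware layer like the one D'Arcy describes might look like. The backend list and the choice of customer id as the shard key are invented for illustration; a real deployment would use its own connection strings and its own natural partitioning key.]

```python
# Application-aware routing: the application knows its own shard key
# and picks the backend itself, rather than relying on a cluster layer
# (like RAC) to hide the distribution from it.

BACKENDS = [  # hypothetical connection strings
    "host=db0 dbname=app",
    "host=db1 dbname=app",
    "host=db2 dbname=app",
]

def backend_for(customer_id: int) -> str:
    """Pick a backend deterministically from the shard key."""
    return BACKENDS[customer_id % len(BACKENDS)]

# All queries for one customer land on one box, so no cross-node
# locking or cache coherency traffic is needed -- the efficiency win
# of doing clustering in an application-aware way.
assert backend_for(7) == backend_for(7)  # routing is stable
```

The middleware layer then becomes little more than this lookup plus a pool of connections per backend; changing the split criteria means changing one function, not the application.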

> And finally, if you had your dream machine to run on, what would it
> be? We are also looking outside of PC hardware but we are concerned
> about not having access to that nice, cheap, generic hardware for when
> we need to grow again or for redundant backup.

If you can manage to stick with PC hardware, you are going to save a
*lot* of money. If you're considering buying a reasonably well loaded
Sun E6000 or similar, it's well worth spending twenty or thirty thousand
dollars on a big PC system and spending some time to see if that will do
the trick before you shell out a couple hundred thousand for the Sun.

As for how well postgres uses multiple CPUs: so long as you've got lots
of connections with the load distributed among them, it's dependent on
the OS, not postgres. If the OS can handle the scheduling (which, last I
checked, Linux couldn't, at least not without patches), eight or sixteen
CPUs will be fine.

cjs
-- 
Curt Sampson  <cjs@cynic.net>  +81 90 7737 2974  http://www.netbsd.org
    Don't you know, in this new Dark Age, we're all light.  --XTC


Re: Options for growth

From
Hannu Krosing
Date:
Curt Sampson kirjutas N, 23.01.2003 kell 17:42:
> If the OS can handle the scheduling (which, last I checked, Linux couldn't,

When did you do your checking ? 
(just curious, not to start a flame war ;)

>  at least not without patches), eight or sixteen
> CPUs will be fine.
> 
> cjs
-- 
Hannu Krosing <hannu@tm.ee>


Re: Options for growth

From
Curt Sampson
Date:
On Fri, 23 Jan 2003, Hannu Krosing wrote:

> Curt Sampson kirjutas N, 23.01.2003 kell 17:42:
> > If the OS can handle the scheduling (which, last I checked, Linux couldn't,
>
> When did you do your checking ?
> (just curious, not to start a flame war ;)

This was perhaps a year or so ago. IBM had some patches to fix a lot of
the scheduler problems. I wouldn't be surprised if things are in a much
better state now.

Anyway, there are lots of work-arounds. Find the appropriate patches if
the kernel still doesn't have them, run Solaris, whatever....

cjs
-- 
Curt Sampson  <cjs@cynic.net>  +81 90 7737 2974  http://www.netbsd.org
    Don't you know, in this new Dark Age, we're all light.  --XTC


Re: Options for growth

From
"scott.marlowe"
Date:
On 23 Jan 2003, Hannu Krosing wrote:

> Curt Sampson kirjutas N, 23.01.2003 kell 17:42:
> > If the OS can handle the scheduling (which, last I checked, Linux couldn't,
> 
> When did you do your checking ? 
> (just curious, not to start a flame war ;)
> 
> >  at least not without patches), eight or sixteen
> > CPUs will be fine.

Yeah, take a look here:

http://www.sgi.com/servers/altix/

64 CPUs seems scalable enough for me.  :-)  When can we expect BSD to run 
on this system and use all 64 CPUs efficiently?



Re: createlang failed!

From
"John Liu"
Date:
Thanks, fixed. The problem was
caused like this:
there's a PostgreSQL that came with the
Red Hat server, but I tried to install one
used only by a specific user, so I set
LD_LIBRARY_PATH in that user's profile.
pg_config --pkglibdir points to the correct
lib dir, but when I ran createlang plpgsql template1
it kept failing.

I've to run -
createlang --pglib=Dir plpgsql template1

then:
createlang -l template1
Procedural languages
  Name   | Trusted?
---------+----------
 plpgsql | t
(1 row)

But here's another observation -
I still can't create plpgsql in mydb
(created with createdb mydb) even though template1 supports plpgsql.
I have to run
createlang --pglib=Dir --dbname=DB plpgsql.

I guess I made the mess myself.

johnl


> -----Original Message-----
> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]
> Sent: Friday, January 17, 2003 9:15 AM
> To: John Liu
> Cc: pgsql-hackers@postgresql.org
> Subject: Re: [HACKERS] createlang failed!
>
>
> "John Liu" <johnl@synthesys.com> writes:
> > When I run createlang plpgsql template1
> > on linux 7.3 (pg version 7.3.1), it failed -
>
> > createlang plpgsql template1
> > ERROR:  stat failed on file '$libdir/plpgsql': No such file or directory
> > createlang: language installation failed
>
> What does 'pg_config --pkglibdir' say?  Is there a plpgsql.so in there?
> Is there any additional information about the failure in the
> postmaster's log?  (If you are routing the postmaster's stderr to
> /dev/null, now is a good time to stop doing that --- what I'm wondering
> about is dynamic-linker errors, which will only appear on stderr.)
>
>             regards, tom lane



Re: Options for growth

From
Andrew Sullivan
Date:
On Thu, Jan 16, 2003 at 12:23:52PM -0500, Neil Conway wrote:
> 
> The estimates I've heard from a couple parties are that PostgreSQL tends
> to scale well up to 4 CPUs. I've been meaning to take a look at
> improving that, but I haven't had a chance yet...

I can definitely tell you that Postgres scales _fine_ beyond 4
processors.  Indeed, we have found under some loads that 4 processors
is not enough; but when we put it into an 8- or more-way box, it is
much faster.

That's on Solaris, though, which is generally very good at handling
greater-than-4 CPUs.  That's why Solaris is a good platform for us,
even though its fork() times rot.

> think the cost of subsidizing some of that development would be a
> fraction of the license fees you'll end up paying Oracle over the
> years...

And it's worth pointing out what those ORAC licenses really cost: it
might be as little as the savings of a single year.

By the way ORAC may not be _quite_ as bulletproof as it seems.  It
shares file areas, and there are rumours of locking troubles that
people trip over.  Nothing they'll share with you, of course: the
license forbids as much.  But if you ask someone over the top of a
glass, he or she might tell you about it.

A

-- 
----
Andrew Sullivan                         204-4141 Yonge Street
Liberty RMS                             Toronto, Ontario Canada
<andrew@libertyrms.info>                M2P 2A8
                                        +1 416 646 3304 x110



Re: Options for growth

From
GB Clark
Date:
On Thu, 23 Jan 2003 11:19:36 -0700 (MST)
"scott.marlowe" <scott.marlowe@ihs.com> wrote:

> On 23 Jan 2003, Hannu Krosing wrote:
> 
> > Curt Sampson kirjutas N, 23.01.2003 kell 17:42:
> > > If the OS can handle the scheduling (which, last I checked, Linux couldn't,
> > 
> > When did you do your checking ? 
> > (just curious, not to start a flame war ;)
> > 
> > >  at least not without patches), eight or sixteen
> > > CPUs will be fine.
> 
> Yeah, take a look here:
> 
> http://www.sgi.com/servers/altix/
> 
> 64 CPUs seems scalable enough for me.  :-)  When can we expect BSD to run 
> on this system and use all 64 CPUs efficiently?
> 

I think FreeBSD 5.[1|2] will be able to.  That was the entire reason for SMPng and
KSE.  There is not too much of the kernel left untouched from the 4.0 split.

As far as NetBSD or OpenBSD goes, I would not expect it too soon...

GB

-- 
GB Clark II             | Roaming FreeBSD Admin
gclarkii@VSServices.COM | General Geek
CTHULU for President - Why choose the lesser of two evils?


Re: Options for growth

From
"scott.marlowe"
Date:
On Wed, 12 Feb 2003, GB Clark wrote:

> On Thu, 23 Jan 2003 11:19:36 -0700 (MST)
> "scott.marlowe" <scott.marlowe@ihs.com> wrote:
> 
> > On 23 Jan 2003, Hannu Krosing wrote:
> > 
> > > Curt Sampson kirjutas N, 23.01.2003 kell 17:42:
> > > > If the OS can handle the scheduling (which, last I checked, Linux couldn't,
> > > 
> > > When did you do your checking ? 
> > > (just curious, not to start a flame war ;)
> > > 
> > > >  at least not without patches), eight or sixteen
> > > > CPUs will be fine.
> > 
> > Yeah, take a look here:
> > 
> > http://www.sgi.com/servers/altix/
> > 
> > 64 CPUs seems scalable enough for me.  :-)  When can we expect BSD to run 
> > on this system and use all 64 CPUs efficiently?
> > 
> 
> I think FreeBSD 5.[1|2] will be able to.  That was the entire reason for SMPng and
> KSE.  There is not too much of the kernel left untouched from the 4.0 split.
> 
> As far as NetBSD or OpenBSD goes, I would not expect it too soon...

I just downloaded 5.0 last week and I've a pretty little dual PPro sitting 
here that needs to be ridden hard.  It has lots of spare drives and Linux 
is already on one, so this will be a nice box for playing with different 
distros and what not.

Now I just need an altix...  Even a little one would do.  Now how do I 
convince the powers that be where I work that we have a need for an 8 to 
64 way SMP monster box?