Thread: Storage management?????

Storage management?????

From: Chris Pesko
Scalability -

I'm new to Postgres, and I notice there are no commands to create files or
tablespaces or to manage the storage of data across several disks, either
for table partitioning or for separating tables and indexes.  How do you
manage the scalability of the database and the strategic placement of
objects or files across several disks?  I don't fully understand how the
database is stored and how it extends when it needs to.  How do you impose
size limits, and how can you store and manage, for example, a 100GB
database?

Backup/Monitoring -

How do you back up and recover using WAL transaction logging to roll a
database forward to the point in time of failure?
Does anyone have a comprehensive set of backup/recovery and database
monitoring scripts they could send my way, for both data warehousing and
transactional databases, so I don't have to reinvent the wheel?  Or are
there any scripts published or available for download anywhere?

pg_dumpall errors -

I am using the postgres user with the same password in all the databases,
but the pg_dumpall program still tries to authenticate interactively, even
though I am using "local trust" in pg_hba.conf.  Any suggestions?



Re: Storage management?????

From: "Oliver Elphick"
Chris Pesko wrote:
  >Scalability -
  >
  >I'm new to Postgres, and I notice there are no commands to create files or
  >tablespaces or to manage the storage of data across several disks, either
  >for table partitioning or for separating tables and indexes.  How do you
  >manage the scalability of the database and the strategic placement of
  >objects or files across several disks?  I don't fully understand how the
  >database is stored and how it extends when it needs to.  How do you impose
  >size limits, and how can you store and manage, for example, a 100GB
  >database?

Without user intervention, all database files live under $PGDATA.  A single
table that becomes too large is split into segments, by default 1GB each.
However, there is no provision for splitting a database across filesystems.

You can use an alternative location when creating a database:
 CREATE DATABASE name WITH LOCATION = 'dbpath'

An alternative way of managing things is to use symbolic links; obviously
this requires direct intervention by the administrator.
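
For illustration only, the symlink approach amounts to something like the
sketch below (untested; the paths and the file name are invented for the
example, and the postmaster must be stopped first).  A table's actual file
name is its OID (relfilenode in pg_class on 7.1):

    # Move one table's data file to another disk, leaving a symlink behind.
    # Run only while the postmaster is stopped ('pg_ctl stop').
    import os
    import shutil

    PGDATA = "/usr/local/pgsql/data"   # assumption: your data directory
    DB_OID = "18719"                   # assumption: the database's OID
    REL_FILE = "64587393"              # assumption: the table's file name
    NEW_DISK = "/disk2/pgsql"          # assumption: directory on another disk

    src = os.path.join(PGDATA, "base", DB_OID, REL_FILE)
    dst = os.path.join(NEW_DISK, REL_FILE)

    shutil.move(src, dst)   # relocate the data file to the other disk
    os.symlink(dst, src)    # leave a link where PostgreSQL expects the file

After 'pg_ctl start' the backend follows the link transparently.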

  >Backup/Monitoring -
  >
  >How do you back up and recover using WAL transaction logging to roll a
  >database forward to the point in time of failure?

I don't think that this has been written yet.  WAL is quite new.

  >Does anyone have a comprehensive set of backup/recovery and database
  >monitoring scripts they could send my way, for both data warehousing and
  >transactional databases, so I don't have to reinvent the wheel?  Or are
  >there any scripts published or available for download anywhere?
  >
  >pg_dumpall errors -
  >
  >I am using the postgres user with the same password in all the databases,
  >but the pg_dumpall program still tries to authenticate interactively, even
  >though I am using "local trust" in pg_hba.conf.  Any suggestions?


Are you perhaps connecting through TCP/IP (PGHOST set to "localhost")?  Then
the local rule would not apply.  Or is there another line earlier than
"local all trust" which matches your connection?


--
Oliver Elphick                                Oliver.Elphick@lfix.co.uk
Isle of Wight                              http://www.lfix.co.uk/oliver
PGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47  6B 7E 39 CC 56 E4 C1 47
GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839  932A 614D 4C34 3E1D 0C1C
                 ========================================
     "I beseech you therefore, brethren, by the mercies of
      God, that ye present your bodies a living sacrifice,
      holy, acceptable unto God, which is your reasonable
      service. And be not conformed to this world; but be ye
      transformed by the renewing of your mind, that ye may
      prove what is that good, and acceptable, and perfect,
      will of God."        Romans 12:1,2



Re: Storage management?????

From: Chris Pesko
Thank you for responding; here is a follow-up question regarding your
scalability answer.  Is the database size limited to the size of the one
disk it sits on?  Or does the directory volume need to be configured to
span multiple disks to get some scalability?

At 08:40 PM 10/3/01 +0100, Oliver Elphick wrote:
>---------------->8 SNIP 8<------------------



Re: Storage management?????

From: Brian McCane
On Wed, 3 Oct 2001, Chris Pesko wrote:

> Thank you for responding; here is a follow-up question regarding your
> scalability answer.  Is the database size limited to the size of the one
> disk it sits on?  Or does the directory volume need to be configured to
> span multiple disks to get some scalability?
>
---------------->8 SNIP 8<------------------
Chris,

    If you are asking whether it is possible for a database to grow from
one filesystem to another, the answer is currently "NO".  However, as was
already pointed out, you can put the files anywhere and use symbolic links
to them.  I have distributed my largest database across 5 drives on 2
separate SCSI buses at this point.  I have placed the index files on 2
disks, the table files on 2 others, and the WAL files on another (7.1.2
unpatched).  The goal has been to reduce disk latency during updates to the
tables for large multi-table transactions, and it has worked quite nicely.
The only real problem has been that I needed to write a script that warns
me whenever a table grows beyond 1GB: the new segment file (e.g.
64587393.1) is created in $PGDATA, and I then move it to whichever drive
holds the master file (e.g. 64587393).  I could automate the whole thing
with the Perl script, but that would mean having the script automagically
do a 'pg_ctl stop', move the file to a pre-defined drive, create the link,
and then 'pg_ctl start'.  Since I do this manually during times of low
system usage, and I don't trust the script to work 100% of the time, I just
do it by hand.
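
Roughly, the watcher boils down to something like this (untested sketch;
the $PGDATA path is an example):

    # Flag new 1GB segment files (named "<oid>.<n>") that have not yet been
    # relocated; segments already moved show up as symlinks and are skipped.
    import os
    import re

    PGDATA = "/usr/local/pgsql/data"     # assumption: your data directory
    SEGMENT = re.compile(r"^\d+\.\d+$")  # matches e.g. "64587393.1"

    for dirpath, dirnames, filenames in os.walk(os.path.join(PGDATA, "base")):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if SEGMENT.match(name) and not os.path.islink(path):
                print("new segment to relocate:", path)

Run it from cron during off-hours and mail yourself the output.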

- brian

Wm. Brian McCane                    | Life is full of doors that won't open
Search http://recall.maxbaud.net/   | when you knock, equally spaced amid those
Usenet http://freenews.maxbaud.net/ | that open when you don't want them to.
Auction http://www.sellit-here.com/ | - Roger Zelazny "Blood of Amber"


Re: Storage management?????

From: Chris Pesko
This is great information.  You just wrote the main content of Chapter #11
for "Storage Management".  Now I understand how to employ some sort of file
management.  Thanks.

At 11:14 AM 10/4/01 -0500, Brian McCane wrote:
>---------------->8 SNIP 8<------------------