Thread: Dealing with big tables

Dealing with big tables

From
"Mindaugas"
Date:
Hello,

  I've started working with big tables (around 300GB) and performance problems have started to appear. :(

  To simplify things - the table has an index on the From column and an index on the To column. It also has several
other non-indexed columns. There are 100000+ different values for From and the same for To.

  I execute a simple query: "select * from bigtable where From='something'". The query returns about 1000 rows and
takes 5+ seconds to complete. As far as I understand, the query is slow because:
  - first it has to retrieve pointers to the matching rows from the index. That part goes fast.
  - then it retrieves all the rows one by one. That is 100% random reads, because rows with the same From are spread
evenly through the whole 300GB and most probably nothing is cached. So we are limited by the performance of a _single_
disk, independently of how many disks the storage has? And if the storage consists of 15k rpm Cheetahs with 3.5ms
average read seek time, we should expect no more than ~285 rows per second (see the rough arithmetic below)?
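
  (Rough arithmetic behind that estimate, assuming one random seek per returned row and the quoted 3.5ms average read
seek time:)

    1 s / 3.5 ms per seek  ≈  285 seeks/s
    1000 rows / 285 seeks/s  ≈  3.5 s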

  I feel that I'm overlooking something here. But I'm new to data warehousing. :)

  Also, this query should greatly benefit from parallel execution or async I/O. Storage (seeks/second) scales almost
linearly when it has a lot of disks, and the query is completely I/O bound, so it should scale well even on a single
server.

  And I cannot use an index-organized table or a table partitioned by From :) because there are at least two similar
indexes by which queries can be executed - From and To.

  Ideas for improvement? Greenplum or EnterpriseDB? Or have I overlooked something among PostgreSQL's features?

  Thanks,

  Mindaugas

Re: Dealing with big tables

From
Sami Dalouche
Date:
Hi,

My answer may be off topic since you might be looking for a
Postgres-only solution, but just in case...

What are you trying to achieve exactly? Is there any way you could
rework your algorithms to avoid selects and use a sequential scan
(treat your Postgres data as one big file) to retrieve each of the
rows, analyze/compute them (possibly in a distributed manner), and
join the results at the end?
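
For instance, a very rough sketch of the one-big-pass idea, using a
server-side cursor so the table is read once sequentially instead of
being hit with many indexed lookups (batch size is arbitrary):

BEGIN;
-- one sequential pass over the whole table
DECLARE c CURSOR FOR SELECT * FROM bigtable;
-- consume it in batches and do the computation on the client side
FETCH FORWARD 10000 FROM c;
-- ... repeat FETCH until no rows are returned ...
CLOSE c;
COMMIT;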

A few pointers:
http://lucene.apache.org/hadoop/
http://www.gridgain.com/

Regards,
Sami Dalouche

Re: Dealing with big tables

From
"Mindaugas"
Date:
> My answer may be off topic since you might be looking for a
> Postgres-only solution, but just in case...

  I'd like to stay with SQL.

> What are you trying to achieve exactly? Is there any way you could
> rework your algorithms to avoid selects and use a sequential scan
> (treat your Postgres data as one big file) to retrieve each of the
> rows, analyze/compute them (possibly in a distributed manner), and
> join the results at the end?

  I'm trying to improve performance - to get the answer to the mentioned query
faster.

  And since the cardinality is high (100000+ different values) I doubt that it
would be possible to match the indexed select's speed with a reasonable number
of sequential-scan nodes.

  Mindaugas

Re: Dealing with big tables

From
"Guillaume Smet"
Date:
On Dec 2, 2007 11:26 AM, Mindaugas <ml@kilimas.com> wrote:
>   I execute a simple query: "select * from bigtable where From='something'". The query returns about 1000 rows and
> takes 5+ seconds to complete. As far as I understand, the query is slow because:

Can you post an EXPLAIN ANALYZE? Which version of PostgreSQL do you use?
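
For example (table and value taken from your mail; "From" has to be
double-quoted if that really is the column name, since FROM is a
reserved word):

EXPLAIN ANALYZE
SELECT * FROM bigtable WHERE "From" = 'something';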

--
Guillaume

Re: Dealing with big tables

From
Gregory Stark
Date:
"Mindaugas" <ml@kilimas.com> writes:

>   I execute a simple query: "select * from bigtable where From='something'".
>   The query returns about 1000 rows and takes 5+ seconds to complete.

As you pointed out that's not terribly slow for 1000 random accesses. It
sounds like your drive has nearly 5ms seek time which is pretty common.

What exactly is your goal? Do you need this query to respond in under a
specific limit? What limit? Do you need to be able to execute many instances
of this query in less than 5s * the number of executions? Or do you have more
complex queries that you're really worried about?

I do have an idea of how to improve Postgres for this case but it has to wait
until we're done with 8.3 and the tree opens for 8.4.

>   Ideas for improvement? Greenplum or EnterpriseDB? Or have I overlooked
>   something among PostgreSQL's features?

Both Greenplum and EnterpriseDB have products in this space which let you
break the query up over several servers but at least in EnterpriseDB's case
it's targeted towards running complex queries which take longer than this to
run. I doubt you would see much benefit for a 5s query after the overhead of
sending parts of the query out to different machines and then reassembling the
results. If your real concern is with more complex queries they may make sense
though. It's also possible that paying someone to come and look at your database
will turn up other ways to speed it up.

--
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com
  Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL training!

Re: Dealing with big tables

From
"Mindaugas"
Date:
> What exactly is your goal? Do you need this query to respond in under a
> specific limit? What limit? Do you need to be able to execute many instances
> of this query in less than 5s * the number of executions? Or do you have more
> complex queries that you're really worried about?

  I'd like this query to respond under a specific time limit. 5s now is OK but 50s later for 10000 rows is too slow.

> Both Greenplum and EnterpriseDB have products in this space which let you
> break the query up over several servers but at least in EnterpriseDB's case
> it's targeted towards running complex queries which take longer than this to
> run. I doubt you would see much benefit for a 5s query after the overhead of
> sending parts of the query out to different machines and then reassembling the
> results. If your real concern is with more complex queries they may make sense
> though. It's also possible that paying someone to come and look at your database
> will turn up other ways to speed it up.

  I see. This query should also benefit a lot from running in parallel even on one server, since it spends most of
its time waiting for the storage to respond.

  Also, off list someone pointed me to covering indexes in MySQL. But they are not supported in PostgreSQL, are
they?

  Mindaugas

Re: Dealing with big tables

From
Mark Kirkwood
Date:
Mindaugas wrote:
>
>
>   And I cannot use an index-organized table or a table partitioned by From :) because there are at least two similar
> indexes by which queries can be executed - From and To.
>
>

This makes things a bit tough. One trick is to vertically partition the
table into two new tables - with "From" in one and "To" in the other...
then you can (horizontally) partition or cluster on each of these
columns separately.

You can make it reasonably transparent by using a view to combine the
columns again to get something that looks like the original table.
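
A rough sketch of the idea (all names made up; the non-indexed columns
would go into whichever of the two tables is more convenient, or into a
third one keyed the same way):

-- one narrow table per lookup column, joined back by the row key
CREATE TABLE bigtable_from (id bigint PRIMARY KEY, "From" text);
CREATE TABLE bigtable_to   (id bigint PRIMARY KEY, "To"   text);

-- partition or CLUSTER each narrow table on its own column, then
-- recombine them so queries can keep using the old shape
CREATE VIEW bigtable_v AS
    SELECT f.id, f."From", t."To"
    FROM bigtable_from f
    JOIN bigtable_to t USING (id);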

Cheers

Mark

Re: Dealing with big tables

From
Matthew
Date:
On Mon, 3 Dec 2007, Mark Kirkwood wrote:

> > And I cannot use some index organized table or table partitioned by
> > From :) because there are at least 2 similar indexes by which queries
> > can be executed - From and To.

> This makes things a bit tough. One trick is to vertically partition the
> table into two new tables - with "From" in one and "To" in the other...
> then you can (horizontally) partition or cluster on each of these
> columns separately.

Or you could even commit a capital sin and have several copies of the same
table, sorted by different columns. Just remember to select from the
correct table to get the performance, and to write all changes to all the
tables! Kind of messes up transactions and locking a little though.
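
For illustration only (names invented; keeping the copies in sync on
every write is left to the application or to triggers):

-- one physical copy per access pattern, each loaded in "its" order
CREATE TABLE bigtable_by_from AS SELECT * FROM bigtable ORDER BY "From";
CREATE TABLE bigtable_by_to   AS SELECT * FROM bigtable ORDER BY "To";
CREATE INDEX bigtable_by_from_idx ON bigtable_by_from ("From");
CREATE INDEX bigtable_by_to_idx   ON bigtable_by_to   ("To");
-- queries filtering on From read bigtable_by_from, queries filtering on
-- To read bigtable_by_to; every INSERT/UPDATE/DELETE must hit both.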

Matthew

--
No, C++ isn't equal to D. 'C' is undeclared, so we assume it's an int,
with a default value of zero.  Hence, C++ should really be called 1.
-- met24, commenting on the quote "C++ -- shouldn't it be called D?"

Re: Dealing with big tables

From
"Merlin Moncure"
Date:
On Dec 2, 2007 7:35 AM, Mindaugas <ml@kilimas.com> wrote:
>   I'd like this query to respond under a specific time limit. 5s now is OK but 50s later for 10000 rows is too slow.
>
>   Also, off list someone pointed me to covering indexes in MySQL. But they are not supported in PostgreSQL, are
> they?

The PostgreSQL architecture does not allow that...some of the MVCC
info is stored in the tuple.  An all-index access strategy would only
help anyway if all the information being looked up was in the
index...this probably isn't practical in a really big table anyway (I
think said feature is overrated).  What would help, at least
temporarily, would be to CLUSTER the table on your index so at least
your lookups would have some locality.  Just be aware that CLUSTER is
a one-shot deal...if there are changes to the table it will gradually
fall out of order.  Otherwise it would be an ordered table, which is
what everyone wants...some of the big commercial databases have this.
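
For example (the index name below is made up; use whatever index you
already have on the From column):

-- 8.2-era syntax: CLUSTER indexname ON tablename
CLUSTER bigtable_from_idx ON bigtable;
-- 8.3 and later also accept: CLUSTER bigtable USING bigtable_from_idx;
ANALYZE bigtable;  -- refresh planner statistics after the rewrite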

In any case, you should still at least post an explain analyze to make
sure something funky isn't going on.  Barring that, I would be looking
at some type of caching to optimize around the problem...perhaps look
at the table design?

merlin