Thread: what is the maximum number of rows in a table in postgresql 8.1
OK, finally I am changing my question.

To still get quick responses from PostgreSQL, what is the maximum number of records I can have in a table in PostgreSQL 8.1?
Sathiya,

the maximum number of records in one PostgreSQL table is unlimited:
http://www.postgresql.org/about/
[for some values of unlimited]

Some further help: googling for

    postgresql limits site:postgresql.org

leads you to this answer quite quickly, while googling for "maximum number of rows in a postgresql table" leads you to a lot of misleading pages.

Harald

On Tue, Mar 25, 2008 at 12:42 PM, sathiya psql <sathiya.psql@gmail.com> wrote:
> OK, finally I am changing my question.
>
> To still get quick responses from PostgreSQL, what is the maximum number
> of records I can have in a table in PostgreSQL 8.1?

--
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Spielberger Straße 49
70435 Stuttgart
0173/9409607
fx 01212-5-13695179
-
EuroPython 2008 will take place in Vilnius, Lithuania - Stay tuned!
> the maximum number of records in one PostgreSQL table is unlimited:
> http://www.postgresql.org/about/
> [for some values of unlimited]

I am asking about good performance, not just the limit..

If I have half a crore (5 million) records, how will the performance be?

> Some further help: googling for
>     postgresql limits site:postgresql.org

but I need some experimental results...
I have 1 GB of RAM with a Pentium Celeron.
With 50 lakh (5 million) records, Postgres performance is not good....

It takes 30 seconds for simple queries....
sathiya psql wrote:
> I have 1 GB of RAM with a Pentium Celeron.
> With 50 lakh (5 million) records, Postgres performance is not good....
>
> It takes 30 seconds for simple queries....

Show us the EXPLAIN ANALYZE. There is no problem with a large number
of records, as long as you're not expecting to process all of them all
the time.

--
Alvaro Herrera                      http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.
> Show us the EXPLAIN ANALYZE. There is no problem with a large number
> of records, as long as you're not expecting to process all of them all
> the time.
Yes, many a time I need to process all the records, and I often need to
use count(*).

So what to do? (Those trigger options I know already, but I will do
counts on different parameters.)
sathiya psql wrote:
> Yes, many a time I need to process all the records, and I often need to
> use count(*).
> So what to do? (Those trigger options I know already, but I will do
> counts on different parameters.)

*** PLEASE *** post the output of an EXPLAIN ANALYZE on one or more of your queries, and POST THE QUERY TEXT TOO. For example, if your query was:

SELECT COUNT(*) FROM sometable WHERE somefield > 42;

then you would run:

ANALYZE sometable;

then you would run:

EXPLAIN ANALYZE SELECT COUNT(*) FROM sometable WHERE somefield > 42;

and paste the resulting text into an email message to this list. Without your query text and the EXPLAIN ANALYZE output from it, it is much harder for anybody to help you. You should also post the output of a psql "\d" command on your main table definitions.

As for what you can do to improve performance, some options (hardly an exclusive list) include:

- Maintaining a summary table using a trigger. The summary table might track counts for various commonly-searched-for criteria. Whether this is practical or not depends on your queries, which you have still not posted to the list. (A concrete sketch appears after this message.)

- Tuning your use of indexes (adding, removing, or adjusting indexes to better service your queries). Use EXPLAIN ANALYZE to help with this, and READ THE MANUAL, which has excellent information on tuning index use and profiling queries.

- Tuning the query planner parameters to make better planning decisions. In particular, if your data and indexes all fit in RAM, you should reduce the cost of index scans relative to sequential scans. There is plenty of information about that on this mailing list. Also, READ THE MANUAL, which has excellent information on tuning the planner.

- Investigating table partitioning and tablespaces (this requires considerable understanding of PostgreSQL to use successfully). You probably want to avoid this unless you really need it, and I doubt it will help much for in-memory databases anyway.

- Buying a faster computer.

--
Craig Ringer
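To make the first option concrete, here is a minimal sketch of a
trigger-maintained row count, using the call_log_in_ram table that
appears later in the thread; the summary table, function, and trigger
names are invented for illustration:

-- on 8.1 you may first need: CREATE LANGUAGE plpgsql;
CREATE TABLE call_log_count (n bigint NOT NULL);
INSERT INTO call_log_count SELECT count(*) FROM call_log_in_ram;

CREATE FUNCTION call_log_count_trig() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE call_log_count SET n = n + 1;  -- one row added
        RETURN NEW;
    ELSIF TG_OP = 'DELETE' THEN
        UPDATE call_log_count SET n = n - 1;  -- one row removed
        RETURN OLD;
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER call_log_count_update
    AFTER INSERT OR DELETE ON call_log_in_ram
    FOR EACH ROW EXECUTE PROCEDURE call_log_count_trig();

-- the count is now a single-row read instead of a full seq scan:
SELECT n FROM call_log_count;

The trade-off: every INSERT and DELETE now updates the same summary
row, which serializes concurrent writers; the archive threads mentioned
above discuss variants that reduce that contention.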
EXPLAIN ANALYZE SELECT count(*) from call_log_in_ram ;
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------
Aggregate (cost=90760.80..90760.80 rows=1 width=0) (actual time=6069.373..6069.374 rows=1 loops=1)
-> Seq Scan on call_log_in_ram (cost=0.00..89121.24 rows=3279119 width=0) (actual time=0.012..4322.345 rows=3279119 loops=1)
Total runtime: 6069.553 ms
(3 rows)
zivah=# EXPLAIN ANALYZE SELECT count(*) from call_log_in_ram ;
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------
Aggregate (cost=90760.80..90760.80 rows=1 width=0) (actual time=6259.436..6259.437 rows=1 loops=1)
-> Seq Scan on call_log_in_ram (cost=0.00..89121.24 rows=3279119 width=0) (actual time=0.013..4448.549 rows=3279119 loops=1)
Total runtime: 6259.543 ms
sathiya psql wrote:
> EXPLAIN ANALYZE SELECT count(*) from call_log_in_ram ;

And your usual query is:

SELECT count(*) from call_log_in_ram;

? If so, you should definitely build a summary table maintained by a
trigger to track the row count. That's VERY well explained in the
mailing list archives. This was suggested to you very early on in the
discussion.

If you have problems with other queries, how about showing EXPLAIN
ANALYZE for the other queries you're having problems with?

--
Craig Ringer
sathiya psql wrote:
> EXPLAIN ANALYZE SELECT count(*) from call_log_in_ram ;
>                             QUERY PLAN
> ----------------------------------------------------------------------------------------------------------------------------------
>  Aggregate  (cost=90760.80..90760.80 rows=1 width=0) (actual time=6069.373..6069.374 rows=1 loops=1)
>    ->  Seq Scan on call_log_in_ram  (cost=0.00..89121.24 rows=3279119 width=0) (actual time=0.012..4322.345 rows=3279119 loops=1)
>  Total runtime: 6069.553 ms
> (3 rows)

You will never get good performance automatically with COUNT(*) in
PostgreSQL. You can either create your own infrastructure (triggers,
statistics tables, etc.) or use an approximate result like this:

CREATE OR REPLACE FUNCTION fcount(varchar) RETURNS bigint AS $$
    SELECT reltuples::bigint FROM pg_class WHERE relname = $1;
$$ LANGUAGE 'sql';

Use the above function as:

SELECT fcount('table_name');
 fcount
--------
   7412
(1 row)
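One caveat worth adding (standard PostgreSQL behaviour, not stated in
the post above): pg_class.reltuples is only refreshed by VACUUM and
ANALYZE, so the estimate drifts as the table changes. Usage against the
table from this thread would be:

ANALYZE call_log_in_ram;           -- refresh the reltuples estimate
SELECT fcount('call_log_in_ram');  -- approximate count, no seq scan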
In response to "sathiya psql" <sathiya.psql@gmail.com>:
> EXPLAIN ANALYZE SELECT count(*) from call_log_in_ram ;
>                             QUERY PLAN
> ----------------------------------------------------------------------------------------------------------------------------------
>  Aggregate  (cost=90760.80..90760.80 rows=1 width=0) (actual time=6069.373..6069.374 rows=1 loops=1)
>    ->  Seq Scan on call_log_in_ram  (cost=0.00..89121.24 rows=3279119 width=0) (actual time=0.012..4322.345 rows=3279119 loops=1)
>  Total runtime: 6069.553 ms
> (3 rows)
>
> zivah=# EXPLAIN ANALYZE SELECT count(*) from call_log_in_ram ;
>                             QUERY PLAN
> ----------------------------------------------------------------------------------------------------------------------------------
>  Aggregate  (cost=90760.80..90760.80 rows=1 width=0) (actual time=6259.436..6259.437 rows=1 loops=1)
>    ->  Seq Scan on call_log_in_ram  (cost=0.00..89121.24 rows=3279119 width=0) (actual time=0.013..4448.549 rows=3279119 loops=1)
>  Total runtime: 6259.543 ms

Six seconds doesn't sound like an unreasonable amount of time to count
3 million rows. I don't see any performance issue here. What were your
expectations?

--
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/
wmoran@collaborativefusion.com
Phone: 412-422-3463x4023
>> the maximum number of records in one PostgreSQL table is unlimited:
>
> I am asking about good performance, not just the limit..
>
> If I have half a crore (5 million) records, how will the performance be?

How long is a piece of string? It depends what you are doing, whether
you have indexes, how the tables are arranged, and how good the
statistics are. Postgres has available to it almost all of the best
data-handling algorithms, and generally it uses them sensibly. Use the
EXPLAIN tool to get Postgres to tell you how it will execute a query.
Read the manual.

We have people running databases with an arab (a thousand million) or
more rows without any significant performance problems. However, if you
tell Postgres to read the entire table (like doing SELECT COUNT(*) FROM
table), it will obviously take time.

Matthew

--
In the beginning was the word, and the word was unsigned, and the
main() {} was without form and void...
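To make the EXPLAIN advice concrete, a minimal sketch; the column name,
index name, and date value below are invented for illustration, not
taken from this thread:

CREATE INDEX call_log_called_at_idx ON call_log_in_ram (called_at);
ANALYZE call_log_in_ram;  -- refresh statistics so the planner can pick the index
EXPLAIN SELECT * FROM call_log_in_ram WHERE called_at > '2008-03-01';
-- with a selective filter, the plan should show an Index Scan (or a
-- Bitmap Index Scan on 8.1) instead of the Seq Scan that a bare
-- count(*) forces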