Re: PGSQL with high number of database rows? - Mailing list pgsql-general

From Listmail
Subject Re: PGSQL with high number of database rows?
Date
Msg-id op.tp7x88h8zcizji@apollo13
In response to PGSQL with high number of database rows?  (Tim Perrett <hello@timperrett.com>)
List pgsql-general
> Are there any implications with possibly doing this? will PG handle it?
> Are there realworld systems using PG that have a massive amount of data
> in them?

    It's not how much data you have, it's how you query it.

    You can have a table with 1000 rows and be dead slow if said rows are big
TEXT data and you seq-scan the table in its entirety on every webpage hit
your server gets...
    You can have a terabyte table with billions of rows and be fast, if you
know what you're doing and have proper indexes.
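    For instance, a minimal sketch (table and column names are hypothetical) of
how an index changes the plan, using EXPLAIN to see what the planner does:

```sql
-- Without an index, the planner has to read the whole table:
EXPLAIN SELECT * FROM articles WHERE author_id = 42;
-- Seq Scan on articles  (cost=...)

-- Add an index on the column you filter by:
CREATE INDEX articles_author_idx ON articles (author_id);

-- Now the planner can jump straight to the matching rows:
EXPLAIN SELECT * FROM articles WHERE author_id = 42;
-- Index Scan using articles_author_idx on articles  (cost=...)
```

    Run EXPLAIN on your real queries; if you see Seq Scan on a big table in a
query that should be selective, you're probably missing an index.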

    Learning all this is very interesting. MySQL always seemed hostile to me,
but postgres is friendly, has helpful error messages, the docs are great,
and the developer team is really nice.

    The size of your data has no importance (unless your disk is full), but
the size of your working set does.

    So, if you intend to query your data from a website, for instance, where
users search it through forms, you will need to index it properly so each
query only explores a small section of your data set in order to be fast.
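    As a sketch (hypothetical table and form fields), a multicolumn index
matching the form's filters lets such a search touch only a thin slice of the
table:

```sql
-- Index matches the WHERE clause and the ORDER BY:
CREATE INDEX items_cat_date_idx ON items (category, created_at);

-- A typical paginated form search; the planner can walk the index
-- and stop after 20 rows instead of scanning the table:
SELECT *
  FROM items
 WHERE category = 'books'
   AND created_at >= '2007-01-01'
 ORDER BY created_at DESC
 LIMIT 20;
```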

    If you intend to scan entire tables to generate reports or statistics,
you will be more interested in whether your data set fits in RAM, and in
your disk throughput.
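    You can check how big a table really is on disk (table name hypothetical;
pg_total_relation_size includes indexes and TOAST data):

```sql
-- Compare this figure against your RAM:
SELECT pg_size_pretty(pg_total_relation_size('items'));
-- If it is much larger than RAM, full-table reports will be bound by
-- sequential disk throughput rather than by CPU or caching.
```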

    So, what is your application?
