> To explain. With any 'programming exercise' I do, I 'start small' and
> try to see program behavior on a small scale (both datasets and number
> of involved modules) before I roll out any larger setup for testing.
>
> In this case, the DB will be used with 'TABLE ludzie' populated with
> close to a million entries, so when I noticed the 'Seq Scan' I became
> worried.
That's different from what you posted. 5 rows in a table won't make a
database use an index, so there's no point testing that if you're
expecting millions of rows.
> My real worry was the discrepancy between the 'TABLE users' and 'TABLE
> ludzie' results - this smelled like an uncontrollable, unpredictable
> result. But obviously, not being too proficient with DBMSs, I didn't
> realise the query plan is built from transient estimates of access
> cost. I'd never before run into the necessity to ANALYSE a table,
> relying only on the self-estimates the DBMS gathers as the system is
> used. Obviously that's totally wrong for pre-production system
> evaluation, where datasets are cooked up and swapped faster than any
> DB self-estimates have a chance to be collected.
Postgres has 'autovacuum', which keeps table stats up to date well
enough in most cases for you not to worry about this sort of thing.
http://www.postgresql.org/docs/8.1/static/maintenance.html#AUTOVACUUM
It won't fit every single case, but give it a go.
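
When you swap test datasets faster than autovacuum can react, you can
refresh the planner's statistics by hand. A minimal sketch, using the
'ludzie' table from the quoted post (the column and value in the WHERE
clause are hypothetical, just to illustrate the check):

    -- Refresh planner statistics for the table right after (re)loading it:
    ANALYZE ludzie;

    -- Then verify the plan: with fresh stats and a selective predicate,
    -- the planner should prefer an Index Scan over a Seq Scan.
    EXPLAIN SELECT * FROM ludzie WHERE id = 42;

Running ANALYZE as part of your data-loading script keeps the plans
predictable during pre-production testing, independent of autovacuum's
timing.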
--
Postgresql & php tutorials
http://www.designmagick.com/