Thread: UPDATE 66k rows too slow

UPDATE 66k rows too slow

From: Miguel Arroz
Hi!

   I'm testing an update on 66k rows on PostgreSQL, and it seems
something is not right here.

   My server is a Quad-Xeon 3.2 GHz with 2 GB RAM and a RAID 1 running
FreeBSD 6.3 and PgSQL 8.3. My development machine is a PowerBook G4
1.67 GHz with 2 GB RAM, OS X Leopard and PgSQL 8.3.

   I detected that an update in my application was running too slowly.
So, I'm testing an update query with no conditions, just:

   UPDATE text_answer_mapping_ebt SET f1 = false;

   f1 is a boolean column, so it can't get much simpler than this.
I've analysed and vacuumed several times, yet the results I get on the
Xeon are:

EXPLAIN ANALYZE UPDATE text_answer_mapping_ebt SET f1 = false;
                                                             QUERY PLAN

-----------------------------------------------------------------------------------------------------------------------------------
  Seq Scan on text_answer_mapping_ebt  (cost=0.00..13945.72
rows=265072 width=92) (actual time=21.123..1049.054 rows=66268 loops=1)
  Total runtime: 63235.363 ms
(2 rows)

   On my PowerBook, this runs in about 25 seconds.

   Also, when I do the same operation on a very similarly-structured
table with fewer rows, I get *much* faster times:

EXPLAIN ANALYZE UPDATE respondent_mapping_ebt SET f1 = false;
                                                          QUERY PLAN

-----------------------------------------------------------------------------------------------------------------------------
  Seq Scan on respondent_mapping_ebt  (cost=0.00..1779.03 rows=36003
width=68) (actual time=3.023..76.096 rows=12001 loops=1)
  Total runtime: 894.888 ms
(2 rows)

   Of course, fewer rows means less time, but how can 12k rows take
less than one second, while 66k rows take more than one minute?

   I've read some stuff about PgSQL tuning and played with the
configuration files, but I keep getting the feeling that I'm doing
this blindly: I'm trying to guess the problem and work around it. I'm
sure there's a better way, but I can't seem to find it. My question
is, how can I "ask" PgSQL what's happening? How can I avoid guessing
and be sure of what is causing this slowdown? Is some buffer too small
for this? Is this related to checkpoints?

   I would appreciate it if someone could point me in the right
direction. Of course, I don't need to say I'm relatively new to this
kind of problem. :)

   Yours

Miguel Arroz

Miguel Arroz
http://www.terminalapp.net
http://www.ipragma.com





Re: UPDATE 66k rows too slow

From: andrew@pillette.com
Miguel Arroz <arroz@guiamac.com> wrote ..

>    I'm testing an update on 66k rows on Postgresql, and it seems
> something is not right here.
>
>    My server is a Quad-Xeon 3.2 Ghz with 2 GB RAM and a RAID 1 running
> FreeBSD 6.3 and PgSQL 8.3. My development machine is a PowerBook G4
> 1.67 Ghz with 2 GB RAM, OS X Leopard and PgSQL 8.3.

[62 seconds on server, 25 seconds on much weaker development machine]

OK, my guess is that the server's tables are bloated beyond what regular VACUUM can fix. Try a VACUUM FULL VERBOSE or a
re-CLUSTER if the tables are clustered.
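
For example, something along these lines (table name taken from the
original post; a bare CLUSTER only works if the table has been clustered
on an index before, and both commands take an exclusive lock):

    VACUUM FULL VERBOSE text_answer_mapping_ebt;

    CLUSTER text_answer_mapping_ebt;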

Hope this helps.

Re: UPDATE 66k rows too slow

From: Tom Lane
Miguel Arroz <arroz@guiamac.com> writes:
> EXPLAIN ANALYZE UPDATE text_answer_mapping_ebt SET f1 = false;
>                                                              QUERY PLAN
>
-----------------------------------------------------------------------------------------------------------------------------------
>   Seq Scan on text_answer_mapping_ebt  (cost=0.00..13945.72
> rows=265072 width=92) (actual time=21.123..1049.054 rows=66268 loops=1)
>   Total runtime: 63235.363 ms
> (2 rows)

Hm, only one second to do the scan ...

I'm thinking the extra time must be going into index updating or
CHECK-constraint checking or some such overhead.  Can we see the full
schema definition of the table?
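
(Either of the following captures that, including indexes, constraints
and triggers; "mydb" below is a placeholder for the database name.)

From psql:

    \d+ text_answer_mapping_ebt

or, from the shell:

    pg_dump --schema-only -t text_answer_mapping_ebt mydb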

            regards, tom lane

Re: UPDATE 66k rows too slow

From: Greg Smith
On Mon, 10 Mar 2008, Miguel Arroz wrote:

> My question is, how can I "ask" PgSQL what's happening? How can I avoid
> guessing, and be sure of what is causing this slowdown?

There are many pieces involved here, and any one or multiple of them could
be to blame.  Someone may make a guess and get lucky about the cause, but
the only generic way to solve this sort of thing is to have a systematic
approach that goes through the likely possible causes one by one until
you've discovered the source of the problem.  Since as you say you're new
to this, you've got the double task of learning that outline and then
finding out how to run each of the tests.

For your particular case, slow updates, I usually work through the
following series of tests.  I happen to have articles on most of these
sitting around because they're common issues:

-Confirm disks are working as expected:
http://www.westnet.com/~gsmith/content/postgresql/pg-disktesting.htm

-Look at differences in fsync commit behavior between the two
systems.  It's often the case that when servers appear slower than
development systems it's because the server is doing fsync properly, while
the development one is caching fsync in a way that is unsafe for database
use but much faster.
http://www.postgresql.org/docs/8.3/static/wal-reliability.html is a brief
intro to this while
http://www.westnet.com/~gsmith/content/postgresql/TuningPGWAL.htm goes
into extreme detail.  The test_fsync section there is probably the most
useful one for your comparison.

-Set up basic buffer memory parameters:
http://www.westnet.com/~gsmith/content/postgresql/pg-5minute.htm

-VACUUM VERBOSE ANALYZE and make sure that's working properly.  This
requires actually understanding the output from that command which is
"fun" to figure out.  A related topic is looking for index bloat which I
haven't found a good tutorial on yet.

-Investigate whether checkpoints are to blame.  Since you're running 8.3
you can just turn on log_checkpoints and see how often they're showing up
and get an idea how big the performance impact is.  Increasing
checkpoint_segments is the usual first thing to do if this is the case
(see the configuration sketch after this list).

-Collect data with vmstat, iostat, and top to figure out what's happening
during the problem query.

-Look for application problems (not your issue here).
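
To make the buffer, vacuum and checkpoint items above concrete, a rough
sketch (the values are illustrative assumptions for a 2 GB RAM machine,
not recommendations specific to this thread):

    # postgresql.conf (8.3)
    shared_buffers = 256MB        # basic buffer memory
    checkpoint_segments = 16      # spread out checkpoint I/O
    log_checkpoints = on          # log how often checkpoints run and how long they take

    -- and from psql, for the vacuum/bloat check:
    VACUUM VERBOSE ANALYZE;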

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD

Re: UPDATE 66k rows too slow

From: Miguel Arroz
Hi!

   I read and tried a lot of the stuff you pointed me to. I raised
shared_buffers to 180 MB and tried again. Same results.

   I deleted the DB, created a new one and generated new test data. I
now have 72k rows, and the same query finishes in... 9 seconds.

   I'm totally clueless. Anyway, two questions:

   1) My work_mem is 2 MB. Does an UPDATE query like mine depend on
work_mem?

   2) I still feel this is all very trial-and-error. Change a value, run
the query, hope it solves the problem. Well, the DB itself knows what
it is doing. Isn't there any way to make it tell us that? Like "work_mem
is too low" or anything else. I know the problem is not the
checkpoints, at least nothing related to that appears in the log. But
it irritates me to be in front of such a complex system and not be
able to know what's going on.

   Yours

Miguel Arroz


Miguel Arroz
http://www.terminalapp.net
http://www.ipragma.com





Re: UPDATE 66k rows too slow

From: "Joshua D. Drake"

On Mon, 10 Mar 2008 23:17:54 +0000
Miguel Arroz <arroz@guiamac.com> wrote:

>    I deleted the DB, created a new one and generated new test data. I
> now have 72k rows, and the same query finishes in... 9 seconds.

What does iostat -k 1 tell you during the 9 seconds the query is
running?

Joshua D. Drake



-- 
The PostgreSQL Company since 1997: http://www.commandprompt.com/ 
PostgreSQL Community Conference: http://www.postgresqlconference.org/
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
      PostgreSQL political pundit | Mocker of Dolphins


Re: UPDATE 66k rows too slow

From: Miguel Arroz
Hi!

   It has now gone up to 40 seconds... here is the result of iostat:

iostat -K -c 40
       tty             ad4              ad6             cpu
  tin tout  KB/t tps  MB/s   KB/t tps  MB/s  us ni sy in id
    1   78 32.86  34  1.08   0.70   0  0.00  13  0  1  0 86
    0  180  6.00   4  0.02   0.00   0  0.00   0  0  0  0 100
    1   63 39.74  62  2.40   0.00   0  0.00  17  0  1  0 82
    0   60 18.69 815 14.87   0.00   0  0.00  20  0  2  0 79
    0   60 56.17 293 16.06   0.00   0  0.00  41  0  5  0 53
    0   60 55.74 396 21.53   0.00   0  0.00  39  0 10  0 51
    0   60 42.24 357 14.71   0.00   0  0.00  10  0  2  0 88
    0   60 42.92 354 14.82   0.00   0  0.00  12  0  7  1 80
    0   60 38.51 368 13.82   0.00   0  0.00  14  0  6  0 80
    0   60 43.83 326 13.94   0.00   0  0.00   4  0  1  0 95
    0   60 33.30 395 12.83   0.00   0  0.00  11  0  3  0 86
    0   60 41.36 395 15.94   0.00   0  0.00   4  0  3  0 93
    0   60 21.97 684 14.68   0.00   0  0.00  10  0  2  0 88
    0   60 72.44 297 20.99   0.00   0  0.00  42  0  9  0 48
    0   60 38.18 453 16.87   0.00   0  0.00  23  0  8  1 68
    0   60 35.15 365 12.52   0.00   0  0.00   1  0  1  0 97
    0   60 44.40 396 17.15   0.00   0  0.00  17  0  6  0 77
    0   60 43.99 341 14.64   0.00   0  0.00   4  0  2  0 93
    0   60 33.53 440 14.39   0.00   0  0.00  10  0  5  0 85
    0   60 31.22 345 10.51   0.00   0  0.00   0  0  2  0 97
       tty             ad4              ad6             cpu
  tin tout  KB/t tps  MB/s   KB/t tps  MB/s  us ni sy in id
    0   60 33.48 449 14.66   0.00   0  0.00  11  0  3  0 86
    0  180 16.85 599  9.87   0.00   0  0.00   1  0  1  0 98
    0   60 55.37 455 24.58   0.00   0  0.00  25  0  4  1 69
    0   60 49.83 376 18.28   0.00   0  0.00  18  0  5  1 76
    0   60 29.86 363 10.58   0.00   0  0.00   3  0  0  1 96
    0   60 36.21 365 12.90   0.00   0  0.00  12  0  3  1 84
    0   60 33.13 353 11.41   0.00   0  0.00   2  0  2  0 96
    0   60 39.47 345 13.28   0.00   0  0.00  16  0  3  0 80
    0   60 40.48 363 14.34   0.00   0  0.00   8  0  2  0 89
    0   60 30.91 397 11.97   0.00   0  0.00   5  0  2  0 93
    0   60 18.21 604 10.75   0.00   0  0.00   5  0  2  0 93
    0   60 48.65 359 17.04   0.00   0  0.00  20  0  6  0 74
    0   60 32.91 375 12.04   0.00   0  0.00  10  0  4  0 86
    0   60 35.81 339 11.84   0.00   0  0.00   3  0  2  0 96
    0   60 33.38 394 12.83   0.00   0  0.00  11  0  4  0 85
    0   60 34.40 313 10.51   0.00   0  0.00   4  0  2  0 93
    0   60 45.65 358 15.94   0.00   0  0.00  19  0  7  0 74
    0   60 37.41 309 11.28   0.00   0  0.00   3  0  2  0 95
    0   60 32.61 447 14.22   0.00   0  0.00  10  0  3  1 86
    0   60 17.11 516  8.63   0.00   0  0.00   1  0  1  0 98

   There's surely a lot of disk activity going on. With these figures,
hundreds of megabytes must have been written during this single query
execution! Something is definitely not right here.

   Yours

Miguel Arroz


Miguel Arroz
http://www.terminalapp.net
http://www.ipragma.com





Re: UPDATE 66k rows too slow

From: "Joshua D. Drake"

On Mon, 10 Mar 2008 23:46:10 +0000
Miguel Arroz <arroz@guiamac.com> wrote:
>        tty             ad4              ad6             cpu
>   tin tout  KB/t tps  MB/s   KB/t tps  MB/s  us ni sy in id
>     0   60 33.48 449 14.66   0.00   0  0.00  11  0  3  0 86
>     0  180 16.85 599  9.87   0.00   0  0.00   1  0  1  0 98
>     0   60 55.37 455 24.58   0.00   0  0.00  25  0  4  1 69
>     0   60 49.83 376 18.28   0.00   0  0.00  18  0  5  1 76
>     0   60 29.86 363 10.58   0.00   0  0.00   3  0  0  1 96
>     0   60 36.21 365 12.90   0.00   0  0.00  12  0  3  1 84
>     0   60 33.13 353 11.41   0.00   0  0.00   2  0  2  0 96
>     0   60 39.47 345 13.28   0.00   0  0.00  16  0  3  0 80
>     0   60 40.48 363 14.34   0.00   0  0.00   8  0  2  0 89
>     0   60 30.91 397 11.97   0.00   0  0.00   5  0  2  0 93
>     0   60 18.21 604 10.75   0.00   0  0.00   5  0  2  0 93
>     0   60 48.65 359 17.04   0.00   0  0.00  20  0  6  0 74
>     0   60 32.91 375 12.04   0.00   0  0.00  10  0  4  0 86
>     0   60 35.81 339 11.84   0.00   0  0.00   3  0  2  0 96
>     0   60 33.38 394 12.83   0.00   0  0.00  11  0  4  0 85
>     0   60 34.40 313 10.51   0.00   0  0.00   4  0  2  0 93
>     0   60 45.65 358 15.94   0.00   0  0.00  19  0  7  0 74
>     0   60 37.41 309 11.28   0.00   0  0.00   3  0  2  0 95
>     0   60 32.61 447 14.22   0.00   0  0.00  10  0  3  1 86
>     0   60 17.11 516  8.63   0.00   0  0.00   1  0  1  0 98
> 
>    There's surely a lot of disk activity going on. With these figures,
> hundreds of megabytes must have been written during this single query
> execution! Something is definitely not right here.


Well, the above says you are getting roughly 10-15 MB/s of write
throughput. What is the disk subsystem you have? Also note that the
duration probably went up because you didn't vacuum between tests.
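
For example, something as simple as this between runs (table name as used
earlier in the thread):

    VACUUM ANALYZE text_answer_mapping_ebt;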

What version of PostgreSQL are you running? (I missed it.)

Joshua D. Drake 



-- 
The PostgreSQL Company since 1997: http://www.commandprompt.com/ 
PostgreSQL Community Conference: http://www.postgresqlconference.org/
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
      PostgreSQL political pundit | Mocker of Dolphins


Re: UPDATE 66k rows too slow

From: Miguel Arroz
Hi!

   The disk subsystem will be a RAID 1, but for now it's just a single
7200 rpm 160 GB SATA hard drive. The PgSQL version is 8.3, the latest
one.

   I have done some performance tests on the drive, and it handles
about 40 MB/s on sequential writes, so I'm assuming it's OK.
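
(A sequential-write test of the kind mentioned here might look roughly
like this; the file path and size are arbitrary, the size just needs to
comfortably exceed the 2 GB of RAM:)

    dd if=/dev/zero of=/tmp/ddtest bs=8k count=500000
    rm /tmp/ddtest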

   Yours

Miguel Arroz


Miguel Arroz
http://www.terminalapp.net
http://www.ipragma.com





Re: UPDATE 66k rows too slow

From: Greg Smith
On Mon, 10 Mar 2008, Miguel Arroz wrote:

> I deleted the DB, created a new one and generated new test data. I now have
> 72k rows, and the same query finishes in... 9 seconds.

This seems like more evidence that your problem here is related to dead
rows (this is what Andrew suggested).  If a fresh copy of the database
runs fast but it quickly degrades as you run additional tests that do many
updates on it, that's a popular suspect.

Are you familiar with dead rows?  When you update something, the original
copy doesn't go away; it stays behind until VACUUM gets to cleaning it up.
If you update the same rows, say, 10 times you'll have 9 dead copies of
every row in the way of doing reports on the ones still alive.

Let's go back to your original post a second:

Seq Scan on text_answer_mapping_ebt  (cost=0.00..13945.72 rows=265072
width=92) (actual time=21.123..1049.054 rows=66268 loops=1)

That shows the database estimating there are exactly 4 times your 66268
rows there (4X66268=265072).  That sounds like one active copy of your
data and 3 dead ones left behind from earlier tests.  In that case, it
would take much longer to do that full scan than when the database was
fresh.
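
One way to check that directly is to ask the statistics collector and the
catalog; for example (table name as in the thread, columns standard in 8.3),
something along these lines shows the live vs. dead row counts and the
planner's current idea of the table size:

    SELECT relname, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
      FROM pg_stat_user_tables
     WHERE relname = 'text_answer_mapping_ebt';

    SELECT relname, reltuples, relpages
      FROM pg_class
     WHERE relname = 'text_answer_mapping_ebt';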

> 1) My work_mem is 2 MB. Does an UPDATE query like mine depend on
> work_mem?

Nope.  That's used for sorting and that sort of thing.

> Well, the DB itself knows what is doing. Isn't there any way to make it
> tell us that?

Well, the database server itself has a lot of operating system and
hardware components it relies on, and it has no idea how any of those are
working.  So it's unreasonable to expect in every case the database has a
clue what's going on.

In your case, I suspect the report most likely to say something
interesting is the 4th item on the list I sent before: looking at the
VACUUM VERBOSE ANALYZE output for a problem.

Here's the educational exercise I'd suggest that might help you track down
what's going on here (a rough psql rendition of the sequence follows the
list):

1) Recreate a fresh copy of the database.  Run VACUUM VERBOSE ANALYZE and
save a copy of the output so you know what that looks like with no dead
rows.
2) Run your query with EXPLAIN ANALYZE and save that too.  Should be fast.
3) Do whatever testing it is you do that seems to result in the system
running much slower.
4) Save the EXPLAIN ANALYZE output when you've reached slowness.
5) Run a VACUUM VERBOSE ANALYZE, save that for comparison to the earlier
one.
6) Run the EXPLAIN ANALYZE again to see if (5) did anything useful.
7) Run VACUUM FULL VERBOSE and save that output.
8) Run the EXPLAIN ANALYZE again to see if (7) did anything useful.
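
A bare-bones psql rendition of that sequence (the UPDATE is the one from
earlier in the thread; each command's output is what gets saved):

    -- steps 1 and 2: fresh database, baseline
    VACUUM VERBOSE ANALYZE;
    EXPLAIN ANALYZE UPDATE text_answer_mapping_ebt SET f1 = false;

    -- step 3: run the usual application tests, then steps 4 to 6
    EXPLAIN ANALYZE UPDATE text_answer_mapping_ebt SET f1 = false;
    VACUUM VERBOSE ANALYZE;
    EXPLAIN ANALYZE UPDATE text_answer_mapping_ebt SET f1 = false;

    -- steps 7 and 8
    VACUUM FULL VERBOSE;
    EXPLAIN ANALYZE UPDATE text_answer_mapping_ebt SET f1 = false;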

Comparing the VACUUM reports and the EXPLAIN plans to see what changes
along the way should give you some good insight into what's happening
here.  That is what you're asking for--asking the database to tell you
what it's doing--but actually understanding that report takes a certain
amount of study.

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD