On 2021-02-10 11:10:41 +0000, Niels Jespersen wrote:
> >From: prachi surangalikar <surangalikarprachi100@gmail.com>
> >We are using Postgres 12.2.1 for fetching per-minute data for about 25
> >machines, running in parallel via a single thread in Python.
> >But suddenly the insertion time has increased to a very high level, about
> >30 seconds for one machine.
>
> >This is a serious problem for us, as the data fetching is becoming slow.
>
> >If anyone could help us solve this problem, it would be of great help to us.
>
> Get your data into an in-memory text buffer (e.g. io.StringIO) and then use
> COPY: https://www.psycopg.org/docs/usage.html#using-copy-to-and-copy-from
>
> This is THE way to do high-performance inserts with Postgres.
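
For reference, a minimal sketch of that COPY approach with psycopg2. The
connection string, table name (machine_data) and columns are made-up
placeholders, not something from this thread:

    import io
    import psycopg2

    conn = psycopg2.connect("dbname=metrics user=postgres")

    rows = [
        (1, "2021-02-10 11:00:00", 42.0),
        (2, "2021-02-10 11:00:00", 17.5),
    ]

    # Build a tab-separated text buffer in memory, then load it with a
    # single COPY instead of one INSERT per row.
    buf = io.StringIO()
    for machine_id, ts, value in rows:
        buf.write(f"{machine_id}\t{ts}\t{value}\n")
    buf.seek(0)

    with conn, conn.cursor() as cur:
        cur.copy_from(buf, "machine_data",
                      columns=("machine_id", "ts", "value"))
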
True, but Prachi wrote that the insert times "suddenly ... increased to a
very high level". It's better to investigate what went wrong than to
blindly make some changes to the code.
As a first measure I would at least turn on statement logging and/or
pg_stat_statements to see which statements are slow, and then
investigate the slow statements further. auto_explain might also be
useful.
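
As a starting point, a sketch of how the slowest statements could be pulled
out of pg_stat_statements from Python. It assumes the extension is already
in shared_preload_libraries (which needs a restart) and has been created
with CREATE EXTENSION; the connection string is a placeholder. Note the
columns are named total_time/mean_time in Postgres 12 (renamed to
total_exec_time/mean_exec_time in 13):

    import psycopg2

    # Alternatively, set log_min_duration_statement (e.g. to 1000 ms) to
    # log every statement slower than that threshold.
    conn = psycopg2.connect("dbname=metrics user=postgres")

    with conn, conn.cursor() as cur:
        # Statements with the highest average execution time first.
        cur.execute("""
            SELECT calls, mean_time, query
            FROM pg_stat_statements
            ORDER BY mean_time DESC
            LIMIT 10
        """)
        for calls, mean_ms, query in cur.fetchall():
            print(f"{calls:>8} calls  {mean_ms:10.2f} ms  {query[:60]}")
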
hp
--
   _  | Peter J. Holzer    | Story must make more sense than reality.
|_|_) |                    |
| |   | hjp@hjp.at         | -- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |       challenge!"