Hi,
On 12/01/2017 04:19 PM, Сергей А. Фролов wrote:
> The database was created in October 2016 on PG 9.5.3, then
> backed up/restored into PG 9.6.5, and then backed up/restored into PG 9.6.6.
>
> I am sure that ~10 of the problematic records were added on PG 9.6.5 and
> ~40 were added on PG 9.6.6.
>
By backup/restore you mean pg_dump? If that's the case, it's almost
certain the duplicates appeared on 9.6.6, because otherwise the restore
would have failed. But that contradicts your claim that ~10 of the
duplicates originated on 9.6.5, before the upgrade to 9.6.6.
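To illustrate why a restore can't silently carry duplicates across (a
minimal sketch with a made-up table; the error text is what current
PostgreSQL versions emit for this case):

```sql
-- A unique index cannot be built over duplicate rows, so restoring a
-- dump that contains duplicates aborts at the index-creation step.
CREATE TABLE t (id integer);
INSERT INTO t VALUES (1), (1);            -- duplicates already present
CREATE UNIQUE INDEX t_id_idx ON t (id);
-- ERROR:  could not create unique index "t_id_idx"
-- DETAIL:  Key (id)=(1) is duplicated.
```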
BTW, are you running vanilla PostgreSQL, or one of the EDB versions?
> The file system is NTFS.
>
> Windows 10 runs as a virtual machine under Hyper-V. The Windows logs
> contain nothing suspicious on either system.
>
No idea. My experience with modern Windows systems is minimal, but I'd
assume it shouldn't corrupt data in normal operation.
> I wrote a script to generate SELECTs that check all tables in all
> schemas for duplicates - all other tables are OK.
>
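For anyone wanting to run a similar check, a query along these lines
generates one duplicate-check SELECT per table with a primary key (a
sketch built on the system catalogs, not the poster's actual script;
adjust the schema filtering as needed):

```sql
-- Emit "SELECT <pk cols>, count(*) ... HAVING count(*) > 1" for each
-- table that has a primary key, skipping the system schemas.
SELECT format(
         'SELECT %s, count(*) FROM %I.%I GROUP BY %s HAVING count(*) > 1;',
         pk.cols, n.nspname, c.relname, pk.cols)
FROM (SELECT i.indrelid,
             string_agg(quote_ident(a.attname), ', ') AS cols
      FROM pg_index i
      JOIN pg_attribute a ON a.attrelid = i.indrelid
                         AND a.attnum = ANY (i.indkey)
      WHERE i.indisprimary
      GROUP BY i.indrelid) pk
JOIN pg_class c ON c.oid = pk.indrelid
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname NOT IN ('pg_catalog', 'information_schema');
```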
> The only problem I observed: the PG debugger hung once and we had to
> kill the related postgres process via Task Manager (killing the session
> had no effect), but I am sure the killed session did not touch the
> problem table at all.
>
Not sure which debugger you mean, but again - killing a process should
not result in data corruption. It may cause the database to crash and
perform recovery, but that's about it.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services