Will Hartung <willhartung@gmail.com> writes:
>> On May 20, 2019, at 5:14 PM, Adrian Klaver <adrian.klaver@aklaver.com> wrote:
>> Well looks like you are down to Tom's suggestion of creating a test case. Given that it seems confined to the jsonb
>> field and corresponding index, I would think that is all that is needed for the test case. Start with some smaller
>> subset, say 10,000 rows, and work up till you start seeing an issue.
> This will take quite some work, and I wouldn't attempt it with less than 5M rows to load.
Well, you're the only one who's seen this problem, and none of the
rest of us have any idea how to reproduce it. So if you want something
to get done in a timely fashion, it's up to you to show us a test case.
My guess is that it wouldn't be that hard to anonymize your data to
the point where it'd be OK to show to someone else. It's unlikely
that the problem depends on the *exact* data you've got --- though it
might depend on string lengths and the number/locations of duplicates.
But you should be able to substitute random strings for the original
values while preserving that.
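The substitution described above can be sketched roughly as follows. This is a minimal illustration, not anything from the thread: it assumes the sensitive values can be extracted as a list of strings, and it maps each distinct string to a random string of the same length, reusing the same replacement for repeated values so both string lengths and duplicate positions survive anonymization.

```python
import random
import string

def anonymize(values, seed=0):
    """Replace each distinct string with a random lowercase string of
    the same length. Duplicates map to the same replacement, so the
    number and locations of duplicates are preserved."""
    rng = random.Random(seed)  # fixed seed for a reproducible mapping
    mapping = {}
    out = []
    for v in values:
        if v not in mapping:
            mapping[v] = "".join(
                rng.choices(string.ascii_lowercase, k=len(v))
            )
        out.append(mapping[v])
    return out
```

For jsonb data the same idea would be applied recursively to every string key and value before dumping the rows back out.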
regards, tom lane