1. I believe we have plenty of memory. How much is needed to read one array of 30K float numbers?
2. What do we need to do to avoid possible repeated detoast, and what is it?
3. We are not going to update individual elements of the arrays; we might occasionally replace a whole array. When we benchmarked, we did not notice any slowness. Can you explain how to reproduce the slowness?
TIA!
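(Regarding question 1: the on-disk/in-memory size of such an array can be measured directly. A minimal sketch, assuming an ordinary PostgreSQL session; the query builds a 30K-element float8 array and reports its size:)

```sql
-- Measure the size of a 30,000-element float8 array.
-- Expect on the order of 240 KB: 30,000 * 8 bytes of element data
-- plus a small fixed array header.
SELECT pg_column_size(array_agg(g::float8)) AS bytes
FROM generate_series(1, 30000) AS g;
```

Note that when the value is stored in a table it may be TOAST-compressed, so the stored size can be smaller, while detoasting it for a query still materializes the full ~240 KB in memory.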
On Fri, Feb 14, 2014 at 11:03 PM, Pavel Stehule [via PostgreSQL] <[hidden email]> wrote:
Hello
I worked with 80K float fields without any problem.
There are possible issues:
* it needs a lot of memory for detoast - this can be a problem with many parallel queries
* there is a risk of repeated detoast - some unlucky usage patterns in PL/pgSQL can be slow - it is solvable, but you have to identify the issue
* any update of a large array is slow - so these arrays are best suited for write-once data
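(The repeated-detoast pitfall above can be sketched roughly as follows. This is a hypothetical illustration, not code from the thread; the exact behavior depends on the PostgreSQL version, since later releases improved how PL/pgSQL caches expanded arrays:)

```sql
-- Hypothetical example of a PL/pgSQL pattern that can trigger
-- repeated detoasting of a large array argument.
CREATE OR REPLACE FUNCTION sum_slow(arr float8[]) RETURNS float8 AS $$
DECLARE
  s float8 := 0;
  i int;
BEGIN
  -- Each arr[i] subscript reference may force the TOASTed array
  -- to be decompressed again, so the loop can degrade badly
  -- as the array grows.
  FOR i IN 1 .. array_length(arr, 1) LOOP
    s := s + arr[i];
  END LOOP;
  RETURN s;
END;
$$ LANGUAGE plpgsql;
```

A set-based rewrite such as `SELECT sum(x) FROM unnest(arr) AS x;` detoasts the array once and avoids the per-element cost.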