Re: postgresql v11.1 Segmentation fault: signal 11: by running SELECT... JIT Issue? - Mailing list pgsql-general

From pabloa98
Subject Re: postgresql v11.1 Segmentation fault: signal 11: by running SELECT... JIT Issue?
Date
Msg-id CAEjudX5LiF2XbTuhQ7KQK1ODSNcKz5ezOsT9jWyXZ2knN=wpuw@mail.gmail.com
Whole thread Raw
In response to Re: postgresql v11.1 Segmentation fault: signal 11: by running SELECT... JIT Issue?  (pabloa98 <pabloa98@gmail.com>)
Responses Re: postgresql v11.1 Segmentation fault: signal 11: by running SELECT... JIT Issue?  (Andrew Gierth <andrew@tao11.riddles.org.uk>)
Re: postgresql v11.1 Segmentation fault: signal 11: by running SELECT... JIT Issue?  (Justin Pryzby <pryzby@telsasoft.com>)
List pgsql-general
I found this article:


It seems I should modify: uint8 t_hoff;
and replace it with something like: uint32 t_hoff; or uint64 t_hoff;

And perhaps I should modify this too?

The fix is easy enough, just adding a
v_hoff = LLVMBuildZExt(b, v_hoff, LLVMInt32Type(), "");
fixes the issue for me.

If that is the case, I am not sure what kind of modification we should do.


I feel I need to explain why we create these huge tables. Basically, we want to process big matrices for machine learning.
Using tables with classic columns lets us write very clear code. If we had to start using arrays as columns, things would become complicated and unintuitive (besides, some columns store vectors as arrays...).

We could use JSONB (we do, but for json documents). The problem is that storing large numbers of jsonb columns creates performance issues (compared with normal tables).

Since almost everybody is applying ML to different products, perhaps other companies are also interested in a version of Postgres that can deal with tables with thousands of columns?
I did not find any postgres package ready to use like that, though.

Pablo




On Tue, Jan 29, 2019 at 12:11 AM pabloa98 <pabloa98@gmail.com> wrote:
I did not modify it.

I guess I should make it bigger than 1765. Is 2400 or 3200 fine?

My apologies if my questions look silly. I do not know about the internal format of the database.

Pablo

On Mon, Jan 28, 2019 at 11:58 PM Andrew Gierth <andrew@tao11.riddles.org.uk> wrote:
>>>>> "pabloa98" == pabloa98  <pabloa98@gmail.com> writes:

 pabloa98> the table baseline_denull has 1765 columns,

Uhh...

#define MaxHeapAttributeNumber  1600    /* 8 * 200 */

Did you modify that?

(The back of my envelope says that on 64bit, the largest usable t_hoff
would be 248, of which 23 is fixed overhead leaving 225 as the max null
bitmap size, giving a hard limit of 1800 for MaxTupleAttributeNumber and
1799 for MaxHeapAttributeNumber. And the concerns expressed in the
comments above those #defines would obviously apply.)

--
Andrew (irc:RhodiumToad)
