Re: [PROPOSAL] Shared Ispell dictionaries - Mailing list pgsql-hackers

From: Andres Freund
Subject: Re: [PROPOSAL] Shared Ispell dictionaries
Date: 2018-03-02 04:31:49
Msg-id: 20180302043149.tn2xjgt2vcigknhe@alap3.anarazel.de
In response to: Re: [PROPOSAL] Shared Ispell dictionaries (Arthur Zakirov <a.zakirov@postgrespro.ru>)
Responses: Re: [PROPOSAL] Shared Ispell dictionaries (Arthur Zakirov <a.zakirov@postgrespro.ru>)
           Re: [PROPOSAL] Shared Ispell dictionaries (Arthur Zakirov <a.zakirov@postgrespro.ru>)
List: pgsql-hackers
Hi,

On 2018-02-07 19:28:29 +0300, Arthur Zakirov wrote:
> +    {
> +        {"max_shared_dictionaries_size", PGC_POSTMASTER, RESOURCES_MEM,
> +            gettext_noop("Sets the maximum size of all text search dictionaries loaded into shared memory."),
> +            gettext_noop("Currently controls only loading of Ispell dictionaries. "
> +                         "If total size of simultaneously loaded dictionaries "
> +                         "reaches the maximum allowed size then a new dictionary "
> +                         "will be loaded into local memory of a backend."),
> +            GUC_UNIT_KB,
> +        },
> +        &max_shared_dictionaries_size,
> +        100 * 1024, 0, MAX_KILOBYTES,
> +        NULL, NULL, NULL
> +    },

So this uses shared memory, allocated at server start?  That doesn't
seem right. Wouldn't it make more sense to have a
'num_shared_dictionaries' GUC, and then allocate them with dsm? Or even
better, not have any such limit and use a dshash table to point to the
individual loaded dictionaries?
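
A minimal sketch of that, against the existing dshash/dsa APIs
(DictEntry, dict_table_params, lookup_shared_dict() and
load_dict_into_dsa() are all made-up names here):

#include "postgres.h"

#include "lib/dshash.h"
#include "storage/lwlock.h"
#include "utils/dsa.h"

/* One entry per loaded dictionary; dshash wants the key first. */
typedef struct DictEntry
{
    Oid         dictid;        /* hash key: the dictionary's OID */
    dsa_pointer dict_data;     /* compiled dictionary, allocated in DSA */
} DictEntry;

static const dshash_parameters dict_table_params = {
    sizeof(Oid),                 /* key size */
    sizeof(DictEntry),           /* entry size */
    dshash_memcmp,               /* byte-wise key comparison */
    dshash_memhash,              /* byte-wise key hashing */
    LWTRANCHE_FIRST_USER_DEFINED /* placeholder tranche id */
};

/* made-up loader that compiles the dictionary into DSA memory */
static dsa_pointer load_dict_into_dsa(dsa_area *area, Oid dictid);

/*
 * Find a dictionary in the shared table, loading it on first use.
 * No fixed limit: the table and the dictionaries grow in DSA memory
 * as needed.
 */
static dsa_pointer
lookup_shared_dict(dshash_table *dict_table, dsa_area *area, Oid dictid)
{
    DictEntry  *entry;
    dsa_pointer result;
    bool        found;

    entry = dshash_find_or_insert(dict_table, &dictid, &found);
    if (!found)
        entry->dict_data = load_dict_into_dsa(area, dictid);
    result = entry->dict_data;
    dshash_release_lock(dict_table, entry);

    return result;
}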

Is there any chance we can instead convert dictionaries into a form we
can just mmap() into memory?  That'd scale a lot higher and more
dynamically.
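
A sketch of the mmap() idea, assuming the dictionary has first been
preprocessed into a flat, pointer-free file (map_compiled_dict() is a
made-up name):

#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/*
 * Map a preprocessed dictionary file read-only.  Read-only MAP_SHARED
 * pages of the same file are shared by the kernel across every backend
 * that maps it, so the memory cost is paid once, however many backends
 * use the dictionary.
 */
static void *
map_compiled_dict(const char *path, size_t *size)
{
    struct stat st;
    void       *base;
    int         fd;

    fd = open(path, O_RDONLY);
    if (fd < 0)
        return NULL;
    if (fstat(fd, &st) < 0)
    {
        close(fd);
        return NULL;
    }
    base = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    close(fd);                  /* the mapping survives close() */
    if (base == MAP_FAILED)
        return NULL;
    *size = (size_t) st.st_size;
    return base;
}

The on-disk format would have to be position independent (offsets
rather than pointers), since the file can end up mapped at a different
address in every backend.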

Regards,

Andres

