On Wed, Jan 7, 2026 at 8:21 AM Matthias van de Meent
<boekewurm+postgres@gmail.com> wrote:
> I'm concerned they don't actually comply with the SQL standard when
> they process data from those tables, if they assume unenforced unique
> constraints to always be valid.
If you are referring to how they can violate rules around consistent
query results, some of them include a TRUSTED/RELY option (see the
SingleStore link) that makes trusting the constraint an explicit
opt-in: without it, developers still get the documentation/statistics
features, but query results can't silently change if the data isn't
truly unique. That could be implemented similarly in Postgres, forcing
developers to opt in to the potentially inconsistent behaviour while
still letting them take advantage of the other features of UNIQUE NOT
ENFORCED.
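
To make that concrete, the opt-in could look something like this
(hypothetical syntax, loosely modeled on SingleStore's RELY; neither
UNIQUE ... NOT ENFORCED nor RELY exists in Postgres today):

```sql
-- Safe default: the constraint is documentation/statistics only,
-- and the planner must not assume uniqueness.
ALTER TABLE orders
    ADD CONSTRAINT orders_ref_key UNIQUE (ref) NOT ENFORCED;

-- Explicit opt-in: the developer tells the planner it may trust
-- the constraint, accepting wrong results if duplicates exist.
ALTER TABLE orders
    ADD CONSTRAINT orders_ref_key UNIQUE (ref) NOT ENFORCED RELY;
```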
> > 2. Create a new "dummy index" index type. This would not include any
> > update triggers and would have an infinite cost to prevent usage in
> > query planning, but it would still serve the purpose of proving the
> > existence of a unique index.
>
> It would be faster than a btree, but it'd still have the issue of
> adding the overhead of column projection costs, and it'd be a (false?)
> target for LR's replication identity, which I'm concerned about.
Yeah, I'm thinking a constraint-only approach would be better. I
don't see any reason the LR replica-identity search couldn't be
rewritten to skip any index flagged as a dummy, but that does seem
like unnecessary added complexity, and, of course, the additional
overhead of carrying the index at all isn't great.
> I think this feature would be a loaded footgun, especially if the
> planner starts to consider unenforced constraints as valid.
As mentioned above, there are ways we could mitigate the potential
risks by isolating the riskier functionality behind an explicit
opt-in in the constraint interface.
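
For illustration, this is the kind of anomaly the opt-in would have
to guard against (again using the hypothetical UNIQUE ... NOT
ENFORCED syntax):

```sql
CREATE TABLE t (id int, CONSTRAINT t_id_key UNIQUE (id) NOT ENFORCED);
INSERT INTO t VALUES (1), (1);  -- nothing stops the duplicate

-- If the planner assumed t_id_key held and elided the
-- deduplication step, this would return two rows instead of one:
SELECT DISTINCT id FROM t;
```

Without the opt-in flag, the planner would simply ignore the
constraint and plans would be unchanged from today.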