On Thu, Feb 13, 2025 at 10:02 AM Adrian Klaver
<adrian.klaver@aklaver.com> wrote:
> This needs more information:
> 1) Postgres version.
That would be for Postgres 15.
> 2) The table definition.
This is a simplified version of the original table:
CREATE TABLE bookings (
    id SERIAL PRIMARY KEY,
    resource_id INT NOT NULL,
    start_date DATE NOT NULL,
    end_date DATE NOT NULL
);
> 3) The exclusion constraint definition.
The copied table would have an exclusion constraint such as:
ALTER TABLE bookings
    ADD CONSTRAINT no_date_overlap_for_resource_id
    EXCLUDE USING gist (
        -- the "resource_id WITH =" part relies on the btree_gist extension
        resource_id WITH =,
        daterange(start_date, end_date, '[]') WITH &&
    );
> 4) Definition of what 'fairly large' is.
This table is over 400 GB.
> 5) How is the application interfacing with the database?
This is a web application that interfaces with the database using psycopg.
Also, pulling in your question from the other reply:
> Why can't you just add the exclusion constraint to the original table?
With unique constraints, one can build a unique index concurrently and then
add the constraint using that index.
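For example, a rough sketch (the column list and names here are only
illustrative, not our real schema):

CREATE UNIQUE INDEX CONCURRENTLY bookings_resource_start_key
    ON bookings (resource_id, start_date);

-- brief ACCESS EXCLUSIVE lock, but no long table scan since the index exists
ALTER TABLE bookings
    ADD CONSTRAINT bookings_resource_start_key
    UNIQUE USING INDEX bookings_resource_start_key;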
With check constraints, one can create the constraint as NOT VALID and then
validate it, which only requires a SHARE UPDATE EXCLUSIVE lock.
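For instance (constraint name and expression are only illustrative):

ALTER TABLE bookings
    ADD CONSTRAINT end_after_start CHECK (end_date >= start_date) NOT VALID;

-- full-table check happens here, under SHARE UPDATE EXCLUSIVE only
ALTER TABLE bookings VALIDATE CONSTRAINT end_after_start;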
But with exclusion constraints, neither of those techniques is available. In
that sense, there is no way to create this type of constraint on a large table
without copying the original table, adding the constraint, and performing a
table swap.
This is done to avoid holding an exclusive lock for a long time, which would
cause an application outage.
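Roughly, the plan is something along these lines (a simplified sketch; the
backfill, keeping the copy in sync with ongoing writes, and moving over
sequences/foreign keys are omitted):

CREATE TABLE bookings_new (LIKE bookings INCLUDING ALL);

ALTER TABLE bookings_new
    ADD CONSTRAINT no_date_overlap_for_resource_id
    EXCLUDE USING gist (
        resource_id WITH =,
        daterange(start_date, end_date, '[]') WITH &&
    );

-- backfill bookings_new from bookings, catch up on writes, then:
BEGIN;
ALTER TABLE bookings RENAME TO bookings_old;
ALTER TABLE bookings_new RENAME TO bookings;
COMMIT;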
Hope that clarifies the situation a bit better.
- Marcelo