Re: Query on postgres_fdw extension - Mailing list pgsql-general
From | Duarte Carreira
Subject | Re: Query on postgres_fdw extension
Date |
Msg-id | CAHE-9zC7O1d_opQfZHLbNTEyDTHg-dCbh3GDnBCo=MzUJiRR7w@mail.gmail.com
In response to | Re: Query on postgres_fdw extension (Vijaykumar Jain <vijaykumarjain.github@gmail.com>)
List | pgsql-general
Thanks for your help!
I'm not going forward with the id-generating scheme... I prefer to let the db do that work on its own. Sharding is way over my head.
For now I just created the two tables: one for inserting (without the id column), and another for everything else. It's awkward and prone to human error, but it works as long as nothing changes and no one deletes the insert table thinking it's garbage...
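Roughly this kind of setup, just to illustrate (object names here are made up, not the real tables):

-- foreign table used only for inserts: the id column is left out, so the
-- remote INSERT never mentions it and the remote default (nextval on the
-- serial sequence) assigns the id
CREATE FOREIGN TABLE items_insert (
    col1 integer,
    col2 text
) SERVER remote_server
  OPTIONS (schema_name 'public', table_name 'items');

-- foreign table mapping all columns, used for selects/updates/deletes
CREATE FOREIGN TABLE items_all (
    id   integer,
    col1 integer,
    col2 text
) SERVER remote_server
  OPTIONS (schema_name 'public', table_name 'items');

INSERT INTO items_insert (col1, col2) VALUES (42, 'abc');
SELECT * FROM items_all ORDER BY id;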
Thanks.
Vijaykumar Jain <vijaykumarjain.github@gmail.com> wrote on Thursday, 20/01/2022 at 17:39:
On Thu, 20 Jan 2022 at 21:29, Duarte Carreira <dncarreira@gmail.com> wrote:
> Hello everyone.
> I don't know... realistically what do you guys see as a best/simple approach?

We implemented a custom sharding layer (directory sharding with lookup tables) of 10 shards, but it was write local, read global.
the api was responsible for all rebalancing in case of hotspots.
other api sharding examples ...
although it worked really well, when you are maintaining it on your own it gets really painful, much beyond generating ids globally.
i will not go into the details, but in short, a sharded setup is not the same as a local setup. there would be many more things that would not work as expected which would otherwise work really well on a standalone setup.
writes over a shard may work, but remember they go over the network, so you can lock your table for a much longer duration and cause a much more serious outage.
if you really want distributed writes with unique keys, you can go with uuid i think, or have your own global sequence generator (see below).

"Move ID generation out of the database to an ID generation service outside of the database… As soon as a piece of work enters their system, an ID gets assigned to it… and that ID generated in a way that is known to be globally unique within their system"

Index of /shard_manager/shard_manager-0.0.1/ (pgxn.org) (pretty old, but if you can use your coordinator server as an id_generator(), then you can generate ids which are globally unique)

imho, do not try sharding manually unless you have enough dbas to maintain the shards; try using citus, it would make a lot of the manual stuff easier.

also, the workarounds below are bad, in case you just want to rush through:

postgres=# \c localdb
You are now connected to database "localdb" as user "postgres".
localdb=#
localdb=# \dt
Did not find any relations.
localdb=# \det
      List of foreign tables
 Schema | Table |    Server
--------+-------+---------------
 public | t     | remote_server
(1 row)

localdb=# \det+ t
                            List of foreign tables
 Schema | Table |    Server     |               FDW options               | Description
--------+-------+---------------+------------------------------------------+-------------
 public | t     | remote_server | (schema_name 'public', table_name 't')  |
(1 row)

localdb=# \det t
      List of foreign tables
 Schema | Table |    Server
--------+-------+---------------
 public | t     | remote_server
(1 row)

localdb=# create or replace function getnext() returns int as $_$ select id FROM dblink ('dbname = remotedb', $$ select nextval('t_id_seq') $$ ) as newtable(id int); $_$ language sql;
CREATE FUNCTION
localdb=# \c remotedb
You are now connected to database "remotedb" as user "postgres".
remotedb=# \dt t
       List of relations
 Schema | Name | Type  |  Owner
--------+------+-------+----------
 public | t    | table | postgres
(1 row)

remotedb=# \ds t_id_seq
           List of relations
 Schema |   Name   |   Type   |  Owner
--------+----------+----------+----------
 public | t_id_seq | sequence | postgres
(1 row)

remotedb=# \c localdb
You are now connected to database "localdb" as user "postgres".
localdb=# insert into t values (getnext(), 100);
INSERT 0 1
localdb=# insert into t values (getnext(), 100);
INSERT 0 1
localdb=# select * from t;
 id | col1
----+------
 11 |    4
 12 |    5
 13 |  100
 14 |  100
(4 rows)

just my opinion, ignore if not useful.
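For reference, a minimal sketch of the uuid route mentioned above (object names are illustrative; gen_random_uuid() is built in from PostgreSQL 13, older versions need the pgcrypto extension):

-- on remotedb: key the table by uuid instead of a sequence
CREATE TABLE t_uuid (
    id   uuid PRIMARY KEY DEFAULT gen_random_uuid(),
    col1 integer
);

-- on localdb: declare the same default on the foreign table, because
-- defaults declared on the foreign table are evaluated locally before
-- postgres_fdw ships the row, so inserts from this side get an id too
CREATE FOREIGN TABLE t_uuid (
    id   uuid DEFAULT gen_random_uuid(),
    col1 integer
) SERVER remote_server
  OPTIONS (schema_name 'public', table_name 't_uuid');

INSERT INTO t_uuid (col1) VALUES (100);   -- globally unique id, no shared sequence needed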