Thread: Networking feature for PostgreSQL
Hi,

I'm trying to add a project-specific networking feature to my Postgres build (or "database as function"). What I want to do is to send a Query instance (as a string, retrieved through an SPI function) to other machines and, after they have executed it, to receive the result tuples. It's for a mediator-wrapper project. My first thought was to write two SPI functions (one for the server (concurrent) and the other for the client), but I'm not sure if this is going to work. I'm worried about keeping the server process running in the background while other SPI calls are made.

Can this be done? Should I consider something else first, before writing any code? Any suggestions would be appreciated!

Best regards,
Ntinos Katsaros
Katsaros Kwn/nos wrote:
> Hi,
>
> I'm trying to add a -project specific- networking feature to my postgres
> build (or database as function). What I want to do is to send a Query
> instance (as a String-retrieved through an SPI function) to other
> machines and (after they have executed it) to receive result tuples.
> It's about a mediator-wrapper project. My first thought was to write 2
> SPI functions (one for the server (concurrent) and the other for client)
> but I'm not sure if this is going to work. I'm worried about setting up
> the server process running on the background while other SPI calls are
> made.

Have you looked at the dblink code?

--
Richard Huxton
Archonet Ltd
Well, actually no :)! Thanks for the hint!

But just out of curiosity, would the scenario I described work? I mean, is it possible for an SPI process to run in the background while other SPI calls are made?

Ntinos Katsaros

On Thu, 2004-10-14 at 11:15, Richard Huxton wrote:
> Katsaros Kwn/nos wrote:
>> [...]
>
> Have you looked at the dblink code?
Katsaros Kwn/nos wrote:
> Well, actually no :)! Thanks for the hint!
>
> But just out of curiosity, would the scenario I described work?
> I mean, is it possible for an SPI process to run in the background
> while other SPI calls are made?

I don't think so. You're running in a backend process, so you'd need to fork the backend itself.

> On Thu, 2004-10-14 at 11:15, Richard Huxton wrote:
>> [...]

--
Richard Huxton
Archonet Ltd
Hi again,

Having taken a look at the dblink code, I have some questions:

Given a user-defined function, is it possible (without serious memory overhead) to fork it, outside the SPI-call code, in order to make concurrent dblink calls? What I'm thinking of doing is to create a function which opens an SPI session, creates some objects (Query nodes), closes it, and then forks as many times as the number of required dblink calls (each child getting the appropriate Query node, something like for (all nodes) fork(); ...). My guess is that this is possible, because the backend work for the dblink calls is done on the remote side (speaking of SELECT statements only). However, the returned tuples can only be merged (to produce the final result) in main memory, because storing them first (e.g. in a temp table) would require concurrent SPI calls. Am I right? If so, is there any mechanism that can multiplex these storing procedures? On the other side, I suppose a server can serve multiple incoming queries.

Regards,
Ntinos Katsaros

On Thu, 2004-10-14 at 11:57, Richard Huxton wrote:
> Katsaros Kwn/nos wrote:
>> Well, actually no :)! Thanks for the hint!
>>
>> But just out of curiosity, would the scenario I described work?
>> I mean, is it possible for an SPI process to run in the background
>> while other SPI calls are made?
>
> I don't think so. You're running in a backend process, so you'd need to
> fork the backend itself.
>
>> [...]
Katsaros Kwn/nos wrote:
> Having taken a look at the dblink code I have some questions:
> [...]

ISTM that you might start with dblink_record() and modify it to suit, using SPI and asynchronous libpq calls. See:

http://www.postgresql.org/docs/current/static/libpq-async.html

Joe