Thread: Blob question -(((
Hello, I have a question about BLOBs in PostgreSQL. I know this question drives some people crazy, but I need an answer to finish porting my content system to Postgres. Here are my questions:

* Are there any limits to the OID datatype? I need 50-100k per row.
* Can I do full-text search on OID fields (VERY IMPORTANT QUESTION!!!!)?
* How can I store 50-100k values? What datatype should I use?

The answer to these questions is very important to me; I hope someone has time to reply.

Now something for the archive: yes, I did my homework and searched around for material about pg and BLOBs. Just for the archive, I will repeat some of it here.

SELECT lo_export(image.raster, '/tmp/myfile') FROM image WHERE name = 'somename';

(from http://postgresql.adetti.iscte.pt/mhonarc/pgsql-sql/1999-03/msg00026.html )

and this:

create table images (imgname name, imgoid oid);
insert into images values ('test.gif', lo_import('/home/pmount/test.gif'));

from http://postgresql.adetti.iscte.pt/mhonarc/pgsql-sql/1999-03/msg00026.html

--
Boris
http://www.x-itec.de
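For the archive as well, a minimal large-object round trip using the table above (a sketch only; file names and the 'logo.gif' row are illustrative):

-- lo_import reads a file on the server and returns the OID of the new large object
insert into images values ('logo.gif', lo_import('/tmp/logo.gif'));

-- lo_export writes the large object back out to a file on the server
select lo_export(imgoid, '/tmp/logo_copy.gif') from images where imgname = 'logo.gif';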
On Mon, 1 Jan 2001, Boris wrote:

> Here are my questions:
>
> * Are there any limits to the OID datatype? I need 50-100k per row.

50-100k COLUMNS per row? Or are you talking about binary files of 50-100K? You definitely need to use the large object features of PostgreSQL.

> * Can I do full-text search on OID fields (VERY IMPORTANT QUESTION!!!!)?

OIDs are just numbers. If you are using them for large objects, they are just 'pointers' to the binary data of a file maintained by the operating system. If you need to search, you should design your database with appropriate description or keyword fields.

--
Brett
http://www.chapelperilous.net/~bmccoy/

---------------------------------------------------------------------------
Be circumspect in your liaisons with women.  It is better to be seen at
the opera with a man than at mass with a woman.
		-- De Maintenon
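For instance (just a sketch; table and column names are made up), the searchable metadata can live in ordinary columns next to the large-object OID:

create table documents (
    title       text,   -- searchable
    keywords    text,   -- searchable, e.g. 'postgres blob fulltext'
    description text,   -- searchable
    body        oid     -- large object: opaque to SQL, not searchable
);

-- queries then match on the metadata columns, not on the OID
select title from documents where keywords like '%blob%';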
Hello Brett,

Monday, January 01, 2001, 9:09:10 AM, you wrote:

BWM> On Mon, 1 Jan 2001, Boris wrote:
>> Here are my questions:
>>
>> * Are there any limits to the OID datatype? I need 50-100k per row.

BWM> 50-100k COLUMNS per row? Or are you talking about binary files of
BWM> 50-100K? You definitely need to use the large object features of
BWM> PostgreSQL.

Yes, I need approximately 50-100k to store ASCII data for later full-text search -((
It looks as if there is an 8 kB limit per row that can be raised to 32 kB in src/config.h, but that is still not enough -((

>> * Can I do full-text search on OID fields (VERY IMPORTANT QUESTION!!!!)?

BWM> OIDs are just numbers. If you are using them for large objects, they are
BWM> just 'pointers' to the binary data of a file maintained by the operating
BWM> system. If you need to search, you should design your database with

Ahhh, interesting to know.

BWM> appropriate description or keyword fields.

Hmm, nice idea, but there are more columns with the TEXT datatype storing additional text, too. I cannot split all the documents across all those fields - or rather, I am not sure that I can. The 8/32 kB limit does not help -((( I need 50-100 kB per row -(((

--
Boris
http://www.x-itec.de
On Mon, 1 Jan 2001, Boris wrote:

> BWM> 50-100k COLUMNS per row? Or are you talking about binary files of
> BWM> 50-100K? You definitely need to use the large object features of
> BWM> PostgreSQL.
>
> Yes, I need approximately 50-100k to store ASCII data for later
> full-text search -((

Ah, now I see. Large objects may not be the solution if you are storing text, because it won't be searchable (unless you build an external search engine like mnoGoSearch, but that's really meant for web content).

However, all is not lost -- you have two options. You can break up your text into distinct fields, like title, author, abstract, text paragraph 1, text paragraph 2, and so on (this will entail a good bit of analysis and design of proper data structures on your part), and use the full-text search code in the contrib directory of the source distribution; a rough sketch of that layout follows below. Or you can go the bleeding-edge route and use the beta TOAST project, which will allow row sizes greater than the current limits. The latter may not be a good solution for a production database.

See http://postgresql.readysetnet.com/projects/devel-toast.html for more details on TOAST.

--
Brett
http://www.chapelperilous.net/~bmccoy/

---------------------------------------------------------------------------
How come everyone's going so slow if it's called rush hour?
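To make the first option concrete, one possible layout (a sketch only; table and column names are illustrative) keeps every row under the 8/32 kB limit by splitting the body into numbered chunks:

create table documents (
    doc_id   int4,
    title    text,
    author   text,
    abstract text
);

create table doc_chunks (
    doc_id   int4,   -- matches documents.doc_id
    chunk_no int4,   -- 1, 2, 3, ... in reading order
    chunk    text    -- one paragraph-sized piece, well under the row limit
);

-- a naive search scans the chunks and joins back to the metadata
select distinct d.title
from documents d, doc_chunks c
where c.doc_id = d.doc_id
  and c.chunk like '%postgres%';

The contrib full-text search can then be applied to the chunk column rather than to one oversized field.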
Is it possible to turn off referential integrity checks when doing a bulk copy into a database? I am doing a bulk copy and keep getting referential integrity errors, which I don't really want to fix, for various reasons (there are too many, and I don't really have to).

Thanks,
Huy
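One possible workaround (a sketch only, not a built-in switch; the orders/customers tables and file path are made up for illustration) is to COPY into a staging table that has no foreign keys, then move over only the rows that satisfy the constraints:

-- staging table: same columns as the real table, but no REFERENCES clauses
create table orders_staging (order_id int4, customer_id int4, amount numeric);

copy orders_staging from '/tmp/orders.dat';

-- keep only the rows that will pass the foreign key on the real table
insert into orders
select s.* from orders_staging s, customers c
where s.customer_id = c.customer_id;

drop table orders_staging;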
Does a tool like phpMyAdmin exist for PostgreSQL? It is really handy for MySQL.

Thanks,
George Loch
> Does a tool like phpMyAdmin exist for PostgreSQL? It is really handy
> for MySQL.

Yup, check out http://www.greatbridge.org/project/phppgadmin/projdisplay.php

------------------------
Chris Smith
http://www.squiz.net