Thread: Large tables management question
Hello,

I am designing an information system for image processing. I decided to base it on PostgreSQL because of its advanced object-relational features. The database will grow by 200M per day, or in other terms 40G per year.

I want to ask whether there is some way to manage such an amount of data in PostgreSQL: to add an additional disk to the database storage when the first is full, to split some of the tables over multiple disks, and so on. Or do I have to manage this by other means, for example at the OS level (linear RAID) or at the application level (storing the large data files on the filesystem)? I am asking for something similar to "merge" tables in MySQL, although that is not the best solution.

Thanks in advance
luben

--
_________________________________________________________
Luben Karavelov                      [phone] +359 2 9877088
Network Administrator                [ICQ#]  34741625
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
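For illustration: the closest PostgreSQL analogue to MySQL "merge" tables is table inheritance. A minimal sketch, with every table and column name made up for the example:

    -- Illustrative sketch: approximating MySQL "merge" tables with
    -- PostgreSQL table inheritance. All names here are hypothetical.
    CREATE TABLE images (
        id     serial PRIMARY KEY,
        added  date NOT NULL,
        data   bytea
    );
    -- Child tables hold the actual rows:
    CREATE TABLE images_2002 () INHERITS (images);
    CREATE TABLE images_2003 () INHERITS (images);
    -- A query on the parent scans all children, merge-table style:
    SELECT count(*) FROM images;

Note that inheritance only unifies the tables logically; it does not by itself place them on different disks.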
Luben Karavelov <luben@bgone.net> writes:
> I want to ask you if there is some way to manage such amount of data in
> postgresql?

At the moment, managing it at the RAID level is the only really convenient answer. Postgres can cope with tables up to the terabyte range, but it expects you to supply a filesystem that can hold 'em.

In theory you could try to manage the space manually using symlinks, but I can't recommend that; too tedious and error-prone.

			regards, tom lane
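For anyone curious what the symlink route would involve, a rough sketch follows. The catalog queries are standard; the file moves (shown only as comments) assume a default data-directory layout and are purely illustrative, not a recommendation:

    -- Locate the table's on-disk file (names are hypothetical):
    SELECT oid FROM pg_database WHERE datname = 'mydb';
    SELECT relfilenode FROM pg_class WHERE relname = 'images';
    -- With the postmaster stopped, the file could be moved to another
    -- disk and symlinked back (shell shown as comments, illustrative):
    --   mv $PGDATA/base/<db oid>/<relfilenode> /disk2/
    --   ln -s /disk2/<relfilenode> $PGDATA/base/<db oid>/<relfilenode>
    -- Some operations (e.g. TRUNCATE) recreate the file under a new
    -- relfilenode, silently orphaning the symlink; hence "tedious and
    -- error-prone".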
Hi,

We have been developing image databases for years. A design hint (a schema sketch follows below):

- Store the thumbnails in the database as a blob! Use two tables (image and thumbnail); the image points to the thumbnail. (Not a joke - a good hint.)
- Store the preview and the original outside, in the filesystem.
- Store the paths to the preview and the original in the database.

This will be fast, solid and useful!

If it is really a production system, please use a hardware (!!!) RAID system. Two 80GB IDE drives on a hardware mirror are cheap and secure. Better are five 73GB SCSI drives as RAID 5 (hardware), using one as a hot spare.

Michael
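To make the hint concrete, here is a minimal schema sketch of that layout; all table and column names are illustrative, not from the post above:

    -- Small thumbnail blob kept inside the database.
    CREATE TABLE thumbnail (
        id    serial PRIMARY KEY,
        data  bytea NOT NULL
    );
    -- Image row points to its thumbnail; the big files stay on disk.
    CREATE TABLE image (
        id            serial PRIMARY KEY,
        thumbnail_id  integer REFERENCES thumbnail(id),
        preview_path  text NOT NULL,   -- preview lives on the filesystem
        orig_path     text NOT NULL    -- original lives on the filesystem
    );

Keeping only the small thumbnail blobs in the database keeps the tables compact, while the large previews and originals stay on the filesystem where the path columns point.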