Re: oracle to psql migration - slow query in postgres - Mailing list pgsql-performance
From | Tony Capobianco
---|---
Subject | Re: oracle to psql migration - slow query in postgres
Date |
Msg-id | 1287157702.1730.166.camel@tony1.localdomain
In response to | Re: oracle to psql migration - slow query in postgres (Mladen Gogala <mladen.gogala@vmsinfo.com>)
List | pgsql-performance
The recommendations on the numeric columns are fantastic. Thank you very much. We will revisit our methods of assigning datatypes when we migrate our data over from Oracle.

Regarding the full table scans: given the volume of data involved and the present design of our indexes, full table scans appear unavoidable. Over time, indexes were added and removed to satisfy particular functionality. Considering this is our most important table, I will research exactly how this table is queried so we can better optimize and reorganize our indexes.

Thanks for your help.
Tony

On Thu, 2010-10-14 at 23:59 -0400, Mladen Gogala wrote:
> On 10/14/2010 4:10 PM, Jon Nelson wrote:
> > The first thing I'd do is think real hard about whether you really,
> > really want 'numeric' instead of boolean, smallint, or integer. The
> > second thing is that none of your indices (which specify a whole
> > bunch of fields, by the way) have only just emailok, emailbounced,
> > or only the pair of them. Without knowing the needs of your app, I
> > would reconsider your index choices and go with fewer columns per
> > index.
>
> Also, make sure that the statistics are good, that histograms are
> large enough and that Geico (the genetic query optimizer) will really
> work hard to save you 15% or more on the query execution time. You can
> also make sure that any existing index is used, by disabling the
> sequential scan and then activating and de-activating indexes with
> dummy expressions, just as it was done with Oracle's rule-based
> optimizer.
>
> I agree that a good data model is even more crucial for Postgres than
> is the case with Oracle. Oracle, because of its rich assortment of
> tweaking & hacking tools and parameters, can be made to perform even
> if the model is designed by someone who didn't apply the rules of good
> design. Postgres is much more susceptible to bad models, and it is
> much harder to work around a badly designed model in Postgres than in
> Oracle. What people do not understand is that every application in the
> world will become badly designed after years of maintenance, adding
> columns, creating additional indexes, views, tables and triggers, and
> then deploying various tools to design applications. As noted by
> Murphy, things develop from bad to worse.
>
> Keep Postgres models simple and separated, because it's much easier to
> keep clearly defined models simple and effective than to keep models
> with 700 tables and 350 views, frequently with conflicting names,
> different columns named the same and the same columns named
> differently. And monitor, monitor, monitor. Use strace, ltrace,
> pgstatspack, auto_explain, pgfouine, pgadmin, top, sar, iostat and all
> the tools you can get hold of. Without the event interface, it's
> frequently a guessing game. It is, however, possible to manage things.
>
> If working with partitioning, be very aware that the PostgreSQL
> optimizer has certain problems with partitions, especially with group
> functions. If you want speed, everything must be prefixed with the
> partitioning column: indexes, expressions, joins. There is no explicit
> star schema, and creating hash indexes will not buy you much; as a
> matter of fact, the Postgres community is extremely suspicious of hash
> indexes and I don't see them widely used.
>
> Having said that, I was able to solve my problems with speed and
> partitioning.
>
> --
> Mladen Gogala
> Sr. Oracle DBA
> 1500 Broadway
> New York, NY 10036
> (212) 329-5251
> www.vmsinfo.com
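For concreteness, here is a minimal sketch of the datatype change and the narrower index discussed above. The table name "members" is an assumption made for illustration; emailok and emailbounced are the flag columns named in the quoted message.

```sql
-- Sketch only: "members" is a hypothetical table name; emailok/emailbounced
-- are the flag columns discussed above. Converting numeric flags to smallint
-- shrinks both the rows and any indexes built on these columns.
ALTER TABLE members
    ALTER COLUMN emailok TYPE smallint USING emailok::smallint,
    ALTER COLUMN emailbounced TYPE smallint USING emailbounced::smallint;

-- A narrow two-column index on just the pair of flags, instead of the wide
-- multi-column indexes described earlier.
CREATE INDEX members_emailok_emailbounced_idx
    ON members (emailok, emailbounced);
```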
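Along the same lines, a sketch of the statistics and "disable the sequential scan" experiment from the quoted advice, using the same assumed names. The statistics target of 1000 is illustrative, not a recommendation.

```sql
-- Sketch: enlarge the histogram on a skewed flag column, refresh statistics,
-- then turn off sequential scans for this session only, to see whether the
-- planner is able to use an existing index at all.
ALTER TABLE members ALTER COLUMN emailbounced SET STATISTICS 1000;
ANALYZE members;

SET enable_seqscan = off;   -- session-local, for testing only
EXPLAIN ANALYZE
SELECT count(*)
  FROM members
 WHERE emailok = 1
   AND emailbounced = 0;
SET enable_seqscan = on;
```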
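Finally, a sketch of the "prefix everything with the partitioning column" point. The table and column names here (events, event_date) are hypothetical; the idea is simply that constraint exclusion can only prune partitions when the partitioning column appears directly in the predicate.

```sql
-- Sketch: with inheritance-based partitioning on event_date, the planner can
-- skip child tables only when the partitioning column is constrained directly.
SET constraint_exclusion = partition;

EXPLAIN
SELECT count(*)
  FROM events
 WHERE event_date >= DATE '2010-10-01'
   AND event_date <  DATE '2010-11-01';
```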