Thread: enable_sort optimization problem

enable_sort optimization problem

From: Dave E Martin

(8.0.1 on debian/linux 2.6.11 kernel)

I have noticed that if I set enable_sort=false in the .conf file, my
queries run faster. I had a query which ran in 6 milliseconds with
LIMIT 20, but took around 19 seconds (19000 milliseconds) with LIMIT
21. It also took longer with LIMIT 19 OFFSET 2. (I don't know what it
is about the 21st record.) In any case, I noticed in the EXPLAIN
ANALYZE output that the slow version was doing a sort and the quick
version was not, so I tried the enable_sort=false setting, and now
things are generally running faster.
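
For testing, these planner switches can also be flipped per-session
instead of in the .conf file; for example (the SELECT here stands for
the full query shown further down):

SET enable_sort = false;      -- affects only the current session
EXPLAIN ANALYZE SELECT ...;   -- rerun and compare the plans
SET enable_sort = true;       -- back to the default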

I HAVE done ANALYZEs, and VACUUMs, and VACUUM ANALYZEs.  In prior
experimenting with this, there were even some seq scans which turned
into index scans when I set enable_seqscan=false, and became moderately
faster.

I am using 8.0.1, and below are the two query plans, first the
enable_sort=true version, then the enable_sort=false version; note the
VAST difference in speed. What is the problem, and how can I convince
the query optimizer to do the right thing (short of enable_sort=false)?

from the config file:
# - Planner Cost Constants -

#effective_cache_size = 1000    # typically 8KB each
#random_page_cost = 4           # units are one sequential page fetch cost
#cpu_tuple_cost = 0.01          # (same)
#cpu_index_tuple_cost = 0.001   # (same)
#cpu_operator_cost = 0.0025     # (same)


With enable_sort=true: (the default)
opennms=# EXPLAIN ANALYZE
SELECT OUTAGES.*, NODE.NODELABEL, IPINTERFACE.IPHOSTNAME, SERVICE.SERVICENAME,
       NOTIFICATIONS.NOTIFYID, NOTIFICATIONS.ANSWEREDBY
FROM OUTAGES
JOIN NODE USING (NODEID)
JOIN IPINTERFACE ON OUTAGES.NODEID = IPINTERFACE.NODEID
                AND OUTAGES.IPADDR = IPINTERFACE.IPADDR
LEFT OUTER JOIN SERVICE ON OUTAGES.SERVICEID = SERVICE.SERVICEID
LEFT OUTER JOIN NOTIFICATIONS ON SVCLOSTEVENTID = NOTIFICATIONS.EVENTID
ORDER BY OUTAGEID DESC
LIMIT 29 OFFSET 2;

QUERY PLAN

--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=127422.75..127422.82 rows=29 width=126) (actual time=40858.260..40858.608 rows=29 loops=1)
   ->  Sort  (cost=127422.74..127423.14 rows=161 width=126) (actual time=40858.230..40858.367 rows=31 loops=1)
         Sort Key: outages.outageid
         ->  Nested Loop Left Join  (cost=1047.62..127416.84 rows=161 width=126) (actual time=2448.671..38122.920 rows=134579 loops=1)
               ->  Hash Left Join  (cost=1047.62..126533.18 rows=161 width=102) (actual time=2448.616..23963.962 rows=132250 loops=1)
                     Hash Cond: ("outer".serviceid = "inner".serviceid)
                     ->  Hash Join  (cost=1044.17..126527.32 rows=161 width=92) (actual time=1996.055..21178.268 rows=132250 loops=1)
                           Hash Cond: (("outer".nodeid = "inner".nodeid) AND (("outer".ipaddr)::text = ("inner".ipaddr)::text))
                           ->  Index Scan using outages_serviceid_idx on outages  (cost=0.00..123497.86 rows=132245 width=52) (actual time=81.413..15330.873 rows=132250 loops=1)
                           ->  Hash  (cost=1037.31..1037.31 rows=1372 width=64) (actual time=1914.558..1914.558 rows=0 loops=1)
                                 ->  Merge Join  (cost=0.00..1037.31 rows=1372 width=64) (actual time=150.436..1906.441 rows=1372 loops=1)
                                       Merge Cond: ("outer".nodeid = "inner".nodeid)
                                       ->  Index Scan using pk_nodeid on node  (cost=0.00..229.55 rows=1043 width=24) (actual time=0.022..623.197 rows=1043 loops=1)
                                       ->  Index Scan using ipinterface_nodeid_idx on ipinterface  (cost=0.00..788.01 rows=1372 width=40) (actual time=150.369..1259.366 rows=1372 loops=1)
                     ->  Hash  (cost=3.36..3.36 rows=33 width=14) (actual time=452.519..452.519 rows=0 loops=1)
                           ->  Index Scan using pk_serviceid on service  (cost=0.00..3.36 rows=33 width=14) (actual time=452.193..452.369 rows=33 loops=1)
               ->  Index Scan using notifications_eventid_idx on notifications  (cost=0.00..5.46 rows=2 width=28) (actual time=0.078..0.079 rows=0 loops=132250)
                     Index Cond: ("outer".svclosteventid = notifications.eventid)
 Total runtime: 40868.130 ms
(19 rows)


And the enable_sort=false version:
opennms=# EXPLAIN ANALYZE
SELECT OUTAGES.*, NODE.NODELABEL, IPINTERFACE.IPHOSTNAME, SERVICE.SERVICENAME,
       NOTIFICATIONS.NOTIFYID, NOTIFICATIONS.ANSWEREDBY
FROM OUTAGES
JOIN NODE USING (NODEID)
JOIN IPINTERFACE ON OUTAGES.NODEID = IPINTERFACE.NODEID
                AND OUTAGES.IPADDR = IPINTERFACE.IPADDR
LEFT OUTER JOIN SERVICE ON OUTAGES.SERVICEID = SERVICE.SERVICEID
LEFT OUTER JOIN NOTIFICATIONS ON SVCLOSTEVENTID = NOTIFICATIONS.EVENTID
ORDER BY OUTAGEID DESC
LIMIT 29 OFFSET 2;

QUERY PLAN

-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=12600.25..195303.93 rows=29 width=126) (actual time=211.185..300.336 rows=29 loops=1)
   ->  Nested Loop Left Join  (cost=0.00..1014320.39 rows=161 width=126) (actual time=147.328..300.072 rows=31 loops=1)
         ->  Nested Loop Left Join  (cost=0.00..1013436.73 rows=161 width=102) (actual time=147.264..298.671 rows=31 loops=1)
               ->  Nested Loop  (cost=0.00..1012949.54 rows=161 width=92) (actual time=137.286..287.138 rows=31 loops=1)
                     ->  Nested Loop  (cost=0.00..1012208.81 rows=159 width=76) (actual time=126.277..254.709 rows=31 loops=1)
                           ->  Index Scan Backward using pk_outageid on outages  (cost=0.00..252480.62 rows=132245 width=52) (actual time=77.021..77.484 rows=31 loops=1)
                           ->  Index Scan using ipinterface_nodeid_ipaddr_ismanaged_idx on ipinterface  (cost=0.00..5.73 rows=1 width=40) (actual time=5.304..5.686 rows=1 loops=31)
                                 Index Cond: (("outer".nodeid = ipinterface.nodeid) AND (("outer".ipaddr)::text = (ipinterface.ipaddr)::text))
                     ->  Index Scan using pk_nodeid on node  (cost=0.00..4.65 rows=1 width=24) (actual time=1.010..1.015 rows=1 loops=31)
                           Index Cond: ("outer".nodeid = node.nodeid)
               ->  Index Scan using pk_serviceid on service  (cost=0.00..3.01 rows=1 width=14) (actual time=0.337..0.344 rows=1 loops=31)
                     Index Cond: ("outer".serviceid = service.serviceid)
         ->  Index Scan using notifications_eventid_idx on notifications  (cost=0.00..5.46 rows=2 width=28) (actual time=0.019..0.019 rows=0 loops=31)
               Index Cond: ("outer".svclosteventid = notifications.eventid)
 Total runtime: 300.856 ms
(15 rows)

I'm much happier with 300ms vs 40868ms. Am I doing any harm by leaving
enable_sort=false? (I assume it will still use a sort if it has no
choice?) More importantly for the long term, why is the optimizer making
such a blunder? (I notice just now that if I remove the LIMIT clause
completely, even the enable_sort=false version takes a very long time,
so it appears the optimizer isn't handling LIMIT well.)

p.s. My only relationship to opennms is as a user (one who is trying to
make some reports go faster).

With LIMIT 50 OFFSET 200 (enable_sort=false): Total runtime: 59.288 ms
With LIMIT 50 OFFSET 200 (enable_sort=true):  Total runtime: 18764.598 ms
No LIMIT or OFFSET (enable_sort=true):        Total runtime: 19510.191 ms
No LIMIT or OFFSET (enable_sort=false):       Total runtime: 53530.251 ms

Hm, disabling sort seems to hinder non-limited queries, so this seems to
be a catch-22.
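
One possible way around that catch-22 (a sketch, not something tested
here): scope the setting to a single transaction with SET LOCAL, so
only the limited report queries see it and everything else keeps the
default:

BEGIN;
SET LOCAL enable_sort = false;  -- reverts automatically at COMMIT/ROLLBACK
SELECT ... ORDER BY OUTAGEID DESC LIMIT 29 OFFSET 2;  -- the report query
COMMIT;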


Re: enable_sort optimization problem

From: Richard Huxton

Dave E Martin wrote:
> (8.0.1 on debian/linux 2.6.11 kernel)
>
> I have noticed that if I set enable_sort=false in the .conf file, my
> queries run faster. I had a query which ran in 6 milliseconds with
> LIMIT 20, but took around 19 seconds (19000 milliseconds) with LIMIT
> 21. It also took longer with LIMIT 19 OFFSET 2. (I don't know what it
> is about the 21st record.) In any case, I noticed in the EXPLAIN
> ANALYZE output that the slow version was doing a sort and the quick
> version was not, so I tried the enable_sort=false setting, and now
> things are generally running faster.
>
> I HAVE done ANALYZEs, and VACUUMs, and VACUUM ANALYZEs.  In prior
> experimenting with this, there were even some seq scans which turned
> into index scans when I set enable_seqscan=false, and became moderately
> faster.

This sort of thing is useful as a way of testing whether a better plan
exists. It's not terribly good as a way of tuning a live system.

> I am using 8.0.1, and below are the two query plans, first the
> enable_sort=true version, then the enable_sort=false version; note the
> VAST difference in speed. What is the problem, and how can I convince
> the query optimizer to do the right thing (short of enable_sort=false)?
>
> from the config file:
> # - Planner Cost Constants -
>
> #effective_cache_size = 1000    # typically 8KB each
> #random_page_cost = 4           # units are one sequential page fetch cost
> #cpu_tuple_cost = 0.01          # (same)
> #cpu_index_tuple_cost = 0.001   # (same)
> #cpu_operator_cost = 0.0025     # (same)

You should probably start with the performance-tuning articles here:
  http://www.powerpostgresql.com/PerfList
  http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php

Certainly your effective_cache_size is (hopefully) too low (the default
of 1000 pages is only 8MB of assumed cache), and random_page_cost might
well need adjusting too.

If sorts in particular seem slow, you might want to increase work_mem
(called "sort_mem" in older releases). BUT make changes one step at a
time and look at the total impact on the system; otherwise you can end
up making one query fast and nine slow.
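
For example (illustrative numbers only; the right values depend on your
RAM, disks and workload, so treat these as starting points to test, not
recommendations):

effective_cache_size = 65536   # in 8KB pages: ~512MB (the default 1000 is only 8MB)
random_page_cost = 2           # below the default 4 if most data fits in cache
work_mem = 8192                # in KB, per sort/hash; raise in small steps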

--
   Richard Huxton
   Archonet Ltd

Re: enable_sort optimization problem

From: Tom Lane

Dave E Martin <postgresql-to.dave@dave.to> writes:
> I have noticed that if I set enable_sort=false in the .conf file, my
> queries are running faster.

You mean one specific example is running faster.  If you do that you'll
probably slow down other queries.

It looks like the main estimation error is here:

>                      ->  Nested Loop  (cost=0.00..1012208.81 rows=159 width=76) (actual time=126.277..254.709 rows=31 loops=1)
>                            ->  Index Scan Backward using pk_outageid on outages  (cost=0.00..252480.62 rows=132245 width=52) (actual time=77.021..77.484 rows=31 loops=1)
>                            ->  Index Scan using ipinterface_nodeid_ipaddr_ismanaged_idx on ipinterface  (cost=0.00..5.73 rows=1 width=40) (actual time=5.304..5.686 rows=1 loops=31)
>                                  Index Cond: (("outer".nodeid = ipinterface.nodeid) AND (("outer".ipaddr)::text = (ipinterface.ipaddr)::text))

Why does it think that only 159 of the 132245 rows in outages will have
join partners in ipinterface?  The actual results look like they all do.
It might be worth looking at the pg_stats rows for the join columns to
see if there's something odd about the statistics.
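
Something like this should pull out the relevant rows (pg_stats is a
standard system view, so this is just one way to slice it):

SELECT *
FROM pg_stats
WHERE tablename IN ('outages', 'ipinterface')
  AND attname IN ('nodeid', 'ipaddr');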

            regards, tom lane

Re: enable_sort optimization problem

From: Dave E Martin

Tom Lane wrote:

 > Why does it think that only 159 of the 132245 rows in outages will have
 > join partners in ipinterface?  The actual results look like they all do.
 > It might be worth looking at the pg_stats rows for the join columns to
 > see if there's something odd about the statistics.
 >


Here are the pg_stats (as of today; I haven't done any ANALYZEs or
VACUUMs since the night of my first posting) for outages and ipinterface
(I've obscured some addresses in the ipaddr rows, and removed iphostname):

-[ RECORD 1 ]-----+--------------------------------------------------------------
schemaname        | public
tablename         | outages
attname           | outageid
null_frac         | 0
avg_width         | 4
n_distinct        | -1
most_common_vals  |
most_common_freqs |
histogram_bounds  | {201,14775,27621,39600,53231,66043,79629,92779,105267,119744,134644}
correlation       | 0.390484
-[ RECORD 2 ]-----+--------------------------------------------------------------
schemaname        | public
tablename         | outages
attname           | svclosteventid
null_frac         | 0
avg_width         | 4
n_distinct        | -0.945011
most_common_vals  | {280277,356772}
most_common_freqs | {0.000666667,0.000666667}
histogram_bounds  | {17842,54830,81745,107030,135793,163240,191819,219223,268449,309128,371234}
correlation       | 0.390484
-[ RECORD 3 ]-----+--------------------------------------------------------------
schemaname        | public
tablename         | outages
attname           | svcregainedeventid
null_frac         | 0.000333333
avg_width         | 4
n_distinct        | -0.958031
most_common_vals  | {280279}
most_common_freqs | {0.000666667}
histogram_bounds  | {17844,54856,81787,107063,135821,163381,191856,219405,268450,309572,371494}
correlation       | 0.390818
-[ RECORD 4 ]-----+--------------------------------------------------------------
schemaname        | public
tablename         | outages
attname           | nodeid
null_frac         | 0
avg_width         | 4
n_distinct        | 396
most_common_vals  | {962,397,191,185,377,139,237,378,1295,231}
most_common_freqs | {0.0173333,0.00966667,0.00866667,0.00833333,0.00833333,0.00766667,0.00766667,0.00766667,0.00766667,0.007}
histogram_bounds  | {3,158,178,206,236,258,293,316,358,395,1452}
correlation       | -0.0325868
-[ RECORD 5 ]-----+--------------------------------------------------------------
schemaname        | public
tablename         | outages
attname           | ipaddr
null_frac         | 0
avg_width         | 16
n_distinct        | 396
most_common_vals  | {208....,172.22.0.158,172.20.0.237,172.20.0.231,172.22.35.56,172.17.2.5,172.20.0.180,172.21.240.91,172.23.181.16,172.21.240.93}
most_common_freqs | {0.0173333,0.00966667,0.00866667,0.00833333,0.00833333,0.00766667,0.00766667,0.00766667,0.00766667,0.007}
histogram_bounds  | {172.16.156.43,172.20.0.202,172.20.0.225,172.20.0.9,172.22.0.146,172.22.240.151,172.22.240.180,172.22.35.30,172.23.45.17,207...,209....}
correlation       | -0.0116046
-[ RECORD 6 ]-----+--------------------------------------------------------------
schemaname        | public
tablename         | outages
attname           | serviceid
null_frac         | 0
avg_width         | 4
n_distinct        | 10
most_common_vals  | {9}
most_common_freqs | {0.87}
histogram_bounds  | {1,1,1,1,1,1,11,14,23}
correlation       | 0.781509
-[ RECORD 7 ]-----+--------------------------------------------------------------
schemaname        | public
tablename         | outages
attname           | iflostservice
null_frac         | 0
avg_width         | 8
n_distinct        | -0.351555
most_common_vals  | {"2005-04-12 18:27:16","2005-04-13 16:26:04","2005-04-13 17:49:33","2005-04-13 18:45:26","2005-04-13 20:07:07","2005-04-13 20:29:17","2005-04-17 01:34:25","2005-04-22 19:56:45","2005-04-11 14:32:39","2005-04-12 14:02:14"}
most_common_freqs | {0.001,0.001,0.001,0.001,0.001,0.001,0.001,0.001,0.000666667,0.000666667}
histogram_bounds  | {"2005-04-07 20:23:28","2005-04-12 14:51:23","2005-04-12 22:14:25","2005-04-13 05:10:24","2005-04-13 13:01:27","2005-04-13 20:51:31","2005-04-14 04:35:32","2005-04-14 11:40:45","2005-04-24 15:53:49","2005-05-14 13:57:16","2005-05-26 06:13:01"}
correlation       | 0.390484
-[ RECORD 8 ]-----+--------------------------------------------------------------
schemaname        | public
tablename         | outages
attname           | ifregainedservice
null_frac         | 0
avg_width         | 8
n_distinct        | -0.332081
most_common_vals  | {"2005-04-11 14:33:51","2005-04-12 18:27:47","2005-04-13 16:26:35","2005-04-13 17:49:33","2005-04-13 20:07:46","2005-04-13 20:29:58","2005-04-17 01:34:56","2005-04-12 12:19:17","2005-04-12 14:02:45","2005-04-12 16:07:00"}
most_common_freqs | {0.001,0.001,0.001,0.001,0.001,0.001,0.001,0.000666667,0.000666667,0.000666667}
histogram_bounds  | {"2005-04-07 20:23:59","2005-04-12 14:55:59","2005-04-12 22:19:32","2005-04-13 05:16:01","2005-04-13 13:10:44","2005-04-13 20:56:58","2005-04-14 04:43:27","2005-04-14 11:44:15","2005-04-24 15:54:20","2005-05-14 13:58:35","2005-05-26 09:18:35"}
correlation       | 0.390543

(8 rows)


-[ RECORD 1 ]-----+--------------------------------------------------------------
schemaname        | public
tablename         | ipinterface
attname           | nodeid
null_frac         | 0
avg_width         | 4
n_distinct        | -0.760204
most_common_vals  | {383,195,132,149,380,381,382,1138,1142,1296}
most_common_freqs | {0.0349854,0.0189504,0.0174927,0.0174927,0.0174927,0.0174927,0.0174927,0.0153061,0.0153061,0.0123907}
histogram_bounds  | {1,92,205,291,396,694,920,1031,1144,1352,1455}
correlation       | 0.405376
-[ RECORD 2 ]-----+--------------------------------------------------------------
schemaname        | public
tablename         | ipinterface
attname           | ipaddr
null_frac         | 0
avg_width         | 16
n_distinct        | -0.773324
most_common_vals  | {0.0.0.0,10.0.1.1}
most_common_freqs | {0.225948,0.00218659}
histogram_bounds  | {172.16.156.1,172.21.240.3,172.22.35.30,207...,207...,207...,208...,208....,208....,209....,216...}
correlation       | 0.14107
-[ RECORD 3 ]-----+--------------------------------------------------------------
schemaname        | public
tablename         | ipinterface
attname           | ifindex
null_frac         | 0.706997
avg_width         | 4
n_distinct        | 51
most_common_vals  | {1,-100}
most_common_freqs | {0.0327988,0.0262391}
histogram_bounds  | {2,3,4,6,8,11,14,18,21,105,16777219}
correlation       | 0.111678
-[ RECORD 4 ]-----+--------------------------------------------------------------
schemaname        | public
tablename         | ipinterface
attname           | ismanaged
null_frac         | 0
avg_width         | 5
n_distinct        | 3
most_common_vals  | {M,U,F}
most_common_freqs | {0.686589,0.225948,0.0874636}
histogram_bounds  |
correlation       | 0.503811
-[ RECORD 5 ]-----+--------------------------------------------------------------
schemaname        | public
tablename         | ipinterface
attname           | ipstatus
null_frac         | 0.733236
avg_width         | 4
n_distinct        | 2
most_common_vals  | {1,2}
most_common_freqs | {0.228134,0.0386297}
histogram_bounds  |
correlation       | 0.836075
-[ RECORD 6 ]-----+--------------------------------------------------------------
schemaname        | public
tablename         | ipinterface
attname           | iplastcapsdpoll
null_frac         | 0
avg_width         | 8
n_distinct        | -0.765306
most_common_vals  | {"2005-05-25 12:24:27.161","2005-05-25 11:41:20.384","2005-05-25 14:01:54.991","2005-05-25 14:30:18.367","2005-05-26 03:38:57.83","2005-04-18 14:12:28.958","2005-04-25 06:57:51.826","2005-05-03 19:39:53.817","2005-05-12 20:31:11.786","2005-05-12 20:41:40.77"}
most_common_freqs | {0.0182216,0.0174927,0.0174927,0.0174927,0.0174927,0.0167638,0.0167638,0.0167638,0.0153061,0.0153061}
histogram_bounds  | {"2005-04-22 03:38:21.991","2005-05-12 20:38:13.719","2005-05-25 13:08:05.957","2005-05-25 16:33:14.61","2005-05-25 18:26:20.926","2005-05-25 19:33:52.111","2005-05-25 22:07:54.466","2005-05-25 23:14:54.804","2005-05-26 02:54:39.604","2005-05-26 07:29:45.067","2005-05-26 11:49:17.825"}
correlation       | -0.0102781
-[ RECORD 7 ]-----+--------------------------------------------------------------
schemaname        | public
tablename         | ipinterface
attname           | issnmpprimary
null_frac         | 0.0495627
avg_width         | 5
n_distinct        | 4
most_common_vals  | {N,P,C,S}
most_common_freqs | {0.868805,0.053207,0.021137,0.00728863}
histogram_bounds  |
correlation       | 0.871739

(8 rows)


Re: enable_sort optimization problem

From: Tom Lane

Dave E Martin <postgresql-to.dave@dave.to> writes:
> Tom Lane wrote:
>>> Why does it think that only 159 of the 132245 rows in outages will have
>>> join partners in ipinterface?  The actual results look like they all do.
>>> It might be worth looking at the pg_stats rows for the join columns to
>>> see if there's something odd about the statistics.

> Here are the pg_stats (as of today; I haven't done any ANALYZEs or
> VACUUMs since the night of my first posting) for outages and ipinterface
> (I've obscured some addresses in the ipaddr rows, and removed iphostname):

Oh, I see it: the most common values in the outages table have nothing
to do with the most common values in the ipinterface table.  (Not
surprising ... presumably only a small part of your interfaces have
recurring problems.)  The calculation that the planner does therefore
leads to the conclusion that the join will be fairly sparse.
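
If the per-column MCV lists are simply too small to show the overlap,
one generic knob to try (just a possibility to test, not a diagnosis)
is to gather more detailed statistics on the join columns:

ALTER TABLE outages ALTER COLUMN ipaddr SET STATISTICS 100;  -- default target is 10 in 8.0
ALTER TABLE outages ALTER COLUMN nodeid SET STATISTICS 100;
ANALYZE outages;  -- rebuild pg_stats with the larger sample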

I guess at this point I'm wondering why neither nodeid nor ipaddr is a
unique key for ipinterface ... is their combination unique?
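
For example, a quick check (if this returns no rows, the pair is unique
and could carry a unique index):

SELECT nodeid, ipaddr, count(*)
FROM ipinterface
GROUP BY nodeid, ipaddr
HAVING count(*) > 1;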

If you could get rid of the apparently rather prevalent 0.0.0.0 entries
in ipinterface, you'd probably see a better estimation result.  (I'm too
lazy to go check, but if you can replace these with nulls I think it
will deter the planner from making the bogus calculation.)
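
Something along these lines, perhaps (an untested sketch; check first
that nothing relies on those 0.0.0.0 values and that the column allows
NULLs):

UPDATE ipinterface SET ipaddr = NULL WHERE ipaddr = '0.0.0.0';
ANALYZE ipinterface;   -- refresh the statistics afterwards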

            regards, tom lane
