On 6 Aug 2007, at 16:58, Tom Lane wrote:
> Henrik Zagerholm <henke@mac.se> writes:
>> ... FROM tbl_file_structure
>> JOIN tbl_file ON pk_file_id = fk_file_id
>> JOIN tbl_structure ON pk_structure_id = fk_structure_id
>> JOIN tbl_archive ON pk_archive_id = fk_archive_id
>> JOIN tbl_share ON pk_share_id = fk_share_id
>> JOIN tbl_computer ON pk_computer_id = fk_computer_id
>> JOIN tbl_filetype ON pk_filetype_id = fk_filetype_id
>> JOIN tbl_acl ON fk_file_structure_id = pk_file_structure_id
>> LEFT OUTER JOIN tbl_job ON tbl_archive.fk_job_id =
>> pk_job_id
>> LEFT OUTER JOIN tbl_job_group ON tbl_job.fk_job_group_id =
>> pk_job_group_id
>> WHERE LOWER(file_name) LIKE LOWER('awstats%') AND
>> archive_complete = true AND job_group_type != 'R' GROUP BY
>> file_name, file_ctime, structure_path, pk_computer_id,
>> filetype_icon, computer_name,
>> share_name, share_path ...
>
> Perhaps raising join_collapse_limit and/or work_mem would help.
> Although I'm not really sure why you expect the above query to be fast
> --- with the file_name condition matching 50K rows, and no selectivity
> worth mentioning in any other WHERE-condition, it's gonna have to do a
> heck of a lot of joining in any case.
>
I tried raising work_mem to 10MB and join_collapse_limit to 10, 12,
and 16, with no significant performance boost.
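For reference, each run was just the same query with the settings
bumped for the session, roughly like this (the values are simply the
ones I tried):

    SET work_mem = '10MB';
    SET join_collapse_limit = 16;  -- also tried 10 and 12
    -- ... then re-ran the query above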
I know the query retrieves far more than is really necessary to show
to the user, so I would gladly come up with a way to limit the query
so the GUI doesn't hang for several minutes if a user does a bad search.
The problem is that I don't know a good way to limit the search
efficiently: limiting only on tbl_file with LIMIT 100 could make the
query return only 10 rows if the user doesn't have access to 900 of
the files (this is what the join with tbl_acl checks). Using cursors
doesn't help either, because I really don't retrieve that much data.
Would subselects work best in these kinds of scenarios? It must be a
quite common problem, with users running queries that are too wide.
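To make the subselect idea concrete, something along these lines is
what I'm picturing (just a sketch: I'm guessing which table each fk_*
column lives on from the join order above, the LIMIT value is
arbitrary, and whatever user condition the ACL check needs is elided
here just as in the original):

    SELECT file_name, file_ctime, structure_path, pk_computer_id,
           filetype_icon, computer_name, share_name, share_path
    FROM (
          -- name match plus ACL join, capped, so the limit only counts
          -- rows the user is actually allowed to see (no ORDER BY, so
          -- which 100 rows you get is arbitrary)
          SELECT fs.pk_file_structure_id, fs.fk_structure_id,
                 f.file_name, f.file_ctime, f.fk_filetype_id
          FROM tbl_file_structure fs
          JOIN tbl_file f ON f.pk_file_id = fs.fk_file_id
          JOIN tbl_acl  a ON a.fk_file_structure_id = fs.pk_file_structure_id
          WHERE LOWER(f.file_name) LIKE LOWER('awstats%')
          LIMIT 100
         ) AS hits
    JOIN tbl_structure ON pk_structure_id = hits.fk_structure_id
    JOIN tbl_archive   ON pk_archive_id   = fk_archive_id
    JOIN tbl_share     ON pk_share_id     = fk_share_id
    JOIN tbl_computer  ON pk_computer_id  = fk_computer_id
    JOIN tbl_filetype  ON pk_filetype_id  = hits.fk_filetype_id
    LEFT OUTER JOIN tbl_job ON tbl_archive.fk_job_id = pk_job_id
    LEFT OUTER JOIN tbl_job_group ON tbl_job.fk_job_group_id = pk_job_group_id
    WHERE archive_complete = true AND job_group_type != 'R'
    GROUP BY file_name, file_ctime, structure_path, pk_computer_id,
             filetype_icon, computer_name, share_name, share_path;

The point would be that the LIMIT is applied only after the name match
and the ACL join, so the cap counts files the user can actually see;
the outer conditions on archive_complete and job_group_type could
still shrink the result further, of course.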
Thanks for all your help.
> regards, tom lane