Castle, Lindsay wrote:
> I'm working on a project that has a data set of approximately 6 million rows
> with about 12,000 different elements; each element has 7 columns of data.
>
> I'm wondering what would be faster from a scanning perspective (SELECT
> statements with some calculations) for this type of setup:
> one table for all the data
> one table for each data element (12,000 tables)
> one table per subset of elements (e.g. all elements that start with
> "a" in a table)
>
I, for one, am having difficulty understanding exactly what your data
looks like, so it's hard to give advice. Maybe some concrete examples of
what you are calling "rows", "elements", and "columns" would help.
Does each of the 6 million rows have 12,000 elements, each with 7 columns? Or
do you mean that out of the 6 million rows, there are 12,000 distinct kinds
of elements?
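If it's the latter, the usual starting point would be a single table with an
element key, rather than 12,000 tables -- something like this sketch (the
table and column names here are invented, just to show the shape I'm
picturing):

    -- Purely illustrative: "readings", "element_id", and the value
    -- columns are hypothetical names, assuming the "12,000 distinct
    -- kinds of elements" reading.
    CREATE TABLE readings (
        element_id  text NOT NULL,  -- which of the ~12,000 elements
        val1        numeric,
        val2        numeric,
        val3        numeric,
        val4        numeric,
        val5        numeric,
        val6        numeric,
        val7        numeric
    );

With that layout, a scan over one element is just "WHERE element_id = ...",
and the planner can use an index on element_id instead of you having to pick
the right table by hand.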
> Can I do anything with indexing to help with performance? I suspect for the
> majority of scans I will need to evaluate an outcome based on 4 or 5 of the
> 7 columns of data.
>
Again, this isn't clear to me -- but maybe I'm just being dense ;-)
Does this mean you expect 4 or 5 conditions in your WHERE clause?
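If so, a multicolumn index covering the columns you filter on most often can
help, as long as your queries actually constrain the leading columns.
Sticking with the invented names from the sketch above:

    -- Again hypothetical: put the columns you always filter on first,
    -- so the planner can use the leading part of the index.
    CREATE INDEX readings_filter_idx
        ON readings (element_id, val1, val2, val3);

Whether the planner actually uses it depends on how selective the conditions
are, so run EXPLAIN ANALYZE on a representative query before and after to see
if it pays off.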
Joe