"I need gather data into one table for consistency and easy for export and import, it's ok if I split data to smaller tables, but when export/import/update, i must excute query on alot of table. And this way lead data to inconsistency if I forget update/export/import on 1 or more table. It is terrible."
That statement is wrong on many levels:
"...ease of export and import..."
Multiple tables and a single table are identical for import/export purposes.
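For example, suppose the wide table were split into two hypothetical tables, job and job_step (made-up names, just to show the shape). A view gives you back a single object to export from, so the export side of the process does not change at all:

    -- one view over the split tables
    CREATE VIEW export_jobs AS
    SELECT j.job_id, j.job_name, s.step_no, s.duration
    FROM job j
    JOIN job_step s ON s.job_id = j.job_id;

    -- exporting is then still a single query, exactly as with one wide table
    SELECT * FROM export_jobs;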
"...when I export/import/update, I must execute queries on a lot of tables..."
That's what SQL is for...
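The import side is the same story: load the file once into a staging table, then route the columns to the smaller tables with ordinary set-based statements. A rough sketch, where staging_import and the target tables are the same hypothetical names as above:

    -- one INSERT ... SELECT per target table, not one query per row
    INSERT INTO job (job_id, job_name)
    SELECT DISTINCT job_id, job_name FROM staging_import;

    INSERT INTO job_step (job_id, step_no, duration)
    SELECT job_id, step_no, duration FROM staging_import;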
"...this leads to data inconsistency if I forget to update/export/import one or more tables. It is terrible."
You build your process once and test it... Additional runs of the process are 'free'...
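And once that process is wrapped in a transaction, "forgetting one table" stops being a failure mode: either every table is updated or none of them is. A minimal sketch, assuming MySQL-style syntax and the same made-up tables:

    START TRANSACTION;   -- BEGIN in PostgreSQL and some other dialects

    UPDATE job
    SET status = 'ARCHIVED'
    WHERE created_at < '2014-01-01';

    UPDATE job_step
    SET archived = 1
    WHERE job_id IN (SELECT job_id FROM job WHERE status = 'ARCHIVED');

    COMMIT;   -- if anything fails before this point, ROLLBACK and nothing is left half-done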
And as someone else mentioned, the 34 indexes are additional tables anyway.
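If this happens to be MySQL, you can see what those 34 "additional tables" cost in space next to the row data (substitute your own schema and table name):

    SELECT table_name,
           ROUND(data_length  / 1024 / 1024) AS data_mb,
           ROUND(index_length / 1024 / 1024) AS index_mb
    FROM information_schema.tables
    WHERE table_schema = 'your_db'
      AND table_name   = 'your_big_table';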
There is probably a way to optimize your current system... there often is, no matter how horrible the implementation...
But I would start by normalizing it as much as possible and then running performance tests of your jobs against a normalized schema. There are lots of tools to do that... but they probably aren't much help with your current schema.
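The direction would be something like this, with purely illustrative column groups rather than your real 417: columns that describe the thing itself stay in the parent table, and groups that repeat move to a child table keyed by the parent id.

    CREATE TABLE job (
        job_id     BIGINT PRIMARY KEY,
        job_name   VARCHAR(100) NOT NULL,
        created_at DATE NOT NULL
    );

    CREATE TABLE job_step (
        job_id   BIGINT NOT NULL,
        step_no  INT NOT NULL,
        duration INT,
        result   VARCHAR(50),
        PRIMARY KEY (job_id, step_no),
        FOREIGN KEY (job_id) REFERENCES job (job_id)
    );

Each narrow table then only needs the few indexes that match its own access pattern, which is where the comparison against the 34-index wide table gets interesting.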
@Gary I'm working with big data; because of the demands of the job, I export/import/update data in this table every day.
"I guess it's possible that each query would need all 417 columns but it seems unlikely..." --> Yes, not all of them, but about 2/3 of the 417 columns :)
There is also the matter of the 34 indexes, which can be thought of as 34 auxiliary tables that need to be kept in sync with the main table.
I need to gather data into one table for consistency and ease of export and import. It's OK if I split the data into smaller tables, but then when I export/import/update, I must execute queries on a lot of tables. And this leads to data inconsistency if I forget to update/export/import one or more tables. It is terrible.
@John
I will learn about it. Thanks a lot.
@All: If you have any suggestions, please let me know. Thanks for your help.