David Pratt <fairwinds@eastlink.ca> writes:
> It was suggested that I look at an array.
I think that was me. I tried not to say there's only one way to do it. Only
that I chose to go this way and I think it has worked a lot better for me.
Having the text right there in the column saves a *lot* of work dealing with
the tables. Especially since many tables would have multiple localized
strings.
> I think my table will be pretty simple;
> CREATE TABLE multi_language (
> id SERIAL,
> lang_code_and_text TEXT[][]
> );
>
> So records would look like:
>
> 1, {{'en','the brown cow'},{'fr','la vache brune'}}
> 2, {{'en','the blue turkey'},{'fr','le dindon bleu'}}
That's a lot more complicated than my model.
Postgres doesn't have any built-in functions for treating arrays like these as
associative arrays, which is what you'd really want here. And as you've
discovered, it's not so easy to ship the whole array off to your client, where
it might be easier to work with.
I just have things like (hypothetically):
CREATE TABLE states (
    abbrev text,
    state_name text[],
    state_capitol text[]
);
And then in my application's data layer I mark all the "internationalized
columns", and the object that builds the actual SELECT automatically appends
"[$lang_id]" to every column in that list.
The list of supported languages and the mapping of languages to array
positions are fixed. I can grow the list later, but I can't reorder it. This
is fine for me, since pretty much everything has exactly two languages.
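The SELECT-building trick can be sketched roughly like this (a hypothetical Python sketch, not my actual code; the mapping, column set, and function names are invented for illustration):

```python
# Fixed language -> array-index mapping. Postgres arrays are 1-based,
# and new languages get appended, never reordered.
LANG_INDEX = {'en': 1, 'fr': 2}

# Columns marked as "internationalized" in the data layer (hypothetical set).
I18N_COLUMNS = {'state_name', 'state_capitol'}

def build_select(table, columns, lang):
    """Append "[n]" to every internationalized column for the given language."""
    idx = LANG_INDEX[lang]
    parts = [f"{c}[{idx}]" if c in I18N_COLUMNS else c for c in columns]
    return f"SELECT {', '.join(parts)} FROM {table}"

print(build_select('states', ['abbrev', 'state_name', 'state_capitol'], 'fr'))
# SELECT abbrev, state_name[2], state_capitol[2] FROM states
```

The point is that the rest of the application never sees the arrays at all; each query comes back with plain text in the one language you asked for.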
--
greg