Table "counter" will have triggers that will create a sequence with a new row is inserted called counter_1_1_seq, counter_2_1_seq, counter_1_3_seq, etc that will be used to insert values in **another** table. It will be used for nothing else. When we insert millions of pairs group/element, the trigger in that table will generate millions of sequences.
My question is how PostgreSQL will behave. Can it deal with millions of sequences? What about system operations such as VACUUM, etc.?
First, it sounds like you care about there being no gaps in the records you end up saving. If that is the case then sequences will not work for you.
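The usual reason: nextval() is not rolled back with the transaction, so any aborted insert leaves a hole. A quick illustration:

```sql
CREATE SEQUENCE demo_seq;

BEGIN;
SELECT nextval('demo_seq');  -- returns 1
ROLLBACK;                    -- the insert is undone, but the sequence is not

SELECT nextval('demo_seq');  -- returns 2; the value 1 is gone for good
```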
.
.
.
If you are still reading because you can live with some gaps, then does having one sequence per pair really matter?
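For example, one alternative worth weighing (purely an illustration, since the question does not describe how the values are consumed) is a per-pair counter column bumped with UPDATE ... RETURNING, which avoids creating a catalog object per pair:

```sql
-- Illustrative alternative only: assumes a last_value column on "counter".
UPDATE counter
   SET last_value = last_value + 1
 WHERE group_id = $1
   AND element_id = $2
RETURNING last_value;
```

The trade-off is that the UPDATE takes a row lock, so concurrent inserts for the same pair serialize on that row, whereas nextval() does not block.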
Regardless, at this scale you probably should set up a performance test as part of your continuous integration/test infrastructure, and let it answer the "which performs better" question.
However, one other consideration with sequences: do you care that PostgreSQL will cache/pin (i.e., not release) every single sequence you touch for the lifetime of the session? (I do not think DISCARD helps here, but I'm just guessing.)
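For what it's worth, the command in question is DISCARD SEQUENCES; per the documentation it drops cached sequence-related state, which is not obviously the same thing as unpinning the sequences themselves:

```sql
-- Resets cached sequence-related state (currval()/lastval() information and
-- preallocated values from CACHE); whether it releases anything more than
-- that per touched sequence is the open question above.
DISCARD SEQUENCES;
```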
A big determinant would seem to be how your sessions would go about using the sequences. You've described the model requirement but haven't described the process by which the model will be used; without that information, useful comments on alternative implementations are difficult to formulate.