Want to run PostgreSQL on new associative memory technology

From
"Melchior, Tim"
Date:
My name is Tim Melchior.  I am an engineer at UTMC Microelectronic
Systems.  I am considering modifying PostgreSQL to exploit a new hardware
technology that we have developed.  I would appreciate any information or
suggestions anyone could give me.  I am particularly interested in
contract/consulting resources that anyone might be able to suggest.

My objective is to bring to market a new paradigm-breaking associative
memory technology which UTMC Microelectronic Systems has developed.  With
simple cursor operations, we have benchmarked our hardware at 6000 times
faster than Microsoft SQL Server.  What we lack is the ODBC and SQL
interfaces to this hardware.

Our e.Card Distributed Query Processor is a PCI card which employs two of
our content addressable memory engine ICs (originally developed for
telecom routers) and a fully programmable 64-bit MIPS processor.  The
resulting product is essentially a database kernel that performs at router
speed.  I believe that this product has enormous potential for
high-performance database applications.  Each card can store up to 2
gigabytes of memory.  (See http://www.utmc.com/ecard/ for more
information.)

Any suggestions that anyone can give me would be greatly appreciated.

Thank you,

Tim Melchior
Sr. Principal Engineer
UTMC Microelectronic Systems
Colorado Springs, CO
melchior@utmc.aeroflex.com
719-594-8162




"Tuple is too big"

From
"Steve Wolfe"
Date:
   After moving a database to a new machine, I tried a vacuum analyze, and
got "ERROR:  Tuple is too big: size 8180, max size 8140".

  I know that there's a limit on the tuple size and all of that - my
question is: how do I fix it?  Vacuum analyze barfs, as does trying "\d
table name".  I tried it on both machines, with the same result.

  I suppose that I could write a parser to go through the pg_dump and find
the offending fields, but I'm hoping that there's a way for PostgreSQL to
fix it.  If there isn't a way for it to fix data that it's created, that's
scary. : )  This is PostgreSQL 6.5.3.
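
  For what it's worth, here is a rough, untested sketch of the kind of
parser I have in mind.  It assumes a plain-text pg_dump where each table's
data is framed by "COPY ... FROM stdin;" and "\.", and the 8000-byte
threshold is only a guess that leaves headroom below the 8140 maximum:

#!/usr/bin/env python
# Sketch: flag pg_dump COPY rows whose raw line length approaches the
# 8K tuple limit.  Per-tuple header overhead isn't counted, so treat
# any hit as a candidate to inspect, not a definite offender.
import sys

LIMIT = 8000                              # headroom below the 8140 max

table = None
with open(sys.argv[1], "rb") as dump:
    for lineno, raw in enumerate(dump, 1):
        line = raw.rstrip(b"\n")
        if line.startswith(b"COPY "):     # start of a table's data
            table = line.split()[1].decode("ascii", "replace")
        elif line == b"\\.":              # end-of-data marker
            table = None
        elif table is not None and len(line) > LIMIT:
            print(f"line {lineno}: {len(line)} bytes in table {table}")

  Running that over the dump should at least narrow things down to
specific rows.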

steve



Re: "Tuple is too big"

From
Paulo Jan
Date:
Steve Wolfe wrote:
>
>    After moving a database to a new machine, I tried a vacuum analyze, and
> got "ERROR:  Tuple is too big: size 8180, max size 8140".
>
>   I know that there's a limit on the tuple size and all of that - my
> question is: how do I fix it?  Vacuum analyze barfs, as does trying "\d
> table name".  I tried it on both machines, with the same result.
>
>   I suppose that I could write a parser to go through the pg_dump and
> find the offending fields, but I'm hoping that there's a way for
> PostgreSQL to fix it.  If there isn't a way for it to fix data that it's
> created, that's scary. : )  This is PostgreSQL 6.5.3.
>


    Me too.
    It's been happening for the last few weeks with a database that didn't
have any problems before.  By experimenting, I've observed that the
behaviour disappeared when I removed a certain table that a co-worker
created; the problem is that that table doesn't have any tuple bigger than
the maximum supported size.  Looking at the data stored in it, I don't see
anything bigger than 8000 bytes (more or less) either.
    We are using that table (and others) to store texts, in a field
defined as varchar(8000).  I suppose that if somebody had tried to insert
a text bigger than that, the database would have refused it with an
error...  Just in case: is there any character that, when inserted, makes
the tuple grow beyond the maximum size while still technically taking just
one byte?  (Some of the inserted texts were FrontPage-generated HTML, with
all kinds of tabs, carriage returns and such.)
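    To check that, I've been thinking of measuring the fields straight out
of the dump, with the COPY backslash escapes undone so that an embedded
tab or carriage return counts as the single byte it actually occupies.  A
rough, untested sketch, with the same framing assumptions as the scanner
earlier in the thread (\N nulls and octal escapes are not handled, and the
7500-byte threshold is arbitrary):

# Sketch: report the widest field per COPY row in a plain-text dump,
# decoding the common COPY escapes first so embedded tabs and carriage
# returns count as one byte each.
import sys

def unescape(field: bytes) -> bytes:
    """Decode \\t, \\n, \\r and \\\\ left to right; pass others through."""
    out = bytearray()
    i = 0
    while i < len(field):
        if field[i:i+1] == b"\\" and i + 1 < len(field):
            nxt = field[i+1:i+2]
            out += {b"t": b"\t", b"n": b"\n", b"r": b"\r"}.get(nxt, nxt)
            i += 2
        else:
            out += field[i:i+1]
            i += 1
    return bytes(out)

in_copy = False
with open(sys.argv[1], "rb") as dump:
    for lineno, raw in enumerate(dump, 1):
        line = raw.rstrip(b"\n")
        if line.startswith(b"COPY "):     # start of a table's data
            in_copy = True
        elif line == b"\\.":              # end-of-data marker
            in_copy = False
        elif in_copy and line:
            widest = max(len(unescape(f)) for f in line.split(b"\t"))
            if widest > 7500:             # "getting close" threshold
                print(f"line {lineno}: widest field is {widest} bytes")

    If nothing comes out of that, the extra bytes are presumably overhead
rather than the stored text itself.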



                        Paulo Jan.
                        DDnet.