I am running a capacity test just for fun, and I notice that if I run
one update the system uses 100% of the CPU, but if I run 2 updates at
the same time the system is only about 3% loaded and the postgres
backends rotate between the states
select
RUN
semwait
...this is on FreeBSD.
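For what it's worth, this is how I watch the backend states from a
second terminal while the test runs (the exact ps flags are just what I
happen to use; adjust to taste):

```shell
# Watch the run state of the postgres backends while the test runs
# (pipe through "grep postgres" to cut the noise); on FreeBSD the
# STATE column is where RUN, semwait and select show up.
ps ax -o pid,state,command
```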
The test just stores 100,000 new rows in a single transaction...
the table definition looks like this:
--
-- Selected TOC Entries:
--
\connect - mvh
--
-- TOC Entry ID 2 (OID 21452)
--
-- Name: boogers Type: TABLE Owner: mvh
--
CREATE TABLE "boogers" (
        "num" integer,
        "name" character varying(20),
        "id" serial,
        "modtime" timestamp with time zone DEFAULT now()
);
--
-- Data for TOC Entry ID 5 (OID 21452)
--
-- Name: boogers Type: TABLE DATA Owner: mvh
--
COPY "boogers" FROM stdin;
\.
--
-- TOC Entry ID 3 (OID 21452)
--
-- Name: "boogers_name" Type: INDEX Owner: mvh
--
CREATE INDEX "boogers_name" on "boogers" using btree ( "name" "varchar_ops" );
--
-- TOC Entry ID 4 (OID 21452)
--
-- Name: "boogers_num" Type: INDEX Owner: mvh
--
CREATE INDEX "boogers_num" on "boogers" using btree ( "num" "int4_ops" );
...and the Perl code looks like this:
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# one big transaction: AutoCommit off, commit once at the end
my $dbh = DBI->connect("dbi:Pg:dbname=mvh", "", "",
                       {AutoCommit => 0, RaiseError => 1});

# id and modtime are filled in from their defaults
my $sth = $dbh->prepare("INSERT INTO boogers (num, name) VALUES (?,?)");

# insert a bunch of values
for my $i (0 .. 99_999) {
    $sth->execute($i, $i);
}
$dbh->commit;

my $rc = $dbh->disconnect;
print $rc, "\n";
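And the concurrent case is just two copies of the script started at
once; a minimal sketch, assuming the Perl above is saved as
insert_test.pl (my name for it, not anything official):

```shell
#!/bin/sh
# Start two copies of the insert script at the same time and wait for
# both; with one copy the CPU pegs, with two they crawl.
# "insert_test.pl" is just my name for the Perl script above.
for i in 1 2; do
    perl insert_test.pl &
done
wait
echo "both inserters finished"
```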