Discussion: Re: [ADMIN] v7.1b4 bad performance
"Schmidt, Peter" <peter.schmidt@prismedia.com> writes: > So, is it OK to use commit_delay=0? Certainly. In fact, I think that's about to become the default ;-) I have now experimented with several different platforms --- HPUX, FreeBSD, and two considerably different strains of Linux --- and I find that the minimum delay supported by select(2) is 10 or more milliseconds on all of them, as much as 20 msec on some popular platforms. Try it yourself (my test program is attached). Thus, our past arguments about whether a few microseconds of delay before commit are a good idea seem moot; we do not have any portable way of implementing that, and a ten millisecond delay for commit is clearly Not Good. regards, tom lane /* To use: gcc test.c, then time ./a.out N N=0 should return almost instantly, if your select(2) does not block as per spec. N=1 shows the minimum achievable delay, * 1000 --- for example, if time reports the elapsed time as 10 seconds, then select has rounded your 1-microsecond delay request up to 10 milliseconds. Some Unixen seem to throw in an extra ten millisec of delay just for good measure, eg, on FreeBSD 4.2 N=1 takes 20 sec, N=20000 takes 30. */ #include <stdio.h> #include <stdlib.h> #include <sys/stat.h> #include <sys/time.h> #include <sys/types.h> int main(int argc, char** argv) { struct timeval delay; int i, del; del = atoi(argv[1]); for (i = 0; i < 1000; i++) { delay.tv_sec = 0; delay.tv_usec = del; (void) select(0, NULL, NULL, NULL, &delay); } return 0; }
I wrote:
> Thus, our past arguments about whether a few microseconds of delay
> before commit are a good idea seem moot; we do not have any portable way
> of implementing that, and a ten millisecond delay for commit is clearly
> Not Good.

I've now finished running a spectrum of pgbench scenarios, and I find
no case in which commit_delay = 0 is worse than commit_delay > 0.
Now this is just one benchmark on just one platform, but it's pretty
damning...

Platform: HPUX 10.20 on HPPA C180, fast wide SCSI discs, 7200rpm (I think).
Minimum select(2) delay is 10 msec on this platform.

POSTMASTER OPTIONS: -i -B 1024 -N 100

$ PGOPTIONS='-c commit_delay=1' pgbench -c 1 -t 1000 bench
tps = 13.304624 (including connections establishing)
tps = 13.323967 (excluding connections establishing)

$ PGOPTIONS='-c commit_delay=0' pgbench -c 1 -t 1000 bench
tps = 16.614691 (including connections establishing)
tps = 16.645832 (excluding connections establishing)

$ PGOPTIONS='-c commit_delay=1' pgbench -c 10 -t 100 bench
tps = 13.612502 (including connections establishing)
tps = 13.712996 (excluding connections establishing)

$ PGOPTIONS='-c commit_delay=0' pgbench -c 10 -t 100 bench
tps = 14.674477 (including connections establishing)
tps = 14.787715 (excluding connections establishing)

$ PGOPTIONS='-c commit_delay=1' pgbench -c 30 -t 100 bench
tps = 10.875912 (including connections establishing)
tps = 10.932836 (excluding connections establishing)

$ PGOPTIONS='-c commit_delay=0' pgbench -c 30 -t 100 bench
tps = 12.853009 (including connections establishing)
tps = 12.934365 (excluding connections establishing)

$ PGOPTIONS='-c commit_delay=1' pgbench -c 50 -t 100 bench
tps = 9.476856 (including connections establishing)
tps = 9.520800 (excluding connections establishing)

$ PGOPTIONS='-c commit_delay=0' pgbench -c 50 -t 100 bench
tps = 9.807925 (including connections establishing)
tps = 9.854161 (excluding connections establishing)

With -F (no fsync), it's the same story:

POSTMASTER OPTIONS: -i -o -F -B 1024 -N 100

$ PGOPTIONS='-c commit_delay=1' pgbench -c 1 -t 1000 bench
tps = 40.584300 (including connections establishing)
tps = 40.708855 (excluding connections establishing)

$ PGOPTIONS='-c commit_delay=0' pgbench -c 1 -t 1000 bench
tps = 51.585629 (including connections establishing)
tps = 51.797280 (excluding connections establishing)

$ PGOPTIONS='-c commit_delay=1' pgbench -c 10 -t 100 bench
tps = 35.811729 (including connections establishing)
tps = 36.448439 (excluding connections establishing)

$ PGOPTIONS='-c commit_delay=0' pgbench -c 10 -t 100 bench
tps = 43.878827 (including connections establishing)
tps = 44.856029 (excluding connections establishing)

$ PGOPTIONS='-c commit_delay=1' pgbench -c 30 -t 100 bench
tps = 23.490464 (including connections establishing)
tps = 23.749558 (excluding connections establishing)

$ PGOPTIONS='-c commit_delay=0' pgbench -c 30 -t 100 bench
tps = 23.452935 (including connections establishing)
tps = 23.716181 (excluding connections establishing)

I vote for commit_delay = 0, unless someone can show cases where
positive delay is significantly better than zero delay.

			regards, tom lane
> "Schmidt, Peter" <peter.schmidt@prismedia.com> writes: > > So, is it OK to use commit_delay=0? > > Certainly. In fact, I think that's about to become the default ;-) I agree with Tom. I did some benchmarking tests using pgbench for a computer magazine in Japan. I got a almost equal or better result for 7.1 than 7.0.3 if commit_delay=0. See included png file. -- Tatsuo Ishii
Attachments
Tatsuo Ishii <t-ishii@sra.co.jp> writes:
> I agree with Tom.  I did some benchmarking tests using pgbench for a
> computer magazine in Japan.  I got an almost equal or better result for
> 7.1 than 7.0.3 if commit_delay=0.  See included png file.

Interesting curves.  One thing you might like to know is that while
poking around with a profiler this afternoon, I found that the vast
majority of the work done for this benchmark is in the uniqueness
checks driven by the unique indexes.  Declare those as plain
(non-unique) indexes and the TPS figures would probably go up
noticeably.

That doesn't make the test invalid, but it does suggest that pgbench is
emphasizing one aspect of system performance to the exclusion of
others ...

			regards, tom lane
> ... See included png file.

What kind of machine was this run on?

			- Thomas
lockhart> > ... See included png file.
lockhart>
lockhart> What kind of machine was this run on?
lockhart>
lockhart> - Thomas

Sorry, I forgot to mention that.

SONY VAIO Z505CR/K (notebook PC)
Pentium III 750MHz / 256MB memory / 20GB IDE HDD
Linux (kernel 2.2.17)
configure --enable-multibyte=EUC_JP

postgresql.conf:
	fsync = on
	max_connections = 128
	shared_buffers = 1024
	silent_mode = on
	commit_delay = 0

postmaster opts for 7.0.3: -B 1024 -N 128 -S

pgbench settings:
	scaling factor = 1
	data excludes connection establishing time
	the total number of transactions is always 640
	(see the included script I ran for the testing)

------------------------------------------------------
#! /bin/sh
pgbench -i test
for i in 1 2 4 8 16 32 64 128
do
	t=`expr 640 / $i`
	pgbench -t $t -c $i test
	echo "===== sync ======"
	sync; sync; sync; sleep 10
	echo "===== sync done ======"
done
------------------------------------------------------
--
Tatsuo Ishii
* Tom Lane <tgl@sss.pgh.pa.us> [010216 22:49]:
> "Schmidt, Peter" <peter.schmidt@prismedia.com> writes:
> > So, is it OK to use commit_delay=0?
>
> Certainly.  In fact, I think that's about to become the default ;-)
>
> I have now experimented with several different platforms --- HPUX,
> FreeBSD, and two considerably different strains of Linux --- and I find
> that the minimum delay supported by select(2) is 10 or more milliseconds
> on all of them, as much as 20 msec on some popular platforms.  Try it
> yourself (my test program is attached).
>
> Thus, our past arguments about whether a few microseconds of delay
> before commit are a good idea seem moot; we do not have any portable way
> of implementing that, and a ten millisecond delay for commit is clearly
> Not Good.
>
> 			regards, tom lane

Here is another one.  UnixWare 7.1.1 on a P-III 500, 256 MB RAM:

$ cc -o tgl.test -O tgl.test.c
$ time ./tgl.test 0

real    0m0.01s
user    0m0.01s
sys     0m0.00s
$ time ./tgl.test 1

real    0m10.01s
user    0m0.00s
sys     0m0.01s
$ time ./tgl.test 2

real    0m10.01s
user    0m0.00s
sys     0m0.00s
$ time ./tgl.test 3

real    0m10.11s
user    0m0.00s
sys     0m0.01s
$ uname -a
UnixWare lerami 5 7.1.1 i386 x86at SCO UNIX_SVR5
$
--
Larry Rosenman                     http://www.lerctr.org/~ler
Phone: +1 972-414-9812                E-Mail: ler@lerctr.org
US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749
Tom Lane wrote:
>
> I wrote:
> > Thus, our past arguments about whether a few microseconds of delay
> > before commit are a good idea seem moot; we do not have any portable way
> > of implementing that, and a ten millisecond delay for commit is clearly
> > Not Good.
>
> I've now finished running a spectrum of pgbench scenarios, and I find
> no case in which commit_delay = 0 is worse than commit_delay > 0.
> Now this is just one benchmark on just one platform, but it's pretty
> damning...

In your test cases I always see "where bid = 1" at "update branches", i.e.

	update branches set bbalance = bbalance + ... where bid = 1

ISTM there's no chance of grouping multiple COMMITs in your scenarios,
because of the lock conflicts on that row.

Regards,
Hiroshi Inoue
I did not realize how much WAL improved performance when using fsync.

> > "Schmidt, Peter" <peter.schmidt@prismedia.com> writes:
> > > So, is it OK to use commit_delay=0?
> >
> > Certainly.  In fact, I think that's about to become the default ;-)
>
> I agree with Tom.  I did some benchmarking tests using pgbench for a
> computer magazine in Japan.  I got an almost equal or better result for
> 7.1 than 7.0.3 if commit_delay=0.  See included png file.
> --
> Tatsuo Ishii

[ Attachment, skipping... ]

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
Hiroshi Inoue <Inoue@tpf.co.jp> writes:
> In your test cases I always see "where bid = 1" at "update branches", i.e.
> 	update branches set bbalance = bbalance + ... where bid = 1
> ISTM there's no chance of multiple COMMITs in your scenarios due to
> their lock conflicts.

Hmm.  It looks like using a 'scaling factor' larger than 1 is necessary
to spread out the updates of "branches".  AFAIR, the people who reported
runs with scaling factors > 1 got pretty much the same results though.

			regards, tom lane
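[For reference, such a run might look like the sketch below.  The -i and -s
flags of contrib/pgbench initialize the test tables and set the scaling
factor, i.e. the number of "branches" rows; treat the exact invocation as an
assumption about the pgbench of that era, not a transcript from the thread.]

```shell
# Hypothetical example: initialize with 10 branches instead of 1, so
# ten concurrent clients mostly update different "branches" rows
# rather than all queueing on bid = 1.
pgbench -i -s 10 bench
PGOPTIONS='-c commit_delay=0' pgbench -s 10 -c 10 -t 100 bench
PGOPTIONS='-c commit_delay=1' pgbench -s 10 -c 10 -t 100 bench
```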
Tom Lane wrote:
>
> Hiroshi Inoue <Inoue@tpf.co.jp> writes:
> > In your test cases I always see "where bid = 1" at "update branches", i.e.
> > 	update branches set bbalance = bbalance + ... where bid = 1
> > ISTM there's no chance of multiple COMMITs in your scenarios due to
> > their lock conflicts.
>
> Hmm.  It looks like using a 'scaling factor' larger than 1 is necessary
> to spread out the updates of "branches".  AFAIR, the people who reported
> runs with scaling factors > 1 got pretty much the same results though.

People seem to take your results as decisive and would cite them
whenever evidence is required.  All clients of pgbench execute the same
sequence of queries, so there could be various conflicts: ordinary
locks, buffer locks, IO spinlocks ...  I've been suspicious whether
pgbench is an appropriate (let alone the only appropriate) test case
for evaluating commit_delay.

Regards,
Hiroshi Inoue
Hiroshi Inoue <Inoue@tpf.co.jp> writes:
> I've been suspicious whether pgbench is an appropriate
> test case for evaluating commit_delay.

Of course it isn't.  Never trust only one benchmark.

I've asked the Great Bridge folks to run their TPC-C benchmark with both
zero and small nonzero commit_delay.  It will be a couple of days before
we have the results, however.  Can anyone else offer any comparisons
based on other multiuser benchmarks?

			regards, tom lane
Tom Lane wrote:
>
> Hiroshi Inoue <Inoue@tpf.co.jp> writes:
> > I've been suspicious whether pgbench is an appropriate
> > test case for evaluating commit_delay.
>
> Of course it isn't.  Never trust only one benchmark.
>
> I've asked the Great Bridge folks to run their TPC-C benchmark with both
> zero and small nonzero commit_delay.  It will be a couple of days before
> we have the results, however.  Can anyone else offer any comparisons
> based on other multiuser benchmarks?

I changed pgbench so that each connection connects to a different
database, and got the following results for pgbench -c 10 -t 100:

[CommitDelay=0]
1st) tps = 18.484611 (including connections establishing)
     tps = 19.827988 (excluding connections establishing)
2nd) tps = 18.754826 (including connections establishing)
     tps = 19.352268 (excluding connections establishing)
3rd) tps = 18.771225 (including connections establishing)
     tps = 19.261843 (excluding connections establishing)

[CommitDelay=1]
1st) tps = 20.317649 (including connections establishing)
     tps = 20.975151 (excluding connections establishing)
2nd) tps = 24.208025 (including connections establishing)
     tps = 24.663665 (excluding connections establishing)
3rd) tps = 25.821156 (including connections establishing)
     tps = 26.842741 (excluding connections establishing)

Regards,
Hiroshi Inoue
Hiroshi Inoue <Inoue@tpf.co.jp> writes:
> I changed pgbench so that each connection connects
> to a different database and got the following results.

Hmm, you mean you set up a separate test database for each pgbench
"client", but all under the same postmaster?

> The results of pgbench -c 10 -t 100:
>
> [CommitDelay=0]
> 1st) tps = 18.484611 (including connections establishing)
>      tps = 19.827988 (excluding connections establishing)
> 2nd) tps = 18.754826 (including connections establishing)
>      tps = 19.352268 (excluding connections establishing)
> 3rd) tps = 18.771225 (including connections establishing)
>      tps = 19.261843 (excluding connections establishing)
>
> [CommitDelay=1]
> 1st) tps = 20.317649 (including connections establishing)
>      tps = 20.975151 (excluding connections establishing)
> 2nd) tps = 24.208025 (including connections establishing)
>      tps = 24.663665 (excluding connections establishing)
> 3rd) tps = 25.821156 (including connections establishing)
>      tps = 26.842741 (excluding connections establishing)

What platform is this on --- in particular, how long a delay is
CommitDelay=1 in reality?  What -B did you use?

			regards, tom lane
> -----Original Message-----
> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]
>
> Hiroshi Inoue <Inoue@tpf.co.jp> writes:
> > I changed pgbench so that each connection connects
> > to a different database and got the following results.
>
> Hmm, you mean you set up a separate test database for each pgbench
> "client", but all under the same postmaster?

Yes.  Using different databases keeps the conflicts as low as possible.
Conflict among backends is the greatest enemy of CommitDelay.

> What platform is this on --- in particular, how long a delay
> is CommitDelay=1 in reality?  What -B did you use?

platform)  i686-pc-linux-gnu, compiled by GCC egcs-2.91.60 (turbolinux 4.2)
min delay) 10 msec according to your test program.
-B)        64 (all other settings are default)

Regards,
Hiroshi Inoue
"Hiroshi Inoue" <Inoue@tpf.co.jp> writes: >> Hmm, you mean you set up a separate test database for each pgbench >> "client", but all under the same postmaster? > Yes. Different database is to make the conflict as less as possible. > The conflict among backends is a greatest enemy of CommitDelay. Okay, so this errs in the opposite direction from the original form of the benchmark: there will be *no* cross-backend locking delays, except for accesses to the common WAL log. That's good as a comparison point, but we shouldn't trust it absolutely either. >> What platform is this on --- in particular, how long a delay >> is CommitDelay=1 in reality? What -B did you use? > platform) i686-pc-linux-gnu, compiled by GCC egcs-2.91.60(turbolinux 4.2) > min delay) 10msec according to your test program. > -B) 64 (all other settings are default) Thanks. Could I trouble you to run it again with a larger -B, say 1024 or 2048? What I've found is that at -B 64, the benchmark is so constrained by limited buffer space that it doesn't reflect performance at a more realistic production setting. regards, tom lane
Tom Lane wrote:
>
> Okay, so this errs in the opposite direction from the original form of
> the benchmark: there will be *no* cross-backend locking delays, except
> for accesses to the common WAL log.  That's good as a comparison point,
> but we shouldn't trust it absolutely either.

Of course it's only one of the test cases.  Because I've only ever seen
one-sided test cases, I had to provide this one, however unwillingly.
There are some obvious cases where CommitDelay is harmful, and I've seen
no test cases other than such cases, i.e.

  1) There's only one session.
  2) The backends always conflict (e.g. pgbench with scaling factor 1).

> Thanks.  Could I trouble you to run it again with a larger -B, say
> 1024 or 2048?  What I've found is that at -B 64, the benchmark is
> so constrained by limited buffer space that it doesn't reflect
> performance at a more realistic production setting.

OK, I'll try it later, though I'm not sure I can increase -B that
large in my current environment.

Regards,
Hiroshi Inoue
Tom Lane wrote:
>
> Thanks.  Could I trouble you to run it again with a larger -B, say
> 1024 or 2048?  What I've found is that at -B 64, the benchmark is
> so constrained by limited buffer space that it doesn't reflect
> performance at a more realistic production setting.

Hmm, the result doesn't seem that obvious.

First I got the following result.

[CommitDelay=0]
1) tps = 23.024648 (including connections establishing)
   tps = 23.856420 (excluding connections establishing)
2) tps = 30.276270 (including connections establishing)
   tps = 30.996459 (excluding connections establishing)

[CommitDelay=1]
1) tps = 23.065921 (including connections establishing)
   tps = 23.866029 (excluding connections establishing)
2) tps = 34.024632 (including connections establishing)
   tps = 35.671566 (excluding connections establishing)

The results seemed inconstant, and after disabling the checkpoint
process I got the following.

[CommitDelay=0]
1) tps = 24.060970 (including connections establishing)
   tps = 24.416851 (excluding connections establishing)
2) tps = 21.361134 (including connections establishing)
   tps = 21.605583 (excluding connections establishing)
3) tps = 20.377635 (including connections establishing)
   tps = 20.646523 (excluding connections establishing)

[CommitDelay=1]
1) tps = 22.164379 (including connections establishing)
   tps = 22.790772 (excluding connections establishing)
2) tps = 22.719068 (including connections establishing)
   tps = 23.040485 (excluding connections establishing)
3) tps = 24.341675 (including connections establishing)
   tps = 25.869479 (excluding connections establishing)

Unfortunately I have no more time to check today.  Please check the
similar test case.

[My test case]
I created and initialized 10 databases as follows.

1) create the databases:
	createdb inoue1
	createdb inoue2
	.
	createdb inoue10

2) initialize them:
	pgbench -i inoue1
	pgbench -i inoue2
	.
	pgbench -i inoue10

3) invoke a modified pgbench:
	pgbench -c 10 -t 100 inoue

I've attached a patch to change pgbench so that each connection connects
to a different database whose name is 'xxxx%d' (xxxx is the specified
database name).

Regards,
Hiroshi Inoue

Index: pgbench.c
===================================================================
RCS file: /home/cvs/pgcurrent/contrib/pgbench/pgbench.c,v
retrieving revision 1.1
diff -c -r1.1 pgbench.c
*** pgbench.c	2001/02/20 07:55:21	1.1
--- pgbench.c	2001/02/20 09:31:13
***************
*** 540,545 ****
--- 540,546 ----
  	PGconn	   *con;
  	PGresult   *res;
+ 	char		dbn[48];
  
  	while ((c = getopt(argc, argv, "ih:nvp:dc:t:s:S")) != EOF)
  	{
***************
*** 639,645 ****
  	}
  
  	/* opening connection... */
! 	con = PQsetdb(pghost, pgport, NULL, NULL, dbName);
  	if (PQstatus(con) == CONNECTION_BAD)
  	{
  		fprintf(stderr, "Connection to database '%s' failed.\n", dbName);
--- 640,648 ----
  	}
  
  	/* opening connection... */
! 	/* con = PQsetdb(pghost, pgport, NULL, NULL, dbName); */
! 	sprintf(dbn, "%s1", dbName);
! 	con = PQsetdb(pghost, pgport, NULL, NULL, dbn);
  	if (PQstatus(con) == CONNECTION_BAD)
  	{
  		fprintf(stderr, "Connection to database '%s' failed.\n", dbName);
***************
*** 726,732 ****
  	/* make connections to the database */
  	for (i = 0; i < nclients; i++)
  	{
! 		state[i].con = PQsetdb(pghost, pgport, NULL, NULL, dbName);
  		if (PQstatus(state[i].con) == CONNECTION_BAD)
  		{
  			fprintf(stderr, "Connection to database '%s' failed.\n", dbName);
--- 729,737 ----
  	/* make connections to the database */
  	for (i = 0; i < nclients; i++)
  	{
! 		/* state[i].con = PQsetdb(pghost, pgport, NULL, NULL, dbName); */
! 		sprintf(dbn, "%s%d", dbName, i + 1);
! 		state[i].con = PQsetdb(pghost, pgport, NULL, NULL, dbn);
  		if (PQstatus(state[i].con) == CONNECTION_BAD)
  		{
  			fprintf(stderr, "Connection to database '%s' failed.\n", dbName);
I wrote:
>
> Tom Lane wrote:
> >
> > Thanks.  Could I trouble you to run it again with a larger -B, say
> > 1024 or 2048?  What I've found is that at -B 64, the benchmark is
> > so constrained by limited buffer space that it doesn't reflect
> > performance at a more realistic production setting.
>
> Hmm, the result doesn't seem that obvious.
>
> First I got the following result.

Sorry, I forgot to mention the -B setting in my previous posting.
All results are with -B 1024.

Regards,
Hiroshi Inoue
> Tom Lane wrote:
> >
> > Thanks.  Could I trouble you to run it again with a larger -B, say
> > 1024 or 2048?  What I've found is that at -B 64, the benchmark is
> > so constrained by limited buffer space that it doesn't reflect
> > performance at a more realistic production setting.
>
> Hmm, the result doesn't seem that obvious.

I tried with -B 1024 ten times each for commit_delay=0 and 1.
The average result of 'pgbench -c 10 -t 100' is as follows.

[commit_delay=0]
26.462817 (including connections establishing)
26.788047 (excluding connections establishing)

[commit_delay=1]
27.630405 (including connections establishing)
28.042666 (excluding connections establishing)

Hiroshi Inoue
Just another data point.

I downloaded a snapshot yesterday --- Changelogs dated Feb 20 17:02.
It's significantly slower than "7.0.3 with fsync off" for one of my
webapps.

7.0.3 with fsync off gets me about 55 hits per sec max (however, it's
interesting that the speed keeps dropping with continued tests).
(PostgreSQL 7.0.3 on i686-pc-linux-gnu, compiled by gcc egcs-2.91.66)

For the 7.1b4 snapshot I get about 23 hits per second (drops gradually
too).  I'm using DBD::Pg compiled against the 7.1 libraries for both
tests.
(PostgreSQL 7.1beta4 on i686-pc-linux-gnu, compiled by GCC egcs-2.91.66)

For a simple "select only" webapp I'm getting 112 hits per sec for
7.0.3 and 109 hits a sec for the 7.1 beta4 snapshot.  These results
remain quite stable over many repeated tests.

The first webapp does a rollback, begin, select, update, commit, begin,
a bunch of selects in sequence, and rollback.

So my guess is that the 7.1 updates (with default fsync) are
significantly slower than 7.0.3 fsync=off now.  But it's interesting
that the updates slow things down significantly: going from 50 to 30
hits per second after a few thousand hits for 7.0.3, and 23 to 17 after
about a thousand hits for 7.1beta4.

For postgresql 7.0.3, to speed things back up from 30 to 60 hits per
sec I had to do:

lylyeoh=# delete from session;
DELETE 1
lylyeoh=# vacuum; vacuum analyze;
VACUUM
NOTICE:  RegisterSharedInvalid: SI buffer overflow
NOTICE:  InvalidateSharedInvalid: cache state reset
VACUUM

(Not sure why the above happened, but I repeated the vacuum again for
good measure.)

lylyeoh=# vacuum; vacuum analyze;
VACUUM
VACUUM

Then I ran apachebench again (after visiting the webpage once to create
the session).

Note that even with only one row in the session table, it kept getting
slower and slower as it kept getting updated, even when I kept trying
to vacuum and vacuum analyze it.  Only after I deleted the row and
vacuumed was there a difference.  I didn't try this on 7.1beta4.

Cheerio,
Link.
I wrote:
>
> I tried with -B 1024 10 times for commit_delay=0 and 1 respectively.
> The average result of 'pgbench -c 10 -t 100' is as follows.
>
> [commit_delay=0]
> 26.462817 (including connections establishing)
> 26.788047 (excluding connections establishing)
> [commit_delay=1]
> 27.630405 (including connections establishing)
> 28.042666 (excluding connections establishing)

I got another clear result by simplifying pgbench.

[commit_delay = 0]
 1) tps = 52.682295 (including connections establishing)
    tps = 53.574140 (excluding connections establishing)
 2) tps = 54.580892 (including connections establishing)
    tps = 55.672988 (excluding connections establishing)
 3) tps = 60.409452 (including connections establishing)
    tps = 61.740995 (excluding connections establishing)
 4) tps = 60.787502 (including connections establishing)
    tps = 62.131317 (excluding connections establishing)
 5) tps = 60.968409 (including connections establishing)
    tps = 62.328142 (excluding connections establishing)
 6) tps = 62.396566 (including connections establishing)
    tps = 63.614357 (excluding connections establishing)
 7) tps = 52.720152 (including connections establishing)
    tps = 54.811739 (excluding connections establishing)
 8) tps = 53.417274 (including connections establishing)
    tps = 54.454355 (excluding connections establishing)
 9) tps = 54.862412 (including connections establishing)
    tps = 55.953512 (excluding connections establishing)
10) tps = 60.616255 (including connections establishing)
    tps = 63.423590 (excluding connections establishing)

[commit_delay = 1]
 1) tps = 68.458715 (including connections establishing)
    tps = 71.147012 (excluding connections establishing)
 2) tps = 71.059064 (including connections establishing)
    tps = 72.685829 (excluding connections establishing)
 3) tps = 67.625556 (including connections establishing)
    tps = 69.288699 (excluding connections establishing)
 4) tps = 84.749505 (including connections establishing)
    tps = 87.430563 (excluding connections establishing)
 5) tps = 83.001418 (including connections establishing)
    tps = 85.525377 (excluding connections establishing)
 6) tps = 66.235768 (including connections establishing)
    tps = 67.830999 (excluding connections establishing)
 7) tps = 80.993308 (including connections establishing)
    tps = 87.333491 (excluding connections establishing)
 8) tps = 69.844893 (including connections establishing)
    tps = 71.640972 (excluding connections establishing)
 9) tps = 71.135311 (including connections establishing)
    tps = 72.979021 (excluding connections establishing)
10) tps = 68.091439 (including connections establishing)
    tps = 69.539728 (excluding connections establishing)

The patch to let pgbench execute 1 query/transaction is the following.

Index: pgbench.c
===================================================================
RCS file: /home/cvs/pgcurrent/contrib/pgbench/pgbench.c,v
retrieving revision 1.1
diff -c -r1.1 pgbench.c
*** pgbench.c	2001/02/20 07:55:21	1.1
--- pgbench.c	2001/02/22 10:03:52
***************
*** 217,222 ****
--- 217,224 ----
  		st->state = 0;
  	}
  
+ 	if (st->state > 1)
+ 		st->state = 6;
+ 
  	switch (st->state)
  	{
  		case 0:			/* about to start */

Regards,
Hiroshi Inoue
Lincoln Yeoh wrote:
>
> Just another data point.
>
> I downloaded a snapshot yesterday - Changelogs dated Feb 20 17:02
>
> It's significantly slower than "7.0.3 with fsync off" for one of my
> webapps.
>
> The first webapp does a rollback, begin, select, update, commit, begin,
> a bunch of selects in sequence and rollback.

It may be that WAL has changed the rollback time-characteristics for
the worse compared to pre-WAL?  If that is the case, then routinely
rolling back transactions is no longer a good programming practice.
It may have used to be, as I think that before WAL both rollback and
commit had more or less the same cost.

> So my guess is that the 7.1 updates (with default fsync) are
> significantly slower than 7.0.3 fsync=off now.

The consensus seems to be that they are only "a little" slower.

> But it's interesting that the updates slow things down significantly.
> Going from 50 to 30 hits per second after a few thousand hits for
> 7.0.3, and 23 to 17 after about a thousand hits for 7.1beta4.

-------------
Hannu
On Sat, 17 Feb 2001, Tom Lane wrote:

[skip]
TL> Platform: HPUX 10.20 on HPPA C180, fast wide SCSI discs, 7200rpm (I think).
TL> Minimum select(2) delay is 10 msec on this platform.
[skip]
TL> I vote for commit_delay = 0, unless someone can show cases where
TL> positive delay is significantly better than zero delay.

BTW, for modern versions of FreeBSD kernels, there is the HZ kernel
option, which sets the timeslice granularity (actually, the HZ value is
the number of timeslice periods per second, with a default of 100 =
10 ms).  On modern CPUs HZ may be increased to at least 1000, and
sometimes even to 5000 (unfortunately, I don't have a test platform at
hand).

So maybe you can test select granularity at the ./configure phase and
then define the default commit_delay accordingly.

Your thoughts?

Sincerely,
D.Marck                                 [DM5020, DM268-RIPE, DM3-RIPN]
------------------------------------------------------------------------
*** Dmitry Morozovsky --- D.Marck --- Wild Woozle --- marck@rinet.ru ***
------------------------------------------------------------------------
On Sun, 18 Feb 2001, Dmitry Morozovsky wrote:

I have just done the experiment of increasing HZ to 1000 on my own
machine (PII 374).  Your test program reports 2 ms instead of 20.  The
other side of increasing HZ is surely more overhead in the scheduler.
Anyway, it's a bit of data to dig into, I suppose ;-)

Results for pgbench with 7.1b4 (BTW, the machine is FreeBSD 4-stable on
an IBM DTLA IDE disc in ATA66 mode with tag queueing and soft updates
turned on):

>> default delay (5 us)

number of clients: 1
number of transactions per client: 1000
number of transactions actually processed: 1000/1000
tps = 96.678008 (including connections establishing)
tps = 96.982619 (excluding connections establishing)

number of clients: 10
number of transactions per client: 100
number of transactions actually processed: 1000/1000
tps = 77.538398 (including connections establishing)
tps = 79.126914 (excluding connections establishing)

number of clients: 20
number of transactions per client: 50
number of transactions actually processed: 1000/1000
tps = 68.448429 (including connections establishing)
tps = 70.957500 (excluding connections establishing)

>> delay of 0

number of clients: 1
number of transactions per client: 1000
number of transactions actually processed: 1000/1000
tps = 111.939751 (including connections establishing)
tps = 112.335089 (excluding connections establishing)

number of clients: 10
number of transactions per client: 100
number of transactions actually processed: 1000/1000
tps = 84.262936 (including connections establishing)
tps = 86.152702 (excluding connections establishing)

number of clients: 20
number of transactions per client: 50
number of transactions actually processed: 1000/1000
tps = 79.678831 (including connections establishing)
tps = 83.106418 (excluding connections establishing)

Results are very close...  Another thing to dig into.
BTW, postgres parameters were: -B 256 -F -i -S

DM> BTW, for modern versions of FreeBSD kernels, there is HZ kernel option
DM> which describes maximum timeslice granularity (actually, HZ value is
DM> number of timeslice periods per second, with default of 100 = 10 ms). On
DM> modern CPUs HZ may be increased to at least 1000, and sometimes even to
DM> 5000 (unfortunately, I don't have a test platform at hand).
DM>
DM> So, maybe you can test select granularity at ./configure phase and then
DM> define default commit_delay accordingly.
DM>
DM> Your thoughts?

Sincerely,
D.Marck                                 [DM5020, DM268-RIPE, DM3-RIPN]
------------------------------------------------------------------------
*** Dmitry Morozovsky --- D.Marck --- Wild Woozle --- marck@rinet.ru ***
------------------------------------------------------------------------