Thread: When should I worry?

When should I worry?

From: Tom Allison
Date:
I've started a database that's doing wonderfully and I'm watching the tables
grow at a steady clip.

Performance is great, indexes are nice, sql costs are low.  As far as I can
tell, I've done a respectable job of setting up the database, tables, sequence,
indexes...


But a little math tells me that I have one table that's particularly ugly.

This is for a total of 6 users.

If the user base gets to 100 or more, I'll be hitting a billion rows before too
long.  I add about 70,000 rows per user per day.  At 100 users this is 7 million
rows per day.  I'll hit a billion in 142 days, call it six months for simplicity.


The table itself is small (two columns: bigint, int) but I'm wondering when I'll
start to hit a knee in performance and how I can monitor that.  I know where I
work (day job) they have Oracle tables with a billion rows that just plain suck.
  I don't know if a billion is bad or if the DBA's were not given the right
opportunity to make their tables work.

But if they are any indication, I'll be feeling some hurt when I exceed a billion
rows.  Am I going to just fold up and die in six months?

I can't really expect anyone to have an answer regarding hardware, table size,
performance speeds ...  but is there some way I can either monitor for this or
estimate it before it happens?

Re: When should I worry?

From: "Alexander Staubo"
Date:
On 6/10/07, Tom Allison <tom@tacocat.net> wrote:
> The table itself is small (two columns: bigint, int) but I'm wondering when I'll
> start to hit a knee in performance and how I can monitor that.

You don't say anything about what the data is in the table or what
queries you run against it, so there's not much here to give advice
about.

For the monitoring, however, you can log your queries along with
timings and timestamps, and copy them into a tool like R to
statistically analyze your performance over time. You will be able to
predict the point at which your system will be too slow to use, if
indeed the performance degradation is exponential.

You can also periodically look at the pg_stat* tables to count the
number of index scans, table scans etc. on your table (see
http://www.postgresql.org/docs/8.2/interactive/monitoring-stats.html).
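
For example, something along these lines shows how often each table is being
hit by sequential vs. index scans (a rough sketch; it assumes the statistics
collector and row-level stats are turned on, i.e. stats_start_collector and
stats_row_level):

SELECT relname, seq_scan, seq_tup_read, idx_scan, idx_tup_fetch,
       n_tup_ins, n_tup_del
FROM pg_stat_user_tables
ORDER BY seq_scan DESC;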

Alexander.

Re: When should I worry?

From: Bill Moran
Date:
Tom Allison <tom@tacocat.net> wrote:
>
> I've started a database that's doing wonderfully and I'm watching the tables
> grow at a steady clip.
>
> Performance is great, indexes are nice, sql costs are low.  As far as I can
> tell, I've done a respectable job of setting up the database, tables, sequence,
> indexes...
>
>
> But a little math tells me that I have one table that's particularly ugly.
>
> This is for a total of 6 users.
>
> If the user base gets to 100 or more, I'll be hitting a billion rows before too
> long.  I add about 70,000 rows per user per day.  At 100 users this is 7 million
> rows per day.  I'll hit a billion in 142 days, call it six months for simplicity.
>
>
> The table itself is small (two columns: bigint, int) but I'm wondering when I'll
> start to hit a knee in performance and how I can monitor that.  I know where I
> work (day job) they have Oracle tables with a billion rows that just plain suck.
>   I don't know if a billion is bad or if the DBA's were not given the right
> opportunity to make their tables work.
>
> But if they are any indication, I'll be feeling some hurt when I exceed a billion
> rows.  Am I going to just fold up and die in six months?
>
> I can't really expect anyone to have an answer regarding hardware, table size,
> performance speeds ...  but is there some way I can either monitor for this or
> estimate it before it happens?

Why not just create a simulation of 100 users and run it as hard as you
can until it starts to degrade?  Then you'll have some real-world experience
to tell you how much you can handle.
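
Even something as simple as generate_series() will stuff a throwaway table with
roughly the volumes you're describing (a rough sketch only -- the table here is
made up, and a real test should exercise your actual schema and queries):

CREATE TABLE sim_rows (user_id integer NOT NULL, item_id bigint NOT NULL);

-- ~7 million rows: 100 simulated users x 70,000 rows per user
INSERT INTO sim_rows (user_id, item_id)
SELECT u, i
FROM generate_series(1, 100) AS u,
     generate_series(1, 70000) AS i;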

Since you don't describe anything about the schema or application, I can't
say for sure, but over the last six months, every time this has come
up, we've been able to fix the problem by reorganizing the data some
(i.e., materialized data, temp tables, etc.).

--
Bill Moran
http://www.potentialtech.com

Re: When should I worry?

From: Joe Conway
Date:
Bill Moran wrote:
> Tom Allison <tom@tacocat.net> wrote:
>>
>> If the user base gets to 100 or more, I'll be hitting a billion rows before too
>> long.  I add about 70,000 rows per user per day.  At 100 users this is 7 million
>> rows per day.  I'll hit a billion in 142 days, call it six months for simplicity.
>>
>> The table itself is small (two columns: bigint, int) but I'm wondering when I'll
>> start to hit a knee in performance and how I can monitor that.  I know where I
>> work (day job) they have Oracle tables with a billion rows that just plain suck.
>>   I don't know if a billion is bad or if the DBA's were not given the right
>> opportunity to make their tables work.
>>
>> But if they are any indication, I'll be feeling some hurt when I exceed a billion
>> rows.  Am I going to just fold up and die in six months?

A lot depends on your specific use case.

- Will you be just storing the data for archival purposes, or frequently
querying the data?

- If you need to run queries, are they well bounded to certain subsets
of the data (e.g. a particular range of time for a particular user) or
are they aggregates across the entire billion rows?

- Is the data temporal in nature, and if so do you need to purge it
after some period of time?

As an example, I have an application with temporal data, that needs
periodic purging, and is typically queried for small time ranges (tens
of minutes). We have set up partitioned tables (partitioned by date
range and data source -- akin to your users) using constraint exclusion
that contain 3 or 4 billion rows (total of all partitions), and we have
no problem at all with performance. But I suspect that if we needed to
do an aggregate across the entire thing it would not be particularly
fast ;-)
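
If it helps, the basic shape of that setup is roughly the following (a
simplified sketch with made-up names; a complete setup also needs a rule or
trigger to route inserts into the right child table):

CREATE TABLE events (
    source_id  integer     NOT NULL,
    logged_at  timestamptz NOT NULL,
    payload    integer
);

CREATE TABLE events_2007_06 (
    CHECK (logged_at >= '2007-06-01' AND logged_at < '2007-07-01')
) INHERITS (events);

CREATE INDEX events_2007_06_logged_at ON events_2007_06 (logged_at);

-- with constraint_exclusion = on, a query bounded on logged_at only
-- touches the matching partitions
SET constraint_exclusion = on;
SELECT count(*) FROM events
WHERE logged_at >= '2007-06-10' AND logged_at < '2007-06-11';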

> Why not just create a simulation of 100 users and run it as hard as you
> can until it starts to degrade?  Then you'll have some real-world experience
> to tell you how much you can handle.

This is good advice. Without much more detail, folks on the list won't
be able to help much, but with a simulation such as this you can answer
your own question...

Joe


Re: When should I worry?

From: Tom Allison
Date:
On Jun 10, 2007, at 2:14 PM, Joe Conway wrote:

>
> Bill Moran wrote:
>> Tom Allison <tom@tacocat.net> wrote:
>>>
>>> If the user base gets to 100 or more, I'll be hitting a billion
>>> rows before too long.  I add about 70,000 rows per user per day.
>>> At 100 users this is 7 million rows per day.  I'll hit a billion
>>> in 142 days, call it six months for simplicity.
>>>
>>> The table itself is small (two columns: bigint, int) but I'm
>>> wondering when I'll start to hit a knee in performance and how I
>>> can monitor that.  I know where I work (day job) they have Oracle
>>> tables with a billion rows that just plain suck.   I don't know
>>> if a billion is bad or if the DBA's were not given the right
>>> opportunity to make their tables work.
>>>
>>> But if they are any indication, I'll be feeling some hurt when I
>>> exceed a billion rows.  Am I going to just fold up and die in six
>>> months?
>
> A lot depends on your specific use case.
>
> - Will you be just storing the data for archival purposes, or
> frequently querying the data?
>
> - If you need to run queries, are they well bounded to certain
> subsets of the data (e.g. a particular range of time for a
> particular user) or are they aggregates across the entire billion
> rows?
>
> - Is the data temporal in nature, and if so do you need to purge it
> after some period of time?
>
> As an example, I have an application with temporal data, that needs
> periodic purging, and is typically queried for small time ranges
> (tens of minutes). We have set up partitioned tables (partitioned
> by date range and data source -- akin to your users) using
> constraint exclusion that contain 3 or 4 billion rows (total of all
> partitions), and we have no problem at all with performance. But I
> suspect that if we needed to do an aggregate across the entire
> thing it would not be particularly fast ;-)
>
>> Why not just create a simulation of 100 users and run it as hard
>> as you
>> can until it starts to degrade?  Then you'll have some real-world
>> experience
>> to tell you how much you can handle.
>
> This is good advice. Without much more detail, folks on the list
> won't be able to help much, but with a simulation such as this you
> can answer your own question...
>
> Joe
>


Good questions.  I guess there are two answers.  There are times when
I will want aggregate data and I'm not as concerned about the
execution time.
But there are other queries that are part of the application design.
These are always going to be of a type where I know a single specific
primary key value and I want to find all the rows that are related.


First table has a column:
idx serial primary key
Third table has a column:
idx bigserial primary key

and the second table (the billion row table) consists of two columns:
first_idx integer not null references first(idx) on delete cascade,
third_idx bigint not null references third(idx) on delete cascade,
constraint pkey_first_third primary key (first_idx, third_idx)

The common query will be:

select t.string
from first f, second s, third t
where f.idx = s.first_idx
and s.third_idx = t.idx
and f.idx = 4 (or whatever...).

So, I think the answer is that the data isn't going to be temporal or
otherwise segregated into subsets.
I'll assume this is a lead-in to partitioning?
The data will be queried very frequently.  Probably plan on a query
every 10 seconds and I don't know what idx ranges will be involved.
Would it be possible to partition this by the first_idx value?  An
improvement?

Re: When should I worry?

From: "Filip Rembiałkowski"
Date:
2007/6/10, Alexander Staubo <alex@purefiction.net>:
> On 6/10/07, Tom Allison <tom@tacocat.net> wrote:
> > The table itself is small (two columns: bigint, int) but I'm wondering when I'll
> > start to hit a knee in performance and how I can monitor that.
>
> You don't say anything about what the data is in the table or what
> queries you run against it, so there's not much here to give advice
> about.
>
> For the monitoring, however, you can log your queries along with
> timings and timestamps, and copy them into a tool like R to
> statistically analyze your performance over time. You will be able to
> predict the point at which your system will be too slow to use, if
> indeed the performance degradation is exponential.

Could you please share some details about this "tool like R"? Maybe
some links or usage examples?

TIA.



--
Filip Rembiałkowski

Re: When should I worry?

From: Steve Crawford
Date:
Filip Rembiałkowski wrote:

>> For the monitoring, however, you can log your queries along with
>> timings and timestamps, and copy them into a tool like R to
>> statistically analyze your performance over time. You will be able to
>> predict the point at which your system will be too slow to use, if
>> indeed the performance degradation is exponential.
>
> Could you please share some details about this "tool like R"? Maybe
> some links or usage examples?


Find R at http://www.r-project.org/. Or use any other analysis and/or
graphing tool of your choosing (gnumeric, OO-calc, gnuplot, roll-your-own).

Cheers,
Steve


Re: When should I worry?

From: Steve Crawford
Date:
Alexander Staubo wrote:
> ....
> For the monitoring, however, you can log your queries along with
> timings and timestamps, and copy them into a tool like R to
> statistically analyze your performance over time. You will be able to
> predict the point at which your system will be too slow to use, if
> indeed the performance degradation is exponential.
> ...


In my experience the more common situation is to "go off a cliff."
Everything hums along fine and the increases in table size and user base
have very little impact on your response times. Then suddenly you run
out of some resource (usually memory first). You hit swap, your
few-millisecond queries start taking seconds or minutes, the request
queue backs up, new connections are denied, and everything goes downhill fast.

I think that keeping an eye on system resource trends via sar or similar
is more likely to provide the desired warnings of "sudden dropoff ahead".

Cheers,
Steve

Re: When should I worry?

From: Greg Smith
Date:
On Mon, 11 Jun 2007, Steve Crawford wrote:

> In my experience the more common situation is to "go off a cliff."

Yeah, I think the idea that you'll notice performance degrading and be
able to extrapolate future trends using statistical techniques is a
bit...optimistic.

Anyway, back to the original question here.  If you're worried about
catching when performance starts becoming an issue, you need to do some
sort of logging of how long statements are taking to execute.  The main
choice is whether to log everything, at which point the logging and
sorting through all the data generated may become its own performance
concern, or whether to just log statements that take a long time and then
count how many of them show up.  Either way will give you some sort of
early warning once you get a baseline; it may take a bit of tweaking to
figure out where to draw the line for what constitutes a "long"
statement if you only want to see how many of those you get.

There are two tools you should look at initially to help process the
logging information you get back:  pgFouine and PQA.  Here are intros to
each that also mention how to configure the postgresql.conf file:

http://pgfouine.projects.postgresql.org/tutorial.html
http://www.databasejournal.com/features/postgresql/article.php/3323561
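
The postgresql.conf side of it is small; roughly the following (the 200ms
threshold is just an example -- tune it to whatever "long" turns out to mean
for you):

-- in postgresql.conf, then reload:
--   log_min_duration_statement = 200   # log statements slower than 200ms
--   log_line_prefix = '%t [%p] '       # timestamp/pid prefix for analysis
-- check the active value from psql:
SHOW log_min_duration_statement;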

As they're similar programs, which would work better for you is hard to
say; check out both and see which seems more practical or easier to get
running.  For example, if you only have one of PHP/Ruby installed, that
may make one tool or the other preferable.

If you can get yourself to the point where you can confidently say
something like "yesterday we had 346 statements that took more then 200ms
to execute, which is 25% above this month's average", you'll be in a
positition to catch performance issues before they completely blindside
you; makes you look good in meetings, too.

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD

Re: When should I worry?

From: Tom Allison
Date:
Greg Smith wrote:
>
> On Mon, 11 Jun 2007, Steve Crawford wrote:
>
>> In my experience the more common situation is to "go off a cliff."
>
> Yeah, I think the idea that you'll notice performance degrading and be
> able to extrapolate future trends using statistical techniques is a
> bit...optimistic.
>
> Anyway, back to the original question here.  If you're worried about
> catching when performance starts becoming an issue, you need to do some
> sort of logging of how long statements are taking to execute.  The main
> choice is whether to log everything, at which point the logging and
> sorting through all the data generated may become its own performance
> concern, or whether to just log statements that take a long time and
> then count how many of them show up.  Either way will give you some sort
> of early warning once you get a baseline; it may take a bit of tweaking
> to figure out where to draw the line for what constitutes a "long"
> statement if you only want to see how many of those you get.
>
> There are two tools you should look at initially to help process the
> logging information you get back:  pgFouine and PQA.  Here are intros to
> each that also mention how to configure the postgresql.conf file:
>
> http://pgfouine.projects.postgresql.org/tutorial.html
> http://www.databasejournal.com/features/postgresql/article.php/3323561
>
> As they're similar programs, which would work better for you is hard to
> say; check out both and see which seems more practical or easier to get
> running.  For example, if you only have one of PHP/Ruby installed, that
> may make one tool or the other preferable.
>
> If you can get yourself to the point where you can confidently say
> something like "yesterday we had 346 statements that took more then
> 200ms to execute, which is 25% above this month's average", you'll be in
> a positition to catch performance issues before they completely
> blindside you; makes you look good in meetings, too.
>

Starting to sound like a sane idea.
I've been running a test job for almost 24 hours and have accumulated only 8
million rows.  That's another 125 days to get to the big 'B'.  I think by then
I'll have blown a hard drive or worse.  I'm running this on some very old
hardware that I have available (more on this at the bottom).

However, at this point the machine is running all of the SQL at < 0.2 seconds
each, which I consider just fine for 7,599,519 rows.

Here are some specifics about the tables:
count() from headers: 890300
count() from tokens:  890000
count() from header_token: 7599519


CREATE TABLE header_token (
     header_idx integer NOT NULL,
     token_idx integer NOT NULL
);

CREATE TABLE headers (
     idx serial NOT NULL,
     hash character varying(64) NOT NULL
);

CREATE TABLE tokens (
     idx bigserial NOT NULL,
     hash character varying(64) NOT NULL
);

ALTER TABLE ONLY headers
     ADD CONSTRAINT headers_hash_key UNIQUE (hash);
ALTER TABLE ONLY headers
     ADD CONSTRAINT headers_pkey PRIMARY KEY (idx);
ALTER TABLE ONLY header_token
     ADD CONSTRAINT pkey_header_token PRIMARY KEY (header_idx, token_idx);
ALTER TABLE ONLY tokens
     ADD CONSTRAINT tokens_hash_key UNIQUE (hash);
ALTER TABLE ONLY tokens
     ADD CONSTRAINT tokens_pkey PRIMARY KEY (idx);
ALTER TABLE ONLY header_token
     ADD CONSTRAINT header_token_header_idx_fkey FOREIGN KEY (header_idx)
REFERENCES headers(idx) ON DELETE CASCADE;
ALTER TABLE ONLY header_token
     ADD CONSTRAINT header_token_token_idx_fkey FOREIGN KEY (token_idx)
REFERENCES tokens(idx) ON DELETE CASCADE;



The SQL statements I was timing were:
select t.hash, h.hash
from headers h, header_token ht, tokens t
where h.idx = ht.header_idx
and ht.token_idx = t.idx
and h.idx = ?


insert into header_token
select $header, idx from tokens where idx in (...)

The SELECT was <0.2 seconds.
The INSERT was easily <.7 most of the time -- it ranged because the number of
values in the idx IN (...) list varied from 200 to 700.  The minimum was <.2
and the maximum was >1.0 over a few minutes of observation.
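
To watch how that changes as the table grows, I can run EXPLAIN ANALYZE on the
same statement and see which plan nodes the time goes into (the 1234 is just a
sample header idx):

EXPLAIN ANALYZE
SELECT t.hash, h.hash
FROM headers h, header_token ht, tokens t
WHERE h.idx = ht.header_idx
  AND ht.token_idx = t.idx
  AND h.idx = 1234;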


All of this was run on a Pentium II 450 MHz with 412MB RAM and a software
linear 0 pair of UDMA 66 7200RPM 8MB cache drives (really old) on separate
IDE channels with the ReiserFS disk format.  The actual script was running on a
separate machine across a 100-base-T full duplex network through a firewall
machine between the two subnets.

I can't imagine how long it would take to run:
delete from tokens;
with the CASCADE option...
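
Presumably that would also want an index on token_idx by itself, since the
primary key leads with header_idx and can't be used to find the referencing
rows for each deleted token -- something like:

CREATE INDEX header_token_token_idx ON header_token (token_idx);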

Re: When should I worry?

От
Greg Smith
Дата:
On Mon, 11 Jun 2007, Tom Allison wrote:

> All of this was run on a Pentium II 450 MHz with 412MB RAM and a software
> linear 0 pair of UDMA 66 7200RPM 8MB cache drives (really old) on separate
> IDE channels with the ReiserFS disk format.

Sometimes it's not clear if someone can speed up what they're doing simply
by using more expensive hardware.  In your case, I think it's safe to say
you've got quite a bit of margin for improvement that way when you run
into a problem.

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD

Re: When should I worry?

From: Tom Allison
Date:
On Jun 12, 2007, at 12:00 AM, Greg Smith wrote:

>
> On Mon, 11 Jun 2007, Tom Allison wrote:
>
>> All of this was run on a Pentium II 450 MHz with 412MB RAM and a
>> software linear 0 pair of UDMA 66 7200RPM 8MB cache drives (really
>> old) on separate IDE channels with the ReiserFS disk format.
>
> Sometimes it's not clear if someone can speed up what they're doing
> simply by using more expensive hardware.  In your case, I think
> it's safe to say you've got quite a bit of margin for improvement
> that way when you run into a problem.

No doubt!  But I think it's worth noting how much performance I
*can* get out of such an old piece of hardware.

My other computer is a Cray.  No, that's the bumper sticker on my car.

My other computer is an Athlon 64 2.?GHz with a single disk but less
RAM.  It's a Xen virtual machine that I'm renting, so increasing the
power of the machine is actually very easy to do, but not yet required.

But I was impressed with how well it works on such an old machine.

Re: When should I worry?

From: Robert Treat
Date:
On Tuesday 12 June 2007 06:04, Tom Allison wrote:
> On Jun 12, 2007, at 12:00 AM, Greg Smith wrote:
> > On Mon, 11 Jun 2007, Tom Allison wrote:
> >> All of this was run on a Pentium II 450 MHz with 412MB RAM and a
> >> software linear 0 pair of UDMA 66 7200RPM 8MB cache drives (really
> >> old) on separate IDE channels with the ReiserFS disk format.
> >
> > Sometimes it's not clear if someone can speed up what they're doing
> > simply by using more expensive hardware.  In your case, I think
> > it's safe to say you've got quite a bit of margin for improvement
> > that way when you run into a problem.
>
> No doubt!  But I think it's worth noting how much performance I
> *can* get out of such an old piece of hardware.
>
> My other computer is a Cray.  No, that's the bumper sticker on my car.
>
> My other computer is an Athlon 64 2.?GHz with a single disk but less
> RAM.  It's a Xen virtual machine that I'm renting, so increasing the
> power of the machine is actually very easy to do, but not yet required.
>
> But I was impressed with how well it works on such an old machine.
>

When you're running these tests, make sure to look for where your bottlenecks
are.  Going from a Pentium II to an Athlon may sound great, but if your
bottlenecks are all I/O based, it won't give you nearly the jump you might be
expecting.

--
Robert Treat
Database Architect
http://www.omniti.com

Re: When should I worry?

From: Robert Treat
Date:
On Sunday 10 June 2007 18:25, Tom Allison wrote:
> On Jun 10, 2007, at 2:14 PM, Joe Conway wrote:
> > Bill Moran wrote:
> >> Tom Allison <tom@tacocat.net> wrote:
> >>> If the user base gets to 100 or more, I'll be hitting a billion
> >>> rows before too long.  I add about 70,000 rows per user per day.
> >>> At 100 users this is 7 million rows per day.  I'll hit a billion
> >>> in 142 days, call it six months for simplicity.
> >>>
> >>> The table itself is small (two columns: bigint, int) but I'm
> >>> wondering when I'll start to hit a knee in performance and how I
> >>> can monitor that.  I know where I work (day job) they have Oracle
> >>> tables with a billion rows that just plain suck.   I don't know
> >>> if a billion is bad or if the DBA's were not given the right
> >>> opportunity to make their tables work.
> >>>
> >>> But if they are any indication, I'll be feeling some hurt when I
> >>> exceed a billion rows.  Am I going to just fold up and die in six
> >>> months?
> >
> > A lot depends on your specific use case.
> >
> > - Will you be just storing the data for archival purposes, or
> > frequently querying the data?
> >
> > - If you need to run queries, are they well bounded to certain
> > subsets of the data (e.g. a particular range of time for a
> > particular user) or are they aggregates across the entire billion
> > rows?
> >
> > - Is the data temporal in nature, and if so do you need to purge it
> > after some period of time?
> >
> > As an example, I have an application with temporal data, that needs
> > periodic purging, and is typically queried for small time ranges
> > (tens of minutes). We have set up partitioned tables (partitioned
> > by date range and data source -- akin to your users) using
> > constraint exclusion that contain 3 or 4 billion rows (total of all
> > partitions), and we have no problem at all with performance. But I
> > suspect that if we needed to do an aggregate across the entire
> > thing it would not be particularly fast ;-)
> >
> >> Why not just create a simulation of 100 users and run it as hard
> >> as you
> >> can until it starts to degrade?  Then you'll have some real-world
> >> experience
> >> to tell you how much you can handle.
> >
> > This is good advice. Without much more detail, folks on the list
> > won't be able to help much, but with a simulation such as this you
> > can answer your own question...
> >
> > Joe
>
> Good questions.  I guess there are two answers.  There are times when
> I will want aggregate data and I'm not as concerned about the
> execution time.
> But there are other queries that are part of the application design.
> These are always going to be of a type where I know a single specific
> primary key value and I want to find all the rows that are related.
>
>
> First table has a column:
> idx serial primary key
> Third table has a column:
> idx bigserial primary key
>
> and the second table (the billion row table) consists of two columns:
> first_idx integer not null references first(idx) on delete cascade,
> third_idx bigint not null references third(idx) on delete cascade,
> constraint pkey_first_third primary key (first_idx, third_idx)
>
> The common query will be:
>
> select t.string
> from first f, second s, third t
> where f.idx = s.first_idx
> and s.third_idx = t.idx
> and f.idx = 4 (or whatever...).
>
> So, I think the answer is that the data isn't going to be temporal or
> otherwise segregated into subsets.
> I'll assume this is a lead-in to partitioning?
> The data will be queried very frequently.  Probably plan on a query
> every 10 seconds and I don't know what idx ranges will be involved.
> Would it be possible to partition this by the first_idx value?  An
> improvement?
>

It sounds like it should be possible to partition on the first_idx value
(either by range or maybe a modulo type operation).  It will certainly be an
improvement over a single billion row table (we have a couple and queries
across large portions of them are painful).
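
A range-based sketch of that, using the names from your earlier description
(the boundary values are made up, and inserts would need a rule or trigger to
land in the right child):

CREATE TABLE second_p0 (
    CHECK (first_idx >= 0 AND first_idx < 1000)
) INHERITS (second);

CREATE TABLE second_p1 (
    CHECK (first_idx >= 1000 AND first_idx < 2000)
) INHERITS (second);

-- with constraint_exclusion = on, a "where first_idx = 4" style query
-- only scans the child that can contain that value
SET constraint_exclusion = on;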

The other thing the test will tell you is how well your hardware will hold up.
Speccing proper hardware to handle these types of loads can be tricky, so
verifying that what you have will hold up is also of some value before you have
huge amounts of data you have to move around.

--
Robert Treat
Database Architect
http://www.omniti.com/