Discussion: limiting query time and/or RAM


limiting query time and/or RAM

From:
Alan McKay
Date:
Is there any way to limit a query to a certain amount of RAM and / or
certain runtime?

i.e. automatically kill it if it exceeds either boundary?

We've finally narrowed down our system crashes and have a smoking gun,
but no way to fix it in the immediate term.  This sort of limit would
really help us.
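
For the runtime half of the question there is a built-in answer: statement_timeout
aborts any statement that runs longer than the configured time (there is no
comparable built-in cap on per-query RAM). A minimal sketch, with an arbitrary
60-second limit and a hypothetical role name:

    -- abort any statement in the current session after 60 seconds
    SET statement_timeout = '60s';

    -- or make the limit stick for every session of one role
    ALTER ROLE app_reader SET statement_timeout = '60s';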

--
“Don't eat anything you've ever seen advertised on TV”
         - Michael Pollan, author of "In Defense of Food"

Re: limiting query time and/or RAM

From:
Scott Marlowe
Date:
On Thu, Sep 17, 2009 at 12:56 PM, Alan McKay <alan.mckay@gmail.com> wrote:
> Is there any way to limit a query to a certain amount of RAM and / or
> certain runtime?
>
> i.e. automatically kill it if it exceeds either boundary?
>
> We've finally narrowed down our system crashes and have a smoking gun,
> but no way to fix it in the immediate term.  This sort of limit would
> really help us.

Generally speaking work_mem limits ram used.  What are your
non-default postgresql.conf settings?

Re: limiting query time and/or RAM

From:
Alan McKay
Date:
> Generally speaking work_mem limits ram used.  What are your
> non-default postgresql.conf settings?

This cannot be right, because we had queries taking 4G, and our
setting is:

work_mem = 2MB                          # min 64kB

I'll have to find a copy of the default file to figure out what my
non-defaults are
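
One way to avoid hunting for a pristine copy: the pg_settings view records
where each value came from, so the non-defaults can be listed directly. A
minimal sketch:

    -- list every setting not at its built-in default
    SELECT name, setting, source
    FROM pg_settings
    WHERE source <> 'default';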

--
“Don't eat anything you've ever seen advertised on TV”
         - Michael Pollan, author of "In Defense of Food"

Re: limiting query time and/or RAM

From:
Scott Marlowe
Date:
On Thu, Sep 17, 2009 at 1:19 PM, Alan McKay <alan.mckay@gmail.com> wrote:
>> Generally speaking work_mem limits ram used.  What are your
>> non-default postgresql.conf settings?
>
> This cannot be right, because we had queries taking 4G, and our
> setting is:

Are you sure they were using that much memory?  If you see this in TOP:

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 8528 postgres  16   0 8458m 359m 349m R 38.7  1.1   0:04.46 postgres:

You do know that the query here is using RES minus SHR, i.e. 359m -
349m = about 10 megs, right?  Not 8G.

> work_mem = 2MB                          # min 64kB
>
> I'll have to find a copy of the default file to figure out what my
> non-defaults are

Just look for things that have no # in front of them.
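
A shell one-liner does the same filtering; the path here is the stock Red Hat
layout and may differ:

    # print the uncommented, non-blank lines of postgresql.conf
    grep -Ev '^[[:space:]]*(#|$)' /var/lib/pgsql/data/postgresql.conf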

Re: limiting query time and/or RAM

From:
Bill Moran
Date:
In response to Scott Marlowe <scott.marlowe@gmail.com>:

> On Thu, Sep 17, 2009 at 12:56 PM, Alan McKay <alan.mckay@gmail.com> wrote:
> > Is there any way to limit a query to a certain amount of RAM and / or
> > certain runtime?
> >
> > i.e. automatically kill it if it exceeds either boundary?
> >
> > We've finally narrowed down our system crashes and have a smoking gun,
> > but no way to fix it in the immediate term.  This sort of limit would
> > really help us.
>
> Generally speaking work_mem limits ram used.  What are your
> non-default postgresql.conf settings?

work_mem limits memory usage _per_sort_.

A big query can easily have many sorts.  Each sort will be limited to
work_mem memory usage, but the total could be much higher.

The only way I can think of is to set a per-process limit in the OS and
let the OS kill a process when it gets out of hand.  Not ideal, though.
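
A minimal sketch of that OS-level approach, assuming the postmaster is started
from a shell script (the 4G cap and the data directory are placeholders). One
caveat: the limit is inherited by every backend, and a backend dying on
out-of-memory puts the whole cluster through crash recovery:

    # in the startup script, before launching PostgreSQL:
    ulimit -v 4194304                    # cap per-process virtual memory at 4G (kB units)
    pg_ctl start -D /var/lib/pgsql/data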

--
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/

Re: limiting query time and/or RAM

From:
Scott Marlowe
Date:
On Thu, Sep 17, 2009 at 1:31 PM, Bill Moran <wmoran@potentialtech.com> wrote:
> In response to Scott Marlowe <scott.marlowe@gmail.com>:
>
>> On Thu, Sep 17, 2009 at 12:56 PM, Alan McKay <alan.mckay@gmail.com> wrote:
>> > Is there any way to limit a query to a certain amount of RAM and / or
>> > certain runtime?
>> >
>> > i.e. automatically kill it if it exceeds either boundary?
>> >
>> > We've finally narrowed down our system crashes and have a smoking gun,
>> > but no way to fix it in the immediate term.  This sort of limit would
>> > really help us.
>>
>> Generally speaking work_mem limits ram used.  What are your
>> non-default postgresql.conf settings?
>
> work_mem limits memory usage _per_sort_.
>
> A big query can easily have many sorts.  Each sort will be limited to
> work_mem memory usage, but the total could be much higher.
>
> The only way I can think of is to set a per-process limit in the OS and
> let the OS kill a process when it gets out of hand.  Not ideal, though.

True, but with a work_mem of 2M, I can't imagine having enough sorting
going on to need 4G of ram.  (2000 sorts? That's a lot)  I'm betting
the OP was looking at top and misunderstanding what the numbers mean,
which is pretty common really.

Re: limiting query time and/or RAM

From:
Scott Marlowe
Date:
On Thu, Sep 17, 2009 at 1:35 PM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
> On Thu, Sep 17, 2009 at 1:31 PM, Bill Moran <wmoran@potentialtech.com> wrote:
>> In response to Scott Marlowe <scott.marlowe@gmail.com>:
>>
>>> On Thu, Sep 17, 2009 at 12:56 PM, Alan McKay <alan.mckay@gmail.com> wrote:
>>> > Is there any way to limit a query to a certain amount of RAM and / or
>>> > certain runtime?
>>> >
>>> > i.e. automatically kill it if it exceeds either boundary?
>>> >
>>> > We've finally narrowed down our system crashes and have a smoking gun,
>>> > but no way to fix it in the immediate term.  This sort of limit would
>>> > really help us.
>>>
>>> Generally speaking work_mem limits ram used.  What are your
>>> non-default postgresql.conf settings?
>>
>> work_mem limits memory usage _per_sort_.
>>
>> A big query can easily have many sorts.  Each sort will be limited to
>> work_mem memory usage, but the total could be much higher.
>>
>> The only way I can think of is to set a per-process limit in the OS and
>> let the OS kill a process when it gets out of hand.  Not ideal, though.
>
> True, but with a work_mem of 2M, I can't imagine having enough sorting
> going on to need 4G of ram.  (2000 sorts? That's a lot)  I'm betting
> the OP was looking at top and misunderstanding what the numbers mean,
> which is pretty common really.

Note, as a followup: the danger is when pgsql looks at a hashagg
subplan, thinks "that'll fit in work_mem", and goes ahead, when in
reality the plan needs 1,000 times work_mem or more, and exhausts
memory.  But to believe that's what happened, I'd need to see whatever
convinced the OP it was happening.  It's not unheard of, but it's not
that common either.
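
If that is the failure mode, a plain EXPLAIN of the suspect query should show
a HashAggregate node with a row estimate far below reality, and hash
aggregation can be switched off per session as a blunt workaround. A sketch
with a placeholder query:

    -- look for a HashAggregate node with a suspiciously low row estimate
    EXPLAIN SELECT customer_id, count(*) FROM orders GROUP BY customer_id;

    -- blunt per-session workaround: force sort-based grouping instead
    SET enable_hashagg = off;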

Re: limiting query time and/or RAM

From:
Alan McKay
Date:
On Thu, Sep 17, 2009 at 3:35 PM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
> True, but with a work_mem of 2M, I can't imagine having enough sorting
> going on to need 4G of ram.  (2000 sorts? That's a lot)  I'm betting
> the OP was looking at top and misunderstanding what the numbers mean,
> which is pretty common really.

Our databases are pretty big, and our queries pretty complex.

Here is a snippet from last night's fun, leaving in a few "normal"
rows, and the 3 errant ones which were an order of magnitude bigger

The ps man page does not seem to say what the "DRS" field is.  One of
our DB guys read it as the memory in use.  He may well be misreading
it, but the fact is we had a few queries running that were an order of
magnitude bigger than the others, and once we isolated this this
morning we were able to reproduce the problem in our test environment
and hang it.  Just prior to this happening, Munin shows committed
memory spiking from about 1.5G to 18G, which equals RAM + swap.

ps -U postgres -v

  PID TTY      STAT   TIME  MAJFL   TRS   DRS   RSS %MEM COMMAND
 1064 ?        Ss     0:01      0  3562 636289 7232  0.0 postgres: foobar pgdb001 192.168.3.151(46867) idle

14235 ?        Ss    29:41      0  3562 6316881 4852556 29.5 postgres: foobar pgdb001 192.168.2.66(60421) SELECT
14491 ?        Ss     0:01      0  3562 636545 7284  0.0 postgres: foobar pgdb001 192.168.2.66(55705) SELECT

14889 ?        Rs    29:36     12  3562 6316937 4876228 29.6 postgres: foobar pgdb001 192.168.2.62(48275) SELECT

14940 ?        Ss     0:00      0  3562 636845 7912  0.0 postgres: foobar pgdb001 192.168.2.62(43561) SELECT
14959 ?        Rs    29:34     16  3562 6315141 4885224 29.7 postgres: foobar pgdb001 192.168.2.62(48314) SELECT
14985 ?        Ss     0:01      0  3562 636545 7288  0.0 postgres: foobar pgdb001 192.168.2.66(55946) SELECT
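
A variant of the same snapshot that sorts the outliers to the top (GNU ps;
RSS and VSZ are in kB):

    # per-backend memory snapshot, largest resident sets first
    ps -U postgres -o pid,rss,vsz,cmd --sort=-rss | head -15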


--
“Don't eat anything you've ever seen advertised on TV”
         - Michael Pollan, author of "In Defense of Food"

Re: limiting query time and/or RAM

From:
Scott Marlowe
Date:
On Thu, Sep 17, 2009 at 1:56 PM, Alan McKay <alan.mckay@gmail.com> wrote:
> On Thu, Sep 17, 2009 at 3:35 PM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
>> True, but with a work_mem of 2M, I can't imagine having enough sorting
>> going on to need 4G of ram.  (2000 sorts? That's a lot)  I'm betting
>> the OP was looking at top and misunderstanding what the numbers mean,
>> which is pretty common really.
>
> Our databases are pretty big, and our queries pretty complex.
>
> Here is a snippet from last night's fun, leaving in a few "normal"
> rows, and the 3 errant ones which were an order of magnitude bigger
>
> The ps man page does not seem to say what the "DRS" field is.  One of

It means Data Resident Size.  If it's like RES in top, it means the
total memory being accessed.  You'd need to subtract however much of
your shared_buffers it's touching to know how much it's really using.
Which is why top is so handy: it shows both RES and SHR next to each
other.

> our DB guys read it as the memory in use.  He may well be misreading
> it, but the fact is we had a few queries running that were an order of
> magnitude bigger than the others, and once we isolated this this
> morning we were able to reproduce the problem in our test environment
> and hang it.  Just prior to this happening, Munin shows committed
> memory spiking from about 1.5G to 18G, which equals RAM + swap.
>
> ps -U postgres -v
>
>  PID TTY      STAT   TIME  MAJFL   TRS   DRS   RSS %MEM COMMAND
>  1064 ?        Ss     0:01      0  3562 636289 7232  0.0 postgres: foobar pgdb001 192.168.3.151(46867) idle
>
> 14235 ?        Ss    29:41      0  3562 6316881 4852556 29.5 postgres: foobar pgdb001 192.168.2.66(60421) SELECT

I'm gonna make a SWAG that you've got 4 to 4.5G of shared buffers, and
if you subtract that from DRS you'll find it's using a few hundred to
several hundred megs.  Still a lot, but not in the 4G range you're
expecting.  What does top say about this?

Re: limiting query time and/or RAM

From:
Sam Mason
Date:
On Thu, Sep 17, 2009 at 03:56:09PM -0400, Alan McKay wrote:
> Our databases are pretty big, and our queries pretty complex.

How big is "big" and how complex is "complex"?

An EXPLAIN (EXPLAIN ANALYSE if it's not going to hurt things) of some of
your common queries would help a lot here.
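
For reference, a sketch of the invocation with a placeholder query; the part
worth comparing is each node's estimated rows= against its actual rows:

    -- runs the query for real and reports per-node times and row counts
    EXPLAIN ANALYZE
    SELECT client_id, sum(amount)
    FROM positions
    GROUP BY client_id;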

--
  Sam  http://samason.me.uk/

Re: limiting query time and/or RAM

From:
Alan McKay
Date:
> I'm gonna make a SWAG that you've got 4 to 4.5G of shared buffers, and
> if you subtract that from DRS you'll find it's using a few hundred to
> several hundred megs.  Still a lot, but not in the 4G range you're
> expecting.  What does top say about this?

I've just added this to my cronjob as "top -b -n 1 -u postgres".
Hopefully I won't have anything to report tomorrow :-)  My non-default
settings are below.
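
Spelled out as a crontab entry, such a snapshot might look like this (the
five-minute schedule and log path are placeholders):

    # crontab for the postgres user: memory snapshot every 5 minutes
    */5 * * * * top -b -n 1 -u postgres >> /var/lib/pgsql/top_snapshots.log 2>&1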

listen_addresses = '*'       # what IP address(es) to listen on;
port = 5432                     # (change requires restart)
max_connections = 350           # (change requires restart)
shared_buffers = 500MB          # min 128kB or max_connections*16kB
work_mem = 2MB                  # min 64kB
maintenance_work_mem = 128MB    # min 1MB
max_fsm_pages = 153600          # min max_fsm_relations*16, 6 bytes each
fsync = on                      # turns forced synchronization on or off
wal_sync_method = fsync      # the default is the first option
full_page_writes = on           # recover from partial page writes
wal_buffers = 1MB               # min 32kB
commit_delay = 0                # range 0-100000, in microseconds
checkpoint_segments = 16        # in logfile segments, min 1, 16MB each
archive_mode = on               # allows archiving to be done
archive_command = 'test ! -f /data/pgsql/backups/wal_arch/%f.gz && cp %p /var/lib/pgsql/backups/wal_arch/%f'   # command to use to archive a logfile segment
archive_timeout = 0          # force a logfile segment switch after this
effective_cache_size = 10GB
default_statistics_target = 100      # range 1-1000
logging_collector = on       # Enable capturing of stderr and csvlog
log_directory = 'pg_log'        # directory where log files are written,
log_connections = off
log_disconnections = off
log_hostname = off
log_line_prefix = '%t:%u@%r:%p'         # special values:
autovacuum = off                # Enable autovacuum subprocess?  'on'
search_path = '"$user",public,quant001,dbprc001,price001,instrument001,client001'
              # schema names
vacuum_freeze_min_age = 1000000000
datestyle = 'iso, mdy'
lc_messages = 'en_US.utf8'           # locale for system error message
lc_monetary = 'en_US.utf8'           # locale for monetary formatting
lc_numeric = 'en_US.utf8'            # locale for number formatting
lc_time = 'en_US.utf8'               # locale for time formatting
default_text_search_config = 'pg_catalog.english'
custom_variable_classes = 'olap'
olap.rownum_name = 'default'


--
“Don't eat anything you've ever seen advertised on TV”
         - Michael Pollan, author of "In Defense of Food"

Re: limiting query time and/or RAM

From:
Alan McKay
Date:
> An EXPLAIN (EXPLAIN ANALYSE if it's not going to hurt things) of some of
> your common queries would help a lot here.

Yes, we are just about to start getting into that sort of thing.

--
“Don't eat anything you've ever seen advertised on TV”
         - Michael Pollan, author of "In Defense of Food"