Thread: Problem with VACUUM after very large delete?


Problem with VACUUM after very large delete?

From:
Doug Fields
Date:
Hello all,

(7.2.1 on Debian Woody 3.0 with various 7.2.3 patches backported)

Earlier today I deleted about 31 million records from one of our tables with a very complex query which required a full table scan. This query took quite a few hours to run (in fact, overnight).

Later, I did a VACUUM and then an ANALYZE.

However, it does not appear that either:

1) The records were actually deleted or

2) The VACUUM is working properly

Why? The VACUUM doesn't seem to be vacuuming the two indexes on the table. I'm not sure what's going on, but if you could make any suggestions, I'd appreciate it.

Details are below.

Thanks,

Doug

pexicast_lg=# ANALYZE VERBOSE audit_log;
NOTICE:  Analyzing audit_log
ANALYZE
pexicast_lg=# VACUUM VERBOSE audit_log;
NOTICE:  --Relation audit_log--
NOTICE:  Pages 2083605: Changed 2961, Empty 0; Tup 105773248: Vac 0, Keep 31106432, UnUsed 2.
        Total CPU 89.49s/13.67u sec elapsed 1101.17 sec.
VACUUM
pexicast_lg=# \d audit_log
                  Table "audit_log"
  Column  |           Type           |   Modifiers
----------+--------------------------+---------------
 at       | timestamp with time zone | default now()
 type     | integer                  |
 severity | integer                  |
 who      | integer                  |
 target   | integer                  |
 var1     | integer                  |
 var2     | integer                  |
 message  | character varying(255)   |
 ip       | character(15)            |
 login    | integer                  |
Indexes: audit_type_target_idx,
         audit_type_var1_idx

Re: Problem with VACUUM after very large delete?

From:
Robert Treat
Date:
On Tue, 2002-12-17 at 17:47, Doug Fields wrote:
> Hello all,
>
> (7.2.1 on Debian Woody 3.0 with various 7.2.3 patches backported)
>
> Earlier today I deleted about 31 million records from one of our tables
> with a very complex query which required a full table scan. This query
> took quite a few hours to run (in fact, overnight).
>
> Later, I did a VACUUM and then an ANALYZE.
>
> However, it does not appear that either:
>
> 1) The records were actually deleted or
>
> 2) The VACUUM is working properly
>
> Why? The VACUUM doesn't seem to be vacuuming the two indexes on the
> table. I'm not sure what's going on, but if you could make any
> suggestions, I'd appreciate it.
>

While vacuum will delete dead index entries, it doesn't collapse out
empty pages in the indexes. This means indexes never get smaller and,
in cases where the key range of the index continually expands, causes
indexes to grow larger and larger. The generally accepted way to fix
this is reindexing. (Note this is not fixed in 7.3 either.)
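
(Editorial sketch of the reindexing fix Robert describes, using the index and table names from Doug's \d output. Note that REINDEX takes an exclusive lock on the table, so it should be run during a quiet period.)

```sql
-- Rebuild each index on audit_log to reclaim the empty index pages
-- that VACUUM cannot collapse in this release.
REINDEX INDEX audit_type_target_idx;
REINDEX INDEX audit_type_var1_idx;

-- Or rebuild every index on the table in one statement:
REINDEX TABLE audit_log;
```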

Robert Treat




Re: Problem with VACUUM after very large delete?

From:
Doug Fields
Date:
At 06:12 PM 12/17/2002, Robert Treat wrote:
> While vacuum will delete dead index entries, it doesn't collapse out
> empty pages in the indexes. This causes indexes to never get smaller
> and, in cases where the range of the index continually expands, causes
> indexes to grow larger and larger. The generally accepted way to fix
> this is via reindexing. (note this was not fixed in 7.3)

Thanks, Robert. However, if I can quote from my previous message:

pexicast_lg=# VACUUM VERBOSE audit_log;
NOTICE:  --Relation audit_log--
NOTICE:  Pages 2083605: Changed 2961, Empty 0; Tup 105773248: Vac 0, Keep 31106432, UnUsed 2.
        Total CPU 89.49s/13.67u sec elapsed 1101.17 sec.
VACUUM

You'll notice that no tuples are listed as vacuumed ("Vac 0"). However, there are "Keep" tuples listed, and that count is almost exactly the number of rows I attempted to delete earlier. What are these "Keep" tuples?

Also, whenever I vacuum my other tables, VACUUM does process the indexes, printing additional NOTICE lines for each one. In this case, it does not.

What's up?

Thanks,

Doug

Re: Problem with VACUUM after very large delete?

From:
Tom Lane
Date:
Doug Fields <dfields-pg-general@pexicom.com> writes:
> Earlier today I deleted about 31 million records from one of our tables
> with a very complex query which required a full table scan. This query took
> quite a few hours to run (in fact, overnight).

> Why? The VACUUM doesn't seem to be vacuuming the two indexes on the table.

Have you got any open transactions hanging about?  VACUUM probably
thinks it can't remove the tuples yet because they're still potentially
visible to somebody under MVCC rules.
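
(Editorial note: a minimal sketch of how one might look for such open transactions on a 7.2-era server. pg_stat_activity exists in 7.2, but current_query is only populated if stats_command_string is enabled, and there is no transaction-start column in this release; checking the process list for backends marked "idle in transaction" is the usual fallback.)

```sql
-- List connected backends; a long-lived or idle-in-transaction session
-- can force VACUUM to keep the deleted tuples (the "Keep" count).
SELECT procpid, usename, current_query
FROM pg_stat_activity;
```

At the shell, `ps auxww | grep postgres` will show the status string of each backend, including any stuck "idle in transaction".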

            regards, tom lane