Hi Tom,
Enabling zero_damaged_pages solved the problem. I am in the process
of dumping and restoring.
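
For the archives, the change was roughly the following (a sketch --
the data directory path is illustrative, and the setting gets turned
back off once recovery is done, since it silently discards the
contents of damaged pages):

    # in postgresql.conf (temporarily):
    #   zero_damaged_pages = on
    pg_ctl reload -D /usr/local/pgsql/data    # pick up the change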
Thanks for the help.
Gokul.
--- Tom Lane <tgl@sss.pgh.pa.us> wrote:
> gokulnathbabu manoharan <gokulnathbabu@yahoo.com> writes:
> > In my sample databases the relfilenode for pg_class was 1259. So I
> > checked the block number 190805 of the 1259 file. Since the block
> > size is 8K, 1259 was in two files, 1259 & 1259.1. The block number
> > 190805 falls in the second file, whose block number is 58733
> > ((190805 - (1G/8K)) = 58733).
>
> You've got a pg_class catalog exceeding a gigabyte?? Apparently
> you've been exceedingly lax about vacuuming. You need to do
> something about that, because it's surely hurting performance.
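
Point taken -- I'll be scheduling routine maintenance, something like
a nightly:

    # vacuum (and analyze) every database in the cluster
    vacuumdb --all --analyze
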
>
> You did the math wrong --- the damaged block would be 59733, not
> 58733, which is why pg_filedump isn't noticing anything wrong here.
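
Right -- with 8K blocks each 1G segment holds 131072 blocks, so:

    1 GB / 8 KB = 131072 blocks per segment file
    190805 - 131072 = 59733    # block within 1259.1
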
>
> It seems almost certain that there are only dead rows in the damaged
> block, so it'd be sufficient to zero out the block, either manually
> with dd or by turning on zero_damaged_pages.
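
For anyone reading this in the archives, the manual dd route would be
something like the following (a sketch only: it assumes the default
8K block size, a stopped server, and an illustrative file path --
double-check the segment and block number before writing anything):

    # zero out block 59733 of the second 1G segment of pg_class
    dd if=/dev/zero of=$PGDATA/base/<dboid>/1259.1 \
       bs=8192 seek=59733 count=1 conv=notrunc
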
> After that I'd recommend a dump, initdb, reload, since there may be
> other damage you don't know about.
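
That's the plan. Roughly (a sketch; the paths and the new data
directory are illustrative):

    pg_dumpall > /tmp/alldb.sql           # dump while the old cluster is up
    pg_ctl stop -D /usr/local/pgsql/data
    initdb -D /usr/local/pgsql/data.new   # fresh cluster
    pg_ctl start -D /usr/local/pgsql/data.new -l /tmp/pg.log
    psql -f /tmp/alldb.sql template1      # reload everything
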
>
> regards, tom lane
>