On Thu, Feb 25, 2010 at 6:04 PM, Tomeh, Husam <HTomeh@facorelogic.com> wrote:
> We have a huge table with hundred million records. We need to increase the
> size of an existing column with varchar type, and it has been running for
> more than 12 hours.
>
>
>
> We’re using: ALTER TABLE Property ALTER COLUMN "situs-number" TYPE
> varchar(30);
>
Wouldn't it be better to make the field type text and not worry about
the length, enforcing it with a CHECK constraint only if really necessary?
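As a sketch of that suggestion (column and table names are taken from the
original ALTER; the constraint name is made up), it could look like:

```sql
-- Convert the column to unconstrained text.
ALTER TABLE Property ALTER COLUMN "situs-number" TYPE text;

-- Only if a maximum length genuinely matters, enforce it with a
-- CHECK constraint; adding it scans the table once but does not
-- rewrite it, and the limit can later be changed cheaply by
-- dropping and re-adding the constraint.
ALTER TABLE Property
    ADD CONSTRAINT situs_number_len CHECK (length("situs-number") <= 30);
```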
>
> 1) Can you explain what’s happening internally that makes this a very
> long process? Does the table get re-created?
>
Yes, and all of its indexes are rebuilt, not just the one you dropped.
The FKs may also be rechecked (I don't think so, but I can't remember right now).
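One way to see the rewrite for yourself (a hedged sketch; lowercase
table name assumed): the table's on-disk file identifier in pg_class
changes when ALTER ... TYPE physically rewrites it.

```sql
-- Note the current file identifier before the ALTER.
SELECT relfilenode FROM pg_class WHERE relname = 'property';

-- Run the ALTER ... TYPE, then query again: a different relfilenode
-- indicates the table was rewritten into a new file on disk.
SELECT relfilenode FROM pg_class WHERE relname = 'property';
```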
>
>
> 2) Assuming the Alter statement finished successfully, And if I didn’t
> drop the index (on that column), do I have to rebuild the index? Does the
> index get invalidated for just alter the indexed column?
>
It gets rebuilt automatically.
>
>
> 3) Some folks referred to directly updating Postgres internal tables
> (pg_attribute) which takes seconds to make the column change happen. How
> safe is this, and would it potentially cause any corruption?
>
No, that's insane -- don't touch the system catalogs directly.
>
> 4) Is there a more practical and safe method to alter a huge table
> with reasonable amount of time?
>
>
Use text fields instead of varchar(n).
--
Sincerely,

Jaime Casanova
PostgreSQL support and training
Systems consulting and development
Guayaquil - Ecuador
Cell +59387171157