Discussion: Out of memory error during pg_upgrade in big DB with large objects
Hi everybody,
I'm trying to upgrade a huge DB from postgres 10 to 14
This cluster is 70+ TB, with one database having more than 2 billion records in pg_largeobject
I'm trying pg_upgrade in hard link mode, but the database schema dump phase always fails with:
pg_dump: error: query failed: out of memory for query result
pg_dump: error: query was: SELECT l.oid, (SELECT rolname FROM pg_catalog.pg_roles WHERE oid = l.lomowner) AS rolname, (SELECT pg_catalog.array_agg(acl ORDER BY row_n) FROM (SELECT acl, row_n FROM pg_catalog.unnest(coalesce(l.lomacl,pg_catalog.acldefault('L',l.lomowner))) WITH ORDINALITY AS perm(acl,row_n) WHERE NOT EXISTS ( SELECT 1 FROM pg_catalog.unnest(coalesce(pip.initprivs,pg_catalog.acldefault('L',l.lomowner))) AS init(init_acl) WHERE acl = init_acl)) as foo) AS lomacl, (SELECT pg_catalog.array_agg(acl ORDER BY row_n) FROM (SELECT acl, row_n FROM pg_catalog.unnest(coalesce(pip.initprivs,pg_catalog.acldefault('L',l.lomowner))) WITH ORDINALITY AS initp(acl,row_n) WHERE NOT EXISTS ( SELECT 1 FROM pg_catalog.unnest(coalesce(l.lomacl,pg_catalog.acldefault('L',l.lomowner))) AS permp(orig_acl) WHERE acl = orig_acl)) as foo) AS rlomacl, CASE WHEN privtype = 'e' THEN (SELECT pg_catalog.array_agg(acl ORDER BY row_n) FROM (SELECT acl, row_n FROM pg_catalog.unnest(pip.initprivs) WITH ORDINALITY AS initp(acl,row_n) WHERE NOT EXISTS ( SELECT 1 FROM pg_catalog.unnest(pg_catalog.acldefault('L',l.lomowner)) AS privm(orig_acl) WHERE acl = orig_acl)) as foo) END AS initlomacl, CASE WHEN privtype = 'e' THEN (SELECT pg_catalog.array_agg(acl) FROM (SELECT acl, row_n FROM pg_catalog.unnest(pg_catalog.acldefault('L',l.lomowner)) WITH ORDINALITY AS privp(acl,row_n) WHERE NOT EXISTS ( SELECT 1 FROM pg_catalog.unnest(pip.initprivs) AS initp(init_acl) WHERE acl = init_acl)) as foo) END AS initrlomacl FROM pg_largeobject_metadata l LEFT JOIN pg_init_privs pip ON (l.oid = pip.objoid AND pip.classoid = 'pg_largeobject'::regclass AND pip.objsubid = 0)
I upgraded the server memory to 450 GB and it was not enough; now I'm trying with 680 GB of RAM, but I'm not confident it will work.
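A rough sanity check suggests why 450 GB was not enough: pg_dump runs that query with a single libpq call, which buffers one result row per large object entirely on the client. The per-row figure below is an assumption for illustration, not a measured value.

```python
# Back-of-envelope estimate of the client-side memory libpq needs to
# buffer the pg_dump large-object query result. The per-row cost is
# an ASSUMED figure, not a measurement of the real query.

rows = 2_000_000_000   # > 2 billion pg_largeobject_metadata rows

# Assumed average per-row cost: libpq keeps a per-field header plus the
# text value for each of the six selected columns, so even a "narrow"
# row can cost a few hundred bytes once headers and padding are counted.
bytes_per_row = 250

total_gib = rows * bytes_per_row / 2**30
print(f"~{total_gib:.0f} GiB")
```

Under that assumption the result set alone is on the order of 466 GiB, before any other memory the server and pg_dump need, which lines up with 450 GB being insufficient.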
From what I've found, this is a common and known problem with pg_dump when there are many large objects.
Could running pg_upgrade without the -k option make any difference?
Can anybody suggest a workaround?
Massimo Ortensi - Head of ICT
Via Cristoforo Colombo, 21 - 40131, Bologna
Tel. 051 4195069
Cell. 3351092560
mortensi@unimaticaspa.it
Massimo Ortensi <mortensi@unimaticaspa.it> writes:
> I'm trying to upgrade a huge DB from postgres 10 to 14
> This cluster is 70+ TB, with one database having more than 2 billion
> records in pg_largeobject
> I'm trying pg_upgrade in hard link mode, but the dump of database schema
> phase always fails with
> pg_dump: error: query failed: out of memory for query result
> pg_dump: error: query was: SELECT l.oid, (SELECT rolname FROM
> pg_catalog.pg_roles WHERE oid = l.lomowner) AS rolname, (SELECT
> pg_catalog.array_agg(acl ORDER BY row_n) FROM (SELECT acl, row_n FROM

FWIW, this query was rewritten pretty substantially in v15. It's still going to produce a row per large object, but it should be a lot narrower because most of the ACL-wrangling now happens somewhere else. I don't know if migrating to v15 instead of v14 is an option for you, and I can't promise that that'd be enough savings to fix it anyway. But it's something to think about.

			regards, tom lane
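Tom's point that the v15 query is "a lot narrower" can be made concrete with the same kind of arithmetic. Both row widths below are assumptions chosen only to illustrate the shape of the problem, not measurements of the actual v14/v15 queries.

```python
# Rough comparison of client-side buffering for a wide vs. a narrow
# per-large-object result row. Both widths are ASSUMED figures, not
# measurements of the real v14/v15 pg_dump queries.

rows = 2_000_000_000

wide_bytes = 250    # assumed: oid, owner name, four aggregated ACL columns
narrow_bytes = 100  # assumed: ACL wrangling moved elsewhere in v15

for label, width in (("v14-style", wide_bytes), ("v15-style", narrow_bytes)):
    print(f"{label}: ~{rows * width / 2**30:.0f} GiB")
```

Even with a much narrower row there is still one row per large object, so the total stays in the hundreds of GiB; a narrower query shrinks the problem but does not change its O(number-of-LOs) nature.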
I tried (even though v15 is not feasible at the moment, as we have only tested v14). It ended with the same out-of-memory failure, just more quickly (1 hour instead of 12 hours).

On 21/11/2022 18:30, Tom Lane wrote:
> Massimo Ortensi <mortensi@unimaticaspa.it> writes:
>> I'm trying to upgrade a huge DB from postgres 10 to 14
>> This cluster is 70+ TB, with one database having more than 2 billion
>> records in pg_largeobject
>> I'm trying pg_upgrade in hard link mode, but the dump of database schema
>> phase always fails with
>> pg_dump: error: query failed: out of memory for query result
>> pg_dump: error: query was: SELECT l.oid, (SELECT rolname FROM
>> pg_catalog.pg_roles WHERE oid = l.lomowner) AS rolname, (SELECT
>> pg_catalog.array_agg(acl ORDER BY row_n) FROM (SELECT acl, row_n FROM
> FWIW, this query was rewritten pretty substantially in v15.
> It's still going to produce a row per large object, but it
> should be a lot narrower because most of the ACL-wrangling
> now happens somewhere else. I don't know if migrating to
> v15 instead of v14 is an option for you, and I can't promise
> that that'd be enough savings to fix it anyway. But it's
> something to think about.
>
> 			regards, tom lane