Discussion: Streaming LIBPQ? Sliding Window in LIBPQ?
Is there a way to break the PGresult array into chunks, like Oracle?
I mean, without changing the text of given queries,
can we somehow limit the memory consumption of a client that is using LIBPQ?
The API is ...
extern PGresult *PQexec(PGconn *conn, const char *query);
Is there any "Sliding Window" layer that breaks the PGresult into chunks?
Big queries take a lot more memory on the client side compared to other DB clients.
Thanks
Danny
Danny Abraham
BMC Software
CTM&D Business Unit
972-52-4286-513
danny_abraham@bmc.com
On Wed, Nov 28, 2007 at 09:11:07AM -0600, Abraham, Danny wrote:
> Is there a way to break the PGresult array into chunks, like Oracle?
>
> I mean, without changing the text of given queries, can we somehow limit
> the memory consumption of a client that is using LIBPQ?

Use non-blocking mode to get the data in chunks. You can't easily control
the number of rows you get each time, though...

Have a nice day,
--
Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/
> Those who make peaceful revolution impossible will make violent revolution inevitable.
>   -- John F Kennedy
"Martijn van Oosterhout" <kleptog@svana.org> writes: > On Wed, Nov 28, 2007 at 09:11:07AM -0600, Abraham, Danny wrote: >> Is there a way to break the PGresult array to chuncks Like Oracle? >> >> I mean, without changing the text of given queries, >> >> can we somehow limit the memory consumption of a client that is using >> LIBPQ? > > Use non-blocking mode to get the data in chunks. You can't easy control > the number of rows you get each time though... This doesn't really help. It's nonblocking but you still can't get libpq to actually give you a result data structure until the entire results have accumulated. The only interface like this libpq supports is to use cursors in your SQL and then FETCH n for each chunk. There's no libpq support for receiving results incrementally. If you're writing a driver implementing the protocol from scratch you could expose chunks of results to the application but there's no protocol-level support for it so you can't directly control the rate at which results arrive or the chunk size or anything like that. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL training!
On Wed, Nov 28, 2007 at 03:27:56PM +0000, Gregory Stark wrote:
>> Use non-blocking mode to get the data in chunks. You can't easily control
>> the number of rows you get each time, though...
>
> This doesn't really help. It's non-blocking, but you still can't get libpq
> to actually give you a result data structure until the entire result has
> accumulated.

It certainly used to work. You get a whole PGresult structure for each few
rows, usually, so it's not terribly efficient. I posted an example in Perl
a while back...

The trick was to set non-blocking mode and send an async query. Then
PQisBusy() would return false when any data had been received, not just
when all data had been received. At that point you could call PQgetResult
to get those rows. You would get a zero-length result when you reached the
end of data.

Admittedly, I haven't tested it on recent versions. The program I posted a
while back that tested if the locks blocked as documented drove two
connections simultaneously this way.

http://archives.postgresql.org/pgsql-hackers/2005-08/msg01073.php
http://archives.postgresql.org/pgsql-general/2006-07/msg00806.php

Have a nice day,
--
Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/
> Those who make peaceful revolution impossible will make violent revolution inevitable.
>   -- John F Kennedy
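[A hedged sketch of the trick described above: non-blocking mode plus an async query, polling PQisBusy() after each consume. As noted in the thread, whether this actually yields partial chunks depends on the libpq version; on versions where it doesn't, PQgetResult simply hands back the complete result. The connection string and query are placeholders.]

```c
/* Async, non-blocking result consumption.  Connection string and query
 * are placeholder assumptions for illustration. */
#include <stdio.h>
#include <sys/select.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");
    if (PQstatus(conn) != CONNECTION_OK)
        return 1;

    PQsetnonblocking(conn, 1);
    PQsendQuery(conn, "SELECT * FROM mytable");

    int done = 0;
    while (!done) {
        /* Wait for data on the socket, then let libpq read it. */
        int sock = PQsocket(conn);
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(sock, &rfds);
        select(sock + 1, &rfds, NULL, NULL, NULL);
        PQconsumeInput(conn);

        while (!PQisBusy(conn)) {
            PGresult *res = PQgetResult(conn);
            if (res == NULL) {     /* NULL marks the end of results */
                done = 1;
                break;
            }
            printf("got a result with %d rows\n", PQntuples(res));
            PQclear(res);
        }
    }
    PQfinish(conn);
    return 0;
}
```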
On Nov 28, 2007 10:51 AM, Martijn van Oosterhout <kleptog@svana.org> wrote:
> The trick was to set non-blocking mode and send an async query. Then
> PQisBusy() would return false when any data had been received, not just
> when all data had been received. At that point you could call
> PQgetResult to get those rows. You would get a zero-length result when
> you reached the end of data.
>
> Admittedly, I haven't tested it on recent versions. The program I posted
> a while back that tested if the locks blocked as documented drove two
> connections simultaneously this way.
>
> http://archives.postgresql.org/pgsql-hackers/2005-08/msg01073.php
> http://archives.postgresql.org/pgsql-general/2006-07/msg00806.php

AIUI, PQgetResult does not allow streaming of partial results. It does,
however, allow returning multiple results for multiple queries sent in a
batch... so a 'kinda sorta' streaming could be rigged in certain cases, if
the big query could be split into multiple queries and chained. Pulling
back and using a piece of a single result is not possible, and never has
been.

Also, there is always the cursor technique, which I only find appealing in
very special circumstances.

merlin
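[A sketch of the batching idea mentioned above: several statements in one PQsendQuery() call, with one complete PGresult returned per statement. The connection string and the WHERE-clause split are placeholder assumptions for illustration.]

```c
/* "Kinda sorta" streaming by splitting one big query into a batch.
 * Each statement's PGresult can be processed and freed before the
 * next arrives.  All names here are placeholders. */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");
    if (PQstatus(conn) != CONNECTION_OK)
        return 1;

    /* Two statements in one call; PQgetResult returns one result each. */
    PQsendQuery(conn,
                "SELECT * FROM mytable WHERE id < 1000;"
                "SELECT * FROM mytable WHERE id >= 1000");

    PGresult *res;
    while ((res = PQgetResult(conn)) != NULL) {
        printf("one batch: %d rows\n", PQntuples(res));
        PQclear(res);   /* free this chunk before fetching the next */
    }
    PQfinish(conn);
    return 0;
}
```

[The obvious caveat is that this only works when the query has a natural split key, and the batch is not a single snapshot unless wrapped in a transaction.]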