Discussion: Sphinx indexing problem
I am trying to create a Sphinx index on a fairly large Postgres table. My
problem is that the Postgres client API tries to put the entire result set
into memory:

[root@medo etc]# ../bin/indexer --all
Sphinx 0.9.9-release (r2117)
Copyright (c) 2001-2009, Andrew Aksyonoff

using config file '/usr/local/etc/sphinx.conf'...
indexing index 'test1'...
ERROR: index 'test1': sql_query: out of memory for query result
(DSN=pgsql://news:***@medo:5432/news).
total 0 docs, 0 bytes
total 712.593 sec, 0 bytes/sec, 0.00 docs/sec
total 0 reads, 0.000 sec, 0.0 kb/call avg, 0.0 msec/call avg
total 0 writes, 0.000 sec, 0.0 kb/call avg, 0.0 msec/call avg

The corresponding log entries on the Postgres side are:

STATEMENT: SELECT segment_id,air_date,start_time,end_time,source_type,
           market_name,station_name,program_name, content_text
           FROM news_segments
LOG: unexpected EOF on client connection
LOG: unexpected EOF on client connection
LOG: unexpected EOF on client connection

The Postgres message isn't exactly helpful, but given the circumstances it
can't be: the problem is on the client side. The table I am using is pretty
large, at 14.3 million rows:

news=> select count(*) from news_segments;
  count
----------
 14366286
(1 row)

Time: 233759.639 ms

Is there anything I can do to prevent the API from attempting to put the
entire query result in memory? I could partition the table, create separate
indexes, and merge them, but that is a lot of unnecessary maintenance. I also
suspect that other queries with large result sets will start to fail.

I temporarily solved my problem by using the "range query" option offered by
Sphinx:

sql_query_range = \
    SELECT min(segment_id),max(segment_id) FROM news_segments
sql_range_step = 10000
sql_query = \
    SELECT segment_id,air_date,start_time,end_time,source_type, \
           market_name,station_name,program_name, segment_text \
    FROM news_segments \
    WHERE segment_id>=$start and segment_id<$end

segment_id is a numeric field, and the query will be executed many times,
which is less than optimal. It does make things work, though.

Would it be possible to set a maximum amount of memory for query result
caching from the API itself? How can I increase the maximum memory size used
by the client API?

--
Mladen Gogala
Sr. Oracle DBA
1500 Broadway
New York, NY 10036
(212) 329-5251
www.vmsinfo.com
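For readers following along: with a range query, the indexer runs
sql_query_range once to get the id bounds, then runs sql_query repeatedly,
substituting $start and $end for successive windows of sql_range_step ids.
A sketch of what the first expanded step might look like, assuming
min(segment_id) is 1 (illustrative values only; note that in the standard
Sphinx configuration examples both $start and $end are inclusive, i.e.
WHERE id>=$start AND id<=$end, so the strict "<$end" above is worth
double-checking for skipped boundary rows):

-- Illustrative expansion of the first range step, assuming
-- min(segment_id) = 1 and sql_range_step = 10000; with the inclusive
-- convention this window covers ids 1..10000:
SELECT segment_id, air_date, start_time, end_time, source_type,
       market_name, station_name, program_name, segment_text
  FROM news_segments
 WHERE segment_id >= 1 AND segment_id <= 10000;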
On Sun, May 23, 2010 at 4:36 PM, Mladen Gogala
<mladen.gogala@vmsinfo.com> wrote:
> I am trying to create a Sphinx index on a fairly large Postgres table. My
> problem is the fact that the Postgres API is trying to put the entire
> result set into the memory:
>
> [root@medo etc]# ../bin/indexer --all
> Sphinx 0.9.9-release (r2117)
> Copyright (c) 2001-2009, Andrew Aksyonoff
>
> using config file '/usr/local/etc/sphinx.conf'...
> indexing index 'test1'...
> ERROR: index 'test1': sql_query: out of memory for query result
> (DSN=pgsql://news:***@medo:5432/news).
> total 0 docs, 0 bytes
> total 712.593 sec, 0 bytes/sec, 0.00 docs/sec
> total 0 reads, 0.000 sec, 0.0 kb/call avg, 0.0 msec/call avg
> total 0 writes, 0.000 sec, 0.0 kb/call avg, 0.0 msec/call avg
>
> Is there anything I can do to prevent the API from attempting to put the
> entire query result in memory?

Use a cursor, and fetch chunks of the result set one at a time.

http://www.postgresql.org/docs/current/interactive/sql-declare.html

--
Joshua Tolley / eggyknap
End Point Corporation
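For reference, a minimal sketch of the cursor approach in plain SQL; the
cursor name seg_cur and the 10000-row batch size are arbitrary choices, and
the column list is copied from the query in the log above:

BEGIN;

DECLARE seg_cur NO SCROLL CURSOR FOR
    SELECT segment_id, air_date, start_time, end_time, source_type,
           market_name, station_name, program_name, content_text
      FROM news_segments;

-- Repeat this until it returns zero rows; only one 10000-row batch is
-- held in client memory at a time:
FETCH FORWARD 10000 FROM seg_cur;

CLOSE seg_cur;
COMMIT;

A cursor only lives inside its transaction unless declared WITH HOLD, hence
the BEGIN/COMMIT around the fetch loop.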
Joshua Tolley wrote:
>> Is there anything I can do to prevent the API from attempting to put the
>> entire query result in memory?
>
> Use a cursor, and fetch chunks of the result set one at a time.
>
> http://www.postgresql.org/docs/current/interactive/sql-declare.html

I would have done so, had I written the application. Unfortunately, the
application was written by somebody else. Putting the entire result set in
memory is a bad idea, and the Postgres client should be changed, probably by
adding some configuration options, like the maximum memory the client is
allowed to consume and a "swap file". These options should be configurable
per user, not system-wide. As I said in my post, I do have a solution for my
immediate problem, but it slows things down:

[root@medo etc]# ../bin/indexer --all
Sphinx 0.9.9-release (r2117)
Copyright (c) 2001-2009, Andrew Aksyonoff

using config file '/usr/local/etc/sphinx.conf'...
indexing index 'test1'...
collected 12757000 docs, 21313.6 MB

It's still running, from slightly before I sent my post.

--
Mladen Gogala
Sr. Oracle DBA
1500 Broadway
New York, NY 10036
(212) 329-5251
www.vmsinfo.com
On Mon, May 24, 2010 at 6:02 AM, Mladen Gogala
<mladen.gogala@vmsinfo.com> wrote:
>> Joshua Tolley wrote:
>>> Is there anything I can do to prevent the API from attempting to put the
>>> entire query result in memory?
>> Use a cursor, and fetch chunks of the result set one at a time.
> I would have done so, had I written the application. Unfortunately, the
> application was written by somebody else. Putting the entire result set in
> memory is a bad idea and Postgres client should be changed, probably by
> adding some configuration options, like maximum memory that the client is
> allowed to consume and a "swap file". These options should be configurable
> per user, not system-wide. As I have said in my post, I do have a solution
> for my immediate problem but this slows things down:

You're definitely right; the current behavior is painful in some cases.
Using a cursor is the typical solution, in cases where it's possible. The
change you have in mind is on the TODO list (cf.
http://wiki.postgresql.org/wiki/Todo, "Allow statement results to be
automatically batched to the client"); it hasn't been tackled at this point.

- Josh
On Mon, May 24, 2010 at 8:02 AM, Mladen Gogala <mladen.gogala@vmsinfo.com> wrote:
Sounds like a case for talking to the sphinx community?
Sean