> and dropped into a single table, which will become ~20GB. Analysis happens
> on a Windows client (over a network) that queries the data in chunks
> across parallel connections. I'm running the DB on a dual gig P3 w/ 512MB
> memory under Redhat 6 (.0 I think). A single index exists that gives the
> best case for lookups, and the table is clustered against this index.
Sorry for my ignorant question, but I think I'll learn if I ask it:
Wouldn't one *expect* lots of heavy disk activity if one were querying a
20GB database on a system with only 512MB of RAM? Does the same thing
happen on, say, 300MB of data?
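Back of the envelope: 512MB against 20GB means at most ~2.5% of the
table can sit in RAM at once, so a random index probe misses cache
roughly 39 times out of 40 and has to hit the disk, whereas a 300MB
table fits entirely in memory once warmed up. One rough way to test
that (assuming this is PostgreSQL; "bigtable" and "key" are just
placeholder names, and the slice boundary is arbitrary):

  -- Copy a ~300MB slice into a scratch table, index it the same way,
  -- then rerun the same chunked queries against it and watch the disk.
  SELECT * INTO scratch FROM bigtable WHERE key < 150000;
  CREATE INDEX scratch_key_idx ON scratch (key);

If the scratch table stays quiet on disk while the full table thrashes,
the problem is simply a working set ~40x bigger than RAM.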
-Clueless in Seattle,
Dan B.