Two billion records ok?

From: Nick Bower
Subject: Two billion records ok?
Date:
Msg-id: 200609050926.59910.nick@nickbower.com
Replies: Re: Two billion records ok?  (Michael Fuhr <mike@fuhr.org>)
Re: Two billion records ok?  (Oleg Bartunov <oleg@sai.msu.su>)
Re: Two billion records ok?  (Ron Johnson <ron.l.johnson@cox.net>)
Re: Two billion records ok?  (Brent Wood <b.wood@niwa.co.nz>)
List: pgsql-general
We're considering using PostgreSQL for storing gridded metadata - each point
of our grids has a variety of metadata attached to it (including lat/lon,
measurements, etc.) and would constitute a record in PostgreSQL+PostGIS.

Size-wise, grids are about 4000x700 and are collected twice daily over say 10
years.  As mentioned, each record would have up to 50 metadata attributes
(columns) including geom, floats, varchars etc.

So given 4000x700x2x365x10 > 2 billion, is this going to be a problem if we
will be wanting to query on datetimes, PostGIS lat/lon, and integer-based
metadata flags?
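The record count quoted above can be checked with quick arithmetic; the grid dimensions and sampling rate below are taken straight from the post:

```python
# Back-of-envelope record count for a 4000 x 700 grid,
# sampled twice daily over 10 years.
grid_x, grid_y = 4000, 700
samples_per_day = 2
days_per_year = 365
years = 10

total = grid_x * grid_y * samples_per_day * days_per_year * years
print(f"{total:,} records")  # 20,440,000,000
```

Note the product is actually closer to 20 billion than 2 billion, so the inequality in the post holds with an order of magnitude to spare.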

If however I'm forced to sub-sample the grid, what rule of thumb should I be
looking to be constrained by?

Thanks for any pointers, Nick

PS - Feel free to throw in any other ideas of grid-suitable databases :)
