Discussion: replicate or multi-master for 9.1 or 9.2


replicate or multi-master for 9.1 or 9.2

From:
Jon Hancock
Date:
We have a new pg system on 9.1, just launched inside China.  We now know we may need to run a replica, with some writes to it outside China.  We would like some advice.  Here are the parameters:

1 - Our data center is in Beijing.  If we have a replica in a data center in California, we can expect the bandwidth between the Beijing and California servers to vary, and any connection between the two servers to break down occasionally.  How well does pg replication work over suboptimal connections like this?

2 - Is multi-master an option to allow some writes to the otherwise slave California db?

3 - Would trying this on 9.2 be a better place to start?  I don't think there is any reason we couldn't migrate up at this point.

Although I've used pg for quite a few years, this is my first trip in replication land…any advice would be appreciated.

thanks

-- 
Jon Hancock

Re: replicate or multi-master for 9.1 or 9.2

From:
John R Pierce
Date:
On 09/27/12 9:37 PM, Jon Hancock wrote:
> We have a new pg system on 9.1, just launched inside China.  We now
> know we may need to run a replicate, with some writes to it outside
> China.  Would like some advice.  Here are parameters:
>
> 1 - Our data center is in Beijing.  If we have a replicate in a data
> center in California, we can expect the bandwidth to vary between the
> Beijing and California servers and for any connection between the two
> servers to break down occasionally.  How well does pg replication work
> for suboptimal connects like this?
>

not very well.   you might do better with log shipping for an offsite
backup, but then the offsite standby will be farther behind the master
server
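In 9.1 terms, log shipping means archiving each completed WAL segment on the master and replaying the archive on a warm standby.  A minimal sketch of what that looks like; the hostnames and paths below are placeholders, not from the thread:

```ini
# postgresql.conf on the Beijing master (9.1): archive each completed
# WAL segment so it can be shipped over the unreliable link
wal_level = archive
archive_mode = on
archive_command = 'rsync %p standby-ca:/wal_archive/%f'   # placeholder host/path

# recovery.conf on the California standby: replay archived segments
standby_mode = 'on'
restore_command = 'cp /wal_archive/%f %p'                 # placeholder path
```

The standby only advances when a whole segment arrives, which is why it lags further behind the master than streaming replication would, but a broken link just pauses shipping rather than disrupting the master.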


> 2 - Is multi-master an option to allow some writes to the otherwise
> slave California db?
>

not with any built-in replication method.

and, any external replication system that allows multi-master inherently
has to compromise on transactional integrity. what happens when
both masters update the same records while replication is delayed
due to the above network outages?
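A toy illustration of that failure mode, with plain Python dicts standing in for the two databases (no real replication system involved): each side commits a write to the same record during an outage, and a naive last-writer-wins merge silently drops one of them.

```python
# Two "masters" holding the same record while the link is down.
beijing = {"balance": 100}
california = {"balance": 100}

# Link down: each side commits a different local update.
beijing["balance"] -= 30        # Beijing now sees 70
california["balance"] += 50     # California now sees 150

# Link restored: a naive last-writer-wins sync copies whole values,
# so California's version simply overwrites Beijing's.
beijing.update(california)

# Both sides now agree on 150, but Beijing's -30 update is silently
# lost: the correct balance (100 - 30 + 50 = 120) was never computed.
```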


> 3 - Would trying this on 9.2 be a better place to start?  I don't
> think there is any reason we couldn't migrate up at this point.
>

there's nothing in 9.2 that would change the above facts of life.



--
john r pierce                            N 37, W 122
santa cruz ca                         mid-left coast



Re: replicate or multi-master for 9.1 or 9.2

From:
Chris Travers
Date:


On Thu, Sep 27, 2012 at 9:37 PM, Jon Hancock <jhancock@shellshadow.com> wrote:
> We have a new pg system on 9.1, just launched inside China.  We now know we may need to run a replicate, with some writes to it outside China.  Would like some advice.  Here are parameters:
>
> 1 - Our data center is in Beijing.  If we have a replicate in a data center in California, we can expect the bandwidth to vary between the Beijing and California servers and for any connection between the two servers to break down occasionally.  How well does pg replication work for suboptimal connects like this?

How do you want things to work when the internet connection goes down? 

> 2 - Is multi-master an option to allow some writes to the otherwise slave California db?

Multi-master replication is inherently problematic.  It doesn't matter what system you are using; avoid it if you can.  The problem is that multi-master replication typically means "replicate the easy cases and let a programmer figure out what to do if anything looks a little weird."  I suppose it might work for some cases, but...

I actually think that some sort of loose coupling usually makes better sense than multi-master replication.  I recently wrote pg_message_queue to make it easier to implement loose coupling generally.  You could, for example, send XML documents back and forth, parse them, and save them into your databases.  You can't guarantee the C part of the CAP theorem (you pick A and P there), but you can guarantee local data consistency on both sides.
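The thread doesn't show pg_message_queue's actual API, so here is only a rough sketch of the loose-coupling idea it enables, with Python dicts standing in for the two databases and a list standing in for the queue: commit locally first, enqueue a message describing the change, and drain the queue whenever the link happens to be up.

```python
import json

class Site:
    """A stand-in for one database plus its outgoing message queue."""
    def __init__(self, name):
        self.name = name
        self.rows = {}       # local table: always consistent locally
        self.outbox = []     # messages awaiting shipment to the other site

    def local_write(self, key, value):
        self.rows[key] = value  # commit locally first, link up or not
        self.outbox.append(json.dumps({"key": key, "value": value}))

    def ship_to(self, other):
        for msg in self.outbox:  # link is up: drain the queue
            change = json.loads(msg)
            other.rows[change["key"]] = change["value"]
        self.outbox.clear()

beijing, california = Site("beijing"), Site("california")
beijing.local_write("order-1", "paid")       # works during an outage
california.local_write("order-2", "shipped")

beijing.ship_to(california)                  # later, when the link recovers
california.ship_to(beijing)
```

Each side stays locally consistent at all times; cross-site agreement is only eventual, which is exactly the A-and-P trade described above.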


> 3 - Would trying this on 9.2 be a better place to start?  I don't think there is any reason we couldn't migrate up at this point.

The one thing in 9.2 that changes in this area is cascading replication: if you have multiple servers on each continent, you only have to ship the data once over each long-haul link.  I don't think that's applicable to your case, though.

Best Wishes,
Chris Travers

Re: replicate or multi-master for 9.1 or 9.2

From:
"ac@hsk.hk"
Date:
Hi Jon,

I have had a similar case to yours: one data center in Hong Kong and another in Tokyo, with a line between them.  Here is my feedback:

1) We used multiple masters at first.  From time to time, syncing between the master servers took so long that it caused obvious slowness across the entire database; it also took more support resources to monitor them, and the team became very tired.
2) We switched the configuration from multiple masters to a single master.  Database performance improved, but we still saw DB slowness, mainly because the single master was too busy.
3) We finally fixed the issue by a) modifying the application to send heavy read-only traffic to a local slave, reducing the load on the master, b) using the remote slave purely for remote backup, and c) building an advanced cache layer to further reduce database access.
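Fix (a), routing heavy read-only traffic to a local slave while all writes still go to the single master, can be sketched as a simple statement router.  The DSNs below are hypothetical, and the read-only check is deliberately crude; a real router would hold two live connections and classify statements more carefully.

```python
MASTER_DSN = "host=master.example dbname=app"       # hypothetical DSN
LOCAL_SLAVE_DSN = "host=slave.example dbname=app"   # hypothetical DSN

def is_read_only(sql):
    # Crude classification: treat anything starting with SELECT as a read.
    return sql.lstrip().upper().startswith("SELECT")

def route(sql):
    """Return the DSN a statement should be sent to: reads go to the
    local slave, everything else to the single write master."""
    return LOCAL_SLAVE_DSN if is_read_only(sql) else MASTER_DSN
```

With hot standby (available since 9.0), the local slave can serve these reads while still replaying changes from the master.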

Regards
AC


On 28 Sep 2012, at 2:27 PM, Chris Travers wrote:

Re: replicate or multi-master for 9.1 or 9.2

From:
"ac@hsk.hk"
Date:
correction: because the single master was too BUSY

On 28 Sep 2012, at 7:48 PM, ac@hsk.hk wrote:
