Slony-I REL_1_1_0 Documentation
Before you subscribe a node to a set, be sure that you have slon processes running for both the provider and the new subscribing node. If you don't have slons running, nothing will happen, and you'll beat your head against a wall trying to figure out what is going on.
Subscribing a node to a set is done by issuing the slonik command SUBSCRIBE SET. It may seem tempting to try to subscribe several nodes to a set within a single try block like this:
try {
    echo 'Subscribing sets';
    subscribe set (id = 1, provider = 1, receiver = 2, forward = yes);
    subscribe set (id = 1, provider = 1, receiver = 3, forward = yes);
    subscribe set (id = 1, provider = 1, receiver = 4, forward = yes);
} on error {
    echo 'Could not subscribe the sets!';
    exit -1;
}
But you are just asking for trouble if you try to subscribe sets in that fashion. The proper procedure is to subscribe one node at a time, checking the logs and databases to confirm that each subscription has completed before moving on to the next node. It is also worth noting that "success" of the above slonik try block does not imply that nodes 2, 3, and 4 have all been successfully subscribed; it merely indicates that the slonik commands were successfully received by the slon running on the origin node.
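The one-node-at-a-time procedure might look like the following sketch (the admin conninfo preamble is omitted; after each script, verify the logs and sl_subscribe before running the equivalent script for the next node):

```
try {
    echo 'Subscribing node 2';
    subscribe set (id = 1, provider = 1, receiver = 2, forward = yes);
} on error {
    echo 'Could not subscribe node 2!';
    exit -1;
}
```

Once node 2 is confirmed to be replicating, repeat with receiver = 3, and then receiver = 4.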
A typical sort of problem that will arise is that a cascaded subscriber looks to a provider that is not ready yet. In that failure case, the subscriber node will never complete the subscription; it will get "stuck" waiting for a past event to take place. The other nodes will be convinced that it is successfully subscribed (because no error report ever made it back to them), and a request to unsubscribe the node will be "blocked" because the node is still stuck on the attempt to subscribe it.
When you subscribe a node to a set, you should see something like this in your slon logs for the provider node:
DEBUG2 remoteWorkerThread_3: Received event 3,1059 SUBSCRIBE_SET
You should also start seeing log entries like this in the slon logs for the subscribing node:
DEBUG2 remoteWorkerThread_1: copy table public.my_table
Larger tables may take a considerable time to be copied from the provider node to the new subscriber. If you check the pg_stat_activity view on the provider node, you should see a long-running COPY statement that is copying the table to stdout.
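For instance, a query along these lines, run on the provider database, should show the copy in progress. The column names here (procpid, current_query) follow the pg_stat_activity layout of the PostgreSQL releases contemporary with Slony-I 1.1; later PostgreSQL versions renamed them (pid, query):

```sql
-- Look for the long-running COPY that streams the table to the subscriber.
SELECT procpid, usename, current_query
  FROM pg_stat_activity
 WHERE current_query LIKE 'COPY %';
```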
The table sl_subscribe on both the provider and the new subscriber should contain entries for the new subscription:
 sub_set | sub_provider | sub_receiver | sub_forward | sub_active
---------+--------------+--------------+-------------+------------
       1 |            1 |            2 | t           | t
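One way to check this (assuming, for illustration, that the replication cluster is named "mycluster", so its namespace is _mycluster) is to run the same query on both nodes and confirm both return the row:

```sql
-- Run on both the provider and the new subscriber.
SELECT sub_set, sub_provider, sub_receiver, sub_forward, sub_active
  FROM _mycluster.sl_subscribe
 WHERE sub_receiver = 2;
```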
A final test is to insert a row into one of the replicated tables on the origin node, and verify that the row is copied to the new subscriber.
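As a sketch of that final test (the table name public.my_table and its columns are assumptions for illustration):

```sql
-- On the origin node:
INSERT INTO public.my_table (id, data) VALUES (9999, 'replication test');

-- Shortly afterwards, on the new subscriber:
SELECT * FROM public.my_table WHERE id = 9999;
```

The row should appear on the subscriber once the SYNC event carrying it has been applied.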