
SF.net SVN: ledger-smb: [327] trunk



Revision: 327
          http://svn.sourceforge.net/ledger-smb/?rev=327&view=rev
Author:   einhverfr
Date:     2006-10-26 14:18:18 -0700 (Thu, 26 Oct 2006)

Log Message:
-----------
Moved Slony configuration scripts into new contrib directory.

Added Paths:
-----------
    trunk/contrib/
    trunk/contrib/replication/
    trunk/contrib/replication/README
    trunk/contrib/replication/configure-replication.sh

Removed Paths:
-------------
    trunk/sql/README
    trunk/sql/configure-replication.sh

Copied: trunk/contrib/replication/README (from rev 326, trunk/sql/README)
===================================================================
--- trunk/contrib/replication/README	                        (rev 0)
+++ trunk/contrib/replication/README	2006-10-26 21:18:18 UTC (rev 327)
@@ -0,0 +1,129 @@
+README
+------------
+$Id$
+Christopher Browne
..hidden..
+2006-09-29
+------------
+
+The script configure-replication.sh is intended to allow the gentle user
+to readily configure replication of the LedgerSMB database schema
+using the Slony-I replication system for PostgreSQL.
+
+For more general details about Slony-I, see <http://slony.info/>.
+
+This script uses a number of environment variables to determine the
+shape of the configuration.  In many cases, the defaults should be at
+least nearly OK...
+
+Global:
+  CLUSTER - Name of Slony-I cluster
+  NUMNODES - Number of nodes to set up
+
+  PGUSER - name of PostgreSQL superuser controlling replication
+  PGPORT - default port number
+  PGDATABASE - default database name
+
+For each node, there are also four parameters; for node 1:
+  DB1 - database to connect to
+  USER1 - superuser to connect as
+  PORT1 - port
+  HOST1 - host
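As a sketch (hostnames are hypothetical), a two-node run might be configured like this, relying on the PG* fallbacks for everything else:

```shell
# Hypothetical two-node configuration; DB*/USER*/PORT* fall back to
# PGDATABASE/PGUSER/PGPORT exactly as the script's defaults do.
export CLUSTER=LedgerSMB NUMNODES=2
export PGDATABASE=ledgersmb PGUSER=slony PGPORT=5432
export HOST1=db1.example.com HOST2=db2.example.com

# The same fallback the script applies for node 1:
DB1=${DB1:-${PGDATABASE:-"ledgersmb"}}
echo "node 1: dbname=${DB1} host=${HOST1} user=${PGUSER} port=${PGPORT}"
```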
+
+It is quite likely that DB*, USER*, and PORT* should be drawn from the
+default PGDATABASE, PGUSER, and PGPORT values above; that sort of
+uniformity is usually a good thing.
+
+In contrast, HOST* values should be set explicitly for HOST1, HOST2,
+..., as you don't get much benefit from the redundancy replication
+provides if all your databases are on the same server!
+
+slonik config files are generated in a temp directory under /tmp.  The
+usage is thus:
+
+1.  preamble.slonik is a "preamble" containing connection info used by
+    the other scripts.
+
+   Verify the info in this one closely; you may want to keep this
+   permanently to use with other maintenance you may want to do on the
+   cluster.
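Under the defaults above, the preamble the script writes out looks roughly like this (hostnames are placeholders):

```shell
# Sketch of the generated preamble.slonik for a two-node cluster;
# hostnames are placeholders.  One ADMIN CONNINFO line per node.
cat <<'EOF' > preamble.slonik
cluster name=LedgerSMB;
NODE 1 ADMIN CONNINFO = 'dbname=ledgersmb host=db1.example.com user=slony port=5432';
NODE 2 ADMIN CONNINFO = 'dbname=ledgersmb host=db2.example.com user=slony port=5432';
EOF
```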
+
+2.  create_nodes.slonik
+
+    This is the first script to run; it sets up the requested nodes as
+    being Slony-I nodes, adding in some Slony-I-specific config tables
+    and such.
+
+You can/should start slon processes any time after this step has run.
+
+3.  store_paths.slonik
+
+    This is the second script to run; it indicates how the slons
+    should intercommunicate.  It assumes that all slons can talk to
+    all nodes, which may not be a valid assumption in a
+    complexly-firewalled environment.
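The path-storing logic can be sketched in shell: for every ordered pair of distinct nodes, one STORE PATH statement tells the CLIENT's slon how to reach the SERVER (hosts are hypothetical):

```shell
# Sketch of STORE PATH generation for two nodes (hosts are
# placeholders): each client node gets conninfo for every other server.
HOSTS="db1.example.com db2.example.com"
: > store_paths.slonik
i=1
for server_host in $HOSTS; do
  j=1
  for client_host in $HOSTS; do
    if [ $i -ne $j ]; then
      echo "STORE PATH (SERVER=$i, CLIENT=$j, CONNINFO='dbname=ledgersmb host=$server_host user=slony port=5432');" >> store_paths.slonik
    fi
    j=$((j + 1))
  done
  i=$((i + 1))
done
```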
+
+4.  create_set.slonik
+
+    This sets up the replication set consisting of the whole bunch of
+    tables and sequences that make up the LedgerSMB database schema.
+
+    When you run this script, all that happens is that triggers are
+    added on the origin node (node #1) that start collecting updates;
+    replication won't start until #5...
+
+    There are two assumptions in this script that could be invalidated
+    by circumstances:
+
+     1.  That all of the LedgerSMB tables and sequences have been
+         included.
+
+         This becomes invalid if new tables get added to LedgerSMB and
+         don't get added to the TABLES list in the generator script.
+
+     2.  That all tables have been defined with primary keys.
+
+          This *should* be the case soon if not already.
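The numbering scheme is simple: one "set add table" statement per entry of TABLES, with ids assigned sequentially (the list is abbreviated to three entries here):

```shell
# Sketch of how create_set.slonik is generated; the real TABLES list
# holds the full LedgerSMB schema, abbreviated here.
TABLES="acc_trans ap ar"
tnum=1
echo "create set (id=1, origin=1, comment='LedgerSMB Tables and Sequences');" > create_set.slonik
for table in $TABLES; do
  echo "set add table (id=${tnum}, set id=1, origin=1, fully qualified name='public.${table}', comment='LedgerSMB table ${table}');" >> create_set.slonik
  tnum=$((tnum + 1))
done
```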
+
+5.  subscribe_set_2.slonik
+
+     And subscribe_set_3.slonik, _4, _5, ..., if you set the number of
+     nodes higher...
+
+     This is the step that "fires up" replication.  
+
+     The assumption that the script generator makes is that all the
+     subscriber nodes will want to subscribe directly to the origin
+     node.  If you plan to have "sub-clusters", perhaps where there is
+     something of a "master" location at each data centre, you may
+     need to revise that.
+
+     The slon processes really ought to be running by the time you
+     attempt running this step.  To do otherwise would be rather
+     foolish.
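Each subscription file boils down to a single statement; for node 2, with the ids the script generates:

```shell
# Sketch: the one statement each subscribe_set_N.slonik carries
# (node 2 shown).  forward=yes lets this node later feed other nodes.
echo "include <preamble.slonik>;" > subscribe_set_2.slonik
echo "subscribe set (id=1, provider=1, receiver=2, forward=yes);" >> subscribe_set_2.slonik
```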
+
+Once all of these slonik scripts have been run, replication may be
+expected to continue to run as long as slons stay running.
+
+If you have an outage, where a database server or a server hosting
+slon processes falls over, and it's not so serious that a database
+gets mangled, then no big deal: Just restart the postmaster and
+restart slon processes, and replication should pick up.
+
+If something does get mangled, then actions get more complicated:
+
+1 - If the failure was of the "origin" database, then you probably want
+   to use FAIL OVER to shift the "master" role to another system.
+
+2 - If a subscriber failed, and other nodes were drawing data from it,
+   then you could submit SUBSCRIBE SET requests to point those other
+   nodes to some node that is "less mangled."  That is not a real big
+   deal; note that this does NOT require that they get re-subscribed
+   from scratch; they can pick up (hopefully) whatever data they
+   missed and simply catch up by using a different data source.
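For the first case above, a minimal failover script, assuming node 2 is the surviving candidate (syntax from the Slony-I FAILOVER command; the ids are examples):

```shell
# Hypothetical failover.slonik promoting node 2 after losing node 1;
# run it with slonik once the preamble has been verified.
cat <<'EOF' > failover.slonik
include <preamble.slonik>;
failover (id = 1, backup node = 2);
EOF
```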
+
+Once you have reacted to the loss by reconfiguring the surviving nodes
+to satisfy your needs, you may want to recreate the mangled node.  See
+the Slony-I Administrative Guide for more details on how to do that.
+It is not overly profound; you need to drop out the mangled node, and
+recreate it anew, which is not all that different from setting up
+another subscriber.

Copied: trunk/contrib/replication/configure-replication.sh (from rev 326, trunk/sql/configure-replication.sh)
===================================================================
--- trunk/contrib/replication/configure-replication.sh	                        (rev 0)
+++ trunk/contrib/replication/configure-replication.sh	2006-10-26 21:18:18 UTC (rev 327)
@@ -0,0 +1,186 @@
+#!/bin/bash
+# $Id$
+
+# Global defaults
+CLUSTER=${CLUSTER:-"LedgerSMB"}
+NUMNODES=${NUMNODES:-"2"}
+
+# Defaults - origin node
+DB1=${DB1:-${PGDATABASE:-"ledgersmb"}}
+HOST1=${HOST1:-`hostname`}
+USER1=${USER1:-${PGUSER:-"slony"}}
+PORT1=${PORT1:-${PGPORT:-"5432"}}
+
+# Defaults - node 2
+DB2=${DB2:-${PGDATABASE:-"ledgersmb"}}
+HOST2=${HOST2:-"backup.example.info"}
+USER2=${USER2:-${PGUSER:-"slony"}}
+PORT2=${PORT2:-${PGPORT:-"5432"}}
+
+# Defaults - node 3
+DB3=${DB3:-${PGDATABASE:-"ledgersmb"}}
+HOST3=${HOST3:-"backup3.example.info"}
+USER3=${USER3:-${PGUSER:-"slony"}}
+PORT3=${PORT3:-${PGPORT:-"5432"}}
+
+# Defaults - node 4
+DB4=${DB4:-${PGDATABASE:-"ledgersmb"}}
+HOST4=${HOST4:-"backup4.example.info"}
+USER4=${USER4:-${PGUSER:-"slony"}}
+PORT4=${PORT4:-${PGPORT:-"5432"}}
+
+# Defaults - node 5
+DB5=${DB5:-${PGDATABASE:-"ledgersmb"}}
+HOST5=${HOST5:-"backup5.example.info"}
+USER5=${USER5:-${PGUSER:-"slony"}}
+PORT5=${PORT5:-${PGPORT:-"5432"}}
+
+# Print an error message and exit with the given status; used when a
+# node's connection parameters are missing.
+err()
+{
+  echo "ERROR: $2" >&2
+  exit $1
+}
+
+store_path()
+{
+  echo "include <${PREAMBLE}>;" > $mktmp/store_paths.slonik
+  i=1
+  while : ; do
+    eval db=\$DB${i}
+    eval host=\$HOST${i}
+    eval user=\$USER${i}
+    eval port=\$PORT${i}
+
+    if [ -n "${db}" -a -n "${host}" -a -n "${user}" -a -n "${port}" ]; then
+      j=1
+      while : ; do
+        if [ ${i} -ne ${j} ]; then
+          eval bdb=\$DB${j}
+          eval bhost=\$HOST${j}
+          eval buser=\$USER${j}
+          eval bport=\$PORT${j}
+          if [ -n "${bdb}" -a -n "${bhost}" -a -n "${buser}" -a -n "${bport}" ]; then
+	    echo "STORE PATH (SERVER=${i}, CLIENT=${j}, CONNINFO='dbname=${db} host=${host} user=${user} port=${port}');" >> $mktmp/store_paths.slonik
+          else
+            err 3 "No conninfo"
+          fi
+        fi
+        if [ ${j} -ge ${NUMNODES} ]; then
+          break;
+        else
+          j=$((${j} + 1))
+        fi
+      done
+      if [ ${i} -ge ${NUMNODES} ]; then
+        break;
+      else
+        i=$((${i} + 1))
+      fi
+    else
+      err 3 "no DB"
+    fi
+  done
+}
+
+# Set MY_MKTEMP_IS_DECREPIT if the local mktemp does not support -t
+mktmp=`mktemp -d -t ledgersmb-temp.XXXXXX`
+if [ -n "$MY_MKTEMP_IS_DECREPIT" ] ; then
+       mktmp=`mktemp -d /tmp/ledgersmb-temp.XXXXXX`
+fi
+
+PREAMBLE=${mktmp}/preamble.slonik
+
+echo "cluster name=${CLUSTER};" > $PREAMBLE
+
+alias=1
+
+while : ; do
+  eval db=\$DB${alias}
+  eval host=\$HOST${alias}
+  eval user=\$USER${alias}
+  eval port=\$PORT${alias}
+
+  if [ -n "${db}" -a -n "${host}" -a -n "${user}" -a -n "${port}" ]; then
+    conninfo="dbname=${db} host=${host} user=${user} port=${port}"
+    echo "NODE ${alias} ADMIN CONNINFO = '${conninfo}';" >> $PREAMBLE
+    if [ ${alias} -ge ${NUMNODES} ]; then
+      break;
+    else
+      alias=`expr ${alias} + 1`
+    fi   
+  else
+    break;
+  fi
+done
+
+
+SEQUENCES=" acc_trans_entry_id_seq audittrail_entry_id_seq
+ custom_field_catalog_field_id_seq custom_table_catalog_table_id_seq
+ id inventory_entry_id_seq invoiceid jcitemsid orderitemsid
+ partscustomer_entry_id_seq partsvendor_entry_id_seq
+ session_session_id_seq shipto_entry_id_seq "
+
+TABLES=" acc_trans ap ar assembly audittrail business chart
+ custom_field_catalog custom_table_catalog customer customertax
+ defaults department dpt_trans employee exchangerate gifi gl inventory
+ invoice jcitems language makemodel oe orderitems parts partscustomer
+ partsgroup partstax partsvendor pricegroup project recurring
+ recurringemail recurringprint session shipto sic status tax
+ transactions translation vendor vendortax warehouse yearend"
+
+SETUPSET=${mktmp}/create_set.slonik
+
+echo "include <${PREAMBLE}>;" > $SETUPSET
+echo "create set (id=1, origin=1, comment='${CLUSTER} Tables and Sequences');" >> $SETUPSET
+
+tnum=1
+
+for table in $TABLES; do
+    echo "set add table (id=${tnum}, set id=1, origin=1, fully qualified name='public.${table}', comment='${CLUSTER} table ${table}');" >> $SETUPSET
+    tnum=`expr ${tnum} + 1`
+done
+
+snum=1
+for seq in $SEQUENCES; do
+    echo "set add sequence (id=${snum}, set id=1, origin=1, fully qualified name='public.${seq}', comment='${CLUSTER} sequence ${seq}');" >> $SETUPSET
+    snum=`expr ${snum} + 1`
+done
+
+NODEINIT=$mktmp/create_nodes.slonik
+echo "include <${PREAMBLE}>;" > $NODEINIT
+echo "init cluster (id=1, comment='${CLUSTER} node 1');" >> $NODEINIT
+
+node=2
+while : ; do
+    SUBFILE=$mktmp/subscribe_set_${node}.slonik
+    echo "include <${PREAMBLE}>;" > $SUBFILE
+    echo "store node (id=${node}, comment='${CLUSTER} subscriber node ${node}');" >> $NODEINIT
+    echo "subscribe set (id=1, provider=1, receiver=${node}, forward=yes);" >> $SUBFILE
+    if [ ${node} -ge ${NUMNODES} ]; then
+      break;
+    else
+      node=`expr ${node} + 1`
+    fi   
+done
+
+store_path
+
+echo "
+$0 has generated Slony-I slonik scripts to initialize replication for LedgerSMB.
+
+Cluster name: ${CLUSTER}
+Number of nodes: ${NUMNODES}
+Scripts are in ${mktmp}
+=====================
+"
+ls -l $mktmp
+
+echo "
+=====================
+Be sure to verify the contents of $PREAMBLE very carefully, as
+the configuration there is used widely in the other scripts.
+=====================
+====================="
+
+
+
+
+
+
+
+
+

Deleted: trunk/sql/README
===================================================================
--- trunk/sql/README	2006-10-26 21:04:23 UTC (rev 326)
+++ trunk/sql/README	2006-10-26 21:18:18 UTC (rev 327)

Deleted: trunk/sql/configure-replication.sh
===================================================================
--- trunk/sql/configure-replication.sh	2006-10-26 21:04:23 UTC (rev 326)
+++ trunk/sql/configure-replication.sh	2006-10-26 21:18:18 UTC (rev 327)


This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site.