Monday, January 7, 2013

OpenLDAP Clustering: Concepts and Configuration

1. LDAP Replication

The LDAP replication feature allows an LDAP DIT to be backed up or synchronized to other LDAP servers. It is important to emphasize that LDAP replication operates at the LDAP DIT level, not the LDAP server level. Replication occurs periodically; the interval between runs is called the replication cycle time.

There are two possible replication methodologies, with multiple variations on each configuration type.


Figure 1 : Single Master-slave Mode


Figure 2 : Single Master Single slave Mode


Figure 3 : Master-Master Mode

Note:
RO : Read Only
RW : Read/Write


1.1 Master-Slave : In this mode, a single master DIT is capable of being updated, and updates are replicated/backed up to one or more dedicated slave DITs. A slave DIT allows only read operations, while the master allows both read and update operations.

1.1.1 Shortcomings :
i. The master is a single point of failure.
ii. If LDAP clients must perform both read and update operations, they have to direct updates to the master only, while reads may go to either the master or a slave.


1.2 Master-Master : Introduced in OpenLDAP 2.4. In this mode, any client operation can be performed on any of the servers, and changes are propagated to the other LDAP servers.


2. OpenLDAP Slurpd Style Replication
Push-based replication; obsolete as of version 2.4.
slapd.conf (master.example.com)

# global section
replicationinterval 300
...
# database section
database bdb
...
# simple authentication to the slave located at slave.example.com with a
# cleartext password; directive only used by slurpd
replica uri=ldap://slave.example.com bindmethod=simple
  binddn="dc=example,dc=com" credentials=slaveldap
    
# saves changes to the specified file; directive used by both slapd and slurpd
replogfile /var/log/ldap/slave.log

slapd.conf (slave.example.com)

# global section 
...
# database section
database bdb
...
# defines the DN that is used in the replica directive of the master;
# directive only used by slurpd
updatedn "dc=example,dc=com"
    
# referral given if a client tries to update the slave
updateref ldap://master.example.com

3. Slurpd versus Syncrepl Replication

The old slurpd mechanism only operated in provider-initiated push mode. Slurpd replication was deprecated in favor of Syncrepl replication and has been completely removed from OpenLDAP 2.4.

The slurpd daemon was the original replication mechanism: the master pushed changes to the slaves. It was replaced for many reasons; in brief:

- It was not reliable
- It was extremely sensitive to the ordering of records in the replog 
- It could easily go out of sync, at which point manual intervention was required to resync the slave database with the master directory 
- It wasn't very tolerant of unavailable servers. If a slave went down for a long time, the replog could grow to a size that was too large for slurpd to process
- It only worked in push mode 
- It required stopping and restarting the master to add new slaves 
- It only supported single master replication

Syncrepl has none of those weaknesses:

- Syncrepl is self-synchronizing; you can start with a consumer database in any state from totally empty to fully synced and it will automatically do the right thing to achieve and maintain synchronization
- It is completely insensitive to the order in which changes occur 
- It guarantees convergence between the consumer and the provider content without manual intervention 
- It can resynchronize regardless of how long a consumer stays out of contact with the provider
- Syncrepl can operate in either direction 
- Consumers can be added at any time without touching anything on the provider 
- Multi-master replication is supported.
- Runtime configuration support: configuration changes can be made through the cn=config DIT, so the OpenLDAP provider/consumer need not be restarted.
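As a rough illustration of the last point, a syncrepl consumer definition can be changed at runtime through the cn=config DIT with an LDIF such as the following. The database DN (olcDatabase={1}bdb,cn=config) and the credentials are assumptions for this sketch, not taken from the configurations elsewhere in this post:

```
# hypothetical runtime change to a consumer's syncrepl definition;
# no slapd restart is required when using cn=config
dn: olcDatabase={1}bdb,cn=config
changetype: modify
replace: olcSyncrepl
olcSyncrepl: rid=000 provider=ldap://ldap2.example.com
  type=refreshAndPersist retry="5 5 300 +"
  searchbase="dc=example,dc=com" attrs="*,+"
  bindmethod=simple binddn="dc=example,dc=com" credentials=ldappassword
```

It could be applied with something like `ldapmodify -Y EXTERNAL -H ldapi:/// -f change.ldif`; the SASL/ldapi access method is an assumption about how cn=config is protected on the host.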



4. OpenLDAP syncrepl N-Way Multi-Master
Multi-Master replication is a replication technique, introduced in OpenLDAP 2.4, that uses Syncrepl to replicate data to multiple provider ("master") directory servers.

4.1 True Arguments for Multi-Master replication
- If any provider fails, other providers will continue to accept updates 
- Avoids a single point of failure 
- Providers can be located in several physical sites i.e. distributed across the network/globe. 
- Good for Automatic failover/High Availability
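One simple way to exploit the failover property on the client side is to list all providers in the client's ldap.conf; libldap tries the listed URIs in order, so if the first provider is down the client falls over to the next. The hostnames below match the three-server example in this section; treat this as a minimal sketch, not a complete client configuration:

```
# /etc/openldap/ldap.conf (client side) - minimal failover sketch
URI ldap://ldap1.example.com ldap://ldap2.example.com ldap://ldap3.example.com
BASE dc=example,dc=com
```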

4.2 False Arguments for Multi-Master replication
Multi-master replication is sometimes promoted as a load-balancing technique; that claim is false:
- It has NOTHING to do with load balancing 
- Providers must propagate writes to all the other servers, which means the network traffic and write load spread across all of the servers just as for single-master. 
- Server utilization and performance are at best identical for multi-master and single-master replication; at worst single-master is superior, because indexing can be tuned differently to optimize for the different usage patterns of the provider and the consumers.





Figure 4 : syncrepl N-Way Multi-Mastering

N-way multi-master replication, introduced in OpenLDAP 2.4, supports both syncrepl refreshOnly and refreshAndPersist modes. Every provider also acts as a consumer, as shown in the figure above.

This configuration assumes that refreshAndPersist synchronization is used; it is not clear why you would even want to use refreshOnly in a multi-master setup.
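For comparison, here is a sketch of what a refreshOnly consumer directive would look like: instead of holding a persistent connection, the consumer polls the provider at a fixed interval (format dd:hh:mm:ss, here every five minutes). The hostname and credentials follow the examples used throughout this section:

```
# refreshOnly variant: poll-based, no persistent connection
syncrepl rid=000
  provider=ldap://ldap2.example.com
  type=refreshOnly
  interval=00:00:05:00
  retry="5 5 300 +"
  searchbase="dc=example,dc=com"
  attrs="*,+"
  bindmethod=simple
  binddn="dc=example,dc=com"
  credentials=ldappassword
```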

The following are the configurations (slapd.conf) of all three LDAP servers in refreshAndPersist mode.

slapd.conf (ldap1.example.com)

# global section
serverID 001
.....

# database section
database bdb
...
# allows read access from all consumers
# and assumes that all masters will use a binddn with this value
# may need merging with other ACL's
access to *
     by dn.base="dc=example,dc=com" read
     by * break 
         
# NOTE: 
# syncrepl directives for each of the other masters
# provider is ldap://ldap2.example.com:389,
# whole DIT (searchbase), all user and operational attributes synchronized
# simple security with cleartext password
syncrepl rid=000 
  provider=ldap://ldap2.example.com
  type=refreshAndPersist
  retry="5 5 300 +" 
  searchbase="dc=example,dc=com"
  attrs="*,+"
  bindmethod=simple
  binddn="dc=example,dc=com"
  credentials=ldappassword

# provider is ldap://ldap3.example.com:389,
# whole DIT (searchbase), user and operational attributes synchronized
# simple security with cleartext password
syncrepl rid=001
  provider=ldap://ldap3.example.com
  type=refreshAndPersist
  retry="5 5 300 +" 
  searchbase="dc=example,dc=com"
  attrs="*,+"
  bindmethod=simple
  binddn="dc=example,dc=com"
  credentials=ldappassword
...
# syncprov specific indexing (add others as required)
index entryCSN eq
index entryUUID eq 
...
# mirror mode essential to allow writes
# and must appear after all syncrepl directives
mirrormode TRUE

# define the provider to use the syncprov overlay
# (last directives in database section)
overlay syncprov
# contextCSN saved to database every 100 updates or ten minutes
syncprov-checkpoint 100 10

slapd.conf (ldap2.example.com)

serverID 002
database bdb

syncrepl rid=000 
  provider=ldap://ldap1.example.com
  type=refreshAndPersist
  retry="5 5 300 +" 
  searchbase="dc=example,dc=com"
  attrs="*,+"
  bindmethod=simple
  binddn="dc=example,dc=com"
  credentials=ldappassword

syncrepl rid=001
  provider=ldap://ldap3.example.com
  type=refreshAndPersist
  retry="5 5 300 +" 
  searchbase="dc=example,dc=com"
  attrs="*,+"
  bindmethod=simple
  binddn="dc=example,dc=com"
  credentials=ldappassword

index entryCSN eq
index entryUUID eq 
mirrormode TRUE
overlay syncprov
syncprov-checkpoint 100 10

slapd.conf (ldap3.example.com)

#global section
serverID 003

database bdb

syncrepl rid=000 
  provider=ldap://ldap1.example.com
  type=refreshAndPersist
  retry="5 5 300 +" 
  searchbase="dc=example,dc=com"
  attrs="*,+"
  bindmethod=simple
  binddn="dc=example,dc=com"
  credentials=ldappassword

syncrepl rid=001 
  provider=ldap://ldap2.example.com
  type=refreshAndPersist
  retry="5 5 300 +" 
  searchbase="dc=example,dc=com"
  attrs="*,+"
  bindmethod=simple
  binddn="dc=example,dc=com"
  credentials=ldappassword

index entryCSN eq
index entryUUID eq 
mirrormode TRUE
overlay syncprov
syncprov-checkpoint 100 10


Figure 4 shows a 3-way multi-master configuration. Each master is configured, in its slapd.conf file, both as a provider (using the overlay syncprov directive) and as a consumer of all of the other masters (using syncrepl directives). Each provider must be uniquely identified using a serverID directive, and all providers should be synchronized to a common clock source (e.g. via NTP). Thus each provider of the DIT contains an overlay syncprov directive (the provider overlay) and two refreshAndPersist syncrepl directives, one for each of the other providers, as shown by the communication links in the figure. Each of the other providers has a similar configuration: a single provider capability plus refreshAndPersist syncrepl directives for the other two masters.
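Convergence between masters relies on comparing change sequence numbers (CSNs), which is why the configurations above index entryCSN and why a common clock source matters. As a rough illustration, the sketch below decomposes and orders two CSN values; the field layout follows the OpenLDAP 2.4 CSN format (timestamp # change count # serverID # modification number), and the sample values are invented:

```python
# Parse an OpenLDAP 2.4-style CSN of the form
#   YYYYmmddHHMMSS.uuuuuuZ#changecount#sid#modnumber
# (sample values below are made up for illustration)
def parse_csn(csn):
    ts, count, sid, mod = csn.split("#")
    return {
        "timestamp": ts,          # wall-clock time of the change (UTC)
        "count": int(count, 16),  # change count within the same second
        "sid": int(sid, 16),      # serverID of the originating master
        "mod": int(mod, 16),      # modification number
    }

def newer(csn_a, csn_b):
    """Return True if csn_a describes a later change than csn_b.

    Plain string comparison works because every field is fixed-width
    and ordered most-significant-first -- which is also why keeping
    the masters' clocks synchronized matters.
    """
    return csn_a > csn_b

a = "20130107120000.000000Z#000000#001#000000"  # change made on serverID 001
b = "20130107120005.000000Z#000000#002#000000"  # later change on serverID 002
assert parse_csn(b)["sid"] == 2
assert newer(b, a)
```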
