Active Directory Replication Technologies



Updated: March 28, 2003

Active Directory is a distributed directory service that stores objects that represent real-world entities such as users, computers, services, and network resources. Objects in the directory are distributed among all domain controllers in a forest, and all domain controllers can be updated directly. Active Directory replication is the process by which the changes that originate on one domain controller are automatically transferred to other domain controllers that store the same data.

Active Directory Replication Model

The replication model comprises the mechanisms that support the multimaster update capabilities of Active Directory domain controllers. To ensure that replication data is transferred efficiently in the multimaster system, domain controllers track the changes that they have received and request only the updates that have occurred since the last replication. The update tracking is based on the state of the data as it exists on a replicating pair of domain controllers at the time of replication. Update tracking ensures that:

• Only changes that have not been received are replicated to a destination.
• Conflicts are resolved according to the last change that occurred, even when individual domain controller clocks are not synchronized or when administrators at different domain controllers make changes to the same object.

The replication model also accommodates multimaster updates by enabling replicated changes to be stored on destination domain controllers and forwarded to other domain controllers. This store-and-forward capability removes the need for every domain controller on which updates originate to contact every other domain controller that requires the updates.

Active Directory Replication Topology

The replication topology is the current set of Active Directory connections by which domain controllers in a forest communicate over local area networks (LANs) and wide area networks (WANs) to synchronize the directory partition replicas that they have in common. The replication topology ensures the transfer of changes to all directory partition replicas in the forest without redundancy.

Replication topology generation is dynamic and adapts to network conditions and the availability of domain controllers. To ensure a consistent replication topology, domain controllers use global configuration data to arrive at the same view of domain controller data. They apply the same algorithm to this data to arrive at an identical replication topology. Operating independently, each domain controller contributes to a uniform and efficient replication topology.

Replication topology generation is optimized for speed within sites and for cost between sites. Replication between domain controllers in the same site occurs automatically in response to changes and does not require administrative management. Replication within a site is sent uncompressed to reduce processing time. Replication between domain controllers in different sites can be managed to control the scheduling and routing of replication over WAN links. Replication between sites is compressed so that it uses less bandwidth when sent across WAN links, thereby reducing the cost.

Active Directory Replication Model Technical Reference

Updated: March 28, 2003

The Active Directory directory service replication model ensures that Active Directory data on one domain controller is consistent with the replica of the same data on other domain controllers in the same domain. The Active Directory replication model determines how changes to Active Directory data are propagated and tracked automatically between domain controllers. The replication model allows directory data on each domain controller to be updated directly. Each domain controller maintains replication metadata that indicates the update status both for itself and relative to other domain controllers. In addition, the replication model allows each domain controller to request (pull) only the changes that need to be replicated and to forward changes to other domain controllers that need them.

In this subject

• What Is the Active Directory Replication Model?
• How the Active Directory Replication Model Works
• Active Directory Replication Tools and Settings

What Is the Active Directory Replication Model?

Updated: March 28, 2003

In this section

• Replication Model Components
• Technologies Related to Active Directory Replication
• Active Directory Replication Dependencies
• Related Information

Active Directory replication is the means by which changes to directory data are transferred between domain controllers in an Active Directory forest. The Active Directory replication model defines the mechanisms that allow directory updates to be transferred automatically between domain controllers to provide a seamless replication solution for the Active Directory distributed directory service.

Note: This discussion of the replication model and related mechanisms for transferring directory data between domain controllers does not include the topic of replication topology. Replication topology is the set of connections that are generated by the Knowledge Consistency Checker (KCC) to enable replication to take place between domain controllers.

Active Directory is distributed by means of directory partitions. In addition to directory partitions that store forest-wide data, each domain controller stores a replica of a single domain directory partition, which contains data that is specific to one or more closely aligned business units: the users, computers, organizational units, and network resources that are managed by the same set of service and data administrators. Because each domain controller stores only one domain directory partition, Active Directory can scale to hundreds or thousands of domains storing millions of objects.

To efficiently synchronize data between domain controllers that store the same domain, Active Directory replication transfers updates according to directory partition. Each domain controller receives directory updates only to the data that is stored in its domain, as well as updates to the two directory partitions that store configuration and schema data for the forest.



Note: In Windows Server 2003 forests, domain controllers can also store application directory partitions, which store application data that can be replicated to only those domain controllers that store the directory partition, irrespective of domain.

Active Directory replication manages the transfer of these updates to the appropriate domain controllers automatically, keeping domain data up-to-date among all domain controllers in the domain, regardless of location. In the process, all domain controllers in the forest are also updated with changes to forest-wide data.

Replication Model Components

To globally distribute the directory service, the Active Directory replication model incorporates the components in the following table.

Replication Model Components and Advantages

| Component | Description | Advantage |
| --- | --- | --- |
| Multimaster replication | Every domain controller can receive originating updates to data for which it is authoritative, rather than having a single domain controller that receives all original updates (single-master replication, such as Windows NT 4.0 replication). | Provides fault tolerance, eliminating the dependency on a single domain controller to maintain directory operations. |
| Pull replication | Domain controllers request (pull) changes rather than send (push) changes that might not be needed. | Reduces unnecessary network traffic. |
| Store-and-forward replication | Each domain controller communicates with a subset of domain controllers to transfer replication changes, rather than one domain controller being responsible for communicating with every other domain controller that requires the change. | Balances the replication load among many domain controllers. |
| State-based replication | Each domain controller tracks the state of replication updates. | Conflicts and unnecessary replication are reduced. |

The Active Directory replication model ensures:

• Domain controller availability. Multimaster replication ensures that all domain controllers are available for updates, eliminating the potential for slow service if only a single updatable domain controller were available.
• Efficient transfer of data. State-based and pull replication ensure the minimum replication traffic and the maximum efficiency by retrieving only the changes that are needed.
• Reliable consistency. Directory consistency is guaranteed within the same period of replication latency.
• Conflict resolution. Even if two administrators change the same attribute on different domain controllers at the same time, conflict resolution ensures that only one of the values is replicated to all domain controllers.

Replication Latency

Multimaster replication involves latency: the period of time that an update made on the originating domain controller takes to reach all other domain controllers that need it. To address replication latency, multimaster replication ensures loose consistency with convergence, as follows:

• Loose consistency means that the replicas are not guaranteed to be consistent with each other at any particular point in time, because changes can originate from any replica at any time.
• Convergence means that if the system is allowed to reach a steady state in which no new updates are occurring and all previous updates have been completely replicated, all replicas of the same directory partition are guaranteed to converge on the same set of values.

With multimaster replication, it is not necessary for every domain controller to replicate with every other domain controller. Instead, the system implements a robust set of connections that determines which domain controllers replicate to which other domain controllers, ensuring that networks are not overloaded with replication traffic and that replication latency is not so long that it inconveniences users. The set of connections through which changes are replicated to domain controllers in an enterprise is called the replication topology.

Although it involves latency, multimaster update capability provides high availability of write access to directory objects because several servers can contain writable copies of an object. Each domain controller in the domain can accept updates independently, without communicating with other domain controllers. Active Directory replication resolves any conflicts that occur when multiple updates are made to a single directory object.

State-based Vs. Log-based Replication

In state-based replication, each domain controller (master) in the multimaster system applies updates to its replica as they arrive, without maintaining a change log file. In a typical log-based replication system (also called "change-based" replication), each master keeps a log of the updates that it originated and communicates its log to every other replica. After a log arrives at a replica, the replica applies the log, bringing itself more up to date. In this process, the destination receives and stores a record of all changes, not just the changes it needs.

Active Directory replication relies on the current "state" (the current values of all objects) of the source replica instead of logs. The current state includes metadata that is used to resolve conflicts and to avoid sending the full replica on each replication cycle. Generally speaking, a directory partition replica maintains all of its objects in a list ordered by last modification. This list is a log of sorts, but one whose size is a tiny fraction of the size of the replica itself. A typical replication request can be satisfied by examining only the last few objects in the list, because the replication destination server knows how much of its replication source's list of changes it has already processed.
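The "list ordered by last modification" can be pictured with a short sketch. This is an illustrative model only; the class and field names below are invented, not the actual Windows data structures. The replica stores current values plus a per-object last-write counter, and a typical request is answered from the tail of that ordering.

```python
# Illustrative model of a state-based replica (invented names; not the
# actual Windows implementation). Only current values are stored, and
# objects are ordered by the USN of their most recent local write.

class StateBasedReplica:
    def __init__(self):
        self.current_usn = 0          # local update counter
        self.objects = {}             # guid -> {"attrs": ..., "usn_changed": ...}

    def apply_write(self, guid, attr, value):
        self.current_usn += 1
        obj = self.objects.setdefault(guid, {"attrs": {}, "usn_changed": 0})
        obj["attrs"][attr] = value              # old value overwritten, not logged
        obj["usn_changed"] = self.current_usn   # object moves to tail of ordering

    def changes_since(self, watermark):
        """A destination that has processed everything up to 'watermark'
        needs only the tail of the list, not a full change log."""
        ordered = sorted(self.objects.items(), key=lambda kv: kv[1]["usn_changed"])
        return [(guid, obj) for guid, obj in ordered if obj["usn_changed"] > watermark]
```

Because old values are overwritten rather than appended to a log, this "log of sorts" never grows beyond one entry per object, which is why its size stays a tiny fraction of the replica itself.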

Multimaster Vs. Single-master Replication

Although a single-master model is adequate for a directory that has a small number of replicas and for an environment where all of the changes can be applied centrally, this approach does not scale beyond small organizations, nor does it address the needs of decentralized organizations. Multimaster replication provides the following advantages over single-master replication:

• If one domain controller becomes inoperable, other domain controllers can continue to update the directory. In single-master replication, if the master becomes inoperable, directory updates cannot take place. For example, if the failed server holds your password and your password has expired, you cannot reset your password, and therefore you cannot log on to the domain.
• Servers that are capable of making changes to the directory can be distributed across the network and can be deployed in multiple locations.
• By creating multiple replicas of the directory and keeping the replicas consistent, the directory service can handle more queries per second. Directory services must handle a large number of queries compared to the number of updates they must process; a typical ratio of queries to updates is 99:1.

Pull Vs. Push Replication

In push replication, a source domain controller sends unsolicited information to update a destination domain controller. Push replication is problematic because it is difficult for the source to know what information the destination needs. The destination can receive the same information from another source, so a source can send unnecessary information to a destination.

Technologies Related to Active Directory Replication

File Replication service (FRS) is related to Active Directory replication because it requires the Active Directory replication topology. FRS is a multimaster replication service that is used to replicate files and folders in the System Volume (SYSVOL) shared folder on domain controllers and in Distributed File System (DFS) shared folders. FRS works by detecting changes to files and folders and then replicating the updated files and folders to other replica members, which are connected in a replication topology.

FRS uses the replication topology that is generated by the KCC to replicate the SYSVOL files to all domain controllers in the domain. SYSVOL files are required by all domain controllers for Active Directory to function.

For more information about FRS and how it uses the Active Directory replication topology, see "FRS Technical Reference." For more information about SYSVOL, see "Data Store Technical Reference." For more information about DFS, see "DFS Technical Reference."

Active Directory Replication Dependencies

Active Directory replication has the following dependencies:

• DNS. The Domain Name System (DNS) resolves DNS names to IP addresses. Active Directory requires that DNS is properly designed and deployed so that domain controllers can correctly resolve the DNS names of replication partners.
• Remote procedure call (RPC). Active Directory replication requires IP connectivity and RPC to transfer updates between replication partners.
• Kerberos v5 authentication. The protocol that provides both authentication and encryption, which is required for all Active Directory RPC replication.
• LDAP protocol. The primary access protocol for Active Directory. Replication of an entire replica of an Active Directory domain, as occurs when Active Directory is installed on an additional domain controller in an existing domain, uses LDAP communication rather than RPC.

The following diagram shows the interaction of these components within the replication process.

Replication Interactions with Other Technologies


Related Information

The following resources contain additional information that is relevant to this section.

• How the Active Directory Replication Model Works
• Active Directory Replication Topology Technical Reference
• DNS Support for Active Directory Technical Reference
• Data Store Technical Reference
• FRS Technical Reference
• DFS Technical Reference

How the Active Directory Replication Model Works

Updated: March 28, 2003

In this section

• Active Directory Replication Model Architecture
• Active Directory Replication Model Physical Structure
• Active Directory Data Updates
• Domain Controller Notification of Changes
• Identifying and Locating Replication Partners
• Urgent Replication
• Network Ports Used by Active Directory Replication
• Related Information

Active Directory data takes the form of objects that have properties, or attributes. Each object is an instance of an object class, and object classes and their respective attributes are defined in the Active Directory schema. The values of the attributes define the object, and a change to a value of an attribute must be transferred from the domain controller on which it occurs to every other domain controller that stores a replica of that object. Thus, Active Directory replicates directory data updates at the attribute level. In addition, updates from the same directory partition are replicated as a unit to the corresponding replica on the destination domain controller over the same connection, to optimize network usage.

The information in this section applies to organizations that are designing, deploying, or operating an Active Directory infrastructure that satisfies the following requirements:

• A Domain Name System (DNS) infrastructure is in place that manages the name resolution for domain controllers in the forest. Active Directory-integrated DNS is assumed, wherein DNS zone data is stored in Active Directory and is replicated to all domain controllers that are DNS servers.
• All Active Directory sites have local area network (LAN) connectivity.
• IP connectivity is available between all datacenter locations and branch sites.

The limits for data that can be replicated in one replication cycle are as follows:

• Values that can be transferred in one replication cycle (replication of the current set of updates between a source and destination domain controller): no limit.
• Values that can be transferred in one replication packet: approximately 100. Replication exchanges continue during the course of one replication cycle until no values are left to send.
• Values that can be written in a single database transaction: 5,000. The effect of this limit depends on the forest functional level (see the sketch following this list):
  • Windows 2000 forest functional level: The minimum unit of replication at this level is the entire attribute. Therefore, a change to any value in the linked, multivalued member attribute results in replicating the entire attribute. For this reason, the supported size of group membership is limited to 5,000.
  • Windows Server 2003 or Windows Server 2003 interim forest functional level: The minimum unit of replication is a single value of a linked, multivalued attribute. Therefore, the limitation on group membership is effectively removed.

This section covers the interactions that take place between individual domain controllers to synchronize directory data in an Active Directory forest.
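The sketch below illustrates the unit-of-replication difference in the last list item. The function names and data shapes are invented for illustration; the point is only that at the Windows 2000 forest functional level a one-member change ships the entire member attribute, while linked-value replication ships the single changed value.

```python
# Hypothetical illustration of the unit of replication for the linked,
# multivalued member attribute at the two forest functional levels.

def delta_windows_2000(old_members, new_members):
    """Whole-attribute replication: any change ships every value."""
    if set(old_members) != set(new_members):
        return list(new_members)                # the entire attribute replicates
    return []

def delta_linked_value(old_members, new_members):
    """Linked-value replication: only the changed values ship."""
    old, new = set(old_members), set(new_members)
    return [("add", m) for m in new - old] + [("remove", m) for m in old - new]

group = [f"user{i}" for i in range(4999)]
after_add = group + ["user4999"]

print(len(delta_windows_2000(group, after_add)))   # 5000 values in one unit
print(delta_linked_value(group, after_add))        # [('add', 'user4999')]
```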

Active Directory Replication Model Architecture

Active Directory replication operates within the directory service component of the security subsystem. The directory service component, Ntdsa.dll, is accessed through the Lightweight Directory Access Protocol (LDAP) network protocol and the LDAP C application programming interface (API), as implemented in Wldap32.dll, for directory service updates. The updates are transported over Internet Protocol (IP) as packaged by the replication remote procedure call (RPC) protocol. Simple Mail Transfer Protocol (SMTP) can also be used to prepare non-domain updates for Transmission Control Protocol (TCP) transport over IP. The Directory Replication System (DRS) client and server components interact to transfer and apply Active Directory updates between domain controllers.

When SMTP is used for the replication transport, Ismserv.exe on the source domain controller uses the Collaborative Data Object (CDO) library to build an SMTP file on disk with the replication data as an attached mail message. The message file is placed in a queue directory. When the mail is scheduled for transfer by the mail server application, the SMTP service (Smtpsvc) delivers the mail message to the destination domain controller over TCP/IP and places the file in the drop directory on the destination domain controller. Ismserv.exe then applies the updates on the destination.

The following diagram shows the client-server architecture for replication clients and LDAP clients.

Replication and LDAP Client-Server Architecture

The following table describes the replication architecture components.

Replication Architecture Components

| Component | Description |
| --- | --- |
| Ntdsapi.dll | Manages communication with the directory service over RPC. |
| Private DRS client | Private version of Ntdsapi.dll that runs on domain controllers to make RPC calls for replication. |
| Wldap32.dll | Client library with APIs for access to the directory service. |
| Asn.1 | Encodes and decodes LDAP requests for transport over TCP/IP or UDP/IP. |
| Drs.idl | Set of functions for replication (for example, Get-Changes) and maintenance (for example, get replication status). |
| MAPI (address book) | Entry protocol for address book applications such as Microsoft Outlook. |
| Domain rename | Carries out domain rename instructions. |
| Ntdsa.dll | The directory service module, which supports the Windows Server 2003 and Windows 2000 replication protocol and LDAP, and manages partitions of data. |
| ISMServ.exe | Prepares replication data in e-mail message format for SMTP protocol transport. |
| CDO library | Used by Ismserv.exe to package replication data into a mail message. |
| Smtpsvc | SMTP service. |

Note: If Windows NT 4.0 backup domain controllers (BDCs) are operating in the forest, Windows NT 4.0 Net APIs provide an entry to the security accounts manager (SAM) on the primary domain controller (PDC) emulator.

The protocols that are used by Active Directory replication are described in the following table. RPC and SMTP are the replication transport protocols. LDAP is a directory access protocol, and IP is a network wire protocol.

Active Directory Access and Replication Protocols

| Protocol | Description |
| --- | --- |
| LDAP | The primary directory access protocol for Active Directory. Windows Server 2003 family, Windows XP, Windows 2000 Server family, and Windows 2000 Professional clients, as well as Windows 98, Windows 95, and Windows NT 4.0 clients that have the Active Directory client components installed, use LDAP v3 to connect to Active Directory. |
| IP | Routable protocol that is responsible for the addressing, routing, and fragmenting of packets by the sending node. IP is required for Active Directory replication. |
| Replication RPC | The Directory Replication Service (Drsuapi) RPC protocol, used to enable administration and monitoring of Active Directory replication and to communicate replication status, replication topology, and network topology from a client running administrative tools to a domain controller. RPC is required by Active Directory replication. |
| Replication Simple Mail Transfer Protocol (SMTP) | Replication protocol that can be used by Active Directory replication over the IP network transport for message-based replication between sites only and for non-domain replication only. |

Replication Subsystem

Within the directory service component of the Active Directory architecture, the replication subsystem interacts with the operational layer to implement replication changes on the destination domain controller. The replication subsystem also determines the changes that a replication partner already has and those that are needed. The database layer manages the database capability of the directory service. The extensible storage engine (Esent.dll) communicates directly with individual records in the directory data store.

The following diagram shows the replication subsystem components.

Replication Subsystem Components

The components of the replication subsystem are described in the following table.

Replication Subsystem Components

| Component | Description |
| --- | --- |
| Ntdsa.dll | Directory system agent (DSA), which provides the interfaces through which directory clients and other directory servers gain access to the directory database. |
| Replication | Directory Replication System (DRS) interface, which communicates with the database through RPC. |
| Operational layer | Performs low-level operations on the database without regard for protocol. |
| Database layer | API that resides within Ntdsa.dll and provides an interface between applications and the directory database to protect the database from direct interaction with applications. |
| Extensible storage engine (ESE) | Manages the tables of records, each with one or more columns, that comprise the directory database. |
| Ntds.dit | The directory database file. |


Active Directory Replication Model Physical Structure

The Active Directory replication model components that determine how Active Directory replication functions between domain controllers are associated with mechanisms that effect the automatic transfer of changes between replicating domain controllers, as described in the following table.

Active Directory Replication Model Components and Related Mechanisms

| Component | Description | Related Mechanisms |
| --- | --- | --- |
| Multimaster replication | All domain controllers accept LDAP requests for changes to attributes of Active Directory objects for which they are authoritative, subject to the security constraints that are in place. Each originating update is replicated to one or more other domain controllers, which record it as a replicated update. | LDAP update; directory partitions; change notification; change tracking; conflict resolution |
| Pull replication | When an update occurs on a domain controller, it notifies its replication partner. The partner domain controller responds by requesting (pulling) the changes from the source domain controller. | DNS name resolution; Kerberos authentication; change tracking; change notification; change request |
| Store-and-forward replication | Domain controllers store changes received from replication partners and forward the changes to other domain controllers, so that the originating domain controller for each change is not required to transfer changes to every other domain controller that requires the change. | Change tracking; Kerberos authentication; DNS name resolution; change notification; change request |
| State-based replication | Active Directory replication is driven by the difference between the current "state" (the current values of all attributes) of the directory partition replica on the source and destination domain controllers. This state includes metadata that is used to resolve conflicts and to avoid sending the full replica on each replication cycle. | Change-tracking metadata: update sequence number (USN) counter; up-to-dateness vector; high-watermark |

These mechanisms are implemented by the replication system in a sequence of events that occurs between two domain controllers. The following diagram shows a simplified version of the sequence between source and destination domain controllers when the source initiates replication by sending a change notification.

Replication Sequence


Active Directory Data Updates

When a change is made to an object in a directory partition, the value of the changed attribute or attributes must be updated on all domain controllers that store a replica of the same directory partition. Domain controllers communicate data updates automatically through Active Directory replication. Their communication about updates is always specific to a single directory partition at a time.

Active Directory data is logically partitioned so that all domain controllers in the forest do not store all objects in the directory. Active Directory objects are instances of schema-defined classes, which consist of named sets of attributes. Schema definitions determine whether an attribute can be administratively changed. Attributes that cannot be changed are never updated and therefore never replicated. However, most Active Directory objects have attribute values that can be updated.

Different categories of data are stored in replicas of different directory partitions, as follows:

• Domain data that is stored in domain directory partitions:
  • Every domain controller stores one writable domain directory partition.
  • A domain controller that is a global catalog server stores one writable domain directory partition and a partial, read-only replica of every other domain in the forest. Global catalog read-only replicas contain a partial set of attributes for every object in the domain.
• Configuration data: Every domain controller stores one writable configuration directory partition that stores forest-wide data controlling site and replication operations.
• Schema data: Every domain controller stores one writable schema directory partition that stores schema definitions for the forest. Although the schema directory partition is writable, schema updates are allowed only on the domain controller that holds the role of schema operations master.
• Application data: Domain controllers that are running Windows Server 2003 can store application directory partitions, which store application data. Application directory partition replicas can be replicated to any set of domain controllers in a forest, irrespective of domain.

Changes to Attributes

Active Directory updates originate on one domain controller (originating updates), and the same update is subsequently made on other domain controllers during the replication process (replicated updates). Object update behavior is consistent and predictable: when a set of changes is made to a specific directory partition replica, those changes are propagated to all other domain controllers that store replicas of the directory partition. How soon the changes are applied depends on the distance between the domain controllers and whether the change must be sent to other sites.

The following key points are central to understanding the behavior of Active Directory updates:

• Changes occur at the attribute level; only the changed attribute value is replicated, not the entire object.
• At the time of replication, only the current value of an attribute that has changed is replicated. If an attribute value has changed multiple times between replication cycles (for example, between scheduled occurrences of intersite replication), only the current value is replicated.
• The smallest change that can be replicated in Windows 2000 Active Directory is an entire attribute; even if the attribute is linked and multivalued, all values replicate as a single change. The smallest change that can be replicated in Windows Server 2003 Active Directory is a separate value in a linked, multivalued attribute. This Windows Server 2003 feature is called linked-value replication. The individual values of a multivalued attribute are replicated separately under the following conditions:
  • The forest functional level is Windows Server 2003 interim or Windows Server 2003.
  • The attribute is linked. Linked attributes have the following characteristics: the attribute has distinguished name syntax, and the attribute is marked as linked in the schema.
• An attribute is available for replication as soon as it is written.
• Multimaster conflict resolution is effective without depending on clock synchronization.
• Replication is store-and-forward and moves sequentially through a set of connected domain controllers that host directory partition replicas.
• Originating updates to a single object are written to the database in the same transaction, so partially written objects are not possible and a consistent view of the object is maintained. For replicated updates to large numbers of values in linked, multivalued attributes, such as the member attribute of a group, updates are not always guaranteed to be applied in the same transaction. In this case, the updates are guaranteed to be applied in one or more subsequent transactions in the same replication cycle (all updates from one source are applied at the destination).
• After a replication cycle is initiated, all available changes to a directory partition on the source domain controller are sent to the destination domain controller, including changes that occur while the replication cycle is in progress.

Effect of Schema Changes on Replication

Attribute definitions are stored in attributeSchema objects in the schema directory partition. Changes to attributeSchema objects block other replication until the schema changes are performed. During replication of any directory partition other than the schema directory partition, the replication system first checks whether the schema versions of the source and the destination domain controllers are in agreement. If the versions are not the same, replication of the other directory partition is rescheduled until the schema directory partition is synchronized.

Prior to upgrading a domain controller from Windows 2000 Server to Windows Server 2003, you must update the schema to be compatible with Windows Server 2003. When you run Adprep.exe, the Windows Server 2003 schema is installed in the forest. This process upgrades the schema on each Windows 2000-based domain controller. Thereafter, you can begin upgrading domain controllers to Windows Server 2003.

Note: The Windows Server 2003 schema update adds 25 indexed attributes to the schema directory partition. An update of this size can cause replication delays in a large database. For this reason, domain controllers that are running Windows 2000 Server must be running, at a minimum, Windows 2000 Service Pack 2 (SP2) plus all additional Windows updates. However, it is highly recommended that you install Windows 2000 Service Pack 3 (SP3) on all domain controllers prior to preparing your infrastructure for upgrade to the Windows Server 2003 operating system.
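The version check described above amounts to a gate in front of every non-schema replication cycle. A minimal sketch, with invented names, assuming a scheduler callback for retrying later:

```python
# Simplified sketch (invented names): replication of any partition other
# than the schema partition waits until the schema versions agree.

def can_replicate(partition, source, destination, reschedule):
    if partition != "schema" and source.schema_version != destination.schema_version:
        reschedule(partition)   # synchronize the schema partition first, retry later
        return False
    return True
```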

Effect of Raising the Forest Functional Level on Existing Linked, Multivalued Attributes

Existing linked, multivalued attributes are not directly affected when you raise the forest functional level to enable linked-value replication. These attribute values are converted to replicate as single values only when they are modified. This design avoids the performance effects that would potentially result from rewriting the existing member attribute values of all group objects in the forest at the same time.

Because the member attribute is not converted until it is modified, a group that exceeded the 5,000-member limit in Windows 2000 continues to represent a replication issue, because the original set of members continues to replicate as a unit under the new forest functional level. New members that are added, and any member values that are updated, replicate separately thereafter. Therefore, if the groups that were created in Windows 2000 do not exceed the 5,000-member limit, no replication issues are associated with the group.

Raising the forest functional level does affect the ability of an authoritative restore process to restore the back-links of a restored object that has one or more single-valued or multivalued linked attributes. The version of Ntdsutil that is included with Windows Server 2003 provides this functionality automatically when the forest has a functional level of Windows Server 2003 interim or Windows Server 2003. The version of Ntdsutil that is included with Windows Server 2003 SP1 also provides the ability to restore back-links that were created before the forest functional level was raised to Windows Server 2003 interim or Windows Server 2003. For more information about restoring back-linked attribute values, see "Authoritative Restore" later in this section. For more information about how linked, multivalued attributes are stored, see "How the Data Store Works."

Originating Updates: Initiating Changes

As a Lightweight Directory Access Protocol (LDAP) directory service, Active Directory supports the following four types of update requests:

• Add an object to the directory.
• Modify (add, delete, or replace) attribute values of an object in the directory.
• Move an object by changing the name or parent of the object.
• Delete an object from the directory.

Each LDAP request generates a separate write transaction. An LDAP directory service processes each write request as an atomic transaction; that is, the transaction is either completed in full or not applied at all. The practical limit to the number of values that can be written in one LDAP transaction is approximately 5,000 values added, modified, or deleted at the same time. A write request either commits, in which case all its effects are durable, or it fails before completion and has no effect. A write request that commits is called an originating update. An originating update is initiated and committed at a specific replica. The absolute success or failure of an update applies even to requests that affect several attributes of a single object, such as Add or Modify: if one attribute update fails, they all fail and the object is not updated. An originating update enforces schema restrictions, including allowable parent object types and syntax for mandatory and optional attributes for an object. The restrictions are enforced according to the schema that exists on the domain controller at the moment of the update.
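The four request types map directly onto standard LDAP operations. As a usage illustration (not part of this reference), the sketch below uses the third-party Python ldap3 library; the server name, credentials, and DNs are placeholders.

```python
# Hypothetical example using the third-party ldap3 library. Each call is
# one atomic originating update on the domain controller that receives it.
from ldap3 import Server, Connection, MODIFY_REPLACE, NTLM

conn = Connection(Server("dc1.example.com"),
                  user="EXAMPLE\\administrator", password="...",
                  authentication=NTLM, auto_bind=True)

dn = "cn=Test User,ou=Staff,dc=example,dc=com"

# Add: creates an object with a new objectGUID; attribute stamps start at version 1.
conn.add(dn, "user", {"sAMAccountName": "testuser"})

# Modify: replaces the current value; the attribute's version increments.
conn.modify(dn, {"description": [(MODIFY_REPLACE, ["Staff account"])]})

# Move: a special modify of the object's name or parent.
conn.modify_dn(dn, "cn=Test User",
               new_superior="ou=Contractors,dc=example,dc=com")

# Delete: turns the object into a tombstone rather than removing it outright.
conn.delete("cn=Test User,ou=Contractors,dc=example,dc=com")
```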

Originating Add

An Add request makes a new object with a unique objectGUID attribute. The values of all replicated attributes that are set by the Add request are stamped Version = 1. The Add request fails immediately if the parent object does not exist or if the originating domain controller does not contain a writable replica of the parent object's directory partition.

Originating Modify

All Modify operations replace the current value of an attribute with a new value. A Modify request can specify one of the following:

• That an attribute be deleted from the object. Attribute deletion is best thought of as replacing the attribute value with NULL. The NULL value occupies no storage of its own but does carry a stamp, as does any value that is stored as a directory attribute.
• That a value be added to the current value of an attribute, as when modifying an attribute that can have multiple values. The effect is to replace the current values with the current values plus the added value.

For each attribute in the request, a Modify request compares the new value in the request with the existing value. If the values are the same, the request to modify that attribute is ignored. If the resulting Modify request does not change any attributes of the object, the entire request is ignored. Otherwise, a Modify request computes a stamp in the metadata for each new replicated attribute value by reading the version from the existing value (version = 0 for an attribute that has never been written) and then adding 1 to this value. The Modify request replaces the old stamp values with new stamp values.
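A sketch of the stamp computation just described, with invented field names. The rule is: skip writes that do not change the value, and for each attribute that does change, read the existing version (0 if the attribute has never been written) and add 1.

```python
# Illustrative stamp computation for an originating Modify request.

def originating_modify(obj, changes, now, dc_guid):
    applied = {}
    for attr, new_value in changes.items():
        current = obj.get(attr)                    # (value, version) or None
        if current is not None and current[0] == new_value:
            continue                               # same value: ignore this attribute
        version = (current[1] if current else 0) + 1
        obj[attr] = (new_value, version)
        applied[attr] = {"version": version,       # stamp: volatility first,
                         "originating_time": now,  # timestamp second
                         "originating_dc": dc_guid}
    return applied    # empty dict: the entire request is ignored
```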

Originating Move

A Move request is essentially a special Modify request for a single attribute, the name attribute. The operation proceeds as described for the Modify request.

Originating Delete

A Delete request is essentially a special Modify request that performs the following series of operations:

1. Sets the isDeleted attribute to TRUE, which marks the object as a tombstone (an object that has been deleted but not fully removed from the directory).
2. Changes the relative distinguished name of the object to a value that cannot be set by an LDAP application (a value that is impossible).
3. Strips all attributes that are not needed by Active Directory. A few important attributes (including objectGUID, objectSid, distinguishedName, nTSecurityDescriptor, and uSNChanged) are preserved on the tombstone.

   Note: Because these attributes are preserved, tombstones can be restored (reanimated) by applications that use the LDAP API for undeleting an object. On domain controllers running Windows Server 2003 with SP1, the sIDHistory attribute is also retained on tombstone objects.

4. Moves the tombstone to the Deleted Objects container, which is a hidden container within those directory partitions that allow deletions.

On domain controllers running Windows Server 2003, you can restore tombstones in the Deleted Objects container to an active state.
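Condensed into a sketch, the delete sequence looks like the following. The preserved set lists only the attributes named above (the real set is larger), and the mangled relative distinguished name is purely illustrative.

```python
# Simplified tombstone processing (illustrative; not the on-disk format).
PRESERVED = {"objectGUID", "objectSid", "distinguishedName",
             "nTSecurityDescriptor", "uSNChanged", "isDeleted", "rdn"}

def delete_object(obj):
    obj["isDeleted"] = True                        # 1. mark as a tombstone
    # 2. give it a relative distinguished name no LDAP application could set
    obj["rdn"] = obj["rdn"] + "#DEL:" + obj["objectGUID"]
    for attr in list(obj):                         # 3. strip everything else
        if attr not in PRESERVED:
            del obj[attr]
    obj["container"] = "Deleted Objects"           # 4. move the tombstone
    return obj
```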

Configuration Objects Protected from Deletion

Certain objects in the configuration directory partition are critical to the functioning of a domain controller. These objects and their child objects are protected from deletion.

Reanimation of Protected Objects

Each domain controller protects the following objects from deletion:

• The cross-reference (class crossRef) objects that represent the writable directory partitions that are stored on the domain controller.
• The RID object for the domain controller.

These objects are protected at that domain controller, as follows:

• The domain controller rejects an originating deletion of a protected object.
• The domain controller does not carry out a replicated deletion of a protected object. Instead, the threatened protected object is revived by updating its replication metadata as if each attribute had just been updated. The update is then replicated out, thereby reanimating the deleted object.

Reanimation of the NTDS Settings Object

The NTDS Settings (class nTDSDSA) object is also protected from deletion. On domain controllers running Windows 2000 Server or Windows Server 2003 with no service pack installed, the NTDS Settings object is reanimated, and partner domain controllers continue to attempt to replicate with it. However, because the NTDS Settings object represents a domain controller in the replication topology, preserving it as a replication source when a domain controller has been removed from service is counterproductive and represents a security risk. For example, if a domain controller is demoted (that is, Active Directory is removed by running Dcpromo.exe), another server with the same name can be installed and mistaken by a replication partner for the demoted domain controller.

To eliminate the possibility of improper replication attempts, domain controllers running Windows Server 2003 with SP1 disable the ability of the preserved NTDS Settings object to receive replication requests. Although the object is preserved on the domain controller that deleted it, replication attempts with the server that is represented by the deleted NTDS Settings object are discontinued. For information about how NTDS Settings replication metadata is preserved, see "Preservation of Replication Metadata" later in this section.

Replicated Update Tracking by Domain Controllers

A replicated update is performed on one domain controller as a result of receiving replication of an originating update that was performed at another domain controller. There is not necessarily a one-to-one correspondence between originating and replicated updates. A single replicated update might reflect a set of originating updates (even updates originating at different domain controllers) to the same object. For example, the manager of a user object can be changed at one domain controller at the same time that the address of the same user is changed at another domain controller. A third domain controller might receive these changes separately and in turn replicate them to a fourth domain controller in a single replicated update.

To avoid endless replication of the same update and reapplication of an update that is received from different replication partners, a domain controller must be able to recognize replicated updates that it has already received as opposed to those that it has not. Some directory services use timestamps to determine what changes need to be propagated, on the basis of preserving the last write. But keeping time closely synchronized in a large network is difficult. When the latest time of a directory write is the only means of determining which of two changes is recorded and replicated, skewed time on a domain controller can result in data loss or directory corruption.

Active Directory replication does not primarily depend on time to determine what changes need to be propagated. Instead, it uses update sequence numbers (USNs) that are assigned by a counter that is local to each domain controller. Because these USN counters are local, it is easy to ensure that they are reliable and never run backward (that is, they cannot decrease in value).

Note: Replication uses the Kerberos v5 authentication protocol for security, which does require that the time services on domain controllers are synchronized.

When a conflict occurs, instead of using timestamps as the primary mechanism to determine which updates are preserved, Active Directory uses volatility (the number of changes) as the first element of the per-attribute stamps that are compared during conflict resolution. The second element is a timestamp. Therefore, if an attribute is updated once on domain controller A and once on domain controller B, the last writer's update is preserved. But if the attribute is updated on domain controller A, then on domain controller B, and then again on domain controller A, the update of domain controller A is preserved even if the clock of domain controller B is set forward from that of domain controller A. With Active Directory, clock skew can never prevent a value from being overwritten.

Domain controllers use USNs to simplify recovery after a failure. When a domain controller is restored following a failure, it queries its replication partners for changes with USNs greater than the USN of the last change it received from each partner. You do not need to be concerned with USNs and their implications unless you experience problems with replication that require troubleshooting.
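The ordering described above can be expressed directly: the version (volatility) is compared first, and the timestamp only breaks ties, so a skewed clock cannot beat a more frequently updated value. A minimal sketch; Active Directory also breaks exact ties deterministically (for example, by the originating server's GUID), which the tuple comparison below models loosely.

```python
# Illustrative conflict resolution between two per-attribute stamps.
# stamp = (version, originating_time, originating_dc_guid)

def winning_stamp(a, b):
    return max(a, b)        # tuple comparison: version, then time, then GUID

dc_a = (3, "2003-03-28T10:00Z", "guid-A")   # changed twice more on DC A
dc_b = (2, "2003-03-28T11:59Z", "guid-B")   # changed once on DC B, later clock

assert winning_stamp(dc_a, dc_b) == dc_a    # volatility beats the clock
```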

Server Object GUID (DSA GUID) and Server Database GUID (Invocation ID)

The server object that represents a domain controller in the Sites container of the configuration directory partition has a globally unique identifier (GUID) that identifies it to the replication system as a domain controller. This GUID, called the DSA (directory system agent) GUID, is used in USNs to track originating updates. It is also used by domain controllers to locate replication partners.

The DSA GUID is the GUID of the NTDS Settings object (class nTDSDSA), which is a child object of the server object. Its value is stored in the objectGUID attribute of the NTDS Settings object. The DSA GUID is created when Active Directory is initially installed on the domain controller and destroyed only if Active Directory is removed from the domain controller. The DSA GUID ensures that the DSA remains recognizable when a domain controller is renamed. The DSA GUID is not affected by the Active Directory restore process.

The Active Directory database has its own GUID, which the DSA uses to identify the database instance (version of the database). The database GUID is stored in the invocationId attribute on the NTDS Settings object. Unlike the DSA GUID, which never changes for the lifetime of the domain controller, the invocation ID is changed during an Active Directory restore process to ensure replication consistency. For more information about replication following a restore process, see "Active Directory Replication on a Restored Domain Controller" later in this section. On domain controllers that are running Windows Server 2003, the invocation ID also changes when an application directory partition is removed from or added to the domain controller.

Determining Changes to Replicate: Update Sequence Numbers

A source domain controller uses USNs to determine what changes have already been received by a destination domain controller that is requesting changes. The destination domain controller uses USNs to determine what changes it needs to request.

The current USN is a 64-bit counter that is maintained by each Active Directory domain controller as the highestCommittedUsn attribute on the rootDSE object. At the start of each update transaction (originating or replicated), the domain controller increments its current USN and associates this new value with the update request.

Note: The rootDSE (DSA-specific entry) represents the top of the logical namespace for one domain controller. RootDSE has no hierarchical name or schema class, but it does have a set of attributes that identify the contents of a given domain controller.

The current USN value is stored on an updated object as follows (a sketch of this bookkeeping appears after the list):

• Local USN: The USN for the update is stored in the metadata of each attribute that is changed by the update as the local USN of that attribute (originating and replicated writes). As the name implies, this value is local to the domain controller where the change occurs.
• uSNChanged: The maximum local USN among all of an object's attributes is stored as the object's uSNChanged attribute (originating and replicated writes). The uSNChanged attribute is indexed, which allows objects to be enumerated efficiently in the order of their most recent attribute write.

  Note: When the forest functional level is Windows Server 2003 or Windows Server 2003 interim, discrete values of linked, multivalued attributes can be updated individually. In this case, there is a uSNChanged associated with each link in addition to the uSNChanged associated with each object. Therefore, updates to individual values of linked, multivalued attributes do not affect the local USN, only the uSNChanged attribute on the object.

• Originating USN: For an originating write only, the update's USN value is stored with each updated attribute as the originating USN of that attribute. Unlike the local USN and uSNChanged, the originating USN is replicated with the attribute's value.
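Here is that sketch, using invented structures. Note which fields travel with replication (the originating USN, as part of the stamp) and which are purely local (the local USN and uSNChanged).

```python
# Illustrative USN bookkeeping for one domain controller.

class DomainController:
    def __init__(self, dsa_guid):
        self.dsa_guid = dsa_guid
        self.current_usn = 0                     # 64-bit counter; never decreases

    def write(self, obj, attr, value, originating=None):
        self.current_usn += 1
        usn = self.current_usn
        if originating is None:                  # an originating write on this DC
            originating = {"usn": usn, "dc": self.dsa_guid}
        meta = obj["meta"].setdefault(attr, {})
        meta["local_usn"] = usn                  # local only; never replicated
        meta["originating"] = originating        # replicated with the value
        obj["attrs"][attr] = value
        obj["usn_changed"] = usn                 # max local USN across attributes

# A replicated write passes the source's originating metadata through
# unchanged, while the local USN and uSNChanged are assigned locally:
dc2 = DomainController("guid-DC2")
user = {"attrs": {}, "meta": {}, "usn_changed": 0}
dc2.write(user, "userPassword", "...", originating={"usn": 4711, "dc": "guid-DC1"})
```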

Tracking Object Creation, Replication, and Change

The following series of diagrams illustrates the replication-related data for a single object and one of its attributes as the object goes from creation through replication. The first diagram shows the replication-related data for a user object when it is first created on domain controller DC1. Before the user object is created, the current USN for the domain controller is 4710. When the object is created, the local USN of 4711 is assigned to each attribute of the user object, and the current USN for the domain controller increments from 4710 to 4711. Because the object has not yet changed, the value of its uSNChanged attribute is the same as its uSNCreated attribute: 4711.

1. Replication-related Data on DC1 When a User Object is Created

The next diagram shows the change to the destination domain controller when the new user object is replicated. The object is created as a replicated update on DC2. Notice that the per-attribute originating USN and stamp (version, originating time, originating DC) are replicated from DC1 to DC2, but the per-attribute local USN and per-object uSNChanged are unique to DC2.

2. Replication-related Data on DC2 When a New User Object is Replicated From DC1

The following information is transferred in the metadata of an updated attribute value from the source domain controller to the destination domain controller:

• The originating USN value for the updated attribute, which is the USN assigned by the domain controller on which the update was made.
• The stamp, which is used to resolve conflicts.

The next diagram illustrates the change in the replicated object on DC2 when someone changes the password (the userPassword property in the diagram) of the object on that domain controller. By this time, the current USN on DC2 has increased from 1746 to 2001. The update request changes the password and increments the current USN to 2002 on DC2. The request also sets the password attribute's originating USN and local USN to 2002 and creates a new stamp for the password value. The version number of this password's stamp is 2, which is one version number higher than the version of the previous password.

3. Replication-related Data on DC2 After the User Password Value Has Been Changed on DC2

In the next diagram, the changed password is replicated back to the original domain controller, whose current USN has increased to 5039. The replicated update increments the current USN of DC1 to 5040. The per-attribute originating USN and stamp (version, originating time, originating DC) are replicated from DC2 to DC1, and the per-attribute local USN and per-object uSNChanged values are set to 5040.

4. Replication-related Data on DC1 After the Password Change Has Replicated to DC1

Replication Request Filtering

Destination domain controllers use the originating USN to track the changes they have received from other domain controllers with which they replicate. When requesting changes from a source domain controller, the destination informs the source of the updates it has already received, so that the source never replicates changes that the destination does not need. Two values are used by source and destination domain controllers to filter updates when the destination requests changes from the source replication partner:

• Up-to-dateness vector. The current status of the latest originating updates to occur on all domain controllers that store a replica of a specific directory partition.
• High-watermark (direct up-to-dateness vector). The latest originating update to a specific directory partition that has been received by a destination from a specific source replication partner during the current replication cycle.

Both of these values specify the invocation ID of the source domain controller.

Attributes to Send for Replication: Up-to-Dateness Vector

The up-to-dateness vector is a value that the destination domain controller maintains for tracking the originating updates that are received from all source domain controllers. When a destination domain controller requests changes for a directory partition, it provides its up-to-dateness vector to the source domain controller. The source domain controller uses this value to reduce the set of attributes that it sends to the destination domain controller.

The up-to-dateness vector contains an entry for each domain controller that holds a full replica of the directory partition. The up-to-dateness vector values include the database GUID (invocation ID) of the source domain controller and the highest originating write (based on the USN) received from that domain controller. If the up-to-dateness entry that corresponds to source domain controller X contains the USN n, the destination domain controller guarantees that it holds all updates to a specific directory partition that originated at domain controller X and that have an originating USN value of less than or equal to n.

If the destination already has an up-to-date value, the source domain controller does not send that attribute. If the source has no attributes to send for an object, it sends no information at all about that object.

At the completion of a successful replication cycle between two replication partners, the source domain controller returns its up-to-dateness vector to the destination, including the highest originating USN on the source domain controller. The destination merges this information into its own up-to-dateness vector. In this way, the destination tracks the latest originating update it has received from each partner, as well as the status of every other domain controller that stores a replica of the directory partition.
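The attribute filter described above reduces to a lookup in the destination's vector. A sketch under assumed shapes: the up-to-dateness vector maps a source invocation ID to the highest originating USN the destination has received from that source.

```python
# Illustrative attribute filtering with the up-to-dateness vector.
# utd_vector: {invocation_id: highest originating USN received}

def attributes_to_send(obj_meta, utd_vector):
    send = {}
    for attr, meta in obj_meta.items():
        origin_dc = meta["originating"]["dc"]
        origin_usn = meta["originating"]["usn"]
        # The destination guarantees that it already holds all updates from
        # origin_dc with an originating USN <= utd_vector[origin_dc].
        if origin_usn > utd_vector.get(origin_dc, 0):
            send[attr] = meta
    return send        # empty: nothing at all is sent for this object
```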

Timestamp on Up-to-Dateness Vector in Windows Server 2003

On domain controllers that are running Windows Server 2003, the up-to-dateness vector includes a timestamp that represents the last time the local (destination) domain controller completed a full replication cycle with the source domain controller. The replication cycle may have occurred directly (direct replication partner) or indirectly (transitive replication partner). The timestamp is recorded whether or not the local domain controller actually received any changes from the partner. By examining the timestamps, a domain controller can quickly identify other domain controllers that are not replicating. Warning messages are posted to the event log on each domain controller when non-replicating partners are discovered (Event ID 1864 in the Directory Service event log).

Objects to Consider for Replication: High-Watermark (Direct Up-to-Dateness Vector)

The high-watermark, or direct up-to-dateness vector, is a value that the destination domain controller maintains during replication to keep track of the most recent attribute change that it has received from a specific source domain controller for an object in a specific directory partition. When sending changes to a destination domain controller, the source domain controller provides the changes in increasing order of uSNChanged. Although the uSNChanged values from the source domain controller are not stored on objects at the destination domain controller, the destination domain controller keeps track of the uSNChanged value of the most recent object that was successfully updated from the source domain controller for a specific directory partition. This USN is called the destination's high-watermark with respect to the directory partition and the source domain controller.

When requesting changes during a replication cycle, the destination provides the high-watermark value with each request to the source domain controller, which in turn uses this value to filter the objects that it considers for continuing replication to the destination. If the uSNChanged value of an object on the source domain controller is less than or equal to the high-watermark value of the destination domain controller, the object update has already been received by the destination domain controller and is therefore not replicated. The high-watermark serves to decrease the CPU time and number of disk I/O operations that would otherwise be required.

The up-to-dateness vector and the high-watermark are complementary filter mechanisms that work together to decrease replication latency. Whereas the high-watermark prevents irrelevant objects from being considered by the source domain controller with respect to a single destination, the up-to-dateness vector helps the source domain controller to filter irrelevant attributes (and entire objects if all attributes are filtered) on the basis of the relationships between all sources of originating updates and a single destination. For a specific directory partition, a domain controller maintains a high-watermark value for only those domain controllers from which it requests changes, but it maintains an up-to-dateness vector entry for every domain controller that has ever performed an originating update, which is typically every domain controller that holds a full replica of the directory partition.
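To make the interaction of the two filters concrete, the following minimal Python sketch models a source domain controller deciding what to send. It is illustrative only; the class and function names (ObjectUpdate, changes_to_send, and so on) are invented for this example and do not correspond to any Active Directory interface.

    # Illustrative sketch only: how a source DC might filter updates using
    # the destination's high-watermark and up-to-dateness vector.
    from dataclasses import dataclass
    from typing import Dict, List

    @dataclass
    class AttrUpdate:
        originating_dsa: str   # invocation ID (GUID) of the originating DC
        originating_usn: int   # USN assigned by the originating DC

    @dataclass
    class ObjectUpdate:
        usn_changed: int              # uSNChanged on the source DC
        attrs: Dict[str, AttrUpdate]  # attribute name -> latest originating write

    def changes_to_send(objects: List[ObjectUpdate],
                        high_watermark: int,
                        utd_vector: Dict[str, int]) -> List[ObjectUpdate]:
        """Return only the updates the destination does not already have."""
        result = []
        for obj in objects:
            # High-watermark filter: skip whole objects the destination
            # has already received from this source.
            if obj.usn_changed <= high_watermark:
                continue
            # Up-to-dateness vector filter: skip attributes whose originating
            # write is already covered by the destination's vector entry.
            needed = {
                name: a for name, a in obj.attrs.items()
                if a.originating_usn > utd_vector.get(a.originating_dsa, 0)
            }
            if needed:  # if no attributes survive, send nothing for the object
                result.append(ObjectUpdate(obj.usn_changed, needed))
        return result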

Multiple Paths Without Redundant Replication

Multiple replication paths can exist between a pair of domain controllers. Multiple paths provide fault tolerance and can reduce latency. However, when multiple paths exist, you might expect the same change to be sent along each path to a specific domain controller or that a change might replicate in an endless loop. Active Directory prevents these potential problems with multiple paths by using the up-to-dateness vector. The ability to eliminate redundancy is called "propagation dampening."

The following is an example of how replication ordinarily occurs:

1. DC A updates a password attribute. In this example, the originating USN of the attribute is set to 3.

2. Destination DC B requests changes from source DC A and sends its high-watermark and up-to-dateness vector to DC A.

3. According to the high-watermark that was passed by DC B, source DC A examines one or more objects, one of which contains the changed password. When DC A encounters the changed password attribute, it proceeds as follows:

   1. First, DC A finds that the originating directory system agent (DSA) of the password attribute is DC A.

      Note The DSA is the server-side process that creates an instance of a directory service. The DSA provides access to the physical store of directory information located on a hard disk.

   2. Therefore, DC A reads the up-to-dateness vector supplied by DC B and finds that DC B is guaranteed to be up-to-date with updates that originated at DC A and that have an originating USN of less than or equal to 2.

   3. DC A then finds that the originating USN of the password attribute is 3.

   4. Because 3 is greater than 2, DC A sends the changed password attribute to DC B.

To illustrate propagation dampening, suppose that DC B had already received the password update from DC C, which had received it from DC A. In this case, the entry in the up-to-dateness vector of DC B for DC A would contain the USN value 3, not 2. Therefore, DC A would not send the changed password to DC B.

For information about viewing the replication metadata, see "Active Directory Replication Tools and Settings."

Multimaster Conflict Resolution Policy

Active Directory must ensure that all domain controllers agree on the value of the updated attribute after replication occurs. The general approach to resolving conflicts is to order all update operations (Add, Modify, Move, and Delete) by assigning a globally unique (per-object and per-attribute) stamp to the originating update. Thus, each replicated attribute value (or multivalue) is stamped during the originating update, and this stamp is replicated with the value.

Conflict Resolution Stamp

The stamp that is applied during an originating write has the following three components:

• The version is a number that is incremented for each originating write. The version of the first originating write is 1. The version of each successive originating write is increased by 1.

• The originating time is the time of the originating write, to a one-second resolution, according to the system clock of the domain controller that performed the write.

• The originating DC is the DSA GUID of the domain controller that performed the originating write.

When stamps are compared, the version is the most significant, followed by the originating time and then the originating DC. If two stamps have the same version, the originating time almost always breaks the tie. In the extremely rare event that the same attribute is updated on two different domain controllers during the same second, the originating DC breaks the tie in an arbitrary fashion. Two different originating writes of a specific attribute of a particular object cannot assign the same stamp because each originating write advances the version at a specified originating domain controller. The originating time does not contribute to uniqueness. Replicated writes cannot decrease the version because values with smaller versions lose during conflict resolution. You can see all three components of the stamp in the output of the repadmin /showobjmeta command.
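The ordering of stamps can be modeled directly with tuple comparison. The following Python sketch is illustrative only; the Stamp structure and winning_value function are invented names, not an Active Directory interface. Python compares tuples element by element, which matches the documented precedence: version first, then originating time, then originating DC.

    # Illustrative sketch only: stamp comparison for conflict resolution.
    from typing import NamedTuple

    class Stamp(NamedTuple):
        version: int           # incremented on each originating write
        originating_time: int  # seconds since some epoch, 1-second resolution
        originating_dc: str    # DSA GUID of the DC that performed the write

    def winning_value(stamp_a: Stamp, value_a, stamp_b: Stamp, value_b):
        """Return the value whose stamp orders higher."""
        return value_a if stamp_a > stamp_b else value_b

    # The version dominates: a later clock time cannot beat a higher version.
    old = Stamp(version=2, originating_time=1000, originating_dc="GUID-A")
    new = Stamp(version=3, originating_time=900, originating_dc="GUID-B")
    assert winning_value(old, "x", new, "y") == "y"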

In the case of a conflict, the ordering of stamps allows a consistent resolution, as described in the following table.

Replication Conflict Resolution

Replication conflict: Attribute value conflict
Description: A Modify operation sets the value of an attribute. Concurrently, at another domain controller, a Modify operation sets the value of the same attribute to a different value.
Resolution: The attribute value at all domain controllers is the value with the larger version number in its stamp.

Replication conflict: Add or Move under deleted parent, Delete non-leaf object
Description: An Add or Move operation makes an object a child of a parent object. Concurrently, at another domain controller, a Delete operation deletes the parent object.
Resolution: At all domain controllers, the parent object is deleted and the child object is made a child of the special LostAndFound container in the directory partition. Stamps are not involved in the resolution.

Replication conflict: Relative distinguished name conflict
Description: An Add or Move operation names a child object below a parent object. Concurrently, at another domain controller, an Add or Move operation names a different child of the same parent with the same child name, resulting in two child objects with identical relative distinguished name values below the same parent object.
Resolution: The child object whose naming attribute has the larger version number in its stamp keeps its given name. The child object whose relative distinguished name attribute (for example, CN for most objects, OU for organizational units, DC for domain components) has the smaller version number in its stamp is renamed by the following convention: at all domain controllers, a system-assigned value that is unique to the conflicting name and cannot conflict with any client-assigned value is assigned to the child object. For example, if the relative distinguished name of a child object was "CN=ABC" before conflict resolution, its relative distinguished name after resolution is "CN=ABC*CNF:<GUID>", where "*" represents a reserved character, "CNF" is a constant that indicates a conflict resolution, and "<GUID>" represents a printable representation of the objectGUID attribute value.
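As an illustration of the renaming convention in the last row, the following Python sketch constructs a conflict-mangled relative distinguished name. It assumes that the reserved character is the line-feed character (0x0A), which is how conflict-renamed objects commonly appear in practice; the function name is invented for this example.

    # Illustrative sketch only: building a conflict-mangled RDN. Assumes the
    # reserved character is 0x0A; verify against your own environment.
    import uuid

    def conflict_rdn(rdn_value: str, object_guid: uuid.UUID) -> str:
        """Name assigned to the losing object in an RDN conflict."""
        return f"{rdn_value}\nCNF:{object_guid}"

    guid = uuid.UUID("b95f4bcc-cc87-4b33-8252-a298c87a24c3")  # example GUID
    print(repr(conflict_rdn("ABC", guid)))
    # -> 'ABC\nCNF:b95f4bcc-cc87-4b33-8252-a298c87a24c3'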

Preservation of Replication Metadata

On domain controllers running Windows Server 2003 with no service pack installed, replication metadata of the NTDS Settings object is maintained by default after Active Directory is removed from a domain controller. The mechanism for preserving replication metadata is disabled in Windows Server 2003 SP1.

How Replication Metadata is Preserved in Windows Server 2003

The period of time during which the replication metadata of the NTDS Settings object is maintained after Active Directory is removed from the respective domain controller is determined by an attribute of the Directory Service object (cn=Directory Service,cn=Windows NT,cn=Services,cn=Configuration,dc=ForestRootDomainName). This attribute, replTopologyStayOfExecution, has a default value of 14 days and a maximum value of half the tombstone lifetime. For example, if you set the tombstone lifetime to 30 days and the replTopologyStayOfExecution value to 20 days, the actual stay-of-execution value is 15 days.

The root object of each directory partition replica has a multivalued attribute repsFrom that contains configuration and persistent state information associated with inbound replication from each source replica of that directory partition. A destination domain controller uses this information to poll source domain controllers for replication changes. The Knowledge Consistency Checker (KCC) is the replication topology component that manages creation of connection objects. When the KCC detects a repsFrom value with no corresponding connection object, as is the case when the NTDS Settings object has been deleted, the KCC checks the tombstone of the NTDS Settings object for the time it was deleted and compares that time to the interval in replTopologyStayOfExecution. If the NTDS Settings object has been deleted for a time that is equal to or greater than the value in replTopologyStayOfExecution, the KCC removes the repsFrom value and replication is no longer attempted with the deleted server. For more information about the data that is stored in repsFrom, see "Destination Identification of Source Replication Partners" later in this section.

Prior to the end of the stay-of-execution lifetime, you can see evidence of failed replication attempts in the Directory Service event log and in the output of Windows Support Tools such as Repadmin.exe. When replication attempts come from a server whose NTDS Settings object has been deleted, they are likely due to the stay-of-execution mechanism.
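The stay-of-execution arithmetic can be summarized in a short sketch. The following Python fragment is illustrative only and simplifies the KCC's behavior; the function names are invented, and the 14-day default and half-tombstone cap come from the description above.

    # Illustrative sketch only: the KCC's stay-of-execution check, simplified.
    from datetime import datetime, timedelta

    def effective_stay_of_execution(stay_days: int, tombstone_days: int) -> int:
        """The stay-of-execution value is capped at half the tombstone lifetime."""
        return min(stay_days, tombstone_days // 2)

    def kcc_removes_repsfrom(ntds_deleted_at: datetime, now: datetime,
                             stay_days: int = 14, tombstone_days: int = 60) -> bool:
        """True when the deleted NTDS Settings object is old enough that the
        KCC removes the corresponding repsFrom value."""
        cap = effective_stay_of_execution(stay_days, tombstone_days)
        return now - ntds_deleted_at >= timedelta(days=cap)

    # The example from the text: tombstone lifetime 30, setting 20 -> 15 days.
    assert effective_stay_of_execution(20, 30) == 15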

Disabled replTopologyStayOfExecution in Windows Server 2003 SP1

On domain controllers running Windows Server 2003 with SP1, the stay-of-execution mechanism is disabled so that replication attempts do not continue after a domain controller is demoted. Without the repsFrom metadata, destination domain controllers no longer poll for replication from the deleted source. Eliminating this mechanism ensures that computers attempting to maliciously spoof the deleted domain controller cannot respond to polling for replication changes. However, if you provide a value for replTopologyStayOfExecution, the stay-of-execution mechanism takes effect according to that value.

Replication of Linked and Nonlinked Attributes

Attributes are replicated differently depending on whether they are linked or nonlinked. Understanding the differences in how these attributes operate is helpful for understanding their effect on replication.

Linked Attributes

A linked attribute represents an interobject, distinguished-name reference. Linked attributes occur in pairs: forward link and backward link (back-link). The forward link is the linked attribute on the source object (for example, the member attribute on the group object), while the backward link is the linked attribute on the target object (for example, the memberOf attribute on the user object). A back-link value on any object instance consists of the distinguished names of all the objects that have the object's distinguished name set in their corresponding forward link.

For example, manager and directReports are a pair of linked attributes, where manager is the forward link and directReports is the back-link. If Bill is Joe's manager and the distinguished name of Bill's user object is stored in the manager attribute of Joe's user object, then the distinguished name of Joe's user object appears in the directReports attribute of Bill's user object.

A linked attribute can have either single or multiple values. For example, the manager attribute identifies the distinguished name of a single manager of the object or objects that are managed. The directReports attribute of a user object can have multiple values of user names.

The relationships between linked attributes are stored in a separate table in the directory database as link pairs. The matching pair of link IDs (defined as any two numbers 2N and 2N+1) ties the attributes together. For example, the member attribute has a link ID of 2 and the memberOf attribute has a link ID of 3. Because the member and the memberOf attributes are linked in the database and indexed for searching, the directory can be examined for all records in which the link pair is member/memberOf and the memberOf attribute identifies the group (for example, "What user objects have group X as a value in their memberOf attribute?"). Attributes are marked in the schema as being linked. Attributes with the distinguished name syntax Object(DS-DN), Object(DN-String), or Object(DN-Binary) can be linked, but not all such attributes are linked.
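The 2N/2N+1 pairing rule can be expressed in a couple of lines. The following Python sketch is illustrative only; the helper names are invented for this example.

    # Illustrative sketch only: the link-pair rule, where a linked-attribute
    # pair is any two link IDs 2N (forward link) and 2N + 1 (back-link).
    def is_forward_link(link_id: int) -> bool:
        return link_id % 2 == 0

    def back_link_id(forward_link_id: int) -> int:
        """Return the link ID of the back-link paired with a forward link."""
        if not is_forward_link(forward_link_id):
            raise ValueError("only forward links (even IDs) have back-links")
        return forward_link_id + 1

    assert back_link_id(2) == 3  # member (2) pairs with memberOf (3)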

Nonlinked Attributes

Nonlinked, distinguished-name attributes reference other objects in the same way that linked attributes do, except that their behavior differs when a referred-to object is deleted, as described in "Replication of Deletion Updates" later in this section. In addition, nonlinked, multivalued attributes have an approximate limit of 1,200 values (increased from the Windows 2000 Server limit of approximately 800 values). This limit is based on an approximate maximum page size of 8 kilobytes. For attributes within this maximum size, there are no storage or replication drawbacks or limitations. For more information about data size requirements, see "How the Data Store Works."

USNs and Database Transactions

Originating and local USNs effectively identify the database transactions that record originating and replicated changes. Local USNs represent a local database transaction; therefore, changes that carry the same local USN were applied in the same database transaction. For example, when two updates originate on different systems, they are assigned different originating GUIDs and originating USN values. By definition, the local USN for each originating change is the same as the respective originating USN. Then, these changes replicate to a third domain controller independently. The originating USNs stay the same, but the local USNs now change to reflect the two separate transactions. When this set of changes replicates to a fourth domain controller, the two changes travel together. When they are applied at the fourth system, the originating USNs are still different, but the local USNs are now the same, reflecting their application in the same transaction.

Individual values of a multivalued attribute have slightly different rules. On origination, each unique value is assigned its own originating USN. The values replicate out independently, in roughly increasing USN order; however, atomicity between values is not guaranteed. When a set of values is applied on the destination domain controller, they are applied in batches to the same object. The batch size is approximately 100 values. Therefore, when they are written on the destination domain controller, their originating USNs are still different, but the values on the same object, in groups of approximately 100, have the same local USN.
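The following Python sketch is a simplified, illustrative model of the fourth-domain-controller case described above: two changes that originated on different domain controllers arrive together and are applied in one transaction, so they share a local USN while keeping their originating USNs. The DC class is invented for this example and greatly simplifies a real domain controller.

    # Illustrative sketch only: originating vs. local USNs.
    class DC:
        def __init__(self, guid: str):
            self.guid = guid
            self.usn = 0  # highest committed local USN

        def originating_write(self) -> dict:
            # For an originating change, the originating USN and the local
            # USN are, by definition, the same value.
            self.usn += 1
            return {"originating_dsa": self.guid, "originating_usn": self.usn}

        def apply_replicated(self, changes: list) -> list:
            # One inbound write transaction: every change applied in it gets
            # the same local USN, while originating USNs are carried over.
            self.usn += 1
            return [dict(c, local_usn=self.usn) for c in changes]

    a, b, d = DC("GUID-A"), DC("GUID-B"), DC("GUID-D")
    changes = [a.originating_write(), b.originating_write()]
    applied = d.apply_replicated(changes)  # the two changes travel together
    assert applied[0]["local_usn"] == applied[1]["local_usn"]
    assert applied[0]["originating_dsa"] != applied[1]["originating_dsa"]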

Atomicity of Linked and Nonlinked Attributes

Atomicity is a guarantee by a database system that a grouping of changes is applied in a single transaction. "Atomic" can be defined as "indivisible": an atomic transaction occurs in total or not at all. This guarantee ensures that objects are always in a complete state, either pre-update or post-update; nothing in between is possible. As evidenced by the application of the local USN, write transactions are performed according to the way replication data is packaged and received. The rules that affect whether attribute values replicate together or separately differ for linked and nonlinked attributes:

• Nonlinked attributes: All changes to nonlinked attributes of an object replicate together and are assigned the same local USN. Therefore, they are committed to the database in the same write transaction.

• Linked attributes: Changes to linked attributes are not guaranteed to replicate together. Therefore, they are not guaranteed to be written in the same database transaction.

The issue for an update transaction, then, is at what points in the replication stream the application of an update can be interrupted and the effect of interruption on the end state. For inbound replication, the unit of transaction is the entire object for nonlinked attributes. For linked attributes, the unit of transaction is the batch of values for the same object. Therefore, it is possible for a replicated update to a single object that has linked values to occur in more than one transaction. However, transactions to apply all updates to the same object are guaranteed to occur during the same replication cycle.

Atomicity provides the following assurances:

• Inferences can be based on the relationships of multiple attributes: If you always update attribute 1 and attribute 2 together and you see attribute 1, you know that you can make a decision based on attribute 2.

• Even if the system crashes in the middle of applying the replicated update, the database system guarantees that all changes are applied or that no changes are applied, and nothing in between.

Replication Order of Linked and Nonlinked Attribute Updates

To accommodate the guarantee that changes that are committed together on the originating domain controller appear in the same write transaction on the destination domain controller, updates to nonlinked attributes are prioritized more highly than updates to linked attributes. For this reason, although originating USNs are applied in update order and outbound replication changes are assembled in this update order, nonlinked updates are packaged for transmission ahead of linked updates. Therefore, when an object is updated and some of the updated attributes are linked and some are not linked, the replication packet orders the nonlinked values first, splitting the order of values between linked and nonlinked. For example, if two objects (obj1 and obj2) receive updates simultaneously to one linked attribute and one nonlinked attribute on each object, the order of the attribute values in the replication data packet is as follows:

    obj1.nonlinked; obj2.nonlinked; obj1.linked; obj2.linked

Now, suppose that the number of nonlinked changes is sufficiently large that the values fill an entire replication packet. When both linked and nonlinked attributes are replicated, this scenario results in a set of changes that are made to one of the objects spanning multiple replication packets. All nonlinked values for a single object are included in one packet, but linked values for the same object can span multiple packets. Therefore, the write transaction at the destination can occur out of update order, and replicated updates to a single object can require multiple transactions.

Transaction Order of Linked and Nonlinked Attribute Updates

For linked attributes, although objects are transmitted in increasing USN order, object transactions might be applied out of order as a result of parent-child relationships. This reordering guarantees that object creation precedes the application of any links to that object. In addition, when the object already exists at the destination, although replication of nonlinked attributes occurs first, it is not guaranteed that nonlinked attribute updates are committed before linked attribute updates. However, nonlinked updates are generally applied before linked updates at the destination. For applications that depend on updates being written in update order, the prioritized implementation of replicated linked and nonlinked attribute updates can occasionally result in unexpected behavior.

Group Membership and Linked-Value Replication

The replication of linked, multivalued attributes is especially important for group objects. Potentially, the linked, multivalued member attribute of a group object can have thousands of values. Linked-value replication in Windows Server 2003 enables individual values to replicate separately. Linked-value replication requires a forest functional level of Windows Server 2003 or Windows Server 2003 interim. When it is in effect, linked-value replication solves the problem of replication delays that are due to the inability to write an entire member attribute in a single database transaction. Linked-value replication also makes restoring group membership back-links possible when a user or group object is authoritatively restored.

Group Membership Replication in Windows 2000 Forests

Linked-value replication is not available in Windows 2000 Server forests. Because an originating update must be written in a single database transaction, and because the practical limit for a single transaction is 5,000 values, membership of more than 5,000 values is not supported in Windows 2000 Server Active Directory. A group of this size represents a limitation both in terms of the database write operation that is required to record a change to an attribute of that size and the transfer of that much data over the network. These conditions have the following impacts on replication, most notably for group and distribution list objects:

• Lost changes: If values of the same multivalued attribute are updated on two different domain controllers during a period of replication latency, the most recently changed replica of the attribute with all its multiple values is replicated and any earlier changes are lost. Changes to the separate values are not merged.

  Note Because all changes to an object must be written in the same database transaction, multiple changes to a single group object can take a relatively long time to be written, which increases the likelihood of another change occurring to the same object prior to the completion of the original write.

• Excessive network bandwidth consumption: For example, when one member is added to a group of 3,000 members, the member attribute with all 3,001 values is transmitted between domain controllers. Transmission of all values to apply a change to only the updated value or values is an inefficient use of network resources.

These limitations are effectively removed in a forest that has a forest functional level of Windows Server 2003 or Windows Server 2003 interim. At these levels, linked-value replication accommodates replication of individually updated member values.

Group Membership Replication in Windows Server 2003 Forests

In a Windows Server 2003 forest that has a forest functional level of Windows Server 2003 or Windows Server 2003 interim, linked-value replication provides the following benefits:

• Removed likelihood of losing entire sets of changes to the same group membership made on different domain controllers.

• Greatly reduced likelihood of update collisions, where the same member value is changed on different domain controllers at the same time and one update is lost.

• Improved network efficiency by transmitting only updated values and not the entire set of attribute values, which can include many thousands of values.

Although replication of many thousands of individual membership updates can be accommodated in a Windows Server 2003 forest, LDAP writes have a practical limit of approximately 5,000 updates in a single transaction. Because originating updates are required to complete in a single transaction, a practical limit of approximately 5,000 updates to a single object is recommended.

Note Only originating updates must be applied in the same database transaction. Replicated updates can be applied in more than one database transaction.

Replication of Deletion Updates

Object deletions are replicated by replicating tombstones. After an object is deleted but before it is removed from the directory, object references that formerly referred to the object now refer to the deleted object's tombstone. The isDeleted attribute, which has a value of TRUE when an object is a tombstone, indicates the object deletion to other domain controllers. Deleted objects are stored in the hidden Deleted Objects container. Every directory partition has a Deleted Objects container.

Note Objects are moved to the Deleted Objects container according to system flags. Certain protected configuration objects, such as NTDS Settings, do not move to the Deleted Objects container when they are deleted. In addition, dynamic objects that are deleted are removed from Active Directory according to a Time-to-Live (TTL) setting; they are not moved to the Deleted Objects container as tombstones. DNS resource records that are removed by scavenging are also not stored as tombstones when DNS is Active Directory–integrated. For more information about storage of object deletions and dynamic data, see "Storage and Removal of Object Deletions" and "Dynamic Data," respectively, in "How the Data Store Works." For more information about how DNS data is removed from Active Directory, see "Understanding Aging and Scavenging" in "How DNS Works."

By default, tombstones have a lifetime of 60 days (180 days in a forest that was created on a server running Windows Server 2003 with SP1), after which they are permanently removed from the directory database through a process called garbage collection. The tombstone lifetime can be changed, but it is important to ensure that the tombstone lifetime is longer than the worst possible replication latency for any directory partition so that a tombstone cannot be removed before it has replicated to every directory partition replica. In addition, Active Directory does not allow data to be restored from a backup image that is older than the tombstone lifetime.

Note A tombstone is invisible to normal LDAP searches. However, a tombstone is visible to searches that use the special LDAP control 1.2.840.113556.1.4.417.

Effect of Deletion on Linked Values

When an object is deleted, two types of automatic cleanup occur with regard to linked values:

• All of the forward links that are held by the deleted object are removed; that is, if a group object is deleted, all of the member links on the group object are removed. The target of each link (for example, the user object that is named in the member attribute) is not affected by the removal of the group and its forward link, except that its memberOf back-link attribute loses the value that corresponds to the deleted group.

• All the back-links that are held by the deleted object are removed; that is, if a user is deleted, its back-links include all the group objects that have forward links to the user object. For example, if a user is a member of multiple groups and the user is deleted, the user's distinguished name value is removed from the member attributes of each group object that is named in the memberOf attribute of the deleted user. The group object is not changed in any way other than its membership.

During replication, if a linked value replicates to a domain controller, the linked value is silently dropped (that is, it disappears from the database with no history) under the following conditions:

• The forward link holder object that is referenced by the link value has been deleted.

• The backward link holder that is referenced by the link value (the target of the back-link) has been deleted.

When a target and source object are authoritatively restored together on a domain controller running Windows Server 2003, whether the links are restored on the destination domain controller depends on which restored object replicates out first:

• If the restored group object replicates to a destination domain controller before the restored user object, the group membership of the user is not restored because the user object does not exist on the destination.

• If the user object replicates out ahead of the group object and if the functional level of the forest was Windows Server 2003 or Windows Server 2003 interim when the back-link was created (that is, when the user was added to the group), the group membership of the user is restored in the member and memberOf attributes on the respective objects.

For more information about restoring back-links on authoritatively restored objects, see "Authoritative Restore" later in this section.

Replication of Absent Linked Object References

A different condition occurs when the object that the link refers to is simply removed as a referent, as it is when a user is removed from a group. In a Windows 2000 forest, this condition results in replication of the entire linked attribute. In a forest with a functional level of Windows Server 2003 or Windows Server 2003 interim, this condition is replicated on a per-value basis. For example, if a user is removed from a group, the user value in the member attribute is marked "absent" in the database so that its condition can be replicated, much like a tombstone. The deletion of the user object from the group replicates as an absent value in the member attribute of the group. After a tombstone lifetime, absent values are physically removed from the database. Until that time, the link value remains in the database, and such absent values can be restored. If you add the user back to the group prior to the end of a tombstone lifetime, the link value is restored in the database by being marked "present."

Deleting Group Objects in a Windows Server 2003 Forest

In a forest that has a forest functional level of Windows Server 2003 or Windows Server 2003 interim, you can delete a group object of any size, and its multivalued member attribute values are cleaned up in the background. For this reason, the deletion is not required to complete in one transaction, as it is in a Windows 2000 forest. However, if you delete more than 5,000 members in the same transaction without deleting the group itself, the same long-running transaction conditions can occur as described in "Group Membership Replication in Windows 2000 Forests" earlier in this section. Therefore, if you must add, modify, or delete more than 5,000 members of the same group, do so in blocks of less than 5,000 members to avoid long-running transactions and excessive network bandwidth consumption. For more information about linked and nonlinked values, garbage collection, and tombstones, see "Storage and Removal of Object Deletions" in How the Data Store Works.
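A script that must make very large membership changes can apply the block-size recommendation above. The following Python sketch is illustrative only; apply_member_block is a hypothetical stand-in for whatever LDAP write mechanism a real script would use.

    # Illustrative sketch only: applying large membership changes in blocks
    # of fewer than 5,000 values.
    from typing import Iterable, List

    BLOCK_SIZE = 4999  # stay under the ~5,000-value transaction limit

    def blocks(values: List[str], size: int = BLOCK_SIZE) -> Iterable[List[str]]:
        for i in range(0, len(values), size):
            yield values[i:i + size]

    def apply_member_block(group_dn: str, block: List[str]) -> None:
        # Placeholder: a real script would issue an LDAP modify here that
        # adds this block of values to the group's member attribute.
        print(f"modify {group_dn}: add {len(block)} member values")

    def add_members(group_dn: str, members: List[str]) -> None:
        for block in blocks(members):
            apply_member_block(group_dn, block)

    add_members("CN=BigGroup,DC=example,DC=com",
                [f"CN=User{i},DC=example,DC=com" for i in range(12000)])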

Deletions on Nonreplicating Domain Controllers

If a domain controller fails to replicate for a number of days that exceeds the tombstone lifetime, replicas of objects that have been deleted from a writable partition might remain in that domain controller's directory. Because the tombstones of the deleted objects are permanently removed from the directory at the end of the tombstone lifetime, a domain controller that fails to replicate changes for tombstoned objects never deletes them. This condition can occur for a variety of reasons, including:

• Prolonged misconfigurations.

• Prolonged errors in name resolution, authentication, or the replication engine that block inbound replication.

• Turning on a domain controller that has been offline for longer than the tombstone lifetime.

• Advancing system time or reducing tombstone lifetime values in an attempt to accelerate garbage collection before end-to-end replication has taken place for all directory partitions in the forest.

The condition of outdated objects can also occur due to hardware and software problems that render the domain controller unreachable. Regardless of the reason, a deleted object can remain on a domain controller any time the domain controller goes offline before receiving a deletion and remains offline for longer than the tombstone lifetime of that deletion. These outdated objects, called lingering objects, create inconsistency in the directory. If a change is made to an outdated object on the reconnected domain controller, it is possible for the object to be recreated in the directory under certain conditions. To avoid this situation, replication of an outdated object is prohibited by default in newly created Windows Server 2003 forests.

Backup Latency Interval

On domain controllers running Windows Server 2003 with SP1, a new NTDS Replication event provides a warning to administrators when a domain controller has not been backed up. Event ID 2089 provides the backup status of each directory partition that a domain controller stores, including application directory partitions. Specifically, event ID 2089 is logged in the Directory Service event log when partitions in the Active Directory forest are not backed up within a backup latency interval.

The value for the backup latency interval is stored as a REG_DWORD value in the Backup Latency Threshold (days) entry in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters. By default, the value of Backup Latency Threshold (days) is half the value of the tombstone lifetime of the forest. If a directory partition has not been backed up when half the tombstone lifetime has elapsed, event ID 2089 is logged in the Directory Service event log and continues to be logged daily until the directory partition is backed up. This event serves as a warning to administrators and monitoring applications to make sure that domain controllers are backed up before the tombstone lifetime expires. However, it is recommended that you take backups at a much higher frequency than the default value of Backup Latency Threshold (days).
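The relationship between the tombstone lifetime, the default Backup Latency Threshold (days), and event ID 2089 can be sketched as follows. The Python fragment is illustrative only; the function names are invented for this example.

    # Illustrative sketch only: the default backup latency interval and the
    # condition under which event ID 2089 is logged.
    def default_backup_latency_days(tombstone_lifetime_days: int) -> int:
        """Backup Latency Threshold (days) defaults to half the tombstone lifetime."""
        return tombstone_lifetime_days // 2

    def logs_event_2089(days_since_backup: int, threshold_days: int) -> bool:
        return days_since_backup >= threshold_days

    assert default_backup_latency_days(60) == 30
    assert logs_event_2089(31, 30)   # logged daily from here until a backup runs
    assert not logs_event_2089(10, 30)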

Replication Consistency Setting

If the attributes on a lingering object never change, the object is never considered for replication. However, if an attribute changes, the attribute is considered for outbound replication. Because the destination domain controller does not hold the object for the attribute that is being replicated, an update cannot be performed. How this condition is resolved depends on the replication consistency setting on the domain controller.

A registry setting on domain controllers that are running Windows Server 2003 or Windows 2000 Server with SP3 provides a consistency value that determines whether a domain controller replicates and reanimates an updated object that has been deleted from all other replicas, or whether replication of such objects is blocked. The default settings are different on domain controllers that are running Windows 2000 Server with SP3 and Windows Server 2003.

Strict Replication Consistency

To avoid problems with reanimating objects that have been deleted, a domain controller that is running Windows Server 2003 in a newly created (not upgraded) Windows Server 2003 forest blocks inbound replication by default when it receives an update to an object that it does not have.

Note Active Directory replication uses update tracking to differentiate between replicating a newly created object and updating an attribute for an existing object. Replication of a lingering object is an attempt to update an attribute or attributes of an object that the destination domain controller cannot update because the object does not exist.

Replication is halted in the directory partition for the object until the lingering object is removed from the source domain controller or the strict replication consistency setting is disabled. For information about how lingering objects are removed, see "Lingering Object Removal" later in this section.

Loose Replication Consistency

When strict replication consistency is disabled, the effect is called "loose" consistency. With loose consistency, the destination domain controller detects that it does not have the object for the attribute that is being replicated. The destination domain controller requests the entire object from the source partner, and thereby reanimates the object in its copy of the directory. The same process repeats on all domain controllers that do not have a copy of the object. This mechanism can be used to cause lingering objects to be reanimated across the entire forest. If a lingering object is discovered and its presence is intended, perform any update to the object. As long as replication consistency is set to loose (strict replication consistency is disabled) on all domain controllers, the object is reanimated as it replicates around the forest.

Loose replication consistency is the default setting for domain controllers that are running Windows 2000 Server with SP3 or later. The Windows 2000 Server default is not changed by upgrading to Windows Server 2003; strict replication consistency remains disabled and replication is allowed to proceed. Keeping the Windows 2000 Server setting ensures that the upgraded domain and forest remain consistent with Windows 2000 Server functionality. To enable strict replication consistency on an upgraded domain controller, you must change the setting manually following the upgrade.

Storage for Consistency Setting

The setting for replication consistency is in the registry on each domain controller. The value for the consistency setting is stored in the Strict Replication Consistency entry in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters. The values are as follows:

• Value: 1 (enabled); set to 0 to disable.

• Default: 1 (enabled) in a new Windows Server 2003 forest; otherwise 0.

• Data type: REG_DWORD

Note Having Strict Replication Consistency set to 0 or unset is equivalent to the Windows 2000 Server setting applied by the Correct Missing Objects registry entry. However, the semantics for Correct Missing Objects are the opposite of Strict Replication Consistency: Correct Missing Objects=1 is equivalent to Strict Replication Consistency=0 or unset.
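On a Windows host, the registry value could be set from a script such as the following Python sketch, which uses the standard winreg module. This is an illustrative example rather than a supported management interface; it is Windows-only, requires administrative rights, and should only be run on a domain controller where the change is intended.

    # Illustrative sketch only: setting the Strict Replication Consistency
    # value with Python's standard winreg module.
    import winreg

    KEY_PATH = r"SYSTEM\CurrentControlSet\Services\NTDS\Parameters"

    def set_strict_replication_consistency(enabled: bool) -> None:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                            winreg.KEY_SET_VALUE) as key:
            winreg.SetValueEx(key, "Strict Replication Consistency", 0,
                              winreg.REG_DWORD, 1 if enabled else 0)

    set_strict_replication_consistency(True)  # enable strict consistency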

Lingering Object Removal

On domain controllers that are running Windows Server 2003, you can use Repadmin to analyze and remove lingering objects from a domain controller that you suspect or know has not replicated for a tombstone lifetime or longer.

If strict replication consistency is in effect and replication fails on the destination domain controller, event ID 1988 is logged in the Directory Service event log on the destination domain controller. Event ID 1988 indicates that the local domain controller has attempted to replicate an object from the source domain controller and the object is not present on the local domain controller because it might have been deleted and its tombstone already garbage collected. Event ID 1988 provides the GUID-based distinguished name of the source domain controller as well as the distinguished name and GUID of the outdated object. Replication of the directory partition containing the outdated object does not continue with the source domain controller until the situation has been resolved.

In this event, you can use an up-to-date domain controller as the authority against which to compare the objects on the source replication partner suspected of harboring lingering objects. This domain controller acts as the authoritative directory replica to reveal outdated objects in the suspect directory database on the destination. In the Repadmin command-line arguments that remove lingering objects, the roles of source and destination are switched. The repadmin /removelingeringobjects command compares the directories of two domain controllers, as follows:

• A "source" domain controller that you designate as the authoritative reference server.

• A "destination" domain controller, which is the source replication partner that has attempted to replicate an outdated object.

To use the repadmin /removelingeringobjects command, both source and destination domain controllers must be running Windows Server 2003.

If the comparison reveals any mismatched objects, Repadmin either displays or removes the objects that are found on the destination but not on the source, depending on the arguments that you use. The advisory mode argument allows you to view the results of the command before you take action to remove any objects from the directory.

Repadmin /RemoveLingeringObjects Command Syntax

The command repadmin /removelingeringobjects has the following syntax:

    repadmin /removelingeringobjects <DC_List> <Source DC GUID> <Directory Partition> [/ADVISORY_MODE]

where:

• <DC_List> is the DNS or NetBIOS name of one or more domain controllers that you suspect of harboring lingering objects. <DC_List> provides the ability to target specific domain controllers, such as all domain controllers in a site, all global catalog servers, or domain controllers that hold specific operations master roles. To see the syntax for DC_List, type repadmin /listhelp at the command prompt.

• <Source DC GUID> is the GUID you obtained by running repadmin /showrepl against the source domain controller that you are using as the authoritative server.

• <Directory Partition> is the distinguished name of the directory partition that contains the lingering object.

• [/ADVISORY_MODE] is an optional switch that specifies that no deletions are performed on the destination domain controller; deletions are displayed (logged) only. Using this switch is recommended prior to allowing Repadmin to remove any objects.

To use this command, you must first obtain the GUID of the authoritative source domain controller by running repadmin /showrepl <source_server>, where <source_server> is the name of the domain controller that has a writable copy of the directory partition that will serve as the authoritative replica. The output of this command provides the "DC GUID," which the /removelingeringobjects command requires to identify the authoritative source.
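For example, the following hypothetical command (the server name, GUID, and domain are invented for illustration) would compare the suspect domain controller DC2 against the authoritative replica identified by the GUID, and would only log, not remove, any lingering objects found in the domain directory partition:

    repadmin /removelingeringobjects dc2.contoso.com 0b457f73-96a4-429b-ba81-1a3e0f51c848 dc=contoso,dc=com /ADVISORY_MODE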

RemoveLingeringObjects Implementation

When you run repadmin /removelingeringobjects, the tool performs the following steps to compare the directories of the source and destination domain controllers and log (or remove) any lingering objects that are found:

1. Check to ensure that the directory partition and the source domain controller are valid.

2. Verify that the user has the DS-Replication-Manage-Topology extended right on the directory partition container object specified in <Directory Partition>. This extended right is required to verify object state between two domain controllers. Members of the Domain Admins group have this right by default.

3. Ensure that both source and destination use the same objects for comparison by merging the up-to-dateness vectors to filter out any objects that have not replicated from the source to the destination or from the destination to the source. This check rules out a lingering object on the destination if the destination has not received the tombstone from the source, and vice versa. Any such nonreplicated objects are removed from the comparison.

4. Create the list of object GUIDs for each domain controller to be compared. Examine the metadata of each object and use the merged up-to-dateness vector to determine whether the object should be present on both source and destination.

5. For each GUID that is in the list for the destination, determine if it is in the list of GUIDs for the source.

6. If a GUID is not found on the source, the object is identified to be outdated on the destination and is either displayed or deleted on the destination server. If advisory mode has been specified, the GUID is displayed only.

For more information about up-to-dateness vectors, see "Update Tracking by Domain Controllers" later in this section. For more information about removing lingering objects, see "Troubleshooting Active Directory Replication Problems" in the Active Directory Operations Guide.
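Steps 5 and 6 reduce to a set difference. The following Python sketch is an illustrative simplification of that comparison; the function name and advisory-mode handling are invented for this example.

    # Illustrative sketch only: the set comparison at the heart of steps 5 and 6.
    from typing import Set

    def find_lingering(source_guids: Set[str], destination_guids: Set[str],
                       advisory_mode: bool = True) -> Set[str]:
        """GUIDs present on the destination but not on the authoritative source."""
        lingering = destination_guids - source_guids
        for guid in sorted(lingering):
            if advisory_mode:
                print(f"advisory: would remove lingering object {guid}")
            else:
                print(f"removing lingering object {guid}")  # the real tool deletes here
        return lingering

    find_lingering({"g1", "g2"}, {"g1", "g2", "g3"})  # reports g3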

Active Directory Replication on a Restored Domain Controller

All domain controllers must be backed up routinely to ensure directory integrity. In case of failure on a domain controller, the backup media can be used to restore the domain controller to its state at the time of the backup. When a domain controller is restored from a backup, it can then be brought up-to-date by normal replication. There are two general methods for restoring Active Directory from backup media, each of which has different replication consequences:

• Nonauthoritative restore. Replication brings the domain controller up-to-date from its state at the time of backup, including updating deletions that have occurred since the time of the backup.

• Authoritative restore. Objects that were deleted can be reinstated.

Nonauthoritative Restore

Nonauthoritative restore is the default method of performing a restore of Active Directory, and it is used in the majority of restore situations, such as domain controller hard disk failure. Nonauthoritative restore is performed by using the backup tool that you used to create the backup file. A nonauthoritative restore returns Active Directory on the domain controller to a state that is consistent with the time that the backup was taken. When the domain controller restarts following the restore process, it requests changes from its replication partners. Through the normal replication process, the restored domain controller receives any directory changes that have occurred since the time of the backup. Because the restore process does not restore any previously deleted data to Active Directory, it is described as nonauthoritative.

When a domain controller restarts after being restored from a backup, the domain controller must determine where to restart the USN sequence. If an existing USN were reused, serious replication problems could result. For example, suppose object A has USN=1000 and is replicated across the network. Three changes are made to object A, raising the USN on its domain controller to 1004. That domain controller is then restored from backup and creates a new object B with USN=1000. Because the replication partners have already recorded USN 1004 as the highest USN received from this domain controller, they never again request changes at or below that point, and object B would never replicate. To mitigate this risk, the Active Directory invocation ID is changed on the restored domain controller as part of the restore process. The effect is that the restored domain controller appears as a new replication partner to the domain controllers from which it pulls replication.

In the following example, three domain controllers (DC1, DC2, and DC3) exist as part of the Active Directory replication topology. The scenario includes the following steps:

Step 1
Prior to backup, DC1 maintains replication information about itself and its replication partners:

• Its own invocation ID (database GUID).

• The highest USN that it has committed for changes to the local database.

• Replication metadata for its two replication partners, DC2 and DC3:

  • The invocation ID of the local Active Directory database of each domain controller.

  • The up-to-dateness vector, as shown in the table below.

Step 2
Some changes have been made to the Active Directory database on DC1 since Step 1, which result in advancing the current USN on DC1 to 35. At this point, DC1 is backed up.

Step 3
A total of 14 additional changes have been made to the local Active Directory database on DC1, advancing the highest committed USN on DC1 to 49. DC1 now begins replicating these changes to DC2. After replicating 12 of the 14 changes to DC2, DC1 becomes unavailable due to hard drive corruption. As a result, USN 47 is the last change that DC2 received from DC1. At this point, the up-to-dateness vector on DC2 shows a highest committed USN of 47 for DC1.

Step 4
A nonauthoritative restore has been completed and DC1 is restarted. During the restore process, a new invocation ID has been assigned to the local Active Directory database on DC1. For replication purposes, DC1 adds its old invocation ID (GUID1) and highest committed USN of 35 to its up-to-dateness vector, effectively restoring it to its state as of Step 2.

Step 5
DC1 replicates in the 12 changes that were saved to DC2 before the restore (which corresponds to Step 3 in the following table). The last two changes that occurred as originating updates on DC1 prior to going offline are lost.

DC1 Replication Metadata Values Pre- and Post-Nonauthoritative Restore

                       Step 1       Step 2       Step 3       Step 4       Step 5
Invocation ID of DC1   DC1-GUID1    DC1-GUID1    DC1-GUID1    DC1-GUID4    DC1-GUID4
Highest committed      20           35           49           35           47
USN on DC1
Up-to-dateness         DC2-GUID2,   DC2-GUID2,   DC2-GUID2,   DC2-GUID2,   DC2-GUID2,
vector on DC1          USN=10;      USN=25;      USN=37;      USN=25;      USN=37;
                       DC3-GUID3,   DC3-GUID3,   DC3-GUID3,   DC3-GUID3,   DC3-GUID3,
                       USN=45       USN=60       USN=60       USN=60;      USN=60;
                                                              DC1-GUID1,   DC1-GUID1,
                                                              USN=35       USN=47

Authoritative Restore

The primary purpose of an authoritative restore is to reinstate objects that were deleted from Active Directory. To reinstate objects that were intentionally or accidentally deleted, a nonauthoritative restore must be completed and followed by an authoritative restore. Authoritative restore is performed by using Ntdsutil.exe.

The nonauthoritative restore process cannot reinstate deleted objects from a backup image because the backup media that was used to restore the domain controller contains an image of Active Directory that was created before the objects were deleted. In this case, the deletions would simply be replicated in from an up-to-date replication partner and applied by the restored domain controller. To reinstate a deleted object, an authoritative restore is required. The authoritative restore process works as follows:

1. The domain controller is restarted in Directory Services Restore Mode, and a nonauthoritative restore of Active Directory is performed by using backup media that was created before the object was deleted.

2. Following the nonauthoritative restore but prior to restarting, the object metadata is altered by Ntdsutil.exe so that the object has a higher version number than any other possible version of the object (by default, the version number is increased by 100,000). The effect is to render the object or objects authoritative and to reinstate them in Active Directory.

The authoritative restore process does not affect objects that were created after the backup was created.

Note Only the domain and configuration directory partitions can be marked as authoritative. The schema directory partition cannot be authoritatively restored.
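The version increase can be expressed in one line. The following Python fragment is illustrative only; the constant and function name are invented, and 100,000 is the default increase described above.

    # Illustrative sketch only: the version increase that makes a restored
    # object authoritative in conflict resolution.
    DEFAULT_VERSION_INCREASE = 100_000

    def authoritative_version(current_version: int,
                              increase: int = DEFAULT_VERSION_INCREASE) -> int:
        """Return a version high enough to win conflict resolution everywhere."""
        return current_version + increase

    assert authoritative_version(7) == 100_007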

Restoring Back-links for Authoritatively Restored Objects

When authoritative restore is performed on domain controllers running Windows 2000 Server, the procedure recovers objects that have been deleted, but it does not restore the back-links for any objects that have linked attributes. The effect of not restoring back-links for the restored objects is particularly problematic for group memberships, which must be restored manually. In a forest that has a forest functional level of Windows Server 2003 or Windows Server 2003 interim, the procedure for performing authoritative restore automatically restores back-links for multivalued, linked attributes. For example, the member attributes of the groups to which a restored user object belongs are updated. This restoration applies only to links that were created after the functional level was raised. For example, if you added a user to a group before raising the forest functional level, the user's membership in that group is not restored if you delete the user and then authoritatively restore the user. Automatic restore of back-links requires the raised forest functional level because link restoration is made possible by linked-value replication.

Restoring Back-links Created Before Linked-Value Replication

An updated version of Ntdsutil that is included with Windows Server 2003 SP1 makes it possible to also restore back-links that were created before the implementation of linked-value replication. On domain controllers that have this updated version of Ntdsutil, the authoritative restore option generates an LDAP Data Interchange Format (LDIF) file that can be used to restore any back-links that are not restored automatically. In addition, Ntdsutil generates a text file that you can use to create an LDIF file for restoring back-links for groups in other domains. The LDIF file can be used to restore back-links on domain controllers running Windows 2000 Server or Windows Server 2003, and it does not depend on forest functional level. This method also resolves the problem of links not being restored if linked user and group objects are authoritatively restored together and the restored group object replicates out before the restored user object.

For more information about restoring back-links, see "Managing Active Directory Backup and Restore" in the Active Directory Operations Guide.

Domain Controller Notification of Changes

Replication within a site occurs as a response to changes. On its NTDS Settings object, the source domain controller stores a repsTo attribute that lists all servers in the same site that pull replication from it. When a change occurs on a source domain controller, it notifies its destination replication partner, prompting the destination domain controller to request the changes from the source domain controller. The source domain controller either responds to the change request with a replication operation or places the request in a queue if requests are already pending. Replication occurs one request at a time until all requests in the queue are processed.

Note Replication between sites occurs according to a schedule, where the destination requests changes at the specified time. By default, change notification is not enabled on site links, although this setting can be changed.

Notification Delay Values When a change occurs on a domain controller within a site, two configurable intervals determine the delay between the change and subsequent events:



Initial notification delay. The delay between the change to an attribute and notification of the first partner. This interval serves to stagger network traffic caused by intrasite replication. When a domain controller makes a change (originating or replicated) to a directory partition, it starts the timer for the interval; when the timer expires, the domain controller begins to notify its replication partners (for that directory partition and within its site) that it has



changes. The default value is 15 seconds. Subsequent notification delay. Notification of the first partner and notification of each subsequent partner. A domain controller does not notify all of its replication partners at one time. By delaying between notifications, the domain controller distributes the load of responding to replication requests from its partners. The default delay

between notifications is 3 seconds. The default values for the initial and subsequent notification delay intervals depend variably on the version of the operating system, the upgrade path, and the forest functional level. The default initial notification delay is 15 seconds and the subsequent notification delay is 3 seconds on a domain controller under any of the following conditions:

• • •

The forest functional level is Windows Server 2003 and the default initial notification delay value was in effect on the domain controller if it was upgraded from Windows 2000. If non-default values are set before the upgrade, the nondefault value is retained. The domain controller has been created from a new installation of Windows Server 2003 (not upgraded) in a Windows 2000 or Windows Server 2003 forest. The domain controller has been upgraded directly from Windows NT 4.0 to Windows Server 2003.

The initial notification delay is 300 seconds and the subsequent notification delay is 30 seconds under either of the following conditions:

• The domain controller is running Windows 2000 Server.
• The domain controller has been upgraded from Windows 2000 to Windows Server 2003 and the forest functional level is Windows 2000.
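The timing behavior described above can be summarized in a short sketch. This is not directory service code; it only illustrates the pattern, with placeholder partner names and a stub notify function, using the Windows Server 2003 default values.

    # Illustrative timing sketch of intrasite change notification: wait the
    # initial delay after a change, then notify partners one at a time with
    # the subsequent delay between notifications.
    import time

    INITIAL_DELAY = 15    # seconds; Windows Server 2003 default
    SUBSEQUENT_DELAY = 3  # seconds; Windows Server 2003 default

    def notify(partner):
        print(f"notifying {partner} that changes are available")

    def on_change(partners):
        time.sleep(INITIAL_DELAY)             # stagger intrasite replication traffic
        for i, partner in enumerate(partners):
            if i > 0:
                time.sleep(SUBSEQUENT_DELAY)  # distribute the load across partners
            notify(partner)

    on_change(["DC2", "DC3", "DC4"])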

Storage of Intrasite Notification Delay Values

On a domain controller that is running Windows Server 2003, intrasite notification delay values are specific to each directory partition and are stored in two attributes of the cross-reference object for each directory partition.

Windows Server 2003 Storage of Notification Delay Values

On domain controllers that are running Windows Server 2003, the attributes that store the change notification values are located on each cross-reference object in the Partitions container within the configuration directory partition:

• The value for initial change notification delay is stored in the msDS-Replication-Notify-First-DSA-Delay attribute.
• The value for subsequent notification delay is stored in the msDS-Replication-Notify-Subsequent-DSA-Delay attribute.

These attributes do not exist in Windows 2000 Server. Although the attribute values are present on all domain controllers that are running Windows Server 2003, the default values of 15 seconds for initial notification delay and 3 seconds for subsequent notification delay are in effect only under the conditions described in "Notification Delay Values" earlier in this section.
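As an illustration, the following minimal sketch (using the third-party Python ldap3 package, with a hypothetical server, credentials, and forest distinguished name) reads these two attributes from each cross-reference object in the Partitions container.

    # Minimal sketch (hypothetical server and forest DN): read the notification
    # delay attributes from each cross-reference object.
    from ldap3 import Connection, Server, SUBTREE

    conn = Connection(Server("dc1.corp.example.com"), user="CORP\\Administrator",
                      password="password", auto_bind=True)

    conn.search(
        "CN=Partitions,CN=Configuration,DC=corp,DC=example,DC=com",
        "(objectClass=crossRef)",
        SUBTREE,
        attributes=["nCName",
                    "msDS-Replication-Notify-First-DSA-Delay",
                    "msDS-Replication-Notify-Subsequent-DSA-Delay"],
    )
    for entry in conn.entries:
        print(entry.entry_dn)
        print(entry["msDS-Replication-Notify-First-DSA-Delay"],
              entry["msDS-Replication-Notify-Subsequent-DSA-Delay"])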

Windows 2000 Server Storage of Notification Delay Values

On domain controllers that are running Windows 2000, notification delay values are stored in registry entries on each domain controller. The registry entries are as follows:

• The value for the delay before the first change notification is stored in the Replicator notify pause after modify (secs) entry in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters. The default value is 300 seconds.
• The value for the delay before each subsequent change notification is stored in the Replicator notify pause between DSAs (secs) entry in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters. The default value is 30 seconds.
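These entries can be inspected with a few lines of script; the following is a minimal sketch using the Python standard-library winreg module. The entries are absent unless they have been explicitly set or preserved, in which case the defaults apply.

    # Minimal sketch: read the notification delay registry entries.
    # FileNotFoundError means the entry does not exist (defaults apply).
    import winreg

    KEY = r"SYSTEM\CurrentControlSet\Services\NTDS\Parameters"

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
        for name in ("Replicator notify pause after modify (secs)",
                     "Replicator notify pause between DSAs (secs)"):
            try:
                value, _type = winreg.QueryValueEx(key, name)
                print(f"{name} = {value}")
            except FileNotFoundError:
                print(f"{name} is not set; the default applies")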

Transfer of Registry Notification Delay Values on Windows Server 2003 Upgrade

Notification delay values for first and subsequent change notification are transferred from Windows 2000 registry settings during upgrades to Windows Server 2003 as follows:

• If the default registry value has been changed on a domain controller running Windows 2000, the registry entry and its value are preserved when the domain controller is upgraded to Windows Server 2003.
• If the default registry value has not been changed on the domain controller running Windows 2000, the entry is deleted from the registry when the domain controller is upgraded to Windows Server 2003. For newly installed domain controllers (not upgraded), the registry entries do not exist.

Note
The registry entries can be added to the registry to override the cross-reference value, if needed, to control notification by a specific Windows Server 2003–based domain controller.

Notification Delay Values and Their Application by Domain Controllers

To accommodate both locations of notification delay information (in the registry on domain controllers that are running Windows 2000 and in the attributes on the cross-reference object on domain controllers that are running Windows Server 2003), the process that is used to determine change notification values considers all possibilities, favoring the setting in the registry if it exists, as follows:

1. Windows Server 2003 and Windows 2000 domain controllers: Assume the default values of 300 seconds and 30 seconds for the first notification delay and subsequent notification delays, respectively.
2. Windows Server 2003 only: Check the cross-reference object for the directory partition in which the change occurred. If a value is set, use this value.
3. Windows Server 2003 and Windows 2000: Check the domain controller's registry for the presence of the respective registry values and respond according to forest functional level, as follows:
   • Windows 2000 operating system and Windows 2000 forest functional level: If a value is set, use this value instead of the default value.
   • Windows Server 2003 operating system, Windows 2000 forest functional level: If a value is set, use this value instead of the default value.
   • Windows Server 2003 operating system, Windows Server 2003 forest functional level: If a value is set, use this value to override the value on the cross-reference object for all directory partitions.
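The following minimal sketch summarizes this precedence for a single directory partition; the two optional inputs stand in for reading the cross-reference attribute and the local registry entry.

    # Illustrative sketch of the precedence described above: start from the
    # Windows 2000 default, let a cross-reference value (Windows Server 2003
    # only) replace it, and let a registry value, if present, win overall.
    def effective_delay(default, cross_ref_value=None, registry_value=None):
        delay = default                  # step 1: assume the 300/30-second defaults
        if cross_ref_value is not None:  # step 2: cross-reference object value
            delay = cross_ref_value
        if registry_value is not None:   # step 3: registry value overrides
            delay = registry_value
        return delay

    print(effective_delay(300))                      # -> 300 (no overrides)
    print(effective_delay(300, cross_ref_value=15))  # -> 15
    print(effective_delay(300, cross_ref_value=15,
                          registry_value=60))        # -> 60 (registry wins)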

Top of page

Identifying and Locating Replication Partners

Replication occurs in one direction between two domain controllers at a time. The replication topology determines the replication partnerships between source and destination domain controllers. As a replication source, the domain controller must determine the replication partners it must notify when changes occur. As a replication destination, the domain controller participates in replication either by responding to notification of changes from a source, or by requesting changes to initiate replication when it starts up or in response to a schedule.

Destination Identification of Source Replication Partners

When the KCC creates the replication topology, it creates connection objects on destination domain controllers that represent the inbound connection from the replication source domain controller. For each source domain controller that is represented by an inbound connection object, the KCC writes information to the repsFrom attribute of the directory partition object for each directory partition that the destination domain controller has in common with the source domain controller. This information is local to the destination domain controller and is not replicated. For a destination domain controller, the repsFrom attribute contains the following information about each source domain controller:

• Directory partition name.
• NTDS Settings object distinguished name.
• GUID-based DNS name (CNAME alias).
• Whether replication is intrasite, intersite using IP (RPC), or intersite using SMTP.
• Flags that govern the options, which indicate whether notification is allowed (no value is present) or not allowed (NEVER_NOTIFY is present).
• Direct USN vector (high-watermark), which is the USN pair of object USN (OU) and property USN (PU).
• Invocation ID, for the purpose of detecting a change due to a restore from backup.
• Time of last attempt and time of last success.
• Last error code.
• Count of consecutive failures, if any.

If notification is enabled on the destination domain controller, the destination responds to notification messages from source domain controllers by sending a request for changes. If notification is not enabled, the destination responds to notification requests with an error. Notification is enabled on a destination domain controller except under the following conditions, in which the KCC sets the NEVER_NOTIFY option in repsFrom:

• The connection object that the KCC creates is inbound from a server in a different site. Such a destination is designated as a bridgehead server for the directory partition. Bridgehead servers pull replication according to a schedule on the site link object.
• The connection object is created to identify the source from which this domain controller replicates Active Directory while it is being added as a new domain controller.

Source Identification of Destination Replication Partners

The source domain controller keeps track of the replication partners that pull changes from it and uses this information to locate partners for change notification. This information is not provided by the KCC, but rather by the source domain controller itself during a replication cycle. The first time a domain controller receives a request for changes from a new destination, the source creates an entry for the destination in the repsTo attribute on the respective directory partition object. Whenever the source has changes, it sends a notification to all replication partners that are identified in the repsTo value for the respective directory partition. Like the repsFrom data, this information is stored locally on the domain controller and is not replicated. When updates occur, the source domain controller checks the repsTo attribute to determine the identities of its destination replication partners and notifies them one by one that changes are available.

Note
The output of the command repadmin /showrepl /v /all shows the contents of the repsFrom value (INBOUND NEIGHBORS) and the repsTo value (OUTBOUND NEIGHBORS).

If a destination domain controller has the NEVER_NOTIFY option set, the destination responds to a notification message with an error. When the error is received by the source domain controller, it removes the entry for that destination from its repsTo data. If a destination domain controller moves to a different site, the KCC adjusts the notify setting to indicate that notification is not needed.

Locating Replication Partners

The partner that initiates replication locates its partner by querying DNS to look up the IP address of the partner according to the GUID of the NTDS Settings object (class nTDSDSA). The NTDS Settings object represents the directory system agent (DSA) on the domain controller. Its GUID uniquely identifies the domain controller and is guaranteed to find the correct server, even if the name of the server has been changed. The GUID of the NTDS Settings object is stored in the objectGUID attribute. The DSA GUID is created when Active Directory is installed on the domain controller, and is destroyed only if Active Directory is removed from the domain controller to create a member server.

Note
The Active Directory database also has a GUID, which is used by the DSA to identify the specific version of the database when a database has been restored. This GUID is stored in the invocationId attribute on the NTDS Settings object.

As part of the DNS registration process, the Net Logon service on every domain controller registers a canonical name (CNAME) resource record, or alias record, which is constructed by using the DSA GUID as the alias and maps to the DNS fully qualified domain name (FQDN) of the respective domain controller. The format of the CNAME alias is DsaGuid._msdcs.DnsForestName. To initiate replication, the domain controller retrieves the GUID-based DNS name that is stored in repsTo (for a source domain controller that is sending a change notification) or repsFrom (for a destination domain controller that is initiating replication) and queries DNS for the CNAME resource record. DNS responds by returning both the CNAME resource record and the A resource record, which contains the IP address of the replication partner. The domain controller uses information in the CNAME resource record to authenticate to the replication partner. Therefore, by using the CNAME and A resource record data, the domain controller can initiate replication.
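A minimal sketch of this lookup, using the third-party Python dnspython package with a hypothetical DSA GUID and forest name:

    # Minimal sketch (hypothetical GUID and forest name): resolve the GUID-based
    # CNAME alias to the partner's FQDN, then resolve the FQDN to an IP address.
    import dns.resolver

    dsa_guid = "3d0f94b4-5e47-4f89-9c07-8e0a1ed0b9a2"  # example objectGUID of the partner's NTDS Settings object
    forest = "corp.example.com"

    cname = dns.resolver.resolve(f"{dsa_guid}._msdcs.{forest}", "CNAME")
    fqdn = cname[0].target.to_text()

    a_record = dns.resolver.resolve(fqdn, "A")
    print(fqdn, a_record[0].to_text())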



Note
The _msdcs.DnsForestName DNS zone contains a variety of forest-wide service (SRV) resource records that are used to locate special servers, such as domain controllers and global catalog servers, and to facilitate replication. In the context of this discussion, it is important to note that if a DNS server that is authoritative for the _msdcs.DnsForestName zone is not available, replication between domain controllers cannot occur.

For more information about the DSA and application directory partitions, see "How the Data Store Works." For more information about SRV resource records and the _msdcs subdomain, see "DNS Support for Active Directory Technical Reference." Top of page

Urgent Replication

Certain important events trigger replication immediately, overriding existing change notification. Urgent replication is implemented immediately by using RPC/IP to notify replication partners that changes have occurred on a source domain controller. Urgent replication uses regular change notification between destination and source domain controller pairs that otherwise use change notification, but the notification is sent immediately in response to urgent events instead of waiting the default period of 15 seconds (or 300 seconds on domain controllers that are running Windows 2000).

Events That Trigger Urgent Replication

Urgent Active Directory replication is always triggered by certain events on all domain controllers within the same site. When you have enabled change notification between sites, these triggering events also replicate immediately between sites. Between Windows Server 2003–based and Windows 2000–based domain controllers in the same site, immediate notification is caused by the following events:

• Assigning an account lockout, which a domain controller performs to prohibit a user from logging on after a certain number of failed attempts.
• Changing the account lockout policy.
• Changing the domain password policy.
• Changing the password on a domain controller computer account.
• Changing a Local Security Authority (LSA) secret, which is a secure form in which private data is stored by the LSA (for example, the password for a trust relationship).
• Changing the relative identifier (known as a "RID") master role owner, which is the single domain controller in a domain that assigns relative identifiers to all domain controllers in that domain.

Urgent Replication of Account Lockout Changes

Account lockout is a security feature that sets a limit on the number of failed authentication attempts that are allowed before the account is "locked out" from a further attempt to log on, in addition to a time limit for how long the lockout is in effect. The PDC emulator receives urgent replication of account lockouts. In Active Directory domains, a single domain controller in each domain holds the role of PDC emulator, which simulates the behavior of a Windows NT version 3.x–based or Windows NT 4.0–based PDC. In Windows NT domains, the only domain controller that can accept updates is the PDC. If authentication fails at a BDC, the authentication request is passed immediately to the PDC, which is guaranteed to have the current password. An account lockout is urgently replicated to the PDC emulator and is then urgently replicated to the following:

• Domain controllers in the same domain that are located in the same site as the PDC emulator.
• Domain controllers in the same domain that are located in the same site as the domain controller that handled the account lockout.
• Domain controllers in the same domain that are located in sites that have been configured to allow change notification between sites (and, therefore, urgent replication) with the site that contains the PDC emulator or with the site where the account lockout was handled. These sites include any site that is included in the same site link as the site that contains the PDC emulator or in the same site link as the site that contains the domain controller that handled the account lockout.

In addition, when authentication fails at a domain controller other than the PDC emulator, the authentication is retried at the PDC emulator. For this reason, the PDC emulator locks the account before the domain controller that handled the failed password attempt if the bad-password-attempt threshold is reached.

Note
When a bad password is used in an attempt to change a password, the lockout count is incremented on that domain controller only and is not replicated. As a result, an attacker could try (number of domain controllers) × (lockout threshold − 1) + 1 guesses before the account is locked out; for example, with 10 domain controllers and a lockout threshold of 5, that is 10 × 4 + 1 = 41 guesses. Although this scenario has a relatively small impact on account lockout security, domains with an exceptionally high number of domain controllers represent a significant increase in the total number of guesses available to an attacker. Because a user cannot specify the domain controller on which the password change is attempted, an attack of this type requires an advanced tool.

Replication of Password Changes

Password changes are replicated differently than both normal (non-urgent) replication and urgent replication. Changes to security account passwords present a replication latency problem wherein a user's password is changed on domain controller A and the user subsequently attempts to log on, being authenticated by domain controller B. If the password has not replicated from A to B, the attempt to log on fails. Active Directory replication remedies this situation by forwarding password changes immediately to a single domain controller in the domain, the PDC emulator.

In Active Directory, when a user password is changed at a domain controller, that domain controller attempts to update the respective replica at the domain controller that holds the PDC emulator role. Update of the PDC emulator occurs immediately, without respect to schedules on site links. The updated password is propagated to other domain controllers by normal replication within a site. When the user logs on to a domain and is authenticated by a domain controller that does not have the updated password, the domain controller refers to the PDC emulator to check the credentials of the user name and password rather than denying authentication based on an invalid password. Therefore, the user can log on successfully even when the authenticating domain controller has not yet received the updated password. On domain controllers that are running Windows Server 2003 or Windows 2000 Server with SP4, if the authentication is successful at the PDC emulator, the PDC emulator replicates the password immediately to the requesting domain controller to prevent that domain controller from having to check the PDC emulator again. If the update at the PDC emulator fails for any reason, the password change is replicated non-urgently by normal replication.

For clients that are running Windows NT 4.0, or clients that are running Windows 95 or Windows 98 without the Directory Service Client Pack, the client attempts to contact the PDC emulator directly. If the client has the Directory Service Client Pack installed, the client contacts any domain controller, and the contacted domain controller then attempts to contact the PDC emulator.

Note
The Group Policy setting "Contact PDC on logon failure" can be disabled to keep a domain controller from contacting the PDC emulator if the PDC emulator role owner is not in the current site. If this setting is disabled, the password change reaches the PDC emulator non-urgently through normal replication. Top of page

Network Ports Used by Active Directory Replication

By default, RPC-based replication uses dynamic port mapping. When connecting to an RPC endpoint during Active Directory replication, the RPC run time on the client contacts the RPC endpoint mapper on the server at a well-known port (port 135). The client queries the endpoint mapper on this port to determine which port has been assigned for Active Directory replication on the server. This query occurs whether the port assignment is dynamic (the default) or fixed, so the client never needs to be configured in advance with the port to use for Active Directory replication.

Note
An endpoint comprises the protocol, local address, and port address.

In addition to port 135 and the dynamically assigned replication port, the other ports that are required for replication to occur are listed in the following table.

Port Assignments for Active Directory Replication

Service Name    UDP    TCP
LDAP            389    389
LDAP            -      636 (Secure Sockets Layer [SSL])
LDAP            -      3268 (global catalog)
Kerberos        88     88
DNS             53     53
SMB over IP     445    445
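A quick way to verify that these ports are reachable from a given host is a short TCP connect test, sketched below with the Python standard library. The domain controller name is a hypothetical example, and UDP-only paths are not exercised by this check.

    # Minimal sketch: test TCP reachability of replication-related ports
    # against a hypothetical domain controller.
    import socket

    DC = "dc1.corp.example.com"
    for port in (135, 389, 636, 3268, 88, 53, 445):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(2)
            status = "open" if s.connect_ex((DC, port)) == 0 else "unreachable"
            print(f"{DC}:{port} {status}")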

Replication within a domain also requires File Replication service (FRS) using a dynamic RPC port. Top of page

Related Information

The following resources contain additional information that is relevant to this section.

• How the Data Store Works
• Active Directory Replication Topology Technical Reference
• Active Directory Schema Technical Reference
• DNS Support for Active Directory Technical Reference

Active Directory Replication Tools and Settings

Updated: March 28, 2003

In this section

• Active Directory Replication Tools
• Active Directory Replication Registry Entries
• Active Directory Replication Group Policy Settings
• Active Directory Replication WMI Classes
• Network Ports Used by Active Directory Replication
• Related Information

Top of page

Active Directory Replication Tools

The following tools are associated with Active Directory replication.

Dssite.msc: Active Directory Sites and Services

Category: Active Directory Administrative Tools, Microsoft Management Console (MMC) snap-in. This tool is installed automatically when you install Active Directory, and is available on the Start menu under Programs\Administrative Tools. This tool also ships with the Administration Tools Pack (Adminpak.msi).

Version compatibility: Active Directory Sites and Services runs on domain controllers that are running Windows Server 2003 and Windows 2000 Server. On administrative workstations that are running Windows XP Professional with Service Pack 1, you can install the Windows Server 2003 Administration Tools Pack (Adminpak.msi) from the i386 directory on the Windows Server 2003 operating system CD. This version of the Administration Tools Pack encrypts and signs LDAP traffic between the administrative tool clients and domain controllers. The Windows Server 2003 version of Active Directory Sites and Services (installed on the domain controller or on the administrative workstation by using the Administration Tools Pack) can target domain controllers that are running Windows Server 2003 and Windows 2000 Server.

Active Directory Sites and Services provides a view into the Sites container of the configuration directory partition. Use Active Directory Sites and Services to manage Active Directory replication topology. The following objects and their properties can be managed by using this tool:

• Sites container: Add new sites.
• Site objects: Add new servers to a site.
• NTDS Site Settings object: For each site, view the connection object schedule and enable universal group membership caching.
• Server object: View the NTDS Settings object and designate the server as a bridgehead server.
• NTDS Settings object: View inbound connections for the server. View the connection object schedule and change the source server for the connection.
• Inter-Site Transports container: Manage IP and SMTP site links.
• Site link objects: Manage the site link properties for a set of sites.
• Subnets container: Add, remove, and configure subnets with IP addresses. Associate subnets with sites.

Repadmin.exe: Repadmin

Category: Windows Server 2003 Support Tools, command-line tool.

Version compatibility: Repadmin runs on any computer on which the Windows Server 2003 Support Tools can be installed, which includes the Windows Server 2003 family and Windows XP Professional.

Repadmin is used to view replication information on domain controllers. You can determine the last successful replication of all directory partitions, identify inbound and outbound replication partners, identify the current bridgehead servers, view object metadata, and generally manage Active Directory replication topology. You can use Repadmin to force replication of an entire directory partition or of a single object. You can also list domain controllers in a site. Repadmin is extended in Windows Server 2003 to enable commands to target sets of domain controllers. For example, you can target all domain controllers in a site or domain, or all domain controllers that are global catalog servers. In Windows 2000 Server, Repadmin can report information about only one domain controller at a time. Repadmin has also been improved in Windows Server 2003 to include the RemoveLingeringObjects command, which removes objects that are outdated (do not exist in a replica of the same directory partition on the source domain controller). For more information about removing lingering objects, see "Fixing Replication Lingering Object Problems (Event IDs 1388, 1988, 2042)" in the Windows Server 2003 Operations Guide at http://go.microsoft.com/fwlink/?LinkId=44131. For more information about Repadmin, see Repadmin Overview.
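Repadmin is also scriptable. As an illustration, the following sketch shells out to Repadmin to force replication of a directory partition from a source to a destination domain controller; the server names and partition distinguished name are hypothetical examples.

    # Minimal sketch (hypothetical servers and partition): force replication of
    # one directory partition from dc1 to dc2 by using repadmin /replicate.
    import subprocess

    subprocess.run(
        ["repadmin", "/replicate",
         "dc2.corp.example.com",        # destination domain controller
         "dc1.corp.example.com",        # source domain controller
         "DC=corp,DC=example,DC=com"],  # directory partition to replicate
        check=True,
    )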

Ntdsutil.exe: Ntdsutil

Category: Windows Server 2003 Support Tools, command-line tool.

Version compatibility: This tool is compatible with Windows Server 2003. An updated version of Ntdsutil is included with Windows Server 2003 Service Pack 1 (SP1).

Ntdsutil.exe provides management capabilities for Active Directory. You can use Ntdsutil.exe to perform Active Directory database maintenance, manage and control single-master operations, and remove replication metadata left behind by domain controllers that are removed from the network without uninstalling Active Directory. The version of Ntdsutil that is included with Windows Server 2003 SP1 removes File Replication service (FRS) metadata in addition to Active Directory replication metadata. You can also use Ntdsutil to create application directory partitions and perform authoritative restore operations. This tool is intended for use by experienced administrators. Top of page

Active Directory Replication Registry Entries

The information here is provided as a reference for use in troubleshooting or verifying that the required settings are applied. It is recommended that you do not directly edit the registry unless there is no other alternative. Modifications to the registry are not validated by the registry editor or by Windows before they are applied, and as a result, incorrect values can be stored. This can result in unrecoverable errors in the system. When possible, use Group Policy or other Windows tools, such as Microsoft Management Console (MMC), to accomplish tasks rather than editing the registry directly. If you must edit the registry, use extreme caution. The following registry settings cannot be modified by using Group Policy or other Windows tools.

NTDS Parameters Registry Settings

The following registry entries are associated with Active Directory replication.

Replicator notify pause after modify (secs)
Registry path: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters
Version: Windows 2000 Server.
Default value: 300 seconds.
The delay between an originating update on a domain controller and the first change notification. On domain controllers running Windows Server 2003, the value for initial change notification delay is stored in the msDS-Replication-Notify-First-DSA-Delay attribute on the cross-reference object for each directory partition in the Configuration container. The default value in Windows Server 2003 is decreased to 15 seconds when the forest functional level is Windows Server 2003.

Replicator notify pause between DSAs (secs)
Registry path: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters
Version: Windows 2000 Server.
Default value: 30 seconds.
The delay before each subsequent change notification. On domain controllers running Windows Server 2003, the value for subsequent notification delay is stored in the msDS-Replication-Notify-Subsequent-DSA-Delay attribute on the cross-reference object for each directory partition in the Configuration container. The default value in Windows Server 2003 is decreased to 3 seconds when the forest functional level is Windows Server 2003.

RPC Replication Timeout (mins)
Registry path: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters
Version: Windows Server 2003, Windows 2000 Server.
Default value: Windows 2000 Server: 45 minutes; Windows Server 2003: 5 minutes.
The number of minutes between initiation of Active Directory replication and the RPC timeout. The domain controller must be restarted before a change takes effect.

Strict replication consistency
Registry path: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters
Version: Windows Server 2003, Windows 2000 Server with SP3.
Default value: Windows 2000 Server with SP3: off (0); Windows Server 2003: on (1).
Determines the treatment of inbound replication of outdated objects from reconnected domain controllers that have not replicated in longer than a tombstone lifetime. If the destination domain controller has strict replication consistency enabled, inbound replication of an outdated object is blocked. If the destination domain controller has strict replication consistency disabled, inbound replication of the full object occurs.

Replicator intra site packet size (objects)
Registry path: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters
Version: Windows Server 2003, Windows 2000 Server.
Default value: 1/1,000,000th the size of RAM, with a minimum of 100 objects and a maximum of 1,000 objects.
The maximum number of objects per packet for RPC replication within a site.

Replicator intra site packet size (bytes)
Registry path: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters
Version: Windows Server 2003, Windows 2000 Server.
Default value: 1/100th the size of RAM, with a minimum of 1 megabyte (MB) and a maximum of 10 MB.
The maximum size of objects per packet for RPC replication within a site.

Replicator inter site packet size (objects)
Registry path: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters
Version: Windows Server 2003, Windows 2000 Server.
Default value: 1/1,000,000th the size of RAM, with a minimum of 100 objects and a maximum of 1,000 objects.
The maximum number of objects per packet for RPC replication between sites.

Replicator inter site packet size (bytes)
Registry path: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters
Version: Windows Server 2003, Windows 2000 Server.
Default value: 1/100th the size of RAM, with a minimum of 1 MB and a maximum of 10 MB.
The maximum size of objects per packet for RPC replication between sites.

Replicator async inter site packet size (objects)
Registry path: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters
Version: Windows Server 2003, Windows 2000 Server.
Default value: 1/1,000,000th the size of RAM, with a minimum of 100 objects and a maximum of 1,000 objects.
The maximum number of objects per packet for SMTP replication between sites.

Replicator async inter site packet size (bytes)
Registry path: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters
Version: Windows Server 2003, Windows 2000 Server.
Default value: 1 MB.
The maximum size of objects per packet for SMTP replication between sites.

Replicator compression algorithm
Registry path: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters
Version: Windows Server 2003.
Default value: 3. For Windows 2000 Server compression, change the value to 2.
Determines the compression algorithm that is used on a site link (Windows Server 2003 algorithm or Windows 2000 Server algorithm).

Repl topology update delay (secs)
Registry path: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters
Version: Windows Server 2003, Windows 2000 Server.
Default value: 300 seconds.
The number of seconds to wait between the time Active Directory starts and the time the KCC performs the first topology check. To find more information about Repl topology update delay (secs), see "Registry Reference" in the Tools and Settings Collection.

Repl topology update period (secs)
Registry path: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters
Version: Windows Server 2003, Windows 2000 Server.
Default value: 900 seconds.
The interval between KCC replication topology checks. To find more information about Repl topology update period (secs), see "Registry Reference" in the Tools and Settings Collection.

IntersiteFailuresAllowed
Registry path: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters
Version: Windows Server 2003, Windows 2000 Server.
Default value: 1.
The number of failed replication attempts prior to excluding nonresponding servers from the intersite topology.

MaxFailureTimeForIntersiteLink (sec)
Registry path: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters
Version: Windows Server 2003, Windows 2000 Server.
Default value: 7200 seconds (2 hours).
The time in seconds that must elapse prior to excluding nonresponding servers from the intersite topology.

NonCriticalLinkFailuresAllowed
Registry path: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters
Version: Windows Server 2003, Windows 2000 Server.
Default value: 1.
The number of failed replication attempts prior to excluding nonresponding servers from the intrasite topology.

MaxFailureTimeForNonCriticalLink (sec)
Registry path: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters
Version: Windows Server 2003, Windows 2000 Server.
Default value: 43200 seconds (12 hours).
The time in seconds that must elapse prior to excluding nonresponding servers from the intrasite topology.

CriticalLinkFailuresAllowed
Registry path: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters
Version: Windows Server 2003, Windows 2000 Server.
Default value: 0.
The number of failed replication attempts prior to excluding nonresponding servers for immediate neighbor connections within a site.

MaxFailureTimeForCriticalLink (sec)
Registry path: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters
Version: Windows Server 2003, Windows 2000 Server.
Default value: 7200 seconds (2 hours).
The time in seconds that must elapse prior to excluding nonresponding servers for immediate neighbor connections within a site.

TCP/IP Port
Registry path: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters
Version: Windows Server 2003, Windows 2000 Server.
Default value: 135.
The TCP port that the directory service uses for replication in place of a dynamically assigned port (which is otherwise located through the endpoint mapper on port 135). The domain controller must be restarted before a change takes effect.
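For example, the entry can be created with a short script; the following is a sketch using the Python standard-library winreg module. The port number is an arbitrary example, and administrative rights are required.

    # Minimal sketch: pin Active Directory replication to a static TCP port by
    # creating the "TCP/IP Port" value. Port 50000 is an arbitrary example; the
    # domain controller must be restarted for the change to take effect.
    import winreg

    KEY = r"SYSTEM\CurrentControlSet\Services\NTDS\Parameters"

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY, 0,
                        winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "TCP/IP Port", 0, winreg.REG_DWORD, 50000)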

Backup Latency Threshold (days)
Registry path: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters
Version: Windows Server 2003 with SP1.
Default value: Half the value of the tombstone lifetime of the forest.
When the threshold is reached, the domain controller logs event ID 2089 in the Directory Service event log, warning administrators and monitoring applications to make sure that domain controllers are backed up before the tombstone lifetime expires. Top of page

Active Directory Replication Group Policy Settings

The following table lists and describes the Group Policy settings that are associated with Active Directory replication updates.

Group Policy Settings Associated with Active Directory Replication

Account Lockout Policy:
• Account lockout duration
• Account lockout threshold
• Reset account lockout counter after
Changes to these settings in the Domain Security Policy trigger urgent replication.

Password Policy:
• Enforce password history
• Maximum password age
• Minimum password age
• Minimum password length
• Password must meet complexity requirements
• Store passwords using reversible encryption
Changes to these settings in the Domain Security Policy trigger urgent replication.

Contact PDC on logon failure:
Account lockout and domain password changes rely on contacting the primary domain controller (PDC) emulator urgently to update the PDC emulator with the change. If Contact PDC on logon failure is disabled, replication of password changes to the PDC emulator occurs non-urgently.

To find more information about these Group Policy settings, see “Group Policy Settings Reference” in Tools and Settings Collection. Top of page

Active Directory Replication WMI Classes

The following table lists and describes the WMI classes that are associated with Active Directory replication. These classes ship with Windows Server 2003, but are also compatible with Windows 2000 Server.

WMI Classes Associated with Active Directory Replication

Class Name             Namespace                        Version Compatibility
MSAD_DomainController  \\root\MicrosoftActiveDirectory  Windows Server 2003, Windows 2000 Server
MSAD_NamingContext     \\root\MicrosoftActiveDirectory  Windows Server 2003, Windows 2000 Server
MSAD_ReplNeighbor      \\root\MicrosoftActiveDirectory  Windows Server 2003, Windows 2000 Server
MSAD_ReplCursor        \\root\MicrosoftActiveDirectory  Windows Server 2003, Windows 2000 Server
MSAD_ReplPendingOp     \\root\MicrosoftActiveDirectory  Windows Server 2003, Windows 2000 Server
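As an illustration, these classes can be queried from a script running on a domain controller. The following minimal sketch uses the third-party Python wmi package; the property names shown are assumed from the class schema and should be checked against the WMI SDK documentation.

    # Minimal sketch (run on a domain controller; requires the third-party
    # "wmi" package): list inbound replication neighbors per directory
    # partition via the MSAD_ReplNeighbor class.
    import wmi

    ad = wmi.WMI(namespace=r"root\MicrosoftActiveDirectory")
    for neighbor in ad.MSAD_ReplNeighbor():
        print(neighbor.NamingContextDN,
              neighbor.SourceDsaCN,
              neighbor.NumConsecutiveSyncFailures)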

For more information about these WMI classes, see the WMI SDK documentation on MSDN. Top of page

Network Ports Used by Active Directory Replication

By default, RPC-based replication uses dynamic port mapping. When connecting to an RPC endpoint during Active Directory replication, the RPC run time on the client contacts the RPC endpoint mapper on the server at a well-known port (port 135). The client queries the endpoint mapper on this port to determine which port has been assigned for Active Directory replication on the server. This query occurs whether the port assignment is dynamic (the default) or fixed, so the client never needs to be configured in advance with the port to use for Active Directory replication.

Note
An endpoint comprises the protocol, local address, and port address.

In addition to port 135 and the dynamically assigned replication port, the other ports that are required for replication to occur are listed in the following table.

Port Assignments for Active Directory Replication

Service Name    UDP    TCP
LDAP            389    389
LDAP            -      636 (Secure Sockets Layer [SSL])
LDAP            -      3268 (global catalog)
Kerberos        88     88
DNS             53     53
SMB over IP     445    445

Replication within a domain also requires FRS using a dynamic RPC port. Top of page

Related Information

The following resources contain additional information that is relevant to this section.

• How the Active Directory Replication Model Works
• How Active Directory Replication Topology Works

Active Directory Replication Topology Technical Reference

Updated: March 28, 2003

The replication topology of the Active Directory directory service provides the network of connections between domain controllers in a forest according to their location in Active Directory sites. A site is an Active Directory object that you create and configure to represent an area of good network connectivity, typically corresponding to a local area network (LAN). The site object is associated with a set of one or more subnets, which are objects that identify a range of IP addresses. Each domain controller has an IP address that maps to a subnet, and that mapping in turn identifies the site of the domain controller. By recognizing domain controllers according to site locations, the replication system ensures that each domain controller is updated with directory changes in the most efficient and timely manner possible, given network conditions and directory service configuration. The replication topology is generated automatically at regular intervals to accommodate network and configuration changes, and is designed to ensure that all domain controllers are connected without redundancy and with minimum cost.

In this subject

• What Is Active Directory Replication Topology?
• How Active Directory Replication Topology Works
• Active Directory Replication Tools and Settings

What Is Active Directory Replication Topology?

Updated: March 28, 2003

In this section

• Replication Within and Between Sites
• Technologies Related to Active Directory Replication Topology
• Active Directory Replication Topology Dependencies
• Related Information

Updates to Active Directory objects are transmitted between multiple domain controllers to keep replicas of directory partitions synchronized. Multiple domains are common in large organizations, as are multiple sites in disparate locations. In addition, domain controllers for the same domain are commonly placed in more than one site. Therefore, replication must often occur both within sites and between sites to keep domain and forest data consistent among domain controllers that store the same directory partitions.

Site objects can be configured to include a set of subnets that provide local area network (LAN) network speeds. As such, replication within sites generally occurs at high speeds between domain controllers that are on the same network segment. Similarly, site link objects can be configured to represent the wide area network (WAN) links that connect LANs. Replication between sites usually occurs over these WAN links, which might be costly in terms of bandwidth. To accommodate the differences in distance and cost of replication within a site and replication between sites, the intrasite replication topology is created to optimize speed, and the intersite replication topology is created to minimize cost.

The Knowledge Consistency Checker (KCC) is a distributed application that runs on every domain controller and is responsible for creating the connections between domain controllers that collectively form the replication topology. The KCC uses Active Directory data to determine where (from what source domain controller to what destination domain controller) to create these connections. Top of page

Replication Within and Between Sites

The KCC creates separate replication topologies to transfer Active Directory updates within a site and between all configured sites in the forest. The connections that are used for replication within sites are created automatically with no additional configuration. Intrasite replication takes advantage of LAN network speeds by providing replication as soon as changes occur, without the overhead of data compression, thus maximizing CPU efficiency. Intrasite replication connections form a ring topology with extra shortcut connections where needed to decrease latency. The fast replication of updates within sites facilitates timely updates of domain data. In deployments where large datacenters constitute hub sites for the centralization of mission-critical operations, directory consistency is critical.

Replication between sites is made possible by user-defined site and site link objects that are created in Active Directory to represent the physical LAN and WAN network infrastructure. When Active Directory sites and site links are configured, the KCC creates an intersite topology so that replication flows between domain controllers across WAN links. Intersite replication occurs according to a site link schedule so that WAN usage can be controlled, and is compressed to reduce network bandwidth requirements. Site link settings can be managed to optimize replication routing over WAN links. The connections that are created between sites form a spanning tree for each directory partition in the forest, merging where common directory partitions can be replicated over the same connection.

In remote branch locations, replication of updates from the hub sites is optimized for network availability. Because intrasite replication is optimized for speed, branch locations across WAN links can be assured of receiving data from hub sites that is up-to-date and reliable; but because intersite replication is scheduled, branch sites receive this replication only at intervals that are deemed appropriate and cost-effective for remote operations. Top of page

Technologies Related to Active Directory Replication Topology

The following technologies interact with Active Directory replication.

File Replication Service

File Replication service (FRS) is related to Active Directory replication because it requires the Active Directory replication topology. FRS is a multimaster replication service that is used to replicate files and folders in the system volume (SYSVOL) shared folder on domain controllers and in Distributed File System (DFS) shared folders. FRS works by detecting changes to files and folders and then replicating the updated files and folders to other replica members, which are connected in a replication topology. FRS uses the replication topology that is generated by the KCC to replicate the SYSVOL files to all domain controllers in the domain. SYSVOL files are required by all domain controllers for Active Directory to function. For more information about FRS and how it uses the Active Directory replication topology, see "FRS Technical Reference". For more information about SYSVOL, see "Data Store Technical Reference."

SMTP

Simple Mail Transfer Protocol (SMTP) is a packaging protocol that can be used as an alternative to the remote procedure call (RPC) replication transport. SMTP can be used to transport nondomain replication over IP networks in mail-message format. Where networks are not fully routed, e-mail is sometimes the only transport method available. Top of page

Active Directory Replication Topology Dependencies

Active Directory replication topology has the following dependencies:

• Routable IP infrastructure. The replication topology is dependent upon a routable IP infrastructure from which you can map IP subnet address ranges to site objects. This mapping generates the information that is used by client workstations to communicate with domain controllers that are close by, when there is a choice, rather than those that are located across WAN links.
• DNS. The Domain Name System (DNS) resolves DNS names to IP addresses. Active Directory replication topology requires that DNS is properly designed and deployed so that domain controllers can correctly resolve the DNS names of replication partners. DNS also stores service (SRV) resource records that provide site affinity information to clients searching for domain controllers, including domain controllers that are searching for replication partners. Every domain controller registers these records so that they can be located according to site.
• Net Logon service. Net Logon is required for DNS registrations.
• RPC. Active Directory replication requires IP connectivity and RPC to transfer updates between replication partners within sites. RPC is required for replication between two sites containing domain controllers in the same domain, but SMTP is an alternative where RPC cannot be used and domain controllers for the same domain are all located in one site, so that intersite replication of domain data is not required.
• Intersite Messaging. Intersite Messaging is required for SMTP intersite replication and for site coverage calculations. If the forest functional level is Windows 2000, Intersite Messaging is also required for intersite topology generation.

The following diagram shows the interaction of these technologies with the replication topology, which is indicated by the two-way connections between each set of domain controllers.

Replication Topology and Dependent Technologies

Top of page

Related Information

The following resources contain additional information that is relevant to this section:

• How Active Directory Replication Topology Works
• TCP/IP Technical Reference

How Active Directory Replication Topology Works

Updated: March 28, 2003

In this section

• Active Directory KCC Architecture and Processes
• Replication Topology Physical Structure
• Performance Limits for Replication Topology Generation
• Goals of Replication Topology
• Topology-Related Objects in Active Directory
• Replication Transports
• Replication Between Sites
• KCC and Topology Generation
• Network Ports Used by Replication Topology
• Related Information

Active Directory implements a replication topology that takes advantage of the network speeds within sites, which are ideally configured to be equivalent to local area network (LAN) connectivity (network speed of 10 megabits per second [Mbps] or higher). The replication topology also minimizes the use of potentially slow or expensive wide area network (WAN) links between sites.

When you create a site object in Active Directory, you associate one or more Internet Protocol (IP) subnets with the site. Each domain controller in a forest is associated with an Active Directory site. A client workstation is associated with a site according to its IP address; that is, each IP address maps to one subnet, which in turn maps to one site. Active Directory uses sites to:

• Optimize replication for speed and bandwidth consumption between domain controllers.
• Locate the closest domain controller for client logon, services, and directory searches.
• Direct a Distributed File System (DFS) client to the server that is hosting the requested data within the site.
• Replicate the system volume (SYSVOL), a collection of folders in the file system that exists on each domain controller in a domain and is required for implementation of Group Policy.

The ideal environment for replication topology generation is a forest that has a forest functional level of Windows Server 2003. In this case, replication topology generation is faster and can accommodate more sites and domains than when the forest has a forest functional level of Windows 2000. When at least one domain controller in each site is running Windows Server 2003, more domain controllers in each site can be used to replicate changes between sites than when all domain controllers are running Windows 2000 Server. In addition, replication topology generation requires the following conditions:



• A Domain Name System (DNS) infrastructure that manages the name resolution for domain controllers in the forest. Active Directory–integrated DNS is assumed, wherein DNS zone data is stored in Active Directory and is replicated to all domain controllers that are DNS servers.
• All physical locations that are represented as site objects in Active Directory have LAN connectivity.
• IP connectivity is available between each site and all sites in the same forest that host operations master roles.
• The appropriate number of domain controllers is deployed for each domain that is represented in each site.
• Domain controllers meet the hardware requirements for Microsoft Windows Server 2003, Standard Edition; Windows Server 2003, Enterprise Edition; and Windows Server 2003, Datacenter Edition.
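The IP-to-site mapping described at the beginning of this section can be illustrated with a short sketch using the Python standard-library ipaddress module; the subnets and site names are hypothetical examples.

    # Minimal sketch (hypothetical subnets and sites): each IP address maps to
    # one subnet, and each subnet maps to one site.
    import ipaddress

    SUBNET_TO_SITE = {
        ipaddress.ip_network("10.1.0.0/16"): "Site-A",
        ipaddress.ip_network("10.2.0.0/16"): "Site-B",
    }

    def site_of(ip):
        addr = ipaddress.ip_address(ip)
        for subnet, site in SUBNET_TO_SITE.items():
            if addr in subnet:
                return site
        return None  # no subnet object covers this address

    print(site_of("10.2.34.7"))  # -> Site-B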

This section covers the replication components that create the replication topology and how they work together, plus the mechanisms and rationale for routing replication traffic between domain controllers in the same site and in different sites. Top of page

Active Directory KCC Architecture and Processes

The replication topology is generated by the Knowledge Consistency Checker (KCC), a replication component that runs as an application on every domain controller and communicates through the distributed Active Directory database. The KCC functions locally by reading, creating, and deleting Active Directory data. Specifically, the KCC reads configuration data and reads and writes connection objects. The KCC also writes local, nonreplicated attribute values that indicate the replication partners from which to request replication.

For most of its operation, the KCC that runs on one domain controller does not communicate directly with the KCC on any other domain controller. Rather, all KCCs use the knowledge of the common, global data that is stored in the configuration directory partition as input to the topology generation algorithm to converge on the same view of the replication topology. Each KCC uses its in-memory view of the topology to create inbound connections locally, manifesting only those results that apply to itself. The KCC communicates with other KCCs only to make a remote procedure call (RPC) request for replication error information. The KCC uses the error information to identify gaps in the replication topology. A request for replication error information occurs only between domain controllers in the same site.

Note
The KCC uses only RPC to communicate with the directory service. The KCC does not use Lightweight Directory Access Protocol (LDAP).

One domain controller in each site is selected as the Intersite Topology Generator (ISTG). To enable replication across site links, the ISTG automatically designates one or more servers to perform site-to-site replication. These servers are called bridgehead servers. A bridgehead is a point where a connection leaves or enters a site. The ISTG creates a view of the replication topology for all sites, including existing connection objects between all domain controllers that are acting as bridgehead servers. The ISTG then creates inbound connection objects for servers in its site that it determines will act as bridgehead servers and for which connection objects do not already exist. Thus, the scope of operation for the KCC is the local server only, and the scope of operation for the ISTG is a single site.

Each KCC has the following global knowledge about objects in the forest, which it gets by reading objects in the Sites container of the configuration directory partition and which it uses to generate a view of the replication topology:

• Sites
• Servers
• Site affiliation of each server
• Global catalog servers
• Directory partitions stored by each server
• Site links
• Site link bridges

Detailed information about these configuration components and their functionality is provided later in this section. The following diagram shows the KCC architecture on servers in the same forest in two sites.

KCC Architecture and Processes

The architecture and process components in the preceding diagram are described in the following table.

KCC Architecture and Process Components

Knowledge Consistency Checker (KCC): The application running on each domain controller that communicates directly with Ntdsa.dll to read and write replication objects.

Directory System Agent (DSA): The directory service component that runs as Ntdsa.dll on each domain controller, providing the interfaces through which services and processes such as the KCC gain access to the directory database.

Extensible Storage Engine (ESE): The directory service component that runs as Esent.dll. ESE manages the tables of records, each with one or more columns, that comprise the directory database.

Remote procedure call (RPC): The Directory Replication Service (Drsuapi) RPC protocol, used to communicate replication status and topology to a domain controller. The KCC also uses this protocol to communicate with other KCCs to request error information when building the replication topology.

Intersite Topology Generator (ISTG): The single KCC in a site that manages intersite connection objects for the site.

The four servers in the preceding diagram create identical views of the servers in their site and generate connection objects on the basis of the current state of Active Directory data in the configuration directory partition. In addition to creating its view of the servers in its respective site, the KCC that operates as the ISTG in each site also creates a view of all servers in all sites in the forest. From this view, the ISTG determines the connections to create on the bridgehead servers in its own site.

Note
A connection requires two endpoints: one for the destination domain controller and one for the source domain controller. Domain controllers creating an intrasite topology always use themselves as the destination endpoint and must consider only the endpoint for the source domain controller. The ISTG, however, must identify both endpoints in order to create connection objects between two other servers.

Thus, the KCC creates two types of topologies: intrasite and intersite. Within a site, the KCC creates a ring topology by using all servers in the site. To create the intersite topology, the ISTG in each site uses a view of all bridgehead servers in all sites in the forest. The following diagram shows a high-level generalization of the view that the KCC sees of an intrasite ring topology and the view that the ISTG sees of the intersite topology. Lines between domain controllers within a site represent inbound and outbound connections between the servers. The lines between sites represent configured site links. Bridgehead servers are represented as BH.

KCC and ISTG Views of Intrasite and Intersite Topology

Top of page

Replication Topology Physical Structure The Active Directory replication topology can use many different components. Some components are required and others are not required but are available for optimization. The following diagram illustrates most replication topology components and their place in a sample Active Directory multisite and multidomain forest. The depiction of the intersite topology that uses multiple bridgehead servers for each domain assumes that at least one domain controller in each site is running Windows Server 2003. All components of this diagram and their interactions are explained in detail later in this section. Replication Topology Physical Structure

In the preceding diagram, all servers are domain controllers. They independently use global knowledge of configuration data to generate one-way, inbound connection objects. The KCCs in a site collectively create an intrasite topology for all domain controllers in the site. The ISTGs from all sites collectively create an intersite topology. Within sites, one-way arrows indicate the inbound connections by which each domain controller replicates changes from its partner in the ring. For intersite replication, one-way arrows represent inbound connections that are created by the ISTG of each site from bridgehead servers (BH) for the same domain (or from a global catalog server [GC] acting as a bridgehead if the domain is not present in the site) in other sites that share a site link. Domains are indicated as D1, D2, D3, and D4.

Each site in the diagram represents a physical LAN in the network, and each LAN is represented as a site object in Active Directory. Heavy solid lines between sites indicate WAN links over which two-way replication can occur, and each WAN link is represented in Active Directory as a site link object. Site link objects allow connections to be created between bridgehead servers in each site that is connected by the site link. Not shown in the diagram is that where TCP/IP WAN links are available, replication between sites uses the RPC replication transport. RPC is always used within sites.

The site link between Site A and Site D uses the SMTP protocol for the replication transport to replicate the configuration and schema directory partitions and global catalog partial, read-only directory partitions. Although the SMTP transport cannot be used to replicate writable domain directory partitions, this transport is required because a TCP/IP connection is not available between Site A and Site D. This configuration is acceptable for replication because Site D does not host domain controllers for any domains that must be replicated over the site link A-D.

By default, site links A-B and A-C are transitive (bridged), which means that replication of domain D2 is possible between Site B and Site C, although no site link connects the two sites. The cost values on site links A-B and A-C are site link settings that determine the routing preference for replication, which is based on the aggregated cost of available site links. The cost of a direct connection between Site C and Site B is the sum of the costs on site links A-B and A-C. For this reason, replication between Site B and Site C is automatically routed through Site A, and connections are created directly between Site B and Site C only if replication through Site A becomes impossible due to network or bridgehead server conditions.
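The route selection described above can be illustrated with a small sketch. The numeric costs below are assumptions (the diagram does not give cost values); the point is only that the ISTG prefers the path whose summed site link costs are lowest, and falls back to costlier paths when the preferred one fails.

    import heapq

    def least_cost_route(links, start, goal):
        # Dijkstra over bridged (transitive) site links; a route's cost is
        # the sum of the costs of the site links it crosses.
        best = {start: 0}
        frontier = [(0, start, [start])]
        while frontier:
            cost, site, path = heapq.heappop(frontier)
            if site == goal:
                return cost, path
            for neighbor, link_cost in links.get(site, []):
                new_cost = cost + link_cost
                if new_cost < best.get(neighbor, float("inf")):
                    best[neighbor] = new_cost
                    heapq.heappush(frontier, (new_cost, neighbor, path + [neighbor]))
        return None  # no chain of site links connects the two sites

    # Assumed costs: site links A-B and A-C exist; B and C share no site link.
    links = {
        "A": [("B", 100), ("C", 100)],
        "B": [("A", 100)],
        "C": [("A", 100)],
    }
    print(least_cost_route(links, "B", "C"))   # (200, ['B', 'A', 'C'])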

Performance Limits for Replication Topology Generation

Active Directory topology generation performance is limited primarily by the memory on the domain controller. KCC performance degrades at the physical memory limit. In most deployments, topology size will be limited by the amount of domain controller memory rather than by the CPU utilization that the KCC requires. Scaling of sites and domains is improved in Windows Server 2003 by an improved algorithm that the KCC uses to generate the intersite replication topology. Because all domain controllers must use the same algorithm to arrive at a consistent view of the replication topology, the improved algorithm has a forest functional level requirement of Windows Server 2003 or Windows Server 2003 interim.

KCC scalability was tested on domain controllers with 1.8 GHz processor speed, 512 megabytes (MB) of RAM, and small computer system interface (SCSI) disks. KCC performance results at the Windows Server 2003 forest functional level are described in the following table. The times shown are for the KCC to run where all new connections are needed (maximum) and where no new connections are needed (minimum). Because most organizations add domain controllers in increments, the minimum generation times shown are closest to the actual runtimes that can be expected in deployments of comparable sizes. The CPU and memory usage values for the Local Security Authority (LSA) process (Lsass.exe) indicate the more significant impact of memory versus percent of CPU usage when the KCC runs.

Note

Active Directory runs as part of the LSA, which manages authentication packages and authenticates users and services.

Minimum and Maximum KCC Generation Times for Domain-Site Combinations

Domains  Sites   Connections  KCC Generation Time (seconds)  Lsass.exe Memory Usage (MB)  Lsass.exe CPU Usage (%)
1        500     Maximum      43                              100                          39
1        500     Minimum      1                               100                          29
1        1,000   Maximum      49                              149                          43
1        1,000   Minimum      2                               149                          28
1        3,000   Maximum      69                              236                          46
1        3,000   Minimum      2                               236                          63
5        500     Maximum      70                              125                          29
5        500     Minimum      2                               126                          71
5        1,000   Maximum      77                              237                          28
5        1,000   Minimum      3                               237                          78
5        2,000   Maximum      78                              325                          43
5        2,000   Minimum      5                               325                          77
5        3,000   Maximum      85                              449                          52
5        3,000   Minimum      6                               449                          75
5        4,000   Maximum      555                             624                          46
5        4,000   Minimum      34                              624                          69
20       1,000   Maximum      48                              423                          65
20       1,000   Minimum      5                               423                          81
40       1,000   Maximum      93                              799                          56
40       1,000   Minimum      12                              799                          96
40       2,000   Minimum      38                              874                          71

These numbers cannot be used as the sole guidelines for forest and domain design. Other limitations might affect performance and scalability. A limitation of note is that when FRS is deployed, a limit of 1,200 domain controllers per domain is recommended to ensure reliable recovery of SYSVOL. For more information about FRS limitations, see “FRS Technical Reference.” For more information about the functional level requirements for the intersite topology generation algorithm, see “Automated Intersite Topology Generation” later in this section.

Goals of Replication Topology

The KCC generates a replication topology that achieves the following goals:

• Connect every directory partition replica that must be replicated.
• Control replication latency and cost.
• Route replication between sites.
• Effect client affinity.

By default, the replication topology is managed automatically and optimizes existing connections. However, manual connections created by an administrator are not modified or optimized.

Connect Directory Partition Replicas

The total replication topology is actually composed of several underlying topologies, one for each directory partition. In the case of the schema and configuration directory partitions, a single topology is created. The underlying topologies are merged to form the minimum number of connections that are required to replicate each directory partition between all domain controllers that store replicas. Where the connections for directory partitions are identical between domain controllers — for example, two domain controllers store the same domain directory partition — a single connection can be used for replication of updates to the domain, schema, and configuration directory partitions. A separate replication topology is also created for application directory partitions. However, in the same manner as schema and configuration directory partitions, application directory partitions can use the same topology as domain directory partitions. When application and domain directory partitions are common to the source and destination domain controllers, the KCC does not create a separate connection for the application directory partition. A separate topology is not created for the partial replicas that are stored on global catalog servers. The connections that are needed by a global catalog server to replicate each partial replica of a domain are part of the topology that is created for each domain. The routes for the following directory partitions or combinations of directory partitions are aggregated to arrive at the overall topology:

• Configuration and schema within a site.
• Each writable domain directory partition within a site.
• Each application directory partition within a site.
• Global catalog read-only, partial domain directory partitions within a site.
• Configuration and schema between sites.
• Each writable domain directory partition between sites.
• Each application directory partition between sites.
• Global catalog read-only, partial domain directory partitions between sites.

Replication transport protocols determine the manner in which replication data is transferred over the network media. Your network environment and server configuration dictate the transports that you can use. For more information about transports, see “Replication Transports” later in this section.

Control Replication Latency and Cost

Replication latency is inherent in a multimaster directory service. A period of replication latency begins when a directory update occurs on an originating domain controller and ends when replication of the change is received on the last domain controller in the forest that requires the change. Generally, the latency that is inherent in a WAN link is relative to a combination of the speed of the connection and the available bandwidth. Replication cost is an administrative value that can be used to indicate the latency that is associated with different replication routes between sites. A lower-cost route is preferred by the ISTG when generating the replication topology. Site topology is the topology as represented by the physical network: the LANs and WANs that connect domain controllers in a forest. The replication topology is built to use the site topology. The site topology is represented in Active Directory by site objects and site link objects. These objects influence Active Directory replication to achieve the best balance between replication speed and the cost of bandwidth utilization by distinguishing between replication that occurs within a site and replication that must span sites. When the KCC creates replication connections between domain controllers to generate the replication topology, it creates more connections between domain controllers in the same site than between domain controllers in different sites. The results are lower replication latency within a site and less replication bandwidth utilization between sites. Within sites, replication is optimized for speed as follows:

• Connections between domain controllers in the same site are always arranged in a ring, with possible additional connections to reduce latency.
• Replication within a site is triggered by a change notification mechanism when an update occurs, moderated by a short, configurable delay (because groups of updates frequently occur together).
• Data is sent uncompressed, and thus without the processing overhead of data compression.

Between sites, replication is optimized for minimal bandwidth usage (cost) as follows:

• Replication data is compressed to minimize bandwidth consumption over WAN links.
• Store-and-forward replication makes efficient use of WAN links — each update crosses an expensive link only once.
• Replication occurs at intervals that you can schedule so that use of expensive WAN links is managed.
• The intersite topology is a layering of spanning trees (one intersite connection between any two sites for each directory partition) and generally does not contain redundant connections.

Route Replication Between Sites

The KCC uses the information in Active Directory to identify the least-cost routes for replication between sites. If a domain controller is unavailable at the time the replication topology is created, making replication through that site impossible, the next least-cost route is used. This rerouting is automatic when site links are bridged (transitive), which is the default setting. Replication is automatically routed around network failures and offline domain controllers.

Effect Client Affinity

Active Directory clients locate domain controllers according to their site affiliation. Domain controllers register SRV resource records in the DNS database that map the domain controller to a site. When a client requests a connection to a domain controller (for example, when logging on to a domain computer), the domain controller Locator uses the site SRV resource record to locate a domain controller with good connectivity whenever possible. In this way, a client locates a domain controller within the same site, thereby avoiding communications over WAN links. Sites can also be used by certain applications, such as DFS, to ensure that clients locate servers that are within the site or, if none is available, a server in the next closest site. If the ISTG is running Windows Server 2003, you can specify an alternate site based on connection cost if no same-site servers are available. This DFS feature, called “site costing,” is new in Windows Server 2003. For more information about the domain controller Locator, see “DNS Support for Active Directory Technical Reference.” For more information about DFS site costing, see “DFS Technical Reference.”
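The site-specific SRV records that the Locator consults can be inspected directly. A sketch using the third-party dnspython package; the site and forest DNS names are placeholders, and the record name follows the _msdcs convention described in the DNS references above:

    import dns.resolver  # third-party package: dnspython

    # Hypothetical site and forest DNS names; substitute your own.
    site = "Default-First-Site-Name"
    domain = "corp.example.com"
    query = f"_ldap._tcp.{site}._sites.dc._msdcs.{domain}"

    for srv in dns.resolver.resolve(query, "SRV"):
        # Each record names a domain controller registered for the site.
        print(srv.priority, srv.weight, srv.port, srv.target)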

Topology-Related Objects in Active Directory

Active Directory stores replication topology information in the configuration directory partition. Several configuration objects define the components that are required by the KCC to establish and implement the replication topology.

Active Directory Sites and Services is the Microsoft Management Console (MMC) snap-in that you can use to view and manage the hierarchy of objects that are used by the KCC to construct the replication topology. The hierarchy is displayed as the contents of the Sites container, which is a child object of the Configuration container. The Configuration container is not identified in the Active Directory Sites and Services UI. The Sites container contains an object for each site in the forest. In addition, Sites contains the Subnets container, which contains subnet definitions in the form of subnet objects. The following figure shows a sample hierarchy, including two sites: Default-First-Site-Name and Site A. The selected NTDS Settings object of the server MHDC3 in the site Default-First-Site-Name displays the inbound connections from MHDC4 in the same site and from A-DC-01 in Site A. In addition to showing that MHDC3 and MHDC4 perform intrasite replication, this configuration indicates that MHDC3 and A-DC-01 are bridgehead servers that are replicating the same domain between Site A and Default-First-Site-Name.

Sites Container Hierarchy

Site and Subnet Objects

Sites are effective because they map to specific ranges of subnet addresses, as identified in Active Directory by subnet objects. The relationship between sites and subnets is integral to Active Directory replication.

Site Objects

A site object (class site) corresponds to a set of one or more IP subnets that have LAN connectivity. Thus, by virtue of their subnet associations, domain controllers that are in the same site are well connected in terms of speed. Each site object has a child NTDS Site Settings object and a Servers container. The distinguished name of the Sites container is CN=Sites,CN=Configuration,DC=ForestRootDomainName. The Configuration container is the topmost object in the configuration directory partition, and the Sites container is the topmost object in the hierarchy of objects that are used to manage and implement Active Directory replication. When you install Active Directory on the first domain controller in the forest, a site object named Default-First-Site-Name is created in the Sites container in Active Directory.

Subnet Objects

Subnet objects (class subnet) define network subnets in Active Directory. A network subnet is a segment of a TCP/IP network to which a set of logical IP addresses is assigned. Subnets group computers in a way that identifies their physical proximity on the network. Subnet objects in Active Directory are used to map computers to sites. Each subnet object has a siteObject attribute that links it to a site object.

Subnet-to-Site Mapping

You associate a set of IP subnets with a site if they have high-bandwidth LAN connectivity, possibly involving hops through high-performance routers.

Note

LAN connectivity assumes high-speed, inexpensive bandwidth that allows similar and reliable network performance, regardless of which two computers in the site are communicating. This quality of connectivity does not indicate that all servers in the site must be on the same network segment or that hop counts between all servers must be identical. Rather, it is the measure by which you know that if a large amount of data needs to be copied from one server to another, it does not matter which servers are involved. If you find that you are concerned about such situations, consider creating another site.

When you create subnet objects in Active Directory, you associate them with site objects so that IP addresses can be localized according to sites. During the process of domain controller location, subnet information is used to find a domain controller in the same site as, or the site closest to, the client computer. The Net Logon service on a domain controller is able to identify the site of a client by mapping the client’s IP address to a subnet object in Active Directory. Likewise, when a domain controller is installed, its server object is created in the site that contains the subnet that maps to its IP address.

You can use Active Directory Sites and Services to define subnets, and then create a site and associate the subnets with the site. By default, only members of the Enterprise Admins group have the right to create new sites, although this right can be delegated. In a default Active Directory installation, there is no default subnet object, so potentially a computer can be in the forest but have an IP subnet that is not contained in any site. For private networks, you can specify the network addresses that are provided by the Internet Assigned Numbers Authority (IANA). By definition, that range covers all of the subnets for the organization. However, where several class B or class C addresses are assigned, there would necessarily be multiple subnet objects that all mapped to the same default site. To accommodate this situation, use the following subnets:

• For class B addresses, subnet 128.0.0.0/2 covers all class B addresses.
• For class C addresses, subnet 192.0.0.0/3 covers all class C addresses.
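The lookup itself amounts to matching a client address against the defined subnet objects. A minimal sketch with Python's standard ipaddress module; the subnets and site names are hypothetical, and preferring the most specific (longest-prefix) match is an assumption about how overlapping subnets, such as a catch-all range, would be resolved:

    import ipaddress

    # Hypothetical subnet objects and the sites their siteObject attributes reference.
    subnet_to_site = {
        "10.1.0.0/16": "Site-A",
        "10.2.0.0/16": "Site-B",
        "192.0.0.0/3": "Default-First-Site-Name",  # catch-all for class C addresses
    }

    def site_of(ip):
        # Choose the most specific (longest-prefix) subnet that contains ip,
        # roughly as Net Logon maps a client address to a site.
        addr, best = ipaddress.ip_address(ip), None
        for net, site in subnet_to_site.items():
            network = ipaddress.ip_network(net)
            if addr in network and (best is None or network.prefixlen > best[0].prefixlen):
                best = (network, site)
        return best[1] if best else None

    print(site_of("10.2.34.5"))    # Site-B
    print(site_of("203.0.113.7"))  # Default-First-Site-Name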

Note

The Active Directory Sites and Services MMC snap-in neither checks nor enforces IP address mapping when you move a server object to a different site. You must manually change the IP address on the domain controller to ensure proper mapping of the IP address to a subnet in the appropriate site.

Server Objects

Server objects (class server) represent server computers, including domain controllers, in the configuration directory partition. When you install Active Directory, the installation process creates a server object in the Servers container within the site to which the IP address of the domain controller maps. There is one server object for each domain controller in the site. A server object is distinct from the computer object that represents the computer as a security principal. These objects are in separate directory partitions and have separate globally unique identifiers (GUIDs). The computer object represents the domain controller in the domain directory partition; the server object represents the domain controller in the configuration directory partition. The server object contains a reference to the associated computer object.

The server object for the first domain controller in the forest is created in the Default-First-Site-Name site. When you install Active Directory on subsequent servers, if no other sites are defined, server objects are created in Default-First-Site-Name. If other sites have been defined and subnet objects have been associated with these sites, server objects are created as follows:

• If additional sites have been defined in Active Directory and the IP address of the installation computer matches an existing subnet in a defined site, the domain controller is added to that site.
• If additional sites have been defined in Active Directory and the new domain controller's IP address does not match an existing subnet in one of the defined sites, the new domain controller's server object is created in the site of the source domain controller from which the new domain controller receives its initial replication.

When Active Directory is removed from a server, its NTDS Settings object is deleted from Active Directory, but its server object remains because the server object might contain objects other than NTDS Settings. For example, when Microsoft Operations Manager or Message Queuing is running on a domain controller, these applications create child objects beneath the server object.

NTDS Settings Objects

The NTDS Settings object (class nTDSDSA) represents an instantiation of Active Directory on that server and distinguishes a domain controller from other types of servers in the site or from decommissioned domain controllers. For a specific server object, the NTDS Settings object contains the individual connection objects that represent the inbound connections from other domain controllers in the forest that are currently available to send changes to this domain controller.

Note

The NTDS Settings object should not be manually deleted.

The hasMasterNCs multivalued attribute (where “NC” stands for “naming context,” a synonym for “directory partition”) of an NTDS Settings object contains the distinguished names for the set of writable (non-global-catalog) directory partitions that are located on that domain controller, as follows:

• CN=Configuration,DC=ForestRootDomainName
• DC=DomainName,DC=ForestRootDomainName
• CN=Schema,CN=Configuration,DC=ForestRootDomainName

The msDS-HasMasterNCs attribute is new in Windows Server 2003, and this attribute of the NTDS Settings object contains values for the above-named directory partitions as well as any application directory partitions that are stored by the domain controller. Therefore, on domain controllers that are DNS servers and use Active Directory–integrated DNS zones, the following values appear in addition to the default directory partitions:

• DC=ForestDNSZones,DC=ForestRootDomainName (domain controllers in the forest root domain only)
• DC=DomainDNSZones,DC=DomainName,DC=ForestRootDomainName (all domain controllers)

Applications that need to retrieve the list of all directory partitions that are hosted by a domain controller can be updated or written to use the msDS-HasMasterNCs attribute. Applications that need to retrieve only domain directory partitions can continue to use the hasMasterNCs attribute. For more information about these attributes, see Active Directory in the Microsoft Platform SDK on MSDN.
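Both attributes can be read from the NTDS Settings objects with any LDAP client. A sketch using the third-party ldap3 package; the server name, credentials, and forest DN are placeholders:

    from ldap3 import Server, Connection, SUBTREE  # third-party package: ldap3

    server = Server("dc01.corp.example.com")  # hypothetical domain controller
    conn = Connection(server, user="CORP\\admin", password="secret", auto_bind=True)

    conn.search(
        search_base="CN=Sites,CN=Configuration,DC=corp,DC=example,DC=com",
        search_filter="(objectClass=nTDSDSA)",  # the NTDS Settings objects
        search_scope=SUBTREE,
        attributes=["hasMasterNCs", "msDS-HasMasterNCs"],
    )
    for entry in conn.entries:
        print(entry.entry_dn)
        print("  ", entry["msDS-HasMasterNCs"])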

Connection Objects

A connection object (class nTDSConnection) defines a one-way, inbound route from one domain controller (the source) to the domain controller that stores the connection object (the destination). The KCC uses information in cross-reference objects to create the appropriate connection objects, which enable domain controllers that store the same directory partitions to replicate with each other. The KCC creates connections for every server object in the Sites container that has an NTDS Settings object. The connection object is a child of the replication destination’s NTDS Settings object, and the connection object references the replication source domain controller in the fromServer attribute on the connection object — that is, it represents the inbound half of a connection. The connection object contains a replication schedule and specifies a replication transport. The connection object schedule is derived from the site link schedule for intersite connections. For more information about intersite connection schedules, see “Connection Object Schedule” later in this section. A connection is unidirectional; a bidirectional replication connection is represented as two inbound connection objects. The KCC creates one connection object under the NTDS Settings object of each server that is used as an endpoint for the connection. Connection objects are created in two ways:

• Automatically by the KCC.
• Manually by a directory administrator by using Active Directory Sites and Services, ADSI Edit, or scripts.

Intersite connection objects are created by the KCC that has the role of intersite topology generator (ISTG) in the site. One domain controller in each site has this role, and the ISTG role owners in all sites use the same algorithm to collectively generate the intersite replication topology.

Ownership of Connection Objects

Connections that are created automatically by the KCC are “owned” by the KCC. If you create a new connection manually, the connection is not owned by the KCC. If a connection object is not owned by the KCC, the KCC does not modify it or delete it.

Note

One exception to this modification rule is that the KCC automatically changes the transport type of an administrator-owned connection if the transportType attribute is set incorrectly (see “Transport Type” later in this section).

However, if you modify a connection object that is owned by the KCC (for example, you change the connection object schedule), the ownership of the connection depends on the application that you use to make the change:

• If you use an LDAP editor such as Ldp.exe or Adsiedit.msc to change a connection object property, the KCC reverses the change the next time it runs.
• If you use Active Directory Sites and Services to change a connection object property, the object is changed from automatic to manual and the KCC no longer owns it.

The UI indicates the ownership status of each connection object. In most Active Directory deployments, manual connection objects are not needed. If you create a connection object, it remains until you delete it, but the KCC will automatically delete duplicate KCC-owned objects if they exist and will continue to create needed connections. Ownership of a connection object does not affect security access to the object; it determines only whether the KCC can modify or delete the object.

Note

If you create a new connection object that duplicates one that the KCC has already created, your duplicate object is created and the KCC-created object is deleted by the KCC the next time it runs.

ISTG and Modified Connections

Because connection objects are stored in the configuration directory partition, it is possible for an intersite connection object to be modified by an administrator on one domain controller and, prior to replication of the initial change being received, to be modified by the KCC on another domain controller. Overwriting such a change can occur within the local site or when a connection object changes in a remote site. By default, the KCC runs every 15 minutes. If the administrative connection object change is not received by the destination domain controller before the ISTG in the destination site runs, the ISTG in the destination site might modify the same connection object. In this case, ownership of the connection object belongs to the KCC because the latest write to the connection object is the write that is applied.

Manual Connection Objects

The KCC is designed to produce a replication topology that provides low replication latency, that adapts to failures, and that does not need modification. It is usually not necessary to create connection objects when the KCC is being used to generate automatic connections. The KCC automatically reconfigures connections as conditions change. Adding manual connections when the KCC is employed potentially increases replication traffic by adding redundant connections to the optimal set chosen by the KCC. When manually generated connections exist, the KCC uses them wherever possible. Adding extra connections does not necessarily reduce replication latency. Within a site, latency issues are usually related to factors other than the replication topology that is generated by the KCC. Factors that affect latency include the following:

• Interruption of the service of key domain controllers, such as the primary domain controller (PDC) emulator, global catalog servers, or bridgehead servers.
• Domain controllers that are too busy to replicate in a timely manner (too few domain controllers).
• Network connectivity issues.
• DNS server problems.
• Inordinate amounts of directory updates.

For problems such as these, creating a manual connection does not improve replication latency. Adjusting the scheduling and costs that are assigned to the site link is the best way to influence intersite topology.

Site Link Objects

For a connection object to be created on a destination domain controller in one site that specifies a source domain controller in another site, you must manually create a site link object (class siteLink) that connects the two sites. Site link objects identify the transport protocol and scheduling required to replicate between two or more sites. You can use Active Directory Sites and Services to create the site links. The KCC uses the information stored in the properties of these site links to create the intersite topology connections. A site link is associated with a network transport by creating the site link object in the appropriate transport container (either IP or SMTP). All intersite domain replication must use IP site links. The Simple Mail Transfer Protocol (SMTP) transport can be used for replication between sites that contain domain controllers that do not host any common domain directory partition replicas.

Site Link Properties

A site link specifies the following:

• Two or more sites that are permitted to replicate with each other.
• An administrator-defined cost value associated with that replication path. The cost value controls the route that replication takes, and thus the remote sites that are used as sources of replication information.
• A schedule during which replication is permitted to occur.
• An interval that determines how frequently replication occurs over this site link during the times when the schedule allows replication.

For more information about site link properties, see “Site Link Settings and Their Effects on Intersite Replication” later in this section.

Default Site Link

When you install Active Directory on the first domain controller in the forest, an object named DEFAULTIPSITELINK is created in the Sites container (in the IP container within the Inter-Site Transports container). This site link contains only one site, Default-First-Site-Name.

Site Link Bridging

By default, site links for the same IP transport that have sites in common are bridged, which enables the KCC to treat the set of associated site links as a single route. If you categorically do not want the KCC to consider some routes, or if your network is not fully routed, you can disable automatic bridging of all site links. When this bridging is disabled, you can create site link bridge objects and manually add site links to a bridge. For more information about using site link bridges, see “Bridging Site Links Manually” later in this section.

NTDS Site Settings Object

NTDS Site Settings objects (class nTDSSiteSettings) identify site-wide settings in Active Directory. There is one NTDS Site Settings object per site in the Sites container. NTDS Site Settings attributes control the following features and conditions:

• The identity of the ISTG role owner for the site. The KCC on this domain controller is responsible for identifying bridgehead servers. For more information about this role, see “Automated Intersite Topology Generation” later in this section.
• Whether domain controllers in the site cache membership of universal groups, and the site in which to find a global catalog server for creating the cache.
• The default schedule that applies to connection objects. For more information about this schedule, see “Connection Object Schedule” later in this section.

Note

To allow for the possibility of network failure, which might cause one or more notifications to be missed, a default schedule of once per hour is applied to replication within a site. You do not need to manage this schedule.

Cross-Reference Objects

Cross-reference objects (class crossRef) store the location of directory partitions in the Partitions container (CN=Partitions,CN=Configuration,DC=ForestRootDomainName). The contents of the Partitions container are not visible by using Active Directory Sites and Services, but can be viewed by using Adsiedit.msc to view the configuration directory partition. Active Directory replication uses cross-reference objects to locate the domain controllers that store each directory partition. A cross-reference object is created during Active Directory installation to identify each new directory partition that is added to the forest. Cross-reference objects store the identity (nCName, the distinguished name of the directory partition, where “NC” stands for “naming context,” a synonym for “directory partition”) and location (dNSRoot, the DNS domain where servers that store the particular directory partition can be reached) of each directory partition.

Note

In Windows Server 2003 Active Directory, a special attribute of the cross-reference object, msDS-NC-Replica-Locations, identifies application directory partitions to the replication system. For more information about how application directory partitions are replicated, see “Topology Generation Phases” later in this section.


Replication Transports

Replication transports provide the wire protocols that are required for data transfer. There are three levels of connectivity for replication of Active Directory information:

• Uniform high-speed, synchronous RPC over IP within a site.
• Point-to-point, synchronous, low-speed RPC over IP between sites.
• Low-speed, asynchronous SMTP between sites.

The following rules apply to the replication transports:

• Replication within a site always uses RPC over IP.
• Replication between sites can use either RPC over IP or SMTP over IP.
• Replication between sites over SMTP is supported only for domain controllers of different domains. Domain controllers of the same domain must replicate by using the RPC over IP transport. Therefore, replication between sites over SMTP is supported only for schema, configuration, and global catalog replication, which means that domains can span sites only when point-to-point, synchronous RPC is available between sites.

The Inter-Site Transports container provides the means for mapping site links to the transport that the link uses. When you create a site link object, you create it in either the IP container (which associates the site link with the RPC over IP transport) or the SMTP container (which associates the site link with the SMTP transport). For the IP transport, a typical site link connects only two sites and corresponds to an actual WAN link. An IP site link connecting more than two sites might correspond to an asynchronous transfer mode (ATM) backbone that connects, for example, more than two clusters of buildings on a large campus or connects several offices in a large metropolitan area that are connected by leased lines and IP routers.

Synchronous and Asynchronous Communication

The RPC intersite and intrasite transport (RPC over IP within sites and between sites) and the SMTP intersite transport (SMTP over IP between sites only) correspond to synchronous and asynchronous communication methods, respectively. Synchronous communication favors fast, available connections, while asynchronous communication is better suited for slow or intermittent connections.

Synchronous Replication Over IP

The IP transport (RPC over IP) provides synchronous inbound replication. In the context of Active Directory replication, synchronous communication implies that after the destination domain controller sends the request for data, it waits for the source domain controller to receive the request, construct the reply, and send the reply before it requests changes from any other domain controllers; that is, inbound replication is sequential. Thus, in synchronous transmission, the reply is received within a short time. The IP transport is appropriate for linking sites in fully routed networks.

Asynchronous Replication Over SMTP

The SMTP transport (SMTP over IP) provides asynchronous replication. In asynchronous replication, the destination domain controller does not wait for the reply, and it can have multiple asynchronous requests outstanding at any particular time. Thus, in asynchronous transmission, the reply is not necessarily received within a short time. Asynchronous transport is appropriate for linking sites in networks that are not fully routed and have particularly slow WAN links.

Note

Although asynchronous replication can send multiple replication requests in parallel, the received replication packets are queued on the destination domain controller, and the changes are applied for only one partner and directory partition at a time.

Replication Queue

Suppose a domain controller has five inbound replication connections. As the domain controller formulates change requests, either by a schedule being reached or from a notification, it adds a work item for each request to the end of the queue of pending synchronization requests. Each pending synchronization request represents one <source domain controller, directory partition> pair, such as “synchronize the schema directory partition from DC1,” or “delete the ApplicationX directory partition.” When a work item has been received into the queue, notification and polling intervals do not apply — the domain controller processes the item (begins synchronizing from that source) as soon as the item reaches the front of the queue, and continues until either the destination is fully synchronized with the source domain controller, an error occurs, or the synchronization is preempted by a higher-priority operation.
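The queue behavior described above reads naturally as a simple FIFO of (source domain controller, directory partition) work items, processed strictly one at a time. A minimal sketch; the domain controller and partition names are hypothetical, and error handling and preemption are omitted:

    from collections import deque

    # Pending synchronization requests, one per <source DC, partition> pair.
    queue = deque()
    queue.append(("DC1", "CN=Schema,CN=Configuration,DC=corp,DC=example,DC=com"))
    queue.append(("DC2", "DC=corp,DC=example,DC=com"))
    queue.append(("DC3", "DC=corp,DC=example,DC=com"))

    # Once an item reaches the front of the queue, it runs to completion;
    # notification and polling intervals no longer apply at this point.
    while queue:
        source, partition = queue.popleft()
        print(f"synchronizing {partition} from {source}")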

SMTP Intersite Replication

When sites are on opposite ends of a WAN link (or the Internet), it is not always desirable — or even possible — to perform synchronous, RPC-based directory replication. In some cases, the only method of communication between two sites is e-mail. When connectivity is intermittent or when end-to-end IP connectivity is not available (an intermediate site does not support RPC/IP replication), replication must be possible across asynchronous, store-and-forward transports such as SMTP. In addition, where bandwidth is limited, it can be disadvantageous to force an entire replication cycle of request for changes and transfer of changes between two domain controllers to complete before another can begin (that is, to use synchronous replication). With SMTP, several cycles can be processing simultaneously so that each cycle is being processed to some degree most of the time, as opposed to receiving no attention for prolonged periods, which can result in RPC time-outs.

For intersite replication, SMTP replication substitutes mail messaging for the RPC transport. The message syntax is the same as for RPC-based replication. There is no change notification for SMTP-based replication, and scheduling information for the site link object is used as follows:

• By default, SMTP replication ignores the Replication Available and Replication Not Available settings on the site link schedule in Active Directory Sites and Services (the information that indicates when these sites are connected). Replication occurs according to the messaging system schedule.
• Within the scope of the messaging system schedule, SMTP replication uses the replication interval that is set on the SMTP site link to indicate how often the server requests changes. The interval (Replicate every ____ minutes) is set in 15-minute increments on the General tab in site link Properties in Active Directory Sites and Services.

The underlying SMTP messaging system is responsible for message routing between SMTP servers.

SMTP Replication and Intersite Messaging

Intersite Messaging is a Windows 2000 Server and Windows Server 2003 component that is enabled when Active Directory is installed. Intersite Messaging allows for multiple transports to be used as add-ins to the Intersite Messaging architecture. Intersite Messaging enables messaging communication that can use SMTP servers other than those that are dedicated to processing e-mail applications. When the forest has a functional level of Windows 2000, Intersite Messaging also provides services to the KCC in the form of querying the available replication paths. In addition, Net Logon queries the connectivity data in Intersite Messaging when calculating site coverage. By default, Intersite Messaging rebuilds its database once a day, or when required by a site link change. When the forest has a functional level of Windows Server 2003, the KCC does not use Intersite Messaging for calculating the topology. However, regardless of forest functional level, Intersite Messaging is still required for SMTP replication, DFS, universal group membership caching, and Net Logon automatic site coverage calculations. Therefore, if any of these features are in use, do not stop Intersite Messaging. For more information about site coverage and how automatic site coverage is calculated, see “How DNS Support for Active Directory Works.” For more information about DFS, see “DFS Technical Reference.”

Requirements for SMTP Replication

The KCC does not create connections that use SMTP until the following requirements are met:

• Internet Information Services (IIS) is installed on both bridgehead servers.
• An enterprise certification authority (CA) is installed and configured on your network. The certification authority signs and encrypts SMTP messages that are exchanged between domain controllers, ensuring the authenticity of directory updates. Specifically, a domain controller certificate must be present on the replicating domain controllers. The replication request message, which contains no directory data, is not encrypted. The replication reply message, which does contain directory data, is encrypted using a key length of 128 bits.
• The sites are connected by SMTP site links.
• The site link path between the sites has a lower cost than any IP/RPC site link that can reach the SMTP site.
• Each domain controller is configured to receive mail.
• You are not attempting to replicate writable replicas of the same domain (although replication of global catalog partial replicas is supported).

You must also determine whether mail routing is necessary. If the two replicating domain controllers have direct IP connectivity and can send mail to each other, no further configuration is required. However, if the two domain controllers must go through mail gateways to deliver mail to each other, you must configure the domain controllers to use the mail gateway.

Note

RPC is required for replicating the domain to a new domain controller and for installing certificates. If RPC is not available to the remote site, the domain must be replicated and certificates must be installed over RPC in a hub site and the domain controller then shipped to the remote site.

Comparison of SMTP and RPC Replication

The following characteristics apply to both SMTP and RPC with respect to Active Directory replication:

• For replication between sites, data that is replicated through either transport is compressed.
• Active Directory can respond with only a fixed (maximum) number of changes per change request, based on the size of the replication packet. The size of the replication packet is configurable. For information about configuring the replication packet size, see “Replication Packet Size” later in this section.
• Active Directory can apply a single set of changes at a time for a specific directory partition and replication partner.
• The response data (changes) are transported in one or many frames, based on the total number of changed or new values.
• TCP transports the data portion by using the same algorithm for both SMTP and RPC.
• If transmission of the data portion fails, complete retransmission is necessary.

Point-to-point synchronous RPC replication is available between sites to allow the flexibility of having domains that span multiple sites. RPC is best used between sites that are connected by WAN links because it involves lower latency. SMTP is best used between sites where RPC over IP is not possible. For example, SMTP can be used by companies that have a network backbone that is not based on TCP/IP, such as companies that use an X.400 backbone. Active Directory replication uses both transports to implement a request-response mechanism. Active Directory issues requests for changes and replies to requests for changes. RPC maps these requests into RPC requests and RPC replies. SMTP, on the other hand, actually uses long-lived TCP connections (or X.400-based message transfer agents in non-TCP/IP networks) to deliver streams of mail in each direction. Thus, the RPC transport expects a response to any request immediately and can have a maximum of one active inbound RPC connection to a directory partition replica at a time. The SMTP transport expects much longer delays between a request and a response. As a result, multiple inbound SMTP connections to a directory partition replica can be active at the same time, provided the requests are all for a different source domain controller or, for the same source domain controller, a different directory partition. For more information, see “Synchronous and Asynchronous Communication” earlier in this section.

Replication Packet Size

Replication packet sizes are computed on the basis of memory size unless you have more than 1 gigabyte (GB) of RAM. By default, the system limits the packet size as follows:

• The packet size in bytes is 1/100th the size of RAM, with a minimum of 1 MB and a maximum of 10 MB.
• The packet size in objects is 1/1,000,000th the size of RAM, with a minimum of 100 objects and a maximum of 1,000 objects. For general estimates when this entry is not set, assume an approximate packet size of 100 objects.

There is one exception: the value of the Replicator async inter site packet size (bytes) registry entry is always 1 MB if it is not set (that is, when the default value is in effect). Many mail systems limit the amount of data that can be sent in a mail message (2 MB to 4 MB is common), although most Windows-based mail systems can handle large 10-MB mail messages. Overriding these memory-based values might be beneficial in advanced bandwidth management scenarios. You can edit the registry to set the maximum packet size.

Note

If you must edit the registry, use extreme caution. Registry information is provided here as a reference for use by only highly skilled directory service administrators. It is recommended that you do not directly edit the registry unless, as in this case, there is no Group Policy or other Windows tool to accomplish the task. Modifications to the registry are not validated by the registry editor or by Windows before they are applied, and as a result, incorrect values can be stored. Storage of incorrect values can result in unrecoverable errors in the system.

Setting the maximum packet size requires adding or modifying entries in the following registry path with the REG_DWORD data type: HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\NTDS\Parameters. These entries determine the maximum number of objects per packet and the maximum size of the packets. The minimum values are indicated as the lowest value in the range.

For RPC replication within a site:

• Replicator intra site packet size (objects). Range: >=2
• Replicator intra site packet size (bytes). Range: >=10 KB

For RPC replication between sites:

• Replicator inter site packet size (objects). Range: >=2
• Replicator inter site packet size (bytes). Range: >=10 KB

For SMTP replication between sites:

• Replicator async inter site packet size (objects). Range: >=2
• Replicator async inter site packet size (bytes). Range: >=10 KB
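As a rough sketch (the function name is illustrative, not an API), the memory-based defaults described at the start of this section reduce to two clamped divisions:

    def default_packet_limits(ram_bytes):
        mb = 1024 * 1024
        # 1/100th of RAM in bytes, clamped to the 1 MB - 10 MB range.
        size_bytes = min(max(ram_bytes // 100, 1 * mb), 10 * mb)
        # 1/1,000,000th of RAM in objects, clamped to 100 - 1,000 objects.
        size_objects = min(max(ram_bytes // 1_000_000, 100), 1_000)
        return size_bytes, size_objects

    # A domain controller with 512 MB of RAM:
    size_bytes, size_objects = default_packet_limits(512 * 1024 * 1024)
    print(size_bytes, "bytes and", size_objects, "objects per packet")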

Transport Type

The transportType attribute of a connection object specifies which network transport is used when the connection is used for replication. The transport type receives its value from the distinguished name of the container in the configuration directory partition that contains the site link over which the connection occurs, as follows:

• Connection objects that use TCP/IP have the transportType value of CN=IP,CN=Inter-Site Transports,CN=Sites,CN=Configuration,DC=ForestRootDomainName.
• Connection objects that use SMTP/IP have the transportType value of CN=SMTP,CN=Inter-Site Transports,CN=Sites,CN=Configuration,DC=ForestRootDomainName.
• For intrasite connections, transportType has no value; Active Directory Sites and Services shows the transport of “RPC” for connections that are from servers in the same site.

If you move a domain controller to a different site, the connection objects from servers in the site from which it was moved remain, but the transport type is blank because it was an intrasite connection. Because the connection has an endpoint outside of the site, the local KCC in the server’s new site does not manage the connection. When the ISTG runs, if a blank transport type is found for a connection that is from a server in a different site, the transportType value is automatically changed to IP. The ISTG in the site determines whether to delete the connection object or to retain it, in which case the server becomes a bridgehead server in its new site.


Replication Between Sites

Replication between sites transfers domain updates when domain controllers for a domain are located in more than one site. Intersite replication of configuration and schema changes is always required when more than one site is configured in a forest. Replication between sites is accomplished by bridgehead servers, which replicate changes according to site link settings.

Bridgehead Servers

When domain controllers for the same domain are located in different sites, at least one bridgehead server per directory partition and per transport (IP or SMTP) replicates changes from one site to a bridgehead server in another site. A single bridgehead server can serve multiple partitions per transport and multiple transports. Replication within the site allows updates to flow between the bridgehead servers and the other domain controllers in the site. Bridgehead servers help to ensure that the data replicated across WAN links is not stale or redundant. Any server that has a connection object with a “from” server in another site is acting as a destination bridgehead. Any server that is acting as a source for a connection to another site acts as a source bridgehead.

Note

You can identify a KCC-selected bridgehead server in Active Directory Sites and Services by viewing connection objects for the server (select the NTDS Settings object below the server object); if there are connections from servers in a different site or sites, the server represented by the selected NTDS Settings object is a bridgehead server. If you have the Windows Support Tools installed, you can see all bridgehead servers by using the command repadmin /bridgeheads.

KCC selection of bridgehead servers guarantees bridgehead servers that are capable of replicating all directory partitions that are needed in the site, including partial global catalog partitions. By default, bridgehead servers are selected automatically by the KCC on the domain controller that holds the ISTG role in each site. If you want to identify the domain controllers that can act as bridgehead servers, you can designate preferred bridgehead servers, from which the ISTG selects all bridgehead servers. Alternatively, if the ISTG is not used to generate the intersite topology, you can create manual intersite connection objects on domain controllers to designate bridgehead servers.

In sites that have at least one domain controller that is running Windows Server 2003, the ISTG can select bridgehead servers from all eligible domain controllers for each directory partition that is represented in the site. For example, if three domain controllers in a site store replicas of the same domain and domain controllers for this domain are also located in three or more other sites, the ISTG can spread the inbound connection objects from those sites among all three domain controllers, including those that are running Windows 2000 Server. In Windows 2000 forests, a single bridgehead server per directory partition and per transport is designated as the bridgehead server that is responsible for intersite replication of that directory partition. Therefore, for the preceding example, only one of the three domain controllers would be designated by the ISTG as a bridgehead server for the domain, and all four connection objects from the four other sites would be created on the single bridgehead server. In large hub sites, a single domain controller might not be able to adequately respond to the volume of replication requests from perhaps thousands of branch sites. For more information about how the KCC selects bridgehead servers in Windows Server 2003, see “Bridgehead Server Selection” later in this section.
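The repadmin check mentioned in the note above can also be scripted. A sketch that simply shells out to the command (assumes the Windows Support Tools are installed and the script runs with sufficient rights):

    import subprocess

    # Ask repadmin for the bridgehead servers the KCC has selected.
    result = subprocess.run(
        ["repadmin", "/bridgeheads"],
        capture_output=True, text=True, check=False,
    )
    print(result.stdout or result.stderr)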

Compression of Replication Data

Intersite replication is compressed by default. Compressing replication data allows the data to be transferred over WAN links more quickly, thereby conserving network bandwidth. The cost of this benefit is an increase in CPU utilization on bridgehead servers. By default, replication data is compressed under the following conditions:

• Replication of updates between domain controllers in different sites.
• Replication of Active Directory to a newly created domain controller.

A new compression algorithm is employed by bridgehead servers that are running Windows Server 2003. The new algorithm improves replication speed by operating between two and ten times faster than the Windows 2000 Server algorithm.

Windows 2000 Server Compression

The compression algorithm that is used by domain controllers that are running Windows 2000 Server achieves a compression ratio of approximately 75% to 85%. The cost of this compression in terms of CPU utilization can be as high as 50% for intersite Active Directory replication. In some cases, the CPUs on bridgehead servers that are running Windows 2000 Server can become overwhelmed with compression requests, compounded by the need to service outbound replication partners. In a worst-case scenario, the bridgehead server becomes so overloaded that it cannot keep up with outbound replication. This scenario is usually coupled with a replication topology issue where a domain controller has more outbound partners than necessary or the replication schedule is overly aggressive for the number of direct replication partners.

Note

If a bridgehead server has too many replication partners, the KCC logs event ID 1870 in the Directory Service log, indicating the current number of partners and the recommended number of partners for the domain controller.

Windows Server 2003 Compression

On domain controllers that are running Windows Server 2003, compression quality is comparable to Windows 2000 but the processing burden is greatly decreased. The Windows Server 2003 algorithm produces a compression ratio of approximately 60%, which is slightly less compression than is achieved by the Windows 2000 Server algorithm, but which significantly reduces the processing load on bridgehead servers. The new compression algorithm provides a good compromise by significantly reducing the CPU load on bridgehead servers, while only slightly increasing the WAN traffic. The new algorithm reduces the time taken by compression from approximately 60% of replication time to 20%. The Windows Server 2003 compression algorithm is used only when both bridgehead servers are running Windows Server 2003. If a bridgehead server that is running Windows Server 2003 replicates with a bridgehead server that is running Windows 2000 Server, then the Windows 2000 compression algorithm is used.

Reverting to Windows 2000 Compression

For slow WAN links (for example, 64 Kbps or less), if more compression is preferable to a decrease in computation time, you can change the compression algorithm to the Windows 2000 algorithm. The compression algorithm is controlled by the REG_DWORD registry entry Replicator compression algorithm in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters. By editing this registry entry, you can change the algorithm that is used for compression to the Windows 2000 algorithm.

Note
If you must edit the registry, use extreme caution. Registry information is provided here as a reference for use by only highly skilled directory service administrators. It is recommended that you do not directly edit the registry unless, as in this case, there is no Group Policy or other Windows tool to accomplish the task. Modifications to the registry are not validated by the registry editor or by Windows before they are applied, and as a result, incorrect values can be stored. Storage of incorrect values can result in unrecoverable errors in the system.

The default value is 3, which indicates that the Windows Server 2003 algorithm is in effect. By changing the value to 2, you cause the Windows 2000 algorithm to be used for compression. However, switching to the Windows 2000 algorithm is not recommended unless both bridgehead domain controllers serve relatively few branches and have ample CPU capacity (for example, dual 850-megahertz [MHz] processors or better).
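
As an illustration only, the following Python sketch uses the standard winreg module to set this entry to 2 (the registry path and value name are those given above; administrative rights on the bridgehead server are assumed):

    # Sketch: switch intersite replication compression to the Windows 2000
    # algorithm by setting the registry value described above to 2.
    # Assumes administrative rights on the bridgehead server.
    import winreg

    KEY_PATH = r"SYSTEM\CurrentControlSet\Services\NTDS\Parameters"
    VALUE_NAME = "Replicator compression algorithm"

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
        # 3 = Windows Server 2003 algorithm (default); 2 = Windows 2000 algorithm
        winreg.SetValueEx(key, VALUE_NAME, 0, winreg.REG_DWORD, 2)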

Site Link Settings and Their Effects on Intersite Replication

In Active Directory Sites and Services, the General tab of the site link Properties contains the following options for configuring site links to control the replication topology:

• A list of two or more sites to be connected.
• A single numeric cost that is associated with communication over the link. The default cost is 100, but you can assign higher cost values to represent more expensive transmission. For example, sites that are connected by low-speed or dial-up connections would have high-cost site links between them. Sites that are well connected through backbone lines would have low-cost site links. Where multiple routes or transports exist between two sites, the least expensive route and transport combination is used.
• A schedule that determines days and hours during which replication can occur over the link (the link is available). For example, you might use the default (100 percent available) schedule on most links, but block replication traffic during peak business hours on links to certain branches. By blocking replication, you give priority to other traffic, but you also increase replication latency.

Note
Scheduling information is ignored by site links that use SMTP transports; the mail is stockpiled and then exchanged at the times that are configured for your mail infrastructure.

• An interval in minutes that determines how often replication can occur (the default is every 180 minutes, or 3 hours). The minimum interval is 15 minutes. If the interval exceeds the time allowed by the schedule, replication occurs once at the scheduled time.

A site can be connected to other sites by any number of site links. For example, a hub site has site links to each of its branch sites. Each site that contains a domain controller in a multisite directory must be connected to at least one other site by at least one site link; otherwise, it cannot replicate with domain controllers in any other site.

The following diagram shows two sites that are connected by a site link. Domain controllers DC1 and DC2 belong to the same domain and are acting as partner bridgehead servers. When topology generation occurs, the ISTG in each site creates an inbound connection object on the bridgehead server in its site from the bridgehead server in the opposite site. With these objects in place, replication can occur according to the settings on the SB site link.

Connections Between Domain Controllers in Two Sites that Are Connected by a Site Link

Site Link Cost

The ISTG uses the cost settings on site links to determine the route of replication between three or more sites that replicate the same directory partition. The default cost value on a site link object is 100. You can assign lower cost values to site links to favor inexpensive connections, and higher cost values to disfavor expensive connections. Certain applications and services, such as the domain controller Locator and DFS, also use site link cost information to locate nearest resources. For example, site link cost can be used to determine which domain controller is contacted by clients located in a site that does not include a domain controller for the specified domain. The client contacts the domain controller in a different site according to the site link that has the lowest cost assigned to it.

Cost is usually assigned not only on the basis of the total bandwidth of the link, but also on the availability, latency, and monetary cost of the link. For example, a 128-kilobits per second (Kbps) permanent link might be assigned a lower cost than a dial-up 128-Kbps dual ISDN link, because the dial-up ISDN link incurs latency-producing delay while the connection is being established or taken down. Furthermore, in this example, the permanent link might have a fixed monthly cost, whereas the ISDN line is charged according to actual usage. Because the company is paying up front for the permanent link, the administrator might assign a lower cost to the permanent link to avoid the extra monetary cost of the ISDN connections.

The method used by the ISTG to determine the least-cost path from each site to every other site for each directory partition is more efficient when the forest has a functional level of Windows Server 2003 than it is at other levels. For more information about how the KCC computes replication routes, see “Automated Intersite Topology Generation” later in this section. For more information about domain controller location, see “How DNS Support for Active Directory Works.”

Transitivity and Automatic Site Link Bridging

By default, site links are transitive, or “bridged.” If site A has a common site link with site B, site B also has a common site link with site C, and the two site links are bridged, domain controllers in site A can replicate directly with domain controllers in site C under certain conditions, even though there is no site link between site A and site C. In other words, the effect of bridged site links is that replication between sites in the bridge is transitive.

The setting that implements automatic site link bridges is Bridge all site links, which is found in Active Directory Sites and Services in the properties of the IP or SMTP intersite transport containers. The default bridging of site links occurs automatically, and no directory object represents the default bridge. Therefore, in the common case of a fully routed IP network, you do not need to create any site link bridge objects.

Transitivity and Rerouting

For a set of bridged site links, where replication schedules in the respective site links overlap (replication is available on the site links during the same time period), connection objects can be automatically created, if needed, between sites that do not have site links that connect them directly. All site links for a specific transport implicitly belong to a single site link bridge for that transport. Site link transitivity enables the KCC to reroute replication when necessary.

In the next diagram, a domain controller that can replicate the domain is not available in Seattle. In this case, because the site links are transitive (bridged) and the schedules on the two site links allow replication at the same time, the KCC can reroute replication by creating connections between DC3 in Portland and DC2 in Boston. Connections between domain controllers in Portland and Boston might also be created when a domain controller in Portland is a global catalog server, but no global catalog server exists in the Seattle site and the Boston site hosts a domain that is not present in the Seattle site. In this case, connections can be created between Portland and Boston to replicate the global catalog partial, read-only replica.

Note
Overlapping schedules are required for site link transitivity, even when Bridge all site links is enabled. In the example, if the site link schedules for SB and PS do not overlap, no connections are possible between Boston and Portland.

Transitive Replication when Site Links Are Bridged, Schedules Overlap, and Replication Must Be Rerouted

In the preceding diagram, creating a third site link to connect the Boston and Portland sites is unnecessary and counterproductive because of the way that the KCC uses cost to route replication. In the configuration that is shown, the KCC uses cost to choose either the route between Portland and Seattle or the route between Portland and Boston. If you wanted the KCC to use the route between Portland and Boston, you would create a site link between Portland and Boston instead of the site link between Portland and Seattle.

Aggregated Site Link Cost and Routing

When site links are bridged, the cost of replication from a domain controller at one end of the bridge to a domain controller at the other end is the sum of the costs on each of the intervening site links. For this reason, if a domain controller in an interim site stores the directory partition that is being replicated, the KCC routes replication to the domain controller in the interim site rather than to the more distant site. The domain controller in the more distant site in turn receives replication from the interim site (store-and-forward replication). If the schedules of the two site links overlap, this replication occurs in the same period of replication latency.

The following diagram illustrates an example where two site links connecting three sites that host the same domain are bridged automatically (Bridge all site links is enabled). The aggregated cost of directly replicating between Portland and Boston illustrates why the KCC routes replication from Portland to Seattle and from Seattle to Boston in a store-and-forward manner. Given the choice between replicating at a cost of 4 from Seattle or a cost of 7 from Boston, the ISTG in Portland chooses the lower cost and creates the connection object on DC3 from DC1 in Seattle.

Bridged Site Links Routing Replication Between Three Sites According to Cost

In the preceding diagram, if DC3 in Portland needs to replicate a directory partition that is hosted on DC2 in Boston but not by any domain controller in Seattle, or if the directory partition is hosted in Seattle but the Seattle site cannot be reached, the ISTG creates the connection object from DC2 to DC3.
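
To make the route selection concrete, the following sketch applies Dijkstra’s shortest-path algorithm to the site-link costs from this example (it is an illustration of least-cost routing, not the actual KCC implementation; the costs of 4 and 3 are taken from the diagram):

    # Sketch: least-cost routing over bridged site links.
    import heapq

    # Site links as an undirected weighted graph: site -> [(neighbor, cost), ...]
    site_links = {
        "Portland": [("Seattle", 4)],
        "Seattle": [("Portland", 4), ("Boston", 3)],
        "Boston": [("Seattle", 3)],
    }

    def least_cost_paths(source):
        """Return the minimum aggregated site-link cost from source to every site."""
        costs = {source: 0}
        queue = [(0, source)]
        while queue:
            cost, site = heapq.heappop(queue)
            if cost > costs.get(site, float("inf")):
                continue
            for neighbor, link_cost in site_links[site]:
                new_cost = cost + link_cost
                if new_cost < costs.get(neighbor, float("inf")):
                    costs[neighbor] = new_cost
                    heapq.heappush(queue, (new_cost, neighbor))
        return costs

    print(least_cost_paths("Portland"))
    # {'Portland': 0, 'Seattle': 4, 'Boston': 7} -- Boston is reached through
    # Seattle at an aggregated cost of 4 + 3 = 7, as in the example.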

Significance of Overlapping Schedules

In the preceding diagram, to replicate the same domain that is hosted in all three sites, the Portland site replicates directly with Seattle and Seattle replicates directly with Boston, transferring Portland’s changes to Boston, and vice versa, through store-and-forward replication. Whether the schedules overlap has the following effects:

• PS and SB site link schedules have replication available during at least one common hour of the schedule:
  • Replication between these two sites occurs in the same period of replication latency, being routed through Seattle.
  • If Seattle is unavailable, connections can be created between Portland and Boston.
• PS and SB site link schedules have no common time:
  • Replication of changes between Portland and Boston reaches its destination in the next period of replication latency after reaching Seattle.
  • If Seattle is unavailable, no connections are possible between Portland and Boston.

Note
If Bridge all site links is disabled, a connection is never created between Boston and Portland, regardless of schedule overlap, unless you manually create a site link bridge.

Site Link Changes and Replication Path

The path that replication takes between sites is computed from the information that is stored in the properties of the site link objects. When a change is made to a site link setting, the following events must occur before the change takes effect:

• The site link change must replicate to the ISTG of each site by using the previous replication topology.
• The KCC must run on each ISTG.

As the path of connections is transitively figured through a set of site links, the attributes (settings) of the site link objects are combined along the path as follows:

• Costs are added together.
• The replication interval is the maximum of the intervals that are set for the site links along the path.
• The options, if any are set, are computed by using the AND operation.

Note
Options are the values of the options attribute on the site link object. The value of this attribute determines special behavior of the site link, such as reciprocal replication and intersite change notification.

Thus, the site link schedule is the overlap of all of the schedules of the subpaths. If none of the schedules overlap, the path is not used.
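
As a rough illustration of these combination rules, the following sketch merges the attributes of the site links along a path (SiteLink here is a hypothetical structure, not the directory schema; schedules are modeled as sets of available hours):

    # Sketch: combining site-link attributes along a transitive path.
    from dataclasses import dataclass

    @dataclass
    class SiteLink:
        cost: int       # numeric cost of the link
        interval: int   # replication interval in minutes
        options: int    # bit flags from the options attribute
        hours: set      # hours of the day during which the link is available

    def combine_path(links):
        """Sum costs, take the maximum interval, AND the options,
        and intersect the schedules of the site links along a path."""
        cost = sum(link.cost for link in links)
        interval = max(link.interval for link in links)
        options = links[0].options
        hours = set(links[0].hours)
        for link in links[1:]:
            options &= link.options
            hours &= link.hours
        if not hours:
            return None  # no schedule overlap: the path is not used
        return SiteLink(cost, interval, options, hours)

    # Example: two links whose schedules overlap for a single hour.
    ab = SiteLink(cost=4, interval=30, options=0, hours=set(range(18, 20)))
    bc = SiteLink(cost=3, interval=60, options=0, hours=set(range(19, 23)))
    print(combine_path([ab, bc]))
    # SiteLink(cost=7, interval=60, options=0, hours={19})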

Bridging Site Links Manually

If your IP network is composed of IP segments that are not fully routed, you can disable Bridge all site links for the IP transport. In this case, all IP site links are considered nontransitive, and you can create and configure site link bridge objects to model the actual routing behavior of your network. A site link bridge has the effect of providing routing for a disjoint network (networks that are separate and unaware of each other). When you add site links to a site link bridge, all site links within the bridge can route transitively. A site link bridge object represents a set of site links, all of whose sites can communicate through some transport.

Site link bridges are necessary if both of the following conditions are true:

• A site contains a domain controller that hosts a domain directory partition that is not hosted by a domain controller in an adjacent site (a site that is in the same site link).
• That domain directory partition is hosted on a domain controller in at least one other site in the forest.

Note
Site link bridge objects are used by the KCC only when the Bridge all site links setting is disabled. Otherwise, site link bridge objects are ignored.

Site link bridges can also be used to diminish the potentially high CPU overhead of generating a large transitive replication topology. In very large networks, transitive site links can be an issue because the KCC considers every possible connection in the bridged network and selects only one. Therefore, in a Windows 2000 forest that has a very large network, or in a Windows Server 2003 forest that consists of an extremely large hub-and-spoke topology, you can reduce KCC-related CPU utilization and run time by turning off Bridge all site links and creating manual site link bridges only where they are required.

Note
Turning off Bridge all site links might affect the ability of DFS clients to locate DFS servers in the closest site. An ISTG that is running Windows Server 2003 relies on the Bridge all site links setting being turned on to generate the intersite cost matrix that DFS requires for its site-costing functionality. An ISTG running Windows Server 2003 with Service Pack 1 (SP1) can accommodate the DFS requirements with Bridge all site links turned off. For more information about turning off this functionality while accommodating DFS, see "DFS Site Costing and Windows Server 2003 SP1 Site Options" later in this section. For more information about site link cost and DFS, see “DFS Technical Reference.”

You create a site link bridge object for a specific transport by specifying two or more site links for the specified transport.

Requirements for manual site link bridges

Each site link in a manual site link bridge must have at least one site in common with another site link in the bridge. Otherwise, the bridge cannot compute the cost from sites in one link to the sites in other links of the bridge. If bridgehead servers that are capable of the transport that is used by the site link bridge are not available in two linked sites, a route is not available.

Manual site link bridge behavior

Separate site link bridges, even for the same transport, are independent. To illustrate this independence, consider the following conditions:

• Four sites have domain controllers for the same domain: Portland, Seattle, Detroit, and Boston.
• Three site links are configured: Portland-Seattle (PS), Seattle-Detroit (SD), and Detroit-Boston (DB).
• Two separate manual site link bridges link the outer site links PS and DB with the inner site link SD.

The presence of the PS-SD site link bridge means that an IP message can be sent transitively from the Portland site to the Detroit site with cost 4 + 3 = 7. The presence of the SD-DB site link bridge means that an IP message can be sent transitively from Seattle to Boston at a cost of 3 + 2 = 5. However, because there is no transitivity between the PS-SD and SD-DB site link bridges, an IP message cannot be sent between Portland and Boston with cost 4 + 3 + 2 = 9, or at any cost. In the following diagram, the two manual site link bridges mean that Boston is able to replicate directly only with Detroit and Seattle, and Portland is able to replicate directly only with Seattle and Detroit.



Note
If you need direct replication between Portland and Boston, you can create a single PS-SD-DB site link bridge. Conversely, excluding a site link from a bridge ensures that connections across the omitted path are neither created nor considered by the KCC.

Two Site Link Bridges that Are Not Transitive

In the diagram, connection objects are not possible between DC4 in Boston and DC3 in Portland because the two site link bridges are not transitive. For connection objects to be possible between DC3 and DC4, the site link DB must be added to the PS-SD site link bridge. In this case, the cost of replication between DC3 and DC4 is 9.

Note
Cost is applied differently to a site link bridge than to a site link that contains more than two sites. To use the preceding example, if Seattle, Boston, and Portland are all in the same site link, the cost of replication between any two of the sites is the same.

Bridging site links manually is generally recommended for only large branch office deployments. For more information about using manual site link bridging, see the “Windows Server 2003 Active Directory Branch Office Deployment Guide.”

Site Link Schedule

Replication using the RPC transport between sites is scheduled. The schedule specifies one or more time periods during which replication can occur. For example, you might schedule a site link for a dial-up line to be available during off-peak hours (when telephone rates are low) and unavailable during high-cost regular business hours. The schedule attribute of the site link object specifies the availability of the site link. The default setting is that replication is always available.

Note
The Ignore schedules setting on the IP container is equivalent to replication being always available. If Ignore schedules is selected, replication occurs at the designated intervals but ignores any schedule.

If replication goes through multiple site links, there must be at least one common time period (overlap) during which replication is available; otherwise, the connection is treated as not available. For example, if site link AB has a schedule of 18:00 hours to 24:00 hours and site link BC has a schedule of 17:00 hours to 20:00 hours, the resulting overlap is 18:00 hours through 20:00 hours, which is the intersection of the schedules for site link AB and site link BC. During the time in which the schedules overlap, replication can occur from site A to site C even if a domain controller in the intermediate site B is not available. If the schedules do not overlap, replication from the intermediate site to the distant site continues when the next replication schedule opens on the respective site link.



Note
Cost considerations also affect whether connections are created. However, if the site link schedules do not overlap, the cost is irrelevant.

Scheduling across time zones

When scheduling replication across time zones, consider the time difference to ensure that replication does not interfere with peak production times in the destination site. Domain controllers store time in Coordinated Universal Time (UTC). When viewed through the Active Directory Sites and Services snap-in, time settings in site link object schedules are displayed according to the local time of the computer on which the snap-in is being run. However, replication occurs according to UTC.

For example, suppose Seattle adheres to Pacific Standard Time (PST) and Japan adheres to Japan Standard Time (JST), which is 17 hours later. If a schedule is set on a domain controller in Seattle and the site link on which the schedule is set connects Seattle and Tokyo, the actual time of replication in Tokyo is 17 hours later. If the schedule is set to begin replication at 10:00 PM PST in Seattle, the conversion can be computed as follows:

• Convert 10:00 PM PST to 22:00 PST military time.
• Add 8 hours to arrive at 06:00 UTC, the following day.
• Add 9 hours to arrive at 15:00 JST.
• 15:00 JST converts to 3:00 PM.

Thus, when replication begins at 10 o’clock at night in Seattle, it is occurring in Tokyo at 3 o’clock in the afternoon the following day. By scheduling replication a few hours later in Seattle, you can avoid replication occurring during working hours in Japan.
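
The same conversion can be checked with a few lines of Python (a sketch using fixed UTC offsets for PST and JST and ignoring daylight saving time):

    # Sketch: converting a scheduled replication time from PST to JST.
    from datetime import datetime, timedelta, timezone

    PST = timezone(timedelta(hours=-8), name="PST")  # UTC-8
    JST = timezone(timedelta(hours=9), name="JST")   # UTC+9

    # Replication scheduled for 10:00 PM PST on an arbitrary date.
    scheduled = datetime(2003, 3, 28, 22, 0, tzinfo=PST)

    print(scheduled.astimezone(timezone.utc))  # 2003-03-29 06:00:00+00:00
    print(scheduled.astimezone(JST))           # 2003-03-29 15:00:00+09:00, 3:00 PM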

Schedule implementation

The times that you can set in the Schedule setting on the site link are in one-hour increments. For example, you can schedule replication to occur between 00:00 hours and 01:00 hours, between 01:00 hours and 02:00 hours, and so forth. However, each block in the actual connection schedule is 15 minutes. For this reason, when you set a schedule of 01:00 hours to 02:00 hours, you can assume that replication is queued at some point between 01:00 hours and 01:14:59 hours.

Note
RPC synchronous inbound replication is serialized so that if the server is busy replicating this directory partition from another source, replication from a different source does not begin until the first synchronization is complete. SMTP asynchronous replication is processed serially by order of arrival, with multiple replication requests queued simultaneously.

Specifically, a replication event is queued at time t + n, where t is each start time that results from applying the replication interval across the schedule, and n is a pseudo-random delay of 1 through 15 minutes. For example, if the site link indicates that replication can occur from 02:00 hours through 07:00 hours, and the replication interval is 2 hours (120 minutes), t is 02:00 hours, 04:00 hours, and 06:00 hours. A replication event is queued on the destination domain controller between 02:00 hours and 02:14:59 hours, and another replication event is queued between 04:00 hours and 04:14:59 hours. Assuming that the first replication event that was queued is complete, another replication event is queued between 06:00 hours and 06:14:59 hours. If the synchronization took longer than two hours, the second synchronization would be ignored because an event is already in the queue. Replication can extend beyond the end of the schedule. A period of replication latency that starts before the end of the schedule runs until completion, even if the period is still running when the schedule no longer allows replication to be available.
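
The queuing rule can be sketched as follows (a simplified model using the 02:00-07:00 window and 120-minute interval from the example; queue_times is a hypothetical helper, not an actual system API):

    # Sketch: queuing replication events at t + n, where t steps through the
    # schedule window by the replication interval and n is a pseudo-random
    # delay of 1 through 15 minutes.
    import random
    from datetime import timedelta

    def queue_times(window_start_hour, window_end_hour, interval_minutes):
        """Yield the approximate queue time of each replication event."""
        t = timedelta(hours=window_start_hour)
        end = timedelta(hours=window_end_hour)
        while t < end:
            n = timedelta(minutes=random.randint(1, 15))
            yield t + n
            t += timedelta(minutes=interval_minutes)

    # Schedule of 02:00-07:00 hours with a 2-hour interval: events are
    # queued shortly after 02:00, 04:00, and 06:00.
    for event in queue_times(2, 7, 120):
        print(event)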



Note
The replication queue is shared with other events, and the time at which replication takes place is approximate. Duplicate replication events are not queued for the same directory partition and transport.

Connection object schedule

Each connection object has a schedule that controls when (during what hours) and how frequently (how many times per hour) replication can occur:

• None (no replication)
• Once per hour (the default setting)
• Twice per hour
• Four times per hour

The connection object schedule and interval are derived from one of two locations, depending on whether it is an intrasite or intersite connection:

• Intrasite connections inherit a default schedule from the schedule attribute of the NTDS Site Settings object. By default, this schedule is always available and has an interval of one hour.
• Intersite connections inherit the schedule and interval from the site link.

Although intrasite replication is prompted by changes, intrasite connection objects inherit a default schedule so that replication occurs periodically, regardless of whether change notification has been received. The connection object schedule ensures that intrasite replication occurs if a notification message is lost or if notification does not take place because of network problems or an unavailable domain controller. The NTDS Site Settings schedule has a minimum replication interval of 15 minutes. This minimum replication interval is not configurable and determines the smallest interval that is possible for both intrasite and intersite replication (on a connection object or a site link, respectively).

For intersite replication, the schedule is configured on the site link object, but the connection object schedule actually determines replication; that is, the connection object schedule for an intersite connection is derived from the site link schedule, and the site link settings take effect through the connection object schedule. Scheduled replication occurs independently of change notification.

Note
You do not need to configure the connection object schedule unless you are creating a manual intersite replication topology that does not use the KCC automatic connection objects.

The KCC uses a two-step process to compute the schedule of an intersite connection:

1. The schedules of the site links traversed by a connection are merged together.
2. This merged schedule is modified so that it is available at only certain periods. The length of those periods is equal to the maximum replication interval of the site links traversed by this connection.

By using Active Directory Sites and Services, you can manually revise the schedule on a connection object, but such an override is effective for only administrator-owned connection objects.

Replication Interval

For each site link object, you can specify a value for the replication interval (frequency), which determines how often replication occurs over the site link during the time that the schedule allows. For example, if the schedule allows replication between 02:00 hours and 04:00 hours, and the replication interval is set for 30 minutes, replication can occur up to four times during the scheduled time. The default replication interval is 180 minutes, or 3 hours. When the KCC creates a connection between a domain controller in one site and a domain controller in another site, the replication interval of the connection is the maximum interval along the minimum-cost path of site link objects from one end of the connection to the other.

Interaction of Replication Schedule and Interval

When multiple site links are required to complete replication for all sites, the replication interval settings on each site link combine to affect the entire length of the connection between sites. In addition, when schedules on each site link are not identical, replication can occur only when the schedules overlap. Suppose that site A and site B have site link AB, and site B and site C have site link BC. When a domain controller in site A replicates with a domain controller in site C, it can do so only as often as the maximum interval that is set for site link AB and site link BC allows. The following table shows the site link settings that determine how often and during what times replication can occur between domain controllers in site A, site B, and site C.

Replication Interval and Schedule Settings for Two Site Links

Site Link   Replication Interval   Schedule
AB          30 minutes             12:00 hours to 04:00 hours
BC          60 minutes             01:00 hours to 05:00 hours

Given these settings, a domain controller in site A can replicate with a domain controller in site B according to the AB site link schedule and interval, which is once every 30 minutes between the hours of 12:00 and 04:00. However, assuming that there is no site link AC, a domain controller in site A can replicate with a domain controller in site C between the hours of 01:00 and 04:00, which is where the schedules on the two site links intersect. Within that timespan, they can replicate once every 60 minutes, which is the greater of the two replication intervals.
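
As a quick check of this combination, the following sketch computes the effective replication window (the schedule intersection) and the effective interval (the maximum) for the AB and BC settings in the table; hours are modeled as a set, with the AB schedule wrapping past midnight:

    # Sketch: effective schedule and interval for replication from site A
    # to site C across site links AB and BC.
    def hour_range(start, end):
        """Hours in [start, end), wrapping past midnight when start > end."""
        if start <= end:
            return set(range(start, end))
        return set(range(start, 24)) | set(range(0, end))

    ab_hours, ab_interval = hour_range(12, 4), 30  # 12:00-04:00, every 30 minutes
    bc_hours, bc_interval = hour_range(1, 5), 60   # 01:00-05:00, every 60 minutes

    print(sorted(ab_hours & bc_hours))    # [1, 2, 3] -- window of 01:00-04:00
    print(max(ab_interval, bc_interval))  # 60 -- once every 60 minutes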

KCC and Topology Generation

The Knowledge Consistency Checker (KCC) is a dynamic-link library (DLL) that runs as a distributed application on every domain controller. The KCC on each domain controller modifies data in its local instance of the directory in response to forest-wide changes, which are made known to the KCC by changes to data in the configuration directory partition. The KCC generates and maintains the replication topology for replication within sites and between sites by converting KCC-defined and administrator-defined (if any) connection objects into a configuration that is understood by the directory replication engine. By default, the KCC reviews and makes modifications to the Active Directory replication topology every 15 minutes to ensure propagation of data, either directly or transitively, by creating and deleting connection objects as needed. The KCC recognizes changes that occur in the environment and ensures that domain controllers are not orphaned in the replication topology.

Operating independently, the KCC on each domain controller uses its own view of the local replica of the configuration directory partition to arrive at the same intrasite topology. One KCC per site, the ISTG, determines the intersite replication topology for the site. Like the KCC that runs on each domain controller within a site, the instances of the ISTG in different sites do not communicate with each other. They independently use the same algorithm to produce a consistent, well-formed spanning tree of connections. Each site constructs its own part of the tree and, when all have run, a working replication topology exists across the enterprise. The predictability of all KCCs allows scalability by reducing communication requirements between KCC instances. All KCCs agree on where connections will be formed, ensuring that redundant replication does not occur and that all parts of the enterprise are connected.

The KCC performs two major functions:

• Configures appropriate replication connections (connection objects) on the basis of the existing cross-reference, server, NTDS settings, site, site link, and site link bridge objects and the current status of replication.
• Converts the connection objects that represent inbound replication to the local domain controller into the replication agreements that are actually used by the replication engine. These agreements, called replica links, accommodate replication of a single directory partition from the source to the destination domain controller.

Intervals at Which the KCC Runs

By default, the KCC runs its first replication topology check five minutes after the domain controller starts. The domain controller then attempts initial replication with its intrasite replication partners. If a domain controller is being used for multiple other services, such as DNS, WINS, or DHCP, extending the replication topology check interval can ensure that all services have started before the KCC begins using CPU resources. You can edit the registry to modify the interval between startup and the time the domain controller first checks the replication topology.

Note
If you must edit the registry, use extreme caution. Registry information is provided here as a reference for use by only highly skilled directory service administrators. It is recommended that you do not directly edit the registry unless, as in this case, there is no Group Policy or other Windows tool to accomplish the task. Modifications to the registry are not validated by the registry editor or by Windows before they are applied, and as a result, incorrect values can be stored. Storage of incorrect values can result in unrecoverable errors in the system.

Modifying the interval between startup and the time the domain controller first checks the replication topology requires changing the Repl topology update delay (secs) entry in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters as appropriate:

• Value: Number of seconds to wait between the time Active Directory starts and the KCC runs for the first time
• Default: 300 seconds (5 minutes)
• Data type: REG_DWORD

Thereafter, as long as services are running, the KCC on each domain controller checks the replication topology every 15 minutes and makes changes as necessary. Modifying the interval at which the KCC performs the topology review requires changing the Repl topology update period (secs) entry in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters as appropriate:

• Value: Number of seconds between KCC topology updates
• Default: 900 seconds (15 minutes)
• Data type: REG_DWORD
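
Subject to the registry-editing caution above, a minimal Python sketch that adjusts both intervals might look like this (the value names and defaults are those given in the text; administrative rights are assumed):

    # Sketch: adjust when the KCC first runs after startup and how often it
    # re-checks the topology. Values are in seconds.
    import winreg

    KEY_PATH = r"SYSTEM\CurrentControlSet\Services\NTDS\Parameters"

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
        # Wait 10 minutes after startup before the first topology check
        # (default is 300 seconds).
        winreg.SetValueEx(key, "Repl topology update delay (secs)", 0,
                          winreg.REG_DWORD, 600)
        # Keep the periodic topology check at the 15-minute default.
        winreg.SetValueEx(key, "Repl topology update period (secs)", 0,
                          winreg.REG_DWORD, 900)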

Objects that the KCC Requires to Build the Replication Topology

The following objects, which are stored in the configuration directory partition, provide the information required by the KCC to create the replication topology:

• Cross-reference. Each directory partition in the forest is identified in the Partitions container by a cross-reference object. The attributes of this object are used by the replication system to locate the domain controllers that store each directory partition.
• Server. Each domain controller in the forest is identified as a server object in the Sites container.
• NTDS Settings. Each server object that represents a domain controller has a child NTDS Settings object. Its presence identifies the server as having Active Directory installed. The NTDS Settings object must be present for the server to be considered by the KCC for inclusion in the replication topology.
• Site. The presence of the above objects also indicates to the KCC the site in which each domain controller is located for replication. For example, the distinguished name of the NTDS Settings object contains the name of the site in which the server object that represents the domain controller exists.
• Site link. A site link must be available between any set of sites, and its schedule and cost properties are evaluated for routing decisions.
• Site link bridge. If they exist, site link bridge objects and their properties are evaluated for routing decisions.

If the domain controller is physically located in one site but its server object is configured in a different site, the domain controller will attempt intrasite replication with a replication partner that is in the site of its server object. In this scenario, the improper configuration of servers in sites can affect network bandwidth. If a site object exists for a site that has no domain controllers, the KCC does not consider the site when generating the replication topology.

Topology Generation Phases

The KCC generates the replication topology in two phases:

• Evaluation. During the evaluation phase, the KCC evaluates the current topology, determines whether replication failures have occurred with the existing connections, and constructs whatever new connection objects are required to complete the replication topology.
• Translation. During the translation phase, the KCC implements, or “translates,” the decisions that were made during the evaluation phase into agreements between the replication partners. During this phase, the KCC writes to the repsFrom attribute on the local domain controller (for intrasite topology) or on all bridgehead servers in a site (for intersite topology) to identify the replication partners from which each domain controller pulls replication. For more information about the information in the replication agreement, see “How the Active Directory Replication Model Works.”
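
To illustrate the translation phase, the following sketch turns inbound connection objects into per-partition replica links, loosely mirroring the role of the repsFrom attribute (the data structures are hypothetical, not the directory schema):

    # Sketch: translating connection objects into replica links. Each inbound
    # connection yields one entry per directory partition that the source and
    # destination domain controllers have in common.
    partitions_held = {
        "DC1": {"Schema", "Configuration", "DomainA"},
        "DC2": {"Schema", "Configuration", "DomainA"},
        "DC3": {"Schema", "Configuration", "DomainB"},
    }

    # Inbound connections on the local domain controller: (source, destination).
    connections = [("DC2", "DC1"), ("DC3", "DC1")]

    reps_from = {}
    for source, destination in connections:
        common = partitions_held[source] & partitions_held[destination]
        for partition in common:
            reps_from.setdefault((destination, partition), []).append(source)

    for (destination, partition), sources in sorted(reps_from.items()):
        print(f"{destination} pulls {partition} from {', '.join(sources)}")
    # DC1 pulls Configuration from DC2, DC3
    # DC1 pulls DomainA from DC2
    # DC1 pulls Schema from DC2, DC3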

KCC Modes and Scopes

Because individual KCCs do not communicate directly to generate the replication topology, topology generation occurs within the scope of either a single domain controller or a single site. In performing the two topology generation phases, the KCC has three modes of operation. The following list identifies the performing domain controllers, the scope, and the behavior for each mode.

Modes and Scopes of KCC Topology Generation

• Intrasite. Performing domain controllers: all. Scope: local server. Evaluates all servers in a site and creates connection objects locally on this server from servers in the same site that are adjacent to this server in the ring topology.
• Intersite. Performing domain controllers: the one domain controller per site that has the ISTG role. Scope: local site. Evaluates the servers in all sites and creates connection objects both locally and on other servers in the site from servers in different sites.
• Link translation. Performing domain controllers: all. Scope: local server. Translates connection objects into replica links (partnerships) for each server relative to each directory partition that it holds.

Topology Evaluation and Connection Object Generation

The KCC on a destination domain controller evaluates the topology by reading the existing connection objects. For each connection object, the KCC reads attribute values of the NTDS Settings object (class nTDSDSA) of the source domain controller (indicated by the fromServer value on the connection object) to determine what directory partitions the destination domain controller has in common with the source domain controller.

Topology evaluation for all domain controllers

To determine the connection objects that need to be generated, the KCC uses information stored in the attributes of the NTDS Settings object that is associated with each server object, as follows:

• For all directory partitions, the multivalued attribute hasMasterNCs stores the distinguished names of all directory partitions that are stored on that domain controller.
• For all domain controllers, the value of the options attribute indicates whether that domain controller is configured to host the global catalog.
• The hasPartialReplicaNCs attribute contains the set of partial-replica directory partitions (global catalog read-only domain partitions) that are located on the domain controller that is represented by the server object.

Topology evaluation for domain controllers running Windows Server 2003

For all domain controllers that are running Windows Server 2003, the msDS-HasDomainNCs attribute of the NTDS Settings object contains the name of the domain directory partition that is hosted by the domain controller. In forests that have the forest functional level of Windows Server 2003 or Windows Server 2003 interim, the following additional information is used by the KCC to evaluate the topology for application directory partitions and to generate the needed connections:

• The linked multivalued attribute msDS-NC-Replica-Locations on cross-reference objects stores the distinguished names of NTDS Settings objects for all domain controllers that are configured to host a replica of the corresponding application directory partition.

Note
When you remove Active Directory from a server that hosts an application directory partition, its corresponding entry in this multivalued attribute is automatically dropped because msDS-NC-Replica-Locations is a linked attribute.

• Application directory partition replica locations are determined by matching the values of the hasMasterNCs attribute with the values of the msDS-NC-Replica-Locations linked multivalued attribute of cross-reference objects.
• The msDS-NC-Replica-Locations attribute holds distinguished name references to the NTDS Settings objects for domain controllers that have been configured to store replicas of the application directory partition. This attribute facilitates the enumeration of existing replicas for a given application directory partition.

Connection objects can then be created between the domain controllers that hold matching replicas. Be aware that, due to replication latency, the configuration of replicas in attribute values does not guarantee the existence of the replica on a given server. For example, you can designate a domain controller as a global catalog server by clicking the Global Catalog check box on the NTDS Settings object properties in Active Directory Sites and Services. However, until all of the partial domain directory partitions have replicated to that domain controller and global-catalog-specific SRV records are registered in DNS, it is not a functioning global catalog server (it does not advertise as a global catalog server in DNS). Similarly, observing the NTDS Settings name for a server in the msDS-NC-Replica-Locations attribute on the cross-reference object does not indicate that the replica has necessarily been fully replicated to that server.

Connection Translation

All KCCs process their connection objects and translate them into connection agreements, also called “replica links,” between pairs of domain controllers. At specified intervals, Active Directory replicates data from the source domain controller to the destination for directory partitions that they have in common. These replication agreements do not appear in the administrative tools; the replication engine uses them internally to track the directory partitions that are to be replicated from specified servers.

For each directory partition that two domain controllers have in common and that matches the full and partial characteristics of a replication source, the KCC creates (or updates) a replication agreement on the destination domain controller. Replication agreements take the form of entries for each source domain controller in the repsFrom attribute on the topmost object of each directory partition replica. This value is stored and updated locally on the domain controller and is not replicated. The KCC updates this attribute each time it runs.

For example, suppose a connection object is created between two domain controllers from different domains. Assuming that neither of these domain controllers is a global catalog server and neither stores an application directory partition, the KCC identifies the only two directory partitions that the domain controllers have in common: the schema directory partition and the configuration directory partition. If a connection object links domain controllers in the same domain, at least three directory partitions are replicated: the schema directory partition, the configuration directory partition, and the domain directory partition.

In contrast, if the connection object that is created establishes replication between two domain controllers that are global catalog servers, then in addition to the directory partitions the domain controllers have in common, a partial replica of each additional domain directory partition in the forest is also replicated between the two domain controllers over the same connection. For more information about replication agreements, see “How the Active Directory Replication Model Works.”

Read-only and Writable Replicas

When computing the replication topology, the KCC must consider whether a replica is writable or read-only. For each potential set of replication partners in the topology, the considerations are as follows:

• A writable replica can receive updates from a corresponding writable replica.
• A read-only replica can receive updates from a corresponding writable replica.
• A read-only replica can receive updates from a corresponding read-only replica.
• A writable replica cannot receive updates from a corresponding read-only replica.
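
These four rules reduce to a single condition: a destination can receive updates from a source unless the source is read-only and the destination is writable. A one-function sketch:

    # Sketch: the replica writability rules listed above.
    def can_replicate_from(source_writable: bool, destination_writable: bool) -> bool:
        return not (destination_writable and not source_writable)

    assert can_replicate_from(source_writable=True, destination_writable=True)
    assert can_replicate_from(source_writable=True, destination_writable=False)
    assert can_replicate_from(source_writable=False, destination_writable=False)
    assert not can_replicate_from(source_writable=False, destination_writable=True)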

In Windows 2000 forests, for any one domain directory partition, the KCC calculates two topologies: one for the writable replicas and one for the read-only replicas. This calculation allows redundant connections for read-only replicas under certain conditions. The improved Windows Server 2003 KCC spanning tree algorithm eliminates redundancy that can occur in Windows 2000. The Windows Server 2003 algorithm computes only one topology with slightly different behavior for replicating the global catalog. The KCC on a domain controller that is not a global catalog server does not consider global catalog servers in its calculations for read-only domain replicas because it never replicates read-only data from a global catalog server.

Automated Intrasite Topology Generation

For replication within a site, a topology is generated and then optimized so that no domain controller is more than three hops from any other. The means by which the three-hop maximum is achieved varies according to the number of domain controllers that are hosted in the site, as well as the presence of global catalog servers. Generally, the intrasite topology is formed in a ring. The topology becomes more complex as the number of servers increases. However, the KCC can accommodate thousands of domain controllers in a site.

Simplified Ring Topology Generation

A simplified process for creating the topology for replication within a site begins as follows:

• The KCC generates a list of all servers in the site that hold the directory partition.
• These servers are connected in a ring.
• For each neighboring server in the ring from which the current domain controller is to replicate, the KCC creates a connection object if one does not already exist.

This simple approach guarantees a topology that tolerates a single failure. If a domain controller is not available, it is not included in the ring that is generated by the list of servers. However, this topology, with no other adjustments, accommodates only seven servers. Beyond this number, the ring would require more than three hops for some servers.

In the simplest case (seven or fewer domain controllers, all in the same domain and site), the result is the replication topology shown in the following diagram. The only directory partitions to replicate are a single domain directory partition, the schema directory partition, and the configuration directory partition. Those topologies are generated first, and at that point, sufficient connections to replicate each directory partition have already been created. In the next series of diagrams, the arrows indicate one-way or two-way replication of the type of directory partitions indicated in the legend.

Simple Ring Topology that Requires No Optimization

Because a ring topology is created for each directory partition, the topology might look different if domain controllers from a second domain were present in the site. The next diagram illustrates the topology for domain controllers from two domains in the same site with no global catalog servers defined in the site.

Ring Topology for Two Domains in a Site that Has No Global Catalog Server

The next diagram illustrates replication between a global catalog server and three domains to which the global catalog server does not belong. When a global catalog server is added to the site in DomainA, additional connections are required to replicate updates of the other domain directory partitions to the global catalog server. The KCC on the global catalog server creates connection objects to replicate from domain controllers for each of the other domain directory partitions within the site, or from another global catalog server, to update the read-only partitions. Wherever a domain directory partition is replicated, the KCC also uses the connection to replicate the schema and configuration directory partitions.

Note
Connection objects are generated independently for the configuration and schema directory partitions (one connection) and for the separate domain and application directory partitions, unless a connection from the same source to destination domain controllers already exists for one directory partition. In that case, the same connection is used for all (duplicate connections are not created).

Intrasite Topology for Site with Four Domains and a Global Catalog Server

Expanded Ring Topology Within a Site

When the number of servers in a site grows beyond seven, the KCC estimates the number of connections that are needed so that if a change occurs at any one domain controller, there are as many replication partners as needed to ensure that no domain controller is more than three replication hops from another domain controller (that is, a change takes no more than three hops before it reaches another domain controller that has not already received the change by another path). These optimizing connections are created at random and are not necessarily created on every third domain controller. The KCC adds connections automatically to optimize a ring topology within a site, as follows (see the sketch after the diagram caption below):

• Given a set of nodes in a ring, the KCC computes the minimum number of connections, n, that each server must have to ensure a path of no more than three hops to another server.
• Given n, if the local server does not have n extra connections, the KCC does the following:
  • Chooses n other servers randomly in the site as source servers.
  • For each of those servers, creates a connection object.

This approach approximates the three-hop goal. In addition, it scales well because, as the site grows in server count, old optimizing connections are still useful and are not removed. Also, every time an additional 9 to 11 servers are added, a connection object is deleted at random and a new one is created, ideally having one of the new servers as its source. This process ensures that, over time, the additional connections are distributed well over the entire site.

The following diagram shows an intrasite ring topology with optimizing connections in a site that has eight domain controllers in the same domain. Without optimizing connections, the hop count from DC1 to DC2 is more than three hops. The KCC creates optimizing connections to limit the hop count to three hops. The two one-way inbound optimizing connections accommodate all directory partitions that are replicated between the two domain controllers.

Intrasite Topology with Optimizing Connections
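
The ring-plus-optimizing-connections idea can be sketched as follows (a simplified illustration, not the KCC’s actual partner-selection code; connections are treated as two-way, and optimizing connections are added at random until the three-hop goal is met):

    # Sketch: a ring of domain controllers plus random optimizing connections.
    import random
    from collections import deque

    def max_hops(edges, nodes):
        """Longest shortest path (in hops) between any two nodes."""
        worst = 0
        for start in nodes:
            seen = {start: 0}
            queue = deque([start])
            while queue:
                node = queue.popleft()
                for neighbor in edges[node]:
                    if neighbor not in seen:
                        seen[neighbor] = seen[node] + 1
                        queue.append(neighbor)
            worst = max(worst, max(seen.values()))
        return worst

    servers = [f"DC{i}" for i in range(1, 9)]    # eight domain controllers
    edges = {s: set() for s in servers}
    for i, server in enumerate(servers):         # connect the ring
        nxt = servers[(i + 1) % len(servers)]
        edges[server].add(nxt)
        edges[nxt].add(server)

    while max_hops(edges, servers) > 3:          # add optimizing connections
        a, b = random.sample(servers, 2)
        edges[a].add(b)
        edges[b].add(a)

    print(max_hops(edges, servers))              # 3 or fewer hops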

Excluded Nonresponding Servers

The KCC automatically rebuilds the replication topology when it recognizes that a domain controller has failed or is unresponsive. The criteria that the KCC uses to determine when a domain controller is not responsive depend on whether the server computer is within the site or not. Two thresholds must be reached before a domain controller is declared “unavailable” by the KCC:

• The requesting domain controller must have made n attempts to replicate from the target domain controller.
  • For replication between sites, the default value of n is 1 attempt.
  • For replication within a site, the following distinctions are made between the two immediate neighbors (in the ring) and the optimizing connections:
    • For immediate neighbors, the default value of n is 0 failed attempts. Thus, as soon as an attempt fails, a new server is tried.
    • For optimizing connections, the default value of n is 1 failed attempt. Thus, as soon as a second failed attempt occurs, a new server is tried.
• A certain amount of time must have passed since the last successful replication attempt.
  • For replication between sites, the default time is 2 hours.
  • For replication within a site, a distinction is made between the two immediate neighbors (in the ring) and the optimizing connections:
    • For immediate neighbors, the default time is 2 hours.
    • For optimizing connections, the default time is 12 hours.

You can edit the registry to modify the thresholds for excluding nonresponding servers.



Note
If you must edit the registry, use extreme caution. Registry information is provided here as a reference for use by only highly skilled directory service administrators. It is recommended that you do not directly edit the registry unless, as in this case, there is no Group Policy or other Windows tool to accomplish the task. Modifications to the registry are not validated by the registry editor or by Windows before they are applied, and as a result, incorrect values can be stored. Storage of incorrect values can result in unrecoverable errors in the system.

Modifying the thresholds for excluding nonresponding servers requires editing the following registry entries in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters, all with the data type REG_DWORD.

For replication between sites, use the following entries:

• IntersiteFailuresAllowed
  Value: Number of failed attempts
  Default: 1
• MaxFailureTimeForIntersiteLink (secs)
  Value: Time that must elapse before the server is considered unavailable, in seconds
  Default: 7200 (2 hours)

For optimizing connections within a site, use the following entries:

• NonCriticalLinkFailuresAllowed
  Value: Number of failed attempts
  Default: 1
• MaxFailureTimeForNonCriticalLink
  Value: Time that must elapse before the server is considered unavailable, in seconds
  Default: 43200 (12 hours)

For immediate neighbor connections within a site, use the following entries:

• CriticalLinkFailuresAllowed
  Value: Number of failed attempts
  Default: 0
• MaxFailureTimeForCriticalLink
  Value: Time that must elapse before the server is considered unavailable, in seconds
  Default: 7200 (2 hours)

When the original domain controller begins responding again, the KCC automatically restores the replication topology to its pre-failure condition the next time that the KCC runs.
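
Putting the two thresholds together, the exclusion test can be sketched as follows (a simplified model using the default values from the registry entries above; the connection-kind labels are hypothetical):

    # Sketch: the two-threshold test for declaring a replication partner
    # unavailable. Both thresholds must be exceeded.
    from datetime import datetime, timedelta

    # (failed attempts allowed, time that must elapse since the last success)
    DEFAULTS = {
        "intersite": (1, timedelta(hours=2)),
        "intrasite_neighbor": (0, timedelta(hours=2)),
        "intrasite_optimizing": (1, timedelta(hours=12)),
    }

    def is_unavailable(kind, failed_attempts, last_success, now=None):
        allowed_failures, max_failure_time = DEFAULTS[kind]
        now = now or datetime.utcnow()
        return (failed_attempts > allowed_failures
                and now - last_success >= max_failure_time)

    # An immediate intrasite neighbor that failed once, three hours ago:
    print(is_unavailable("intrasite_neighbor", failed_attempts=1,
                         last_success=datetime.utcnow() - timedelta(hours=3)))
    # True -- one failure exceeds the zero allowed, and more than 2 hours passed.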

Fully Optimized Ring Topology Generation

Taking the addition of extra connections, the management of nonresponding servers, and the growth-management mechanisms into account, the KCC proceeds to fully optimize intrasite topology generation. The appropriate connection objects are created and deleted according to the available criteria.

Note
Connection objects from nonresponding servers are not deleted because the condition is expected to be transient.

Automated Intersite Topology Generation

To produce a replication topology for hundreds of domains and thousands of sites in a timely manner and without compromising domain controller performance, the KCC must make the best possible decision when confronted with the question of which network link to use to replicate a given directory partition between sites. Ideally, connections occur only between servers that contain the same directory partitions, but when necessary, the KCC can also use network paths that pass through servers that do not store the directory partition.

Intersite topology generation and associated processes are improved in Windows Server 2003 in the following ways:

• Improved scalability: A new spanning tree algorithm achieves greater efficiency and scalability when the forest has a functional level of Windows Server 2003. For more information about this new algorithm, see “Improved KCC Scalability in Windows Server 2003 Forests” later in this section.
• Less network traffic: A new method of communicating the identity of the ISTG reduces the amount of network traffic that is produced by this process. For more information about this method, see “Intersite Topology Generator” later in this section.
• Multiple bridgehead servers per site and domain, and initial bridgehead server load balancing: An improved algorithm provides random selection of multiple bridgehead servers per domain and transport (the Windows 2000 algorithm allows selection of only one). The load among bridgehead servers is balanced the first time connections are generated. For more information about bridgehead server load balancing, see “Windows Server 2003 Multiple Bridgehead Selection” later in this section.

Factors Considered by the KCC

The spanning tree algorithm used by the KCC that is running as the ISTG to create the intersite replication topology determines how to connect all the sites that need to be connected with the minimum number of connections and the least cost. The algorithm must also consider the fact that each domain controller has at least three directory partitions that potentially require synchronization with other sites, that not all domain controllers store the same partitions, and that not all sites host the same domains. The ISTG considers the following factors to arrive at the intersite replication topology:

• Location of domain directory partitions (a replication topology is calculated for each domain).
• Bridgehead server availability in each site (at least one must be available).
• All explicit site links.
• With automatic site link bridging in effect, all implicit paths are considered as a single path with a combined cost.
• With manual site link bridging in effect, only the implicit combined paths of those site links included in the explicit site link bridges are considered.
• With no site link bridging in effect, where the site links represent hops between domain controllers in the same domain, replication flows in a store-and-forward manner through sites.

Improved KCC Scalability in Windows Server 2003 Forests

KCC scalability is greatly improved in Windows Server 2003 forests over its capacity in Windows 2000 forests. Windows 2000 forests scale safely to support 300 sites, whereas Windows Server 2003 forests have been tested to 3,000 sites. This level of scaling is achieved when the forest functional level is Windows Server 2003. At this forest functional level, the method for determining the least-cost path from each site to every other site for each directory partition is significantly more efficient than the method that is used in a Windows 2000 forest or in a Windows Server 2003 forest that has a forest functional level of Windows 2000.

Windows 2000 Spanning Tree Algorithm
The ability of the KCC to generate the intersite topology in Windows 2000 forests is limited by the amount of CPU time and memory that is consumed when the KCC computes the replication topology in large environments that use transitive (bridged) site links. In a Windows 2000 forest, a potential disadvantage of bridging all site links affects only very large networks (generally, greater than 100 sites), where periods of high CPU activity occur every 15 minutes when the KCC runs. By default, the KCC creates a single bridge for the entire network, which generates more routes that must be processed than if automatic site link bridging is not used and manual site link bridges are applied selectively. In a Windows 2000 forest, or in a Windows Server 2003 forest that has a forest functional level of Windows 2000, the KCC compares multiple paths to and from every destination and computes the spanning tree of the least-cost path. The spanning tree algorithm works as follows:



• Computes a cost matrix by identifying each site pair (that is, each pair of bridgehead servers in different sites that store the directory partition) and the cost of the site link connecting each pair.
Note
This matrix is actually computed by Intersite Messaging and used by the KCC.
• By using the costs computed in the matrix, builds a spanning tree between the sites that store the directory partition.
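To make the two steps concrete, the following sketch mimics the shape of the computation in plain Python: a full least-cost matrix over every site pair (the expensive step), followed by a minimum-cost spanning tree over the sites that store the partition. The site names, link costs, and partition placement are hypothetical, and this illustrates the general technique only, not the KCC's actual code.

```python
from itertools import combinations

# Hypothetical sites, site-link costs, and partition placement.
site_links = {("A", "B"): 100, ("B", "C"): 200, ("A", "D"): 300, ("C", "D"): 100}
sites = {"A", "B", "C", "D"}
hosts_partition = ["A", "C", "D"]   # sites that hold replicas of the partition

# Step 1: cost matrix -- least-cost path between every site pair (Floyd-Warshall).
INF = float("inf")
cost = {(s, t): (0 if s == t else INF) for s in sites for t in sites}
for (s, t), c in site_links.items():
    cost[(s, t)] = cost[(t, s)] = min(cost[(s, t)], c)
for k in sites:
    for i in sites:
        for j in sites:
            cost[(i, j)] = min(cost[(i, j)], cost[(i, k)] + cost[(k, j)])

# Step 2: minimum-cost spanning tree (Kruskal) over the sites that store the
# partition, using the matrix entries as edge weights.
parent = {s: s for s in hosts_partition}
def find(s):
    while parent[s] != s:
        s = parent[s]
    return s

tree = []
for a, b in sorted(combinations(hosts_partition, 2), key=lambda e: cost[e]):
    if find(a) != find(b):              # the edge joins two separate components
        parent[find(a)] = find(b)
        tree.append((a, b, cost[(a, b)]))

print(tree)   # e.g. [('C', 'D', 100), ('A', 'C', 300)]
```

Step 1 is what makes the Windows 2000 method expensive: the matrix grows with the square of the number of sites, which is exactly the scaling limit described next.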

This method becomes inefficient when there is a large number of sites.
Note
CPU time and memory are not an issue in a Windows 2000 forest as long as the following criteria apply:
• D is the number of domains in your network.
• S is the number of sites in your network.
• (1 + D) * S^2 <= 100,000
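As a worked example of this criterion, here is a minimal check (illustrative only):

```python
# Check the Windows 2000 KCC sizing criterion: (1 + D) * S^2 <= 100,000.
def within_w2k_kcc_limits(domains: int, sites: int) -> bool:
    return (1 + domains) * sites ** 2 <= 100_000

print(within_w2k_kcc_limits(5, 100))   # (1 + 5) * 100**2 = 60,000  -> True
print(within_w2k_kcc_limits(5, 150))   # (1 + 5) * 150**2 = 135,000 -> False
```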

Windows Server 2003 Spanning Tree Algorithm
A more efficient spanning tree algorithm improves the scalability of replication topology generation in Windows Server 2003 forests. When the forest functional level is either Windows Server 2003 or Windows Server 2003 interim, the improved algorithm takes effect and computes a minimum-cost spanning tree of connections between the sites that host a particular directory partition, but eliminates the inefficient cost matrix. Thus, the KCC directly determines the lowest-cost spanning tree for each directory partition, considering the schema and configuration directory partitions as a single tree. Where the spanning trees overlap, the KCC generates a single connection between domain controllers for replication of all common directory partitions. In a Windows Server 2003 forest, both versions of the KCC spanning tree algorithm are available. The algorithm for Windows 2000 forests is retained for backward compatibility with the Windows 2000 KCC. The two algorithms cannot run simultaneously in the same enterprise.

DFS Site Costing and Windows Server 2003 SP1 Site Options
When the forest functional level is Windows Server 2003 or Windows Server 2003 interim and the ISTG does not use Intersite Messaging to calculate the intersite cost matrix, DFS can still use Intersite Messaging to compute the cost matrix for its site-costing functionality, provided that the Bridge all site links option is not turned off. In branch office deployments, where the large number of sites and site links makes automatic site link bridging too costly in terms of the replication connections that are generated, the Bridge all site links option is usually turned off on the IP container (CN=IP,CN=Inter-Site Transports,CN=Sites,CN=Configuration,DC=ForestRootDomain). In this case, DFS is unable to use Intersite Messaging to calculate site costs.
When the forest functional level is Windows Server 2003 or Windows Server 2003 interim and the ISTG in a site is running Windows Server 2003 with SP1, you can use a site option to turn off automatic site link bridging for KCC operation without hampering the ability of DFS to use Intersite Messaging to calculate the cost matrix. This site option is set by running the command repadmin /siteoptions W2K3_BRIDGES_REQUIRED. The option is applied to the NTDS Site Settings object (CN=NTDS Site Settings,CN=SiteName,CN=Sites,CN=Configuration,DC=ForestRootDomain). When this method is used to disable automatic site link bridging (as opposed to turning off Bridge all site links), default Intersite Messaging options enable the site-costing calculation to occur for DFS.
Note
The site option on the NTDS Site Settings object can be set on any domain controller, but it does not take effect until replication of the change reaches the ISTG role holder for the site.
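If you need to apply the option across many sites, the command can be scripted. The sketch below shells out to repadmin from Python; the /site: argument and the "+" toggle are assumptions based on common repadmin conventions, and "HubSite" is a hypothetical site name, so verify the exact syntax with repadmin /experthelp on your SP1 build.

```python
# Hedged sketch: enable the W2K3_BRIDGES_REQUIRED site option by shelling out
# to repadmin (Windows Server 2003 SP1 Support Tools). The /site: argument and
# the "+" prefix are assumed conventions; "HubSite" is a hypothetical name.
import subprocess

subprocess.run(
    ["repadmin", "/siteoptions", "/site:HubSite", "+W2K3_BRIDGES_REQUIRED"],
    check=True,   # raise CalledProcessError if repadmin reports a failure
)
```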

Intersite Topology Generator
The KCC on the domain controller that has the ISTG role creates the inbound connections on all domain controllers in its site that require replication with domain controllers in other sites. The sum of these connections for all sites in the forest forms the intersite replication topology. A fundamental concept in the generation of the topology within a site is that each server does its part to create a sitewide topology. In a similar manner, the generation of the topology between sites depends on each site doing its part to create a forest-wide topology between sites.

ISTG Role Ownership and Viability
The owner of the ISTG role is communicated through normal Active Directory replication. Initially, the first domain controller in the site is the ISTG role owner. It communicates its role ownership to other domain controllers in the site by writing the distinguished name of its child NTDS Settings object to the interSiteTopologyGenerator attribute of the NTDS Site Settings object for the site. As a change to the configuration directory partition, this value is replicated to all domain controllers in the forest. The ISTG role owner is selected automatically. The role ownership does not change unless:

• The current ISTG role owner becomes unavailable.
• All domain controllers in the site are running Windows 2000 and one of them is upgraded to Windows Server 2003.

If at least one domain controller in a site is running Windows Server 2003, the ISTG role is assumed by a domain controller that is running Windows Server 2003. The viability of the current ISTG is assessed by all other domain controllers in the site. The need for a new ISTG in a site is established differently, depending on the forest functional level that is in effect.



• Windows 2000 functional level: At 30-minute intervals, the current ISTG notifies every other domain controller of its existence and availability by writing the interSiteTopologyGenerator attribute of the NTDS Site Settings object for the site. The change replicates to every domain controller in the forest. The KCC on each domain controller monitors this attribute for its site to verify that it has been written. If a period of 60 minutes elapses without a modification to the attribute, a new ISTG declares itself.
• Windows Server 2003 or Windows Server 2003 interim functional level: Each domain controller maintains an up-to-dateness vector, which contains an entry for each domain controller that holds a full replica of any directory partition that the domain controller replicates. On domain controllers that are running Windows Server 2003, this up-to-dateness vector contains a timestamp that indicates the time that the domain controller was last contacted by each of its replication partners, including both direct and indirect partners (that is, every domain controller that replicates a directory partition that is stored by this domain controller). The timestamp is recorded whether or not the local domain controller actually received any changes from the partner. Because all domain controllers store the schema and configuration directory partitions, every domain controller is guaranteed to have the ISTG for its site among the domain controllers in its up-to-dateness vector. This timestamp eliminates the need to receive periodic replication of the updated interSiteTopologyGenerator attribute from the current ISTG. When the timestamp indicates that the current ISTG has not contacted the domain controller in the last 120 minutes, a new ISTG declares itself. The Windows Server 2003 method eliminates the network traffic that is generated by periodically replicating the interSiteTopologyGenerator attribute update to every domain controller in the forest.
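The Windows Server 2003 failover rule reduces to a timestamp-age check against the up-to-dateness vector. The following sketch is illustrative only, with hypothetical domain controller names and times, not the KCC's implementation:

```python
# Illustrative sketch of the Windows Server 2003 ISTG viability check: if the
# up-to-dateness vector shows no contact from the current ISTG within the
# failover window (120 minutes), a new ISTG declares itself. Hypothetical data.
from datetime import datetime, timedelta

FAILOVER_WINDOW = timedelta(minutes=120)

def istg_failover_needed(up_to_dateness: dict, istg: str, now: datetime) -> bool:
    """up_to_dateness maps partner DC name -> time of last contact."""
    last_contact = up_to_dateness[istg]   # the ISTG is always in the vector
    return now - last_contact > FAILOVER_WINDOW

now = datetime(2003, 3, 28, 12, 0)
vector = {"DC1": now - timedelta(minutes=30), "DC2": now - timedelta(minutes=150)}
print(istg_failover_needed(vector, "DC2", now))   # True: no contact for 150 min
```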

ISTG Eligibility
When at least one domain controller in a site is running Windows Server 2003, eligibility for the ISTG role depends on the operating system of the domain controllers. When a new ISTG is required, each domain controller computes a list of domain controllers in the site. All domain controllers in the site arrive at the same ordered list. Eligibility is established as follows:

• If no domain controllers in the site are running Windows Server 2003, all domain controllers that are running Windows 2000 Server are eligible. The list of eligible servers is ordered by GUID.
• If at least one domain controller in the site is running Windows Server 2003, all domain controllers that are running Windows Server 2003 are eligible. In this case, the entries in the list are sorted first by operating system and then by GUID. In a site in which some domain controllers are running Windows 2000 Server, domain controllers that are running Windows Server 2003 remain at the top of the list and use the GUID in the same manner to maintain the order.
The domain controller that is next in the list of servers after the current owner declares itself the new ISTG by writing the interSiteTopologyGenerator attribute on the NTDS Site Settings object. If the current ISTG is temporarily disconnected from the topology, as opposed to being shut down, and a new ISTG declares itself in the interim, then two domain controllers can temporarily assume the ISTG role. When the original ISTG resumes replication, it initially considers itself to be the current ISTG and creates inbound replication connection objects, which results in duplicate intersite connections. However, as soon as the two ISTGs replicate with each other, the last domain controller to write the interSiteTopologyGenerator attribute continues as the single ISTG and removes the duplicate connections.
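The ordering and succession rule can be pictured as a simple sort. In this sketch, the server records and GUID strings are hypothetical, and the wrap-around when the owner is last in the list is an assumption; the real list is derived from directory data:

```python
# Illustrative sketch of ISTG eligibility ordering: Windows Server 2003 DCs
# sort ahead of Windows 2000 DCs, and GUID ordering applies within each group.
# The successor is the server that follows the current owner in this list.
servers = [
    {"name": "DC1", "os_is_2003": False, "guid": "7f3a..."},
    {"name": "DC2", "os_is_2003": True,  "guid": "1b9c..."},
    {"name": "DC3", "os_is_2003": True,  "guid": "9d41..."},
]

# Sort: Windows Server 2003 first, then by GUID within each operating system.
ordered = sorted(servers, key=lambda s: (not s["os_is_2003"], s["guid"]))

def next_istg(ordered_list, current_owner):
    names = [s["name"] for s in ordered_list]
    # Wrapping to the start of the list is an assumption for illustration.
    return names[(names.index(current_owner) + 1) % len(names)]

print(next_istg(ordered, "DC2"))   # DC3: the next eligible server in the list
```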

Bridgehead Server Selection
Bridgehead servers can be selected in the following ways:

• Automatically by the ISTG from all domain controllers in the site.
• Automatically by the ISTG from all domain controllers that are identified as preferred bridgehead servers, if any preferred bridgehead servers are assigned. Preferred bridgehead servers must be assigned manually.
• Manually by creating a connection object on a domain controller in one site from a domain controller in a different site.

By default, when at least one domain controller in a site is running Windows Server 2003 (regardless of forest functional level), any domain controller that hosts a domain in the site is a candidate to be an ISTG-selected bridgehead server. If preferred bridgehead servers are selected, candidates are limited to this list. The connections from remote servers are distributed among the available candidate bridgehead servers in each site. The selection of multiple bridgehead servers per domain and transport is new in Windows Server 2003.
The ISTG uses an algorithm to evaluate the list of domain controllers in the site that can replicate each directory partition. This algorithm is improved in Windows Server 2003 to randomly select multiple bridgehead servers per directory partition and transport. In sites containing only domain controllers that are running Windows 2000 Server, the ISTG selects only one bridgehead server per directory partition and transport.
When bridgehead servers are selected by the ISTG, the ISTG ensures that each directory partition in the site that has a replica in any other site can be replicated to and from that site or sites. Therefore, if a single domain controller hosts the only replica of a domain in a specific site and the domain has domain controllers in another site or sites, that domain controller must be a bridgehead server in its site. In addition, that domain controller must be able to connect to a bridgehead server in the other site that also hosts the same domain directory partition.
Note
If a site has a global catalog server but does not contain at least one domain controller of every domain in the forest, then at least one bridgehead server must be a global catalog server.

Preferred Bridgehead Servers
Because bridgehead servers must be able to accommodate more replication traffic than non-bridgehead servers, you might want to control which servers have this responsibility. To specify servers that the ISTG can designate as bridgeheads, you can select domain controllers in the site that you want the ISTG to always consider as preferred bridgehead servers for the specified transport. These servers are used exclusively to replicate changes collected from the site to any other site over that transport. Designating preferred bridgehead servers also serves to exclude those domain controllers that, for reasons of capability, you do not want to be used as bridgehead servers. Depending on the available transports, the directory partitions that must be replicated, and the availability of global catalog servers, multiple bridgehead servers might be required to replicate full and partial copies of directory data from one site to another.
The ISTG recognizes preferred bridgehead servers by reading the bridgeheadTransportList attribute of the server object. When this attribute has a value that is the distinguished name of the transport container that the server uses for intersite replication (IP or SMTP), the KCC treats the server as a preferred bridgehead server. For example, the value for the IP transport is CN=IP,CN=Inter-Site Transports,CN=Sites,CN=Configuration,DC=ForestRootDomainName. You can use Active Directory Sites and Services to designate a preferred bridgehead server by opening the server object properties and placing either the IP or SMTP transport into the preferred list, which adds the respective transport distinguished name to the bridgeheadTransportList attribute of the server.
The bridgeheadServerListBL attribute of the transport container object is a backlink attribute of the bridgeheadTransportList attribute of the server object. If the bridgeheadServerListBL attribute contains the distinguished name of at least one server in a site, then the KCC uses only preferred bridgehead servers to replicate changes for that site, according to the following rules:

• If at least one server is designated as a preferred bridgehead server, updates to the domain directory partition hosted by that server can be replicated only from a preferred bridgehead server. If at the time of replication no preferred bridgehead server is available for that directory partition, replication of that directory partition does not occur.
• If any bridgehead servers are designated but no domain controller is designated as a preferred bridgehead server for a specific directory partition that has replicas in another site or sites, the KCC selects a domain controller to act as the bridgehead server, if one is available that can replicate the directory partition to the other site or sites.
Therefore, to use preferred bridgehead servers effectively, be sure to do the following (a schematic sketch of the candidate-selection rule appears after this list):





• Assign at least two bridgehead servers for each of the following:
  • Any domain directory partition that has a replica in any other site.
  • Any application directory partition that has a replica in another site.
  • The schema and configuration directory partitions (one bridgehead server replicates both) if no domains in the site have replicas in other sites.
• If the site has a global catalog server, select the global catalog server as one of the preferred bridgehead servers.
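Taken together, the rules above amount to a small candidate-selection function: if any server in the site lists the transport in bridgeheadTransportList, only those servers are considered; otherwise, every domain controller in the site is a candidate. The sketch below uses hypothetical server records and a hypothetical forest root domain, and it is not the KCC's implementation:

```python
# Illustrative sketch of the preferred-bridgehead rule: if any server in the
# site lists the transport in bridgeheadTransportList, the KCC considers only
# those servers; otherwise every domain controller in the site is a candidate.
IP_TRANSPORT_DN = "CN=IP,CN=Inter-Site Transports,CN=Sites,CN=Configuration,DC=corp,DC=example"

site_servers = [
    {"name": "DC1", "bridgeheadTransportList": [IP_TRANSPORT_DN]},
    {"name": "DC2", "bridgeheadTransportList": []},
    {"name": "DC3", "bridgeheadTransportList": [IP_TRANSPORT_DN]},
]

def bridgehead_candidates(servers, transport_dn):
    preferred = [s for s in servers if transport_dn in s["bridgeheadTransportList"]]
    return preferred if preferred else servers   # fall back to all DCs in the site

print([s["name"] for s in bridgehead_candidates(site_servers, IP_TRANSPORT_DN)])
# ['DC1', 'DC3']
```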

Windows 2000 Single Bridgehead Selection
In a Windows 2000 forest, or in a Windows Server 2003 forest that has a forest functional level of Windows 2000, the ISTG selects a single bridgehead server per directory partition and transport. The selection changes only when the bridgehead server becomes unavailable. The following diagram shows the automatic bridgehead server (BH) selection that occurs in the hub site when all domain controllers host the same domain directory partition and multiple sites have domain controllers that host that domain directory partition.
Windows 2000 Single Bridgehead Server in a Hub Site that Serves Multiple Branch Sites

Windows Server 2003 Multiple Bridgehead Selection
When at least one domain controller in a site is running Windows Server 2003 (and thereby becomes the ISTG), the ISTG begins performing random load balancing of new connections. This load balancing occurs by default, although it can be disabled. When creating a new connection, the KCC must choose endpoints from the set of eligible bridgeheads in the two endpoint sites. Whereas in Windows 2000 the ISTG always picks the same bridgehead for all connections, in Windows Server 2003 it picks one randomly from the set of possible bridgeheads. Assuming that two of the three domain controllers have been added to the site since the ISTG was upgraded to Windows Server 2003, the following diagram shows the connections that might exist on domain controllers in the hub site to accommodate the four branch sites that have domain controllers for the same domain.
Random Bridgehead Server Selection in a Hub Site in which the ISTG Runs Windows Server 2003
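A minimal sketch of this random endpoint choice follows (hypothetical server and site names; the real KCC also tracks partitions, transports, and existing connections):

```python
# Illustrative sketch of Windows Server 2003-style random bridgehead choice:
# for each new intersite connection, pick an endpoint at random from the
# eligible bridgeheads instead of always reusing a single server (the
# Windows 2000 behavior). Hypothetical names.
import random

eligible_bridgeheads = ["HUB-DC1", "HUB-DC2", "HUB-DC3"]
branch_sites = ["Branch1", "Branch2", "Branch3", "Branch4"]

connections = {site: random.choice(eligible_bridgeheads) for site in branch_sites}
for site, hub_endpoint in connections.items():
    print(f"{site} -> inbound connection on {hub_endpoint}")
```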

If one or more new domain controllers are added to the hub site, the inbound connections on the existing bridgehead servers are not automatically rebalanced. The next time it runs, the ISTG considers the newly added servers as follows:

• If preferred bridgehead servers are not selected in the site, the ISTG considers the newly added servers as candidate bridgehead servers and creates new connections randomly if new connections are needed. It does not remove or replace the existing connections.
• If preferred bridgehead servers are selected in the site, the ISTG does not consider the newly added servers as candidate bridgehead servers unless they are designated as preferred bridgehead servers.
The initial connections remain in place until a bridgehead server becomes unavailable, at which point the KCC randomly replaces the connection on any available bridgehead server. That is, the endpoints do not change automatically for the existing bridgehead servers. In the following diagram, two new domain controllers are added to the hub site, but the existing connections are not redistributed.
New Domain Controllers with No New Connections Created

Although the ISTG does not rebalance the load among the existing bridgehead servers in the hub site after the initial connections are created, it does consider the added domain controllers as candidate bridgehead servers under either of the following conditions:

• A new site is added that requires a bridgehead server connection to the hub site.
• An existing connection to the hub site becomes unavailable.

The following diagram illustrates the inbound connection that is possible on a new domain controller in the hub site to accommodate a failed connection on one of the existing hub site bridgehead servers. In addition, a new branch site is added, and a new inbound connection can potentially be created on the new domain controller in the hub site. However, because the selection is random, there is no guarantee that the ISTG creates the connections on the newly added domain controllers.
Possible New Connections for Added Site and Failed Connection

Using ADLB to Balance Hub Site Bridgehead Server Load
In large hub-and-spoke topologies, you can implement the redistribution of existing bridgehead server connections by using the Active Directory Load Balancing (ADLB) tool (Adlb.exe), which is included with the “Windows Server 2003 Active Directory Branch Office Deployment Guide.” ADLB makes it possible to dynamically redistribute the connections on bridgehead servers. This tool works independently of the KCC but uses the connections that the KCC creates. Connections that are created manually are not moved by ADLB; however, manual connections are factored into the load-balancing equations that ADLB uses. The ISTG is limited to making modifications in its own site, but ADLB modifies both inbound and outbound connections on eligible bridgehead servers and offers schedule staggering for outbound connections. For more information about how bridgehead server load balancing and schedule staggering work with ADLB, see the “Windows Server 2003 Active Directory Branch Office Planning and Deployment Guide” on the Web at http://go.microsoft.com/fwlink/?linkID=28523.
Top of page

Network Ports Used by Replication Topology
By default, RPC-based replication uses dynamic port mapping. When connecting to an RPC endpoint during Active Directory replication, the RPC run time on the client contacts the RPC endpoint mapper on the server at a well-known port (port 135). The client queries the RPC endpoint mapper on this port to determine what port has been assigned for Active Directory replication on the server. This query occurs whether the port assignment is dynamic (the default) or fixed. The client never needs to know in advance which port to use for Active Directory replication.
Note
An endpoint comprises the protocol, local address, and port address.
In addition to port 135 and the dynamically assigned replication port, other ports that are required for replication to occur are listed in the following table.
Port Assignments for Active Directory Replication

Service Name    UDP    TCP
LDAP            389    389
LDAP                   636 (Secure Sockets Layer [SSL])
LDAP                   3268 (global catalog)
Kerberos        88     88
DNS             53     53
SMB over IP     445    445

Replication within a domain also requires FRS using a dynamic RPC port.

Setting Fixed Replication Ports Across a Firewall
For each service that needs to communicate across a firewall, there is a fixed port and protocol. Normally, the directory service and FRS use dynamically allocated ports that require a firewall to have a wide range of ports open. Although FRS cannot be restricted to a fixed port, you can edit the registry to restrict the directory service to communicating on a static port.
Note
If you must edit the registry, use extreme caution. Registry information is provided here as a reference for use by only highly skilled directory service administrators. It is recommended that you do not directly edit the registry unless, as in this case, there is no Group Policy or other Windows tool to accomplish the task. Modifications to the registry are not validated by the registry editor or by Windows before they are applied, and as a result, incorrect values can be stored. Storage of incorrect values can result in unrecoverable errors in the system.
Restricting the directory service to a fixed port requires editing the TCP/IP Port registry entry (REG_DWORD), located in:
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\NTDS\Parameters
Changing this registry entry on a domain controller and restarting it causes the directory service to use the TCP port named in the registry entry. For example, port 49152 is DWORD=0000c000 (hexadecimal).
Top of page
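On a domain controller, this edit can be scripted with the Python standard library's winreg module, using the example port from the text (49152 = 0xC000). This is a minimal sketch, assuming you run it locally with administrative rights; restart the domain controller afterward for the change to take effect.

```python
# Sketch: set the NTDS "TCP/IP Port" entry so the directory service replicates
# over a fixed TCP port (49152 = 0xC000, as in the example above). Requires
# administrative rights; the domain controller must be restarted afterward.
import winreg

NTDS_PARAMS = r"SYSTEM\CurrentControlSet\Services\NTDS\Parameters"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, NTDS_PARAMS, 0,
                    winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "TCP/IP Port", 0, winreg.REG_DWORD, 0xC000)
```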

Related Information
The following resources contain additional information that is relevant to this section.
• Active Directory Replication Model Technical Reference
• TCP/IP Technical Reference
• FRS Technical Reference
• DNS Support for Active Directory Technical Reference
• DFS Technical Reference

Active Directory Replication Tools and Settings
Updated: March 28, 2003

In this section
• Active Directory Replication Tools
• Active Directory Replication Registry Entries
• Active Directory Replication Group Policy Settings
• Active Directory Replication WMI Classes
• Network Ports Used by Active Directory Replication
• Related Information
Top of page

Active Directory Replication Tools
The following tools are associated with Active Directory replication.

Dssite.msc: Active Directory Sites and Services
Category
Active Directory Administrative Tools, Microsoft Management Console (MMC) snap-in. This tool is installed automatically when you install Active Directory, and it is available on the Start menu under Programs\Administrative Tools. This tool also ships with the Administration Tools Pack (Adminpak.msi).
Version compatibility
Active Directory Sites and Services runs on domain controllers that are running Windows Server 2003 and Windows 2000 Server. On administrative workstations that are running Windows XP Professional with Service Pack 1, you can install the Windows Server 2003 Administration Tools Pack (Adminpak.msi) from the i386 directory on the Windows Server 2003 operating system CD. This version of the Administration Tools Pack encrypts and signs LDAP traffic between the administrative tool clients and domain controllers. The Windows Server 2003 version of Active Directory Sites and Services (installed on the domain controller or on an administrative workstation by using the Administration Tools Pack) can target domain controllers that are running Windows Server 2003 and Windows 2000 Server.
Active Directory Sites and Services provides a view into the Sites container of the configuration directory partition. Use Active Directory Sites and Services to manage the Active Directory replication topology. The following objects and their properties can be managed by using this tool:

• Sites container: Add new sites.
• Site objects: Add new servers to a site.
• NTDS Site Settings object: For each site, view the connection object schedule and enable universal group membership caching.
• Server object: View the NTDS Settings object and designate the server as a bridgehead server.
• NTDS Settings object: View inbound connections for the server. View the connection object schedule and change the source server for the connection.
• Inter-Site Transports container: Manage IP and SMTP site links.
• Site link objects: Manage the site link properties for a set of sites.
• Subnets container: Add, remove, and configure subnets with IP addresses. Associate subnets with sites.

Repadmin.exe: Repadmin
Category
Windows Server 2003 Support Tools, command-line tool.

Version compatibility
Repadmin runs on any computer on which the Windows Server 2003 Support Tools can be installed, which includes the Windows Server 2003 family and Windows XP Professional.
Repadmin is used to view the replication information on domain controllers. You can determine the last successful replication of all directory partitions, identify inbound and outbound replication partners, identify the current bridgehead servers, view object metadata, and generally manage the Active Directory replication topology. You can use Repadmin to force replication of an entire directory partition or of a single object. You can also list domain controllers in a site.
Repadmin is extended in Windows Server 2003 to enable commands to target sets of domain controllers. For example, you can target all domain controllers in a site or domain, or all domain controllers that are global catalog servers. In Windows 2000 Server, Repadmin can report information about only one domain controller at a time. Repadmin is also improved in Windows Server 2003 to include the RemoveLingeringObjects command, which removes objects that are outdated (that is, objects that do not exist in a replica of the same directory partition on the source domain controller). For more information about removing lingering objects, see "Fixing Replication Lingering Object Problems (Event IDs 1388, 1988, 2042)" in the Windows Server 2003 Operations Guide at http://go.microsoft.com/fwlink/?LinkId=44131. For more information about Repadmin, see Repadmin Overview.

Ntdsutil.exe: Ntdsutil
Category
Windows Server 2003 Support Tools, command-line tool.

Version compatibility
This tool is compatible with Windows Server 2003. An updated version of Ntdsutil is included with Windows Server 2003 Service Pack 1 (SP1).
Ntdsutil.exe provides management capabilities for Active Directory. You can use Ntdsutil.exe to perform Active Directory database maintenance, manage and control single-master operations, and remove replication metadata left behind by domain controllers that are removed from the network without uninstalling Active Directory. The version of Ntdsutil that is included with Windows Server 2003 SP1 removes File Replication service (FRS) metadata in addition to Active Directory replication metadata. You can also use Ntdsutil to create application directory partitions and perform authoritative restore operations. This tool is intended for use by experienced administrators.
Top of page

Active Directory Replication Registry Entries
The information here is provided as a reference for use in troubleshooting or verifying that the required settings are applied. It is recommended that you do not directly edit the registry unless there is no other alternative. Modifications to the registry are not validated by the registry editor or by Windows before they are applied, and as a result, incorrect values can be stored. This can result in unrecoverable errors in the system. When possible, use Group Policy or other Windows tools, such as Microsoft Management Console (MMC), to accomplish tasks rather than editing the registry directly. If you must edit the registry, use extreme caution. The following registry settings cannot be modified by using Group Policy or other Windows tools.
NTDS Parameters Registry Settings
The following registry entries are associated with Active Directory replication.

Replicator notify pause after modify (secs)
Registry path: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters
Version: Windows 2000 Server.
Default value: 300 seconds (Windows 2000 Server).
The delay between an originating update on a domain controller and the first change notification. On domain controllers running Windows Server 2003, the value for the initial change notification delay is stored in the msDS-Replication-Notify-First-DSA-Delay attribute on the cross-reference object for each directory partition in the Configuration container. The default value in Windows Server 2003 is decreased to 15 seconds when the forest functional level is Windows Server 2003.

Replicator notify pause between DSAs (secs)
Registry path: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters
Version: Windows 2000 Server.
Default value: 30 seconds (Windows 2000 Server).
The delay before each subsequent change notification. On domain controllers running Windows Server 2003, the value for the subsequent notification delay is stored in the msDS-Replication-Notify-Subsequent-DSA-Delay attribute on the cross-reference object for each directory partition in the Configuration container. The default value in Windows Server 2003 is decreased to 3 seconds when the forest functional level is Windows Server 2003.

RPC Replication Timeout (mins)
Registry path: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters
Version: Windows Server 2003, Windows 2000 Server.
Default value: Windows 2000 Server: 45 minutes; Windows Server 2003: 5 minutes.
The number of minutes between the initiation of Active Directory replication and the RPC timeout. The domain controller must be restarted before a change takes effect.

Strict replication consistency
Registry path: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters
Version: Windows Server 2003, Windows 2000 Server with SP3.
Default value: Windows 2000 Server with SP3: off (0); Windows Server 2003: on (1).
Determines the treatment of replication of outdated objects that exist on reconnected domain controllers that have not replicated in longer than a tombstone lifetime. If the destination domain controller has strict replication consistency enabled, inbound replication of an outdated object is blocked. If the destination domain controller has strict replication consistency disabled, inbound replication of the full object occurs.

Replicator intra site packet size (objects)
Registry path: HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\NTDS\Parameters
Version: Windows Server 2003, Windows 2000 Server.
Default value: 1/1,000,000th the size of RAM, with a minimum of 100 objects and a maximum of 1,000 objects.
The maximum number of objects per packet for RPC replication within a site.

Replicator intra site packet size (bytes)
Registry path: HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\NTDS\Parameters
Version: Windows Server 2003, Windows 2000 Server.
Default value: 1/100th the size of RAM, with a minimum of 1 megabyte (MB) and a maximum of 10 MB.
The maximum size of objects per packet for RPC replication within a site.
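The RAM-derived defaults for these packet-size entries amount to a simple clamp. The sketch below assumes "size of RAM" is measured in bytes and that the division truncates; both are assumptions for illustration only:

```python
# Sketch of the RAM-derived defaults for the replication packet-size entries:
# objects per packet = RAM / 1,000,000 clamped to [100, 1000];
# bytes per packet   = RAM / 100 clamped to [1 MB, 10 MB].
MB = 1024 * 1024

def default_packet_objects(ram_bytes: int) -> int:
    return min(max(ram_bytes // 1_000_000, 100), 1_000)

def default_packet_bytes(ram_bytes: int) -> int:
    return min(max(ram_bytes // 100, 1 * MB), 10 * MB)

ram = 2 * 1024 * MB                     # a machine with 2 GB of RAM
print(default_packet_objects(ram))      # 1000 objects (clamped to the maximum)
print(default_packet_bytes(ram) // MB)  # 10 MB (clamped to the maximum)
```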

Replicator inter site packet size (objects)
Registry path: HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\NTDS\Parameters
Version: Windows Server 2003, Windows 2000 Server.
Default value: 1/1,000,000th the size of RAM, with a minimum of 100 objects and a maximum of 1,000 objects.
The maximum number of objects per packet for RPC replication between sites.

Replicator inter site packet size (bytes)
Registry path: HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\NTDS\Parameters
Version: Windows Server 2003, Windows 2000 Server.
Default value: 1/100th the size of RAM, with a minimum of 1 MB and a maximum of 10 MB.
The maximum size of objects per packet for RPC replication between sites.

Replicator async inter site packet size (objects)
Registry path: HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\NTDS\Parameters
Version: Windows Server 2003, Windows 2000 Server.
Default value: 1/1,000,000th the size of RAM, with a minimum of 100 objects and a maximum of 1,000 objects.
The maximum number of objects per packet for SMTP replication between sites.

Replicator async inter site packet size (bytes)
Registry path: HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\NTDS\Parameters
Version: Windows Server 2003, Windows 2000 Server.
Default value: 1 MB.
The maximum size of objects per packet for SMTP replication between sites.

Replicator compression algorithm
Registry path: HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\NTDS\Parameters
Version: Windows Server 2003.
Default value: 3. For Windows 2000 Server compression, change the value to 2.
Determines the compression algorithm that is used on a site link (Windows Server 2003 algorithm or Windows 2000 Server algorithm).

Repl topology update delay (secs)
Registry path: HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\NTDS\Parameters
Version: Windows Server 2003, Windows 2000 Server.
Default value: 300 seconds.
The number of seconds to wait between the time Active Directory starts and the time the KCC performs the first topology check. To find more information about Repl topology update delay (secs), see “Registry Reference” in Tools and Settings Collection.

Repl topology update period (secs)
Registry path: HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\NTDS\Parameters
Version: Windows Server 2003, Windows 2000 Server.
Default value: 900 seconds.
The interval between KCC replication topology checks. To find more information about Repl topology update period (secs), see “Registry Reference” in Tools and Settings Collection.

IntersiteFailuresAllowed
Registry path: HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\NTDS\Parameters
Version: Windows Server 2003, Windows 2000 Server.
Default value: 1.
The number of failed replication attempts prior to excluding nonresponding servers from the intersite topology.

MaxFailureTimeForIntersiteLink (sec)
Registry path: HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\NTDS\Parameters
Version: Windows Server 2003, Windows 2000 Server.
Default value: 7200 seconds (2 hours).
The time in seconds that must elapse prior to excluding nonresponding servers from the intersite topology.

NonCriticalLinkFailuresAllowed
Registry path: HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\NTDS\Parameters
Version: Windows Server 2003, Windows 2000 Server.
Default value: 1.
The number of failed replication attempts prior to excluding nonresponding servers from the intrasite topology.

MaxFailureTimeForNonCriticalLink (sec)
Registry path: HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\NTDS\Parameters
Version: Windows Server 2003, Windows 2000 Server.
Default value: 43200 seconds (12 hours).
The time in seconds that must elapse prior to excluding nonresponding servers from the intrasite topology.

CriticalLinkFailuresAllowed
Registry path: HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\NTDS\Parameters
Version: Windows Server 2003, Windows 2000 Server.
Default value: 0.
The number of failed replication attempts prior to excluding nonresponding servers for immediate neighbor connections within a site.

MaxFailureTimeForCriticalLink (sec)
Registry path: HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\NTDS\Parameters
Version: Windows Server 2003, Windows 2000 Server.
Default value: 7200 seconds (2 hours).
The time in seconds that must elapse prior to excluding nonresponding servers for immediate neighbor connections within a site.

TCP/IP Port
Registry path: HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\NTDS\Parameters
Version: Windows Server 2003, Windows 2000 Server.
Default value: 135.
The TCP port that the directory service uses for replication in place of a dynamically assigned port (which is otherwise negotiated through the RPC endpoint mapper on port 135). The domain controller must be restarted before a change takes effect.

Backup Latency Threshold (days)
Registry path: HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\NTDS\Parameters
Version: Windows Server 2003 with SP1.
Default value: Half the value of the tombstone lifetime of the forest.
When this threshold is reached, event ID 2089 is logged in the Directory Service event log, warning administrators and monitoring applications to make sure that domain controllers are backed up before the tombstone lifetime expires.
Top of page

Active Directory Replication Group Policy Settings
The following table lists and describes the Group Policy settings that are associated with Active Directory replication updates.
Group Policy Settings Associated with Active Directory Replication

• Account Lockout Policy (Account lockout duration; Account lockout threshold; Reset account lockout counter after): Changes to these settings in the Domain Security Policy trigger urgent replication.
• Password Policy (Enforce password history; Maximum password age; Minimum password age; Minimum password length; Password must meet complexity requirements; Store passwords using reversible encryption): Changes to these settings in the Domain Security Policy trigger urgent replication.
• Contact PDC on logon failure: Account lockout and domain password changes rely on urgently contacting the primary domain controller (PDC) emulator to update it with the change. If Contact PDC on logon failure is disabled, replication of password changes to the PDC emulator occurs non-urgently.

To find more information about these Group Policy settings, see “Group Policy Settings Reference” in Tools and Settings Collection. Top of page

Active Directory Replication WMI Classes
The following table lists and describes the WMI classes that are associated with Active Directory replication. These classes ship with Windows Server 2003 but are also compatible with Windows 2000 Server.
WMI Classes Associated with Active Directory Replication

Class Name             Namespace                        Version Compatibility
MSAD_DomainController  \\root\MicrosoftActiveDirectory  Windows Server 2003, Windows 2000 Server
MSAD_NamingContext     \\root\MicrosoftActiveDirectory  Windows Server 2003, Windows 2000 Server
MSAD_ReplNeighbor      \\root\MicrosoftActiveDirectory  Windows Server 2003, Windows 2000 Server
MSAD_ReplCursor        \\root\MicrosoftActiveDirectory  Windows Server 2003, Windows 2000 Server
MSAD_ReplPendingOp     \\root\MicrosoftActiveDirectory  Windows Server 2003, Windows 2000 Server

For more information about these WMI classes, see the WMI SDK documentation on MSDN. Top of page
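As a usage illustration, the sketch below queries MSAD_ReplNeighbor through the root\MicrosoftActiveDirectory namespace. It assumes the third-party Python wmi package (pip install wmi) running locally on a domain controller; the class and namespace come from the table above, but everything else is an assumption.

```python
# Sketch: enumerate replication neighbors via the MSAD_ReplNeighbor WMI class.
# Assumes the third-party "wmi" package on a domain controller; run locally.
import wmi

ad = wmi.WMI(namespace=r"root\MicrosoftActiveDirectory")
for neighbor in ad.MSAD_ReplNeighbor():
    # Printing the instance dumps all of its properties; exact property names
    # are best discovered by inspection rather than assumed here.
    print(neighbor)
```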

Network Ports Used by Active Directory Replication

By default, RPC-based replication uses dynamic port mapping. When connecting to an RPC endpoint during Active Directory replication, the RPC run time on the client contacts the RPC endpoint mapper on the server at a well-known port (port 135). The client queries the RPC endpoint mapper on this port to determine what port has been assigned for Active Directory replication on the server. This query occurs whether the port assignment is dynamic (the default) or fixed. The client never needs to know in advance which port to use for Active Directory replication.
Note
An endpoint comprises the protocol, local address, and port address.
In addition to port 135 and the dynamically assigned replication port, other ports that are required for replication to occur are listed in the following table.
Port Assignments for Active Directory Replication

Service Name    UDP    TCP
LDAP            389    389
LDAP                   636 (Secure Sockets Layer [SSL])
LDAP                   3268 (global catalog)
Kerberos        88     88
DNS             53     53
SMB over IP     445    445

Replication within a domain also requires FRS using a dynamic RPC port.
Top of page

Related Information
The following resources contain additional information that is relevant to this section.
• How the Active Directory Replication Model Works
• How Active Directory Replication Topology Works

Related Documents