Doc Revision 1.1
Real Time Replication in the Real World
Richard E. Baum
and
C. Thomas Tyler
Perforce Software, Inc.
Copyright © 2010 Perforce Software
Table of Contents

1 Overview
2 Definitions
3 Solutions Involving Replication
  3.1 High Availability Solutions
    3.1.1 High Availability Thinking
  3.2 Disaster Recovery Solutions
    3.2.1 Disaster Recovery Thinking
  3.3 Read-only Replicas
    3.3.1 Read-only Replica Thinking
4 Perforce Tools that Support Metadata Replication
  4.1 Journal Truncation
  4.2 p4jrep (deprecated)
  4.3 p4 replicate
    4.3.1 Replication to Journal File
    4.3.2 Replication to live DB files
    4.3.3 Filtering Example
  4.4 p4 export
    4.4.1 Report Generation from Scripts
    4.4.2 Report Generation from SQL Databases
    4.4.3 Full Replication (Metadata and Archive Files)
    4.4.4 Daily Security Reports
  4.5 Built-in Replication Tools - Summary
5 Tools that Support Archive File Replication
  5.1 Rsync / Robocopy
  5.2 Filesystem or Block-Level Replication
  5.3 Perfect Consistency vs. Minimum Data Loss
6 Putting it All Together
  6.1 Classic DR: Truncation + Commercial WAN Replicator
  6.2 The SDP Solution Using p4 replicate
    6.2.1 Failover
  6.3 A Read-only Replica for Continuous Integration
    6.3.1 Define how Users Will Connect to the Replica
    6.3.2 Use Filtered Replication
    6.3.3 Make Archive Files Accessible (read-only) to the Replica
7 Operational Issues with Replication
  7.1 Obliterates: Replication vs. Classic Journal Truncation
8 Summary
9 References & Resources
1 Overview

There are myriad ways to configure a Perforce environment to allow for multiple,
replicated servers. Configurations are chosen for a wide variety of reasons. Some provide
high availability solutions. Some provide disaster recovery. Some provide read-only
replicas that take workload off of a main server. There are also combinations of these.
What you should deploy depends on your desired business goals, the availability of
hardware and network infrastructure, and a number of other factors.
The 2009.2 release of Perforce Server provides built-in tools to allow for near-real-time
replication of metadata. These tools allow for much easier implementation of both read-
only replica servers and high-availability/disaster-recovery solutions. This paper discusses
some of the most common replication configurations, the ways they are supported by the
Perforce Server, and characteristics of each.
2 Definitions

A number of terms are used throughout this document. They are defined as follows:
HA – High Availability – A system design protocol and associated implementation
that ensures a certain degree of operational continuity during a given measurement
period, even in the event of certain failures of hardware or software components.
DR – Disaster Recovery – The process, policies and procedures related to preparing
for recovery or continuation of technology infrastructure critical to an organization
after a natural or human-induced disaster.
RPO – Recovery Point Objective – Describes an acceptable amount of data loss
measured in time.
RTO – Recovery Time Objective – The duration of time and a service level within
which service must be restored after a disaster.
Metadata – Data contained in the Perforce database files (db.* files in the
P4ROOT).
Archive Files – All revisions of all files submitted to the Perforce Server and
currently shelved files.
Read-only Replica – A Perforce Server instance that operates using a copy of the
Perforce metadata.
DRBD (Distributed Replicated Block Device) – a distributed storage system
available as of the 2.6.33 kernel of Linux. DRBD runs over a network and works
very much like RAID 1.
3 Solutions Involving Replication
3.1 High Availability Solutions

High Availability solutions keep Perforce servers available to users, despite failures of
hardware components. HA solutions are typically deployed in environments where there is
little tolerance for unplanned downtime. Large sites with 24x7 uptime requirements due to
globally distributed development environments strive for HA.
Perforce has excellent built-in journaling capabilities. It is fairly easy to implement a
solution that is tolerant to faults, prevents data loss in any single point of failure situation,
and that limits data loss in more significant failures. With HA, prevention of data loss for
any single point of failure is assumed to be accounted for, and the focus is on limiting
downtime.
HA solutions generally offer a short RTO and low RPO. They offer the fastest recovery
from a strict hardware failure scenario. They also cost more, as they involve additional
standby hardware at the same location as the main server. Full metadata replication is also
used, and the backup server is typically located on the same LAN as the main server.
This results in good performance of replication processes. Due to the proximity of the
primary and backup servers, these solutions do not offer much in the way of disaster
recovery. A site-wide disaster or regional problem (earthquake, hurricane, etc.) can result
in a total outage.
3.1.1 High Availability Thinking
The following are sample thoughts that lead to the deployment of HA solutions:
• We're willing to invest in a more sophisticated deployment architecture to reduce
unplanned downtime.
• We will not accept data loss for any Single Point of Failure (SPOF).
• Downtime is extremely expensive for us. We are willing to spend a lot to reduce the
likelihood of downtime, and minimize it when it is unavoidable.
3.2 Disaster Recovery Solutions

In order to offer a true disaster recovery solution, a secondary server needs to be located in
a site that is physically separate from the main server. Full metadata replication provides a
reliable failover server in a geographically separate area. Thus, if one site becomes
unavailable due to a natural disaster, another can take its place. As WAN connections are
often considerably slower than local area network connections, these solutions tend to have
a higher RTO and a longer RPO than HA solutions.
Near real-time replication over the WAN is possible in some environments. Solutions that
achieve this sometimes rely on commercial WAN replication to handle archive files, and
p4 replicate to keep metadata up to date.
3.2.1 Disaster Recovery Thinking
The following are sample thoughts that lead to the deployment of DR solutions:
• We're willing to invest in a more sophisticated deployment architecture to ensure
business continuity in event of a disaster.
• We need to ensure access to our intellectual property, even in the event of a sudden and
total loss of one of our data centers.
3.3 Read-only Replicas

Read-only replicas are generally used to offload processing from live production servers.
Tasks run against read-only replicas will not block read/write users from accessing the live
production Perforce instance. One common use for this is with automated build farms,
where large sync operations could otherwise cause users to have to wait to submit.
Read-only servers are often created from a subset of depot metadata. They usually do not
need db.have data, for example. That database tells the server which client workspaces
contain which files – information not needed for automated builds.
Building a read-only replica typically involves using shared storage (often a SAN) for
archive files, so that the archive files written on the primary server are mounted read-only
at the same location on the replica. In some cases, read-only replicas are run on the same
server machine as the primary server. In that configuration, the replicas are run under a
different login that does not have access to write to the archive files, ensuring they remain
read-only.
To obscure the details of the deployment architecture from users and keep things simple
for them, p4broker can be used. When p4broker is used, humans and automated build
processes set their P4PORT value to that of the broker rather than the real server.
The broker implements heuristics to determine which requests need to be handled by the
primary server, and which can be sent to the replica. For example, the broker might
forward all 'sync' commands from the 'builder' user to the replica, while the submit at the
end of a successful build would go to the primary server.
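As an illustration, a broker configuration implementing such a heuristic might look like the following. The host names, ports, and paths are hypothetical, and this is only a sketch; consult the p4broker documentation for the full configuration syntax.

```
target      = perforce.myco.com:1742;
listen      = 1666;
directory   = /p4broker;
logfile     = broker.log;

# Send read-only sync traffic from the automated 'builder' user
# to the replica; all other commands go to the target server.
command: ^sync$
{
    user        = builder;
    action      = redirect;
    destination = replica.myco.com:1742;
}
```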
3.3.1 Read-only Replica Thinking
The following are sample thoughts that lead to the deployment of read-only replica
solutions:
• We have automation that interacts with Perforce, such as continuous integration build
systems or reports, that impact performance on our primary server.
• We are willing to invest in a more sophisticated deployment architecture to improve
performance and increase our scalability.
4 Perforce Tools that Support Metadata Replication

In any replication scheme, both the depot metadata and the archive files must be addressed.
Perforce supports a number of different ways to replicate depot metadata. Each has
different uses. Some are more suited to certain types of operations.
4.1 Journal Truncation

The classic way to replicate Perforce depot metadata is to truncate the running journal file,
which maintains a log of all metadata transactions, and ship it to the remote server where it
is replayed. This has several advantages. It results in the remote metadata being updated to
a known point in time. It also allows the primary server to continue to run without any
interruption. The truncated journal file can be copied via a variety of methods, including
rsync/Robocopy, FTP, block-level replication software, etc. Automating such tasks is not
generally very difficult. For systems where a low RPO is needed, however, the number of
journal file fragments shipped over may make this approach impractical. Truncating the
journal every few minutes can result in a huge number of journal files, and confusion in the
event of a system problem. Extending the time between journal truncations, on the other
hand, causes the servers to become out of sync for greater periods of time. This solution,
therefore, tends to be more of a DR one rather than a HA one.
4.2 p4jrep (deprecated)
An interim solution to the problem of repeated journal truncation was p4jrep. This utility,
available from the Perforce public depot, provided a way to move journal data between
servers in real time. Journal records were read and sent through a pipe from one server to
another. This required some amount of synchronization, and servers needed to be
connected via a stable network connection. It was not available for Windows.
4.3 p4 replicate
While p4jrep demonstrated that near real-time replication was possible, it also showed
the need for a built-in solution. Starting with the 2009.2 version of Perforce Server, the
p4 replicate command provides the same basic functionality. It works on all server
platforms, and is a fully supported component of Perforce.
p4 replicate is also much more robust than its predecessor. Transaction markers
within the journal file are used to ensure transactional integrity of the replayed data, and
the data is delivered by the Perforce server itself rather than by an outside process. Additionally,
as the command runs via the Perforce command line client, the servers it joins can easily
be located anywhere.
4.3.1 Replication to Journal File
Here is a sample p4 replicate command, wrapped in p4rep.sh:
#!/bin/bash
P4MASTERPORT=perforce.myco.com:1742
CHECKPOINT_PREFIX=/p4servers/master/checkpoints/myco
P4ROOT_REPLICA=/p4servers/replica/root
REPSTATE=/p4servers/replica/root/rep.state
p4 -p $P4MASTERPORT replicate \
    -s $REPSTATE \
    -J $CHECKPOINT_PREFIX \
    -o /p4servers/replica/logs/journal
4.3.2 Replication to live DB files
Here is a variant of the same p4rep.sh wrapper that replays the replicated journal records directly into live db.* files:
#!/bin/bash
P4MASTERPORT=perforce.myco.com:1742
CHECKPOINT_PREFIX=/p4servers/master/checkpoints/myco
P4ROOT_REPLICA=/p4servers/replica/root
REPSTATE=/p4servers/replica/root/rep.state
p4 -p $P4MASTERPORT replicate \
    -s $REPSTATE \
    -J $CHECKPOINT_PREFIX -k \
    p4d -r $P4ROOT_REPLICA -f -b 1 -jrc -
This example starts a replication on the local host with
P4ROOT=/p4servers/replica/root, pulling data from a (presumably remote)
primary server on port perforce.myco.com:1742. This runs constantly, polling every
2 seconds (the default).
Prior to the first execution of this program, some one-time initial setup is needed on the
replica. That configuration would be:
1. Load the most recent checkpoint.
2. Create a rep.state file and in it store the journal counter number embedded in
the name of the most recent checkpoint file.
After this initialization procedure, the contents of the rep.state file are maintained by the
p4 replicate command with the journal counter number and offset, indicating the
current state of replication. If replication is stopped and restarted hours later, it will know
where to start from and which journal records to replay.
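The one-time setup might be scripted along these lines. The paths follow the earlier p4rep.sh example; the checkpoint naming convention (prefix.ckp.N.gz) and the helper names are assumptions for illustration, not part of the product.

```shell
#!/bin/bash
# Hypothetical replica initialization sketch -- not an official script.

ckp_counter() {
    # Extract the journal counter embedded in a checkpoint file name,
    # e.g. myco.ckp.123.gz -> 123.
    echo "$1" | sed -E 's/.*\.ckp\.([0-9]+)(\.gz)?$/\1/'
}

init_replica() {
    local root=$1 prefix=$2
    # 1. Find and load the most recent checkpoint into the replica's
    #    P4ROOT (numeric sort on the counter field; assumes the prefix
    #    itself contains no extra dots).
    local latest
    latest=$(ls "$prefix".ckp.*.gz | sort -t. -k3,3n | tail -1)
    p4d -r "$root" -jr -z "$latest"
    # 2. Seed rep.state with that checkpoint's journal counter so
    #    p4 replicate knows which journal records to request first.
    ckp_counter "$latest" > "$root/rep.state"
}

# Example (paths from the p4rep.sh sample above):
# init_replica /p4servers/replica/root /p4servers/master/checkpoints/myco
```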
4.3.3 Filtering Example
This example excludes db.have records from replication.
#!/bin/bash
P4MASTERPORT=perforce.myco.com:1742
CHECKPOINT_PREFIX=/p4servers/master/checkpoints/myco
P4ROOT_REPLICA=/p4servers/replica/root
REPSTATE=/p4servers/replica/root/rep.state
p4 -p $P4MASTERPORT replicate \
    -s $REPSTATE \
    -J $CHECKPOINT_PREFIX -k \
    grep --line-buffered -v '@db\.have@' |\
    p4d -r $P4ROOT_REPLICA -f -b 1 -jrc -
4.4 p4 export
The p4 export command also provides a way to extract journal records from a live
server. It extracts journal records in a tagged format that is more easily readable by humans. The
tagged form also makes it easier to write scripts that parse the output. It has the ability to
filter journal records.
There are a variety of uses for p4 export. Some examples follow:
4.4.1 Report Generation from Scripts
Report generation of all kinds can be done using p4 export and tools like Perl,
Python, and Ruby.
4.4.2 Report Generation from SQL Databases
Using the P4toDB utility, one can load Perforce metadata directly into an SQL database. A
wide variety of reporting can be done from there. P4toDB is a tool that retrieves existing
and updated metadata from the Perforce server and translates it into SQL statements for
replay into a third-party SQL database. Under the covers, it uses p4 export.
4.4.3 Full Replication (Metadata and Archive Files)
To replicate the archive files in addition to the metadata, p4 export can be called with
-F table=db.rev to learn about archive changes (or -F table=db.revsh to learn about
shelved files). Then you can schedule specific files for rsync or some other external
copying. This metadata-driven archive file update process can be very fast indeed.
4.4.4 Daily Security Reports
If a daily security report of changes to protections or groups is desired, p4 export could
facilitate this.
4.5 Built-in Replication Tools - Summary
The p4 replicate and p4 export commands provide good alternatives to journal
tailing, as they provide access to the same data via a p4 command.
One typical use case for p4 replicate is to duplicate metadata into a journal file on a
client machine. Another is to feed the duplicated metadata into a replica server via a
variant of the p4d -jr command. Tagged p4 export output allows culling of journal
records by specifying filter patterns. It is possible to filter journal records using p4
replicate with a wrapper script handling the culling, as opposed to using p4 export.
5 Tools that Support Archive File Replication

There are no built-in Perforce tools that replicate the archive files. Such operations involve
keeping directory trees of files in sync with each other. There are a number of tools
available to do this.
5.1 Rsync / Robocopy

Rsync (on Linux) and Robocopy (Windows) provide a way to keep the archive files
synced across multiple machines. They operate at the user level, and are smart about how
they copy data. For example, they analyze the content of the file trees, and the files
themselves, sending only the files and file parts necessary to synchronize things. For
smaller Perforce deployments, or large ones with a relaxed RPO (e.g. 24 hours in a disaster
scenario), these tools are generally up to the task of replicating the archive files between
Perforce servers.
For a large tree of files, the computation involved and the time to copy the data can be
quite long. Even if only a few dozen files have changed out of several hundred thousand,
it may take some time to determine the extent of what changed. While the filesystem
continues to be written to, rsync/Robocopy is still figuring out which files to propagate. In
some cases, parallelizing calls to rsync/Robocopy can speed up the transfer process
considerably, allowing these tools to be used with larger data sets.
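A minimal sketch of the parallelization idea, assuming a tree whose top-level subdirectories can be synced independently. The function name and the injectable copy command are illustrative, not a recommended production tool.

```shell
#!/bin/bash
# Hypothetical parallel-sync sketch: run one copy process per
# top-level subdirectory, up to $jobs at a time.
parallel_sync() {
    local src=$1 dest=$2 jobs=$3
    # copy_cmd is injectable so the logic can be tried with plain
    # 'cp -r'; a real deployment would use something like 'rsync -a'.
    local copy_cmd=${4:-"rsync -a"}
    ls -d "$src"/* | xargs -P"$jobs" -I{} $copy_cmd {} "$dest/"
}

# Example (destination could also be a remote rsync target):
# parallel_sync /p4servers/master/depots drserver:/p4servers/replica/depots 4
```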
5.2 Filesystem or Block-Level Replication

There are commercial software packages available that mirror filesystem data at a low,
disk block, level. As long as there is a fast network connection between the machines, data
is replicated as it is written, in real-time. For Linux users, the open source DRBD system
is an alternative to consider.
5.3 Perfect Consistency vs. Minimum Data Loss

The archive file transfer process tends to be the limiting factor in defining RPO. It takes
time to guarantee that the archive files are updated on the remote machine. Even if few
files change, it can take a while for the tools to verify that and transfer just the right set of
files.
It's nice to know that the metadata and archive files are in sync, and that you have a nice,
clean recovery state. Thus, it is common for DR solutions to start at some point in time,
create the truncated journal, and then “hold off” on replaying the journals until all
corresponding archive files have been transferred. This approach favors consistency.
Users will not see any “Librarian” errors after recovery.
If an HA or DR solution is architected so that metadata replays are done as quickly as
possible, instead of waiting for the typically slower transfer of archive files, journal records
will reference archive files that have not yet been transferred. This can cause users to see
Perforce “Librarian” errors after journal recovery. Some might consider this messy, but
there are in fact advantages to this approach. The main advantage is that the metadata is as
up-to-date as possible. More data is preserved: pending changelist descriptions, fixes for
jobs, and changes to workspaces, users, and labels are all valuable information. In the
event of a disaster, the p4 verify errors provide extremely useful
information. They point the administrator to the archive files that are missing. Submitted
changelists point the administrator to the client workspaces where copies of these files may
be found. As these file revisions are the most recent to be added to the repository, it is
extremely likely that they will still exist in user workspaces.
Which of these approaches is better depends on your organization. Perforce administrators
may be loath to restore files from build servers or user desktops. The need to do so
complicates the recovery process and increases the downtime. Taking the additional
downtime to recover files from client machines, however, will decrease the data loss, and
ultimately simplify recovery for users.
For large data sets, the effect on RPO of maintaining perfect consistency is much greater
with rsync/Robocopy than it is with faster block-level replication systems. Block-level
replication systems or metadata-driven archive file transfers can, on many data sets, deliver
near-zero RPO even in a worst-case, sudden disaster scenario.
Running p4 verify regularly is a routine practice. Having known-clean archive files
prior to a disaster can help in a recovery situation, since you can assume any new errors are
the result of whatever fate befell the primary server. More importantly, sometimes errors
provide an early warning, a harbinger of impending disk or RAM failures.
6 Putting it All Together
6.1 Classic DR: Truncation + Commercial WAN Replicator

In one enterprise customer deployment (running on a version earlier than 2009.2), Perforce
Consulting deployed a DR solution using the classic journal truncation methodology. A
business decision was made that an 8 hour RPO was acceptable in a true disaster scenario.
This solution involved a commercial block-level replication solution that transferred
archive files from the primary server to the DR machine. Journals were deposited in the
same volume monitored by the commercial replication solution, so the journal files went
along for the ride.
The core approach was very straightforward:
1. Run p4d -jj every 8 hours on the primary server, depositing the journal files on the
same volume as the archive files (gaining the benefit of free file transfer).
2. On the DR server, replay any outstanding journals using p4d -jr (using p4
counter journal to get the current journal counter and thus determine which are
outstanding).
3. Keep the Perforce instance on the spare server up and running. Its daily job is running
p4 verify.
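Step 2 can be sketched as follows. The helper below lists the numbered journal files beyond the replica's last-replayed counter, in replay order. The file naming convention (prefix.jnl.N) and paths are assumptions based on the examples above, not the customer's actual script.

```shell
#!/bin/bash
# Hypothetical replay helper for the DR server -- a sketch only.

outstanding_journals() {
    # Print journal files named <prefix>.jnl.<N> with N greater than
    # the given counter, sorted numerically for replay order.
    local prefix=$1 counter=$2
    local f n
    for f in "$prefix".jnl.*; do
        [ -e "$f" ] || continue
        n=${f##*.}
        [ "$n" -gt "$counter" ] && printf '%s %s\n' "$n" "$f"
    done | sort -n | cut -d' ' -f2-
}

# On the DR server, replay each outstanding journal in order, e.g.:
# for j in $(outstanding_journals /p4servers/replica/journals/myco "$last"); do
#     p4d -r /p4servers/replica/root -jr "$j"
# done
```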
The actual implementation addressed various complexities:
1. The DR machine might be shut down for a while and restored to service. A scheduled
task periodically replays only outstanding journal files.
2. Human intervention was chosen over a fully automated failover solution. This
approach increases RTO compared to fully automated solutions by making it necessary
for a human to be involved. It also reduces the likelihood of dangerous scenarios
where there is a lack of clarity among humans and systems about which server is
currently the live production server.
6.2 The SDP Solution Using p4 replicate
After the release of 2009.2, the Perforce Server Deployment Package (SDP) was upgraded
to take advantage of the new p4 replicate command. The SDP solution is easy to set
up, and provides real-time metadata replication. It does not provide transfer of the archive
files. Actual deployments of the SDP use other means for handling transfer of archive files
from the primary to the failover machine.
The p4 replicate command is wrapped in either of two ways. For replication to a set
of live db.* files, there is a p4_journal_backup_dr.sh script (.bat on Windows).
For replication to a journal file on a spare machine (not replaying into db.* files), there is
another script, p4_journal_backup_rep.sh. Both wrappers provide the command line
and recommended options, and define a standard location for the state information file
maintained by p4 replicate.
Normally, if the primary server stops, replication also stops and must be restarted. The
SDP wrapper provides a configurable window (10 minutes by default) in which replication
will be automatically restarted, so that a simple server restart won't impact replication.
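The restart window can be approximated with a generic retry loop like the one below. This is a simplified stand-in for the SDP wrapper's behavior, with hypothetical names; the real script is more elaborate.

```shell
#!/bin/bash
# Hypothetical restart-window sketch: re-run a command until it
# succeeds or the window expires.
retry_within_window() {
    local window_secs=$1 poll_secs=$2
    shift 2
    local deadline=$(( $(date +%s) + window_secs ))
    while [ "$(date +%s)" -lt "$deadline" ]; do
        "$@" && return 0    # command succeeded; stop retrying
        sleep "$poll_secs"  # wait before the next restart attempt
    done
    return 1                # window expired without success
}

# e.g. keep restarting replication for up to 10 minutes:
# retry_within_window 600 10 /p4servers/replica/bin/p4rep.sh
```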
A separate script, p4_drrep1_init, is provided to start and stop replication.
Once enabled, replication runs constantly, polling the server at a defined interval. The
SDP scripts use the default polling interval of 2 seconds – effectively real-time.
6.2.1 Failover
The SDP solution assumes the presence of a trained Perforce Administrator. It provides
digital assets needed for recovery. It is not an automated failover solution. The
documentation describes how to handle those aspects of failover related to the SDP scripts.
Environment-specific failover tasks, such as DNS switchover, mounting SAN volumes,
cluster node switches, etc., are left for the local administrator.
6.3 A Read-only Replica for Continuous Integration

With Agile methodologies being all the rage, there is a proliferation of continuous
integration build solutions used with Perforce. Many of these systems poll the Perforce
server incessantly, always wanting to know if something new has been submitted to some
area of the repository. Continuous integration builds are just one example of frequent
read-only transactions that can keep a Perforce server machine extremely busy.
A popular solution to performance woes caused by various automated systems that interact
with Perforce is to use a read-only replica server. The core technology for setting up a
read-only replica is straightforward:
6.3.1 Define how Users Will Connect to the Replica.
Either use multiple P4PORT values selected by users, or use a more sophisticated
approach involving the p4broker.
6.3.1.1 Simple (for administrators):
Modify build scripts to use appropriate P4PORT values, either for the live server or the
replica, depending on whether they will be writing (either archive files or metadata), or
doing purely read-only operations. This works well in smaller environments, where the
team administering Perforce often also owns the build scripts.
6.3.1.2 Simple (for end users):
Use p4broker to route requests to the appropriate P4PORT, either to the live server or the
read-only replica. For example, route all requests from user 'builder' that are not 'submit'
commands to the replica. End users always set their P4PORT to the broker. This
approach is better suited to large enterprises with dedicated Perforce administrators. The
fact that a replica is in use is largely transparent to end users.
6.3.2 Use Filtered Replication
Call p4 replicate from the replica machine while pointing to the live server. Use filtering
to improve performance. Filter out db.have and various other journal records. For a
simple case like db.have, whose records are always single-line entries, grep can be used, but line
buffering should be used to guarantee transactional integrity. A filtering line might look
like:
grep --line-buffered -v '@db\.have@'
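As a quick sanity check, the pattern can be exercised on a couple of hypothetical single-line journal records. Real journal lines contain more fields than shown here; only the table name matters for the filter.

```shell
#!/bin/bash
# Demonstrate the filter on made-up journal records: the db.have
# record is dropped, the db.rev record passes through.
filter_have() {
    grep --line-buffered -v '@db\.have@'
}

printf '%s\n' \
    '@pv@ 3 @db.have@ @//ws/f@ @//depot/f@ 1' \
    '@pv@ 7 @db.rev@ @//depot/f@ 1' | filter_have
```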
Some types of journal entries can span multiple lines. When working with those, you'll
need a more advanced approach. For a more general case handling records that may span
multiple lines, see this file in the Perforce public depot:
//guest/michael_shields/src/p4jrep/awkfilter.sh
6.3.3 Make Archive Files Accessible (read-only) to the Replica
Make the archive files available to the replica, ensuring they are read-only to the replica:
• Use a SAN or other shared storage solution for archive files, and have the replica
server access the same storage system, with archive file volumes mounted read-only
on the replica. Thus, the archive files accessed by the read-only Perforce Server
instance are the same physical archive files accessed by the live server.

• Alternately, if there are sufficient resources (CPU, RAM, multiple processor cores)
on the primary server machine, the replica instance can be run on the same machine
as the primary server. The replica instance runs under a different login account that
can't write to the archive files. This negates the need to involve a shared storage
solution.

• Maintain a separate copy of the archive files on the replica, and scan journal entries
harvested by p4 export to determine when individual archive files need to be
pulled from the live server. Metadata-driven archive file updates can be faster than
high-level rsync commands run at the root of an entire depot.
7 Operational Issues with Replication

There are risks to using a “hot” real-time replication approach. There are some failure
scenarios that can foil these systems. It is possible to propagate bad data either because it's
bad at the source or due to errors during transfer. Happily, p4 replicate provides
insulation and fails safely if metadata is garbled in transfer.
There may be cases where replication stops and needs to be restarted. To maintain low
RPO, it is important to quickly detect that replication has stopped so that it can be
corrected.
7.1 Obliterates: Replication vs. Classic Journal Truncation

Replication is robust. However, there are some human error scenarios that are better
handled with classic journal truncation. Compared with replication, periodic truncation of
the journal will not keep the metadata on the standby machine as fresh. Perhaps the best
solution is to use a combination of p4 replicate and classic journal truncation.
Frequent journal truncations make journal files available for use with alternative journal
replay options using p4d -jr. These journal files might come in handy if, for example, a
p4 obliterate was run accidentally. Working with Perforce Technical Support, the
records of the obliteration could be stripped from the journals before replaying them on a
standby machine. This makes it possible to undo the effects of the obliteration in the metadata,
while preserving other metadata transactions in the journal. This process requires
downtime, but metadata recovery is complete. So long as the archive files are propagated
in a way that does not propagate file deletions (e.g. with proper rsync or Robocopy
options), the archive files can be restored as well, using p4 verify to identify files to
restore.
8 Summary
The p4 replicate and p4 export commands make advanced replication and journal
scanning solutions more reliable and easier to implement. They are supported, cross-platform,
reliable, and robust. The new tools in 2009.2 simplify a variety of replication-enabled
solutions for Perforce, including High Availability, Disaster Recovery, and
Read-only replicas.
The solutions described herein are being implemented and used in the real world every
day. They support a range of critical business tasks and can be tailored to almost every
budget. The tools described here are all documented, and Perforce Technical Support can
help you to use them. Perforce also has related resources in its knowledge base. Perforce
Consulting is also available to implement its SDP and to provide hands-on design and
implementation of a solution customized for your environment.
9 References & Resources
Wikipedia was used as a resource for common definitions and illustrations for HA, DR,
and DRBD.
http://en.wikipedia.org/wiki/DRBD
Perforce Metadata Replication: http://kb.perforce.com/?article=1099
Relevant Perforce Documentation on replication:
http://www.perforce.com/perforce/doc.092/manuals/p4sag/10_replication.html#1062591
P4toDB: http://kb.perforce.com/article/1172