As mentioned earlier, versioned files are stored in subdirectories beneath your Perforce server root, and can be restored directly from backups without any loss of integrity.
The files making up the Perforce database, on the other hand, may not have been in a state of transactional integrity at the moment they were copied to the system backups. Restoring the db.* files from system backups may result in an inconsistent database. The only way to guarantee the integrity of the database after it's been damaged is to reconstruct the db.* files from Perforce checkpoint and journal files.
Both the checkpoint and journal are text files, and have the same format. A checkpoint and, if available, its subsequent journal, can restore the Perforce database.
Because the information stored in the Perforce database is as irreplaceable as your versioned files, checkpointing and journaling are an integral part of administering a Perforce server, and should be performed regularly.
Versioned files are backed up separately from checkpoints. This means that a checkpoint does not contain the contents of versioned files, and as such, you cannot restore any versioned files from a checkpoint. You can, however, restore all changelists, labels, jobs, and so on, from a checkpoint.
To guarantee database integrity upon restoration, the checkpoint must be as old as, or older than, the versioned files in the depot. This means that the database should be checkpointed, and the checkpoint generation must be complete, before the backup of the versioned files starts.
Regular checkpointing is important to keep the journal from getting too long. Making a checkpoint immediately before backing up your system is good practice.
To create a checkpoint, invoke p4d with the -jc flag:
p4d -r root -jc
This command can be run while the Perforce server (p4d) is running.
To make the checkpoint, p4d locks the database and then dumps its contents to a file named checkpoint.n, where n is a sequence number. Before it unlocks the database, p4d also copies the journal file to a file named journal.n-1, and then truncates the current journal. This guarantees that the last checkpoint (checkpoint.n) combined with the current journal (journal) will always reflect the full contents of the database at the time the checkpoint was created.
(The sequence numbers reflect the roll-forward nature of the journal; to restore databases to older checkpoints, match the sequence numbers. That is, the database reflected by checkpoint.6 can be restored by restoring the database stored in checkpoint.5 and rolling forward the changes recorded in journal.5. In most cases, you're only interested in restoring the current database, which is reflected by the highest-numbered checkpoint.n rolled forward with the changes in the current journal.)
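For example (the sequence numbers here are hypothetical), the database reflected by checkpoint.6 could be recreated from the previous checkpoint and its journal by using the -jr (journal-restore) flag described later in this chapter:
p4d -r $P4ROOT -jr checkpoint.5 journal.5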
You can specify a prefix for the checkpoint and journal filenames by supplying an argument to the -jc option. That is, if you create a checkpoint with:
p4d -jc prefix
your checkpoint and journal files will be named prefix.ckp.n and prefix.jnl.n respectively, where prefix is as specified on the command line and n is a sequence number. If no prefix is specified, the default filenames checkpoint.n and journal.n are used.
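For example (the prefix nightly is hypothetical), running:
p4d -jc nightly
creates a checkpoint named nightly.ckp.n, saving and truncating the journal as described above.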
As of Release 99.2, if you need to take a checkpoint but are not on the machine running the Perforce server, you can create a checkpoint remotely with the p4 admin command. Use:
p4 admin checkpoint [prefix]
to take the checkpoint and optionally specify a prefix to the checkpoint and journal files. (You must be a Perforce superuser to use p4 admin.)
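For example (the host, port, and prefix shown are hypothetical), a superuser working from another machine could run:
p4 -p perforce.example.com:1666 admin checkpoint nightly
and the checkpoint would be created on the server machine, in the server root directory, just as if the corresponding p4d -jc command had been run there.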
A checkpoint file may be compressed, archived, or moved onto another disk. At that time or shortly thereafter, the files in the depot subdirectories should be archived as well.
When recovering, the checkpoint must be at least as old as the files in the depots; that is, the versioned files can be newer than the checkpoint, but not the other way around. As you might expect, the shorter this time gap, the better.
You can set up an automated program to create your checkpoints on a regular schedule. Be sure to always check the program's output to ensure that the checkpoint creation was successful. The first time you need a checkpoint is not a good time to discover your checkpoint program wasn't working.
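A minimal scheduling sketch, assuming a UNIX server and hypothetical paths for p4d, the server root, and the log file, is a crontab entry such as:
# Hypothetical example: checkpoint nightly at 2:00 a.m. and log the output for review.
0 2 * * * /usr/local/bin/p4d -r /usr/local/p4root -jc >> /var/log/p4checkpoint.log 2>&1
Review the log (or have cron mail the output to you) each morning to confirm that the checkpoint succeeded.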
If the checkpoint command itself fails, contact Perforce Technical Support immediately. Checkpoint failure is usually a symptom of a resource problem (disk space, permissions, etc.) that can put your database at risk if not handled correctly.
If you have Monday's checkpoint and the journal that was collected from then until Wednesday, those two files (Monday's checkpoint plus the accumulated journal) contain the same information as a checkpoint made Wednesday. If a disk crash were to cause corruption in your Perforce database on Wednesday at noon, for instance, you could still restore the database even though Wednesday's checkpoint hadn't yet been made.
To restore your database, you only need to keep the most recent journal file accessible, but it doesn't hurt to archive old journals with old checkpoints, should you ever need to restore to an older checkpoint.
If you installed Perforce without the installer (for an example of when you might do this, see "Multiple Perforce services under Windows" on page 115), you do not have to create an empty file named journal in order to enable journaling under a manual installation on Windows.
If P4JOURNAL is left unset (and no location is specified on the command line), the default location for the journal is $P4ROOT/journal.
Every checkpoint after your first starts a new journal file: the old journal is renamed to journal.n (or prefix.jnl.n for Release 99.2 or later), where n is a sequence number, and a new journal file is created in its place.
By default, the journal is written to the file journal in the server root directory (P4ROOT). Since there is no sure protection against disk crashes, the journal file and the Perforce server root should be located on different filesystems, ideally on different physical disk drives. The name and location of the journal can be changed by specifying the name of the journal file in the environment variable P4JOURNAL, or by providing the -J filename flag to p4d.
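For example (the paths are hypothetical), to keep the server root on one drive and the journal on another, you could either set P4JOURNAL before starting the server:
P4JOURNAL=/disk2/perforce/journal
export P4JOURNAL
p4d -r /disk1/p4root &
or name the journal explicitly on the command line:
p4d -r /disk1/p4root -J /disk2/perforce/journal &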
Warning! |
If you create a journal file with the -J filename flag, make sure that subsequent checkpoints use the same file, or the journal will not be properly renamed. |
Whether you use P4JOURNAL or the -J journalfile option to p4d, the journal file name can be provided either as an absolute path, or as a path relative to the server root.
For example, if your server writes its journal to /usr/local/perforce/journalfile, creating a checkpoint requires that you either checkpoint with:
p4d -r $P4ROOT -J /usr/local/perforce/journalfile -jc
Checkpointing to checkpoint.19...
or set P4JOURNAL to /usr/local/perforce/journalfile and use:
p4d -r $P4ROOT -jc
Checkpointing to checkpoint.19...
If your P4JOURNAL environment variable (or command-line specification) doesn't match the setting used when you started the Perforce server, the checkpoint is still created, but the journal is neither saved nor truncated. This is highly undesirable!
As of Release 99.2, Perforce also supports the AppleSingle file format for Macintosh. On the server, these files are stored in full and compressed, just like other binary files. Because they are stored in the Mac's AppleSingle format, these files can, if need be, be copied directly from the server root, uncompressed, and used as-is on a Macintosh.
Because Perforce uses compression in the depot files, a system administrator should not rely on the compressibility of the data when sizing backup media. Both text and binary files are either compressed by the Perforce server (denoted by the .gz suffix) before storage, or are stored uncompressed. At most installations, if any binary files in the depot subdirectories are being stored uncompressed, they were probably incompressible to begin with. (For example, many image, music, and video file formats are incompressible.)
While your versioned files can be newer than the data stored in your checkpoint, it is in your best interest to keep this difference to a minimum; in general, you'll want your backup script to back up your versioned files immediately after successfully completing a checkpoint.
As part of your backup procedure, verify the integrity of your server before making the checkpoint and backup:
p4 verify //...
p4 verify -u //...
You may wish to pass the -q (quiet) option to p4 verify. If called with the -q option, p4 verify will produce output only when errors are detected.
The first command (p4 verify) recomputes the MD5 signatures of all of your archived files and compares them with those stored when p4 verify -u was first run on them. It also ensures that all files known to Perforce actually exist in the depot subdirectories; a disk-full condition that results in corruption of the database or archived files during the day can be detected by examining the output of these commands.
The second command (p4 verify -u) updates the database with MD5 signatures for any new file revisions for which checksums have not yet been computed.
By running p4 verify -u before the backup, you ensure that you create and store checksums for any files new to the depot since your last backup, and that these checksums are stored as part of the backup you're about to take.
The use of p4 verify is optional, but it is good practice: not only does it allow you to spot any server corruption before a backup is made, it also gives you the ability, following a crash, to detect whether or not the files restored from your backups are in good condition.
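A minimal backup-script sketch along these lines, assuming a UNIX server, a shell with p4 configured to talk to it, and hypothetical paths throughout, might look like this:
#!/bin/sh
# Hypothetical paths; adjust for your site.
P4ROOT=/usr/local/p4root
export P4ROOT
LOG=/tmp/p4verify.log

# Update MD5 signatures for new revisions, then verify everything.
# With -q, p4 verify produces output only when it detects a problem.
p4 verify -q -u //... >  $LOG 2>&1
p4 verify -q    //... >> $LOG 2>&1
if [ -s $LOG ]; then
    echo "p4 verify reported problems; aborting backup"
    exit 1
fi

# Checkpoint the database; stop if the checkpoint fails.
p4d -r $P4ROOT -jc || exit 1

# Archive the new checkpoint, the numbered journals, and the versioned files.
tar cf /backups/p4backup.tar $P4ROOT/checkpoint.* $P4ROOT/journal.* $P4ROOT/depot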
Note |
If your site is very large, p4 verify may take some time to run, and you may wish to perform this step on a weekly basis rather than on a daily basis. For more about the p4 verify command, see "File verification by signature" on page 43. |
Next, make the checkpoint by invoking p4d with the -jc flag:
p4d -r $P4ROOT -jc
or (as of Release 99.2 or higher) by using:
p4 admin checkpoint
Because p4d locks the entire database when making the checkpoint, you do not generally have to stop your Perforce server during any part of the backup procedure.
If you are using the -z flag to create a gzip-compressed checkpoint, the checkpoint file will be named as specified. If you want the compressed checkpoint file to end in .gz, you should explicitly specify the .gz on the command line.
You can tell that the checkpoint command has completed successfully by examining the error code returned from p4d -jc, or by observing the truncation of the current journal file.
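For example (a sketch, assuming a POSIX shell and that P4ROOT is set), you can apply both checks from a script:
p4d -r $P4ROOT -jc || { echo "checkpoint failed"; exit 1; }
# After a successful checkpoint the live journal has just been truncated,
# so it should be empty or nearly so; list it to confirm.
ls -l $P4ROOT/journal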
(If you don't require an audit trail, you don't actually need to back up the journal. It is, however, usually good practice to do so.)
You never need to back up the db.* files. Your latest checkpoint and journal contain all the information necessary to re-create them. More significantly, a database restored from db.* files is not guaranteed to be in a state of transactional integrity. A database restored from a checkpoint is.
There are many ways in which systems can fail; while this guide cannot address all of them, it can at least provide a general guideline for recovery from the two most common situations: corruption of your Perforce database alone, with your versioned files unaffected, and loss or corruption of both your database and your versioned files.
If you suspect corruption in either your database or versioned files, contact Perforce technical support.
If only your database has been corrupted and your versioned files and current journal are intact, move the corrupt db.* files out of the way. They aren't actually used in the restoration process, but it's safe practice not to delete them until you're certain your restoration was successful. Then invoke p4d with the -jr (journal-restore) flag, specifying your most recent checkpoint and the current journal:
p4d -r $P4ROOT -jr checkpoint_file journal_file
This recovers the database as it existed when the last checkpoint was taken, and then applies the changes recorded in the journal file since the checkpoint was taken.
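For example (the sequence number 42 is hypothetical, and the command assumes it is run from the server root where both files live), if your newest checkpoint is checkpoint.42:
p4d -r $P4ROOT -jr checkpoint.42 journal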
After recovery, both your database and versioned files should reflect all changes made up to the time of the crash, and no data should have been lost.
If both your database and your versioned files have been lost or damaged, first restore your versioned files and the checkpoint taken at the time of that backup from your system backups, and move the corrupt db.* files out of the way. They aren't actually used in the restoration process, but it's safe practice not to delete them until you're certain your restoration was successful. Then invoke p4d with the -jr flag, specifying the checkpoint alone:
p4d -r $P4ROOT -jr checkpoint_file
This recovers the database as it existed when the last checkpoint was taken, but does not apply any of the changes in the journal file. (The -r $P4ROOT argument must precede the -jr flag.)
Recovering the database without rolling forward the changes in the journal file brings the database up to date only as of the time of your last backup. In this scenario, you do not want to apply the changes in the journal file, because the versioned files you restored reflect only the depot as it existed as of the last checkpoint.
Note that files submitted to the depot between the time of the last system backup and the disk crash will not be present in the depot.
Finally, check your versioned files against the restored database:
p4 verify -q //...
The -q (quiet) option tells p4 verify to produce output only on error conditions; ideally, this command should produce no output.
If any versioned files are reported as MISSING by the p4 verify command, you'll know that there is information in the database concerning files that didn't get restored. The usual cause is that you restored from a checkpoint and journal made after the backup of your versioned files (that is, your backup of the versioned files was older than the database).
If (as recommended) you've been using p4 verify -u to generate and store MD5 signatures for your versioned files as part of your backup routine, you can run p4 verify on the server after restoration to reassure yourself that your restoration was successful.
If you have any difficulties restoring your system after a crash, contact Perforce Technical Support for assistance.