Trigger basics
This section contains information for working with triggers. Detailed information about implementing each type of trigger is found in the sections that follow. The information in this section applies to all types of triggers.
- Communication between a trigger and the server describes how to select the method used for communication and how to parse dictionary input.
- Storing triggers in the depot describes how to format depot paths if you want to run a trigger from the depot.
- Using multiple triggers explains how Helix Core Server interprets and processes the trigger table when it includes multiple trigger definitions.
- Writing triggers to support multiple Helix Core Servers describes how you can write a trigger so that it is portable across Helix Core Server installations.
- Triggers and multi-server architecture explains the issues you must address when locating triggers on replicas.
For information about debugging triggers, see the Perforce Knowledge Base article, Debugging Triggers.
Communication between a trigger and the server
Triggers can communicate with the server in one of two ways:
- by using the variables described in Trigger script variables, or
- by using a dictionary of key/value pairs accessed via STDIN and STDOUT.
The setting of the triggers.io configurable determines which method is used. The method determines the content of STDIN and STDOUT and also affects how trigger failure is handled. The following table summarizes the effect of these settings. Client refers to the client application (such as Swarm, P4V, or P4) that is connected to the server where the trigger executes.
|  | triggers.io = 0 (default) | triggers.io = 1 |
|---|---|---|
| Trigger succeeds | The trigger communicates with the server using trigger variables. STDIN is used only by archive or authentication triggers: it is the file content for an archive trigger, and it is the password for an authentication trigger. The trigger’s STDOUT is sent as an unadorned message to the client for all triggers except archive triggers; for archive triggers, the command’s standard output is the file content. The trigger should exit with a zero value. | The trigger communicates with the server using STDIN and STDOUT. STDIN is a textual dictionary of name-value pairs of all the trigger variables except for %peerhost% and %clienthost% (see Special case below). This setting does not affect STDIN values for archive and authentication triggers. The trigger should exit with a zero value. |
| Trigger fails | The trigger’s STDOUT and STDERR are sent to the client as the text of a trigger failure error message. The trigger should exit with a non-zero value. | STDOUT is a textual dictionary that contains error information; STDERR is merged with STDOUT. Failure indicates that the trigger script can’t be run, that the output dictionary includes a failure message, or that the output is mis-formatted. The execution error is logged by the server, and the server sends the client the information specified by STDOUT. If no dictionary is provided, the server sends the client a generic message that something has gone wrong. |
The dictionary format is a sequence of lines containing key:value pairs. Any non-printable characters must be percent-encoded. Data is expected to be UTF8-encoded on unicode-enabled servers. Here are some examples of how the %client%, %clientprog%, %command%, and %user% variables would be represented in the dictionary of key:value pairs:
client:mgonzales-2
clientprog:P4/LINUX45X86_128/2017.9.MAIN/1773263782 (2022/OCT/09).
command:user-dwim
user:mgonzales
The example above shows only a part of the dictionary. When variables are passed in this way, all the variables described in Trigger script variables are passed in STDIN, and the trigger script must read all of STDIN, even if the script only references some of these variables. If the script does not read all of STDIN, the script will fail and the server will see errors like this:
write: yourTriggerScript: Broken pipe
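For example, here is a minimal sketch of a trigger script for use with triggers.io set to 1; the policy it enforces and the user name "builder" are made up for illustration. Note that it reads and parses all of STDIN even though it references only two of the keys:

use strict;
use warnings;

# Read and parse every line of STDIN, even though only two keys are used below.
my %dict = map { /(.*):(.*)/ } <STDIN>;

# Hypothetical policy: only the user "builder" may run this command.
if (!defined $dict{user} or $dict{user} ne 'builder') {
    print "action:fail\nmessage:Only builder may run $dict{command}.\n";
    exit 0;
}
print "action:pass\n";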
The trigger must send back a dictionary to the server via STDOUT. The dictionary must at a minimum contain an action with an optional message. The action is either pass or fail. Non-printable characters must be percent-encoded. For example:

action:fail
message:action failed!
Malformed trigger response dictionaries and execution problems are reported to the client with a generic error. A detailed message is recorded in the server log.
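To keep the response well formed, percent-encode any non-printable characters in the message. For example, here is a sketch of sending a two-line rejection message (the wording is made up); the embedded newline becomes %0A:

use URI::Escape;

# The raw message contains a newline, which must not appear literally
# in a dictionary value.
my $msg = "Submit rejected.\nPlease add a job to the changelist.";
print "action:fail\nmessage:" . uri_escape($msg) . "\n";

# The resulting dictionary line is:
# message:Submit%20rejected.%0APlease%20add%20a%20job%20to%20the%20changelist.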
Special case
Generally, the two ways of communicating with the server are mutually exclusive. However, if you want to reference the %peerhost% or %clienthost% variables, you must specify them on the command line even if you set triggers.io to 1. These variables are expensive to pass. For their values to be included in the dictionary, you must specify one or both on the command line.
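For example, with triggers.io set to 1, a trigger definition like the following (the trigger name, path, and script name are hypothetical) makes clienthost appear in the dictionary that the script reads from STDIN:

hostcheck change-submit //depot/... "/usr/bin/hostcheck.pl %clienthost%"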
The following is a sample Perl program that echoes its input dictionary to the user:
use strict;
use warnings FATAL => "all";
use open qw/ :std :utf8 /;
use Data::Dumper;
use URI::Escape;

$Data::Dumper::Quotekeys = 0;
$Data::Dumper::Sortkeys = 1;

my %keys = map { /(.*):(.*)/ } <STDIN>;

print "action:pass\nmessage:" . uri_escape Dumper \%keys;
The listing begins with some code that sets Perl up for basic Unicode support and adds some error handling. The gist of the program is

my %keys = map { /(.*):(.*)/ } <STDIN>;

because <STDIN> is a file handle that is applied to the map{}, where the map takes one line of input at a time and runs the function between the map’s {}. The expression (.*):(.*) is a regular expression with a pair of capture groups split by the colon. No key the server sends has a colon in it, so the first capture group matches the key and the second matches the value. Because most non-printable characters (like newline) are percent-encoded in the dictionary, a trigger can expect every key/value pair to be a single line; therefore, the single regular expression can extract both the key and the value. The return values of the regular expression become the return values of the map’s function, which is a list of strings. When a list is assigned to a hash, Perl treats it as a list of key/value pairs; because we know that this is an even-length list, this works. The print command builds the result dictionary and sends it to the server. The pass action tells the server to let the command continue, and the message to send the user is the formatted hash of the trigger’s input dictionary.
Exceptions
Setting triggers.io to 1 does not affect authentication and archive triggers; these behave as if triggers.io were set to 0, no matter what the actual setting is.
Compatibility with old triggers
When you set the triggers.io configurable to 1, it affects how the server runs all scripts, both old and new. If you don’t want to rewrite your old trigger scripts, you can insert a shim between the trigger table and the old trigger script that collects the trigger output and formats it as the server now expects. That is, the shim runs the old trigger, captures its output and return code, and then emits the appropriate dictionary back to the server. The following trigger table entry and Perl script illustrate such a shim:
t form-out label "perl shim.pl original_trigger.exe orig_args..."
The shim.pl program might look like this:
use strict;
use warnings FATAL => "all";
use open qw/ :std :utf8 /;
use URI::Escape;
use IPC::Run3;

# Drain the dictionary the server writes to STDIN so the server does not
# see a broken pipe.
@_ = <STDIN>;

# Run the original trigger, merging its STDOUT and STDERR into $_.
run3 \@ARGV, undef, \$_, \$_;

# Report pass or fail based on the original trigger's exit status, and
# percent-encode its output as the message.
print 'action:' . ($? ? 'fail' : 'pass') . "\nmessage:" . uri_escape $_;
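Note that IPC::Run3 and URI::Escape are CPAN modules rather than part of the core Perl distribution, so they must be installed wherever the shim runs.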
Storing triggers in the depot
You can store a trigger in the depot. This has two advantages:
- It allows you to version the trigger and be able to access prior versions if needed.
- In a multi-server deployment architecture, it enables Helix Core Server to propagate the latest trigger script to every replica without your having to manually update the file in the filesystem of each server.
Triggers that run from the depot do not work on replicas that are metadata-only. See Server options to control metadata and depot access.
When you store a trigger in the depot, you must specify the trigger name in a special way in the command field of the trigger definition: enclose the depot path of the file containing the trigger in % signs. If you need to pass additional variables to the trigger, add them in the command field as you usually do. The server creates a temporary file that holds the contents of the depot file you have specified in the command field. (Working with a temporary file is preferable for security reasons, and because depot files cannot generally be executed without some further processing.)
Multiple depot paths can be used to load multiple files out of the depot when the trigger executes. For example, the trigger script might require a configuration file that is stored next to the script in the depot. In the next trigger definition, two depot paths are provided:
lo form-out label "perl %//admin/validate.pl% %//admin/validate.conf%"
The depot file must already exist to be used as a trigger. All file types are acceptable if the content is available. For text types on unicode-enabled servers, the temporary file will be in UTF8. Protections on the depot script file must be such that only trusted users can see or write the content.
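For example, protections table entries along these lines restrict the //admin depot used in the examples above (the admins group is hypothetical), hiding the scripts from ordinary users while letting a trusted group maintain them; later lines override earlier ones, so the exclusion comes first:

Protections:
    list user * * -//admin/...
    write group admins * //admin/...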
If the file path name contains spaces, or if you need to pass additional parameters, enclose the command field in quotes.
In the next trigger definition, note that an interpreter is specified for the trigger. Specifying the interpreter is needed for those platforms where the operating system does not know how to run the trigger. For example, Windows does not know how to run .pl files.
lo form-out label "perl %//admin/validate.pl%"
In the next trigger definition, the depot path is quoted because of the revision number. The absence of an interpreter value implies that the operating system knows how to run the script directly.
lo form-out branch "%//depot/scripts/validate.exe#123%"
A depot file path name cannot contain reserved characters. This is because the hex replacement contains a percent sign, which is the terminator for a %var%. For example, no file named @myScript can be used, because it would be processed as %40myScript inside a var: %%40myScript%.
Using multiple triggers
Submit and form triggers are run in the order in which they appear in the triggers table. If you have multiple triggers of the same type that fire on the same path, each is run in the order in which it appears in the triggers table.
Example Multiple triggers on the same file
All *.c files must pass through the scripts check1.sh, check2.sh, and check3.sh:
Triggers:
    check1 change-submit //depot/src/*.c "/usr/bin/check1.sh %change%"
    check2 change-submit //depot/src/*.c "/usr/bin/check2.sh %change%"
    check3 change-submit //depot/src/*.c "/usr/bin/check3.sh %change%"
If any trigger fails (for instance, check1.sh), the submit fails immediately, and none of the subsequent triggers (that is, check2.sh and check3.sh) are called. Each time a trigger succeeds, the next matching trigger is run.
To link multiple file specifications to the same trigger (and trigger type), list the trigger multiple times in the trigger table.
Example Activating the same trigger for multiple filespecs
Triggers:
    bugcheck change-submit //depot/*.c "/usr/bin/check4.sh %change%"
    bugcheck change-submit //depot/*.h "/usr/bin/check4.sh %change%"
    bugcheck change-submit //depot/*.cpp "/usr/bin/check4.sh %change%"
In this case, the bugcheck trigger runs on the *.c files, the *.h files, and the *.cpp files.
Multiple submit triggers of different types that fire on the same path fire in the following order:
- change-submit triggers (fired on changelist submission, before file transmission)
- change-content triggers (fired after changelist submission and file transmission)
- change-commit triggers (fired on any automatic changelist renumbering by the server)
Similarly, form triggers of different types are fired in the following order (an example trigger table follows the list):
- form-out (form generation)
- form-in (changed form is transmitted to the server)
- form-save (validated form is ready for storage in the Helix Core Server database)
- form-delete (validated form is already stored in the Helix Core Server database)
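For example, the following hypothetical trigger table entries all act on the job form; when a user edits and saves a job, they fire in the order form-out, form-in, form-save:

Triggers:
    jobout form-out job "/usr/bin/job_defaults.pl %formfile%"
    jobin form-in job "/usr/bin/job_lint.pl %formfile%"
    jobsave form-save job "/usr/bin/job_audit.pl %formfile%"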
Exclusionary mappings for triggers
Example
trig1 change-submit //depot/... "trig.pl %changelist%"
trig1 change-submit -//depot/products/doc/... "trig.pl %changelist%"
Submitting a change in //depot/products/doc/... results in the trig.pl script NOT running. Submitting a change in any other directory runs the first instance of the trig1 script, that is, the script on the first trig1 line, and ignores the second trig1 line.
Rules for exclusionary mappings
- Exclusions must be LAST.
- The same script or action must be associated with each line of the same named trigger. When the path or file check falls through to a triggerable path or file, the script or action that runs is the one associated with the FIRST trigger line.
- If you want a submit to fail, associate an exit(1) return code with the successful match of the path or file (see the sketch after this list).
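For example, here is a minimal sketch of a script like trig.pl that rejects any submit whose path matches (assuming triggers.io is 0; the message text is made up):

#!/usr/bin/perl
# With triggers.io = 0, whatever the script prints is shown to the user,
# and a non-zero exit status fails the submit.
print "Submits to this path are locked. Contact the administrators.\n";
exit 1;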
Writing triggers to support multiple Helix Core Servers
To call the same trigger script from more than one Helix Core Server, use the %serverhost%, %serverip%, and %serverport% variables to make your trigger script more portable.
For instance, if you have a script that uses hardcoded port numbers and addresses…
#!/bin/sh
# Usage: jobcheck.sh changelist
CHANGE=$1
P4CMD="/usr/local/bin/p4 -p 192.168.0.12:1666"
$P4CMD describe -s $1 | grep "Jobs fixed...\n\n\t" > /dev/null
and you call it with the following line in the trigger table…
jc1 change-submit //depot/qa/... "jobcheck.sh %change%"
you can improve portability by changing the script:
#!/bin/sh
# Usage: jobcheck.sh changelist server:port
CHANGE=$1
P4PORT=$2
P4CMD="/usr/local/bin/p4 -p $P4PORT"
$P4CMD describe -s $1 | grep "Jobs fixed...\n\n\t" > /dev/null
and passing the server-specific data as an argument to the trigger script:
jc2 change-submit //depot/qa/... "jobcheck.sh %change% %serverport%"
Note that the %serverport% variable can contain a transport prefix: ssl, tcp6, or ssl6.
For a complete list of variables that apply for each trigger type, see Trigger script variables.
Triggers and multi-server architecture
Triggers installed on the master server must also exist on its replicas.
- The trigger definition is automatically propagated to all replicas.
- It is your responsibility to make sure that the program file that implements the trigger exists on every replica where the trigger might be activated. Its location on every replica must correspond to the location provided in the command field of the trigger definition. You can do this either by placing the trigger script in the same location in the file system on every server, or by storing the trigger script in the depot on the master or Commit Server and using depot syntax to specify the file name. In this case, the file is automatically propagated to all the replicas. See Storing triggers in the depot.
Replicas must provide the same execution environment for the triggers and the trigger bodies as the master. This typically includes trigger login tickets and trigger script runtimes, such as Perl or Python.
Edge Servers have triggers that fire between client and Edge Server, and between Edge Server and Commit Server. See Triggers and commit-edge.