nnlojet is hosted by Hepforge, IPPP Durham

NNLOJET manual


6 Using NNLOJET

Following the installation steps outlined in Sect. 3.2, both the core executable (NNLOJET) and the Python workflow (nnlojet-run) are installed in the bin directory of the installation path. The main user-facing interface to NNLOJET is the workflow script nnlojet-run , which provides an automated way to set up and run calculations and is described in Sect. 6.1. Direct execution of the core executable is possible but requires more in-depth knowledge of the inner workings of the program and is therefore only recommended for expert users. Section 6.2 is consequently limited to a description of the output files that are produced by the core NNLOJET executable and refrains from a detailed guide to a workflow based on it. Understanding the output files is nevertheless important for inspecting individual runs submitted by the Python workflow in case issues are encountered. The main results are provided in the form of histogram files, which are described in Sect. 6.3.

6.1 Job submission workflow

Every NNLOJET calculation (or run) is associated with a directory that contains all the necessary input and configuration files as well as the intermediate output and the final result. The nnlojet-run workflow is designed to manage the creation of the run directory, the execution of the NNLOJET core executable, and the retrieval of the final results in a user-friendly way. To this end, the workflow provides four separate sub-commands: init , config , submit , and finalize , which are described in the following. A short description of the commands and available options can be displayed by adding the --help command line argument:

$ nnlojet-run --help
$ nnlojet-run <sub-command> --help

6.1.1  init

The init sub-command initializes a new run directory with the necessary input and configuration files. It takes an NNLOJET runcard as input:

$ nnlojet-run [--exe <exe-path>] init [-o <run-path>] path/to/runcard

The command will create a new run directory at the current path, named after the <run-name> specified in the input runcard. The option -o <run-path> can be used to specify a different location where the run directory should be initialized. The input runcard is used to generate a template runcard for the calculation, which is saved as the file template.run in the run directory. The NNLOJET executable is automatically searched for in the system path but can also be set manually with the optional --exe <exe-path> argument.

The init command will first display the relevant references for the selected process that should be cited in a scientific publication using results obtained from the NNLOJET calculation; these references will also be saved as .tex and .bib files in the run directory. Subsequently, the user will be prompted to set several configuration options that are necessary for the execution:

  • policy = local|htcondor|slurm : specifies the execution policy for how the NNLOJET calculations should be submitted. Currently supported targets are local execution on the current machine (local) or submission on a cluster using the job scheduling systems slurm or htcondor . The cluster submission requires additional settings that are documented below. (default: local)

  • order = lo|nlo|nlo_only|nnlo|nnlo_only : specifies the perturbative order of the calculation to be performed. The calculation can also be restricted to a single coefficient; the full result decomposes as nnlo = lo + nlo_only + nnlo_only . (default: nnlo)

  • target-rel-acc = DBLE : specifies the desired relative accuracy of the total cross section evaluated with the chosen fiducial cuts and for the specified order . While a REWEIGHT function can be specified to bias the phase-space sampling, the target accuracy is always defined with respect to the unweighted cross section. (default: 0.01)

  • job-max-runtime = STR|DBLE : specifies the maximum runtime for a single NNLOJET execution or a job on the cluster. The format is either a time interval specified with units (s, m, h, d, w), e.g. 1h 30m , or a floating point number in seconds. (default: 1h)

  • job-fill-max-runtime = yes|no : specifies if the workflow should attempt to exhaust the maximum runtime that was set for each NNLOJET execution. This option is intended to fully utilize the wallclock time limit of cluster queues. (default: no)

  • jobs-max-total = INT : specifies the maximum number of NNLOJET executions that can be run in total. The workflow will terminate once either the target accuracy is reached or this limit on the total number of jobs is exhausted. (default: 100)

  • jobs-max-concurrent = INT : specifies the maximum number of NNLOJET executions that can be run simultaneously. If cluster submission is selected, this option specifies the maximum number of jobs that are simultaneously submitted to the cluster. (default: 10)
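The time-interval syntax accepted by job-max-runtime can be mimicked with a small parser; this is an independent sketch of the documented format (parse_runtime is a hypothetical helper, not part of the workflow):

```python
import re

# Units documented for the runtime format: s, m, h, d, w
_UNITS = {"s": 1, "m": 60, "h": 3600, "d": 86400, "w": 604800}

def parse_runtime(value):
    """Parse a '1h 30m'-style interval or a bare number of seconds."""
    try:
        return float(value)  # plain floating point number: seconds
    except ValueError:
        total = 0.0
        for num, unit in re.findall(r"([0-9.]+)\s*([smhdw])", value):
            total += float(num) * _UNITS[unit]
        return total
```

With this reading, 1h 30m corresponds to 5400 seconds.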

If a cluster submission is selected, the user will be prompted to provide additional settings required for an execution on the respective cluster type:

  • poll-time = STR|DBLE : specifies the interval in which the scheduler is queried for an update on the status of the submitted jobs. The format is either a time interval specified with units (s, m, h, d, w), e.g. 1h 30m , or a floating point number in seconds. (default: 10% of the job-max-runtime)

  • submission-template = INT (index from a list displayed to the user): the job submission on a cluster relies on a submission script for the job scheduler. The workflow provides a set of built-in submission templates that can be selected by the user. The chosen template will be copied as a file <cluster>.template to the run directory and can be modified to any specific needs. Note that it is up to the user to ensure that the time limits set by the submission script, e.g. as imposed by a wallclock time limit of special queues of the cluster, are consistent with the job-max-runtime setting.

All configurations are stored in the config.json file inside the run directory, however, it is strongly advised against modifying this file manually. Instead, all interactions with the run directory should proceed through the nnlojet-run interface. The configuration of the run can be changed at any time using the config sub-command described below.

6.1.2  submit

With a properly initialized run directory at <run-path> , the main calculation can be triggered using the submit sub-command:

$ nnlojet-run submit <run-path>

This calculation will employ the options that were set during the init step (or alternatively overwritten using the config sub-command). Some of the options can also be overridden temporarily for a single submission by passing the desired settings as parameters, e.g.

$ nnlojet-run submit <run-path> --job-max-runtime 1h30m \
--jobs-max-total 10 --target-rel-acc 1e-3

A complete list of option overrides with a short description is provided via

$ nnlojet-run submit --help

Once the submit sub-command is executed, the workflow is started and the process remains active for the entirety of the calculation, spawning sub-processes to execute the core NNLOJET program. The full calculation is divided into separate components, represented by the individual lines of Eqs. (6) and (7), and further decomposed into independent partonic luminosities.

The workflow proceeds by first adapting the phase-space grids in a warmup phase, followed by the actual production phase as described in Sect. 5.3.2. The warmup phase proceeds in steps, iteratively increasing the invested statistics to gradually refine the adaptation until either a satisfactory grid is obtained or the specified maximum runtime is reached.

The production phase optimizes the allocation of computing resources among the different parts of the calculation to minimize the final uncertainty. To this end, an estimate of the error and of the runtime per event is needed for each piece, which is initially determined from a single low-statistics pre-production run. As both the warmup and the pre-production stages are necessary to proceed to the main calculation, the associated jobs do not count towards the maximum number of jobs set for the submission, and this part of the computation will not attempt to exhaust the available resources. During the main production stage, on the other hand, the workflow continuously dispatches new jobs in an attempt to exhaust the specified concurrent resource limits and accumulates results until either the target accuracy is reached or the maximum number of jobs is exhausted.

Upon termination of all jobs, a summary of the calculation is given by printing the final cross section with its associated uncertainty. In case the target accuracy was not reached with the allocated resources, an estimate of the number of additional jobs required to reach the requested accuracy is provided.
The final results of the calculation are saved to <run-path>/results/final .
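The resource-allocation step can be made concrete with the textbook result for distributing a fixed compute budget: assuming independent parts \(i\) with variance \(\sigma_i^2\) per job and runtime \(t_i\) per job, minimising the combined variance at fixed total runtime \(T\) gives (a sketch of the standard Lagrange-multiplier argument; the workflow's actual allocation strategy may differ in detail)

\[ \min_{\{N_i\}} \sum_i \frac{\sigma_i^2}{N_i} \quad \text{subject to} \quad \sum_i N_i\, t_i = T \qquad \Longrightarrow \qquad N_i \propto \frac{\sigma_i}{\sqrt{t_i}} \,, \]

i.e. parts with large statistical errors receive more jobs, damped by the square root of their cost per job.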

(image)

Figure 1: Example of the graphical interface to monitor the status of the calculation. Each column (LO, R, V, RR, RV, VV) represents an active component of the calculation. The indexed rows represent independent luminosity channels. In each cell of the table, the phase of the calculation is indicated: warmup ("WRM") or production ("PRD"). The number \(n_A\) of active jobs is indicated as A[\(n_A\)] , which includes both pending and actively running jobs, while the number \(n_D\) of completed jobs is indicated as D[\(n_D\)] . In case of unsuccessful NNLOJET executions, the cell additionally includes an entry F[\(n_F\)] , indicating the number of failed jobs \(n_F\).

During the workflow execution, a summary table together with logging information will be displayed to the user, see Fig. 1, which is updated in real-time as the jobs are submitted and completed.

As the submit command continuously monitors running jobs and dispatches new ones, it is preferable to keep it running for the entire duration of the calculation. If necessary, it can be stopped by pressing Ctrl+C and re-launched at a later time, in which case the calculation will resume from the last saved state. Jobs that were already submitted to the cluster will continue to run. If the previous execution terminated abnormally, e.g. due to a lost ssh connection, the workflow will automatically attempt to recover the state and prompt the user on how to proceed with the resurrection of previously active jobs.

In between executions of submit , the settings of a run can be modified, see the config sub-command below. This is especially useful to tighten the target accuracy, or to increase the maximum number of jobs in case too few were initially specified. It is also possible to change the perturbative order in this way, e.g. to compute the NNLO corrections for a run that was previously calculated to NLO.

6.1.3  finalize

All completed jobs inside a run directory <run-path> can be collected and merged into a final result using the finalize sub-command:

$ nnlojet-run finalize <run-path>

It should be noted that the same combination procedure is also triggered at the end of the submit sub-command and it is therefore not necessary to explicitly execute a finalize after a submit .

The finalize sub-command allows the user to manually trigger a re-combination of the results with changed internal parameters of the merging algorithm. The main purpose of the merging algorithm is to combine the partial results of the different parts of the calculation into a final prediction and to mitigate the impact of outliers. Potential issues pertaining to the treatment of outliers, as well as the approach employed in our implementation, are presented in Ref.  [104]. The underlying algorithm is controlled by a set of parameters that have sensible defaults but can also be overridden by the user. This is useful to check whether the combine step potentially introduces a systematic bias, as well as to optimize the combination to obtain the best possible result from the raw data. The merging proceeds in steps with a dynamic termination condition, which are applied to each histogram separately on a bin-by-bin basis:

  • 1. trim: apply an outlier-rejection procedure based on the inter-quartile-range to discard data points which are identified as severe outliers;

  • 2. \(k\)-scan: on the trimmed dataset, merge pairs of (pseudo-)jobs in an unweighted manner into a new pseudo-job, which is statistically equivalent to running a single job with the combined statistics;

  • 3. weighted combination: all pseudo-jobs are merged using a weighted average to further suppress outliers and the result is stored;

  • 4. repeat steps 2 & 3 until nsteps results from weighted combinations are accumulated;

  • 5. termination: check for a plateau in the last nsteps results; if no plateau is found, go back to step 2.

The core strategy is to combine as many individual jobs into pseudo-jobs as necessary to ensure that the resulting statistical uncertainty for the pseudo-job is reliable and therefore the weighted average can be safely taken in the next step to further suppress the impact of outliers.
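For a single bin, steps 2 and 3 can be sketched in a few lines of Python; this is a simplified illustration of the stated procedure using standard Monte Carlo error propagation, not the actual NNLOJET implementation:

```python
import math

def merge_pair(a, b):
    """Unweighted merge of two (value, error) pseudo-jobs of equal statistics.

    Statistically equivalent to one job with the combined statistics:
    the means are averaged, the errors add in quadrature and are halved.
    """
    (va, ea), (vb, eb) = a, b
    return (0.5 * (va + vb), 0.5 * math.sqrt(ea**2 + eb**2))

def weighted_combination(jobs):
    """Inverse-variance weighted average of (value, error) pseudo-jobs."""
    weights = [1.0 / e**2 for _, e in jobs]
    norm = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, jobs)) / norm
    return (value, math.sqrt(1.0 / norm))
```

The unweighted pair merge avoids the bias that inverse-variance weights can introduce when the individual error estimates are themselves unreliable; only once the pseudo-jobs carry enough statistics is the weighted average taken.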

The parameters controlling the trimming step are:

  • trim-threshold = DBLE : a threshold value for the distance from the median in units of the inter-quartile range, beyond which a data point is considered an outlier and discarded. Larger values for this parameter reduce the number of data points that are discarded. (default = 3.5)

  • trim-max-fraction = DBLE : a safeguard on the maximal fraction of data points that are allowed to be trimmed away. For each bin in violation of this condition, its threshold value is dynamically increased until the ratio of trimmed data in that bin falls below this value. (default = 0.01)

The termination condition of identifying a plateau (the \(k\)-scan) is controlled by the following parameters:

  • k-scan-nsteps = INT : the number of pair-combination steps across which the plateau is checked. (default: 2)

  • k-scan-maxdev-steps = DBLE : the tolerance for the plateau condition in units of the standard deviation. Across the last k-scan-nsteps steps of the \(k\)-scan, require that all points are mutually compatible within this tolerance. Note that the default value is smaller than 1 as the individual points from the \(k\)-scan are determined from the same dataset and thus are strongly correlated. (default = 0.5)
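The plateau condition can be sketched as a pairwise-compatibility check over the last few \(k\)-scan results; the exact compatibility measure used internally is not specified here, and taking the larger of the two errors is an assumption of this sketch:

```python
def is_plateau(results, nsteps=2, maxdev=0.5):
    """Check whether the last `nsteps` (value, error) results of the
    k-scan agree pairwise within `maxdev` standard deviations."""
    last = results[-nsteps:]
    if len(last) < nsteps:
        return False
    for i, (vi, ei) in enumerate(last):
        for vj, ej in last[i + 1:]:
            if abs(vi - vj) > maxdev * max(ei, ej):
                return False
    return True
```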

The default values for these parameters can be set using the config sub-command described below. Alternatively, these options can be overridden temporarily for a single finalize step by passing the desired settings as options, e.g.

$ nnlojet-run finalize <run-path> --trim-threshold 5 \
--k-scan-nsteps 3 --k-scan-maxdev-steps 0.1

which is documented in the corresponding help output: nnlojet-run finalize --help .

6.1.4  config

All default configuration settings can be changed at a later time using the config sub-command:

$ nnlojet-run config <run-path>

This will open an interactive prompt that allows the user to change the configuration settings that were set during the init step. The merge settings used for the finalize sub-command can instead be overwritten using:

$ nnlojet-run config <run-path> --merge

In addition, advanced settings of the workflow can be viewed and set with

$ nnlojet-run config <run-path> --advanced

which provides, e.g. the possibility to change the seed offset of the workflow.

6.1.5 Run directory

The run directory is structured as follows:

  • NNLOJET_references_<PROC>.[bib|tex] : the references to be cited in publications;

  • config.json : contains all the configuration settings for the run;

  • db.sqlite : a database file that contains the status of the run (process and job information);

  • log.sqlite : a database file that contains the logging information of the submission;

  • template.run : the template runcard for the calculation;

  • raw directory: contains the raw output of the NNLOJET runs saved in two subdirectories (warmup , production) with the individual job results;

  • results directory: contains the final results of the calculation;

    • final directory: contains the final results in the form of data files <order>.<obs>.dat for all complete orders;

    • merge directory: contains the result of the currently active order;

    • parts directory: contains all intermediate combined results, separately for each part of the calculation;

The file format of the histogram data files is described below in Sect. 6.3.

6.2 Direct execution

The direct execution of the core NNLOJET executable produces output files with the following naming conventions:

  • Log files: <proc>.<run>.s<seed>.log ;

  • Grid files (machine readable):
    <proc>.<run>.y<t-cut>.<cont> ;

  • Grid files (human readable):
    <proc>.<run>.y<t-cut>.<cont>_iterations.txt ;

  • Histogram files:
    <proc>.<run>.<cont>.<obs>.s<seed>.dat ;

where <proc> denotes the process name, <run> the run name, <seed> the seed number, <cont> the contribution (LO, R, V, …), and <obs> the observable name.

6.3 Histogram data files

The data files produced by NNLOJET are in a simple text-based format with units of \([\mathrm {fb}]\) for cross sections and \([\mathrm {GeV}]\) for dimensionful quantities. The first \(n_x\) columns of the files are reserved for the binning information, if necessary:

  • cross section data files have no bins and thus \(n_x = 0\);

  • differential distributions have \(n_x = 3\) for the lower edge, centre, and upper edge of each bin;

  • cumulant distributions have \(n_x=1\) for the integration limit for the cumulant.

The remaining columns are composed of a series of pairs of numbers that represent the value and the associated Monte Carlo integration uncertainty. These pairs are first written out for all active scales, in the order specified in the SCALES block. Depending on the output_type specified for the histograms, this pattern is optionally repeated for the separate partonic luminosities.

A description of the individual columns is also provided as a comment line starting with #labels . In addition, the cross section contribution from events that fall outside of the chosen histogram range is collected and written out in the #overflow line, following the same column structure.
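As an illustration, the documented column layout can be parsed with a short helper; read_histogram is a hypothetical name, and the file is assumed to be whitespace-separated as described above:

```python
def read_histogram(lines, nx=3):
    """Parse NNLOJET histogram data from an iterable of lines.

    nx is the number of leading binning columns (3 for differential
    distributions: lower edge, centre, upper edge; 1 for cumulants;
    0 for plain cross sections).  Returns (labels, rows), where each
    row is (bin_columns, [(value, error), ...]) with one pair per
    scale (and, depending on output_type, per partonic luminosity).
    """
    labels, rows = [], []
    for line in lines:
        if line.startswith("#labels"):
            labels = line.split()[1:]
        elif line.startswith("#") or not line.strip():
            continue  # other comments, e.g. the #overflow line
        else:
            cols = [float(x) for x in line.split()]
            pairs = list(zip(cols[nx::2], cols[nx + 1::2]))
            rows.append((cols[:nx], pairs))
    return labels, rows
```

For example, read_histogram(open(path)) with the path of a differential-distribution file yields the column labels together with one (bin edges, pairs) entry per bin.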