2. Quick Start Guide
This chapter provides a brief summary of how to build and run the SRW Application. The steps will run most smoothly on Level 1 systems. Users should expect to reference other chapters of this User’s Guide, particularly Chapter 4: Building the SRW App and Chapter 5: Running the SRW App, for additional explanations regarding each step.
2.1. Install the HPC-Stack
SRW App users who are not working on a Level 1 platform will need to install the prerequisite software stack via HPC-Stack prior to building the SRW App on a new machine. Users can find installation instructions in the HPC-Stack documentation. The steps will vary slightly depending on the user's platform. However, in all cases, the process involves (1) cloning the HPC-Stack repository, (2) creating and entering a build directory, and (3) invoking make commands to build the stack. This process will create a number of modulefiles and scripts that will be used for setting up the build environment for the SRW App.
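As a rough sketch, the three HPC-Stack steps above might look like the commands below on a generic Linux system. The repository URL is the real NOAA-EMC HPC-Stack location, but the build-directory layout and make targets shown here are illustrative placeholders; the exact invocations differ by platform and are given in the HPC-Stack documentation.

```shell
# Sketch only: directory layout and make targets are hypothetical and
# vary by platform -- consult the HPC-Stack documentation for specifics.
git clone https://github.com/NOAA-EMC/hpc-stack.git   # (1) clone the HPC-Stack repository
cd hpc-stack
mkdir build && cd build                               # (2) create and enter a build directory
make setup && make build                              # (3) illustrative make invocations
```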
Once the HPC-Stack has been successfully installed, users can move on to building the SRW Application.
2.2. Building and Running the UFS SRW Application
For a detailed explanation of how to build and run the SRW App on any supported system, see Chapter 4: Building the SRW App and Chapter 5: Running the SRW App. Figure 4.1 outlines the steps of the build process. The overall procedure for generating an experiment is shown in Figure 5.1, with the scripts to generate and run the workflow shown in red. An overview of the required steps appears below. However, users can expect to access other referenced sections of this User’s Guide for more detail.
Clone the SRW App from GitHub:

git clone -b release/public-v2.1.0 https://github.com/ufs-community/ufs-srweather-app.git
Check out the external repositories:

cd ufs-srweather-app
./manage_externals/checkout_externals
Set up the build environment and build the executables:

./devbuild.sh --platform=<machine_name>
Here, <machine_name> is replaced with the name of the user's platform/system; valid machine names are listed in Section 4.4.1.
For additional details, see Section 4.4.1, or view Section 4.4.2 to try the CMake build approach instead.
Users on a Level 2-4 system must download and stage data (both the fix files and the IC/LBC files) according to the instructions in Section 7.3. Standard data locations for Level 1 systems appear in Table 5.1.
Load the python environment for the regional workflow. Users on Level 2-4 systems will need to use one of the existing wflow_<platform> modulefiles (e.g., wflow_macos) and adapt it to their system. Then, run:

source <path/to/etc/lmod-setup.sh/or/lmod-setup.csh> <platform>
module use <path/to/modulefiles>
module load wflow_<platform>
where <platform> refers to a valid machine name (see Section 9.1). After loading the workflow, users should follow the instructions printed to the console. For example, if the output says:

Please do the following to activate conda:
> conda activate regional_workflow
then the user should run conda activate regional_workflow to activate the regional workflow environment.
If users source the lmod-setup file on a system that does not need it, it will not cause any problems (it will simply do a basic module purge).
Configure the experiment:

cd ush
cp config.community.yaml config.yaml
Users will need to open the config.yaml file and adjust the experiment parameters in it to suit the needs of their experiment (e.g., date, grid, physics suite). At a minimum, users need to modify the MACHINE parameter. In most cases, users will need to specify the ACCOUNT parameter and the location of the experiment data (see Section 5.1 for Level 1 system default locations). Additional changes may be required based on the system and experiment. More detailed guidance is available in Chapter 5. Parameters and valid values are listed in Chapter 9.
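For illustration, the minimal config.yaml edits described above might look like the fragment below. MACHINE and ACCOUNT are the real parameter names mentioned in this guide; the section name and the values shown are placeholders based on the community configuration layout, and users should verify them against the config.community.yaml file on their own system.

```yaml
# Illustrative fragment only -- verify section names and values against
# config.community.yaml and the parameter reference in Chapter 9.
user:
  MACHINE: hera          # replace with the user's platform name
  ACCOUNT: an_account    # replace with a valid account/project code
```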
To determine whether the config.yaml file adjustments are valid, users can run:

./config_utils.py -c $PWD/config.yaml -v $PWD/config_defaults.yaml

A valid config.yaml file will output a SUCCESS message, while a config.yaml file with problems will output a FAILURE message describing the problem.
Generate the experiment workflow:

./generate_FV3LAM_wflow.py
Run the regional workflow. There are several methods available for this step, which are discussed in Section 5.4. One possible method is summarized below. It requires the Rocoto Workflow Manager.

cd $EXPTDIR
./launch_FV3LAM_wflow.sh
To (re)launch the workflow and check the experiment's progress:

./launch_FV3LAM_wflow.sh; tail -n 40 log.launch_FV3LAM_wflow
The workflow must be relaunched regularly and repeatedly until the log output includes a Workflow status: SUCCESS message indicating that the experiment has finished.
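Because the launch script must be called repeatedly, one common approach (discussed in Section 5.4) is to automate the relaunch with a cron job. The crontab entry below is a sketch: the 3-minute interval is only an example, and <EXPTDIR> must be replaced with the full path to the experiment directory.

```shell
# Illustrative crontab entry: rerun the launch script every 3 minutes.
# Replace <EXPTDIR> with the experiment directory's full path.
*/3 * * * * cd <EXPTDIR> && ./launch_FV3LAM_wflow.sh called_from_cron="TRUE"
```

Once the workflow reaches Workflow status: SUCCESS, the cron entry can be removed.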
Optionally, users may configure their own grid, instead of using a predefined grid, and/or plot the output of their experiment(s).