2.1. Quick Start Guide

This chapter provides a brief summary of how to build and run the SRW Application. The steps will run most smoothly on Level 1 systems. Users should expect to reference other chapters of this User’s Guide, particularly Section 2.3: Building the SRW App and Section 2.4: Running the SRW App, for additional explanations regarding each step.

2.1.1. Install the HPC-Stack

SRW App users who are not working on a Level 1 platform will need to install the prerequisite software stack via HPC-Stack prior to building the SRW App on a new machine. Users can find installation instructions in the HPC-Stack documentation. The steps will vary slightly depending on the user’s platform. However, in all cases, the process involves (1) cloning the HPC-Stack repository, (2) reviewing/modifying the config/config_<system>.sh and stack/stack_<system>.yaml files, and (3) running the commands to build the stack. This process will create a number of modulefiles required for building the SRW App.
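Under the right configuration, the three steps sketch out roughly as follows. The script names setup_modules.sh and build_stack.sh come from the HPC-Stack repository; the exact flags vary by version and platform, so treat this as an outline rather than a recipe:

```
# (1) Clone the HPC-Stack repository
git clone https://github.com/NOAA-EMC/hpc-stack.git
cd hpc-stack

# (2) Review/modify config/config_<system>.sh and stack/stack_<system>.yaml
#     for the target machine

# (3) Build the stack (flags are documented in the HPC-Stack repository and
#     may differ by version)
./setup_modules.sh -p <prefix> -c config/config_<system>.sh
./build_stack.sh -p <prefix> -c config/config_<system>.sh -y stack/stack_<system>.yaml
```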

Once the HPC-Stack has been successfully installed, users can move on to building the SRW Application.


Although HPC-Stack is currently the fully supported software stack option, UFS applications are gradually shifting to spack-stack, a Spack-based method for installing UFS prerequisite software libraries. Users are encouraged to explore spack-stack in preparation for the upcoming shift in support from HPC-Stack to spack-stack.

2.1.2. Building and Running the UFS SRW Application

For a detailed explanation of how to build and run the SRW App on any supported system, see Section 2.3: Building the SRW App and Section 2.4: Running the SRW App. Figure 2.1 outlines the steps of the build process. The overall procedure for generating an experiment is shown in Figure 2.2, with the scripts to generate and run the workflow shown in red. An overview of the required steps appears below; however, users should expect to consult other referenced sections of this User’s Guide for more detail.

  1. Clone the SRW App from GitHub:

    git clone -b develop https://github.com/ufs-community/ufs-srweather-app.git
  2. Check out the external repositories:

    cd ufs-srweather-app
    ./manage_externals/checkout_externals
  3. Set up the build environment and build the executables:

    ./devbuild.sh --platform=<machine_name>

    where <machine_name> is replaced with the name of the user’s platform/system. Valid values include: cheyenne | gaea | hera | jet | linux | macos | noaacloud | orion | wcoss2

    For additional details, including the CMake build approach as an alternative, see Section 2.3: Building the SRW App.
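When the build is wrapped in a script, the machine name can be checked against the valid values listed above before invoking devbuild.sh; is_valid_platform is a hypothetical helper, not part of the SRW App.

```shell
# Hypothetical helper: check a machine name against the values that
# devbuild.sh --platform accepts (list taken from the step above).
is_valid_platform() {
  case "$1" in
    cheyenne|gaea|hera|jet|linux|macos|noaacloud|orion|wcoss2) return 0 ;;
    *) return 1 ;;
  esac
}
```

Typical use in a wrapper: `is_valid_platform hera && ./devbuild.sh --platform=hera`.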

  4. Users on a Level 2-4 system must download and stage data (both the fix files and the IC/LBC files) according to the instructions in Section 3.2.3. Standard data locations for Level 1 systems appear in Table 2.4.
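After staging data per step 4, a quick pre-flight check can confirm that the directories are in place; the paths below are placeholders for the locations chosen during staging.

```shell
# Hypothetical pre-flight check for Level 2-4 systems: verify that the staged
# data directories exist (paths are placeholders; substitute your actual
# fix-file and IC/LBC locations).
for dir in /path/to/fix_files /path/to/ics_lbcs; do
  [ -d "$dir" ] || echo "missing: $dir"
done
```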

  5. Load the Python environment for the workflow. Users on Level 2-4 systems will need to use one of the existing wflow_<platform> modulefiles (e.g., wflow_macos) and adapt it to their system. Then, run:

    module use /path/to/ufs-srweather-app/modulefiles
    module load wflow_<platform>

    where <platform> refers to a valid machine name (see Section 3.1.1). After loading the workflow, users should follow the instructions printed to the console. For example, if the output says:

    Please do the following to activate conda:
       > conda activate srw_app

    then the user should run conda activate srw_app to activate the workflow environment.
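The activation step in step 5 can be double-checked from a script; check_srw_env is a hypothetical helper, and it relies on conda's standard behavior of exporting CONDA_DEFAULT_ENV for the active environment.

```shell
# Hedged sketch (not part of the SRW App): confirm the srw_app conda
# environment is active before generating the experiment. Assumes conda
# sets CONDA_DEFAULT_ENV, which is standard conda behavior.
check_srw_env() {
  if [ "${CONDA_DEFAULT_ENV:-}" = "srw_app" ]; then
    echo "srw_app environment is active"
  else
    echo "srw_app not active -- run: conda activate srw_app" >&2
    return 1
  fi
}
```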

  6. Configure the experiment:

    Copy the contents of the sample experiment from config.community.yaml to config.yaml:

    cd ush
    cp config.community.yaml config.yaml

    Users will need to open the config.yaml file and adjust the experiment parameters in it to suit the needs of their experiment (e.g., date, grid, physics suite). At a minimum, users must modify the MACHINE parameter. In most cases, they will also need to specify the ACCOUNT parameter and the location of the experiment data (see Section 2.4.1 for Level 1 system default locations). Additional changes may be required depending on the system and experiment. Parameters and valid values are listed in Chapter 3.1.
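For scripted setups, the copy in step 6 can be followed by automated edits; set_config_value is a hypothetical helper (not part of the SRW App) that rewrites a `KEY: value` line with sed while preserving indentation. Hand-editing config.yaml, or using a YAML-aware tool, works just as well.

```shell
# Hypothetical helper: overwrite a "KEY: value" line in a YAML config while
# keeping its indentation. A plain-text sketch only -- values containing
# slashes or YAML structure need a real YAML tool instead.
set_config_value() {   # usage: set_config_value config.yaml MACHINE hera
  file=$1 key=$2 value=$3
  sed -i "s/^\([[:space:]]*\)${key}:.*/\1${key}: ${value}/" "$file"
}
```

For example, `set_config_value config.yaml MACHINE hera` sets the required MACHINE parameter.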

  7. Generate the experiment workflow from the ush directory:

    ./generate_FV3LAM_wflow.py

  8. Run the workflow from the experiment directory ($EXPTDIR). By default, the path to this directory is ${EXPT_BASEDIR}/${EXPT_SUBDIR}. There are several methods for running the workflow, which are discussed in Section 2.4.4. One possible method is summarized below. It requires the Rocoto Workflow Manager.

    cd $EXPTDIR

    To (re)launch the workflow and check the experiment’s progress:

    ./launch_FV3LAM_wflow.sh; tail -n 40 log.launch_FV3LAM_wflow

    The workflow must be relaunched regularly and repeatedly until the log output includes a Workflow status: SUCCESS message indicating that the experiment has finished. The cron utility may be used to automate repeated runs; the last section of the log messages from running ./generate_FV3LAM_wflow.py instructs users on how to set this up.
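The relaunch-and-check cycle can be scripted; workflow_finished is a hypothetical helper that scans the launch log for the SUCCESS message described above.

```shell
# Sketch: check whether the experiment has finished by scanning the launch
# log for Rocoto's final status line. The default log name matches the one
# produced by launch_FV3LAM_wflow.sh; workflow_finished is a hypothetical
# helper, not part of the SRW App.
workflow_finished() {
  grep -q "Workflow status: SUCCESS" "${1:-log.launch_FV3LAM_wflow}"
}

# Typical use from $EXPTDIR (shown commented out, since the launch script
# only exists inside a generated experiment directory):
#   until workflow_finished; do ./launch_FV3LAM_wflow.sh; sleep 180; done
```

If cron is used instead, an entry along the lines of `*/3 * * * * cd /path/to/expt_dir && ./launch_FV3LAM_wflow.sh` (interval and path are placeholders) performs the relaunching automatically.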

Optionally, users may configure their own grid, instead of using a predefined grid, and/or plot the output of their experiment(s).